{
"title": "Learning and Communications Co-Design for Remote Inference Systems: Feature Length Selection and Transmission Scheduling",
"abstract": "In this paper, we consider a remote inference system, where a neural network is used to infer a time-varying target (e.g., robot movement), based on features (e.g., video clips) that are progressively received from a sensing node (e.g., a camera). Each feature is a temporal sequence of sensory data. The inference error is determined by (i) the timeliness and (ii) the sequence length of the feature, where we use Age of Information (AoI) as a metric for timeliness. While a longer feature can typically provide better inference performance, it often requires more channel resources for sending the feature. To minimize the time-averaged inference error, we study a learning and communication co-design problem that jointly optimizes feature length selection and transmission scheduling. When there is a single sensor-predictor pair and a single channel, we develop low-complexity optimal co-designs for both the cases of time-invariant and time-variant feature length. When there are multiple sensor-predictor pairs and multiple channels, the co-design problem becomes a restless multi-arm multi-action bandit problem that is PSPACE-hard. For this setting, we design a low-complexity algorithm to solve the problem. Trace-driven evaluations demonstrate the potential of these co-designs to reduce inference error by up to times.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "The advancement of communication technologies and artificial intelligence has engendered the demand for remote inference in various applications, such as autonomous vehicles, health monitoring, industrial control systems, and robotic systems [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. For instance, accurate prediction of the robotic state during remote robotic surgery is time-critical. The remote inference problem can be tackled by using a neural network that is trained to predict a time-varying target (e.g. robot movement) based on features (e.g., video clips) sent from a remote sensing node (e.g. a camera). Each feature is a temporal sequence of the sensory output and the length of the temporal sequence is called feature length.\nDue to data processing time, transmission errors, and transmission delay, the features delivered to the neural predictor may not be fresh, which can significantly affect the inference accuracy. To measure the freshness of the delivered features, we use the age of information (AoI) metric, which was first introduced in [5 ###reference_b5###]. Let be the generation time of the most recently delivered feature sequence. Then, AoI is the time difference between the generation time and the current time , denoted by . Recent studies [6 ###reference_b6###, 7 ###reference_b7###] have shown that the inference error is a function of AoI for a given feature length, but this function is not necessarily monotonic. Moreover, simulation results in [6 ###reference_b6###] suggest that AoI-aware remote inference, wherein both the feature and its AoI are fed to the neural network, can achieve superior performance than AoI-agnostic remote inference that omits the provision of AoI to the neural network. Hence, the AoI can provide useful information for reducing the inference error.\nAdditionally, the performance of remote inference depends on the sequence length of the feature. Longer feature sequences can carry more information about the target, resulting in the reduction of inference errors. Though a longer feature can provide better training and inference performance, it often requires more communication resources. For example, a longer feature may require a longer transmission time and may end up being stale when delivered, thus resulting in worse inference performance. Hence, it is necessary to study a learning and communications co-design problem that jointly controls the timeliness and the length of the feature sequences. The contributions of this paper are summarized as follows:\nIn [7 ###reference_b7###], it was demonstrated that the inference error is a function of the AoI, whereas the function is not necessarily monotonic. The current paper further investigates the impact of feature length on inference error. Our information-theoretic and experimental analysis show that the inference error is a non-increasing function of the feature length (See Figs. 2 ###reference_###(a)-3 ###reference_###(a), and Lemma 1 ###reference_ma1###).\nWe propose a novel learning and communications co-design framework (see Sec. II ###reference_###). In this framework, we adopted the \u201cselection-from-buffer\u201d model proposed in [7 ###reference_b7###], which is more general than the popular \u201cgenerate-at-will\u201d model that was proposed in [8 ###reference_b8###] and named in [9 ###reference_b9###]. In addition, we consider both time-invariant and time-variant feature length. 
Earlier studies, for example [7 ###reference_b7###, 10 ###reference_b10###], did not consider time-variant feature length.\nFor a single sensor-predictor pair and a single channel, this paper jointly optimizes feature length selection and transmission scheduling to minimize the time-averaged inference error. This joint optimization is formulated as an infinite time-horizon average-cost semi-Markov decision process (SMDP). Such problems often lack analytical solutions or closed-form expressions. Nevertheless, we are able to derive a closed-form expression for an optimal scheduling policy in the case of time-invariant feature length (Theorem 1 ###reference_orem1###). The optimal scheduling time strategy is a threshold-based policy. Our threshold-based scheduling approach differs significantly from previous threshold-based policies in e.g., [7 ###reference_b7###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###], because our threshold function depends on both the AoI value and the feature length, while prior threshold functions rely solely on the AoI value. In addition, our threshold function is not necessarily monotonic with AoI. This is a significant difference with prior studies [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###].\nWe provide an optimal policy for the case of time-variant feature length. Specifically, Theorem 2 ###reference_orem2### presents the Bellman equation for the average-cost SMDP with time-variant feature length. The Bellman equation can be solved by applying either relative value iteration or policy iteration algorithms [15 ###reference_b15###, Sec. 11.4.4]. Given the complexity associated with converting the average-cost SMDP into a Markov Decision Process (MDP) suitable for relative value iteration, we opt for the alternative: using the policy iteration algorithm to solve our average-cost SMDP. By leveraging specific structural properties of the SMDP, we can simplify the policy iteration algorithm to reduce its computational complexity. The simplified policy iteration algorithm is outlined in Algorithm 1 ###reference_### and Algorithm 2 ###reference_###.\nFurthermore, we investigate the learning and communications co-design problem for multiple sensor-predictor pairs and multiple channels. This problem is a restless multi-armed, multi-action bandit problem that is known to be PSPACE-hard [16 ###reference_b16###]. Moreover, proving indexability condition relating to Whittle index policy [17 ###reference_b17###] for our problem is fundamentally difficult. To this end, we propose a new scheduling policy named \u201cNet Gain Maximization\u201d that does not need to satisfy the indexability condition (Algorithm 4 ###reference_###).\nNumerical evaluations demonstrate that our policies for the single source case can achieve up to times performance gain compared to periodic updating and zero-wait policy (see Figs. 5 ###reference_###-6 ###reference_###). Furthermore, our proposed multiple source policy outperforms the maximum age-first policy (see Fig. 7 ###reference_###) and is close to a lower bound (see Fig. 8 ###reference_###)."
},
{
"section_id": "1.1",
"parent_section_id": "1",
"section_name": "Related Works",
"text": "The age of information (AoI) has emerged as a popular metric for analyzing and optimizing communication networks [18 ###reference_b18###, 19 ###reference_b19###], control systems [13 ###reference_b13###, 20 ###reference_b20###], remote estimation [21 ###reference_b21###, 12 ###reference_b12###], and remote inference [6 ###reference_b6###, 7 ###reference_b7###].\nAs surveyed in [22 ###reference_b22###], several studies have investigated sampling and scheduling policies for minimizing linear and nonlinear functions of AoI [7 ###reference_b7###, 9 ###reference_b9###, 18 ###reference_b18###, 19 ###reference_b19###, 11 ###reference_b11###, 14 ###reference_b14###, 13 ###reference_b13###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###]. In most previous works [9 ###reference_b9###, 18 ###reference_b18###, 19 ###reference_b19###, 11 ###reference_b11###, 14 ###reference_b14###, 13 ###reference_b13###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###], monotonic AoI penalty functions are considered. However, in a recent study [7 ###reference_b7###], it is demonstrated that the monotonic assumption is not always true for remote inference. In contrast, the inference error is a function of AoI, but the function is not necessarily monotonic. The present paper further investigates the impact of feature length on the inference error and jointly optimizes AoI and feature length.\nIn recent years, researchers have increasingly employed information-theoretic metrics to evaluate information freshness [30 ###reference_b30###, 31 ###reference_b31###, 11 ###reference_b11###, 6 ###reference_b6###, 7 ###reference_b7###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###]. In [30 ###reference_b30###, 31 ###reference_b31###, 11 ###reference_b11###], the authors utilized Shannon\u2019s mutual information to quantify the amount of information carried by received data messages about the current source value, and used Shannon\u2019s conditional entropy to measure the uncertainty about the current source value after receiving these messages. These metrics were demonstrated to be monotonic functions of the AoI when the source follows a time-homogeneous Markov chain [31 ###reference_b31###, 11 ###reference_b11###]. Built upon these findings, the authors of [34 ###reference_b34###] extended this framework to include hidden Markov model. Furthermore, a Shannon\u2019s conditional entropy term was used in [32 ###reference_b32###, 33 ###reference_b33###] to quantify information uncertainty. However, a gap still existed between these information-theoretic metrics and the performance of real-time applications such as remote estimation or remote inference. In our recent works [6 ###reference_b6###, 7 ###reference_b7###, 35 ###reference_b35###] and the present paper, we have bridged this gap by using a generalized conditional entropy associated with a loss function , called -conditional entropy, to measure (or approximate) training and inference errors in remote inference, as well as the estimation error in remote estimation. For example, when the loss function is chosen as a quadratic function , the -conditional entropy is exactly the minimum mean squared estimation error in remote estimation. 
This approach allows us to analyze how the AoI affects inference and estimation errors directly, instead of relying on information-theoretic metrics as intermediaries for assessing application performance. It is worth noting that Shannon\u2019s conditional entropy is a special case of -conditional entropy, corresponding to the inference and estimation errors for softmax regression and maximum likelihood estimation, as discussed in Section II ###reference_###.\n###figure_1### The optimization of linear and non-linear functions of AoI for multiple source scheduling can be formulated as a restless multi-armed bandit problem [14 ###reference_b14###, 7 ###reference_b7###, 36 ###reference_b36###, 37 ###reference_b37###, 38 ###reference_b38###]. Whittle, in his seminal work [17 ###reference_b17###], proposed an index-based policy to address restless multi-armed bandit (RMAB) problems with binary actions. Our multiple source scheduling problem is a RMAB problem with multiple actions. An extension of the Whittle index policy for multiple actions was provided in [39 ###reference_b39###], but it requires to satisfy a complicated indexability condition. In [10 ###reference_b10###], the authors considered joint feature length selection and transmission scheduling, where the penalty function was assumed to be non-decreasing in the AoI, the feature length is time-invariant, and there is only one communication channel. Under these assumptions, [10 ###reference_b10###] established the indexability condition and developed a Whittle Index policy. Compared to [10 ###reference_b10###], our work could handle both monotonic and non-monotonic AoI penalty functions, both time-invariant and time-variant feature lengths, and both one and multiple communication channels.\nBecause of (i) the time-variant feature length and non-monotonic AoI penalty function and (ii) the fact that there exist multiple transmission actions, we could not utilize the Whittle index theory to establish indexability for our multiple source scheduling problem. To address this challenge, we propose a new \u201cNet Gain Maximization\u201d algorithm (Algorithm 4 ###reference_###) for multi-source feature length selection and transmission scheduling, which does not require indexability. During the revision of this paper, we found a related study [33 ###reference_b33###], where the authors introduced a similar gain index-based policy for a RMAB problem with two actions: to send or not to send. The \u201cNet Gain Maximization\u201d algorithm that we propose is more general than the gain index-based policy in [33 ###reference_b33###] due to its capacity to accommodate more than two actions in the RMAB."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "II System Model and Scheduling Policy",
"text": "We consider a remote inference system composed of a sensor, a transmitter, and a receiver, as illustrated in Fig. 1 ###reference_###. The sensor observes a time-varying target and feeds its measurement to the transmitter. The transmitter generates features from the sensory outputs and progressively transmits the features to the receiver through a communication channel. Within the receiver, a neural network infers the time-varying target based on the received features."
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "II-A System Model",
"text": "The system is time-slotted and starts to operate at time slot . At every time slot , the transmitter appends the sensory output to a buffer that stores the most recent sensory outputs meanwhile, the oldest output is removed from the buffer. We assume that the buffer is full initially, containing signal values at time . This ensures that the buffer remains consistently full at any time .111This assumption does not introduce any loss of generality. If the buffer is no full at time , it would not affect our results. The transmitter progressively generates a feature , where each feature is a temporal sequence of sensory outputs taken from the buffer such that is the set of all -tuples that take values from , , and . For ease of presentation, the temporal sequence length of feature is called feature length and the starting position of feature in the buffer is called feature position. If the channel is idle in time slot , the transmitter can submit the feature to the channel. Due to communication delays and channel errors, the feature is not instantly received. The most recently received feature is denoted as , where the latest observation in feature is generated time slots ago. We call the age of information (AoI) which represents the difference between the time stamps of the target and the latest observation in feature .\n###figure_2### ###figure_3### The receiver consists of trained neural networks, each associated with a specific feature length . The neural network associated with feature length takes the AoI and the feature as inputs and generates an output , where the neural network is represented by the function . The performance of the neural network is measured by a loss function , where indicates the incurred loss if the output is used for inference when . The loss function is determined by the purpose of the application. For example, in softmax regression (i.e., neural network based maximum likelihood classification), the output is a distribution of and the loss function is the negative log-likelihood of the value . In neural network based mean-squared estimation, a quadratic loss function is used, where the action is an estimate of the target value and is the euclidean norm of the vector ."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "II-B Inference Error",
"text": "We assume that is a stationary process for every . Given AoI and feature length , the expected inference error is a function of and , given by\nwhere is the joint distribution of the label and feature during online inference and the function represents any trained neural network that maps from to . The inference error can be evaluated through machine learning experiments.\nIn this paper, we conduct two experiments: (i) wireless channel state information (CSI) prediction and (ii) actuator states prediction in the OpenAI CartPole-v1 task [40 ###reference_b40###]. Detailed information regarding the experimental setup for both experiments can be found in Appendix A of the supplementary material. The code for these experiments is available in GitHub repositories222https://github.com/Kamran0153/Channel-State-Information-Prediction ###reference_e-Information-Prediction###333https://github.com/Kamran0153/Impact-of-Data-Freshness-in-Learning ###reference_ta-Freshness-in-Learning###.\nThe experimental results, presented in Figs. 2 ###reference_###(a)-3 ###reference_###(a), demonstrate that the inference error decreases with respect to feature length. Moreover, Figs. 2 ###reference_###(b)-3 ###reference_###(b) illustrate that the inference error is not necessarily a monotonic function of AoI.\nThese findings align with machine learning experiments conducted in [6 ###reference_b6###, 7 ###reference_b7###, 35 ###reference_b35###]. Collectively, the results from this paper and those in [6 ###reference_b6###, 7 ###reference_b7###, 35 ###reference_b35###] indicate that longer feature lengths can enhance inference accuracy and fresher features are not always better than stale features in remote inference."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "II-C Feature Length Selection and Transmission Scheduling Policy",
"text": "Because (i) fresh feature is not always better than stale feature and (ii) longer feature can improve inference error, we adopted \u201cselection-from-buffer\u201d model, which is recently proposed in [7 ###reference_b7###]. In contrast to the \u201cgenerate-at-will\u201d model [9 ###reference_b9###, 8 ###reference_b8###], where the transmitter can only select the most recent sensory output , the \u201cselection-from-buffer\u201d model offers greater flexibility by allowing the transmitter to pick multiple sensory outputs (which can be stale or fresh). In other words, \u201cselection-from-buffer\u201d model allows the transmitter to choose feature position and feature length under the constraints and . Feature length selection represents a trade-off between learning and communications: A longer feature can provide better learning performance (see Figs. 2 ###reference_###-3 ###reference_###), whereas it requires more channel resources (e.g., more time slots or more frequency resources) for sending the feature. This motivated us to study a learning-communication co-design problem that jointly optimizes the feature length, feature position, and transmission scheduling.\nThe feature length and feature position may vary across the features sent over time. Feature transmissions over the channel are non-preemptive: the channel must finish sending the current feature, before becoming available to transmit the next feature. Suppose that the -th feature is submitted to the channel at time slot , where is its feature length and is its feature position such that and . It takes time slots to send the -th feature over the channel. The -th feature is delivered to the receiver at time slot , where . The feature transmission time depends on the feature length . Due to time-varying channel conditions, we assume that, given feature length , the \u2019s are i.i.d. random variables, with a finite mean . Once a feature is delivered, an acknowledgment (ACK) is sent back to the transmitter, notifying that the channel has become idle.\nIn time slot , the -th feature is the most recently received feature, where . The receiver feeds the feature to the neural network to infer . We define age of information (AoI) is defined as the difference between the time-stamp of the freshest sensory output in feature and the current time , i.e.,\nBecause , it holds that\nThe initial state of the system is assumed to be , and is a finite constant.\n###figure_4### ###figure_5### Let represent a scheduling policy. We focus on the class of signal-agnostic scheduling policies in which each decision is determined without using the knowledge of the signal value of the observed process. A scheduling policy is said to be signal-agnostic, if the policy is independent of . Let denote the set of all the causal scheduling policies that satisfy the following conditions: (i) the scheduling time , the feature position , and the feature length are decided based on the current and the historical information available at the scheduler such that and , (ii) the scheduler has access to the inference error function and the distribution of for each , and (iii) the scheduler does not have access to the realization of the process . We use to denote the set of causal scheduling policies with time-invariant feature length, defined as\nwhere ."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "III Preliminaries: Impacts of Feature Length and AoI on Inference Error",
"text": "In this section, we adopt an information-theoretic approach that was developed recently in [7 ###reference_b7###] to show the impact of feature length and AoI on the inference error ."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "III-A Information-theoretic Metrics for Training and Inference Errors",
"text": "Training error is expressed as a function of and , given by\nwhere a trained neural network used in (1 ###reference_###) and is the joint distribution of the target and the feature in the training dataset. The training error is lower bounded by\nwhere is the set of all functions that map from to . Because the trained neural network in (5 ###reference_###) satisfies , . The lower bound in (6 ###reference_###) has an information-theoretical interpretation [7 ###reference_b7###, 41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###]: It is a generalized conditional entropy of a random variable given associated to the loss function . For notational simplicity, we call an -conditional entropy of a random variable given . The -entropy of a random variable is defined as [41 ###reference_b41###, 42 ###reference_b42###]\nThe optimal solutions to (7 ###reference_###) may not be unique. Let denote an optimal solution to (7 ###reference_###), which is called a Bayes action [41 ###reference_b41###].\nSimilarly, the -conditional entropy of given is defined as [6 ###reference_b6###, 7 ###reference_b7###, 41 ###reference_b41###, 42 ###reference_b42###]\nand the -conditional entropy of given is given by [6 ###reference_b6###, 7 ###reference_b7###, 41 ###reference_b41###, 42 ###reference_b42###]\nThe inference error can be approximated as the following -conditional cross entropy\nwhere the -conditional cross entropy is defined as [7 ###reference_b7###]\nIf training algorithm considers sets of large and wide neural networks such that and for all and are close to each other, then the difference between the inference error and the -conditional cross entropy is small [7 ###reference_b7###]. Compared to , the -conditional cross entropy are mathematically more convenient to analyze, as we will see next."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "III-B Information-theoretic Monotonicity Analysis",
"text": "The following lemma interprets the monotonicity of the -conditional entropy and the -conditional cross entropy with respect to the feature length .\nThe following assertions are true:\nGiven , is a non-increasing function of , i.e., for all\nGiven , if for all and\nthen for all\nLemma 1 can be proven by using the data processing inequality for -conditional entropy [43 ###reference_b43###, Lemma 12.1] and a local information geometric analysis. See Appendix B of the supplementary material for the details. \u220e\nLemma 1 ###reference_ma1###(a) demonstrates that for a given AoI value , the -conditional entropy decreases as the feature length increases. This is due to the fact that a longer feature provides more information, consequently leading to a lower -conditional entropy. Additionally, as indicated in Lemma 1 ###reference_ma1###(b), when the conditional distributions in training and inference data are close to each other (i.e., when in ((b) ###reference_###) is close to ), the -conditional cross entropy is close to a non-increasing function of the feature length . This information-theoretic analysis clarifies the experimental results depicted in Fig. 2 ###reference_###(a) and Fig. 3 ###reference_###(a), where the inference error diminishes with the increasing feature length.\nThe monotonicity of the -conditional cross entropy with respect to the AoI are explained in Theorem 3 of [7 ###reference_b7###] and in [35 ###reference_b35###]. This result is restated in Lemma 2 below for the sake of completeness.\nGiven , a sequence of three random variables and is said to be an -Markov chain, denoted as , if\nwhere\nis KL-divergence and is Shannon conditional mutual information.\n[7 ###reference_b7###, 35 ###reference_b35###]\nIf is an -Markov chain for all and ((b) ###reference_###) holds, then for all\nLemma 2 ###reference_ma2### implies that the monotonic behavior of with respect to AoI is characterized by two key parameters: in the -Markov chain model and the parameter . When is small, the sequence of target and feature random variables approximates a Markov chain. Consequently, becomes non-decreasing with respect to AoI provided that is close to . Conversely, if is significantly large, then can be far from a monotonic function of . This findings provide an explanation for the patterns observed in the experimental results shown in Figs. 2 ###reference_###(b) to 3 ###reference_###(b). Shannon\u2019s interpretation of Markov sources in his seminal work [44 ###reference_b44###] indicates that as the sequence length grows larger, the tuple tends to resemble a Markov chain more closely. Hence, according to Lemma 2 ###reference_ma2###, the inference error approaches to a non-decreasing function of AoI as feature length increases. As illustrated in Figs. 2 ###reference_###(b)-3 ###reference_###(b), the inference error converges to a non-decreasing function of AoI as feature length increases."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV Learning and Communications Co-design: Single Source Case",
"text": "Let denote the feature length of the most recently received feature in time slot . The time-averaged expected inference error under policy is expressed as\nwhere is denoted as the time-averaged inference error, and is the expected inference error at time corresponding to the system state . In this section, we slove two problems. The first one is to find an optimal policy that minimizes the time-averaged expected inference error among all the causal policies in that consider time-invariant feature length. Another problem is to find an optimal policy that minimizes the time-averaged expected inference error among all the causal policies in ."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "IV-A Time-invariant Feature Length",
"text": "We first find an optimal policy that minimizes the time-averaged inference error among all causal policies with time-invariant feature length in defined in (4 ###reference_###):\nwhere is the optimum value of (19 ###reference_###). The problem (19 ###reference_###) is an infinite time-horizon average-cost semi-Markov decision process (SMDP). Such problems are often challenging to solve analytically or with closed-form solutions. The per-slot cost function in (19 ###reference_###) depends on two variables: the AoI and the feature length . Prior studies [11 ###reference_b11###, 21 ###reference_b21###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 45 ###reference_b45###, 18 ###reference_b18###, 19 ###reference_b19###, 9 ###reference_b9###] have considered linear and non-linear monotonic AoI functions. Due to the fact that (i) the cost function in (19 ###reference_###) depends on two variables and (ii) is not necessarily monotonic with respect to AoI, finding an optimal solution is challenging and the existing scheduling policies cannot be directly applied to solve (19 ###reference_###). Therefore, it is necessary to develop a new scheduling policy that can address the complexities of (19 ###reference_###).\nSurprisingly, we get a closed-form solution of (19 ###reference_###). To present the solution, we define a function as\nIf \u2019s are i.i.d. with a finite mean for each , then there exists an optimal solution to (19 ###reference_###) that satisfies:\nThe optimal feature position in is time-invariant, i.e., . The optimal feature length and the optimal feature position in are given by\nwhere is the unique root of equation\n, , the sequence is determined by\nand the function is defined in (20 ###reference_###).\nThe optimal scheduling time in is determined by\nwhere is the AoI at time . The optimal objective value of (19 ###reference_###) is\nWe prove Theorem 1 ###reference_orem1### in two steps: (i) We find policies, each of which is optimal among the set of policies where . After that\n(ii) we select the policy that results in the minimum average inference error among the policies. See Appendix C of the supplementary material for details.\nTheorem 1 ###reference_orem1### implies that the optimal scheduling policy has a nice structure. According to Theorem 1 ###reference_orem1###(a), the feature position is constant for all -th features, i.e., . The optimal feature length and the optimal feature position are pre-computed by solving (21 ###reference_###) and then used in real-time. The parameter in (21 ###reference_###) is the unique root of ((a) ###reference_###), which is solved by using low-complexity algorithms, e.g., bisection search, newtons method, and fixed point iteration [12 ###reference_b12###]. Theorem 1 ###reference_orem1###(b) implies that the optimal scheduling time follows a threshold policy. Specifically, a feature is transmitted in time-slot if the following two conditions are satisfied: (i) The channel is idle in time-slot and (ii) the value exceeds the optimal objective value of (19 ###reference_###). The optimal objective value is obtained from (25 ###reference_###). Our threshold-based scheduling policy has a significant distinction from previous threshold-based policies studied in the literature, such as [11 ###reference_b11###, 21 ###reference_b21###, 12 ###reference_b12###, 13 ###reference_b13###]. 
In these prior works, the threshold function used to determine the scheduling time is based solely on the AoI value and is non-decreasing with respect to AoI. However, in our proposed strategy, (i) the threshold function depends on both the AoI value and the feature length and (ii) the threshold function can be non-monotonic with respect to AoI."
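\nAs a concrete illustration of the pre-computation and the threshold rule in Theorem 1, the sketch below finds the root of a generic one-dimensional equation by bisection and then applies the resulting threshold test; the function g, the placeholder example, and the name gamma_value are illustrative assumptions, not the paper's exact expressions.
```python
def bisection_root(g, lo, hi, tol=1e-9, max_iter=200):
    """Return the unique root of g on [lo, hi], assuming g changes sign exactly once.

    In Theorem 1, the optimal objective value is the unique root of a one-dimensional
    equation, so any standard root finder (bisection, Newton's method, fixed-point
    iteration) can be used for the offline pre-computation step.
    """
    g_lo = g(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        g_mid = g(mid)
        if abs(g_mid) < tol or (hi - lo) < tol:
            return mid
        if (g_lo < 0) == (g_mid < 0):
            lo, g_lo = mid, g_mid        # root lies in the upper half
        else:
            hi = mid                     # root lies in the lower half
    return 0.5 * (lo + hi)

# Placeholder example: in the co-design problem, g would be built offline from the
# inference-error function and the transmission-time statistics.
beta_star = bisection_root(lambda beta: beta ** 2 - 2.0, 0.0, 2.0)   # root is sqrt(2)

def transmit_now(channel_idle, gamma_value, beta_star):
    # Threshold rule of Theorem 1(b): send a feature when the channel is idle and the
    # threshold function (which depends on the AoI and the feature length) exceeds beta_star.
    return channel_idle and gamma_value > beta_star
```
Once the root and the optimal pair of feature length and feature position are pre-computed offline, the online scheduling decision in (24) reduces to a single comparison per time slot."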
},
{
"section_id": "4.1.1",
"parent_section_id": "4.1",
"section_name": "IV-A1 Monotonic AoI Cost function",
"text": "Consider a special case where the inference error is a non-decreasing function of for every feature length . A simplified solution can be derived for this specific case of (19 ###reference_###). In this scenario, the optimal feature position is , and the threshold function defined in (20 ###reference_###) becomes:\nIn this special case of monotonic AoI cost function, (24 ###reference_###) can be rewritten as a threshold policy of the AoI in the form of , where is defined as:\nHowever, when is not monotonic with respect to AoI , (24 ###reference_###) cannot be reformulated as a threshold policy of the AoI . This is a key difference with earlier studies [11 ###reference_b11###, 13 ###reference_b13###, 14 ###reference_b14###]."
},
{
"section_id": "4.1.2",
"parent_section_id": "4.1",
"section_name": "IV-A2 Connection with Restart-in-state Problem",
"text": "Consider another special case in which all features take time-slot for transmission. For this special case, the threshold function defined in (20 ###reference_###) becomes\nThis special case of (19 ###reference_###) is a restart-in-state problem [46 ###reference_b46###, Chapter 2.6.4]. This is because whenever a feature with the optimal feature length and from the optimal feature position is transmitted, AoI value restarts from in the next time slot. For this restart-in-state problem, the optimal sending time follows a threshold policy [46 ###reference_b46###, Chapter 2.6.4]. Specifically, a feature is transmitted if\nwhere the relative value function of the restart-in-state problem is given by\nBy using (IV-A2 ###reference_###), we can show that (29 ###reference_###) is equivalent to\nwhere the function is defined in (28 ###reference_###). This connection between the restart-in-state problem and AoI minimization was unknown before. The original problem considers more general , which can be considered as a restart-in-random state problem. This is because whenever -th feature with optimal feature length and from optimal feature position is transmitted, AoI restarts from a random value after time slots."
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "IV-B Time-variant Feature Length",
"text": "Now, we find an optimal scheduling policy that minimizes time-averaged inference error among all causal policies in :\nwhere is the inference error at time slot and is the optimum value of (32 ###reference_###). Because ,\nwhere is the optimum value of (19 ###reference_###). Like (19 ###reference_###), problem (32 ###reference_###) can also be expressed as an infinite time-horizon average-cost SMDP. Note that (32 ###reference_###) is more complex SMDP than (19 ###reference_###) because the feature length in (32 ###reference_###) is allowed to vary over time.\nThe optimal policy can be determined by using a dynamic programming method associated with the average cost SMDP [15 ###reference_b15###, 47 ###reference_b47###]. There exists a function such that for all and , the optimal objective value of (32 ###reference_###) satisfies the following Bellman equation:\nLet be the optimal solution to the Bellman equation (IV-B ###reference_###). There exists an optimal solution to (32 ###reference_###), determined by\nwhere is the optimal waiting time for sending the -th feature after the -th feature is delivered.\nTo get the optimal policy , we need to solve (IV-B ###reference_###). Solving (IV-B ###reference_###) is complex as it requires joint optimization of three variables. Moreover, an optimal solution obtained by the dynamic programming method provides no insight. We are able to simplify (IV-B ###reference_###) in Theorem 2 ###reference_orem2### by analyzing the structure of the optimal solution.\nThe following assertions are true:\nIf \u2019s are i.i.d. with a finite mean for each , then there exists a function such that for all and , the optimal objective value of (32 ###reference_###) satisfies the following Bellman equation:\nwhere is called the relative value function and the function is given by\nand the function is defined in (20 ###reference_###).\nIn addition, there exists an optimal solution to (32 ###reference_###) that is determined by\nwhere is the AoI at time and is the -th feature delivery time.\nTheorem 2 ###reference_orem2###(a) simplifies the Bellman equation (IV-B ###reference_###) to ((a) ###reference_0###). Unlike (IV-B ###reference_###), which involves joint optimization of three variables, ((a) ###reference_0###) is an integer optimization problem. This simplification is possible because, for a given feature length , the original equation (IV-B ###reference_###) can be separated into two separated optimization problems. The first problem involves finding the optimal stopping time, denoted by defined in (39 ###reference_###), and the second problem is to determine the feature position that minimizes . By breaking down the original equation in this way, we can solve the problem more efficiently. Detailed proof of Theorem 2 ###reference_orem2### can be found in Appendix D of the supplementary material.\nFurthermore, Theorem 2 ###reference_orem2###(a) provides additional insights into the solution of (IV-B ###reference_###). Theorem 2 ###reference_orem2###(a) implies that the optimal stopping time in (IV-B ###reference_###) follows a threshold policy. Specifically, if , then equals , which is defined in (39 ###reference_###). Here, is the minimum positive integer value for which defined in (20 ###reference_###) exceeds the optimal objective value .\nTheorem 2 ###reference_orem2###(b) provides an optimal solution to (32 ###reference_###). 
According to Theorem 2 ###reference_orem2###(b), by using precomputed and the relative value function , we can obtain the optimal feature length from ((b) ###reference_2###) using an exhaustive search algorithm. After obtaining , the optimal feature position can be determined from (41 ###reference_###). The optimal scheduling time provided in (42 ###reference_###) follows a threshold policy. Specifically, the -th feature is transmitted in time-slot if two conditions are satisfied: (i) the previous feature is delivered by time , and (ii) the function exceeds the optimal objective value of (32 ###reference_###)."
},
{
"section_id": "4.2.1",
"parent_section_id": "4.2",
"section_name": "IV-B1 Policy Iteration Algorithm for Computing and",
"text": "To effectively implement the optimal solution for (32 ###reference_###), as outlined in Theorem 2 ###reference_orem2###, it is necessary to precompute the optimal objective value and the relative value function that satisfies the Bellman equation ((a) ###reference_0###). The computation of and can be achieved by employing policy iteration algorithm or relative value iteration algorithm for SMDPs, as detailed in [15 ###reference_b15###, Section 11.4.4]. To apply the relative value iteration algorithm, we need to transform the SMDP into an equivalent MDP. However, this transformation process can be challenging to execute. Therefore, in this paper, we opt to utilize the policy iteration algorithm specifically tailored for SMDPs [15 ###reference_b15###, Section 11.4.4]. Algorithm 2 ###reference_### provides a policy iteration algorithm for obtaining and , which is composed of two steps: (i) policy evaluation and (ii) policy improvement.\nPolicy Evaluation: Let and be the relative value function and the average inference error under policy . Let , , and represent feature length, feature position, and waiting time for sending the -th feature under policy when and . Given , , and for all , we can evaluate the relative value function and the average inference error using Algorithm 1 ###reference_###. The relative value function represents relative value associated with a reference state. We can set as a reference state with .\nBy using , the average inference error is determined by\nwhere . We then use an iterative procedure within Algorithm 1 ###reference_### to determine the relative value function .\nPolicy Improvement: After obtaining and from Algorithm 1 ###reference_###, we apply Theorem 2 ###reference_orem2### to derive an improved policy in Algorithm 2 ###reference_###. Feature length , feature position , and waiting time under policy is determined by\nInstead of a joint optimization problem (IV-B ###reference_###), Algorithm 2 ###reference_### utilizes separated optimization problems (IV-B1 ###reference_5###)-(46 ###reference_###) based on Theorem 2 ###reference_orem2###. If the improved policy is equal to the old policy , then the policy iteration algorithm converges. Theorem 11.4.6 in [15 ###reference_b15###] establishes the finite convergence of the policy iteration algorithm of an average cost SMDP.\nNow, we discuss the time-complexity of Algorithms 1 ###reference_###-2 ###reference_###.\nTo manage the infinite set of AoI values in practice, we introduce an upper bound denoted as . Whenever exceeds , we set for all . Hence, each iteration of our policy evaluation step requires one pass through the approximated state space . Therefore, the time complexity of each iteration is , assuming that the required expected values are precomputed. Considering the bounded set instead of , the time complexities of (IV-B1 ###reference_5###), (45 ###reference_###), and (46 ###reference_###) are , , and , respectively, provided that the expected values in (IV-B1 ###reference_5###)-(46 ###reference_###) are precomputed. The overall complexity of (IV-B1 ###reference_5###)-(46 ###reference_###) is , which is more efficient than the joint optimization problem (IV-B ###reference_###). The latter has a time complexity of .\nIn each iteration of the policy improvement step, the optimization problems (IV-B1 ###reference_5###)-(46 ###reference_###) are solved for all state such that and . Hence, the total complexity of each iteration of the policy improvement step is ."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Learning and Communications Co-design: Multiple Source Case",
"text": ""
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "System Model",
"text": "###figure_6### Consider a remote inference system consisting of source-predictor pairs connected through shared communication channels, as illustrated in Fig. 4 ###reference_###. Each source has a buffer that stores most recent signal observations at each time slot . At time slot , a centralized scheduler determines whether to send a feature from source with feature length and feature position . We denote if the scheduler decides not to send a feature from source at time . If a feature from source is sent, we assume it will be delivered to the -th neural predictor in the next time slot using channel resources. The transmission model of the multiple source system is significantly different from that of the single source model discussed in Section II-C ###reference_###. In the latter case, only one channel was considered, while communication channels are available in the former. The channels could be from multiple frequencies and/or time resources. For example, if the clock rate in the multiple access control (MAC) layer is faster than that of the application layer, then one application-layer time-slot could comprise multiple MAC-layer time-slots. A feature can utilize multiple channels (i.e., frequency or time resources) for transmission during a single time slot. However, the channel resource is limited, so the system must satisfy\nThe system begins operating at time . Let denote the sending time of the -th feature from the -th source. Since we assume that a feature takes one time-slot to transmit, the corresponding neural predictor receives the -th feature from the -th source at time . The AoI of the source at time slot is defined as\nWe denote as the feature length of the most recent received feature from -th source by time , given by"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Scheduling Policy",
"text": "At time slot , a centralized scheduler determines the value of the feature length and the feature position for every -th source. A scheduling policy is denoted by , where . Let denote the set of all the causal scheduling policies that determine and based on the current and the historical information available at the transmitter such that ."
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "Problem Formulation",
"text": "Our goal is to minimize the time-averaged sum of the inference errors of the sources, which is formulated as\nwhere is the inference error of source at time slot\n.\nThe problem (50 ###reference_###)-(51 ###reference_###) can be cast into an infinite-horizon average cost restless multi-armed multi-action bandit problem [17 ###reference_b17###, 39 ###reference_b39###] by viewing each source as an arm, where a scheduler needs to decide multiple actions at every time by observing state .\nFinding an optimal solution to the RMAB problem is PSPACE hard [16 ###reference_b16###]. Whittle, in his seminal work [17 ###reference_b17###], proposed a heuristic policy for RMAB problem with binary action. In [39 ###reference_b39###], a modified Whittle index policy is proposed for the multi-action RMAB problems. Whittle index policy is known to be asymptotically optimal [48 ###reference_b48###], but the policy needs to satisfy a complicated indexability condition. Proving indexability is challenging for our multi-action RMAB problem because we allow (i) general penalty function that is not necessarily monotonic with respect to AoI and (ii) time-variant feature length. To this end, we propose a low-complexity algorithm that does not need to satisfy any indexability condition."
},
{
"section_id": "5.4",
"parent_section_id": "5",
"section_name": "Lagrangian Optimization of a Relaxed Problem",
"text": "Similar to Whittle\u2019s approach [17 ###reference_b17###], we utilize a Lagrange relaxation of the problem (50 ###reference_###)-(51 ###reference_###). We first relax the per time-slot channel constraint (51 ###reference_###) as the following time-average expected channel constraint\nThe relaxed constraint (52 ###reference_###) only needs to be satisfied on average, whereas (51 ###reference_###) is required to hold at every time-slot. By this, the original problem (50 ###reference_###)-(51 ###reference_###) becomes\nThe relaxed problem (53 ###reference_###)-(54 ###reference_###) is of interest as the optimal solution of the problem provides a lower bound to the original problem (50 ###reference_###)-(51 ###reference_###)."
},
{
"section_id": "5.4.1",
"parent_section_id": "5.4",
"section_name": "V-D1 Lagrangian Dual Decomposition of (53)-(54)",
"text": "To solve (53 ###reference_###)-(54 ###reference_###), we utilize a Lagrangian dual decomposition method [17 ###reference_b17###, 49 ###reference_b49###]. At first, we apply Lagrangian multiplier to the time-average channel constraint (54 ###reference_###) and get the following Lagrangian dual function\nThe problem (V-D1 ###reference_7###) can be decomposed into sub-problems. The sub-problem associated with the -th source is defined as:\nwhere is the set of all causal scheduling policies . The sub-problem (56 ###reference_###) is an infinite horizon average cost MDP, where a scheduler decides action by observing state . The Lagrange multiplier in (56 ###reference_###) can be interpreted as a transmission cost: whenever , the source has to pay cost of for using channel resources.\nThe optimal solution to (56 ###reference_###) can be obtained by solving the following Bellman equation:\nwhere represents the relative value function of the MDP (56 ###reference_###), and the function is defined as follows\nThe relative value function can be computed using the relative value iteration algorithm [15 ###reference_b15###, 47 ###reference_b47###].\nLet be an optimal solution to (56 ###reference_###), which is derived by using (57 ###reference_###) and (V-D1 ###reference_9###). The optimal feature length is determined by\nwhere the function is given by\nThe optimal feature position in is"
},
{
"section_id": "5.4.2",
"parent_section_id": "5.4",
"section_name": "V-D2 Lagrange Dual Problem",
"text": "Next, we determine the optimal dual cost that solves the following Lagrange dual problem:\nwhere is the Lagrangian dual function defined in (V-D1 ###reference_7###).\nTo get , we apply the stochastic sub-gradient ascent method [49 ###reference_b49###], which iteratively updates as follows\nwhere is the iteration index, determines the step size , and is the feature length of source at the -th iteration. Detailed optimization technique is provided in Algorithm 3 ###reference_###."
},
{
"section_id": "5.5",
"parent_section_id": "5",
"section_name": "Net Gain Maximization Policy",
"text": "After getting optimal dual cost , we can use policy for the relaxed problem (53 ###reference_###)-(54 ###reference_###). But it is infeasible to implement the policy for the original problem (50 ###reference_###)-(51 ###reference_###) because it may violate the scheduling constraint (51 ###reference_###). Motivated by Whittle\u2019s approach [17 ###reference_b17###], we aim to select actions with higher priority, while satisfying the scheduling constraint (51 ###reference_###) at every time slot.\nTowards this end, we introduce \u201cNet Gain\u201d, denoted as , to measure the advantage of selecting feature length , which is given by\nwhere the function is defined in (V-D1 ###reference_9###) and the function is defined in (60 ###reference_###). Substituting (V-D1 ###reference_9###) into (V-E ###reference_1###), we get\nFor a given , the net gain has an economic interpretation. Given the state of source , the net gain measures the maximum reduction in the loss by selecting source with feature length , as opposed to not selecting source at all. If is negative for all , then it better not to select source .\nIf , then the feature length for source is prioritized over the feature length for source . Under the constraint (51 ###reference_###), we select feature lengths that maximize \u201cNet Gain\u201d:\nThe \u201cNet Gain Maximization\u201d problem (66 ###reference_###) with constraint (67 ###reference_###) is a bounded Knapsack problem. By using (66 ###reference_###)-(67 ###reference_###), we propose a new algorithm for the problem (50 ###reference_###)-(51 ###reference_###) in Algorithm 4 ###reference_###.\nAlgorithm 4 ###reference_### starts from . At time , the algorithm takes the dual variable (transmission cost) from Algorithm 3 ###reference_### which is run offline before . The \u201cNet Gain\u201d is precomputed for every source , every feature length , and every state such that , , where we approximate infinite set of AoI values by using an upper bound . We can set if .\nFrom time , Algorithm 4 ###reference_### solves the knapsack problem (66 ###reference_###)-(67 ###reference_###) at every time slot . The knapsack problem is solved by using a dynamic programming method in time [50 ###reference_b50###], where is the number of sources, is the number of channels, and is the maximum buffer size among all source . The feature position is obtained from a look up table that stores the value of function for all and .\nUnlike the Whittle index policy [17 ###reference_b17###], our policy proposed in Algorithm 4 ###reference_### does not need to satisfy any indexability condition. There exists some other policies that do not need to satisfy indexability condition [36 ###reference_b36###, 38 ###reference_b38###]. The policies in [36 ###reference_b36###, 38 ###reference_b38###] are developed based on linear programming formulations, our policy does not need to solve any linear programming."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "VI Trace-driven Evaluations",
"text": "In this section, we demonstrate the performance of our scheduling policies. The performance evaluation is conducted using an inference error function obtained from a channel state information (CSI) prediction experiment. In Fig. 2 ###reference_###, one can observe the inference error function of a CSI prediction experiment. The discrete-time autocorrelation function of the generated fading channel coefficient is defined as , where represents the autocorrelation of the CSI signal process with time lag , signifies the variance of the process, denotes the zeroth-order Bessel function, is the channel sampling duration, is the maximum Doppler shift, stands for the velocity of the source, is the carrier frequency, and represents the speed of light. In this experiment, we employed a quadratic loss function. Although we utilize the CSI prediction experiment and a quadratic loss function for evaluating the performance of our scheduling policies, we note that our scheduling policies are not limited to any specific experiment, loss function, or predictor."
},
{
"section_id": "6.1",
"parent_section_id": "6",
"section_name": "VI-A Single Source Scheduling Policies",
"text": "We evaluate the following four single source scheduling policies.\nGenerate-at-Will, Zero Wait with Feature Length : In this policy, , , and for all -th feature transmissions.\nOptimal Policy with Time-invariant Feature Length (TIFL): The policy that we propose in Theorem 1 ###reference_orem1###.\nOptimal Policy with Time-variant Feature Length (TVFL): The policy that we propose in Theorem 2 ###reference_orem2###.\nPeriodic Updating with Feature Length : After every time slot , the policy submits features with feature length and feature position to a First-Come, First-Served communication channel.\nWe evaluate the performance of the above four single source scheduling policies, where the task to infer the current CSI of a source by observing features. For generating the CSI dataset, we set , , , and . Additionally, we add white noise to the feature variable with a variance of .\nIn the single source case, we consider that the -th feature requires time-slots for transmission, where represents the communication capacity of the channel. For example, if the number of bits used for representing a CSI symbol is and the bit rate of the channel is , then .\n###figure_7### ###figure_8### Fig. 5 ###reference_### shows the time-averaged inference error under different policies against the parameter , where . The plot is constrained to since values of is impractical due to the possibility of sending CSI using fewer bits. The buffer size of the source is . Among the four scheduling policies, the \u201cOptimal Policy with TVFL\u201d yields the best performance, while the \u201cOptimal Policy with TIFL\u201d outperforms the other two policies. The findings in Figure 5 ###reference_### demonstrate that when , the \u201cOptimal Policy with TVFL\u201d can achieve a performance improvement of times compared to the \u201cPeriodic Updating, \u201d with and \u201cGenerate-at-Will, Zero Wait, \u201d policies. This result is not surprising since \u201cPeriodic Updating, \u201d and \u201cGenerate-at-Will, Zero Wait, \u201d do not utilize longer features, despite all features with taking only time slot when . When , the average inference error of the \u201cPeriodic Updating\u201d and \u201cGenerate-at-Will, Zero Wait\u201d policies are at least 10 times worse than that of the \u201cOptimal Policy with TVFL.\u201d The reasons are as follows: (1) The \u201cPeriodic Updating\u201d policy does not transmit a feature even when the channel is available, leading to an inefficient use of resources. In our simulation, this situation is evident as and . Again, \u201cPeriodic Updating\u201d may transmit features even when the preceding feature has not yet been delivered, resulting in an extended waiting time for the queued feature. This frequently leads to the receiver receiving a feature with a significantly large AoI value, which is not good for accurate inference. (2) Conversely, the \u201cGenerate-at-Will, Zero-Wait\u201d policy isn\u2019t superior because zero-wait is not advantageous, and the feature position may not be an optimal choice since the inference error is non monotonic with respect to AoI.\nThe policy \u201cOptimal Policy with TIFL\u201d achieves an average inference error very close to that of the \u201cOptimal Policy with TVFL,\u201d but it is simpler to implement. Furthermore, the \u201cOptimal Policy with TIFL\u201d requires only one predictor associated with the optimal time-invariant feature length and does not require switching the predictor.\nFig. 
6 ###reference_### plots the time-averaged inference error vs. the buffer size . In this simulation, is considered. The results show that increasing improves the performance of the \u201cOptimal Policy with TVFL\u201d and the \u201cOptimal Policy with TIFL\u201d relative to the other policies; as increases, these two policies outperform the others. In contrast, the \u201cPeriodic Updating\u201d and \u201cGenerate-at-Will\u201d policies do not utilize the buffer, so their performance remains unchanged as increases. Moreover, a buffer size of is sufficient for this experiment, as further increases in the buffer size do not improve the performance.\n###figure_9### ###figure_10###"
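A minimal sketch of how such a single-source comparison could be reproduced is given below; it is not the paper's code. It assumes a transmission-time model T(l) = \u2308\u03b1l\u2309, an inference-error function err_fn(AoI, l) measured offline from a trained predictor, and a generic policy interface; the function names and default values are placeholders.

```python
import math

def simulate_policy(err_fn, choose_action, alpha=0.4, horizon=100_000):
    """Illustrative sketch (assumed interface, not the paper's code) of a
    trace-driven evaluation of one single-source scheduling policy.

    err_fn(aoi, l)        : inference error when the freshest delivered feature
                            has length l and age-of-information aoi.
    choose_action(aoi, l) : returns (wait, new_l, b) -- idle slots before the
                            next transmission, the next feature length (>= 1),
                            and the buffer position the feature is taken from.
    """
    t, aoi, l, total = 0, 1, 1, 0.0
    while t < horizon:
        wait, new_l, b = choose_action(aoi, l)
        for _ in range(wait):                  # waiting: age grows, error accrues
            total += err_fn(aoi, l); aoi += 1; t += 1
        T = math.ceil(alpha * new_l)           # transmission takes ceil(alpha * l) slots
        for _ in range(T):                     # error still accrues during transmission
            total += err_fn(aoi, l); aoi += 1; t += 1
        aoi, l = b + T, new_l                  # delivery: AoI = buffer position + delay
    return total / horizon

# Example baseline: "Generate-at-Will, Zero Wait" with feature length 1, position 0.
zero_wait = lambda aoi, l: (0, 1, 0)
```

Running all four policies through the same err_fn and transmission-time model mirrors the comparison reported in Fig. 5 ###reference_###.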
},
{
"section_id": "6.2",
"parent_section_id": "6",
"section_name": "VI-B Multiple Source Scheduling Policies",
"text": "In this section, we evaluate the time-averaged inference error of the following three multiple source scheduling policies.\nMaximum Age First (MAF), Generate-at-will, : As the name suggests, this policy selects the sources with maximum AoI value at each time. Specifically, under this policy, sources with maximum AoI are selected. Moreover, the feature length and the feature position of the selected sources are and , respectively.\nMaximum Age First (MAF), Generate-at-will, : This policy also selects the sources with maximum AoI values at each time, but with feature length . Under this policy, sources with maximum AoI are selected, where is the buffer size of all sources, i.e., for all source . Moreover, the feature position of the selected sources is .\nProposed Policy: The policy in Algorithm 4 ###reference_###.\nThe performance of three multiple source scheduling policies is illustrated in Fig. 7 ###reference_###, where each source sends its observed CSI to the corresponding predictor. In this simulation, three types of sources are considered: (i) type 1 source with a velocity of and a CSI variance of , (ii) type 2 sources with a velocity of and a CSI variance of , and (iii) type 3 sources with a velocity of and a CSI variance of .\nFig. 7 ###reference_### illustrates the normalized average inference error (the total time-averaged inference error divided by the number of sources) plotted against the number of sources , with and . We can observe from Fig. 7 ###reference_### that when the number of sources is less, the normalized average inference error of our proposed policy is times better than \u201cMAF, Generate-at-will, .\u201d However, \u201cMAF, Generate-at-will, \u201d is close to the proposed policy. But, When number of sources is more than , the normalized average inference error becomes times lower than that of the \u201cMAF, Generate-at-will, \u201d policy. As the number of sources increases, the normalized average inference error obtained by \u201cMAF, Generate-at-will, \u201d becomes close to the normalized average inference error of the proposed policy.\nFig. 8 ###reference_### compares the time-averaged inference error of the proposed policy and a lower bound from a relaxed problem. The lower bound is achieved by selecting feature length and feature position by using (V-D1 ###reference_0###) and (61 ###reference_###), respectively under dual cost obtained from Algorithm 3 ###reference_###. For this evaluation, we have taken step size at each iteration In Algorithm 3 ###reference_###. In Fig. 8 ###reference_###, we consider channels and sources, where represents the system scale. Observing Fig. 8 ###reference_###, it becomes evident that our proposed policy converges towards the lower bound as the system scale increases."
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "VII Conclusion",
"text": "This paper studies a learning and communications co-design framework that jointly determines feature length and transmission scheduling for improving remote inference performance. In single sensor-predictor pair system, we propose two distinct optimal scheduling policies for (i) time-invariant feature length and (ii) time-variant feature length. These two scheduling policies lead to significant performance improvement compared to classical approaches such as periodic updating and zero-wait policies. Using the Lagrangian decomposition of a relaxed formulation, we propose a new algorithm for multiple sensor-predictor pairs. Simulation results show that the proposed algorithm is better than the maximum age-first policy."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Experimental Setup for Two Machine Learning Experiments",
"text": "In the first experiment: wireless channel state information (CSI) prediction, our objective is to infer the CSI of a source at time by observing a feature consisting of a sequence of stale and noisy CSIs. Specifically, we consider a Rayleigh fading-based CSI. The dataset is generated by using the Jakes model [51 ###reference_b51###]. In the Jakes fading channel model, the CSI can be expressed as a Gaussian random process. Due to the joint Gaussian distribution of the target and feature random variables, the optimal inference error performance is achieved by a linear MMSE estimator. Hence, a linear regression algorithm is adopted in our simulation. Nonetheless, our study can be readily applied to other neural network-based predictors.\nIn the second experiment: actuator state prediction, we employ a neural network based predictor. In this experiment, we use an OpenAI CartPole-v1 task [40 ###reference_b40###] to generate a dataset, where a DQN reinforcement learning algorithm [52 ###reference_b52###] is utilized to control the force on a cart and keep the pole attached to the cart from falling over. Our goal is to predict the pole angle at time based on a sequence of stale information of cart velocity with length . The predictor in this experiment is an LSTM neural network that consists of one input layer, one hidden layer with 64 LSTM cells, and a fully connected output layer. Additional experiments can be found in a recent study [6 ###reference_b6###, 7 ###reference_b7###, 35 ###reference_b35###], including (a) video prediction and (b) robot state prediction."
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B Proof of Lemma 1",
"text": "Part (a): Consider the sequence . It can be demonstrated that for any , the Markov chain holds. This is due to the fact that for , the sequence includes as well as . By applying the data processing inequality [43 ###reference_b43###, Lemma 12.1] for -conditional entropy, we can deduce that\nPart (b): Assuming that ((b) ###reference_###) holds for all and , and employing [7 ###reference_b7###, Lemma 3], [35 ###reference_b35###] yields\nCombining (69 ###reference_###) with (68 ###reference_###), we deduce that\nThis concludes the proof. \u220e"
},
{
"section_id": "Appendix 3",
"parent_section_id": null,
"section_name": "Appendix C Proof of Theorem 1",
"text": "Optimal Feature Length Determination: To find the optimal feature length for the time-invariant scheduling problem (19 ###reference_###), we undertake a two-step process:\nCalculation of : Given a feature length , we start by determining , defined as\nwhere represents the set of admissible policies for feature length . This step quantifies the optimal objective value for each specific feature length.\nOptimal Feature Length and Objective Value: Having obtained for all relevant , the optimal feature length can be determined by solving\nwhere represents an upper bound on the feature length. Additionally, the optimal objective value is given by\nThese steps collectively identify the most suitable feature length and its corresponding objective value.\nWe aim to solve the problem (19 ###reference_###) by addressing the sub problems (71 ###reference_###)-(72 ###reference_###). Let\u2019s begin by solving (71 ###reference_###) using [7 ###reference_b7###, Theorem 4.2], restated below for completeness.\n[7 ###reference_b7###, Theorem 4.2]\nIf the transmission times \u2019s are i.i.d. with a finite mean , then there exists an optimal solution to (71 ###reference_###) that satisfies:\nThe optimal feature position in is time-invariant, i.e., . The optimal feature position in is given by\nwhere is the unique root of equation ((a) ###reference_###).\nThe optimal scheduling time in is determined by\nwhere is the AoI at time . The optimal objective value of (71 ###reference_###) is\nUsing Theorem 3 ###reference_orem3###, we obtain values of for all . We can then determine and using (72 ###reference_###) and (73 ###reference_###), respectively. Substituting and into the policy established in Theorem 3 ###reference_orem3###, we derive the optimal policy , as asserted in Theorem 1 ###reference_orem1###. This completes the proof. \u220e"
},
{
"section_id": "Appendix 4",
"parent_section_id": null,
"section_name": "Appendix D Proof of Theorem 2",
"text": "The infinite time-horizon average cost problem (32 ###reference_###) can be cast as an average cost semi-Markov decision process (SMDP) [15 ###reference_b15###, 47 ###reference_b47###]. To describe the SMDP, we define decision times, action, state, state transition, and cost of the SMDP."
}
],
"tables": {},
"image_paths": {
"1": {
"figure_path": "2308.10094v3_figure_1.png",
"caption": "Figure 1: A remote inference system, where Xt\u2212bl:=(Vt\u2212b,Vt\u2212b\u22121,\u2026,Vt\u2212b\u2212l+1)assignsuperscriptsubscript\ud835\udc4b\ud835\udc61\ud835\udc4f\ud835\udc59subscript\ud835\udc49\ud835\udc61\ud835\udc4fsubscript\ud835\udc49\ud835\udc61\ud835\udc4f1\u2026subscript\ud835\udc49\ud835\udc61\ud835\udc4f\ud835\udc591X_{t-b}^{l}:=(V_{t-b},V_{t-b-1},\\ldots,V_{t-b-l+1})italic_X start_POSTSUBSCRIPT italic_t - italic_b end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_l end_POSTSUPERSCRIPT := ( italic_V start_POSTSUBSCRIPT italic_t - italic_b end_POSTSUBSCRIPT , italic_V start_POSTSUBSCRIPT italic_t - italic_b - 1 end_POSTSUBSCRIPT , \u2026 , italic_V start_POSTSUBSCRIPT italic_t - italic_b - italic_l + 1 end_POSTSUBSCRIPT ) is a feature with sequence length l\ud835\udc59litalic_l.",
"url": "http://arxiv.org/html/2308.10094v3/x1.png"
},
"2(a)": {
"figure_path": "2308.10094v3_figure_2(a).png",
"caption": "(a) Inference error vs. Feature length\nFigure 2: Performance of wireless channel state information prediction: (a) Inference error Vs. Feature length and (b) Inference error Vs. AoI.",
"url": "http://arxiv.org/html/2308.10094v3/x2.png"
},
"2(b)": {
"figure_path": "2308.10094v3_figure_2(b).png",
"caption": "(b) Inference error vs. AoI\nFigure 2: Performance of wireless channel state information prediction: (a) Inference error Vs. Feature length and (b) Inference error Vs. AoI.",
"url": "http://arxiv.org/html/2308.10094v3/x3.png"
},
"3(a)": {
"figure_path": "2308.10094v3_figure_3(a).png",
"caption": "(a) Inference error vs. Feature length\nFigure 3: Performance of actuator state prediction in the OpenAI CartPole-v1 task under mechanical response delay: (a) Inference error Vs. Feature length and (b) Inference error Vs. AoI.",
"url": "http://arxiv.org/html/2308.10094v3/x4.png"
},
"3(b)": {
"figure_path": "2308.10094v3_figure_3(b).png",
"caption": "(b) Inference error vs. AoI\nFigure 3: Performance of actuator state prediction in the OpenAI CartPole-v1 task under mechanical response delay: (a) Inference error Vs. Feature length and (b) Inference error Vs. AoI.",
"url": "http://arxiv.org/html/2308.10094v3/x5.png"
},
"4": {
"figure_path": "2308.10094v3_figure_4.png",
"caption": "Figure 4: A multiple source-predictor pairs and multiple channel remote inference system.",
"url": "http://arxiv.org/html/2308.10094v3/x6.png"
},
"5": {
"figure_path": "2308.10094v3_figure_5.png",
"caption": "Figure 5: Single Source Case: Time-averaged inference error vs. the scale parameter \u03b1\ud835\udefc\\alphaitalic_\u03b1 in transmission time Ti\u2062(l)=\u2308\u03b1\u2062l\u2309subscript\ud835\udc47\ud835\udc56\ud835\udc59\ud835\udefc\ud835\udc59T_{i}(l)=\\lceil\\alpha l\\rceilitalic_T start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_l ) = \u2308 italic_\u03b1 italic_l \u2309 for all i\ud835\udc56iitalic_i.",
"url": "http://arxiv.org/html/2308.10094v3/x7.png"
},
"6": {
"figure_path": "2308.10094v3_figure_6.png",
"caption": "Figure 6: Single Source Case: Time-averaged inference error vs. the buffer size B\ud835\udc35Bitalic_B.",
"url": "http://arxiv.org/html/2308.10094v3/x8.png"
},
"7": {
"figure_path": "2308.10094v3_figure_7.png",
"caption": "Figure 7: Multiple Source Case: Time-averaged inference error vs. the number of sources M\ud835\udc40Mitalic_M.",
"url": "http://arxiv.org/html/2308.10094v3/x9.png"
},
"8": {
"figure_path": "2308.10094v3_figure_8.png",
"caption": "Figure 8: Multiple Source Case: Time-averaged inference error vs. system scale r\ud835\udc5fritalic_r, where M=3\u2062r\ud835\udc403\ud835\udc5fM=3ritalic_M = 3 italic_r and N=10\u2062r\ud835\udc4110\ud835\udc5fN=10ritalic_N = 10 italic_r.",
"url": "http://arxiv.org/html/2308.10094v3/x10.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2308.10094v3"
}