papers / 20240816 /2401.11622v4.json
yilunzhao's picture
Add files using upload-large-folder tool
49d23ce verified
{
"title": "The Markov-Chain Polytope with Applications to Binary AIFV-\ud835\udc5a Coding11footnote 1Work of both authors partially supported by RGC CERG Grant 16212021.",
"abstract": "This paper is split into two parts.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Let be an -state Markov chain. For convenience,\n will denote the set of indices See Figure 1 ###reference_###.\nFor let be the probability of transitioning from state to state . Assume further that\n has a unique stationary distribution\nAdditionally, each state has an associated reward or cost the average steady-state cost or gain of is then defined [7 ###reference_b7###] as\nA state is fully defined by the values and\n###figure_1### Next suppose that, for each instead of\nthere being only one there exists a large set containing all permissible \u201ctype-\u2019 states. Fix\nto be their Cartesian product,\nand further assume that has a unique stationary distribition.\nThe problem is to find the Markov chain in with smallest cost.\nThis problem first arose in the context of binary AIFV- coding [4 ###reference_b4###, 22 ###reference_b22###, 23 ###reference_b23###] (described later in detail in Section 6 ###reference_###),\nin which a code is an -tuple\n of binary coding trees for a size source alphabet; for each there are different restrictions on\nthe structure of \nThe cost of is the cost of a corresponding -state Markov chain, so the problem of finding the minimum-cost binary AIFV- code reduced to finding a minimum-cost Markov chain\n[5 ###reference_b5###].\nThis same minimum-cost Markov chain approach was later used to find better parsing trees\n[17 ###reference_b17###], lossless codes for finite channel coding [18 ###reference_b18###] and AIFV codes for unequal bit-cost coding [15 ###reference_b15###].\nNote that in all of these problems, the input size was relatively small, e.g., a set of probabilities, but the associated had size exponential in \nThe previous algorithms developed for solving the problem were iterative ones\nthat moved from Markov chain to Markov chain in , in some non-increasing cost order. For the specific applications mentioned, they ran in exponential (in ) time. Each iteration step also required solving a local optimization procedure which was often polynomial time (in ).\n[8 ###reference_b8###, 10 ###reference_b10###] developed a different approach for solving the binary AIFV- coding problem, corresponding to a -state Markov chain, in weakly polynomial time using a simple binary search. In those papers, they noted that they could alternatively solve the problem in weakly polynomial time via the Ellipsoid algorithm for linear programming [12 ###reference_b12###] on a two-dimensional polygon.\nThey hypothesized that this latter technique could be extended to but only with a better understanding of the geometry of the problem.\nThat is the approach followed in this paper, in which we define a mapping of type- states to type- hyperplanes in .\nWe show that the unique intersection of any hyperplanes, where each is of a different type, always exists. We call such an intersection point \u201cdistinctly-typed\u201d and prove that its \u201cheight\u201d is equal to the cost of its associated Markov chain. The solution to the minimum-cost Markov-chain problem is thus the lowest height of any \u201cdistinctly-typed\u2019 intersection point.\nWe then define the Markov-Chain polytope to be the lower envelope of the hyperplanes associated with all possible states and note that some lowest-height distinctly-typed intersection point is a highest point on \nThis transforms the problem of finding the cheapest Markov chain to the linear programming one of finding a highest point of\nThe construction and observations described above will be valid for ALL Markov chain problems.\nIn the applications mentioned earlier, the polytope is defined by an exponential number of constraints. But, observed from the proper perspective, the local optimization procedures used at each step of the iterative algorithms in [4 ###reference_b4###, 22 ###reference_b22###, 23 ###reference_b23###, 5 ###reference_b5###, 17 ###reference_b17###] can be repurposed as polynomial time separation oracles for . This permits\nusing the Ellipsoid algorithm approach of [12 ###reference_b12###]\nto solve the binary AIFV- problem in weakly polynomial time instead of exponential time.\nThe remainder of the paper is divided into two distinct parts. Part 1 consists of Sections 2 ###reference_###-5 ###reference_###. These develop a procedure for solving the generic minimum-cost Markov Chain problem.\nSection 2 ###reference_### discusses how to map the problem into a linear programming one and\nhow to interpret the old iterative algorithms\nfrom this perspective.\nSection 3 ###reference_### states our new results while Section 4 ###reference_### discusses their algorithmic implications. In particular, Lemma 4.6 ###reference_numbered6### states sufficient conditions on that guarantees a polynomial time algorithm for finding the minimum cost Markov chain.\nSection 5 ###reference_### then completes Part 1 by proving the main results stated in Section 3 ###reference_###.\nPart 2, in Sections 6 ###reference_###-8 ###reference_###,\nthen discusses how to apply Part 1\u2019s techniques to construct best binary AIFV- codes in weakly polynomial time.\nSection 6 ###reference_### provides necessary background, defining binary AIFV- codes and deriving their important properties.\nSection 7 ###reference_### describes how to apply the techniques from Section 4 ###reference_### to binary AIFV- coding.\nSection 8 ###reference_### proves a very technical lemma specific to binary AIFV- coding required to show that its associated Markov Chain polytope has a polynomial time separation oracle, which is the last piece needed to apply the Ellipsoid method.\nFinally, Section 9 ###reference_### concludes with a quick discussion of other applications of the technique and possible directions for going further."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Markov Chains",
"text": ""
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "The Minimum-Cost Markov Chain problem",
"text": "Fix\nA state\n is defined by a set of transition probabilities along with a cost and\nlet be some finite given set of states,\nsatisfying that \nThe states in are known as type- states.\nMarkov Chain is permissible if\nDefine to be the set of permissible Markov chains.\nThe actual composition and structure of each \nis different from problem to problem and, within a fixed problem, different for different \nThe only universal constraint is that stated in (b), that\n This implies that\n is an ergodic unichain, with one aperiodic recurrent class (containing ) and, possibly, some transient states. therefore has a unique stationary distribution,\n(where if and only if is a transient state.)\nLet be a permissible -state Markov chain.\nThe average steady-state cost of is defined to be\nWhat has been described above is a Markov Chain with rewards, with being its gain\n[7 ###reference_b7###].\nThe minimum-cost Markov chain problem is to find satisfying\nComments:\n(i) In the applications motivating this problem, each has size exponential in so the search space has size exponential in \n(ii) The requirement in (b) that (which is satisfied by all the motivating applications,) guarantees that has a unique stationary distribution.\nIt is also needed later in other places in the analysis. Whether this condition is necessary is unknown; this is discussed in Section 9 ###reference_###."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Associated Hyperplanes and Polytopes",
"text": "The next set of definitions map type- states into type- hyperplanes in and then defines lower envelopes of those hyperplanes. In what follows, denotes a vector is a shorthand denoting that and \n denotes a state\nLet .\nDefine the type- hyperplanes as follows:\nFor all define and\nand Markov chain\nFor later use, we note that by definition\nFinally, for all define.\nmaps a type- state to a type- hyperplane\n in For fixed is the\nlower envelope of all of the type- hyperplanes\nEach maps point to the lowest type- hyperplane evaluated at maps point to the Markov chain\nwill be the lower envelope of the .\nSince both and are lower envelopes of hyperplanes, they are the upper surface of convex polytopes in \nThis motivates defining the following polytope:\nThe Markov Chain Polytope in corresponding to is"
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "The Iterative Algorithm",
"text": "[4 ###reference_b4###, 5 ###reference_b5###, 22 ###reference_b22###, 23 ###reference_b23###] present an iterative algorithm\nthat was first formulated for finding minimum-cost binary AIFV- codes\nand then generalized into a procedure for finding minimum-cost Markov Chains.\nThe algorithm starts with an arbitrary and iterates, at each step constructing an such that\n The algorithm terminates at step if\nAlthough it does not run in polynomial time, it is noteworthy because the fact of its termination\n(Lemma 2.6 ###reference_numbered6###) will be needed later (in Corollary 3.4 ###reference_numbered4###) to prove that contains some point corresponding to a minimum-cost Markov chain.\n[23 ###reference_b23###] proves that the algorithm always terminates when\n and, at termination, is a minimum-cost Markov Chain.\n[4 ###reference_b4###, 5 ###reference_b5###, 22 ###reference_b22###] prove that222They also claim that the algorithm always terminates. The proofs of correctness there are only sketches and missing details but they all\nseem to implictly assume that for all and not just \n,\nfor if the algorithm terminates, then is a minimum-cost Markov Chain.\nA complete proof of termination (and therefore solution of the minimum-cost Markov Chain problem) is provided in [1 ###reference_b1###]. This states\n[Theorem 1, [1 ###reference_b1###]]\nNo matter what the starting value there always exists such that Furthermore, for that\nNotes/Comments:\nThe algorithm in [4 ###reference_b4###, 5 ###reference_b5###, 22 ###reference_b22###, 23 ###reference_b23###] looks different than the one described in [1 ###reference_b1###] but they are actually identical, just expressed in a different coordinate system. The relationship between the two coordinate systems is shown in [11 ###reference_b11###]. The coordinate system in [1 ###reference_b1###] is the same as the one used in this paper.\nEach step of the iterative algorithm requires calculating for the current and all \nIn applications,\nfinding is very problem specific and is usually\na combinatorial optimization problem. For example, the first papers on AIFV- coding [23 ###reference_b23###] and the most current papers on finite-state channel coding [18 ###reference_b18###]\ncalculate them using integer linear programming, as does a recent paper on constructing AIFV codes for unequal bit costs [15 ###reference_b15###].\nThe more recent papers on both AIFV- coding [16 ###reference_b16###, 22 ###reference_b22###, 11 ###reference_b11###] and AIVF coding\n[17 ###reference_b17###]\nuse dynamic programming.\nAs noted, the fact that if the iterative algorithm terminates at some then is a minimum-cost Markov Chain was proven directly in [4 ###reference_b4###, 5 ###reference_b5###, 22 ###reference_b22###, 23 ###reference_b23###]. An alternative proof of this fact is given in Lemma 3.3 ###reference_numbered3### in this paper.\nThe proof of termination given in [1 ###reference_b1###] Theorem 1, strongly depends upon condition (b) from Definition 2.1 ###reference_numbered1###, i.e.,"
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "The Main Results",
"text": "This section states our two main lemmas, Lemma 3.1 ###reference_numbered1### and Lemma 3.7 ###reference_numbered7###, and their consequences. Their proofs are deferred to Section 5 ###reference_###.\n\n###figure_2### Let be a permissible Markov Chain.\nThe -dimensional hyperplanes \nintersect at a unique point .\nWe call such a point a distinctly-typed intersection point.\n\nThe intersection point is on or above the lower envelope of\nthe hyperplanes i.e.,\nThe intersection point also satisfies\nThe\nLemma immediately implies that\nfinding a minimum-cost Markov Chain is equivalent to finding a minimum \u201cheight\u201d (\u2019th-coordinate) distinctly-typed intersection point. It also implies the following corollary.\nLet .\nThis in turn, permits, proving\nIf\nfor some and then\nand\nBy the definition of the and if satisfies (2 ###reference_###),\nRecall that for all \nThus, (2 ###reference_###), implies that the hyperplanes intersect at the point\n\nApplying Lemma 3.1 ###reference_numbered1### (b) proves (3 ###reference_###).\nCombining (3 ###reference_###) with Corollary 3.2 ###reference_numbered2### immediately implies\nBecause the leftmost and rightmost values in (5 ###reference_###) are identical, the two inequalities in (5 ###reference_###) must be equalities, proving (4 ###reference_###).\n\u220e\nCondition (2 ###reference_###) in Lemma 3.3 ###reference_numbered3###\nmeans that the different lower envelopes , must simultaneously intersect at a point .\nIt is not a-priori obvious that such an should always exist but\nLemma 2.6 ###reference_numbered6### tells us that the iterative algorithm always terminates at such an , immediately proving333For completness, we note that\nLemma 7.4 ###reference_numbered4### later also directly proves that condition (2 ###reference_###) in Lemma 3.3 ###reference_numbered3### holds for the special case of AIFV- coding. Thus this paper provides a\nfully self-contained proof of Corollary 3.4 ###reference_numbered4### for AIFV- coding.\nThere exists satisfying (2 ###reference_###) in Lemma 3.3 ###reference_numbered3###. This satisfies\n and\n###figure_3### This suggests a new approach to solving the minimum-cost Markov chain problem.\nUse linear programming to find a highest point\nStarting from find a distinctly-typed intersection point\nReturn\nFrom Corollary 3.4 ###reference_numbered4###, an in step 2 must exist;\nfrom Lemma 3.3 ###reference_numbered3### any such found would yield a solving the\nminimum-cost Markov chain problem.\nWhile it would usually not be difficult to move to a highest vertex when starting from a highest point the added complication that must be distinctly-typed complicates step 2. These complications can be sidestepped using the extension to -restricted search spaces and the pruning procedure described below.\nLet\n denote the set of all subsets of that contain \u201c0\u201d, i.e.,\n\nFor all and all \ndefine the set of all states to which can transition. Since\nNow fix and \nDefine\nto be the subset of states in that only transition to states in\nFurther define\nNote that and . Also note that if then, if is a transient state in\nLet and\nFor all define and\n as\nand\nNote that , and are, respectively, equivalent to\n, and i.e, all types permitted.\nThese definitions permit moving from any \u201chighest\u201d point on to a distinctly-typed intersection point at the same height.\nSuppose satisfies\nLet denote any minimum-cost Markov chain\nand\n the set of its recurrent indices .\nLet \nThen\nIf , then\n\nEquivalently, this implies \nis a minimum-cost Markov chain.\nIf , then\n.\nSuppose satisfies\nThen the procedure in Algorithm 6 ###reference_6### terminates in at most steps. At termination, is a minimum-cost Markov chain.\nAt the start of the algorithm, \nAs long as , Lemma 3.7 ###reference_numbered7### (b) implies that\n If , the size of decreases, so the process can not run more than steps. At termination, so from Lemma 3.7 ###reference_numbered7### (a), is a minimum-cost Markov chain.\n\u220e"
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Algorithmic Implications",
"text": "Stating our running times requires introducing further definitions.\nLet be a hyperrectangle in Define\nis the maximum time required to calculate for any\nis the maximum time required to calculate \nfor any and any\nNote that for any \nAlso, and will denote and\nIn addition,\ndenotes the number of iterative steps made by the iterative algorithm. The only known bound for this is the number of permissible Markov Chains.\nThe iterative algorithm requires time. Improvements to its running time have focused on improving in specific applications.\nFor binary AIFV- coding, of source-code words, was first solved using integer linear programming [23 ###reference_b23###] so was exponential in \nFor the specific case of this was improved to polynomial time using different dynamic programs. More specifically, for [16 ###reference_b16###] showed that\n, improved to by [9 ###reference_b9###]; for [22 ###reference_b22###]\nshowed that\n, improved to by [11 ###reference_b11###]. These sped up the running time of the iterative algorithm under the (unproven) assumption that the iterative algorithm always stayed within .\nFor AIVF codes, [17 ###reference_b17###] proposed using a modification of a dynamic programming algorithm due to\n[3 ###reference_b3###], yielding that for any fixed , is polynomial time in the number of words permitted in the parse dictionary.\nIn all of these cases, though, the running time of the algorithm is still exponential because \ncould be exponential.\nNote: Although has been studied, nothing was previously known about This is simply because there was no previous need to define and construct . In all the known cases in which algorithms for constructing exist, it is easy to slightly modify them to construct\n in the same running time. So\nWe now see how the properties of the Markov chain polytope will, under some fairly loose conditions, permit finding the minimum-cost Markov chain in polynomial time. This will be done via\nthe Ellipsoid method of Gr\u00f6tschel, Lov\u00e1sz and Schrijver [12 ###reference_b12###, 13 ###reference_b13###, 20 ###reference_b20###], which, given a polynomial time separation oracle for a polytope, permits solving a linear programming problem on the polytope in polynomial time. The main observation is that provides a separation oracle for so the\nprevious application specific algorithms for constructing can be reused to derive polynomial time algorithms."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Separation oracles and",
"text": "Recall the definition of a separation oracle.\nLet be a closed convex set. A separation oracle444Some references label this a strong separation oracle. We follow the formulation of [20 ###reference_b20###] in not adding the word strong.\nfor is a procedure that, for any\n either reports that or, if , returns a hyperplane that separates from That is, it returns such that\nIt is now clear that provides a separation oracle for\nLet be fixed and\n be the Markov Chain polytope. Let . Then\nknowing provides a time algorithm for either reporting that or returning a hyperplane that separates from\nif and only if\nThus knowing immediately determines whether or not. Furthermore if , i.e., let be an index\nsatisfying\nThe hyperplane then separates from because is a supporting hyperplane of at point \n\u220e"
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "The Ellipsoid Algorithm with Separation Oracles",
"text": "The ellipsoid method of Gr\u00f6tschel, Lov\u00e1sz and Schrijver [12 ###reference_b12###, 13 ###reference_b13###] states that, given a polynomial-time separation oracle, (even if is a polytope defined by an exponential number of hyperplanes) an approximate optimal solution to the \u201cconvex optimization problem\u201d can be found in polynomial time. If is a rational polytope, then an exact optimal solution can be found in polynomial time.\nWe follow the formulation of [20 ###reference_b20###] in stating these results.\nOptimization Problem Let be a rational polyhedron555 is a Rational Polyhedron if\n\nwhere the components of matrix and vector are all rational numbers. is the set of rationals.\n\nin Given the input conclude with one of the following:\ngive a vector with\ngive a vector in with\nassert that is empty.\nNote that in this definition, is the characteristic cone of The characteristic cone of a bounded polytope is [20 ###reference_b20###][Section 8.2] so, if is a bounded nonempty polytope, the optimization problem is to find a vector with\nThe relevant result of [12 ###reference_b12###, 13 ###reference_b13###] is\nThere exists an algorithm ELL such that if ELL is given the input\n where:\n\n \n\n\n and are natural numbers and SEP is a separation oracle for some rational polyhedron in , defined by linear inequalities of size at most and \n \n\nthen ELL solves the optimization problem for for the input in time polynomially bounded by , , the size of and the running time of\nIn this statement, the size of a linear inequality is the number of bits needed to write the rational coefficients of the inequality, where the number of bits required to write rational where are relatively prime integers, is"
},
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Solving the Minimum-Cost Markov Chain problem.",
"text": "Combining all of the pieces, we can now prove our main result.\nGiven let be the maximum number of bits required to write any transition probability or cost of a permissible state .\nFurthermore, assume some known hyper-rectangle\n with the property that there exists satisfying and\n.\nThen the minimum-cost Markov chain problem can be solved in time polynomially bounded by , and\nWithout loss of generality, we assume that To justify, recall that the minimum cost Markov chain has cost\n where is the \u2019th component of the stationary distribution of \nTrivially, if for all then\nThe original problem formulation does not require that . But, we can\nmodify a given input by adding the same constant to for every , This makes every non-negative so the minimum cost Markov chain in this modified problem has non-negative cost.\nSince this modification adds to the cost of every Markov chain, solving the modified problem solves the original problem. Note that this modification can at most double so this does not break the statement of the lemma.\nWe may thus assume that .\nThus\n\nwhere\n\nSince is bounded from above by the cost of any permissible Markov chain, is a bounded non-empty polytope.\nSince\nNow consider the following separation oracle for Let\nIn time, first check whether If no, then and is a separating hyperplane.\nOtherwise, in time, check whether If no, and a separating hyperplane is just the corresponding side of that is outside of.\nOtherwise, calculate in time. From Lemma 4.3 ###reference_numbered3###, this provides a separation oracle.\nConsider solving the optimization problem on polytope with to find satisfying\nSince is a bounded non-empty polytope, we can apply\nTheorem 4.5 ###reference_numbered5### to find such an in time polynomially bounded in , and\nApplying Corollary 3.8 ###reference_numbered8### and its procedure \nthen produces a minimum cost Markov-Chain in another time.\nThe final result follows from the fact that\n\n\u220e\nPart 2, starting in Section 6 ###reference_###, shows how to apply this Lemma to derive a polynomial time algorithm for constructing minimum-cost AIFV- codes.\nWe also note that [2 ###reference_b2###] recently applied Lemma 4.3 ###reference_### in a plug-and-play manner to derive the first polynomial time algorithms for constructing optimal AIVF codes. AIVF codes are a multi-tree generalization of Tunstall coding [17 ###reference_b17###, 18 ###reference_b18###], for which the previous constriction algorithms ran in exponential time."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Proofs of Lemmas 3.1 and 3.7",
"text": ""
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Proof of Lemma 3.1",
"text": "Before starting the proof, we note that (a) and the first equality of (b) are, after a change of variables, implicit in the analysis provided in [5 ###reference_b5###] of the convergence of their iterative algorithm. The derivation there is different than the one provided below, though, and is missing intermediate steps that we need for proving our later lemmas.\nIn what follows, denotes , the transition matrix associated with and denotes , its unique stationary distribution.\nTo prove (a) observe that the intersection condition\ncan be equivalently rewritten as\nwhere the right-hand side of (7 ###reference_###) can be expanded into\nEquation (7 ###reference_###) can therefore be rewritten as\nwhere the matrix in (8 ###reference_###), denoted as\n is after subtracting the identity matrix and replacing the first column with s.\nTo prove (a) it therefore suffices to prove that\n\nis invertible.\nThe uniqueness of implies that the kernel of is -dimensional. Applying the rank-nullity theorem, the column span of is -dimensional. Since , each column of is redundant, i.e., removing any column of does not change the column span.\nNext, observe that implying that is not orthogonal to . In contrast, each vector in the column span of is of the form\n for some . Thus , implying that is orthogonal to . Therefore, is not in the column span of .\nCombining these two observations, replacing the first column of with increases the rank of by exactly one.\nHence,\n\nhas rank . This shows invertibility, and the proof of (a) follows.\nTo prove (b) and (c) observe that\nand that for all ,\nApplying these observations by setting and left-multiplying by\nApplying these observations again, it follows that\nproving (b).\nSince the transition probabilities are non-negative,\nproving (c).\n(d) follows by observing that\n\u220e"
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Proof of Lemma 3.7",
"text": "To prove (a)\nset \nAssume\nthat,\nBy definition, cannot transition to any where . This implies that\nLemma 3.1 ###reference_numbered1### (b) then implies\nThus, is a minimum-cost Markov chain.\nTo prove (b)\nfirst note that, we are given that Thus, and , as required.\nThe Markov chain starts in state where .\nWe claim that if and then If not, , contradicting that\n Thus, for all .\nNow, suppose that Then, for all ,\nOn the other hand, using Lemma 3.1 ###reference_numbered1### (b) and the fact that \nis a minimum-cost Markov chain,\nThe left and right hand sides of (10 ###reference_###) are the same and so .\nThis in turn forces all the inequalities in (9 ###reference_###) to be equalities, i.e., for all ,\nThus, , proving (b).\n\u220e"
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "A Polynomial Time Algorithm for binary AIFV- Coding.",
"text": "This second part of the paper introduces binary AIFV- codes and then applies Lemma 4.6 ###reference_numbered6### to find a minimum-cost code of this type in\ntime polynomial in , the number of source words to be encoded, and the number of bits required to state the probability of any source word.\nThe remainder of this section\ndefines binary AIFV- codes\nand describes how they are a\nspecial case of the\nminimum-cost Markov chain problem.\nSection 7 ###reference_### describes how to apply Lemma 4.6 ###reference_numbered6###.\nThis first requires showing that is polynomial in and , which will be straightforward.\nIt also requires identifying a hyperrectangle that contains a highest point and\nfor which\n is polynomial in .\nThat is,\n can be calculated in polynomial time.\nAs mentioned at the start of Section 4 ###reference_###, as part of improving the running time of the iterative algorithm, [16 ###reference_b16###, 9 ###reference_b9###, 22 ###reference_b22###, 11 ###reference_b11###] showed that, for\n\n is polynomial time. As will be discussed in Section 7.2 ###reference_###, the algorithms there can be easily modified to show that .\nCorollary 3.4 ###reference_numbered4### only tells us that there exists some such that is a highest point in . In order to use\nLemma 4.6 ###reference_numbered6###, we will need to show that there exists such an .\nProving this is the\nmost cumbersome part of the proof. It combines a case-analysis of the tree structures of AIFV- trees with the Poincare-Miranda theorem to show that the functions\n must all mutually intersect at some point From Lemma 3.3 ###reference_numbered3###, \nand is therefore the optimum point needed. Section\n8 ###reference_### develops the tools required for this analysis."
},
{
"section_id": "6.1",
"parent_section_id": "6",
"section_name": "Background",
"text": "Consider a stationary memoryless source with alphabet in which symbol\n is generated with probability .\nLet be a message generated by the source.\nBinary compression codes encode each in using a binary codeword.\nHuffman codes are known to be \u201coptimal\u201d such codes. More specifically, they are Minimum Average-Cost Binary Fixed-to-Variable Instantaneous codes. \u201cFixed-to-Variable\u201d denotes that the binary codewords corresponding to the different can have different lengths. \u201cInstantaneous\u201d, that, in a bit-by-bit decoding process, the end of a codeword is recognized immediately after its last bit is scanned. The redundancy of a code is the difference between its average-cost and the Shannon entropy\n of the source. Huffman codes can have worst case redundancy of\nHuffman codes are often represented by a coding tree, with the codewords being the leaves of the tree.\nA series of recent work\n[4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 14 ###reference_b14###, 16 ###reference_b16###, 22 ###reference_b22###, 23 ###reference_b23###] introduced Binary Almost-Instantaneous Fixed-to-Variable- (AIFV-) codes. Section 6.2 ###reference_### provides a complete definition as well as examples. These differ from Huffman codes in that they use different coding trees.\nFurthermore, decoding might require reading ahead bits before knowing that the end of a codeword has already been reached (hence \u201calmost\u201d-instantaneous). Since AIFV- codes include Huffman coding as a special case, they are never worse than Huffman codes.\nTheir advantage is that, at least for they have worst-case redundancy [14 ###reference_b14###, 6 ###reference_b6###], beating Huffman coding.666Huffman coding with blocks of size will also provide worst case redundancy of\n But the block source coding alphabet, and thus the Huffman code dictionary, would then have size In contrast, AIFV- codes have dictionary size\nHistorically, AIFV- codes were preceded by\n-ary Almost-Instantaneous FV (-AIFV) codes,\nintroduced in [21 ###reference_b21###].\n-ary AIFV codes used a character encoding alphabet;\nfor , the procedure used coding trees and had a coding delay of bit. For it used 2 trees and had a coding delay of bits.\nBinary AIFV- codes\nwere introduced later in [14 ###reference_b14###]. These are binary codes that are comprised of an -tuple of binary coding trees and have decoding delay of at most bits.\nThe binary AIFV- codes of [14 ###reference_b14###] are identical to the -ary AIFV codes of [21 ###reference_b21###].\nConstructing optimal777 An \u201coptimal\u201d -AIFV or AIFV- code is one with minimum average encoding cost over all such codes. This will be formally specified later in Definition 6.8 ###reference_numbered8###.\n-AIFV or binary AIFV- codes is much more difficult than constructing Huffman codes.\n[23 ###reference_b23###] described an iterative algorithm for constructing optimal binary AIFV- codes.\n[22 ###reference_b22###] generalized this\nand proved that, for , under some general assumptions, this algorithm would terminate and, at termination would produce an optimal binary AIFV- code. The same was later proven for by\n[4 ###reference_b4###]\nand [22 ###reference_b22###]. This algorithm was later generalized to solve the Minimun Cost Markov chain problem in [5 ###reference_b5###] and is the iterative algoithm referenced in Section 2.3 ###reference_###."
},
{
"section_id": "6.2",
"parent_section_id": "6",
"section_name": "Code Definitions, Encoding and Decoding",
"text": "Note: We assume and that each can be represented using bits, i.e., each probability is an integral multiple of . The running time of our algorithm will, for fixed be polynomial in and , i.e., weakly polynomial.\nA binary AIFV- code will be an -tuple of\n binary code trees satisfying\nDefinitions 6.1 ###reference_numbered1### and 6.2 ###reference_numbered2### below.\nEach contains codewords. Unlike in Huffman codes, codewords can be internal nodes.\nFigure 4 ###reference_###.\nEdges in an AIFV- code tree are labelled as -edges or -edges.\nIf node is connected to its child node via a -edge (-edge) then is \u2019s -child (-child).\nWe will often identify a node interchangeably with its associated (code)word. For example, is the node reached by following the edges down from the root.\nFollowing\n[14 ###reference_b14###], the nodes in AIFV- code trees can be classified as being exactly one of 3 different types:\nComplete Nodes. A complete node has two children: a -child and a -child.\nA complete node has no source symbol assigned to it.\nIntermediate Nodes. An intermediate node has no source symbol assigned to it and has exactly one child.\nAn intermediate node with a -child is called an intermediate- node; with a -child is called an intermediate- node.\nMaster Nodes. A master node has an assigned source symbol and at most one child node. Master nodes have associated degrees:\na master node of degree is a leaf.\na master node of degree is connected to its unique child by a -edge. Furthermore,\nit has exactly consecutive intermediate- nodes as its direct descendants, i.e., for are intermediate- nodes while is not an intermediate- node.\n###figure_4### Binary AIFV- codes are now defined as follows:\nSee Figure 5 ###reference_###. Let be a positive integer. A binary AIFV- code is an ordered -tuple of code trees satisfying the following conditions:\nEvery node in each code tree is either a complete node, an intermediate node, or a master node of degree where .\nFor , the code tree has an intermediate- node connected to the root by exactly -edges, i.e., the node is an intermediate- node.\nConsequences of the Definitions:\nEvery leaf is a master node of degree . In particular, this implies that every code tree contains at least one master node of degree\nDefinition 6.2 ###reference_numbered2###, and in particular Condition (2), result in unique decodability (proven in [14 ###reference_b14###]).\nFor the root of a tree is permitted to be a master node. If a root is a master node, the associated codeword is the empty string (Figure\n5 ###reference_###)! The root of a tree cannot be a master node.\nThe root of a tree may be an intermediate- node.\nFor every tree must contain at least one intermediate- node, the node \nA tree might not contain any intermediate- node.\nFor the root of a tree cannot be a intermediate- node. The root of a tree is permitted to be an intermediate- node (but see Lemma 8.7 ###reference_numbered7###).\n###figure_5### We now describe the encoding and decoding procedures. These are illustrated in Figures 6 ###reference_### and 7 ###reference_###.\nA source sequence is encoded as follows: Set and\nLet be a binary string that is the encoded message.\nSet and\nNote that in order to identify , line 1 might require reading a few bits after the end of . The number of extra bits that must be read is known as the delay.\nBinary AIFV- codes are uniquely decodable with delay at most ."
},
{
"section_id": "6.3",
"parent_section_id": "6",
"section_name": "The cost of AIFV- codes",
"text": "Let denote the set of all possible type- trees that can appear in a binary AIFV- code on source symbols. will be used to denote a tree .\nSet .\nwill be called a binary AIFV- code.\nFigure 8 ###reference_###.\nLet and be a source symbol.\ndenotes the length of the codeword in for .\ndenotes the degree of the master node in assigned to .\ndenotes the average length of a codeword in , i.e.,\nis the set of indices of source nodes that are assigned master nodes of degree in \nSet\nso is a probability distribution.\nIf a source symbol is encoded using a degree- master node in , then the next source symbol will be encoded using code tree . Since the source is memoryless, the transition probability of encoding using code tree immediately after encoding using code tree is .\n###figure_6### This permits viewing the process as a Markov chain whose states are the code trees. Figure 9 ###reference_### illustrates an example.\nFrom Consequence (a) following Definition 6.2 ###reference_numbered2###, every contains at least one leaf, so .\nThus, as described in Section 2.1 ###reference_###,\nthis implies that the associated Markov chain is a unichain whose unique recurrence class contains \nand whose associated transition matrix has a unique stationary distribution .\nThus the Markov chain associated with any\n is permissible.\nLet be some AIFV- code, \nbe the transition matrix of the associated Markov chain and\nbe \u2019s associated unique stationary distribution. Then the average cost of the code is the average length of an encoded symbol in the limit, i.e.,\nConstruct a binary AIFV- code\n\nwith minimum i.e.,\nThis problem is exactly the minimum-cost Markov Chain problem introduced in Section 2.1 ###reference_### with .\nAs discussed in the introduction to Section 4 ###reference_###, this was originally solved in exponential time by using an iterative algorithm."
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "Using Lemma 4.6 to derive a polynomial time algorithm for binary AIFV- coding",
"text": "Because\nthe minimum-cost binary AIFV- coding problem is a special case of the minimum-cost Markov chain problem,\nLemma 4.6 ###reference_numbered6### can be applied to derive a polynomial time algorithm.\nIn the discussion below working through this application, will denote, interchangeably, both a type- tree and a type- state with transition probabilities\n and cost For example, when writing (as in Definition 2.4 ###reference_numbered4###), will denote the corresponding Markov chain state and not the tree.\nApplying Lemma 4.6 ###reference_numbered6###\nrequires showing that, for fixed the and parameters in its statement are polynomial in and ."
},
{
"section_id": "7.1",
"parent_section_id": "7",
"section_name": "Showing that is polynomial in for AIFV- coding",
"text": "Recall that is the maximum number of bits needed to represent the coefficients of any linear inequality defining a constraint of\nShowing that is polynomial in and is not difficult but will require the following fact proven later in Corollary 8.9 ###reference_numbered9###:\n \nFor all the height of is at most\nis defined by inequalities of size where is the maximum number of bits needed to encode any of the\nNote that the definition of can be equivalently written as\nwhere we set to provide notational consistency between and \nThus, the linear inequalities defining are of the form\nSince each can be represented with bits, for\nsome integral This implies that for\nsome integral So the size of each is\nCorollary 8.9 ###reference_numbered9###, to be proven later, states that the height of is bounded by \nThus each for all and can be written as\n for some integral Thus, has size\n Thus,\nthe size of every inequality (11 ###reference_###) is at most \n\u220e\nRecall that is considered fixed. Thus"
},
{
"section_id": "7.2",
"parent_section_id": "7",
"section_name": "Finding Appropriate and Showing that and are Polynomial in for AIFV- Coding",
"text": "Recall that Fix and\nAs discussed at the starts of Section 4 ###reference_### and 6 ###reference_###, there are dynamic programming algorithms that, for give\n [9 ###reference_b9###] and for [11 ###reference_b11###]. For the best known algorithms for calculating \nuse integer linear programming and run in exponential time.\nIn deriving the polynomial time binary search algorithm for , [10 ###reference_b10###] proved that and could therefore use the DP time algorithm for as a subroutine. We need to prove something similar for\nThe main tool used will be the following highly technical lemma whose proof is deferred to the next section.\nLet be fixed, \n and Then\nIf\nIf\nThe proof also needs a generalization of the intermediate-value theorem:\nLet be continuous functions of variables such that for all indices , implies that and implies that . It follows that there exists a point such that\n.\nCombining the two yields:\nLet be fixed and Then there exists satisfying\nSet\n and for all .\nFrom Lemma 7.2 ###reference_numbered2###, if then and if then \nThe Poincar\u00e9-Miranda theorem then immediately implies the existence of such that, , i.e., (12 ###reference_###).\n\u220e\nLemma 7.4 ###reference_numbered4### combined with\nLemma 3.3 ###reference_numbered3###, immediately show that if there exists\n satisfying .\nAs noted earlier, for there are dynamic programming algorithms for calculating in time when [9 ###reference_b9###] and when [11 ###reference_b11###].\nThus for and for\nThose dynamic programming algorithms work by building the trees top-down. The status of nodes on the bottom level of the partially built tree, i.e., whether they are complete, intermediate nodes or master nodes of a particular degree, is left undetermined. One step of the dynamic programming algorithm then determines (guesses) the status of those bottom nodes and creates a new bottom level of undetermined nodes. It is easy to modify this procedure so that nodes are only assigned a status within some given\n The modified algorithms would then calculate in the same running time as the original algorithms, i.e., time when and when . Thus, for ,"
},
{
"section_id": "7.3",
"parent_section_id": "7",
"section_name": "The Final Polynomial Time Algorithm",
"text": "Fix and set . Since is fixed, we may assume that For smaller the problem can be solved in time by brute force.\nIn the notation of Lemma 4.6 ###reference_numbered6###, Section 7.1 ###reference_### shows that where is the maximum number of bits needed to encode any of the \nSection 7.2 ###reference_### shows that when and when and there always exists\n satisfying .\nThen, from Lemma 4.6 ###reference_numbered6###, the binary AIFV- coding problem\ncan be solved in time polynomially bounded by and i.e., weakly polynomial in the input."
},
{
"section_id": "8",
"parent_section_id": null,
"section_name": "Proof of Lemma 7.2 and Corollary 8.9",
"text": "The polynomial running time of the algorithm rested upon the correctness of the technical Lemma 7.2 ###reference_numbered2### and Corollary 8.9 ###reference_numbered9###.\nThe proof of Lemma 7.2 ###reference_numbered2### requires deriving further technical properties of AIFV- trees. Corollary 8.9 ###reference_numbered9### will be a consequence of some of these derivations.\nThe main steps of the proof of Lemma 7.2 ###reference_numbered2### are:\ngeneralize binary AIFV- codes trees to extended binary code trees;\nprove Lemma 7.2 ###reference_numbered2### for these extended trees;\nconvert this back to a proof of the original lemma.\nWe first introduce the concept of extended binary code trees.\nFix \nAn extended binary AIFV- code tree is defined exactly the same as a except that it is permitted to have an arbitrary number of leaves,\ni.e., master nodes of degree \nassigned the empty symbol The -labelled leaves are given source probabilities 888Since satisfies Definitions 6.1 ###reference_numbered1### and 6.2 ###reference_numbered2###, just for a larger number of master nodes, it also satisfies Consequences (a)-(e) of those definitions. This fact will be used in the proof of Lemmas 8.7 ###reference_numbered7###.\nLet denote the number of -labelled leaves in Note that if\n, then .\nFor notational convenience, let\n,\n,\nand \nrespectively denote the extended versions of , , and\nFor ,\n(a) and (b) below always hold:\nThere exists a function satisfying\nThere exists a function satisfying\n(a) follows directly from the fact that, from Definition 6.2 ###reference_numbered2###,\n so\n. Thus, simply setting\n satisfes the required conditions.\n###figure_7### To see (b), given consider the tree whose root is complete, with the left subtree of the root being a chain of intermediate- nodes, followed by one intermediate- node, and a leaf node assigned to , and whose right subtree is . See Figure 10 ###reference_###. Setting satisfies the required conditions.\n\u220e\nThis permits proving:\nLet Then\nLemma 8.2 ###reference_numbered2###\n(a) implies that for all ,\nBecause this is true for all , it immediately implies (i).\nSimilarly, Lemma 8.2 ###reference_numbered2###(b) implies that for all ,\nBecause this is true for all , it immediately implies (ii).\n\u220e\nPlugging into (i) and into (ii) proves\nLet and Then\nIf\nIf\nNote that this is exactly Lemma 7.2 ###reference_numbered2### but written for extended binary AIFV- coding trees rather than normal ones.\nWhile it is not necessarily true that , we can prove that if is large enough, they coincide in the unit hypercube.\nLet Then,\nfor all and ,\nPlugging this Lemma into Corollary 8.4 ###reference_numbered4### immediately proves Lemma 7.2 ###reference_numbered2###. It therefore only remains to prove the correctness of Lemma 8.5 ###reference_numbered5### ."
},
{
"section_id": "8.1",
"parent_section_id": "8",
"section_name": "Proving Lemma 8.5",
"text": "The proof of Lemma 8.5 ###reference_numbered5### is split into two parts, The first justifies simplifying the structure of AIFV- trees. The second uses these properties to actually prove Lemma 8.5 ###reference_numbered5###.\nThis is not possible for because .\nIt is also not possible for because in that case, has a -child so it can not be an intermediate- node.\nIf then, trivially, Condition 2 from Definition 6.2 ###reference_numbered2### cannot be violated. If then the transformation described leaves as an intermediate- node so Condition 2 from Definition 6.2 ###reference_numbered2### is still not violated.\n###figure_8### This operation of removing an intermediate- node can be repeated until condition (e) is satisfied.\n\u220e\nThis lemma has two simple corollaries.\nThere exists a minimum-cost AIFV- code such that tree satisfies conditions (a)-(e) of Lemma 8.7 ###reference_numbered7###.\nLet be a minimum cost AIFV- code. For each let be the tree satisfying conditions (a)-(e) and equation (13 ###reference_###) and set\n Since Since ,\nBut was a minimum-cost AIFV- code so must be one as well.\n\u220e\nThe corollary implies that, our algorithmic procedures, for all may assume that all satisfy conditions (a)-(e) of Lemma 8.7 ###reference_numbered7###.\nThis assumption permits bounding the height of all trees. The following corollary was needed in the proof of Lemma 7.1 ###reference_numbered1###.\nFor all the height of is at most\nLet satisfy conditions (a)-(e) of Lemma 8.7 ###reference_numbered7###.\nLet be the number of leaves in Since all leaves are master nodes, contains non-leaf master nodes.\nEvery complete node in must contain at least one leaf in each of its left and right subtrees, so the number of complete nodes in is at most\ncontains no intermediate- node if and one intermediate- node if\nIf each intermediate- node in can be written as for some non-leaf master node and \nIf all intermediate- nodes in with the possible exceptions of the left nodes can be written as for some non-leaf master node and\nSo, the total number of intermediate- nodes in the tree is at most\nThe total number of non-leaf nodes in the tree is then at most\nThus any path from a leaf of to its root has length at most \n\u220e\nWe note that this bound is almost tight. Consider a tree which has only one leaf and with all of the other master nodes (including the root) being master nodes of degree This tree is just a chain from the root to the unique leaf, and has of length"
},
{
"section_id": "8.1.1",
"parent_section_id": "8.1",
"section_name": "8.1.1 Further Properties of minimum-cost trees",
"text": "Definitions 6.1 ###reference_numbered1### and 6.2 ###reference_numbered2### are very loose and technically permit many scenarios, e.g., the existence of more than one intermediate- node in a tree or a chain of intermediate- nodes descending from the root of a tree. These scenarios will not actually occur in trees appearing in minimum-cost codes. The next lemma lists some of these scenarios and justifies ignoring them. This will be needed in the actual proof of Lemma 8.5 ###reference_numbered5### in Section 8.1.2 ###reference_.SSS2###\nA node is a left node of if corresponds to codeword for some .\nNote that for contains exactly\n left nodes. With the exception of which must be an intermediate- node, the other left nodes can technically be any of complete, intermediate- or master nodes. By definition, they cannot\nbe intermediate- nodes.\nLet and \nThen there exists\n\nsatisfying\nand the\nfollowing five conditions:\nThe root of is not an intermediate- node;\nThe root of is not an intermediate- node;\nIf is an intermediate- node in then the parent of \nis not an intermediate- node;\nIf is a non-root intermediate- node in and, furthermore, if then is not a left node, then the parent of \nis either a master node or an intermediate- node;\nIf is an intermediate- node in then and \nThis implies that is the unique intermediate- node in\nIf a tree does not satisfy one of conditions (a)-(d), we first show that it can be replaced by a tree with one fewer nodes satisfying (13 ###reference_###). Since this process cannot be repeated forever, a tree satisfying all of the conditions (a)-(d) and satisfying (13 ###reference_###) must exist.\nBefore starting, we emphasize that none of the transformations described below adds or removes -leaves, so\nIf condition (a) is not satisfied in then, from Consequence (e) following Definition 6.2 ###reference_numbered2###, . Let be the intermediate- root of and its child. Create by removing and making the root. Then\n(13 ###reference_###) is valid (with ).\nIf condition (b) is not satisfied, let be the intermediate- root of and its child. Create by removing and making the root. Then again\n(13 ###reference_###) is valid (with ).\nNote that this argument fails for because removing the edge from to would remove the node corresponding to word and would then no longer be in\nIf condition (c) is not satisfied in \nlet\n be a intermediate- node in whose child is also an intermediate- node.\nLet be the unique child of Now create from by pointing the -edge leaving to instead of i.e., removing from the tree. Then and\n(13 ###reference_###) is valid.\n###figure_9### If condition (d) is not satisfied in let be an intermediate- node in and suppose that its parent is either an intermediate- node or a complete node.\nLet be the unique child of \nNow create from by taking the pointer from that was pointing to and pointing it to instead, i.e., again removing from .\nAgain, and\n(13 ###reference_###) is valid.\nNote that the condition that \u201cif then is not a left node\u201d, ensures that Condition 2 from Definition 6.2 ###reference_numbered2### is not violated.\nWe have shown that for any there exists a satisfying (a)-(d) and (13 ###reference_###).\nNow assume that conditions (a)-(d) are satisfied in but condition\n(e) is not. From (a), is not the root of so the parent of exists.\nFrom (c), is not an intermediate- node and from the definition of master nodes is not a master node. Thus is either a complete node or an intermediate- node.\nLet be the unique (-child) of \nFrom (c), cannot be an intermediate- node; from (d), cannot be an intermediate- node. So is either a complete or master node.\nNow create from by taking the pointer from that was pointing to and pointing it to instead, i.e., again removing from\nNote that after the transformation, it is easy to see that\n and\n(13 ###reference_###) is valid.\nNote that since is either a complete or master node, pointing to \ndoes not affect the master nodes above \nIt only remains to show that this pointer redirection is a permissible operation on trees, i.e., that does not violate Condition 2 from Definition 6.2 ###reference_numbered2###.\nSince (e) is not satisfied, There are two cases:\nThis is not possible for because .\nIt is also not possible for because in that case, has a -child so it can not be an intermediate- node.\nIf then, trivially, Condition 2 from Definition 6.2 ###reference_numbered2### ###reference_numbered2### cannot be violated. If then the transformation described leaves as an intermediate- node so Condition 2 from Definition 6.2 ###reference_numbered2### ###reference_numbered2### is still not violated.\n###figure_10### This operation of removing an intermediate- node can be repeated until condition (e) is satisfied.\n\u220e\nThis lemma has two simple corollaries.\nThere exists a minimum-cost AIFV- code such that tree satisfies conditions (a)-(e) of Lemma 8.7 ###reference_numbered7### ###reference_numbered7###.\nLet be a minimum cost AIFV- code. For each let be the tree satisfying conditions (a)-(e) and equation (13 ###reference_### ###reference_###) and set\n Since Since ,\nBut was a minimum-cost AIFV- code so must be one as well.\n\u220e\nThe corollary implies that, our algorithmic procedures, for all may assume that all satisfy conditions (a)-(e) of Lemma 8.7 ###reference_numbered7### ###reference_numbered7###.\nThis assumption permits bounding the height of all trees. The following corollary was needed in the proof of Lemma 7.1 ###reference_numbered1### ###reference_numbered1###.\nFor all the height of is at most\nLet satisfy conditions (a)-(e) of Lemma 8.7 ###reference_numbered7### ###reference_numbered7###.\nLet be the number of leaves in Since all leaves are master nodes, contains non-leaf master nodes.\nEvery complete node in must contain at least one leaf in each of its left and right subtrees, so the number of complete nodes in is at most\ncontains no intermediate- node if and one intermediate- node if\nIf each intermediate- node in can be written as for some non-leaf master node and \nIf all intermediate- nodes in with the possible exceptions of the left nodes can be written as for some non-leaf master node and\nSo, the total number of intermediate- nodes in the tree is at most\nThe total number of non-leaf nodes in the tree is then at most\nThus any path from a leaf of to its root has length at most \n\u220e\nWe note that this bound is almost tight. Consider a tree which has only one leaf and with all of the other master nodes (including the root) being master nodes of degree This tree is just a chain from the root to the unique leaf, and has of length"
},
{
"section_id": "8.1.2",
"parent_section_id": "8.1",
"section_name": "8.1.2 The Actual Proof of Lemma 8.5",
"text": "It now remains to prove Lemma 8.5 ###reference_numbered5###, i.e.,\nthat if \nthen, for all and ,\n(of Lemma 8.5 ###reference_numbered5###.)\nRecall that so,\nFix .\nNow let be a code tree satisfying\nand\namong all trees satisfying (i), is a tree minimizing the number of leaves assigned an .\nBecause\n(13 ###reference_###) in Lemma 8.7 ###reference_numbered7### keeps the same and cannot increase we may also assume that\n satisfies conditions (a)-(e) of Lemma 8.7 ###reference_numbered7###.\nSince ,\nto prove the lemma, it thus suffices to show that\n\nThis implies that so\nSuppose to the contrary that , i.e., contains a leaf assigned an . Let denote the closest (lowest) non-intermediate- ancestor of and\n be the child of that is the root of the subtree containing\nBy the definition of either or\n where are all intermediate- nodes for some \nNote that if no left node is a leaf so, in particular neither or can be a left node.\nWe can therefore apply Lemma 8.7 ###reference_numbered7### (d) to deduce that, if was an intermediate- node, must be a master node. Thus, if is a complete node or intermediate- node, If is a master node, then and so is a master node of degree\nNow work through the three cases:\nis a complete node and (See Figure 13 ###reference_### (a))\nLet be the other child of (that is not ).\nNow remove . If was not already the -child of make the -child of\n###figure_11### The above transformation makes an intermediate- node. The resulting tree remains a valid tree in preserving the same cost but reducing the number of leaves assigned to by . This contradicts the minimality of , so this case is not possible.\nis a master node of degree (See Figure 13 ###reference_### (b)) \nLet be the source symbol assigned to Next remove the path from to \nconverting to a leaf, i.e., a master node of degree . This reduces999This is the only location in the proof that uses by and the number of leaves assigned to by . The resulting code tree is still in and\ncontradicts the minimality of , so this case is also not possible.\n###figure_12### is an intermediate- node and\n (See Figure 14 ###reference_###) \nFrom Lemma 8.7 ###reference_numbered7### (a) and (e), and , .\nSo, is at depth .\nSince cases (a) and (b) cannot occur, \nis the unique leaf assigned an \nAll other master nodes must be assigned some source symbol.\nLet be the deepest source symbol in the tree that is assigned to a non-left node\nand be the master node to which is assigned. Since such a must exist.\nSince every master node in\n except for is assigned a source symbol, from\nconsequence (a) following Definition 6.2 ###reference_numbered2###, is also a leaf.\nNow swap and i.e., assign to and to the node that used to be\nThe resulting tree is still in Furthermore, since the degree of all master nodes associated with source symbols remains unchanged, \nremains unchanged.\nNow consider\nIf before the swap, then\n, and therefore\n are decreased by at least by the swap.\nThis contradicts the minimality of , so this is not possible.\nIf before the swap, then and therefore\n remain unchanged. Thus the new tree satisfies conditions (i) and (ii) at the beginning of the proof. Since the original satisfied conditions (a)-(e) of Lemma 8.7 ###reference_numbered7###, the new tree does as well. Because of the swap, the new tree does not satisfy case (c), so it must be in case (a) or (b). But, as already seen, neither of those cases can occur.\nSo,\nWe have therefore proven that, if , then it must satisfy case (c) with\n We now claim that if , then\n This, combined with would prove the lemma.\nTo prove the claim, let be some tree that maximizes the number of master nodes at depths Call such a tree a -maximizing tree.\nSuppose contained some non-leaf master node of depth . Transform into a complete node by giving it a -child that is a leaf. The resulting tree has the same number of master nodes, so it is also a -maximizing tree, but contains fewer non-leaf master nodes. Repeating this operation yields a -maximizing tree in which all of the master nodes at depth are leaves.\nNext note that a -maximizing tree cannot contain any leaves of depth because, if it did, those leaves could be changed into internal nodes with two leaf children, contradicting the definition of a -maximizing tree.\nWe have thus seen that there is a -maximizing tree in which all of the master nodes are on level . But, any binary tree can have at most total nodes on level and is not a master node in a type- tree, so contains at most master nodes.\nThis implies that any type- tree with master nodes must have some (non-left) master node at depth , proving the claim and thus the lemma.\n\u220e"
},
{
"section_id": "9",
"parent_section_id": null,
"section_name": "Conclusion and Directions for Further Work",
"text": "The first part of this paper introduced the minimum-cost Markov chain problem. We then showed how to translate it into the problem of finding the highest point in the Markov Chain Polytope \nIn particular, Lemma 4.6 ###reference_numbered6### in Section 4.3 ###reference_### identified the problem specific information that is needed to use the Ellipsoid algorithm to solve the\nproblem in polynomial time.\nThis was written in a very general form so that it could be applied to solve problems other than binary Almost Instantaneous Fixed-to-Variable- (AIFV-) coding. For example, recent work\n[2 ###reference_b2###] uses this Lemma to derive polynomial time algorithms for AIVF-coding, a generalization of Tunstall coding\nwhich previously [17 ###reference_b17###, 18 ###reference_b18###] could only be solved in exponential time using an iterative algorithm.\nAnother possible problem use of Lemma 4.6 ###reference_numbered6### would be the construction of optimal codes for finite-state noiseless channels. [18 ###reference_b18###] recently showed how to frame this problem as a minimum cost Markov chain one and solve it using the iterative algorithm. This problem would definitely fit into the framework of Lemma 4.6 ###reference_numbered6###. Unfortunately, the calculation of the corresponding in [18 ###reference_b18###], needed by Lemma 4.6 ###reference_numbered6### as a problem-specific separation oracle,\nis done using Integer Linear Programming and therefore requires exponential time. The development of a polynomial time algorithm for calculating the would, by Lemma 4.6 ###reference_numbered6###, immediately yield polynomial time algorithms for solving the full problem.\nThe second part of the paper restricts itself to binary AIFV- codes, which were the original motivation for the\nminimum-cost Markov Chain problem.\nThese are -tuples of coding trees for lossless-coding that can be modelled as a Markov chain.\nWe derived properties of AIFV- coding trees that then permitted applying Lemma 4.6 ###reference_numbered6###. This yielded the first (weakly) polynomial time algorithm for constructing minimum cost binary AIFV- codes.\nThere are still many related open problems to resolve. The first is to note that\nour algorithm is only weakly polynomial, since its running time is\ndependent upon the actual sizes needed to encode the transition probabilities of the Markov Chain states in binary. For example, in AIFV- coding, this is polynomial in the number of bits needed to encode the probabilities of the words in the source alphabet.\nAn obvious goal would be, at least for AIFV- coding, to find a strongly polynomial time algorithm, one whose running time only depends upon\nA second concerns the definition of permissible Markov chains. Definition 2.1 ###reference_### requires that This guarantees that every permissible Markov Chain has a unique stationary distribution. It is also needed as a requirement for Theorem 1 in [1 ###reference_b1###], which is used to guarantee that there exists a distinctly-typed intersection point in (Corollary 3.2 ###reference_numbered2### only guarantees that every distinctly-typed intersection point is on or above ). The open question is whether is actually needed or whether the looser requirement that every has a unique stationary distribution would suffice to guarantee that contains some distinctly-typed intersection point.\nA final question\nwould return to the iterative algorithm approach of\n[23 ###reference_b23###, 22 ###reference_b22###, 4 ###reference_b4###, 5 ###reference_b5###].\nOur new polynomial time algorithm is primarily of theoretical interest and, like most Ellipsoid based algorithms, would be difficult to implement in practice.\nPerhaps the new geometric understanding of the problem developed here could improve the performance and analysis of the iterative algorithms.\nAs an example, the iterative algorithm of [4 ###reference_b4###, 5 ###reference_b5###, 22 ###reference_b22###, 23 ###reference_b23###]\ncan now be interpreted as moving from point to point in the set of distinctly-typed intersection points (of associated hyperplanes), never increasing the cost of\nthe associated Markov chain, finally terminating at the lowest point in this set.\nThis immediately leads to a better understanding of one of the issues with the iterative algorithms for the AIFV- problem.\nAs noted in Section 2.3 ###reference_###, the algorithm must solve for at every step of the algorithm. As noted in Section 4 ###reference_###, this can be done in polynomial time if but requires exponential time integer linear programming if A difficulty with the iterative algorithm was that it was not able to guarantee that at every step, or even at the final solution, With our new better understanding of the geometry of the Markov Chain polytope for the AIFV- problem, it might now be possible to prove that the condition\n always holds during the algorithm or develop a modified iterative algorithm in which the condition always holds.\nAcknowledgement: The authors would like to thank Reza Hosseini Dolatabadi and Arian Zamani for the generation of Figures 2 ###reference_### and 3 ###reference_###."
}
],
"appendix": [],
"tables": {},
"image_paths": {
"1": {
"figure_path": "2401.11622v4_figure_1.png",
"caption": "Figure 1: The bottom half of the figure is a five-state Markov chain. Arrows represent non-zero transition probabilities. qj\u2062(Sk)subscript\ud835\udc5e\ud835\udc57subscript\ud835\udc46\ud835\udc58q_{j}(S_{k})italic_q start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ( italic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) is the probability of transitioning from state Sksubscript\ud835\udc46\ud835\udc58S_{k}italic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT to state Sj.subscript\ud835\udc46\ud835\udc57S_{j}.italic_S start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT .\nNote that q0\u2062(Sk)>0subscript\ud835\udc5e0subscript\ud835\udc46\ud835\udc580q_{0}(S_{k})>0italic_q start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( italic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) > 0 for all k\u2208[5].\ud835\udc58delimited-[]5k\\in[5].italic_k \u2208 [ 5 ] .\nThe circles in the shaded rectangle \ud835\udd4aksubscript\ud835\udd4a\ud835\udc58{\\mathbb{S}}_{k}blackboard_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT represent the set of all permissible type-k\ud835\udc58kitalic_k states.\nEach Sksubscript\ud835\udc46\ud835\udc58S_{k}italic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT in the Markov Chain shown also belongs to the associated set \ud835\udd4ak.subscript\ud835\udd4a\ud835\udc58{\\mathbb{S}}_{k}.blackboard_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT .",
"url": "http://arxiv.org/html/2401.11622v4/x1.png"
},
"2": {
"figure_path": "2401.11622v4_figure_2.png",
"caption": "Figure 2: An illustration of Lemma 3.1 (a) for a 3-state Markov chain \ud835\udc12=(S0,S1,S2).\ud835\udc12subscript\ud835\udc460subscript\ud835\udc461subscript\ud835\udc462\\mathbf{S}=(S_{0},S_{1},S_{2}).bold_S = ( italic_S start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , italic_S start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) . The table lists the associated qj\u2062(Sk)subscript\ud835\udc5e\ud835\udc57subscript\ud835\udc46\ud835\udc58q_{j}(S_{k})italic_q start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ( italic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) and \u2113\u2062(Sk)\u2113subscript\ud835\udc46\ud835\udc58\\ell(S_{k})roman_\u2113 ( italic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) values.\nThe green plane is f0\u2062(\ud835\udc31,S0)=9+x1/4+x2/4,subscript\ud835\udc530\ud835\udc31subscript\ud835\udc4609subscript\ud835\udc6514subscript\ud835\udc6524f_{0}(\\mathbf{x},S_{0})=9+x_{1}/4+x_{2}/4,italic_f start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( bold_x , italic_S start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ) = 9 + italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT / 4 + italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT / 4 ,\nthe red plane f1\u2062(\ud835\udc31,S1)=11\u2212x1+x2/4,subscript\ud835\udc531\ud835\udc31subscript\ud835\udc46111subscript\ud835\udc651subscript\ud835\udc6524f_{1}(\\mathbf{x},S_{1})=11-x_{1}+x_{2}/4,italic_f start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ( bold_x , italic_S start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) = 11 - italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT + italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT / 4 ,\nand the blue plane f2\u2062(\ud835\udc31,S2)=14+x1/4\u2212x2.subscript\ud835\udc532\ud835\udc31subscript\ud835\udc46214subscript\ud835\udc6514subscript\ud835\udc652f_{2}(\\mathbf{x},S_{2})=14+x_{1}/4-x_{2}.italic_f start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( bold_x , italic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) = 14 + italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT / 4 - italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT .\nBy calculation, \ud835\udf45\u2062(\ud835\udc12)=(0.6,0.15,0.25)\ud835\udf45\ud835\udc120.60.150.25\\bm{\\pi}(\\mathbf{S})=(0.6,0.15,0.25)bold_italic_\u03c0 ( bold_S ) = ( 0.6 , 0.15 , 0.25 ) so\ncost\u2062(\ud835\udc12)=0.6\u22c59+0.15\u22c511+0.25\u22c514=10.55.cost\ud835\udc12\u22c50.69\u22c50.1511\u22c50.251410.55\\mbox{\\rm cost}(\\mathbf{S})=0.6\\cdot 9+0.15\\cdot 11+0.25\\cdot 14=10.55.cost ( bold_S ) = 0.6 \u22c5 9 + 0.15 \u22c5 11 + 0.25 \u22c5 14 = 10.55 . The planes intersect at unique point (x0,x1,y)=(1.6,4.6,10.55)subscript\ud835\udc650subscript\ud835\udc651\ud835\udc661.64.610.55(x_{0},x_{1},y)=(1.6,4.6,10.55)( italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_y ) = ( 1.6 , 4.6 , 10.55 ).",
"url": "http://arxiv.org/html/2401.11622v4/x2.png"
},
"3": {
"figure_path": "2401.11622v4_figure_3.png",
"caption": "Figure 3: An illustration of Lemma 3.3 and Corollary 3.4 for the case m=3.\ud835\udc5a3m=3.italic_m = 3 . 3000 states each of type 00, 1111 and 2222 were generated with associated \u2113\u2062(Sk)\u2113subscript\ud835\udc46\ud835\udc58\\ell(S_{k})roman_\u2113 ( italic_S start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) values. The green, red and blue are, respectively the lower envelopes g0\u2062(\ud835\udc31)subscript\ud835\udc540\ud835\udc31g_{0}(\\mathbf{x})italic_g start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( bold_x ), g1\u2062(\ud835\udc31)subscript\ud835\udc541\ud835\udc31g_{1}(\\mathbf{x})italic_g start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ( bold_x ), g2\u2062(\ud835\udc31),subscript\ud835\udc542\ud835\udc31g_{2}(\\mathbf{x}),italic_g start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( bold_x ) , i.e., the lower envelopes of the 3000 associated hyperplanes of each type. \u210d\u210d\\mathbb{H}blackboard_H is the lower envelope of those three lower envelopes. The three gi\u2062(\ud835\udc31)subscript\ud835\udc54\ud835\udc56\ud835\udc31g_{i}(\\mathbf{x})italic_g start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( bold_x ) intersect at a unique point (x1\u2217,x2\u2217,y\u2217)superscriptsubscript\ud835\udc651superscriptsubscript\ud835\udc652superscript\ud835\udc66(x_{1}^{*},x_{2}^{*},y^{*})( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , italic_y start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT ) which is a highest point in \u210d.\u210d\\mathbb{H}.blackboard_H . \ud835\udc12\u2217=\ud835\udc12\u2062((x1\u2217,x2\u2217))superscript\ud835\udc12\ud835\udc12superscriptsubscript\ud835\udc651superscriptsubscript\ud835\udc652\\mathbf{S}^{*}=\\mathbf{S}((x_{1}^{*},x_{2}^{*}))bold_S start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT = bold_S ( ( italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT ) ) is a minimal cost Markov chain among the 30003superscript300033000^{3}3000 start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT permissible Markov chains in \ud835\udd4a\ud835\udd4a{\\mathbb{S}}blackboard_S and cost\u2062(\ud835\udc12\u2217)=y\u2217costsuperscript\ud835\udc12superscript\ud835\udc66{\\rm cost}(\\mathbf{S}^{*})=y^{*}roman_cost ( bold_S start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT ) = italic_y start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT.",
"url": "http://arxiv.org/html/2401.11622v4/x3.png"
},
"4": {
"figure_path": "2401.11622v4_figure_4.png",
"caption": "Figure 4: Node types in a binary AIFV-3333 code tree: complete node (C\ud835\udc36Citalic_C), intermediate-00 and intermediate-1111 nodes (I0subscript\ud835\udc3c0I_{0}italic_I start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, I1subscript\ud835\udc3c1I_{1}italic_I start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT), master nodes of degrees 0,1,20120,1,20 , 1 , 2 (M0subscript\ud835\udc400M_{0}italic_M start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, M1subscript\ud835\udc401M_{1}italic_M start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, M2subscript\ud835\udc402M_{2}italic_M start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT), (M0subscript\ud835\udc400M_{0}italic_M start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT is a leaf)and non-intermediate-00 nodes (N0subscript\ud835\udc410{N_{0}}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT). The N0subscript\ud835\udc410{N_{0}}italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT nodes can be complete, master, or intermediate-1111 nodes, depending upon their location.",
"url": "http://arxiv.org/html/2401.11622v4/x4.png"
},
"5": {
"figure_path": "2401.11622v4_figure_5.png",
"caption": "Figure 5: Example binary AIFV-3333 code for source alphabet {a,b,c,d}.\ud835\udc4e\ud835\udc4f\ud835\udc50\ud835\udc51\\left\\{a,b,c,d\\right\\}.{ italic_a , italic_b , italic_c , italic_d } . The small filled nodes are complete nodes; the small striped nodes are intermediate-1111 nodes and the small empty nodes are intermediate-00 ones. The large nodes are master nodes with their assigned source symbols. They are labelled to their sides as Misubscript\ud835\udc40\ud835\udc56M_{i}italic_M start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT nodes, indicating that they are master-i\ud835\udc56iitalic_i nodes. Note that T2subscript\ud835\udc472T_{2}italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT encodes a\ud835\udc4eaitalic_a, which is at its root, with an empty string!",
"url": "http://arxiv.org/html/2401.11622v4/x5.png"
},
"9": {
"figure_path": "2401.11622v4_figure_9.png",
"caption": "Figure 9: Markov chain corresponding to AIFV-3333 code in Figure 5. Note that T1subscript\ud835\udc471T_{1}italic_T start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT contains no degree 1 master node, so there is no edge from T1subscript\ud835\udc471T_{1}italic_T start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT to T1.subscript\ud835\udc471T_{1}.italic_T start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT . Similarly, T2subscript\ud835\udc472T_{2}italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT contains no degree 2 master node, so there is no edge from T2subscript\ud835\udc472T_{2}italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT to T2.subscript\ud835\udc472T_{2}.italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT .",
"url": "http://arxiv.org/html/2401.11622v4/x6.png"
},
"10": {
"figure_path": "2401.11622v4_figure_10.png",
"caption": "Figure 10: Illustration of Lemma 8.2 case (b), describing Tk\u2032\u2062[T0e\u2062x].subscriptsuperscript\ud835\udc47\u2032\ud835\udc58delimited-[]subscriptsuperscript\ud835\udc47\ud835\udc52\ud835\udc650T^{\\prime}_{k}\\left[T^{ex}_{0}\\right].italic_T start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT [ italic_T start_POSTSUPERSCRIPT italic_e italic_x end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ] . Note that in Tk\u2032\u2062[T0e\u2062x],subscriptsuperscript\ud835\udc47\u2032\ud835\udc58delimited-[]subscriptsuperscript\ud835\udc47\ud835\udc52\ud835\udc650T^{\\prime}_{k}\\left[T^{ex}_{0}\\right],italic_T start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT [ italic_T start_POSTSUPERSCRIPT italic_e italic_x end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ] , leaf 0k\u20621superscript0\ud835\udc5810^{k}10 start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT 1 is labelled with an \u03f5italic-\u03f5\\epsilonitalic_\u03f5 so the tree is in \ud835\udcafke\u2062x\u2062(m,n)subscriptsuperscript\ud835\udcaf\ud835\udc52\ud835\udc65\ud835\udc58\ud835\udc5a\ud835\udc5b\\mathcal{T}^{ex}_{k}(m,n)caligraphic_T start_POSTSUPERSCRIPT italic_e italic_x end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ( italic_m , italic_n ) but not in \ud835\udcafk\u2062(m,n)subscript\ud835\udcaf\ud835\udc58\ud835\udc5a\ud835\udc5b\\mathcal{T}_{k}(m,n)caligraphic_T start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ( italic_m , italic_n ).",
"url": "http://arxiv.org/html/2401.11622v4/x7.png"
},
"11": {
"figure_path": "2401.11622v4_figure_11.png",
"caption": "Figure 11: Illustration of transformations in cases (c) and (d) of Lemma 8.7. Note that the illustration of the second subcase of (d) assumes that v\ud835\udc63vitalic_v is the 1111-child of u\ud835\udc62uitalic_u. The case in which v\ud835\udc63vitalic_v is the 00-child is symmetric.",
"url": "http://arxiv.org/html/2401.11622v4/x8.png"
},
"12": {
"figure_path": "2401.11622v4_figure_12.png",
"caption": "Figure 12: Illustration of transformation in case (e) of Lemma 8.7. v\ud835\udc63vitalic_v is always an intermediate-1111 node and w\ud835\udc64witalic_w can be either a master node or a complete node. z\ud835\udc67zitalic_z is a master node of degree t\ud835\udc61titalic_t. In the case in which u\ud835\udc62uitalic_u is a complete node, v\ud835\udc63vitalic_v may be either the left or right child of u.\ud835\udc62u.italic_u . Both cases are illustrated.",
"url": "http://arxiv.org/html/2401.11622v4/x9.png"
},
"13": {
"figure_path": "2401.11622v4_figure_13.png",
"caption": "Figure 13: Illustration of the first two cases of the proof of Lemma 8.5. In case (a) a\u03f5subscript\ud835\udc4eitalic-\u03f5a_{\\epsilon}italic_a start_POSTSUBSCRIPT italic_\u03f5 end_POSTSUBSCRIPT is complete and c\u03f5subscript\ud835\udc50italic-\u03f5c_{\\epsilon}italic_c start_POSTSUBSCRIPT italic_\u03f5 end_POSTSUBSCRIPT can be its 1111-child or its 00-child. T\u2032superscript\ud835\udc47\u2032T^{\\prime}italic_T start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT is the subtree rooted at c\u03f5.subscript\ud835\udc50italic-\u03f5c_{\\epsilon}.italic_c start_POSTSUBSCRIPT italic_\u03f5 end_POSTSUBSCRIPT . In case (b) a\u03f5subscript\ud835\udc4eitalic-\u03f5a_{\\epsilon}italic_a start_POSTSUBSCRIPT italic_\u03f5 end_POSTSUBSCRIPT is a master node and \u2113\u03f5subscript\u2113italic-\u03f5\\ell_{\\epsilon}roman_\u2113 start_POSTSUBSCRIPT italic_\u03f5 end_POSTSUBSCRIPT is connected to it via a chain of intermediate-00 nodes.",
"url": "http://arxiv.org/html/2401.11622v4/x10.png"
},
"14": {
"figure_path": "2401.11622v4_figure_14.png",
"caption": "Figure 14: Illustration of case (c) of the proof of Lemma 8.5. a\u03f5=0ksubscript\ud835\udc4eitalic-\u03f5superscript0\ud835\udc58a_{\\epsilon}=0^{k}italic_a start_POSTSUBSCRIPT italic_\u03f5 end_POSTSUBSCRIPT = 0 start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT is an intermediate-1111 node\nand \u03b2\u03f5=0k\u20621.subscript\ud835\udefditalic-\u03f5superscript0\ud835\udc581\\beta_{\\epsilon}=0^{k}1.italic_\u03b2 start_POSTSUBSCRIPT italic_\u03f5 end_POSTSUBSCRIPT = 0 start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT 1 . From definition 6.2 (2), this b\u03f5subscript\ud835\udc4fitalic-\u03f5b_{\\epsilon}italic_b start_POSTSUBSCRIPT italic_\u03f5 end_POSTSUBSCRIPT must exist in every Tk,subscript\ud835\udc47\ud835\udc58T_{k},italic_T start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT , so it may not be removed.\nvisubscript\ud835\udc63\ud835\udc56v_{i}italic_v start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is a deepest leaf in Tk,subscript\ud835\udc47\ud835\udc58T_{k},italic_T start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT , which is \u201cswapped\u201d with b\u03f5subscript\ud835\udc4fitalic-\u03f5b_{\\epsilon}italic_b start_POSTSUBSCRIPT italic_\u03f5 end_POSTSUBSCRIPT.",
"url": "http://arxiv.org/html/2401.11622v4/x11.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "Further improvements on the construction of binary AIFV- codes.",
"author": "Reza Hosseini Dolatabadi, Mordecai Golin, and Arian Zamani.",
"venue": "In 2024 IEEE International Symposium on Information Theory\n(ISIT\u201924), 2024.",
"url": null
}
},
{
"2": {
"title": "A polynomial time algorithm for AIVF coding.",
"author": "Reza Hosseini Dolatabadi, Mordecai Golin, and Arian Zamani.",
"venue": "In 2024 IEEE International Symposium on Information Theory\n(ISIT\u201924), 2024.",
"url": null
}
},
{
"3": {
"title": "Individually optimal single-and multiple-tree almost instantaneous\nvariable-to-fixed codes.",
"author": "Danny Dube and Fatma Haddad.",
"venue": "In 2018 IEEE International Symposium on Information Theory\n(ISIT), pages 2192\u20132196. IEEE, 2018.",
"url": null
}
},
{
"4": {
"title": "An optimality proof of the iterative algorithm for AIFV- codes.",
"author": "Ryusei Fujita, Ken-Ichi Iwata, and Hirosuke Yamamoto.",
"venue": "In 2018 IEEE International Symposium on Information Theory\n(ISIT), pages 2187\u20132191, 2018.",
"url": null
}
},
{
"5": {
"title": "An iterative algorithm to optimize the average performance of markov\nchains with finite states.",
"author": "Ryusei Fujita, Ken-ichi Iwata, and Hirosuke Yamamoto.",
"venue": "In 2019 IEEE International Symposium on Information Theory\n(ISIT), pages 1902\u20131906, 2019.",
"url": null
}
},
{
"6": {
"title": "On a redundancy of AIFV- codes for m =3,5.",
"author": "Ryusei Fujita, Ken-ichi Iwata, and Hirosuke Yamamoto.",
"venue": "In 2020 IEEE International Symposium on Information Theory\n(ISIT), pages 2355\u20132359, 2020.",
"url": null
}
},
{
"7": {
"title": "Discrete stochastic processes.",
"author": "Robert G Gallager.",
"venue": "OpenCourseWare: Massachusetts Institute of Technology, 2011.",
"url": null
}
},
{
"8": {
"title": "Polynomial time algorithms for constructing optimal aifv codes.",
"author": "Mordecai Golin and Elfarouk Harb.",
"venue": "In 2019 Data Compression Conference (DCC), pages 231\u2013240,\n2019.",
"url": null
}
},
{
"9": {
"title": "Speeding up the AIFV- dynamic programs by two orders of\nmagnitude using range minimum queries.",
"author": "Mordecai Golin and Elfarouk Harb.",
"venue": "Theoretical Computer Science, 865:99\u2013118, 2021.",
"url": null
}
},
{
"10": {
"title": "A polynomial time algorithm for constructing optimal binary\nAIFV- codes.",
"author": "Mordecai Golin and Elfarouk Harb.",
"venue": "IEEE Transactions on Information Theory, 69(10):6269\u20136278,\n2023.",
"url": null
}
},
{
"11": {
"title": "Speeding up AIFV- dynamic programs by orders of\nmagnitude.",
"author": "Mordecai J Golin and Albert John L Patupat.",
"venue": "In 2022 IEEE International Symposium on Information Theory\n(ISIT), pages 246\u2013251. IEEE, 2022.",
"url": null
}
},
{
"12": {
"title": "The ellipsoid method and its consequences in combinatorial\noptimization.",
"author": "M. Gr\u00f6tschel, L. Lov\u00e1sz, and A. Schrijver.",
"venue": "Combinatorica, 1(2):169\u2013197, Jun 1981.",
"url": null
}
},
{
"13": {
"title": "Geometric algorithms and combinatorial optimization, volume 2.",
"author": "Martin Gr\u00f6tschel, L\u00e1szl\u00f3 Lov\u00e1sz, and Alexander Schrijver.",
"venue": "Springer Science & Business Media, 2012.",
"url": null
}
},
{
"14": {
"title": "Worst-case redundancy of optimal binary AIFV codes and their\nextended codes.",
"author": "Weihua Hu, Hirosuke Yamamoto, and Junya Honda.",
"venue": "IEEE Transactions on Information Theory, 63(8):5074\u20135086,\n2017.",
"url": null
}
},
{
"15": {
"title": "AIFV codes allowing -bit decoding delays for unequal bit cost.",
"author": "Ken-Ichi Iwata, , Kengo Hashimoto, Takahiro Wakayama, and Hirosuke Yamamoto.",
"venue": "In 2024 IEEE International Symposium on Information Theory\n(ISIT\u201924), 2024.",
"url": null
}
},
{
"16": {
"title": "A dynamic programming algorithm to construct optimal code trees of\nAIFV codes.",
"author": "Ken-ichi Iwata and Hirosuke Yamamoto.",
"venue": "In 2016 International Symposium on Information Theory and Its\nApplications (ISITA), pages 641\u2013645, 2016.",
"url": null
}
},
{
"17": {
"title": "Aivf codes based on iterative algorithm and dynamic programming.",
"author": "Ken-ichi Iwata and Hirosuke Yamamoto.",
"venue": "In 2021 IEEE International Symposium on Information Theory\n(ISIT), pages 2018\u20132023. IEEE, 2021.",
"url": null
}
},
{
"18": {
"title": "Joint coding for discrete sources and finite-state noiseless\nchannels.",
"author": "Ken-Ichi Iwata and Hirosuke Yamamoto.",
"venue": "In 2022 IEEE International Symposium on Information Theory\n(ISIT), pages 3327\u20133332. IEEE, 2022.",
"url": null
}
},
{
"19": {
"title": "The Poincar\u00e9-Miranda theorem.",
"author": "Wladyslaw Kulpa.",
"venue": "The American Mathematical Monthly, 104(6):545\u2013550, 1997.",
"url": null
}
},
{
"20": {
"title": "Theory of linear and integer programming.",
"author": "Alexander Schrijver.",
"venue": "John Wiley & Sons, 1998.",
"url": null
}
},
{
"21": {
"title": "Almost instantaneous FV codes.",
"author": "H. Yamamoto and X. Wei.",
"venue": "In 2013 IEEE International Symposium on Information Theory\n(ISIT), pages 1759\u20131763, July 2013.",
"url": null
}
},
{
"22": {
"title": "An iterative algorithm to construct optimal binary AIFV- codes.",
"author": "Hirosuke Yamamoto and Ken-ichi Iwata.",
"venue": "In 2017 IEEE Information Theory Workshop (ITW), pages 519\u2013523,\n2017.",
"url": null
}
},
{
"23": {
"title": "Almost instantaneous fixed-to-variable length codes.",
"author": "Hirosuke Yamamoto, Masato Tsuchihashi, and Junya Honda.",
"venue": "IEEE Transactions on Information Theory, 61(12):6432\u20136443,\n2015.",
"url": null
}
}
],
"url": "http://arxiv.org/html/2401.11622v4"
}