Lubbock Music Now, a project of Civic Lubbock, Inc. Lubbock Music NOW: It’s the frame for a picture of life in the Hub City. It’s the artists grinding it out night after night, providing a soundtrack for our evenings out. It’s the songs being written now that may endure for years to come. The Project: compile an album every year made up entirely of locally produced music. Submitted tracks are judged by several former and/or current members of the Texas branch of The Recording Academy (the Texas Grammy Board). The top-ranked tracks are selected for inclusion on the current year’s Lubbock Music NOW CD. The Purpose: to honor and recognize musicians living in the Lubbock area “working the circuit,” and to give visitors a picture of what the local scene is producing. It’s a time capsule of music. Congratulations to all of the artists selected for the 2018 edition of Lubbock Music NOW! See the 2016 Lubbock Music NOW album and where to buy it. See the 2017 Lubbock Music NOW album and where to buy it.
english
/*
 * Copyright 2017 Netflix, Inc.
 * <p>
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 * <p>
 * http://www.apache.org/licenses/LICENSE-2.0
 * <p>
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 */
package io.rsocket.examples.transport.tcp.duplex;

import io.rsocket.AbstractRSocket;
import io.rsocket.Payload;
import io.rsocket.RSocket;
import io.rsocket.client.RSocketClient;
import io.rsocket.client.RSocketClient.SocketAcceptor;
import io.rsocket.lease.DisabledLeaseAcceptingSocket;
import io.rsocket.lease.LeaseEnforcingSocket;
import io.rsocket.server.RSocketServer;
import io.rsocket.transport.TransportServer.StartedServer;
import io.rsocket.transport.netty.client.TcpTransportClient;
import io.rsocket.transport.netty.server.TcpTransportServer;
import io.rsocket.util.PayloadImpl;
import reactor.core.publisher.Flux;
import reactor.ipc.netty.tcp.TcpClient;
import reactor.ipc.netty.tcp.TcpServer;

import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.charset.StandardCharsets;
import java.time.Duration;

import static io.rsocket.client.KeepAliveProvider.*;
import static io.rsocket.client.SetupProvider.*;

public final class DuplexClient {

  public static void main(String[] args) {
    // Start a TCP server whose setup handler immediately issues a requestStream
    // back to the connecting client -- the "duplex" direction.
    StartedServer server =
        RSocketServer.create(TcpTransportServer.create(TcpServer.create()))
            .start((setupPayload, reactiveSocket) -> {
              reactiveSocket
                  .requestStream(new PayloadImpl("Hello-Bidi"))
                  .map(payload -> StandardCharsets.UTF_8.decode(payload.getData()).toString())
                  .log()
                  .subscribe();
              return new DisabledLeaseAcceptingSocket(new AbstractRSocket() {});
            });

    SocketAddress address = server.getServerAddress();

    // Connect a client that also acts as a responder: it answers the server's
    // requestStream with one payload per second.
    RSocketClient rsclient =
        RSocketClient.createDuplex(
            TcpTransportClient.create(
                TcpClient.create(options -> options.connect((InetSocketAddress) address))),
            new SocketAcceptor() {
              @Override
              public LeaseEnforcingSocket accept(RSocket reactiveSocket) {
                return new DisabledLeaseAcceptingSocket(new AbstractRSocket() {
                  @Override
                  public Flux<Payload> requestStream(Payload payload) {
                    return Flux.interval(Duration.ofSeconds(1))
                        .map(aLong -> new PayloadImpl("Bi-di Response => " + aLong));
                  }
                });
              }
            },
            keepAlive(never()).disableLease());

    RSocket socket = rsclient.connect().block();
    socket.onClose().block();
  }
}
code
After the day was postponed, we managed to get out and have a great day of racing. Well done to all of our great swimmers; you all did well. In Week 7 of Term 1, the Year 7 and 8 students went out to the Wero white water centre. They had an amazing time, and here are a few photos from their trip.
english
The term "Belgravia Supercar Rentals" or "us" or "we" refers to the owner of the website, whose registered office is at 2 Eaton Gate, Belgravia, London, SW1W 9BJ. The term "you" refers to the user or viewer of our website. You may not create a link to this website from another website or document without Belgravia Supercar Rentals' prior written consent.
english
IPL 2018: Mumbai tastes victory as Rohit halts Mahi's march. Delhi: After a long wait, Mumbai Indians finally tasted victory, defeating Chennai Super Kings by 8 wickets in the 27th match of IPL season 11. It is Mumbai's second win in seven matches of the current IPL season. Batting first after losing the toss, Chennai Super Kings scored 169 for 5 in their 20 overs, leaving Mumbai Indians a target of 170, which they chased down with 2 balls to spare. It was a do-or-die situation for Mumbai Indians, and Rohit Sharma played a captain's innings: an unbeaten 56 off 33 balls, with 6 fours and 2 sixes. Evin Lewis contributed 47 off 43 balls and Surya Kumar Yadav added 44. On the back of Rohit Sharma's captain's innings, Mumbai beat table-topping Chennai Super Kings to record their second win of the current IPL season. Their previous win had come against Royal Challengers Bangalore, when Rohit Sharma scored 94. Also read: Video: the longest six of IPL 2018 so far. Chennai still top the table: with this second win, Mumbai Indians kept themselves alive in the tournament; they now sit sixth with 4 points from seven matches, while Chennai Super Kings top the table with ten points from seven. Batting first after losing the toss, Chennai Super Kings made 169 for 5 in 20 overs. Suresh Raina top-scored with an unbeaten 75, facing 47 balls and hitting 6 fours and 4 sixes. Besides him, Ambati Rayudu made 46 off 35 balls with 2 fours and four sixes, and Mahendra Singh Dhoni played an innings of 26 off 21 balls with three fours and a six. But Mumbai Indians halted Mahi's chariot and claimed the win.
hindi
Teknorob was established in March 2009 by engineers with more than 10 years of experience in the automotive industry, to develop high-tech products and systems and to provide solutions for all kinds of industrial automation needs. Teknorob has been expanding its strong technical staff day by day and, through the projects undertaken from 2009 to the present, has added a growing range of automation solutions to its service portfolio. Teknorob offers fast delivery and solutions that combine robotic automation, industrial automation, design (fixtures, special machines), training, PC software for special processes, and maintenance activities. Teknorob closely follows technological developments in its fields of activity and, guided by the principle of absolute customer satisfaction, offers its customers proactive solutions through its development work. Robotic line applications delivered in short lead times, mechanical manufacturing, mechanical machining work, steel structures, fixture automation, industrial automation, PLC and SCADA systems with special machine automation, microprocessor electronic card design, and R&D work all attest to Teknorob's technical strength and reliability. Teknorob aims to grow continuously and in a balanced way by supporting the development of its employees and expanding its staff with knowledgeable and competent people. Our company attaches importance to occupational health, safety and quality management, as documented by its ISO 9001 and OHSAS 18001 certificates. Industrial robots help overcome the difficulty of finding a skilled workforce and can increase normal workforce capacity, helping you compete with low-cost economies. Industrial robots are becoming ever more necessary in today's conditions. Their selection, installation, service and maintenance deserve to be taken seriously: they are an investment in a solution.
This investment will begin to pay for itself within 6 to 18 months through increased production quality and reduced labour costs. We export to 11 different countries, from the Middle East to all over the world. We are at your service with our technical team. We have long experience with many products, from white goods to cars. We are working on our projects with our expanding team and with passion. Do you have any questions?
english
'Modi spoke of teaching Muslims a lesson': police officer Bhatt. Sanjiv Bhatt, a senior Gujarat police officer, has, in an affidavit filed in the Supreme Court, questioned the impartiality of the Special Investigation Team probing the 2002 Gujarat riots and made serious allegations against Chief Minister Narendra Modi. He has repeated the allegation that during the anti-Muslim riots in Gujarat in 2002, 'Chief Minister Narendra Modi ordered police officers to ignore the rioters and said that Hindus should be allowed to vent their anger.' Who is police officer Sanjiv Bhatt? In his affidavit to the Supreme Court, Sanjiv Bhatt states that on 27 February 2002 he attended a meeting in which Chief Minister Narendra Modi told the police officers present to ignore the rioters; they were also ordered to ignore the complaints coming from the victims of the riots. 'He (Narendra Modi) told everyone present at the meeting that for too long the Gujarat police had kept a balance in acting against Hindus and Muslims in communal riots, and that the need of the hour was to teach Muslims a lesson so that such incidents (the Godhra episode) could never be repeated. Chief Minister Narendra Modi also expressed the view that Hindu sentiments were inflamed and that it was necessary to give them an opportunity to vent their anger,' police officer Sanjiv Bhatt states in the affidavit. The Gujarat government's spokesman, Jay Narayan Vyas, commenting on the development, said that the truth of Bhatt's claims would become clear before the Supreme Court. Speaking to the BBC, Jay Narayan Vyas said that the filing of this affidavit would have no personal effect on Chief Minister Narendra Modi. He said, 'These are Sanjiv Bhatt's allegations, and justice does not proceed by allegation and counter-allegation. The truth will emerge on the basis of facts and legal provisions.' Reactions to the developments around the Gujarat riots: Congress spokesman Manish Tewari, for his part, expressed the hope that the Supreme Court would take appropriate action on the affidavit.
Without naming anyone in particular, Manish Tewari added, 'I think those who praise the Chief Minister of Gujarat will certainly find the time to read this affidavit.' 'So that it can never happen again': Sanjiv Bhatt says in the affidavit that, at the meeting of police officers, Chief Minister Narendra Modi said that 'the situation is such that Muslims should be taught a lesson so that such incidents (the Godhra episode) are never repeated.' At the time of the 2002 riots, Sanjiv Bhatt was posted in the intelligence wing of the police; in the affidavit he reiterates the same account of the meeting. Sanjiv Bhatt has appeared several times before the Special Investigation Team probing the Gujarat riots and recorded his statements. The gist of these statements is that Chief Minister Narendra Modi took no action to protect the riot victims, nor took administrative steps to stop the rioters. 'No confidentiality of statements; family at risk': In the affidavit Bhatt says that the confidentiality of the statements he gave before the Special Investigation Team could not be maintained. 'Even his conversations with the officials of the Special Investigation Team reached people at the highest levels of the Gujarat government,' the affidavit states, adding, 'From this it appears that the Special Investigation Team has become part of the suppress-and-conceal campaign under way in Gujarat.' Sanjiv Bhatt says that after his statements were leaked to the press, he began to face threats to himself and his family.
He asked the state government for security, but, he says, 'unfortunately the Gujarat government paid no attention to my requests. Not only that, on several occasions even the meagre security I had left was withdrawn.'
hindi
ٲخرَس ییٚلہِ ژیوٗن پَتھَر پؠو تہٕ أتھی سٟتۍ پؠو راکھشَس تہِ تہٕ ژوٚلُس زُو نٟرِتھ
kashmiri
In collaboration with the Kootenay Conservation Program, Gregoire Lamoureux, restoration ecologist with the Slocan River Streamkeepers, explored restoration projects, the importance of good relationships with landowners, the challenges and benefits of the projects, and more. The Slocan River Streamkeepers have implemented over 40 riparian restoration projects in the Slocan Valley since 2005, restoring the equivalent of 5 km of riverbank. Some projects have included fish habitat enhancement. More recently, the Streamkeepers have implemented wetland restoration and enhancement projects. Gregoire Lamoureux grew up on a farm in southern Quebec and, after travelling across Canada, moved to the Slocan Valley in 1989, where he now operates a small native plant nursery. In 1991, he created the Kootenay Permaculture Institute to follow his passion for permaculture, regenerative agriculture and ecological restoration. He has been teaching permaculture and consulting across Canada for over 25 years. Gregoire has been working in riparian restoration for over 20 years and more recently in wetland restoration. He co-founded the Slocan River Streamkeepers Society in 2003 with the goals of improving public knowledge of aquatic ecosystems, improving stewardship of aquatic and riparian ecosystems, and implementing ecologically sound and effective restoration projects. Wetlands are a critical component of ecosystems as they support a variety of species at risk, hold and filter water, recharge groundwater, and store carbon. In the Kootenays, many wetlands have been lost to dam impoundment and land development. In collaboration with the Kootenay Conservation Program, Neil Fletcher from the BC Wildlife Federation shared his wealth of wetland experience and knowledge, having delivered over 100 workshops, many involving hands-on restoration.
This webinar explored the results of a BCWF-led monitoring study of past restoration projects, recent BCWF-led restoration projects in the Kootenays, and ways to get involved in conservation. Neil Fletcher has a broad range of resource management experience, having previously worked for a watershed authority in Ontario, the Canadian Forest Service and BC Hydro. He is the Chair of the Wetlands Stewardship Partnership of BC, a multi-agency partnership that focuses on provincial priorities and is currently working on standardizing a provincial wetland inventory. Neil also participates in a number of other steering and technical advisory committees supporting initiatives such as the Okanagan Wetlands Strategy, Aquatic Invasive Species of BC, and the National Wetlands Round Table. In 2016, he was named “Canadian Outdoorsman of the Year” by the Canadian Wildlife Federation, in part for the conservation and stewardship work he has accomplished in the Columbia Basin.
english
Central Bank of India to raise up to Rs 5,000 crore in equity capital. New Delhi, July 16 (Bhasha): Public-sector Central Bank of India will raise up to Rs 5,000 crore of equity capital through various routes. To meet its capital adequacy ratio, the bank may also bring a public issue or a rights issue under this plan. The bank said in a statement that to raise the capital it will take the route of a follow-on public offer (FPO), a rights issue, or a qualified institutional placement (QIP). The bank said that for this purpose it will get a resolution approved at the annual general meeting of shareholders to be held on August 7.
hindi
\begin{document} \setlength{\textheight}{8.0truein} \runninghead{Information Reconciliation for QKD}{D. Elkouss, J. Martinez-Mateo and V. Martin} \normalsize\textlineskip \thispagestyle{empty} \setcounter{page}{1} \copyrightheading{0}{0}{2010}{000--000} \vspace*{0.88truein} \alphfootnote \fpage{1} \centerline{\bf INFORMATION RECONCILIATION} \vspace*{0.035truein} \centerline{\bf FOR QUANTUM KEY DISTRIBUTION} \vspace*{0.37truein} \centerline{\footnotesize DAVID ELKOUSS, JESUS MARTINEZ-MATEO, VICENTE MARTIN\footnote{[email protected]}} \vspace*{0.015truein} \centerline{\footnotesize\it Research group on Quantum Information and Computation\footnote{http://gcc.ls.fi.upm.es}} \baselineskip=10pt \centerline{\footnotesize\it Facultad de Inform\'atica, Universidad Polit\'ecnica de Madrid} \baselineskip=10pt \centerline{\footnotesize\it Campus de Montegancedo, 28660 Boadilla del Monte, Madrid, Spain} \vspace*{0.225truein} \publisher{(received date)}{(revised date)} \vspace*{0.21truein} \abstract{Quantum key distribution (QKD) relies on quantum and classical procedures in order to achieve the growth of a secret random string ---the key--- known only to the two parties executing the protocol. The limited intrinsic efficiency of the protocol, imperfect devices and eavesdropping produce errors and information leakage from which the set of measured signals ---the raw key--- must be stripped in order to distill a final, information-theoretically secure, key. The key distillation process is a classical one in which basis reconciliation, error correction and privacy amplification protocols are applied to the raw key. This cleaning process is known as information reconciliation and must be done in a fast and efficient way to avoid hampering the performance of the QKD system.
Brassard and Salvail proposed a very simple and elegant protocol to reconcile keys in the secret-key agreement context, known as \textit{Cascade}, which has become the de-facto standard for all practical QKD implementations. However, it is highly interactive, requiring many communications between the legitimate parties, and its efficiency is not optimal, imposing an early limit on the maximum tolerable error rate. In this paper we describe a low-density parity-check reconciliation protocol that improves significantly on these problems. The protocol exhibits better efficiency and limits the number of uses of the communications channel. It is also able to adapt to different error rates while remaining efficient, thus reaching longer distances or higher secure key rates for a given QKD system.} \vspace*{10pt} \keywords{Quantum cryptography, quantum key distribution, information reconciliation, rate-compatible, low-density parity-check codes} \vspace*{3pt} \communicate{to be filled by the Editorial} \vspace*{1pt}\textlineskip \section{Introduction} \label{sec:introduction} \noindent A quantum key distribution protocol is composed of two parts: a quantum and a classical one~\cite{Gisin_02}. The quantum part involves the actual transmission of qubits, their manipulation and detection, and is performed using a quantum channel. The classical part is done through a public, albeit integrity-preserving, classical channel and involves basis reconciliation, error correction and privacy amplification protocols. The quantum part results in the production of a raw key at both ends of the quantum channel. The raw key must be cleaned of all the unavoidable errors produced in this part. These include the intrinsic ones, due to the limited efficiency of the protocol, and those arising either from the inevitable imperfections in the physical setup or from eavesdropping.
Intrinsic errors are easier to correct, and this is usually done by bookkeeping of the detection events and subsequent discussion over the classical channel about the preparation state of the qubits leading to the recorded events. For example, in the standard BB84 protocol~\cite{Bennett_84}, half of the qubits detected by Bob will be in a basis orthogonal to the one in which they were prepared by Alice, leading to 50\% of detections that do not directly contribute bits to the final secret key. In a real setup, the remaining bits will still be affected by errors arising either from the physical implementation itself or from an eavesdropper, these being in principle indistinguishable. The process of cleaning the key of these errors is known as information reconciliation and is done through the classical channel. The existence of optimal, although inefficient, protocols leaking a minimum of information in the process was demonstrated in~\cite{Brassard_94}. There, a practical protocol trading an acceptable amount of leaked information for efficiency was also proposed. This reconciliation protocol, known as \textit{Cascade}, has become the de-facto standard for all practical QKD implementations. However, it has several shortcomings that make it less than ideal under certain situations that are expected to become more common in real-world environments. Work has been done to improve on \textit{Cascade}~\cite{Sugimoto_00, Liu_03, Buttler_03, Han_09}, but none of the resulting methods have become as widespread. Recent advances in QKD systems have brought a tremendous increase in key generation speed~\cite{Shields_08, Stucki_05, Gordon_05}. Current generation systems can be successfully used over longer distances or in noisier environments than before, like those arising when integration with conventional networks is required~\cite{Lancho_09, Zbinden_09}. These changes indicate that the reconciliation protocol must be efficient at high key and error rates.
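The one-half sifting loss of BB84 mentioned above can be illustrated with a short simulation. This is a toy sketch, not part of any QKD implementation; the sample size and seed are arbitrary illustrative choices:

```python
import random

# Toy BB84 basis sifting: Alice and Bob each choose a measurement basis
# uniformly at random (0 = rectilinear, 1 = diagonal). Only positions where
# the bases coincide contribute to the sifted key, so on average half of the
# detections are discarded -- the intrinsic 50% loss of the protocol.

random.seed(1)
n = 100_000
alice_bases = [random.randrange(2) for _ in range(n)]
bob_bases = [random.randrange(2) for _ in range(n)]

# Indices that survive sifting (matching bases).
sifted = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
fraction = len(sifted) / n  # concentrates around 0.5 as n grows
```

By the law of large numbers the surviving fraction approaches one half, which is why only about half of Bob's detections contribute bits to the final key.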
\textit{Cascade} is a highly interactive protocol that requires a high number of uses of the public channel to proceed. The number of uses rises markedly with the quantum bit error rate (QBER), so the protocol is not well suited for next generation QKD systems. From a practical point of view, the protocol must be implemented using two computers at both ends of the quantum channel that process the key and communicate through the public channel. Typical network access latencies are much higher than CPU operation times, hence it is easy to produce a bottleneck in highly interactive protocols. In fact, if a specialised communications network is not used, the communications needs of \textit{Cascade} are already the limiting factor in many situations with current systems, instead of the much more delicate quantum part. On the other hand, a less than ideal efficiency limits the maximum number of errors that can be corrected without publishing too much information, thus reducing the performance of the system when working at a high error rate. In this paper we describe a reconciliation protocol that overcomes these problems. The protocol exhibits a better efficiency than \textit{Cascade}, and it can be adapted to a varying QBER with a low information leakage, extending the usable range of the system. It limits the number of uses of the public channel, and its structure allows for a hardware implementation, avoiding communications and CPU bottlenecks, thus being well suited to next generation QKD systems in demanding environments. The paper is organised as follows: in Section~\ref{sec:reconciliation}, the information reconciliation problem in the secret-key agreement context is described and the current status of error correction in QKD is discussed.
A new information reconciliation protocol able to adapt to different channel parameters is presented and its asymptotic behavior discussed in Section~\ref{sec:rate-compatible}. In Section~\ref{sec:simulation-results} the results of a practical implementation of the protocol are shown. In particular, the efficiency of the protocol is compared to its optimal theoretical value and to \textit{Cascade}. \section{Problem statement} \label{sec:reconciliation} \subsection{Information reconciliation} Let Alice and Bob be two parties with access to dependent sources identified by two random variables, $X$ and $Y$ respectively. Information reconciliation is the process by which Alice and Bob extract common information from their correlated sources. In a practical setting Alice and Bob hold $\mathbf{x}$ and $\mathbf{y}$, two $n$-length strings that are the outcome of $X$ and $Y$, and they will agree on some string $\mathbf{s} = f(\mathbf{x}, \mathbf{y})$ through one-way or bidirectional conversation~\cite{VanAssche_06}. The conversation $\phi(\mathbf{x}, \mathbf{y})$ is also a function of the outcome strings, and its quality can be measured by the number of symbols involved in the conversation, $M = |\phi(\mathbf{x}, \mathbf{y})|$. The problem of encoding correlated sources is well known in information theory. To encode $X$ and $Y$ independently, at least a rate $R \geq H(X) + H(Y)$ is needed. However, in their seminal paper, Slepian and Wolf~\cite{Slepian_73} demonstrated that a rate $R \geq H(X, Y)$ suffices to jointly encode both variables, even if $X$ and $Y$ are encoded separately. Moreover, if $Y$ is available at the decoder, only a rate $R \geq H(X|Y)$ is needed to encode $X$ (see Fig.~\ref{fig:side-information}), which in the information reconciliation context corresponds to the minimum information needed to reconcile Alice's and Bob's strings.
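The Slepian-Wolf bound above can be made concrete with a small numerical sketch (illustrative values only): when the two strings are correlated as the input and output of a binary symmetric channel, the conditional entropy equals the binary entropy of the crossover probability, so reconciling an $n$-bit string requires disclosing at least $n$ times that entropy.

```python
import math

# Worked example of the Slepian-Wolf bound for BSC-correlated strings:
# H(X|Y) = h(eps), the binary entropy of the crossover probability eps,
# so at least n * h(eps) bits must be published to reconcile n bits.

def h(eps):
    """Binary entropy function, in bits."""
    if eps in (0.0, 1.0):
        return 0.0
    return -eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)

eps, n = 0.05, 10_000          # illustrative crossover probability and length
min_leak = n * h(eps)          # minimum disclosed information, in bits
# A scheme with efficiency f >= 1 publishes f * n * h(eps) bits instead.
```

For instance, at a crossover probability of 0.05 the binary entropy is roughly 0.29 bits per symbol, so a perfectly efficient scheme would disclose about 2,900 bits to reconcile a 10,000-bit string.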
To measure the quality of a real reconciliation scheme, which in a practical setting will encode $X$ with a higher rate than $H(X|Y)$, we use the efficiency parameter $f \geq 1$ defined as: \begin{equation} I_\textrm{real} = f H(X|Y) \geq I_\textrm{opt} \label{eq:efficiency} \end{equation} \noindent where $I_\textrm{real}$ is the information published during the reconciliation process, $I_\textrm{opt}$ is the minimum information that would suffice to reconcile Alice's and Bob's strings, and $H(X|Y)$ is the conditional Shannon entropy. However, there are two other parameters to consider when evaluating the quality of an information reconciliation procedure: the computational complexity and the interactivity. The first stresses that a real information reconciliation procedure must be computationally feasible. Any sufficiently long random linear code of the appropriate rate could solve the problem~\cite{Zamir_02}; however, optimal decoding is in general an NP-hard problem. The interactivity of a reconciliation protocol has to be taken into account because, especially in high-latency scenarios, the communications overhead can pose a severe burden on the performance of the QKD protocol. \begin{figure}\label{fig:side-information} \end{figure} In order to evaluate the quality of the reconciliation, we will concentrate on discrete-variable QKD protocols, even though the ideas presented here can easily be extrapolated to other scenarios. Most QKD protocols encode the information in discrete variables~\cite{Bennett_84, Bennett_92}, although there are many proposals on continuous-variable protocols~\cite{Ralph_99, Grosshans_02, Fossier_09}. Errors on the quantum channel are normally uncorrelated and symmetric or, if prior to the reconciliation Alice and Bob apply a random permutation, they can be made to behave as such~\cite{Gottesman_03}.
In this situation Alice's and Bob's strings can be regarded as the input and output of a binary symmetric channel (BSC), characterized by the crossover probability $\varepsilon$, and the efficiency parameter $f$ can be described as the relationship between the length of the conversation $M = |\phi(\mathbf{x}, \mathbf{y})|$ and the optimal value $N\cdot H(X|Y) = N\cdot h(\varepsilon)$: \begin{equation} f = \frac{M}{N\cdot h(\varepsilon)} \label{eq:f} \end{equation} It was first shown in~\cite{Liveris_02} that low-density parity-check (LDPC) codes used within Wyner's coset scheme~\cite{Wyner_74, Zamir_02} are a good solution for the compression of binary sources with side information. LDPC codes~\cite{Richardson_01a} are linear codes that have a sparse parity-check matrix. These codes, when decoded with the belief propagation algorithm, can perform very close to the theoretical limit. The fundamental idea is to assign each source vector to a bin from a set of $2^{H(X|Y) + \iota}$ known bins: the encoder describes the bin to the decoder, and the decoder searches for the source vector inside the described bin. Let $\mathbf{x}$ and $\mathbf{y}$ be two binary strings of length $n$, and $C$ an $[n,k]$ binary linear code specified by its parity-check matrix $H$. The syndrome $\mathbf{s}$ of a vector $\mathbf{x}$ is the $(n-k)$-length string defined as $\mathbf{s} = H \mathbf{x}$, with $\mathbf{s} = \mathbf{0}$ for all the codewords in $C$. Each syndrome $\mathbf{s}$ defines a coset $C_\mathbf{s}$ as the set of all strings, $\{ \mathbf{x} \}$, that verify $H \mathbf{x} = \mathbf{s}$. Wyner's scheme consists of assigning each source vector to one of the cosets of $C$. Encoding $\mathbf{x}$ then amounts to computing its syndrome, and decoding to finding the member of $C_\mathbf{s}$ closest to $\mathbf{y}$. An LDPC message-passing decoder was modified in~\cite{Liveris_02} to take into account the syndrome decoding proposed by Wyner. This same procedure can be applied to the QKD scenario.
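As a minimal illustration of Wyner's coset scheme, the following sketch uses a toy code with a hypothetical sparse parity-check matrix and a brute-force decoder; real LDPC codes are far longer and are decoded with belief propagation, not exhaustive search:

```python
import itertools
import numpy as np

# Toy [6, 3] binary linear code given by a (hypothetical) sparse
# parity-check matrix H, chosen for illustration only.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(H, x):
    # s = H x over GF(2); s = 0 exactly for the codewords of C.
    return H.dot(x) % 2

def reconcile(H, s, y):
    # Decoding: find the member of the coset C_s closest (in Hamming
    # distance) to Bob's string y. Brute force over all 2^n strings.
    n = H.shape[1]
    best = None
    for cand in itertools.product([0, 1], repeat=n):
        cand = np.array(cand)
        if np.array_equal(syndrome(H, cand), s):
            if best is None or np.sum(cand != y) < np.sum(best != y):
                best = cand
    return best

x = np.array([1, 0, 1, 1, 0, 0])  # Alice's string
y = np.array([1, 0, 1, 1, 1, 0])  # Bob's string: one flipped bit
s = syndrome(H, x)                # Alice publishes only the n-k syndrome bits
x_hat = reconcile(H, s, y)        # Bob recovers the coset member nearest to y
```

Note that Alice discloses only the three syndrome bits, not the string itself; this is the sense in which syndrome encoding approaches the $H(X|Y)$ bound when the code is well chosen.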
\subsection{Previous work} \label{sec:cascade} As mentioned in the introduction, the most widely used and best known protocol for error correction in the QKD context is \textit{Cascade}. Proposed by Brassard and Salvail~\cite{Brassard_94}, this protocol runs for a fixed number of passes. In each pass, Alice and Bob divide their strings into blocks of equal length. The initial block length depends on the estimated error probability, $p$, and it is doubled when starting a new pass. For each block they compute and exchange its parity. A parity mismatch implies an odd number of errors, and a dichotomic search allows both parties to find one of the errors. Whenever an error is found after the first pass, it uncovers an odd number of errors masked in the preceding passes, and the algorithm returns to correct those previously undetected errors. This cascading process gives the protocol its name. Several papers propose improvements on \textit{Cascade}~\cite{Sugimoto_00, Liu_03}; they analyse how the block length is to be chosen and increased in order to optimise the efficiency of the reconciliation, but the main characteristics remain unaltered. \textit{Cascade}, although highly interactive, is reasonably efficient and easy to implement. It is well known and has become the de-facto standard, hence we have chosen \textit{Cascade} as the benchmark to compare against. \textit{Winnow}~\cite{Buttler_03} is another well-known reconciliation protocol in the QKD context; it requires only two communications between the parties. In the first communication Alice and Bob exchange the parities of every block. After that, they exchange the syndrome of a Hamming code for correcting single errors in each block with a parity mismatch. The protocol incorporates a privacy maintenance procedure by discarding one bit per parity revealed (i.e. $m$ bits are discarded when a syndrome of length $m$ is exchanged).
Its main advantage is a reduction in the number of communications needed; however, the efficiency of the protocol is worse than that of \textit{Cascade} in the error range of interest. Recently, some interesting improvements have been proposed for selecting an optimum block length in this protocol~\cite{Han_09}. Modern coding techniques were not applied to discrete-variable QKD until recently. LDPC codes were proposed and used in~\cite{Elliott_05}, but as the codes had not been specifically designed for the problem, aside from the inherent advantage of forward error correction, the efficiency was worse than that of \textit{Cascade}. These codes have also been used in the context of continuous-variable QKD~\cite{Fossier_09}. LDPC codes were first optimized for the BSC in~\cite{Elkouss_09}, and although the results were close to optimal for the designed codes, the efficiency curve exhibited a sawtooth behaviour due to a lack of information rate adaptability in the proposed procedure~\cite{Elkouss_10}. Since the error rate can vary between transmissions, it is important for a protocol to be able to cope with this change. \section{Rate-compatible reconciliation} \label{sec:rate-compatible} \noindent Although linear codes are a good solution for the reconciliation problem, since they can be tailored to a given error rate, their efficiency degrades when the error rate is not known beforehand. This is the case in QKD, where the error rate is an a priori unknown that is estimated for every exchange. The QBER might vary significantly between two consecutive key exchanges, especially when the quantum channel is transported through an optical fibre shared with several independent classical or quantum channels that can add noise. To address this problem there are two different options: (i) a code can be built once the error rate has been estimated, or (ii) a pre-built code can be modified to adjust its information rate.
The computational overhead makes the first option almost unfeasible except for very stable quantum channels, something difficult to achieve in practice and impossible in the case of a shared quantum channel in a reconfigurable network environment~\cite{Lancho_09}. In this paper we propose the second strategy as the easiest and most effective way to obtain a code for the required rate, and we describe a protocol that adapts pre-built codes in real time while maintaining an efficiency close to the optimal value. \subsection{Rate modulation} \label{sec:rate-modulation} Puncturing and shortening are two common strategies used to adapt the rate of a linear code. This process of adapting the information rate of a pre-built code will be referred to as rate modulation. When $p$ punctured symbols are removed from a codeword, an $[n, k]$ code is converted into an $[n-p, k]$ code, whereas when shortening, $s$ symbols are removed during the encoding process, and an $[n, k]$ code is converted into an $[n-s, k-s]$ code. A graphical representation, on a Tanner graph, of the puncturing and shortening procedures just described and their effect on the rate of a sample code is shown in Fig.~\ref{fig:puncturing-shortening}. \begin{figure}\label{fig:puncturing-shortening} \end{figure} These procedures may be regarded as the transmission of different parts of the codeword over different channels (see Fig.~\ref{fig:channel-model}). Since puncturing is a process by which $p$ codeword symbols are eliminated, it can be seen as a transmission over a binary erasure channel (BEC) with erasure probability $1$, $\textrm{BEC}(1)$. Shortening is a process by which $s$ codeword symbols are known with absolute certainty; as such it can be seen as a transmission over a BEC with erasure probability $0$, $\textrm{BEC}(0)$.
The remaining symbols are transmitted over the real channel, which in the present paper can be modelled as a binary symmetric channel with crossover probability $\varepsilon$, $\textrm{BSC}(\varepsilon)$. \begin{figure}\label{fig:channel-model} \end{figure} Supposing that $R_0$ is the original coding rate, the modulated rate is then calculated as: \begin{equation} R = \frac{R_0 - \sigma}{1 - \pi - \sigma} = \frac{k - s}{n - p - s} \label{eq:rate} \end{equation} \noindent where $\pi$ and $\sigma$ represent the ratios of punctured and shortened information, respectively. Both strategies, puncturing and shortening, can be applied simultaneously. Given an $[n, k]$ code and $n'\leq n$ bits, if puncturing and shortening are applied with a constant total number $d = p + s$ of punctured and shortened symbols, a single code can be used to protect the $n'$ bits for different error rates. There are two consequences of keeping $d$ constant: (i) there is a limit to the minimum and maximum achievable information rates. These limits, expressed as a function of $\delta = d / n$, define the correction interval: \begin{equation} 0 \leq R_{\textrm{min}} = \frac{R_0 - \delta}{1 - \delta} \leq R \leq \frac{R_0}{1 - \delta} = R_{\textrm{max}} \leq 1 \label{eq:rate-interval} \end{equation} \noindent (ii) puncturing and shortening cause an efficiency loss~\cite{Ha_04}. Therefore, there is a tradeoff between the achievable information rates and the reconciliation efficiency. This efficiency loss, caused by high levels of puncturing and shortening, can be avoided if a set of $n$ codes $\zeta_i$ with different information rates is used: $R_0(\zeta_1)\le R_0(\zeta_2)\le \ldots \le R_0(\zeta_n)$. The target error range can then be partitioned into the intervals $[R_{\textrm{min}}(\zeta_1), R_{\textrm{max}}(\zeta_1)] \cup [R_{\textrm{min}}(\zeta_2), R_{\textrm{max}}(\zeta_2)] \cup \ldots \cup [R_{\textrm{min}}(\zeta_n), R_{\textrm{max}}(\zeta_n)]$, not necessarily of the same size.
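As a quick numerical check of Eqs.~(\ref{eq:rate}) and~(\ref{eq:rate-interval}) (the parameter values are illustrative only):

```python
def modulated_rate(n, k, p, s):
    """Rate of an [n, k] code after puncturing p and shortening s symbols."""
    return (k - s) / (n - p - s)

def rate_interval(r0, delta):
    """Achievable rates when d = p + s is kept constant (delta = d / n)."""
    r_min = (r0 - delta) / (1 - delta)
    r_max = r0 / (1 - delta)
    return r_min, r_max

# Example: an R0 = 0.5 code with delta = 0.1, as used later in the paper.
n, k, d = 2000, 1000, 200
r_min, r_max = rate_interval(k / n, d / n)
# All-puncturing attains R_max, all-shortening attains R_min.
assert abs(modulated_rate(n, k, p=d, s=0) - r_max) < 1e-12
assert abs(modulated_rate(n, k, p=0, s=d) - r_min) < 1e-12
```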
The number of intervals depends on the width of the error rate range to be covered and on the desired efficiency. The compromise between the width of the covered interval and the achieved efficiency in the one-code case is thus transferred to a compromise between efficiency and the added complexity of managing several codes. Fig.~\ref{fig:efficiency-thresholds} shows the computed efficiency thresholds for several families of codes with different coding rates. \begin{figure}\label{fig:efficiency-thresholds} \end{figure} \subsection{Protocol} \label{sec:protocol} We now describe a rate-compatible information reconciliation protocol using the puncturing and shortening techniques described above. {\it Step 0: Raw key exchange}. Alice and Bob obtain a raw key by running a QKD protocol through a quantum channel (see Section~\ref{sec:reconciliation}). This key exchange may be modelled as follows. Alice sends to Bob the string $\mathbf{x}$, an instance of a random variable $X$, of length $\ell = n - d$ through a binary symmetric channel with crossover probability $\varepsilon$, BSC($\varepsilon$) (or a black box behaving as such). Bob receives the correlated string, $\mathbf{y}$, with discrepancies to be removed in the following steps. {\it Step 1: Pre-conditions}. Prior to the key reconciliation process, Alice and Bob agree on the following parameters: (i) a pool of shared codes of length $n$, constructed for different coding rates; (ii) the size of the sample, $t$, that will be used to estimate the error rate in the communication; and (iii) the maximum number of symbols that will be punctured or shortened to adapt the coding rate, $d = p + s = n\delta$. {\it Step 2: Error rate estimation}. Bob randomly chooses a sample of $t$ bits of $\mathbf{y}$, $\alpha(\mathbf{y})$, and sends it and the corresponding positions, $\beta(\mathbf{y})$, to Alice through a noiseless channel (i.e. the public and integrity-preserving channel used in the classical part of a QKD protocol).
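Bob's side of Step 2 amounts to the following sketch (the helper name and parameter values are ours, for illustration):

```python
import random

def sample_for_estimation(y, t, rng):
    """Step 2, Bob's side: choose t random positions of y and their values."""
    beta = sorted(rng.sample(range(len(y)), t))    # positions beta(y), sent to Alice
    alpha = [y[i] for i in beta]                   # sampled bits alpha(y), sent to Alice
    return alpha, beta

rng = random.Random(0)
y = [rng.randint(0, 1) for _ in range(100)]        # Bob's raw key (toy example)
alpha, beta = sample_for_estimation(y, t=20, rng=rng)
assert len(alpha) == len(beta) == 20
assert all(y[i] == a for i, a in zip(beta, alpha))
```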
Using the positions received from Bob, $\beta(\mathbf{y})$, Alice extracts the equivalent sample from $\mathbf{x}$, $\alpha(\mathbf{x})$, and estimates the crossover probability of the exchanged key by comparing the two samples: \begin{equation} \varepsilon' = \frac{\sum_{i=1}^{t} \alpha(\mathbf{x})_i \oplus \alpha(\mathbf{y})_i}{t} \end{equation} Once Alice has estimated $\varepsilon'$, she knows the theoretical rate of a punctured and shortened code able to correct the string. She computes the optimal rate corresponding to the efficiency of the code she is using, $R = 1 - f(\varepsilon') h(\varepsilon')$, where $h$ is the binary Shannon entropy function and $f$ the efficiency. She can then derive the optimal numbers of punctured and shortened symbols, $p$ and $s$ respectively, as: \begin{equation} \begin{split} s & = \lceil (R_0 - R (1 - d / n)) \cdot n \rceil \\ p & = d - s \end{split} \end{equation} {\it Step 3: Coding}. Alice creates a string $\mathbf{x^+} = g(\mathbf{x}, \mathbf{\sigma_{\varepsilon'}}, \mathbf{\pi_{\varepsilon'}})$ of size $n$. The function $g$ defines the $n - d$ positions that take the values of the string $\mathbf{x}$, the $p$ positions that are assigned random values, and the $s$ positions whose values are known to both Alice and Bob. The set of $n - d$ positions, the set of $p$ positions, and the set of $s$ positions and their values come from a synchronised pseudo-random generator. She then sends $s(\mathbf{x^+})$, the syndrome of $\mathbf{x^+}$, to Bob, together with the estimated crossover probability $\varepsilon'$. This process can be regarded as jointly coding (and decoding) the original strings sent through a BSC(QBER), with $p$ bits sent through a binary erasure channel (BEC) with erasure probability $1$, and $s$ bits sent through a noiseless channel (see Fig.~\ref{fig:channel-model}). {\it Step 4: Decoding}. Bob can reproduce Alice's estimate of the optimal rate $R$, the positions of the $p$ punctured bits, and the positions and values of the $s$ shortened bits.
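Alice's side of Steps 2 and 3 can be sketched as follows (the parameter values, and treating the efficiency $f$ as a constant, are illustrative assumptions of ours):

```python
import math

def h(x):
    """Binary Shannon entropy."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def puncturing_shortening(n, k, d, eps, f=1.1):
    """Steps 2-3: from the estimated crossover probability eps, derive the
    numbers of punctured (p) and shortened (s) symbols for an [n, k] code.
    The efficiency f of the code family is assumed constant here."""
    r0 = k / n
    r = 1 - f * h(eps)                        # target rate R = 1 - f*h(eps)
    s = math.ceil((r0 - r * (1 - d / n)) * n)
    p = d - s
    return p, s, r

# Illustrative numbers: an R0 = 0.5 code of length n = 2000 with d = 200.
p, s, r = puncturing_shortening(n=2000, k=1000, d=200, eps=0.09)
assert p + s == 200 and 0 < s < 200
# The modulated rate (k - s)/(n - d) matches the target to within 1/(n - d).
assert abs((1000 - s) / 1800 - r) < 1 / 1800
```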
Bob then creates the corresponding string $\mathbf{y^+} = g(\mathbf{y}, \mathbf{\sigma_{\varepsilon'}}, \mathbf{\pi_{\varepsilon'}})$. He should now be able to decode Alice's codeword with high probability, as the rate has been adapted to the channel crossover probability. Bob sends an acknowledgement to Alice to indicate whether he successfully recovered $\mathbf{x^+}$. \begin{figure}\label{fig:interactive-protocol} \end{figure} {\it Step 5: (Optional) Interactive decoding}. If Bob does not succeed in recovering Alice's string, Alice can reduce the information rate of the code by revealing some $r^* \le p$ of the punctured bits on the public channel; these become shortened bits (see Fig.~\ref{fig:interactive-protocol}). Steps 3 and 4 are then repeated: Alice computes the new syndrome and sends it to Bob, who tries to decode and sends an acknowledgement back to Alice. Let $p^{(i)}$ and $s^{(i)}$ be the numbers of punctured and shortened bits, respectively, in the $i$-th round of the proposed protocol, and $r^{(i+1)}$ the number of punctured bits to be revealed for the next round; the new numbers of punctured and shortened bits used for the reconciliation are then $p^{(i+1)} = p^{(i)} - r^{(i+1)}$ and $s^{(i+1)} = s^{(i)} + r^{(i+1)}$, respectively. These steps can be repeated as long as Bob does not find the correct string and there are punctured bits that have not yet been revealed as shortened bits, i.e. while $p > 0$. \section{Simulation results} \label{sec:simulation-results} In this section we discuss the efficiency of the rate-compatible information reconciliation protocol without the interactive decoding step, comparing the results of the protocol to regular LDPC codes as proposed in~\cite{Elkouss_09} and to~\textit{Cascade}. The purpose of these simulations is to show that the proposed protocol extends the working range in QKD, allowing a key to be distilled over a wider QBER range than previous information reconciliation protocols.
Fig.~\ref{fig:simulations-results} shows the efficiency, calculated as defined in Eq.~(\ref{eq:efficiency}), of the reconciliation process simulated for three alternatives: (i) the \textit{Cascade} protocol, (ii) LDPC codes without information rate adaptation, and (iii) LDPC codes whose information rate is adapted with the rate-compatible protocol proposed here. The target error range selected is $[0.055, 0.11]$, where a high efficiency protocol is a must. Low QBER values do not demand a close to optimal efficiency, since other requirements, such as the throughput, are more critical in obtaining a high secret key rate. In order to achieve an efficiency close to $1$, the error range $[0.055, 0.11]$ has been divided into two correction intervals: $R_0(\zeta_1)=0.5$, $R_0(\zeta_2)=0.6$ and $\delta=0.1$. The codes have been constructed using families of LDPC codes specifically optimised for the BSC. The generating polynomials of these families can be found in~\cite{Elkouss_09}; however, they were not designed with shortening and puncturing in mind. Taking these parameters into account in the design of the generating polynomials would make it possible to cover the whole QBER range with high efficiency. The construction process has also been optimised using a modified progressive edge-growth algorithm for irregular codes with a detailed check node degree distribution~\cite{Martinez_10}. A codeword length of $2 \times 10^5$ bits has been used. \begin{figure}\label{fig:simulations-results} \end{figure} The results show that there is a small price to pay for the rate adaptation. LDPC codes without puncturing and shortening behave slightly better near their threshold; however, for the chosen value of $\delta$ the penalty is very small, and the rate-compatible protocol allows strings to be reconciled over the whole range with $f \le 1.1$.
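Eq.~(\ref{eq:efficiency}) lies outside this excerpt; assuming the usual definition $f = (1 - R)/h(\varepsilon)$ for a rate-$R$ code on a $\textrm{BSC}(\varepsilon)$ (our assumption), a figure such as $f \le 1.1$ corresponds, for instance, to:

```python
import math

def h(x):
    """Binary Shannon entropy."""
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def efficiency(rate, eps):
    """f = (1 - R) / h(eps): information disclosed relative to the
    Slepian-Wolf limit for a BSC(eps); f = 1 is perfect reconciliation."""
    return (1 - rate) / h(eps)

# A modulated code of rate 0.52 decoding at eps = 0.09:
f = efficiency(0.52, 0.09)
assert f < 1.1
```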
The unmodulated LDPC codes exhibit an undesirable sawtooth behaviour that can lead to efficiencies worse than that of \textit{Cascade} unless many different codes are calculated, incurring an unacceptable penalty in CPU time. The new protocol works at a much better efficiency than \textit{Cascade}, which performs over the whole tested range with $f \ge 1.17$. \section{Conclusions} \label{sec:conclusions} We have demonstrated how to adapt an LDPC code for rate compatibility. The capability to adapt to different error rates while minimising the amount of published information is an important feature for secret-key reconciliation in the QKD context, especially when it is used in long distance links or in noisy environments such as those arising in shared optical networks. In these demanding environments a high efficiency is necessary to distill a key. The protocol improves on \textit{Cascade}, reaching efficiencies close to one while limiting the information leakage, and it has the important practical advantage of low interactivity: each party sends only one message. This high efficiency extends the working range in QKD; that is, it allows a key to be distilled over a wider QBER range than previous information reconciliation protocols. \section*{Acknowledgment} \noindent This work has been partially supported by the project Quantum Information Technologies in Madrid\footnote{http://www.quitemad.org} (QUITEMAD), Project P2009/ESP-1594, \textit{Comunidad Aut\'onoma de Madrid}. The authors gratefully acknowledge the computer resources, technical expertise and assistance provided by the \emph{Centro de Supercomputaci\'on y Visualizaci\'on de Madrid}\footnote{http://www.cesvima.upm.es} (CeSViMa) and the Spanish Supercomputing Network. \nonumsection{References} \end{document}
Cotton Yarn Dyed Check Shirting Fabric. Home > Products > Cotton Yarn Dyed Check Shirting Fabric (a total of 24 products for Cotton Yarn Dyed Check Shirting Fabric). Cotton Yarn Dyed Check Shirting Fabric - manufacturers, factories, suppliers from China. We are specialised manufacturers and suppliers / factories of Cotton Yarn Dyed Check Shirting Fabric from China, wholesaling high-quality Cotton Yarn Dyed Check Shirting Fabric at a low price: one of the leading Chinese brands in Cotton Yarn Dyed Check Shirting Fabric, Shandong Xingtang International Trade Co., Ltd. Woven fine-grid 100% cotton plain fabric. Tags: fine-grid 100% cotton plain, small-check 100% cotton yarn-dyed fabric, cotton yarn dyed check shirting fabric. Product description: woven fine-grid 100% cotton plain fabric. This 100% cotton yarn-dyed fabric features a gingham check pattern in navy blue and white. Made of pure cotton, it is opaque, breathable and pleasantly soft, suited to warm-weather clothing, blouses, ...
Dainik Bhaskar's report is short but good, with the EVM controversy at the top of the front page. The EVM dispute is not ruling party versus opposition, or opposition versus the Election Commission. That the results an EVM produces be exactly right (every single vote counted) is as essential as getting a litre of petrol when you pay for a litre of petrol, or a metre of cloth when you pay for a metre of cloth. It is not enough for the measurement to be accurate; the measuring instrument must also be accurate and certified to a standard. Even a poor vegetable seller cannot sell vegetables with a stone in place of a certified weight, nor can cloth be measured with a stick of wood. That is the law of the land, and the weights and measures department exists to enforce it. Corruption may still shortchange the measure, but a standardised instrument is just as necessary. Despite this, the EVM controversy has been turned into opposition versus BJP, or in some cases opposition versus the Election Commission. In the newspapers it is printed as opposition versus BJP. Even today the long, sprawling reports do not correctly state what the matter is. In my morning review of the newspapers I wrote about why the BJP panics at demands to improve EVMs. But the real issue is more serious still, and it runs through all the reports. Newspapers generally take the side of the government and the BJP and appear tilted towards them; this story is no different. The BJP's reaction to today's story, even where it does not accompany the report, has been so widely discussed and circulated that readers will conclude the opposition is raising the EVM issue out of fear of defeat. I have already mentioned Dainik Jagran's editorial on this. Now, what the story actually is. Dainik Bhaskar has printed it most accurately and in the fewest words, under the headline "Opposition parties say: vote for one, slip for another". The Bhaskar News report runs: the Congress, TDP and Aam Aadmi Party have alleged that the VVPAT units attached to the EVMs are producing wrong slips. On this, Congress leaders Abhishek Manu Singhvi and Kapil Sibal, Delhi CM (Arvind) Kejriwal and Andhra (Pradesh) CM Chandrababu Naidu, among other leaders, held a joint press conference in Delhi. All said they are taking the issue to the Supreme Court: at least 50% of the voting slips should be matched against the EVM count. Singhvi said that when the EVM button was pressed, the slip from the attached VVPAT machine was visible for only 3 seconds, and that it keeps happening that the vote goes to one party while the slip shows another. Kejriwal said: "As an engineer, I understand that the EVMs have been programmed to the BJP's advantage." I wrote in an earlier post that Hari Prasad was also in the delegation that went to the Election Commission. Amar Ujala called him an "EVM thief" but did not explain who he is, or that he is in fact no thief. Bhaskar carries the main story under the flag headline "EVM controversy returns", along with two single-column items. The first: "Controversy: an EVM thief was with Naidu". The report runs: IT expert Hari Prasad Vemuru accompanied Chandrababu Naidu to the Election Commission. He had been arrested in 2010 on a charge of stealing an EVM. The Commission wrote to Naidu saying that Vemuru's name was not on the delegation list, so how did he get in? Bhaskar also printed a second item, which Amar Ujala and the other papers did not carry prominently: "Naidu says: Vemuru is not a thief but a whistle-blower". The report runs: in his reply Naidu said that in the meeting with the Election Commissioner they showed that the EVMs are not working properly; he (Vemuru) had been invited for the discussion; he is a whistle-blower and has stolen nothing; and the Commission is talking about everything except the malfunction issue. Look at how the story appears in your own newspaper and you will understand what adulterated news your paper is serving you. Padding out a story is one thing; withholding the useful information is quite another. The newspapers are doing both. See what your paper is doing. Today, in two big newspapers, Dainik Hindustan and Rajasthan Patrika, the Hari Prasad matter is not carried prominently with the main story. In Dainik Hindustan the story is the lead, headlined "Opposition on the offensive again over EVMs", with the flag headline "Unity: Chandrababu Naidu says 21 parties will go to the Supreme Court for the matching of 50% of VVPAT slips". Among the points the paper highlights is that the matter has been challenged in court several times, and that in 1984 the Supreme Court ruled against EVMs because of a technical flaw in the law; in 1988 the Representation of the People Act, 1951 was amended to remove that flaw. In Rajasthan Patrika too the story is the lead, headlined "After first-phase voting, the EVM issue heats up; opposition demands 50% of votes be matched", with the flag headline "Save-the-Constitution press conference: 21 parties will again knock on the Supreme Court's door". Rajasthan Patrika has printed the story best of all, prominently carrying supporting statements from Kapil Sibal of the Congress, Chandrababu Naidu of the TDP, Abhishek Manu Singhvi of the Congress, and Arvind Kejriwal. Among these, Kejriwal said that with the exception of one party everyone is demanding ballot papers. Both these newspapers carry the BJP's reaction.
30 repositories. Examples of subject domains: science, agriculture, technology, the humanities, and more: http://ndl.iitkgp.ac.in/
Gaon ka Khana: read how this 23-year-old, despite losing his mother and brother in childhood, built a business worth lakhs - Jagdish Jat. According to Hiranmoy Gogoi, when he was 15 his brother died in an accident. Three years later his mother also passed away. Her death left him shattered. "Then I thought: there are crores of mothers like mine in this country who, for lack of money, are deprived of essential amenities. That is where my desire to do something for them began." A business started with an initial outlay of Rs 10. Gogoi started his business in 2016. At the beginning he had one gas cylinder and one stove. Gaon ka Khana began with the rice, dal and vegetables kept at home. Gogoi says he had to spend only Rs 10, on salt; the rest of the supplies came from home. At first Gogoi cooked at home and delivered the food to people in the city, and then began cooking home-style food in the city itself. Now providing employment to other young people: Gogoi has started franchising his business. Ghar ka Khana franchises are about to open in Delhi, Ghaziabad and Agra. Those who want a Gaon ka Khana franchise will have to spend Rs 1 lakh; however, those who are unemployed can take the franchise for free. Gogoi says his aim is to create as many jobs as possible. He has also received several awards for this courageous venture.
package pl.touk.nussknacker.engine.benchmarks.interpreter import java.util.concurrent.TimeUnit import cats.effect.IO import org.openjdk.jmh.annotations._ import pl.touk.nussknacker.engine.Interpreter.FutureShape import pl.touk.nussknacker.engine.api._ import pl.touk.nussknacker.engine.build.ScenarioBuilder import pl.touk.nussknacker.engine.graph.EspProcess import pl.touk.nussknacker.engine.spel.Implicits._ import pl.touk.nussknacker.engine.util.SynchronousExecutionContext import scala.concurrent.duration._ import scala.concurrent.{Await, ExecutionContext, Future} /* We don't provide exact results, as they vary significantly between machines As the rule of thumb, results below 100k/s should be worrying, for sync interpretation we expect more like 1m/s */ @State(Scope.Thread) class OneParamInterpreterBenchmark { private val process: EspProcess = ScenarioBuilder .streaming("t1") .source("source", "source") .buildSimpleVariable("v1", "v1", "{a:'', b: 2}") .enricher("e1", "out", "service", "p1" -> "''") //Uncomment to assess impact of costly variables //.buildSimpleVariable("v2", "v2", "{a:'', b: #out, c: {'d','d','ss','aa'}.?[#this.substring(0, 1) == ''] }") //.buildSimpleVariable("v3", "v3", "{a:'', b: #out, c: {'d','d','ss','aa'}.?[#this.substring(0, 1) == ''] }") .emptySink("sink", "sink") private val instantlyCompletedFuture = false private val service = new OneParamService(instantlyCompletedFuture) private val interpreterFuture = new InterpreterSetup[String] .sourceInterpretation[Future](process, Map("service" -> service), Nil)(new FutureShape()(SynchronousExecutionContext.ctx)) private val interpreterIO = new InterpreterSetup[String] .sourceInterpretation[IO](process, Map("service" -> service), Nil) private val interpreterFutureAsync = new InterpreterSetup[String] .sourceInterpretation[Future](process, Map("service" -> service), Nil)(new FutureShape()(ExecutionContext.global)) @Benchmark @BenchmarkMode(Array(Mode.Throughput)) @OutputTimeUnit(TimeUnit.SECONDS) def 
benchmarkFutureSync(): AnyRef = { Await.result(interpreterFuture(Context(""), SynchronousExecutionContext.ctx), 1 second) } @Benchmark @BenchmarkMode(Array(Mode.Throughput)) @OutputTimeUnit(TimeUnit.SECONDS) def benchmarkFutureAsync(): AnyRef = { Await.result(interpreterFutureAsync(Context(""), ExecutionContext.Implicits.global), 1 second) } @Benchmark @BenchmarkMode(Array(Mode.Throughput)) @OutputTimeUnit(TimeUnit.SECONDS) def benchmarkSyncIO(): AnyRef = { interpreterIO(Context(""), SynchronousExecutionContext.ctx).unsafeRunSync() } @Benchmark @BenchmarkMode(Array(Mode.Throughput)) @OutputTimeUnit(TimeUnit.SECONDS) def benchmarkAsyncIO(): AnyRef = { interpreterIO(Context(""), ExecutionContext.Implicits.global).unsafeRunSync() } } class OneParamService(instantlyCompletedFuture: Boolean) extends Service { @MethodToInvoke def methodToInvoke(@ParamName("p1") s: String)(implicit ec: ExecutionContext): Future[String] = { if (instantlyCompletedFuture) Future.successful(s) else Future { s } } }
China Industrial UV Sterilizer, Industrial UV Water Sterilizer, Industrial LED UV Sterilizer, Industrial Food UV Sterilizer Manufacturer. Description: industrial UV sterilizer, industrial UV water sterilizer, industrial LED UV sterilizer, industrial food UV sterilizer. Home > Products > UV Sterilizer > Industrial UV Sterilizer. In the Industrial UV Sterilizer product category we are specialised manufacturers from China: industrial UV sterilizer and industrial UV water sterilizer suppliers / factories, wholesaling high-quality products and carrying out R&D and manufacturing of industrial LED UV sterilizers, with excellent after-sales service and technical support. We look forward to working with you! China industrial UV sterilizer supplier: food-grade 304/316 stainless steel shell, LightSources or Philips UV lamp. The UV water purifier system is intended for use with visually clear water, not cloudy or turbid in colour. Ambient water temperature: 2-45; manganese: 0.05 ppm (0.05 mg/l); UV transmittance: > 75%. Industrial UV sterilizer, industrial UV water sterilizer, industrial LED UV sterilizer, industrial food UV sterilizer, steel UV sterilizer, commercial UV sterilizer, water treatment UV sterilizer, ultraviolet sterilizer.
He holds to the four "roots" (that is, the classical elements) and continues Anaxagoras's line of thought: that, mixing of their own accord, these create everything around us.
\begin{document} \draft \title{Measurement and state preparation via ion trap quantum computing.} \author{S.~Schneider, H.~M.~Wiseman, W.~J.~Munro, and G.~J.~Milburn} \address{Department of Physics, The University of Queensland, St. Lucia, Brisbane, Qld 4072, Australia} \maketitle \begin{abstract} We investigate in detail the effects of a QND vibrational number measurement made on single ions in a recently proposed measurement scheme for the vibrational state of a register of ions in a linear rf trap [C.~D'Helon and G.~J~Milburn, Phys.\ Rev.\ A {\bf 54}, 5141 (1996)]. The performance of such a measurement shows some interesting patterns which are closely related to searching. \end{abstract} \date{\today} \pacs{} \narrowtext \section{Introduction} The realisation that quantum physics enables a computer to solve certain problems more efficiently than a classical computer\cite{ekert} has led to a revision of our understanding of information processing in physical systems. The key feature of quantum physics which lies behind the new developments is, as usual, the superposition principle. In a quantum computer certain properties of a function can be determined, in a single step, by evaluating the function on a coherent superposition of input states. A classical computer, in contrast, would need to evaluate the function on separate inputs one step at a time. While we are a long way from a complete understanding and classification of quantum algorithms, a number of general features may be distinguished. Firstly, some part of the physical computer, the input register, is used to encode the input states. Typically this involves a set of $N$ two-state systems which may each be prepared in an arbitrary superposition state. Each of these systems is said to encode a qubit, as distinct from the single bit encoded in a classical two-state system.
As the product Hilbert space of the input system is of dimension $2^N$, it may be used to encode a massive amount of information in a single large superposition state. Secondly, we identify another part of the physical computer, the output register, to encode the output states. The output register may likewise be a set of two-state systems. Finally, there is some unitary transformation which correlates the states of the output register and the input register in such a way that the output register now encodes the result of some function evaluation of the input. The final result is a large entangled state of the input and output registers. This unitary transformation is a very high dimensional object, and would in general require a very complicated time dependent Hamiltonian. However, we now know that an arbitrary unitary transformation may be approximated by a suitable sequence of simpler transformations involving at most two qubits \cite{ekert}. The scheme described in the previous paragraph has a lot in common with the von Neumann model of a quantum measurement. Instead of the input register we speak of the {\em measured system}, and instead of the output register we speak of the {\em apparatus}. However, in the von Neumann scheme we have traditionally considered only unitary transformations generated by a single time independent Hamiltonian. Having seen the quantum computation scheme, this restriction seems unnecessary. We thus have a novel view of a measurement as an information processing system in which information in the measured system is processed and written to the apparatus. The possibility of choosing a much larger class of unitary transformations to couple the system and apparatus, and of further processing information in the apparatus before read-out, is the motivation for our work.
In a recent paper \cite{dhelon} a method was proposed to make an approximate quantum nondemolition measurement of the number state distribution of the collective vibrational mode of an $N$-qubit ion register. In that paper, a vibrational energy eigenstate, indexed by an integer $n$ (phonon number), was perfectly correlated with the internal states of $N$ trapped ions. A read-out of the internal state of each ion would yield a binary string of length $N$ encoding a vibrational quantum state, and this binary string would occur with the same probability as the corresponding vibrational quantum number in the original centre-of-mass state of the ions. In this article we examine what happens if we read out only a subset of the internal states of the trapped ions. We show that when the vibrational mode is prepared in a coherent state, various measurements on the ions give various (in general unequally weighted) superpositions of coherent states of different phases. In the limit where all ions are measured, the ring of superposed coherent states results in a Fock state \cite{janszky}. We show that this can actually be achieved with a {\em single ion}, rather than $N$ ions, if feedback is allowed in the computation. Detailed investigation of the phonon number measurement in terms of filter functions reveals a close relation to the Walsh function generators recently introduced in the context of searching a quantum database \cite{terhal}. We devote one section to exploring this relation. \section{The model} \label{model} In this section we summarize the model of D'Helon and Milburn \cite{dhelon}. The physical system consists of $N$ two-level ions forming a `crystallized' string confined in a linear rf trap. This is our quantum register. We make the usual assumptions that spontaneous emission from the upper internal level of the ions is negligible and that the ions are cooled into the Lamb-Dicke limit. We also neglect decoherence of the centre-of-mass motion.
The electronic states of the ions are denoted by an integer $k$ \begin{equation} |k\rangle_e = |S_N\rangle_N \otimes |S_{N-1}\rangle_{N-1} \otimes ... \otimes |S_1\rangle_1 \,\, , \end{equation} where $S_i = 0, 1$ and \begin{equation} k = S_N \times 2^{N-1} + S_{N-1} \times 2^{N-2} + ...+ S_1 \times 2^0. \label{k} \end{equation} The binary expansion of $k$ is thus just the string $S_N S_{N-1}...S_1$. In the model of D'Helon and Milburn~\cite{dhelon}, the interaction between the centre-of-mass motion and the internal electronic state of each ion was mediated by a detuned (classical) laser pulse. For interaction times much greater than the vibrational period of the trap, the effective Hamiltonian for the $j$th ion is \begin{equation} H_I^{(j)} = \hbar a^{\dagger}a \chi (\sigma_z^{(j)}+\frac{1}{2}) \hspace{2.5 mm}, \label{ham_far} \end{equation} where $\sigma_{z}^{(j)}$ is the population inversion for the $j$th ion and $\chi=\eta_{CM}^2 \Omega^2/(N\Delta)$. Here $\eta_{CM}$ is the Lamb-Dicke parameter, $\Omega$ is the Rabi frequency for transitions between the two internal states of the ions, and $\Delta$ is the detuning between the exciting laser pulses and the electronic transition. For simplicity we assume that these parameters are the same for all ions. If we choose the durations $\tau_{j}$ of the standing wave pulses to increase geometrically with qubit number as $\chi\tau_j=2^j\pi/2^N$, then the Hamiltonians $H_I^{(j)}$ generate the unitary transformation \begin{equation} U=\exp\left(\frac{-i 2\pi a^{\dagger}a \Upsilon}{2^N}\right) \hspace{2.5 mm}, \label{ut} \end{equation} where the electronic operator $\Upsilon$ provides a binary ordering of the qubits: \begin{equation} \Upsilon=\sum_{j=1}^N (\sigma_z^{(j)} + \mbox{$\frac{1}{2}$}) 2^{j-1} \hspace{2.5 mm}. \end{equation} The eigenvectors of the operator $\Upsilon$ are the electronic number states \begin{equation} \Upsilon=\sum_{k=0}^{2^N-1} k |k \rangle_e \langle k| \hspace{2.5 mm}. 
\end{equation} In the case of an $N$-ion register, the transformation $U$ does not affect the vibrational state; however, it displaces the eigenstates of an electronic operator $\Phi$ which is canonically conjugate to $\Upsilon$. The eigenstates of $\Phi$ are a complementary set of electronic basis states $|\tilde{p}\rangle_e$ given by Fourier transforms of the electronic number states $|k\rangle_e$, \begin{equation} |\tilde{p}\rangle_e=\frac{1}{\sqrt{2^N}}\sum_{k=0}^{2^N-1} \exp(-2\pi i k p 2^{-N}) |k\rangle_e \hspace{2.5 mm}, \end{equation} for $0 \leq p \leq 2^N-1$. We start with all the ions in the ground state $|0\rangle_e$ and an arbitrary initial vibrational state, thus \begin{equation} |\psi_{in} \rangle = \sum_{n=0}^\infty c_n |n\rangle_{vib} \otimes |0\rangle_e \,\, . \end{equation} We apply a Fourier transform ${\cal{F}}^-$ to the register, so that it is in the state \begin{equation} {\cal{F}}^- {} |0 \rangle_e = | \tilde{0} \rangle_e \equiv \frac{1}{\sqrt{2^N}} \sum_{k=0}^{2^N -1} |k\rangle_e \,\, , \end{equation} which is an equal superposition of all $|k\rangle_e$ states, and the state of the system now reads \begin{equation} |\psi_{in}^\prime \rangle = \sum_{n=0}^\infty c_n |n\rangle_{vib} \otimes |\tilde{0} \rangle_e \,\, . \end{equation} As explained in Ref.~\cite{dhelon} this can easily be achieved using $\pi/2$ laser pulses, one for each of the $N$ ions. We now apply the unitary transformation in Eq.~(\ref{ut}). After this interaction we arrive at an entangled state of the vibrational mode and the register \begin{equation} |\psi_{out}^\prime \rangle = \sum_{n=0}^\infty \sum_{p=0}^{2^N -1} c_{p+ n 2^N} |p + n 2^N\rangle_{vib} \otimes |\tilde{p}\rangle_e \,\, , \end{equation} where the register is in a superposition of Fourier transformed states. The final step to get the desired result is thus to apply the inverse Fourier transform ${\cal{F}}^+$ to the register, where ${\cal{F}}^+$ satisfies ${\cal{F}}^+ {\cal{F}}^- = 1$.
We get \begin{equation} \label{entangle} |\psi_{out}\rangle = {\cal{F}}^+ |\psi_{out}^\prime \rangle = \sum_{n=0}^\infty \sum_{k=0}^{2^N -1} c_{k+n2^N} |k + n2^N\rangle_{vib} \otimes |k\rangle_e \,\, , \end{equation} which is the basis for further investigations. \section{Superpositions of coherent states on a circle} In this section we assume that the vibrational state of the ions is a coherent state \begin{equation} |\psi \rangle _{vib} = |\alpha\rangle_{vib} = \sum_{n=0}^\infty c_n |n\rangle_{vib} = \sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n !}} e^{-\frac{1}{2} |\alpha|^2} |n\rangle_{vib} \,\, . \end{equation} Thus the final state after all the transformations described in Section \ref{model} is \begin{equation} |\psi_{out} \rangle = \sum_{n=0}^\infty \sum_{k=0}^{2^N -1} \frac{\alpha^{k+n2^N}}{\sqrt{(k+n2^N) !}} e^{-\frac{1}{2} |\alpha|^2} |k + n2^N\rangle_{vib} \otimes |k\rangle_e \,\, . \end{equation} What happens if we perform a measurement on, say, the first ion? From the definition of $k$, Eq.~(\ref{k}), we see that the first ion (i.e.\ ion number one) determines whether $k$ is even or odd. So measuring ion number one projects the register into a superposition of either even or odd number states, depending on the outcome of the measurement. Since the register is highly entangled with the vibrational state of the ions, this measurement also affects that state: it projects the former coherent state into either an even or an odd Schr\"odinger cat state.
This can be seen quite easily by writing the entangled state as a sum of states with even and odd $k$ \begin{eqnarray} |\psi_{out} \rangle & = & \sum_{n=0}^\infty \sum_{k=0}^{2^{N-1} -1} \frac{\alpha^{2k+n2^N}}{\sqrt{(2k+n2^N) !}} e^{-\frac{1}{2} |\alpha|^2} |2k + n2^N\rangle_{vib} \otimes |2k\rangle_e \nonumber \\ & & + {} \sum_{n=0}^\infty \sum_{k=0}^{2^{N-1} -1} \frac{\alpha^{2k+1+n2^N}}{\sqrt{(2k+ 1+n2^N) !}} e^{-\frac{1}{2} |\alpha|^2} |2k+1 + n2^N\rangle_{vib} \otimes |2k+1\rangle_e \,\, . \end{eqnarray} If we just want to produce a Schr\"odinger cat state, we only need one ion: $N=1$, and the even and odd cats are then entangled with the lower and upper internal level of this ion, respectively. Otherwise we have to disentangle the states again, which can be done by applying ${\cal{F}}^+ \hat{U}^{-1} {\cal{F}}^-$ to the measured state. So what happens if we do not stop after the first ion but go on measuring the second, the third, and so on? We simply project onto more and more superpositions of coherent states aligned around a circle of radius $|\alpha|$. Such superpositions are investigated in \cite{janszky}; in the limit where all ions are measured they lead to a Fock state in the vibrational motion. For example, if the first two ions are measured and the result $S_{2}=S_{1}=0$ is obtained, we project the register into a superposition of $|4k\rangle_e$ states and the resulting vibrational state is \begin{equation} |\psi_{out}\rangle_{vib} \propto |\alpha\rangle + |-\alpha\rangle + |i\alpha\rangle + |-i\alpha \rangle \,\, . \label{state1} \end{equation} The Wigner function for this state is plotted in Fig.~\ref{fig1}. These results are closely related to work done by Brune {\em et al.} \cite{brune90,brune92}, where the role of the ions is played by atoms and the phonons by photons in a cavity. It is interesting to note that the above measurement can in fact be done with a {\em single} ion, provided we allow quantum-limited feedback in our computation.
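The chain of transformations ${\cal F}^-$, $U$, ${\cal F}^+$ leading to the entangled state of Eq.~(\ref{entangle}), and the mod-4 projection behind Eq.~(\ref{state1}), can both be checked numerically in a truncated Hilbert space. The following is a minimal numpy sketch; the register size $N$, the truncation, and $\alpha$ are arbitrary illustrative choices, not values from the text.

```python
import numpy as np
from math import factorial

N, cutoff = 2, 16          # illustrative: 2 ions, vibrational space truncated at 16
R = 2 ** N
rng = np.random.default_rng(0)

# arbitrary normalized vibrational amplitudes c_n
c = rng.normal(size=cutoff) + 1j * rng.normal(size=cutoff)
c /= np.linalg.norm(c)

# register Fourier transform F^-; F^+ is its inverse
k = np.arange(R)
Fm = np.exp(-2j * np.pi * np.outer(k, k) / R) / np.sqrt(R)
Fp = Fm.conj().T

# state stored as amplitude array psi[n, k] for |n>_vib (x) |k>_e
psi = np.zeros((cutoff, R), dtype=complex)
psi[:, 0] = c                                     # all ions in |0>_e
psi = psi @ Fm.T                                  # F^- on the register
n = np.arange(cutoff)
psi *= np.exp(-2j * np.pi * np.outer(n, k) / R)   # U = exp(-2 pi i a^dag a Upsilon / 2^N)
psi = psi @ Fp.T                                  # F^+ on the register

# Eq. (entangle): amplitude c_n ends up paired with register state |n mod 2^N>_e
expected = np.zeros_like(psi)
expected[n, n % R] = c
assert np.allclose(psi, expected)

# keeping only k = 0 (mod 4) from a coherent state gives the four-component
# superposition |alpha> + |-alpha> + |i alpha> + |-i alpha>
def coherent(a, m=cutoff):
    return np.array([np.exp(-abs(a) ** 2 / 2) * a ** j / np.sqrt(factorial(j))
                     for j in range(m)])

alpha = 1.3
proj = np.where(n % 4 == 0, coherent(alpha), 0)
cat = sum(coherent(1j ** s * alpha) for s in range(4)) / 4
assert np.allclose(proj, cat)
```

The second check works coefficient by coefficient: $\frac{1}{4}\sum_s i^{sn} = 1$ exactly when $n \equiv 0 \pmod 4$ and vanishes otherwise.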
Instead of the simultaneous application of the unitary operator (\ref{ut}) on all ions (which requires $N$ different laser pulses on the $N$ ions), one can apply $N$ different laser pulses to the same ion. The scheme works as follows. The ion begins in the ground state $|0\rangle_e$. We apply the Fourier transform operator to the ion, which is simply a $\pi/2$ pulse: \begin{equation} {\cal F}^- |0\rangle_e = |\tilde{0}\rangle_e = (1/\sqrt{2}) ( |0\rangle_e + |1\rangle_e ) \,\, . \end{equation} We now transform with the unitary operator \begin{equation} \label{UU} \hat{U}_j = \exp[-i2\pi \hat{a}^\dagger \hat{a} (\hat\sigma_z +1/2) 2^{-N+j-1}], \end{equation} with $j=1$ initially. Then the $-\pi/2$ pulse is applied, producing the transformation ${\cal F}^+$. Next, the internal state of the ion is read out in the $|0\rangle, |1\rangle$ basis. The result is the least significant digit of $k$ (which determines whether $k$ is even or odd). If the result is $1$ we apply the transform ${\cal F}^+$, if $0$ we apply ${\cal F}^-$. It is these different unitary transformations, controlled by the result of a measurement, which constitute the feedback in the computation. We now move on to the next digit by doubling the interaction time of the pulse in Eq.~(\ref{UU}), setting $j=2$. Following ${\cal F}^-$ we read out the next-to-least significant digit of $k$. The rest follows analogously until we reach $j=N$, where $N$ in this case is completely arbitrary (there is only one ion!). It might be thought that the measurement would be much quicker with simultaneous pulses on $N$ ions rather than consecutive pulses on a single ion, but this is in fact not the case, as can be seen as follows. Assuming that the $\pi/2$ pulses can be achieved in negligible time, the time of the measurement is dominated by the time taken to apply the unitary transformations (\ref{ut}) or (\ref{UU}).
As pointed out above, to get the desired unitary transformation the interaction times have to be chosen so that \begin{equation} \chi \tau_j = \frac{2^j \pi}{2^N} \,\, . \end{equation} If the interactions are consecutive (whether on $1$ ion or $N$ ions), the total interaction time is \begin{eqnarray} t_{total} & = & \sum_{j=1}^N \tau_j = \sum_{j=1}^N \frac{2^j \pi}{2^N \chi} = \sum_{j=1}^N \frac{2^j \pi N \Delta } {2^N (\eta_{CM} \Omega )^2} \nonumber \\ & = & \frac{\pi N\Delta}{2^N (\eta_{CM} \Omega)^2} 2 \frac{1-2^N} {1-2} = \frac{2 \pi N \Delta}{(\eta_{CM} \Omega)^2}(1 - 2^{-N}) \label{simN1}\,\, . \end{eqnarray} If the interactions are simultaneous, the total interaction time is just the longest single interaction time \begin{equation} \label{simN2} t_{total} = \tau_{N} = \frac{ \pi N \Delta}{(\eta_{CM} \Omega)^2} \end{equation} which is at most a factor of two smaller than the interaction time for consecutive pulses. From this calculation we conclude that the QND measurement of D'Helon and Milburn can be performed perfectly well with one ion, which should make experimental implementation much easier. However, it is worth noting that there are some experiments which one can do with the entangled state (\ref{entangle}) that are not possible with a single ion. Assume that we do not measure the internal state of the first ion in the register, but perform our measurement instead on the second or third or any other ion, and then disentangle the remaining ions by applying ${\cal F}^{+}\hat{U}^{-1}{\cal F}^{-}$. The result is not as simple as an equally-weighted multiple-component cat state, but it has an interesting interference structure nevertheless. Essentially, by measuring one ion (whichever one) we set half of the coefficients $c_{k+n2^N}$ to zero. For the first ion every second coefficient is set to zero and the result is the standard cat state. For the second ion we get a pattern of $2^1$ states set to zero alternating with $2^1$ states not set to zero.
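Both the bit-by-bit feedback readout and the geometric-sum timing argument can be simulated classically. The sketch below uses a Kitaev-style ordering that applies the longest pulse first and corrects the Ramsey phase for the bits already found; this ordering convention is an assumption of the illustration and is not literally the pulse sequence described above. For a Fock-state input the fringe probabilities are exactly $0$ or $1$, so every $m < 2^N$ is recovered deterministically, and the consecutive-to-simultaneous time ratio is $2(1-2^{-N}) \leq 2$.

```python
import math

def read_fock_number(m, N):
    """Adaptive single-ion readout of the bits of a Fock number m < 2**N.

    Each round: pi/2 pulse, conditional phase 2*pi*m*mult/2**N on the upper
    level, a feedback phase correction for the bits already known, then a
    -pi/2 pulse and a measurement (Ramsey fringe: P_excited = sin^2(phase/2)).
    """
    known = 0
    for step in range(N):
        mult = 2 ** (N - 1 - step)                     # longest pulse first
        phase = 2 * math.pi * (m - known) * mult / 2 ** N
        bit = 1 if math.sin(phase / 2) ** 2 > 0.5 else 0
        known += bit * 2 ** step                       # accumulate digits of m
    return known

assert all(read_fock_number(m, 5) == m for m in range(32))

# total interaction time, in units of pi*N*Delta/(eta_CM*Omega)^2
# (cf. Eqs. (simN1) and (simN2); simultaneous time = 1 in these units)
for N in range(1, 12):
    t_consec = sum(2 ** j / 2 ** N for j in range(1, N + 1))
    assert abs(t_consec - 2 * (1 - 2 ** -N)) < 1e-12   # geometric sum
    assert t_consec <= 2
```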
In general, measuring the $i^{\mbox{th}}$ ion creates a pattern consisting of $2^{i-1}$ states set to zero alternating with the same number of states not set to zero. For example, if we measure the second ion to be in the state $|0\rangle$, the register is in a superposition of states of the form $|4k \rangle_e$ and $|4k + 1\rangle_e$ ($k=0, 1, 2, ... 2^{N-2} -1$) and in this case we project the vibrational state into \begin{equation} |\psi_{out}\rangle_{vib} \propto 2|\alpha\rangle + (1-i)|i\alpha\rangle + (1+i)|-i\alpha\rangle \,\, . \label{state2} \end{equation} The Wigner function for this state is plotted in Fig.~\ref{fig2}. \section{Searching for a Fock state} The considerations at the end of the previous section have a close relation to classical searching, as can be seen in \cite{terhal}. An unsorted database which has one marked entry (i.e.\ is of Hamming weight one) and contains $2^N$ items can be searched classically with bit strings of the same length but Hamming weight $2^{N-1}$. The answer of the database $y$ to the query $g_k$, where $g_k$ is the Hamming weight $2^{N-1}$ bit string which contains $2^k$ zeros alternating with $2^k$ ones, is a bit \begin{equation} a_k(g_k,y) = g_k \cdot y = \left( \sum_{i=1}^n (g_k)_i y_i \right) \mbox{mod} \,\,\, 2 \,\, , \end{equation} where $(g_k)_i$ and $y_i$ are the $i^{\mbox{th}}$ bits of $g_k$ and $y$ \cite{terhal}. Classically, the answers of the database to $N$ queries with all the possible Hamming weight $2^{N-1}$ bit strings of this form uniquely determine the location of the marked entry. This location $l$ is just the number we get after converting the ordered answers (i.e.\ a binary string of length $N$) into decimal form, \begin{equation} l = a_{N-1} \times 2^{N-1} + a_{N-2} \times 2^{N-2} + ... + a_0 \times 2^0 \,\, . \end{equation} The bit strings $g_k$ play an essential role in the recently introduced algorithm for searching a database \cite{terhal}.
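The classical query scheme just described is easy to verify in a few lines. The key observation is that the answer $a_k = g_k \cdot y$ is simply bit $k$ of the marked location, so the $N$ answers spell out the location in binary. In this sketch $N$ and the marked position are arbitrary illustrative values.

```python
import numpy as np

N = 4
L = 2 ** N                  # database size = length of each query string

def g(k):
    """Generator g_k: 2**k zeros alternating with 2**k ones, length 2**N."""
    i = np.arange(L)
    return (i >> k) & 1     # position i is in a "ones" block iff bit k of i is 1

marked = 11                         # hypothetical location of the marked entry
y = np.zeros(L, dtype=int)
y[marked] = 1                       # database string of Hamming weight one

assert all(g(k).sum() == L // 2 for k in range(N))   # Hamming weight 2**(N-1)

answers = [int(g(k) @ y) % 2 for k in range(N)]      # a_k = g_k . y mod 2
location = sum(a * 2 ** k for k, a in enumerate(answers))
assert location == marked
```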
In this work the authors search a database for one marked entry with a single query which consists of a superposition of Walsh functions. The Walsh functions are all possible ``XOR'' combinations of the $g_k$s and the all-zeros bit string; thus the $g_k$s are the generators of the Walsh functions. Since in \cite{terhal} the answer to each Walsh function is stored in the sign of the Walsh function in the superposition state, further data processing is required to transform the information into $N$ bits. Now in our scheme we assume that the vibrational motion is in an unknown Fock state $|m\rangle_{vib}$ and our only knowledge about it is that $m \leq 2^N-1$. We choose to implement the measurement using $N$ ions in the trap. The state (\ref{entangle}) then reads \begin{equation} |\psi_{out}\rangle = \sum_{k=0}^{2^N -1} c_k |k\rangle_{vib} \otimes |k\rangle_e \,\, , \end{equation} where in this special case $c_k = \delta_{k,m}$ and so \begin{eqnarray} |\psi_{out}\rangle & = & |m \rangle_{vib} \otimes |m \rangle_e \nonumber \\ & = & |m \rangle_{vib} \otimes |a_{N-1}(g_{N-1}, m)\rangle_{N} |a_{N-2}(g_{N-2}, m)\rangle_{N-1} ... |a_{0}(g_{0},m)\rangle_{1} \,\, . \end{eqnarray} Measuring the register directly gives the Fock state we were looking for, since each ion gives the answer of the database to a query with a specific generator. This is different from other search schemes \cite{terhal}, where post-processing is often required to obtain the information. How fast can this measurement be done? The preparation of the superposition takes $O(\log(n))$ steps \cite{grover}. The interaction presented here also takes a time of order $O(\log(n))$. This is because both equation (\ref{simN1}) and equation (\ref{simN2}) imply a total time scaling as \begin{equation} t_{total} \sim N, \end{equation} which is of order $\log(n)$. The inverse Fourier transform, however, takes $O((\log(n))^2)$ steps \cite{ekert} and hence is the limiting step.
So all in all our measurement scheme takes $O((\log(n))^2)$ steps. \section{Discussion} In this paper we have investigated a measurement scheme for the determination of the vibrational quantum number of the lowest normal mode of $N$ ions confined in a trap. The vibrational state of the ions becomes entangled with a state in the product space of the internal electronic states. This product state is defined by an integer encoded as a binary string. The measurement scheme depends on the ability to perform a unitary Fourier transform operation on the product space of the internal electronic states of all the ions. This is a unitary transformation of very large dimension ($2^N$), but it can be done if we regard the system as an ion trap quantum computer. This enables a significant generalisation of the standard von Neumann measurement model. However, we point out that the same measurement can be accomplished with a single ion, provided that subsequent unitary transformations are selected on the basis of past measurement results. This is a kind of feedback or adaptive scheme. The measurement readout is done by reading out the internal electronic state of each ion in some predetermined order. If the excited state encodes a 1 and the ground state encodes a 0, the result of the measurement is a binary string which encodes an integer $n$. This result will occur with the same probability as the occurrence of the normal mode vibrational quantum number $n$ in the initial centre-of-mass mode. The resulting conditional state of the centre-of-mass is then a Fock state $|n\rangle_{vib}$. If instead of reading out the entire binary string we only read out a suitable partition, by making a state determination on a subset of ions, a number of different vibrational states can be prepared.
We have shown for the case of an initial coherent state of the normal mode vibration that the resulting conditional states are in fact superpositions of coherent states on a ring of fixed radius in the phase space of the vibrational motion. Determining selected partitions of a binary string is very similar to methods used in binary search routines. We have shown that our measurement scheme can in fact be viewed as a kind of quantum search algorithm for a particular integer $n$ encoding a vibrational normal mode state. \section*{Acknowledgments} S.~S. gratefully acknowledges financial support from a ``DAAD Doktorandenstipendium im Rahmen des gemeinsamen Hochschulsonderprogramms III von Bund und L\"andern'' and from the Center of Laser Science. \begin{thebibliography}{99} \bibitem{ekert} A.~Ekert and R.~Jozsa, Rev. Mod. Phys. {\bf 68}, 733 (1996). \bibitem{dhelon} C.~D'Helon and G.~J.~Milburn, Phys. Rev. A {\bf 54}, 5141 (1996); C.~D'Helon, {\em Quantum-Limited Measurements on Trapped Laser-Cooled Ions}, PhD thesis, The University of Queensland, 1997. \bibitem{janszky} J.~Janszky {\em et al.}, Phys. Rev. A {\bf 48}, 2213 (1993). \bibitem{terhal} B.~M.~Terhal and J.~A.~Smolin, submitted to Phys. Rev. A, Report No. quant-ph/9705041. \bibitem{brune90} M.~Brune {\em et al.}, Phys. Rev. Lett. {\bf 65}, 976 (1990). \bibitem{brune92} M.~Brune {\em et al.}, Phys. Rev. A {\bf 45}, 5193 (1992). \bibitem{grover} L.~K.~Grover, Phys. Rev. Lett. {\bf 79}, 325 (1997). \end{thebibliography} \begin{figure} \caption{Wigner function for the state $|\psi_{out}\rangle_{vib}$ of Eq.~(\ref{state1}).} \label{fig1} \end{figure} \begin{figure} \caption{Wigner function for the state $|\psi_{out}\rangle_{vib}$ of Eq.~(\ref{state2}).} \label{fig2} \end{figure} \end{document}
An inspiration: with corona and signal problems getting in the way, a teacher made the children members of her family. Udaipur (Lahaul-Spiti): When the lockdown imposed due to corona began to disrupt children's studies, JBT teacher Chhering Dolma took five children from three villages into her own home. Along with their studies, she is also arranging the children's food and lodging. She has had these children enrolled in classes one to five at the Meh school; she herself is posted at the Government Primary School, Meh. Chhering Dolma is married into Rarik village. Towards education and children… Himachal border adjoining red-zone Leh-Ladakh sealed. Keylong (Lahaul-Spiti): After corona cases rose in Jammu-Kashmir and Leh-Ladakh, the Lahaul-Spiti district administration sealed the border. KK Saroch said that, with the exception of the army, the Lahaul-Spiti administration will not allow outsiders arriving from the Leh-Ladakh side to enter the valley. The district administration will send a letter to the DCs of Leh and Kargil on Tuesday. Leh-Ladakh has been declared a red zone for corona cases. Accordingly, once the Manali-Leh road opens, permission for the movement of ordinary people from the Leh and Kargil-Zanskar side… Shiven again proves his mettle at the national MTB cycle race. Keylong (Lahaul-Spiti): Shiven of Lahaul has once again proved his talent at the 16th national MTB (mountain biking) cycle race, winning one silver and one bronze medal. Shiven, from Chuling village in the tribal district of Lahaul-Spiti, won silver with a fine performance in the elite cross-country Olympic format at the national MTB competition in Haldwani, Uttarakhand, and took bronze in the elite individual event. Shiven is the first Indian MTB competitor to place in the top ten at an Asian MTB competition… Keylong/Koksar (Lahaul-Spiti): Snow clearance has begun on the Manali-Leh highway, the country's highest. Three companies from the Border Roads Organisation (BRO) Deepak and Himank projects have joined the operation. This year the snow-clearing work was started ten days ahead of schedule. The BRO's Deepak
project's chief engineer M.S. Baghi, after formal worship on Thursday near Rahala Falls, four kilometres beyond Gulaba, sent the machines onward towards Rohtang to clear the snow. Lahaul patient carried to the PHC through two and a half feet of snow; now awaiting a helicopter for Kullu. Udaipur (Lahaul-Spiti): Villagers carried a woman patient from Karpat village in the extremely remote Miyar valley of tribal Lahaul through two to two and a half feet of snow to the Tingret PHC. The woman is under treatment at the Tingret primary health centre. Her family is now awaiting a helicopter to lift the patient from there; when helicopter flights resume from the Tingret helipad, she will be lifted to Kullu. Note that heavy snowfall had cut off the Miyar valley's link with the Udaipur subdivision… Helicopter flight schedule released for Lahaul. Udaipur (Lahaul-Spiti): After five days, the state government has again released a helicopter flight schedule for Lahaul's Bharing helipad. According to the information, on Tuesday the first flight will run Bhuntar-Bharing-Bhuntar, the second Bhuntar-Udaipur-Bhuntar, and the third Bhuntar-Tandi (DIET)-Bhuntar. Earlier, a flight to Bharing helipad had been fixed for 6 February, but due to bad weather no flights to Bharing could take place on 6 and 7 February. Passengers therefore hoped that, as per the fixed schedule, on the 9th… In the Lahaul valley since November 2019 the maximum temperature has been minus six degrees and the minimum minus 30 degrees; the longest spell of cold recorded in two and a half decades. Keylong (Lahaul-Spiti): The Lahaul valley is experiencing its longest spell of cold in two and a half decades. Since November 2019, the valley's maximum as well as minimum temperature has stayed below zero. Previously such cold lasted from a week to 15 days, but this time its long duration has broken a two-and-a-half-decade record. Since November 2019 a maximum of minus six degrees and a minimum of minus 30 degrees have been recorded. This year in Kaza and Koksar… Shimla: More than 2.25 lakh applications have come in for the 1,156 patwari posts to be filled in Himachal. Monday is the last date for applications.
Many applications that arrived by post in the districts have not even been opened yet; the number may well reach 2.5 lakh. The applicants include holders of PhD, MSc, MBA, MCA, BDS, BTech, BCom, BSc, MA and BA degrees. In Kangra district more than 400 candidates are competing for each post, and in Mandi more than 200. Kangra district… A decade-old record broken in Himachal: this much rain in 24 hours. Shimla: Himachal's rainfall record standing since 2011 has been broken, with 102.5 mm of rain recorded in the state in twenty-four hours. On 14 August 2011, 74 mm of rain had fallen in twenty-four hours. Bad weather has been forecast across the state until 24 August. Manmohan Singh, director of the Meteorological Centre Shimla, said that from 1 June to 18 August 2019 the state received 530.7 mm of rain, 3 per cent below normal. Very heavy rain in these areas… Balvir Singh to receive the President's Police Medal. Keylong: Balvir Singh of Lahaul-Spiti, a second-in-command officer serving in the Border Security Force, will be honoured with the President's Police Medal. Last year in the forests of Kanker, Chhattisgarh, Naxalites ambushed a BSF party led by Balvir Singh. Displaying indomitable courage, the BSF men killed eight Naxalites in the encounter. In 1995 Balvir Singh was appointed an assistant commandant in the BSF through the Union Public Service Commission. He has served in Punjab, Jammu-Kashmir, West Bengal,
\begin{document} \title{Interpretable Sensitivity Analysis for Balancing Weights\thanks{We would like to thank Kevin Guo and Skip Hirshberg for useful discussion and comments. This research was supported in part by the Hellman Family Fund at UC Berkeley, the Institute of Education Sciences, U.S. Department of Education, through Grant R305D200010, and the Two Sigma PhD fellowship. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.}} \begin{abstract} \singlespacing Assessing sensitivity to unmeasured confounding is an important step in observational studies, which typically estimate effects under the assumption that all confounders are measured. In this paper, we develop a sensitivity analysis framework for balancing weights estimators, an increasingly popular approach that solves an optimization problem to obtain weights that directly minimize covariate imbalance. In particular, we adapt a sensitivity analysis framework using the percentile bootstrap for a broad class of balancing weights estimators. We prove that the percentile bootstrap procedure can, with only minor modifications, yield valid confidence intervals for causal effects under restrictions on the level of unmeasured confounding. We also propose an amplification to allow for interpretable sensitivity parameters in the balancing weights framework. We illustrate our method through extensive real data examples. \end{abstract} \onehalfspacing \section{Introduction} \label{sec:intro} Assessing the sensitivity of results to violations of causal assumptions is a critical part of the workflow for causal inference with observational studies. In such studies, the key assumption that all confounders are measured, sometimes known as \emph{ignorability} or \emph{unconfoundedness}, rarely holds in practice. A sensitivity analysis seeks to determine the magnitude of unobserved confounding required to alter a study's findings.
If a large amount of confounding is needed, then the study is robust, enhancing its reliability. In this paper, we develop a sensitivity analysis framework for \emph{balancing weights estimators}. Building on classical methods from survey calibration, these estimators find weights that minimize covariate imbalance between a weighted average of the observed units and a given distribution, such as re-weighting control units to have a similar covariate distribution to the treated units. Balancing weights have become increasingly common within causal inference, with better finite sample properties than traditional inverse propensity score weighting (IPW). See \citet{benmichael_balancing_review} for a recent review. Our proposed sensitivity analysis framework adapts the percentile bootstrap sensitivity analysis that \citet{zhao2019sensitivity} develop for traditional IPW. Specifically, for a specified sensitivity parameter, we compute the upper and lower bounds of our estimator for each bootstrap sample, and then form a confidence interval using percentiles across bootstrap samples. We prove that this approach yields valid confidence intervals for our proposed sensitivity analysis procedure over a broad class of balancing weights estimators. To make the sensitivity analysis more interpretable, we propose a new amplification that expresses the error from confounding in terms of: (1) the imbalance in observed and unobserved covariates; and (2) the strength of the relationship between the outcome and the imbalanced covariates. Researchers can then relate the results of our amplification to estimates from observed covariates. We demonstrate this approach via a numerical illustration and via several applications. 
\section{Background, notation, and review} \label{sec:framework} \subsection{Setup and sensitivity model} We consider an observational study setting with independently and identically distributed data $(Y_i, X_i, Z_i)$, $i \in \left\{ 1,\ldots,n \right\}$, drawn from some joint distribution $P(\cdot)$ with outcome $Y_i\in\mathbb{R}$, covariates $X_i \in \ensuremath{\mathcal{X}}$, and treatment assignment $Z_i \in \{0,1\}$. We posit the existence of \textit{potential outcomes}: the outcome had unit $i$ received the treatment, $Y_i(1)$, and the outcome had unit $i$ received the control, $Y_i(0)$ \citep{neyman1923, rubin1974estimating}. We assume stable treatment and no interference between units \citep{rubin1980randomization}, so the observed outcome is $Y_i=(1-Z_i)Y_i(0)+Z_iY_i(1)$. Our primary estimand of interest is the \textit{Population Average Treatment Effect} (PATE): \begin{align} \label{eq:ate} \tau = \mathbb{E}[Y(1)-Y(0)] = \mu_1 - \mu_0, \end{align} where $\mu_1 = \mathbb{E}[Y(1)]$ and $\mu_0 = \mathbb{E}[Y(0)]$. To simplify the exposition, we will focus on estimating $\mu_1$; estimating $\mu_0$ is symmetric. We consider alternative estimands in Appendix \ref{sec:att}. A common set of identification assumptions in this setting, known as \textit{strong ignorability}, assumes that conditioning on the covariates $X$ sufficiently removes confounding between treatment $Z$ and the potential outcomes $Y(0), Y(1)$, and that treatment assignment is not deterministic given $X$ \citep{rosenbaum1983central}. \begin{assumption}[Ignorability] \label{a:ignore} $Y(0),Y(1) \indep Z \mid X$. \end{assumption} \begin{assumption}[Overlap] \label{a:overlap} The \emph{propensity score} $\pi(x) \equiv P(Z = 1 \mid X= x)$ satisfies $0<\pi(x)<1$ for all $x \in \ensuremath{\mathcal{X}}$. 
\end{assumption} \noindent Under Assumptions \ref{a:ignore} and \ref{a:overlap}, we can non-parametrically identify $\mu_1$ using only the outcomes from units receiving treatment, \begin{align} \label{eq:ipw_identity} \mu_1 = \ensuremath{\mathbb{E}}\left[\frac{Z}{\pi(X)}\, Y \right]. \end{align} In an observational setting, the researcher does not know the \emph{true} treatment assignment mechanism, $\pi(x, y) \equiv P(Z = 1 \mid X = x, Y(1) = y)$, which in general can depend on \emph{both} the covariates $X$ and the potential outcomes $Y(1)$ and $Y(0)$. A rich literature assesses the sensitivity of estimates to violations of the ignorability assumption. This approach dates back at least to \citet{cornfield1959smoking}, who conducted a formal sensitivity analysis of the effect of smoking on lung cancer. More recent examples of sensitivity analysis include \citet{rosenbaum1983assessing}, \citet{Rosenbaum2002}, \citet{vanderweele2017sensitivity}, \citet{franks2019flexible}, and \citet{Cinelli2020}. See \citet{hong2020sensitivity} for a recent discussion of weighting-based sensitivity methods. Our proposed approach builds most directly on that of \citet{zhao2019sensitivity}, who use the percentile bootstrap and linear programming to perform a sensitivity analysis for traditional IPW. Following their setup, we split the problem into two parts, sensitivity for the mean of the treated potential outcomes and sensitivity for the mean of the control potential outcomes; without loss of generality, we consider the mean of the treated potential outcomes. Since unbiased estimation of $\mathbb{E}[Y(1)]$ requires knowledge only of $\pi(x,y) = P(Z=1 \mid X = x, Y(1) = y)$ rather than the full propensity score that also conditions on $Y(0)$, we can rewrite Assumption \ref{a:ignore} as $\pi(x, y) = \pi(x)$.
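The inverse-propensity identity above is easy to check by simulation. The sketch below is a minimal Monte Carlo illustration with an arbitrary logistic propensity and linear outcome model (all hypothetical choices): the Horvitz-Thompson average over treated units recovers the mean of $Y(1)$ even though control outcomes are never used.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

X = rng.normal(size=n)
pi = 1 / (1 + np.exp(-X))          # propensity score P(Z = 1 | X)
Z = rng.binomial(1, pi)
Y1 = 2 * X + rng.normal(size=n)    # potential outcome under treatment
Y = Z * Y1                         # only treated outcomes are observed

# Horvitz-Thompson form of the identity: E[Z * Y / pi(X)] = E[Y(1)]
ht = np.mean(Z * Y / pi)
assert abs(ht - Y1.mean()) < 0.05
```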
For details on combining sensitivity analyses for $\mathbb{E}[Y(1)]$ and $\mathbb{E}[ Y(0)]$ into a single sensitivity analysis for the ATE, see Section 5 from \citet{zhao2019sensitivity}. We now introduce a sensitivity model where we relax the ignorability assumption so that the odds ratio between the two conditional probabilities $\pi(x)$ and $\pi(x,y)$ is bounded. \begin{assumption}[Marginal sensitivity model] \label{a:marginal_sens_model} For $\Lambda \geq 1$, the true propensity score satisfies \[ \pi(x, y) \in \ensuremath{\mathcal{E}}(\Lambda) \equiv \left\{\pi(x, y) \in (0,1): \Lambda^{-1} \leq \text{OR}(\pi(x), \pi(x,y)) \leq \Lambda \right\}, \] where $\text{OR} (p_1, p_2) = \frac{p_1/(1-p_1)}{p_2/(1-p_2) }$ is the odds ratio. \end{assumption} \noindent Here, $\Lambda$ is a sensitivity parameter, quantifying the difference between the true propensity score $\pi(x, y)$ and the probability of treatment given $X = x$, $\pi(x)$; when $\Lambda = 1$, the two probabilities are equivalent, and Assumption \ref{a:ignore} holds. If, for example, $\Lambda = 2$, Assumption \ref{a:marginal_sens_model} constrains the odds ratio between $\pi(x)$ and $\pi(x,y)$ to be between $\frac{1}{2}$ and 2. Again following \citet{zhao2019sensitivity}, we will consider an equivalent characterization of the set $\ensuremath{\mathcal{E}}(\Lambda)$ in terms of the log odds ratio $h(x,y) = \log \text{OR}(\pi(x), \pi(x,y))$: \begin{equation} \label{eq:log_sens_model} \ensuremath{\mathcal{H}}(\Lambda) = \left\{h : \ensuremath{\mathcal{X}} \times \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}} : \|h\|_\infty \leq \log \Lambda \right\}, \end{equation} where $\|h\|_\infty = \sup_{x\in\ensuremath{\mathcal{X}},y\in\ensuremath{\mathbb{R}}}|h(x,y)|$ is the supremum norm. 
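To see concretely what the marginal sensitivity model allows, one can compute the range of true propensities $\pi(x,y)$ compatible with a nominal $\pi(x)$ at a given $\Lambda$, using the fact that $1/\pi^{(h)} = 1 + (1/\pi - 1)e^{h}$ with $|h| \leq \log\Lambda$. The $\pi(x)$ values below are arbitrary illustrative numbers.

```python
import numpy as np

Lam = 2.0
pi_x = np.array([0.1, 0.3, 0.5, 0.7])   # nominal propensities pi(x)

# shifted inverse propensity: 1/pi_h = 1 + (1/pi - 1) * exp(h), |h| <= log(Lam)
inv_hi = 1 + (1 / pi_x - 1) * Lam       # h = +log(Lam)
inv_lo = 1 + (1 / pi_x - 1) / Lam       # h = -log(Lam)
pi_lo, pi_hi = 1 / inv_hi, 1 / inv_lo   # implied range of pi(x, y)

# the odds ratio between pi(x) and each endpoint equals Lam exactly
odds = lambda p: p / (1 - p)
assert np.allclose(odds(pi_x) / odds(pi_lo), Lam)
assert np.allclose(odds(pi_hi) / odds(pi_x), Lam)
```

For example, with $\Lambda = 2$ a nominal $\pi(x) = 0.5$ is compatible with true propensities anywhere in $[1/3, 2/3]$.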
For a particular $h \in \ensuremath{\mathcal{H}}(\Lambda)$, we can write the \emph{shifted inverse propensity score} as $\frac{1}{\pi^{(h)}(x,y)} = 1 + \left( \frac{1}{\pi(x)} - 1\right)e^{h(x, y)}$, and the \emph{shifted estimand} as \begin{equation} \label{eq:shifted_estimand} \mu_1^{(h)} = \ensuremath{\mathbb{E}}\left[\left.\frac{Z}{\pi^{(h)}(X,Y(1))} \right| Z = 1\right]^{-1} \ensuremath{\mathbb{E}}\left[\left.\frac{ZY}{\pi^{(h)}(X,Y(1))} \right| Z = 1\right]. \end{equation} Under the marginal sensitivity model in Assumption \ref{a:marginal_sens_model}, we then have a non-parametric partial identification bound, $\inf_{h \in \ensuremath{\mathcal{H}}(\Lambda)} \mu_1^{(h)} \leq \mu_1 \leq \sup_{h \in \ensuremath{\mathcal{H}}(\Lambda)} \mu_1^{(h)}$. In Section \ref{sec:sensitivity}, we will construct confidence intervals that cover this partial identification set. In Section \ref{sec:interpretation}, we will consider a finite sample analog of the marginal sensitivity model in order to amplify, interpret, and calibrate our sensitivity analyses. \subsection{Weighting estimators under strong ignorability} \label{sec:weighting_estimators} \label{sec:bal_overview} We estimate $\mu_1$ via a weighted average of treated units' outcomes using weights $\hat{\gamma}(X)$, \begin{equation} \label{eq:mu1_hat} \hat{\mu}_1 = \frac{1}{n_1}\sum_{i=1}^n Z_i \hat{\gamma}(X_i) Y_i, \end{equation} where $\sum_{i=1}^n Z_i = n_1.$ Under strong ignorability (Assumptions \ref{a:ignore} and \ref{a:overlap}), traditional Inverse Propensity Score Weighting (IPW) first models the propensity score, $\hat{\pi}(x)$, directly and then sets weights to be $\hat{\gamma}(X_i) = \frac{1}{\hat{\pi}(X_i)}$. Thus, $\hat{\mu}_1$ is a plug-in version of Equation \eqref{eq:ipw_identity}. This approach can perform poorly in moderate to high dimensions or when there is poor overlap and either $\pi(x)$ or $\hat{\pi}(x)$ is near 0 or 1 \citep{kang2007demystifying}. 
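Because the shifted estimand is linear-fractional in $e^{h}$, its extrema over $\ensuremath{\mathcal{H}}(\Lambda)$ are attained at the vertices $h_i = \pm\log\Lambda$. For a toy dataset the partial identification bounds can therefore be found by brute force over those vertices; this is an illustration of the bound itself under hypothetical data, not the paper's percentile-bootstrap procedure.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n_t = 8                               # a handful of treated units
pi = rng.uniform(0.2, 0.8, size=n_t)  # nominal propensity scores pi(x_i)
Y = rng.normal(size=n_t)              # observed treated outcomes

def bounds(Lam):
    """Sup/inf of the self-normalized shifted estimate over vertex h's."""
    lo, hi = np.inf, -np.inf
    for signs in product([-1.0, 1.0], repeat=n_t):
        w = 1 + (1 / pi - 1) * np.exp(np.log(Lam) * np.array(signs))
        est = np.sum(w * Y) / np.sum(w)
        lo, hi = min(lo, est), max(hi, est)
    return lo, hi

point = np.sum(Y / pi) / np.sum(1 / pi)   # Lambda = 1 (no confounding)
lo2, hi2 = bounds(2.0)
lo4, hi4 = bounds(4.0)
assert lo4 <= lo2 <= point <= hi2 <= hi4  # intervals nest as Lambda grows
```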
Balancing weights, by contrast, directly optimize for covariate balance; recent proposals include \citet{hainmueller2012entropy, zubizarreta2015stable, athey2018approximate, wang2019minimal, hirshberg2019minimax, Tan2020}, and the approach has a long history in survey calibration for non-response \citep{Deville1992, Deville1993}. See \citet{Chattopadhyay2019} and \citet{benmichael_balancing_review} for recent reviews. Most balancing weights estimators attempt to control the imbalance between the weighted treated sample and the full sample in some transformation of the covariates $\phi:\ensuremath{\mathcal{X}} \to \ensuremath{\mathbb{R}}^d$. For example, \citet{zubizarreta2015stable} proposes \emph{stable balancing weights} (SBW) that find weights $\hat{\gamma}(X)$ that solve \begin{equation} \label{eq:opt_primal} \begin{aligned} \min_{\gamma(X) \in \ensuremath{\mathbb{R}}^{n_1}} &\quad \int Z \gamma(X)^2 \, dP_n\\ \text{subject to } & \left\|\int Z \gamma(X) \phi(X) - \phi(X) \, dP_n\right\|_\infty \leq \lambda, \;\;\;\; \gamma(X) \geq 0, \end{aligned} \end{equation} where $P_n$ is the empirical distribution corresponding to a sample of size $n$ from joint distribution $P(\cdot)$. We use this notation to distinguish between the \emph{sample problem} in Equation \eqref{eq:opt_primal} and the \emph{population problem}, where we substitute the underlying distribution $P$ for $P_n$, yielding population weights $\gamma_P(X)$. These are the weights of minimum variance that guarantee \emph{approximate balance}: that the worst imbalance in $\phi$, the transformed covariates, is less than some hyper-parameter $\lambda$.
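In the special case $\lambda = 0$ (exact balance) with the nonnegativity constraint dropped, the minimum-variance weights have a closed form as the minimum-norm solution of the balance equations. The numpy sketch below uses those simplifications purely for illustration; the general problem in Equation \eqref{eq:opt_primal}, with $\gamma(X) \geq 0$ and $\lambda > 0$, requires a quadratic programming solver.

```python
import numpy as np

def exact_balance_weights(phi_treated, phi_all):
    """Minimum-variance weights with exact balance (lambda = 0), ignoring the
    nonnegativity constraint for illustration. Solves
        min ||gamma||^2  s.t.  Phi_T' gamma = n1 * mean(phi_all),
    whose minimum-norm solution is gamma = Phi_T (Phi_T' Phi_T)^{-1} c.
    phi_treated: (n1, d) transformed covariates of the treated units.
    phi_all:     (n, d) transformed covariates of the full sample."""
    n1 = phi_treated.shape[0]
    c = n1 * phi_all.mean(axis=0)
    A = phi_treated.T @ phi_treated  # (d, d); assumed full rank (n1 >= d)
    return phi_treated @ np.linalg.solve(A, c)
```

Including a constant column in $\phi$ forces the weights to sum to $n_1$, so the weighted treated average in Equation \eqref{eq:mu1_hat} is a proper mean.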
There are many other choices of both the penalty on the weights and the measure of imbalance.\footnote{Other possibilities include soft balance penalties rather than hard constraints \citep[e.g.][]{benmichael2020_lor, Keele2020_hosp} and non-parametric measures of balance \citep[e.g.][]{hirshberg2019minimax}.} For instance, in low dimensions, setting $\lambda = 0$ guarantees \emph{exact balance} on the covariates $\phi(X_i)$. Here we focus on the more common case in which achieving exact balance is infeasible; in that case, the particular choice of penalty function is less important. The balancing weights procedure is connected to the modeled IPW approach above through the Lagrangian dual formulation of optimization problem \eqref{eq:opt_primal}. The imbalance in the $d$ transformations of the covariates induces a set of Lagrange multipliers $\beta \in \ensuremath{\mathbb{R}}^d$, and the Lagrangian dual is \begin{equation} \label{eq:opt_dual} \min_{\beta \in \ensuremath{\mathbb{R}}^d} \underbrace{\int Z \left[\beta \cdot \phi(X)\right]_+^2 - \beta \cdot \phi(X) \, dP_n}_{\text{balancing loss}} + \underbrace{\lambda \|\beta\|_1}_{\text{regularization}}, \end{equation} where $[x]_+ = \max\{0, x\}$. The weights are recovered from the dual solution as $\hat{\gamma}(X_i) = \left[\hat{\beta} \cdot \phi(X_i) \right]_+$. As \citet{Zhao2019} and \citet{wang2019minimal} show, this is a regularized $M$-estimator of the propensity score when the inverse propensity score takes the form $\frac{1}{\pi(x)} = \left[\beta^\ast \cdot \phi(x)\right]_+$ for some true $\beta^\ast$. Therefore, we can view $\beta^\ast \cdot \phi(x)$ as a natural parameter for the propensity score; different penalty functions will induce different link functions; see \citet{wang2019minimal}. Similarly, different measures of balance will induce different forms of \emph{regularization} on the propensity score parameters.
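As an illustration of this duality, the display above (a smooth convex loss plus an $\ell_1$ penalty) can be minimized by proximal gradient descent with soft-thresholding. The sketch below solves the dual exactly as written; note that constants and normalizations vary across formulations in the literature, and this is a minimal solver rather than a tuned implementation:

```python
import numpy as np

def dual_balancing_weights(phi, z, lam, iters=500):
    """Proximal gradient descent on the Lagrangian dual above:
    min_beta (1/n) sum_i [ z_i * relu(beta.phi_i)^2 - beta.phi_i ] + lam * ||beta||_1,
    with weights recovered on treated units as gamma_i = relu(beta.phi_i)."""
    n, d = phi.shape
    # crude Lipschitz bound for the smooth part: 2 * ||(1/n) sum_{z=1} phi phi'||
    H = (phi[z == 1].T @ phi[z == 1]) / n
    step = 1.0 / (2.0 * np.linalg.eigvalsh(H).max() + 1e-12)
    beta = np.zeros(d)
    for _ in range(iters):
        relu = np.maximum(phi @ beta, 0.0) * z            # z_i * [beta.phi_i]_+
        grad = (phi.T @ (2.0 * relu) - phi.sum(axis=0)) / n
        beta = beta - step * grad                          # gradient step
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)  # prox
    return beta, np.maximum(phi[z == 1] @ beta, 0.0)
```

The soft-thresholding step is the proximal operator of the $\ell_1$ term, so sparse $\hat{\beta}$ (and hence selective balance) emerges automatically as $\lambda$ grows.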
In the succeeding sections, we will use this dual connection to show that the percentile bootstrap sensitivity procedure proposed by \citet{zhao2019sensitivity} for traditional IPW estimators in the marginal sensitivity model is valid with balancing weights estimators. \section{Sensitivity analysis for balancing weights estimators} \label{sec:sensitivity} We now outline our procedure for extending the percentile bootstrap sensitivity analysis to balancing weights. We introduce the shifted balancing weights estimator, detail the bootstrap sampling procedure, and describe how to efficiently compute the confidence intervals. The key to constructing confidence intervals for the partial identification set is to construct an interval for each sensitivity model $h$ in the collection $\mathcal{H}(\Lambda)$ in Equation \eqref{eq:log_sens_model}. Each $h$ represents a particular deviation from ignorability that remains in the set defined by the marginal sensitivity model. We show that the percentile bootstrap yields valid confidence intervals for each sensitivity model in $\mathcal{H}(\Lambda)$, resulting in a valid interval for the partial identification set. We provide guidance for interpreting our sensitivity analysis procedure in Section \ref{sec:interpretation}. To construct the confidence intervals, we can first consider the case where we know the log odds function $h(x, y) \in \ensuremath{\mathcal{H}}(\Lambda)$. Given $h$, we define the shifted balancing weights estimator of the shifted estimand $\mu_1^{(h)}$ as \begin{align} \label{eq:shifted_est} \hat{\mu}_1^{(h)} = \left(\frac{1}{n_1}\sum\limits_{Z_i=1} \hat{\gamma}^{(h)}(X_i, Y_i(1)) \right)^{-1} \frac{1}{n_1}\sum\limits_{Z_i=1} \hat{\gamma}^{(h)}(X_i, Y_i(1)) Y_i, \end{align} where $\hat{\gamma}^{(h)}(X_i, Y_i(1)) = 1 + (\hat{\gamma}(X_i) - 1)e^{h(X_i, Y_i(1))}$ for $i \in \{i:Z_i=1\}$ are the shifted balancing weights.
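For a fixed vector of log-odds values $h(X_i, Y_i(1))$ on the treated units, the shifted estimator in Equation \eqref{eq:shifted_est} is a one-line computation. A minimal numpy sketch (variable names are ours):

```python
import numpy as np

def shifted_estimate(y_treated, gamma_hat, h_vals):
    """Self-normalized shifted estimator mu_1^(h), with shifted weights
    gamma^(h)_i = 1 + (gamma_hat_i - 1) * exp(h_i), mirroring the shifted
    inverse propensity score 1/pi^(h) = 1 + (1/pi - 1) * e^h."""
    g = 1.0 + (gamma_hat - 1.0) * np.exp(h_vals)
    return np.sum(g * y_treated) / np.sum(g)
```

With $h \equiv 0$ this reduces to the ordinary self-normalized weighted mean of the treated outcomes.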
We then take $B$ bootstrap samples of size $n$ without conditioning on treatment assignment --- so the number of units in the treatment and control groups may vary from sample to sample --- and re-estimate the weights in each sample by solving the balancing weights optimization problem (\ref{eq:opt_primal}) using the bootstrapped data. Then, for every $h \in \mathcal{H}(\Lambda)$, we can construct a confidence interval for $\mu_1^{(h)}$ using the percentile bootstrap as \begin{align} \label{eq:conf_int_h} \left[L^{(h)}, U^{(h)}\right] = \left[Q_{\frac{\alpha}{2}}\left(\hat{\mu}_{1,b}^{*(h)}\right), Q_{1-\frac{\alpha}{2}}\left(\hat{\mu}_{1,b}^{*(h)}\right)\right]. \end{align} $Q_\alpha(\hat{\mu}_{1,b}^{*(h)})$ is the $\alpha$-percentile of $\hat{\mu}_{1,b}^{*(h)}$ in the bootstrap distribution made up of the $B$ bootstrap samples, and $\hat{\mu}_{1,b}^{*(h)}$ is the shifted balancing weights estimator (\ref{eq:shifted_est}) using bootstrap sample $b \in \left\{1, \ldots, B \right\}$. Note that the $^*$ in $\hat{\mu}_{1,b}^{*(h)}$ indicates that it is an estimate from bootstrap data and $b$ is used as an index for the $B$ bootstrap samples. The following theorem states that $[L^{(h)},U^{(h)}]$ is an asymptotically valid confidence interval for $\mu_1^{(h)}$ with at least $(1-\alpha)$-coverage under high-level assumptions in Appendix \ref{sec:proof_thm1} on how well the balancing weights estimate the propensity scores. \begin{theorem} \label{theorem:bal_weights} Under Assumption \ref{assumption:gamma_star} in Appendix \ref{sec:proof_thm1}, for every $h\in \mathcal{H}(\Lambda),$ \begin{align*} \underset{n\to\infty}{\limsup}\;\mathbb{P}_0(\mu_1^{(h)}<L^{(h)})\leq\frac{\alpha}{2} \end{align*} and \begin{align*} \underset{n\to\infty}{\limsup}\;\mathbb{P}_0(\mu_1^{(h)}>U^{(h)})\leq\frac{\alpha}{2}, \end{align*} where $\mathbb{P}_0$ denotes the probability under the joint distribution of the data $P(\cdot)$.
The probability statements apply under both the conditions on the inverse probabilities and the outcomes in Assumption \ref{assumption:gamma_star} and the marginal sensitivity model (Assumption \ref{a:marginal_sens_model}). \end{theorem} Since each of the confidence intervals $[L^{(h)}, U^{(h)}]$ is valid, we can use the Union Method to combine them into a single valid confidence interval $[L^{\text{union}}, U^{\text{union}}]$ for $\mu_1$ under Assumption \ref{a:marginal_sens_model}, where \begin{align} \label{eq:conf_int_union} L^{\text{union}} = \underset{h\in\mathcal{H}(\Lambda)}{\inf} L^{(h)}, \quad U^{\text{union}} = \underset{h\in\mathcal{H}(\Lambda)}{\sup} U^{(h)}. \end{align} Finding $[L^{\text{union}}, U^{\text{union}}]$ would require conducting a grid search over the space of log-odds functions $\ensuremath{\mathcal{H}}(\Lambda)$ and computing percentile bootstrap confidence intervals at each point; this is computationally infeasible. Instead, we can obtain a confidence interval $[L, U]$ for $\mu_1$ by using generalized minimax and maximin inequalities as \begin{align} \label{eq:zhao_ci} \left[L, U\right] = \left[Q_{\frac{\alpha}{2}}\left(\underset{h\in\mathcal{H}(\Lambda)}{\inf}\hat{\mu}_{1,b}^{*(h)}\right), Q_{1-\frac{\alpha}{2}}\left(\underset{h\in\mathcal{H}(\Lambda)}{\sup}\hat{\mu}_{1,b}^{*(h)}\right)\right]. \end{align} \cite{zhao2019sensitivity} show that this interval will be conservative, in the sense of being too wide, since $L \leq L^{\text{union}}$ and $U \geq U^{\text{union}}$.
The extrema of the point estimates can be computed efficiently by solving the following linear fractional program: \begin{equation} \label{eq:weights_lpp} \begin{aligned} \underset{r \in \mathbb{R}^{n_1}}{\min/\max} &\quad \hat{\mu}_1^{(h)} = \frac{\sum\limits_{i=1}^nZ_i \left(1+r_i \left[\hat{\gamma}(X_i)-1\right]\right)Y_i}{\sum\limits_{i=1}^nZ_i\left(1+r_i \left[\hat{\gamma}(X_i) -1\right]\right)} \\ \text{subject to} &\quad r_i \in [\Lambda^{-1}, \Lambda],\text{ for all } i \text{ with } Z_i = 1, \end{aligned} \end{equation} where $r_i= \text{OR}\{\pi(X_i), \pi(X_i,Y_i(1))\}$ are the decision variables. The procedure to obtain confidence interval $[L, U]$ is then: \begin{procedure_step} \label{step:bootstrap} Obtain $B$ bootstrap samples of the data of size $n$ without conditioning on treatment assignment. \end{procedure_step} \begin{procedure_step} \label{step:extrema} For each bootstrap sample $b=1,\ldots,B$, re-estimate the weights and compute the extrema $\underset{h\in\mathcal{H}(\Lambda)}{\inf}\hat{\mu}_{1,b}^{*(h)}$ and $\underset{h\in\mathcal{H}(\Lambda)}{\sup}\hat{\mu}_{1,b}^{*(h)}$ under the collection of sensitivity models $\mathcal{H}(\Lambda)$ by solving \eqref{eq:weights_lpp}. \end{procedure_step} \begin{procedure_step} \label{step:CI} Obtain valid confidence intervals for sensitivity analysis: \begin{align} L = Q_{\frac{\alpha}{2}}\left(\underset{h\in\mathcal{H}(\Lambda)}{\inf}\hat{\mu}_{1,b}^{*(h)}\right), \quad U = Q_{1-\frac{\alpha}{2}}\left(\underset{h\in\mathcal{H}(\Lambda)}{\sup}\hat{\mu}_{1,b}^{*(h)}\right). \end{align} \end{procedure_step} \noindent Replacing $\hat{\gamma}(X_i)$ in Equation \eqref{eq:weights_lpp} with inverse propensity scores estimated by a generalized linear model recovers the procedure from \cite{zhao2019sensitivity}. Finally, a researcher must compute a sensitivity value for a given study; see \citet{Rosenbaum2002} for extensive discussion.
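Steps \ref{step:bootstrap}--\ref{step:CI} can be sketched in a few dozen lines of numpy. The sketch below assumes, as holds for inverse propensity scores, that the estimated weights satisfy $\hat{\gamma}(X_i) \geq 1$; the objective in \eqref{eq:weights_lpp} is then linear fractional with a threshold-structured optimum in the sorted outcomes, so scanning all thresholds is exact \citep{zhao2019sensitivity}. All names are ours, and the code is illustrative rather than optimized:

```python
import numpy as np

def extrema_fractional(y, gamma, lam):
    """Min and max of sum(w*y)/sum(w) with w_i = 1 + r_i*(gamma_i - 1),
    over r_i in [1/lam, lam]. The objective is linear fractional, so the
    optimum sits at a vertex of the box; with gamma_i >= 1 the optimal
    vertex has a threshold structure in sorted y, so scanning all n+1
    threshold positions is exact."""
    order = np.argsort(y)
    y, gamma = y[order], gamma[order]
    n, lo, hi = len(y), 1.0 / lam, lam
    best_min, best_max = np.inf, -np.inf
    for k in range(n + 1):
        r = np.concatenate([np.full(k, lo), np.full(n - k, hi)])
        w = 1.0 + r * (gamma - 1.0)          # down-weight small y: pushes mean up
        best_max = max(best_max, np.sum(w * y) / np.sum(w))
        w = 1.0 + r[::-1] * (gamma - 1.0)    # up-weight small y: pushes mean down
        best_min = min(best_min, np.sum(w * y) / np.sum(w))
    return best_min, best_max

def bootstrap_ci(x, z, y, lam, fit_weights, B=200, alpha=0.05, seed=0):
    """Steps 1-3: resample rows without conditioning on z, refit the weights,
    take extrema over the sensitivity models, then percentile endpoints.
    fit_weights(x, z) must return weights for the treated units."""
    rng = np.random.default_rng(seed)
    n, lows, highs = len(y), [], []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        zb = z[idx]
        if zb.sum() in (0, n):               # degenerate resample; skip it
            continue
        g = fit_weights(x[idx], zb)
        m_lo, m_hi = extrema_fractional(y[idx][zb == 1], g, lam)
        lows.append(m_lo)
        highs.append(m_hi)
    return np.quantile(lows, alpha / 2), np.quantile(highs, 1 - alpha / 2)
```

Scanning a grid of $\Lambda$ values and recording the smallest one whose interval covers zero then yields the sensitivity value for the study.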
Suppose the confidence interval for the PATE under ignorability ($\Lambda = 1$) does not contain zero, indicating a statistically significant effect. As $\Lambda$ increases, allowing for stronger violations of ignorability, the confidence interval will widen and eventually cross zero. Of particular interest then is the minimum value of $\Lambda$ for which the confidence interval contains zero; we denote this value as $\Lambda^\ast$. Thus, we can interpret $\Lambda^*$ as the smallest bound on the odds ratio between the probability of treatment with and without conditioning on the treated potential outcome at which we no longer observe a significant treatment effect. This represents the degree of confounding required to change a study's causal conclusions, with larger values of $\Lambda^*$ representing more robust estimates. Sensitivity analysis may also be useful in cases where the confidence interval under $\Lambda = 1$ is very small and includes zero, indicating no large effect in any direction. In this setting, a researcher may obtain a sensitivity value $\Lambda^*$ by defining a minimal effect size $\iota > 0$ of practical interest and repeating the sensitivity analysis for larger and larger values of $\Lambda$ until the confidence interval includes either $-\iota$ or $\iota$, revealing the degree of confounding needed to mask a practically important effect. For examples of such sensitivity analyses for equivalence results, see \citet{pimentel2015large, pimentel2020optimal}. \section{Amplifying, interpreting, and calibrating sensitivity parameters} \label{sec:interpretation} In this section, we provide guidance for interpreting the main sensitivity parameter $\Lambda^*$ by ``amplifying'' the sensitivity analyses into a constraint on the product of: (1) the level of remaining imbalance in confounders; and (2) the strength of the relationship between the imbalanced confounders and the treated potential outcome.
\subsection{Finite sample sensitivity model} \label{sec:bal_weight_sens_model} As a first step toward the amplification procedure, we introduce a finite sample analog to the marginal sensitivity model in Assumption \ref{a:marginal_sens_model}. Importantly, the sensitivity analysis procedure for the marginal sensitivity model outlined in Section \ref{sec:sensitivity} remains the same; we introduce this extension in order to amplify, interpret, and calibrate the sensitivity analysis outlined above. Recall that the marginal sensitivity model constrains the difference between the true probability of treatment, conditional on the treated potential outcome and the covariates, and the propensity score that only conditions on the covariates. The true propensity score guarantees that, in expectation, the self-normalized inverse probability weighted outcomes for the treated group are equal to $\mu_1$, i.e., that \begin{equation} \label{eq:true_ipw} \ensuremath{\mathbb{E}}[Y(1)] = \ensuremath{\mathbb{E}}\left[\frac{1}{\pi(X, Y(1))} ~\Big\vert~ Z = 1 \right]^{-1} \ensuremath{\mathbb{E}}\left[\frac{Y}{\pi(X, Y(1))} ~\Big\vert~ Z = 1 \right]. \end{equation} However, this does not guarantee that the two quantities will be the same in finite samples. In that case, we can instead consider \emph{oracle weights} $\mathring{\gamma}(X_i, Y_i(1))$ that guarantee equality between the weighted average of treated group outcomes and the \emph{sample} average treated outcome $\mathring{\mu}_1 = \frac{1}{n}\sum_{i=1}^n Y_i(1)$: \begin{equation} \label{eq:oracle_opt_primal} \begin{aligned} \underset{\gamma(X, Y(1)) \in \mathbb{R}^{n_1}}{\min} &\quad \sum\limits_{Z_i = 1} \gamma(X_i, Y_i(1)) \log\frac{\gamma(X_i, Y_i(1))}{\hat{\gamma}(X_i)} \\ \text{subject to} &\quad \frac{1}{\sum_{Z_i = 1} \gamma(X_i, Y_i(1))}\sum_{Z_i=1} \gamma(X_i, Y_i(1)) Y_i(1) = \frac{1}{n}\sum\limits_{i=1}^n Y_i(1), \end{aligned} \end{equation} where $\hat{\gamma}(X_i)$ are the estimated weights from balancing the observed covariates. These oracle weights satisfy two key properties.
First, they exactly balance the treated potential outcomes between all units and the weighted treated units. Second, they are as close (in terms of entropy) as possible to the estimated weights. The corresponding oracle weight estimator of the complete data mean $\mu_1$ is then: \begin{align} \label{eq:mu_oracle_est} \mathring{\mu}_1 = \sum\limits_{Z_i=1} \frac{\mathring{\gamma}(X_i,Y_i(1))}{\sum\limits_{Z_i=1} \mathring{\gamma}(X_i,Y_i(1))} Y_i. \end{align} Extending the population sensitivity model in Assumption \ref{a:marginal_sens_model}, we can now define an analogous finite sample sensitivity model that bounds the difference between the estimated weights and the oracle weights \emph{within the sample} rather than in the population. For $\Lambda\geq 1$, we consider the set of oracle weights $\mathring{\gamma}(X,Y(1))$ that satisfy: \begin{align} \label{eq:bal_weights_sens_model} \mathcal{E}_{\mathring{\gamma}}(\Lambda)=\{ \mathring{\gamma}(x_i, y_i)\;\;:\;\;\Lambda^{-1} \leq \frac{\mathring{\gamma}(x_i, y_i)-1}{\hat{\gamma}(x_i)-1} \leq \Lambda, \; \forall \; i = 1,\ldots,n\}. \end{align} \noindent Rather than bounding the difference between the true probability of treatment $\pi(x,y)$ and the propensity score only conditioned on the covariates $\pi(x)$ in the population, we bound the difference between the estimated and oracle weights \emph{in the sample}. Thus, this model can be viewed as the finite sample analogue of the superpopulation marginal sensitivity model; it is therefore more consistent with notions of sensitivity common in the matching literature \citep[e.g.,][]{Rosenbaum2002}. \subsection{Amplification} \label{sec:amplification} In order for a confounder to bias causal effect estimates, it must be associated with both the treatment and the outcome.
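The KL projection in Equation \eqref{eq:oracle_opt_primal} admits an exponential-tilting solution: up to an overall scale, which cancels in the self-normalized estimator \eqref{eq:mu_oracle_est}, the oracle weights take the form $\mathring{\gamma}_i \propto \hat{\gamma}(X_i)\, e^{t (Y_i(1) - \mathring{\mu}_1)}$ for a scalar $t$ chosen so the constraint holds. A numpy sketch using bisection (an illustration, not the authors' implementation; it requires the target to lie strictly between the smallest and largest treated outcome):

```python
import numpy as np

def tilted_oracle_weights(gamma_hat, y1, target, t_max=50.0, tol=1e-12):
    """Oracle weights as an exponential tilt of the estimated weights:
    gamma_i(t) = gamma_hat_i * exp(t * (y1_i - target)).
    The self-normalized weighted mean of y1 is increasing in t (its derivative
    is the tilted variance), so bisection finds the t matching `target`."""
    def gap(t):
        w = gamma_hat * np.exp(t * (y1 - target))
        return np.sum(w * (y1 - target)) / np.sum(w)
    lo, hi = -t_max, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
    t = 0.5 * (lo + hi)
    return gamma_hat * np.exp(t * (y1 - target))
```

If the estimated weights already reproduce the target mean, the tilt is $t = 0$ and the oracle weights coincide with the estimated weights.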
An ``amplification'' enhances a sensitivity analysis's interpretability by allowing a researcher to instead interpret the results of the sensitivity analysis in terms of two parameters: one controlling the confounder's relationship with the treatment and the other controlling its relationship with the outcome \citep{rosenbaum2009amplification}. In our finite sample sensitivity model, the parameter $\Lambda$ controls how far the estimated weights can be from oracle weights that exactly balance the treated potential outcome. To aid interpretation, we propose an amplification that expresses the results of our procedure in terms of the imbalance in confounders and the strength of the relationship between the confounders and the treated potential outcome. To start, we define our error of interest, $\mathring{\mu}_1 - \hat{\mu}_1$, to be the difference between the estimates of the complete data mean $\mu_1$ using oracle weights and estimated weights. Therefore, this error represents the difference between what we would like to have in a finite sample, an estimate using weights that exactly balance the potential outcome under treatment, and what we have, an estimate using weights that only balance observed covariates.\footnote{This definition of error as the difference between two sample estimates is similar to \cite{Cinelli2020}'s formulation of bias in their sensitivity analysis framework for linear regression models.} We can write the difference between the average treated potential outcome in the sample and our estimate $\hat{\mu}_1$ in terms of the imbalance in the treated potential outcome $Y(1)$: \[ \mathring{\mu}_1 - \hat{\mu}_1 = \sum\limits_{Z_i = 1} \Big[ \frac{\mathring{\gamma}(X_i, Y_i(1)) }{\sum\limits_{Z_i=1} \mathring{\gamma}(X_i, Y_i(1))} - \frac{\hat{\gamma}(X_i) }{n_1} \Big] Y_i(1).
\] To relate imbalance in $Y(1)$ to observable quantities, we decompose $Y(1)$ into two parts: (1) the linear projection of covariates that the estimated balancing weights perfectly balance; and (2) the residual. Specifically, let $U$ be the projection of the re-weighted covariates onto $Z$ and let $W$ be the orthogonal component. Therefore, $U$ represents the parts of the observed and unobserved covariates that are not exactly balanced and $W$ represents the parts that are exactly balanced. We write this decomposition as $Y(1) = W \beta_w + U \beta_u$; this linear model merely serves as a guide to interpretation, rather than a true relationship we are assuming in the primary causal analysis. One way to reason about the covariates that are not exactly balanced is as follows. Consider imbalanced covariate $A$, and run a linear regression of $A$ on treatment assignment $Z$. The fitted values would be included in $U$ and the residuals in $W$, since the residuals represent the part of $A$ that is linearly independent from $Z$. Because the $W$ are exactly balanced by construction, they do not introduce any error. Finally, in numerical examples (Section \ref{sec:examples}), we focus on standardized covariates. With this decomposition, we can write the error as a product of two terms: \begin{align} \label{eq:error} \mathring{\mu}_1 - \hat{\mu}_1 = \beta_u \cdot \left(\frac{1}{n}\sum_{i=1}^n U_i - \frac{1}{n_1}\sum_{Z_i = 1} \hat{\gamma}(X_i) U_i \right) \equiv \beta_u \cdot \delta_u, \end{align} where $\delta_u$ is the imbalance in $U$. 
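A small synthetic check of the identity in Equation \eqref{eq:error}: rather than constructing $W$ and $U$ by projection, we simply posit an exactly balanced part $W$ and an imbalanced part $U$, generate $Y(1)$ from the linear decomposition, and verify that the error reduces to $\beta_u \cdot \delta_u$. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n1 = 200, 80
z = np.zeros(n, dtype=int)
z[:n1] = 1

W = rng.normal(size=(n, 3))                 # parts the weights will balance exactly
U = rng.normal(size=n) + 0.5 * z            # imbalanced part, correlated with z
beta_w = rng.normal(size=3)
beta_u = 1.7
y1 = W @ beta_w + beta_u * U                # Y(1) = W beta_w + U beta_u

# exact-balance (lambda = 0) minimum-norm weights on [1, W] for treated units
phi = np.column_stack([np.ones(n), W])
phi_t = phi[z == 1]
gamma = phi_t @ np.linalg.solve(phi_t.T @ phi_t, n1 * phi.mean(axis=0))

mu_ring = y1.mean()                                  # sample mean of Y(1)
mu_hat = np.sum(gamma * y1[z == 1]) / n1             # weighted treated mean
delta_u = U.mean() - np.sum(gamma * U[z == 1]) / n1  # imbalance in U
# W is exactly balanced, so the error collapses to beta_u * delta_u
assert np.isclose(mu_ring - mu_hat, beta_u * delta_u)
```

Because the balanced component $W$ contributes nothing to the gap, any combination of $\delta_u$ and $\beta_u$ with the same product produces the same error, which is what the contour plots below exploit.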
As in Section \ref{sec:sensitivity} above, we can use the linear fractional program \eqref{eq:weights_lpp} to find upper and lower bounds for the error in Equation \eqref{eq:error}: \begin{align} \label{eq:bias_bound_extrema} \Big(\underset{h\in \mathcal{H}(\Lambda)}{\inf}\hat{\mu}_1^{(h)} \Big) - \hat{\mu}_1 \leq \mathring{\mu}_1 - \hat{\mu}_1 \leq \Big(\underset{h\in \mathcal{H}(\Lambda)}{\sup}\hat{\mu}_1^{(h)} \Big) - \hat{\mu}_1. \end{align} Therefore, we can constrain the product $\delta_u \cdot \beta_u$: \begin{align} \label{eq:ampl_betau} \Big(\underset{h\in \mathcal{H}(\Lambda)}{\inf}\hat{\mu}_1^{(h)} \Big) - \hat{\mu}_1 \leq \mathring{\mu}_1 - \hat{\mu}_1 = \delta_u \cdot \beta_u \leq \Big(\underset{h\in \mathcal{H}(\Lambda)}{\sup}\hat{\mu}_1^{(h)} \Big) - \hat{\mu}_1. \end{align} Now, for any value of our sensitivity analysis parameter $\Lambda$, we can compute a corresponding bound on the error, $\mathring{\mu}_1 - \hat{\mu}_1$, and can use this bound to decompose the error into different values of $\delta_u$ and $\beta_u$. In practice, as in Equation \eqref{eq:ampl_betau}, we first bound the error via the extrema under the balancing weights sensitivity model. We then set the error equal to the maximum absolute value of the upper and lower bounds in Equation \eqref{eq:ampl_betau} for $\Lambda = \Lambda^\ast$. Therefore, this value is equal to the maximum absolute value of error that is possible under the balancing weights sensitivity model. Finally, we compute a curve that maps the value of error to different combinations of $\delta_u$ and $\beta_u$ for enhanced interpretation. For example, $\left(\delta_u, \beta_u\right) = \left(1.5, 2\right)$ and $\left(\delta_u, \beta_u\right) = \left(1, 3\right)$ are both consistent with error $\mathring{\mu}_1 - \hat{\mu}_1 = 3$.
In Section \ref{sec:examples}, we illustrate our sensitivity analysis procedure and how our amplification can produce more interpretable results. \section{Numerical examples} \label{sec:examples} We now illustrate the sensitivity analysis and amplification procedures using a simulated illustration and two real data examples. We consider the situation in which a researcher uses balancing weights to estimate the Population Average Treatment Effect on the Treated (PATT) of a treatment on an outcome of interest; see Appendix \ref{sec:att} for an overview of the PATT in our setting. Based on domain knowledge, the researcher believes that the set of observed covariates includes most factors associated with the treatment assignment and the outcome, while leaving open the possibility that there remain relevant unobserved covariates. To start, we compute $\Lambda^*$, which represents the confounding required to alter a study's causal conclusions. In order to compute $\Lambda^*$, we compute confidence intervals for a grid of values of $\Lambda$, starting with $\Lambda = 1$ and then considering larger values of $\Lambda$. If the confidence interval corresponding to $\Lambda = 1$ contains zero, then the effect estimate is not significant, even under ignorability. If the confidence interval for $\Lambda = 1$ does not contain zero, increasing the value of $\Lambda$ causes the confidence intervals to widen and eventually cross zero for some value of $\Lambda$. We set $\Lambda^*$ equal to the minimum value of $\Lambda$ for which the confidence interval includes zero. Since the percentile bootstrap procedure induces randomness, this value of $\Lambda^*$ is computed with Monte Carlo error. We fix the error equal to the maximum absolute value of the upper and lower bounds on error in Equation \eqref{eq:error_bound_extrema_patt}.
This value is the maximum absolute value of error possible under the balancing weights sensitivity model with $\Lambda = \Lambda^*$ and is therefore the error required to overturn the study's causal conclusion. We create contour plots with curves that map the particular value of error to varying values of $\delta_u$ and $\beta_u$, allowing the error to be alternatively interpreted in terms of two sensitivity analysis parameters. We include standardized observed covariates on the contour plots, which serve as guides for reasoning about potential unobserved covariates. Blue points correspond to observed covariates with imbalance prior to weighting, while red points represent post-weighting imbalance. We view the post-weighting imbalance corresponding to the red points as a best-case scenario for potential unobserved covariates --- in general, we expect to achieve better balance in terms of the observed covariates that we directly target than unobserved covariates. Conversely, the pre-weighting imbalance represented by the blue points may be more in line with our expectations for unobserved covariates. \subsection{Example 1: Simulated data} \label{sec:simulated_data} \begin{figure} \caption{Example contour plots. The black curve is the error for a highly sensitive study, green is ambiguous, and purple is robust. The blue and red points are observed covariates with imbalance before and after weighting, respectively. The red shaded region is the convex hull of the set including the red points, the origin, the point on the y-axis corresponding to the maximum $\beta_u$ value among the red points, and the point on the x-axis corresponding to the maximum $\delta_u$ value among the red points. The region represents the approximate magnitude of the observed covariates with post-weighting imbalance.} \label{fig:conceptual} \end{figure} Figure \ref{fig:conceptual} illustrates three cases a researcher might encounter when making the contour plots. 
The first scenario, corresponding to the black curve, is when the error curve intersects with the shaded red region. We view this as indicative of results that are sensitive to violations of ignorability, as confounders comparable to the observed covariates after weighting can overturn the results. By contrast, we view the scenario depicted by the purple curve in Figure \ref{fig:conceptual} as evidence that a study is fairly robust. The horizontal and vertical dotted blue lines correspond to the maximum $\beta_u$ value and the maximum pre-weighting imbalance among the observed covariates, respectively. Since the curve is above and to the right of the intersection of the two lines, the effect estimate is robust to an unobserved confounder with strength and pre-weighting imbalance as large as the maximum among the observed covariates. The final case occurs when the error curve lies in between the red region and the intersection of the two lines. The green curve in Figure \ref{fig:conceptual} illustrates this scenario. We view this as an ambiguous result: additional domain knowledge is required to judge whether confounding of this magnitude is plausible. Conditional on observing one of the cases outlined above, the actual value of $\Lambda^*$ can provide additional insight. For example, if we are in the ambiguous scenario depicted by the green curve in Figure \ref{fig:conceptual}, we would be more skeptical of the study's causal conclusion if $\Lambda^* = 1.05$ rather than $\Lambda^* = 2$, since smaller differences between the estimated and oracle weights could overturn the study's results. \subsection{Example 2: LaLonde job training experiment} \label{sec:lalonde} We re-examine data analyzed by \cite{lalonde1986evaluating} from the National Supported Work Demonstration Program (NSW), a randomized job training program.
Specifically, we use the subset of data from \cite{dehejia1999causal} to form a treatment group and observational data from the Current Population Survey--Social Security Administration file (CPS1) to form a control group. We consider estimating the effect of the job training program on 1978 real earnings. The covariates for each individual include their age, years of education, race, marital status, whether or not they graduated high school, and earnings and employment status in 1974 and 1975. In total, there are 185 treated units and 15,992 control units. First, we use stable balancing weights in Equation \eqref{eq:opt_primal} to estimate $\widehat{\text{PATT}} = \$1,165$, which is in line with \citet{wang2019minimal}'s estimate using slightly different approximate balancing weights. We then compute $\Lambda^* = 1.01$, which indicates that even a slight difference between the estimated and oracle weights can negate the causal effect estimate. Figure \ref{fig:lalonde_sens_results} shows how the range of point estimates and the 95\% confidence interval widen as $\Lambda$ increases, with the confidence interval including zero for $\Lambda^*$. The range of point estimates is obtained by computing the extrema of the point estimates for a particular $\Lambda$. \begin{figure} \caption{Point estimate and confidence intervals for the LaLonde data. Dotted intervals are point estimate intervals and solid intervals are 95\% confidence intervals.} \label{fig:lalonde_sens_results} \caption{Contour plot for the LaLonde data. The black curve is the error for $\Lambda^*$. The blue and red points are observed covariates with imbalance before and after weighting, respectively. 
The red shaded region represents the approximate magnitude of the observed covariates with post-weighting imbalance.} \label{fig:lalonde_contour} \caption{Sensitivity analysis results with the LaLonde data} \label{fig:lalonde_figs} \end{figure} Figure \ref{fig:lalonde_contour} shows the contour plot for the LaLonde data. We observe that the error curve intersects with the red region. Therefore, the effect estimate is not robust to a confounder with similar levels of imbalance and strength as those seen in the observed covariates after weighting. Based on this, we deem the estimated effect to be fairly sensitive to unmeasured confounding. \subsection{Example 3: fish consumption and blood mercury levels} \label{sec:fish} We now examine data analyzed by \cite{zhao2018cross} and \cite{zhao2019sensitivity} from the National Health and Nutrition Examination Survey (NHANES) 2013-2014 containing information about fish consumption and blood mercury levels. We evaluate the sensitivity of estimating the effect of fish consumption on blood mercury levels using balancing weights. There are 234 treated units (consumption of greater than 12 servings of fish or shellfish in the past month) and 873 control units (zero or one servings). The outcome of interest is $\log_2$(total blood mercury), measured in micrograms per liter; the covariates include gender, age, income, whether income is missing and imputed, race, education, smoking history, and the number of cigarettes smoked in the previous month. To start, the stable balancing weights \eqref{eq:opt_primal} estimate of the PATT is an increase of 2.1 in $\log_2$(total blood mercury) and $\Lambda^*$ is approximately equal to 5.5 for the fish consumption data. We display the sensitivity analysis results for multiple values of $\Lambda$ in Figure \ref{fig:fish_sens_results}. 
We observe that the confidence interval corresponding to no confounding ($\Lambda = 1$) is far from zero and that the confidence interval for $\Lambda^* = 5.5$ just begins to cross zero. \begin{figure} \caption{Point estimate and confidence intervals for the fish data. Dotted intervals are point estimate intervals and solid intervals are 95\% confidence intervals.} \label{fig:fish_sens_results} \caption{Contour plot for the fish data. The black curve is the error for $\Lambda^*$. The blue and red points are observed covariates with imbalance before and after weighting, respectively. The red shaded region represents the approximate magnitude of the observed covariates with post-weighting imbalance.} \label{fig:fish_contour} \caption{Sensitivity analysis results with the fish data} \label{fig:fish_figs} \end{figure} The contour plot (Figure \ref{fig:fish_contour}) for the fish data indicates an extremely robust causal effect estimate. As the error curve is far above the intersection of the dotted lines that represents the maximum strength and pre-weighting imbalance among the observed covariates, confounding significantly stronger than the observed covariates would be required to alter the causal conclusion. Comparing the contour plot for the fish data to the contour plot for the LaLonde data (Figure \ref{fig:lalonde_contour}), we observe a clear difference. While the error curve for the LaLonde data lies well within the range of the observed covariates, the error curve corresponding to the fish data is far from the observed covariates. From this contrast, we conclude that the causal effect estimate for the fish data is much more robust than that for the LaLonde data. \section{Discussion} \label{sec:discussion} Balancing weights estimation is a popular approach for estimating treatment effects by weighting units to balance covariates. In this paper, we develop a framework for assessing the sensitivity of these estimators to unmeasured confounding. 
We then propose an amplification for enhanced interpretation and illustrate our method through real data examples. We briefly outline potential directions for future work. First, we could extend our framework to include \textit{augmented balancing weights estimators}, which use an outcome model to correct for bias due to inexact balance. Second, we could extend our sensitivity analysis framework to balancing weights in panel data settings. For example, we could adapt this framework to variants of the \textit{synthetic control method} \citep{abadie2003economic, ben2018augmented}, extending some recent proposals for sensitivity analysis from \citet{firpo2018synthetic}. Additionally, \citet{dorn2021sharp} recently proposed a modification to \citet{zhao2019sensitivity}'s procedure, using quantile balancing to obtain sharper sensitivity analysis intervals. We could adapt their modification to our proposed balancing weights framework. Finally, we could use our framework to provide guidance in the design stage of balancing weights estimators. When estimating treatment effects using balancing weights, researchers must make decisions including the specific dispersion function of the weights, the particular imbalance measure, and, in many cases, an acceptable level of imbalance. We could extend our sensitivity analysis procedure to help make these decisions to improve robustness and power in the presence of unmeasured confounding. For example, we could provide insight into the trade-off between achieving better (marginal) balance on a few covariates or worse balance on a richer set of covariates. 
\singlespacing \appendix \section{Proofs} \label{sec:app_proofs} \subsection{Proof of Theorem \ref{theorem:bal_weights}} \label{sec:proof_thm1} \begin{proof} We prove that, after centering, the difference between the mean computed by estimating and evaluating the inverse probability function $\gamma$ on bootstrap data and the mean computed by using the true function $\gamma$ and evaluating on the actual data is of order $n^{-1/2}$. For simplicity, we consider estimating the population mean from an independent and identically distributed random sample with missing outcome data. For unit $i$, let $Y_i$ be the outcome, $X_i$ be a vector of observed covariates, and $Z_i$ be a response indicator, where $Z_i = 1$ if we observe unit $i$'s outcome and $Z_i = 0$ otherwise. We consider using the estimator $\hat{\mu} = \frac{1}{n_1}\sum\limits_{i=1}^n\hat{\gamma}(X_i)Z_iY_i$, where $n_1 = \sum_{i=1}^n Z_i$, to estimate $\mu=\mathbb{E}[Y]= \mathbb{E}[\frac{ZY}{\pi_P(X)}]=\mathbb{E}[\gamma_P(X)ZY]$ from observed data $O_i = (X_i, Z_i, Y_iZ_i)_{i=1}^n$ drawn from the joint distribution $P(\cdot)$. We sample split to make the proof and arguments simpler and more transparent \citep[see][]{klaassen1987consistent}; the proof can equivalently be done without sample splitting, but splitting avoids the associated complexities. We split the data into two equally sized samples, $i = 1,\ldots,m$ and $i = m+1,\ldots,n$. For both samples, we take an iid bootstrap sample of size $m$ from the respective empirical distribution to obtain data $O_i^* = (X_i^*, Z_i^*, Y_i^*Z_i^*)_{i=1}^m$ and $O_i^* = (X_i^*, Z_i^*, Y_i^*Z_i^*)_{i=m+1}^n$. Let $\hat{\gamma}^*$ denote an estimate of $\gamma$ using bootstrap data. We estimate $\hat{\gamma}^*(X)$ in one bootstrap sample and evaluate it in the other bootstrap sample. We then switch roles and take a weighted average of the two estimates, with weights proportional to the number of respondents $\sum_{i=1}^m Z_i^*$ in the respective bootstrap samples, to obtain an efficient estimate.
This sample splitting approach with reversing roles and averaging yields the same estimate as without sample splitting to order $o(n^{-1/2})$. We demonstrate this through simulation (see Appendix \ref{sec:app_sim}). We examine the case where we evaluate on the bootstrap sample from the second half of the data and estimate $\hat{\gamma}^*(X)$ from the bootstrap sample from the first half. We make the following mild assumptions on how $\hat{\gamma}$ is constructed: \begin{assumption} \label{assumption:gamma_star} Consider function $\Tilde{\gamma}:\mathscr{X}^m \times \{0,1\}^m \to \mathbb{R}^+$. As an example, consider the function corresponding to the stable balancing weights optimization problem \eqref{eq:opt_primal}. Let $\hat{\gamma}_n^*(x) = \Tilde{\gamma}_{P^\ast_m}(X_1^*,\ldots, X_m^*, Z_1^*,\ldots, Z_m^*, x)$ and $\hat{\gamma}_n(x) = \Tilde{\gamma}_{P_m}(X_1,\ldots, X_m, Z_1,\ldots, Z_m, x)$, where $P^\ast_m$ and $P_m$ are the empirical distributions for the bootstrap sample from the first half of the data and the actual first half of the data, respectively, be such that: \begin{enumerate} \item $\Tilde{\gamma}$ is uniformly bounded in $m$ and $x$. \item \label{a:E_sup} $\mathbb{E}_1\left[\left(\underset{x}{\sup}\Big| \hat{\gamma}_n^*(x) - \hat{\gamma}_n(x) \Big|\right)^2 \right] = o_p(1)$. \item \label{a:sup} $\underset{x}{\sup}\Big|\hat{\gamma}_n(x) - \gamma_P(x)\Big| = o_p(1)$.\footnote{\cite{wang2019minimal}'s Theorem 2 proves that Assumption \ref{assumption:gamma_star}.\ref{a:sup} holds for a hard $L^\infty$ balance constraint on the weights.} \item \label{a:bias} $\mathbb{E}\left[\hat{\gamma}_n(X)ZY \right] - \mathbb{E}\left[\gamma_P(X)ZY \right] = o(n^{-1/2})$.
\end{enumerate} \end{assumption} Assumption \ref{assumption:gamma_star}.\ref{a:bias} assumes that the bias of $\hat{\mu}$ for estimating $\mu$ is $o(n^{-1/2})$, which ensures that the weighting estimator $\hat{\mu}$ is asymptotically normal. The assumptions for Theorem 3 in \citet{wang2019minimal} and Theorem 2 in \citet{hirshberg2019minimax} are possible conditions under which asymptotic normality holds. These are representative of typical assumptions in this setting, where the estimator is assumed to be a function of $d$ covariates with assumptions on the number of components of an orthogonal expansion. We note that weaker conditions are sufficient. For example, an alternative for $d$ dimensions would be to assume that the weighting function is additive, of the form $\sum_{j=1}^d \gamma_j(X_j)$, where $X_j$ is the $j$th component of $X$. Then, we only require structure for one dimension instead of $d$ dimensions. These assumptions together imply that $\hat{\gamma}_n^*$ is uniformly consistent for $\gamma$. In particular, Assumption \ref{assumption:gamma_star} implies \begin{align*} & \mathbb{E}_1\Big[ \Big( \underset{x}{\sup}\Big| \hat{\gamma}_n^*(x) - \gamma_P(x) \Big| Y_{m+1} Z_{m+1} \Big)^2 \Big] \\ =& \mathbb{E}_1\Big[ \Big( \underset{x}{\sup}\Big| \hat{\gamma}_n^*(x) - \gamma_P(x) \Big| \Big)^2\Big] \mathbb{E}_1\Big[Y_{m+1}^2 Z_{m+1}^2 \Big] \\ =& o_p(1), \end{align*} where $\mathbb{E}_1$ denotes the conditional expectation given the first sample. Note that the conditions in Assumption \ref{assumption:gamma_star} are stronger than needed and could be relaxed.
We proceed conditional on the first sample $O_i = (X_i, Z_i, Y_iZ_i)_{i=1}^m$ and the first bootstrap sample $O_i^* = (X_i^*, Z_i^*, Y_i^*Z_i^*)_{i=1}^m$. Therefore, $\hat{\gamma}_n^*$ is a completely known function. Let $\mathbb{E}^*$ denote the conditional expectation of the second bootstrap sample given the actual second sample. To show that the bootstrap can be validly applied, it suffices to show that \begin{equation} \label{eq:want_prove_orig} \begin{aligned} &\frac{1}{m}\sum\limits_{i=m+1}^{2m} \hat{\gamma}_n^*(X_i^*)Z_i^*Y_i^* - \mathbb{E}^*\Big[ \frac{1}{m} \sum\limits_{i=m+1}^{2m} \hat{\gamma}_n^*(X_i^*)Z_i^*Y_i^*\Big] \\ =& \frac{1}{m}\sum\limits_{i=m+1}^{2m} \gamma_P(X_i)Z_iY_i - \mathbb{E}_1\Big[\gamma_P(X_{m+1})Z_{m+1}Y_{m+1}\Big] + o_p(n^{-1/2}). \end{aligned} \end{equation} Since \begin{align*} \mathbb{E}^*\Big[ \frac{1}{m} \sum\limits_{i=m+1}^{2m} \hat{\gamma}_n^*(X_i^*)Z_i^*Y_i^*\Big] = \frac{1}{m}\sum\limits_{i=m+1}^{2m} \hat{\gamma}_n^*(X_i)Z_iY_i, \end{align*} then, by Theorem 2.1 from \citet{bickel1981some}, \begin{align} &\frac{1}{m}\sum\limits_{i=m+1}^{2m} \hat{\gamma}_n^*(X_i^*)Z_i^*Y_i^* - \frac{1}{m}\sum\limits_{i=m+1}^{2m} \hat{\gamma}_n^*(X_i)Z_iY_i \label{eq:all_boot_term}\\ \text{and}\quad& \frac{1}{m}\sum\limits_{i=m+1}^{2m} \Big(\hat{\gamma}_n^*(X_i)Z_iY_i - \mathbb{E}_1\Big[ \hat{\gamma}_n^*(X_{m+1})Z_{m+1}Y_{m+1} \Big] \Big) \label{eq:boot_fcn_term} \end{align} have the same limiting distribution. Since (\ref{eq:all_boot_term}) and (\ref{eq:boot_fcn_term}) have the same limiting distribution, instead of (\ref{eq:want_prove_orig}) it suffices to show that the difference between the mean with the true $\gamma$ and the mean with $\hat{\gamma}_n^*$ estimated on the bootstrap data is of order $n^{-1/2}$.
Therefore, we show \begin{equation} \label{eq:want_prove2} \begin{aligned} &\frac{1}{m}\sum\limits_{i=m+1}^{2m} \hat{\gamma}_n^*(X_i)Z_iY_i - \mathbb{E}_1\Big[\hat{\gamma}_n^*(X_{m+1})Z_{m+1}Y_{m+1}\Big] \\ =& \frac{1}{m}\sum\limits_{i=m+1}^{2m} \gamma_P(X_i)Z_iY_i - \mathbb{E}_1\Big[\gamma_P(X_{m+1})Z_{m+1}Y_{m+1}\Big] + o_p(n^{-1/2}). \end{aligned} \end{equation} We have now reduced the problem to showing that the true function $\gamma$ can be replaced with $\hat{\gamma}_n^*$. In order to show this, we use properties of $\hat{\gamma}_n^*$ from Assumption \ref{assumption:gamma_star}. First, we let \begin{align*} \Delta(X_i,Y_i,Z_i) = (\hat{\gamma}_n^*(X_i)-\gamma_P(X_i))Z_iY_i - \mathbb{E}_1 \Big[ (\hat{\gamma}_n^*(X_{m+1})-\gamma_P(X_{m+1}))Z_{m+1}Y_{m+1} \Big]. \end{align*} Note that the difference between the terms on the left and right hand sides of (\ref{eq:want_prove2}) is equal to $\frac{1}{m}\sum\limits_{i=m+1}^{2m} \Delta(X_{i},Y_{i},Z_{i})$. Additionally, note that $\mathbb{E}_1\Big[\Delta(X_{i},Y_{i},Z_{i})\Big]=0$. Therefore, \begin{align*} &\mathbb{E}_1\Big[ \Big( \frac{1}{m}\sum\limits_{i=m+1}^{2m} \Delta(X_{i},Y_{i},Z_{i})\Big)^2\Big] \\ =&\frac{1}{m} \mathbb{E}_1 \Big[ \Delta(X_{m+1},Y_{m+1},Z_{m+1})^2 \Big].
\end{align*} Since $m = \Omega(n)$, by Assumption \ref{assumption:gamma_star}, \begin{align*} &\mathbb{E}_1 \left[ \Delta(X_{m+1},Y_{m+1},Z_{m+1})^2 \right] \\ =&\mathbb{E}_1 \left[ \left( \left[\hat{\gamma}_n^*(X_{m+1})-\gamma_P(X_{m+1})\right]Z_{m+1}Y_{m+1}\right)^2 \right] \\ &- 2\mathbb{E}_1 \Big\{\big[\hat{\gamma}_n^*(X_{m+1})-\gamma_P(X_{m+1})\big]Z_{m+1}Y_{m+1} \mathbb{E}_1 \Big[ \big[\hat{\gamma}_n^*(X_{m+1})-\gamma_P(X_{m+1})\big]Z_{m+1}Y_{m+1} \Big]\Big\} \\ &+\mathbb{E}_1 \Big\{\mathbb{E}_1 \Big[ \big[\hat{\gamma}_n^*(X_{m+1})-\gamma_P(X_{m+1})\big]Z_{m+1}Y_{m+1} \Big]^2\Big\} \\ =&\mathbb{E}_1 \Big[ \Big( \big[\hat{\gamma}_n^*(X_{m+1})-\gamma_P(X_{m+1})\big]Z_{m+1}Y_{m+1}\Big)^2 \Big] - \mathbb{E}_1 \Big[ \big[\hat{\gamma}_n^*(X_{m+1})-\gamma_P(X_{m+1})\big]Z_{m+1}Y_{m+1} \Big]^2 \\ \leq& \mathbb{E}_1 \Big[ \Big( \big[\hat{\gamma}_n^*(X_{m+1})-\gamma_P(X_{m+1})\big]Z_{m+1}Y_{m+1}\Big)^2 \Big] \\ \leq& M \cdot \mathbb{E}_1 \left[ \left(\hat{\gamma}_n^*(X_{m+1})-\gamma_P(X_{m+1})\right)^2\right] \\ =& o_p(1), \end{align*} where $M$ is an upper bound on $Y^2$. Therefore, (\ref{eq:want_prove2}) follows. \end{proof} \section{Simulation for sample splitting} \label{sec:app_sim} We conduct simulations to demonstrate the validity of the sample splitting technique that we use to prove Theorem \ref{theorem:bal_weights} in Appendix \ref{sec:proof_thm1}. We show that the bootstrap distributions for the balancing weights estimates of $\mu_0$ with and without sample splitting are quite similar. The setup of the simulations is as follows.
We draw 10,000 iid samples where covariates $X_1$ and $X_2$ are drawn from standard normal distributions, the treatment indicator $Z_i$ is a Bernoulli random variable with success probability $0.5 + 0.07 X_{1i} + 0.07 X_{2i} + \epsilon_i$, where $\epsilon_i \sim \mathcal{N}(0,0.03^2)$, and $Y_i = 0.2 Z_i + 0.5 X_{1i} + 0.5 X_{2i} + \delta_i$, where $\delta_i \sim \mathcal{N}(0,0.2^2)$. We run 1,000 simulations and estimate $\mu_0$ with and without sample splitting using weights obtained by entropy balancing with exact balance from \citet{hainmueller2012entropy}. We observe in Figure \ref{fig:split_sims} that the bootstrap distributions of the estimates with and without sample splitting are comparable. \begin{figure} \caption{Bootstrap distributions of estimates of $\mu_0$ with the full data and with sample splitting} \label{fig:split_sims} \end{figure} \section{Average treatment effect on the treated} \label{sec:att} In many settings, researchers are interested in estimating the \textit{Population Average Treatment Effect on the Treated} (PATT): \begin{align} \label{eq:att} \tau_T = \mathbb{E}[Y(1)-Y(0)|Z=1] = \mu_{11} - \mu_{01}, \end{align} where $\mu_{11} = \mathbb{E}[Y(1)|Z=1]$ and $\mu_{01} = \mathbb{E}[Y(0)|Z=1]$. Since $\mu_{11}$ is identifiable from observed data, we primarily focus on estimating $\mu_{01}$. Our procedure for performing sensitivity analysis outlined in Section \ref{sec:sensitivity} largely still holds. The primary details that differ for the PATT are as follows. First, for a particular $h \in H(\Lambda)$, we can write the shifted estimand as \begin{align} \label{eq:shifted_estimand_patt} \mu_{01}^{(h)} = \mathbb{E}\left[(1-Z)\frac{\pi^{(h)}(X,Y(0))}{1-\pi^{(h)}(X,Y(0))} \right]^{-1} \mathbb{E}\left[(1-Z)\frac{\pi^{(h)}(X,Y(0))}{1-\pi^{(h)}(X,Y(0))} Y \right].
\end{align} The corresponding shifted estimator for $\mu_{01}^{(h)}$ is \begin{align} \label{eq:shifted_est_patt} \hat{\mu}_{01}^{(h)} = \left(\sum\limits_{Z_i=0} e^{-h(X_i, Y_i(0))} \hat{\gamma}(X_i) \right)^{-1} \sum\limits_{Z_i=0} e^{-h(X_i, Y_i(0))} \hat{\gamma}(X_i) Y_i. \end{align} Additionally, the oracle weights now exactly balance the control potential outcome $Y(0)$ between the treatment group and the weighted control group. The oracle weights $\mathring{\gamma}(X, Y(0))$ solve the optimization problem \begin{equation} \label{eq:oracle_opt_primal_patt} \begin{aligned} \underset{\gamma(X,Y(0)) \in \mathbb{R}^{n_0}}{\min} &\quad \sum\limits_{Z_i = 0} \gamma(X_i, Y_i(0)) \log\frac{\gamma(X_i, Y_i(0))}{\hat{\gamma}(X_i)} \\ \text{subject to} &\quad \frac{1}{\sum_{Z_i=0} \gamma(X_i, Y_i(0))}\sum\limits_{Z_i=0} \gamma(X_i, Y_i(0)) Y_i(0) = \frac{1}{n_1} \sum\limits_{Z_i=1} Y_i(0), \end{aligned} \end{equation} where $\hat{\gamma}(X) \in \mathbb{R}^{n_0}$ are the estimated weights from balancing the observed covariates. The balancing weights estimate of $\mu_{01}$ using the oracle weights is \begin{align} \label{eq:mu_oracle_est_patt} \mathring{\mu}_{01} = \sum\limits_{Z_i=0} \frac{\mathring{\gamma}(X_i,Y_i(0))}{\sum\limits_{Z_i=0} \mathring{\gamma}(X_i,Y_i(0))} Y_i. \end{align} For $\Lambda\geq 1$, the balancing weights sensitivity model for the PATT considers the set of oracle weights $\mathring{\gamma}(X, Y(0))$ that satisfy: \begin{align} \label{eq:bal_weights_sens_model_patt} \mathcal{E}_{\mathring{\gamma}}(\Lambda)=\{ \mathring{\gamma}(x_i, y_i)\;\;:\;\;\Lambda^{-1} \leq \frac{\hat{\gamma}(x_i)}{\mathring{\gamma}(x_i, y_i)} \leq \Lambda, \; \forall \; i = 1,\ldots,n\}.
\end{align} Finally, the amplification becomes \begin{align} \label{eq:error_bound_extrema_patt} \left(\underset{h\in \mathcal{H}(\Lambda)}{\inf}\hat{\mu}^{(h)}_{01} \right) - \hat{\mu}_{01} \leq \mathring{\mu}_{01} - \hat{\mu}_{01}= \delta_u \cdot \beta_u \leq \left(\underset{h\in \mathcal{H}(\Lambda)}{\sup}\hat{\mu}^{(h)}_{01} \right) - \hat{\mu}_{01}, \end{align} where \begin{equation} \begin{aligned} \underset{h\in \mathcal{H}(\Lambda)}{\inf/\sup} \; \hat{\mu}^{(h)}_{01} = \underset{r \in \mathbb{R}^{n_0}}{\min/\max} & \quad \frac{\sum\limits_{Z_i = 0} r_i^{-1} \hat{\gamma}(X_i) \cdot Y_i}{\sum\limits_{Z_i = 0} r_i^{-1} \hat{\gamma}(X_i)} \\ \text{subject to} & \quad \frac{1}{\Lambda} \leq r_i \leq \Lambda \end{aligned} \end{equation} and $r_i = \frac{\hat{\gamma}(X_i)}{\mathring{\gamma}(X_i,Y_i(0))}$ are the decision variables. \end{document}
At Mood Interiors, we believe that opting for bright and bold interiors is the perfect way to inject personality into your environment. That’s why we’ve created our own range of unique and brightly coloured wallpapers and fabrics, sure to bring fun and energy to any setting, whether that’s a home or business. Designed and printed here in the UK, our contemporary fabrics and wallpapers mix bold, geometric and global prints with pops of colour and class, making your setting feel bright all summer and warm all winter. Whether you go all out with our wallpapers and fabrics, or pair our soft furnishings with neutral décor, our designs are sure to take centre stage. At Mood Interiors, we pair our creative vision with top notch materials, which means our velvety drapery and upholstery textiles are luxuriously hardwearing. Whether you’re searching for curtain weight fabric for blinds, curtains or other soft furnishings, or upholstery weight fabric, there is guaranteed to be something in our range that you’ll love. Each of our fabrics is contract approved, Crib 5 compliant and can be manufactured from waterproof materials if required. If you’re looking for one of our designs to be made into wallpaper, all you need to do is contact us to tell us which one you’re interested in and our team will process your order. Our interior experts are dab hands at crafting all kinds of things for the home environment. From cushions and curtains, to upholstered furniture and everything in between, there’s not much that we can’t do. So, if you’re looking for bespoke interiors, or want to use our funky designed fabric or brightly coloured wallpaper for a special project, contact us today. If quirky is your thing, this one is for you. This fun pineapple design in white and pastel blue is sure to liven up a range of environments, contemporary or classic. If you’re looking to inject peace and harmony into your environment, but still want a colour injection, Free Spirit might just fit the bill. 
With a hint of retro and a dash of kitsch, this delightful doughnut design is sure to perk up your interiors. Whatever location this features in, the beautifully bold botanical print will transport you to a place of nature and harmony. This ‘60s inspired, retro design offers a vibrant riot of colour that will make any room feel positive, happy and fun. With a watercolour finish, this style offers a flowing continuum of colours sure to mesmerise. Who said you can’t go on holiday every day? With Pina Colada, you can. This offers a versatile and bold design, ideal for a range of styles. Designed to portray the calm before the storm, Puddle has an innate creativity of its own and can be easily paired with other patterns for a striking look. Release your inner animal and be wild with this ferociously funky design, guaranteed to make you roar. Mixing heritage with high fashion, Tart is the perfect combination of classic and contemporary.
'use strict'; angular.module('mean.servicedirectory') .config(['$stateProvider', function($stateProvider) { // Check if the user is connected var checkLoggedin = function($q, $timeout, $http, $location) { // Initialize a new promise var deferred = $q.defer(); // Make an AJAX call to check if the user is logged in $http.get('/loggedin').success(function(user) { // Authenticated if (user !== '0') $timeout(deferred.resolve); // Not Authenticated else { $timeout(deferred.reject); $location.url('/login'); } }); return deferred.promise; }; // states for my app $stateProvider .state('all service directory items', { url: '/servicedirectory', templateUrl: 'servicedirectory/views/list.html', resolve: { loggedin: checkLoggedin } }) .state('create service directory item', { url: '/servicedirectory/create', templateUrl: 'servicedirectory/views/create.html', resolve: { loggedin: checkLoggedin } }) .state('edit service directory item', { url: '/servicedirectory/:serviceDirectoryId/edit', templateUrl: 'servicedirectory/views/edit.html', resolve: { loggedin: checkLoggedin } }) .state('activity service directory item by id', { url: '/servicedirectory/:serviceDirectoryId', templateUrl: 'servicedirectory/views/view.html', resolve: { loggedin: checkLoggedin } }); } ]);
The Wayne County Area Chamber of Commerce and its board of directors have named Melissa Vance to the position of President and CEO. This announcement comes following the resignation of former president Phil D’Amico in January. Vance, a Wayne County native and graduate of Centerville High School, holds a business degree from Indiana Wesleyan University. Her background includes significant accomplishments in business development, marketing and public relations. Before joining the Chamber staff as Director of Education and Programs, she was the Events & Communications Coordinator at Reid Health Foundation, where she established a grant program realizing $1.4 million in revenue through federal, state and private foundations. During her first two months in that role, she led the BRAvo! Signature Event to the highest dollar amount ever raised through a Reid event, with a 78% profit margin. She also was the Director of Marketing & Business Development for Fayette Regional Health System, managing an operating budget of over $800,000, department employees and more than 100 Auxiliary volunteers. As the Director of Education & Programs for the Chamber, Vance worked with four of the Chamber’s six committees as the staff lead. Under her leadership, the Agribusiness Committee was able to re-launch the successful and much anticipated annual Farm Tour. The event has now hosted hundreds of attendees in Greens Fork, Dublin and Hagerstown, educating the community on emerging technology and the importance of agriculture to the Wayne County economy. The Business & Education Committee developed and implemented the High School Senior Job Fairs in 2017; the Buy Local Committee began Wayne County Weekends and the Buy Local Certified Program; and the Awards, Celebrations and Events Committee saw record numbers in attendance and dollars raised through events such as the Annual Dinner, Community Improvement Awards, Chamber Golf Classic and Taste of Wayne County/Business Expo.
Vance currently serves on the Board of Directors for Center City Development Corporation and was a previous board member for Richmond Symphony Orchestra, Communities in Schools, Franklin County Chamber of Commerce and others. “With Melissa’s leadership, the Chamber will continue to be a great community resource for Wayne County businesses,” Elzemeyer said.
کینہہ نس معنہِ چھہ کینژھا
Starting a new school year can be stressful for everyone involved: kids, parents, and teachers! Here's a roundup of everything you need to start the school year off right. There were some great back to school ideas shared at last week's link party - everything from quick crafts, easy and inexpensive teacher gifts, printables to help you get organized, and ways to make packed lunches fun. ideas for teacher appreciation gifts. an easy to make crayon vase. ways to make packed lunches fun. some helpful back to school tips. Therena at Little Bit of Paint shared a great teacher back to school gift. What teacher wouldn't want to get these handy classroom supplies! TONS of super cute FREE printables! some useful back to school tips. Lisa at 365 Organized shared what moms do on the first day of school. some great tips on how to save on back to school clothes. an easy to make back to school wreath. Hi Bonnie! Thank You for sharing my Back to School Wreath with your readers! It's awesome to be featured among all these great school projects! Have a Blessed Week! It's a really cute wreath Lizbeth! Thanks so much for sharing it at last week's party! You're welcome Meredith! Thanks for joining the party last week! Thanks for featuring the Making Lunches Fun is Easy post from my blog. It was a guest post by Nina of Mamabelly. She does a great job with bentos! Thanks for sharing great ideas for school lunches Diana! Thanks so much for featuring my Back to School Teacher gifts!!! You're welcome Therena! These gifts are great and really useful for teachers in the classroom! Thanks for sharing! Some unique ideas here! Thanks for sharing. I had to smile when I saw the hand sanitizer bottle - those poor teachers with all the runny noses and who knows what else. Visiting via LOBS link up. Teachers are exposed to a lot of germs, that's for sure! When I was teaching, I would get really sick at least once a year. You're welcome Marcy! Thanks for joining the party! Such fun ideas!
I can hardly believe that back to school is here already. The summer has gone by very quickly!
\begin{document} \title{Security Of Finite-Key-Length Measurement-Device-Independent Quantum Key Distribution Using Arbitrary Number Of Decoys} \author{H. F. Chau} \thanks{email: \texttt{[email protected]}} \affiliation{Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong} \date{\today} \begin{abstract} In quantum key distribution, measurement-device-independent and decoy-state techniques enable the two cooperative agents to establish a shared secret key using imperfect measurement devices and weak Poissonian sources, respectively. Investigations so far are not comprehensive, as they are restricted to at most four decoy states. Moreover, many of them involve purely numerical studies. Here I report a general security proof that works for any fixed number of decoy states and any fixed raw key length. Two key ideas are involved here. The first one is the repeated application of the inversion formula for the Vandermonde matrix to obtain various bounds on certain yields and error rates. The second one is the use of a recently proven generalization of the McDiarmid inequality. These techniques raise the best provably secure key rate of the measurement-device-independent version of the BB84 scheme by at least 1.25~times and increase the workable distance between the two cooperative agents from slightly less than 60~km to slightly greater than 130~km in case $10^{10}$ photon pulse pairs are sent without a quantum repeater. \end{abstract} \maketitle \section{Introduction} \label{Sec:Intro} Quantum key distribution (QKD\xspace) is the art for two trusted agents, commonly referred to as Alice and Bob, to share a provably secure secret key by preparing and measuring quantum states that are transmitted through a noisy channel controlled by an eavesdropper Eve who has unlimited computational power.
In realistic QKD\xspace setups, the decoy-state technique allows Alice and Bob to obtain their secret key using the much more practical weak phase-randomized Poissonian sources~\cite{Wang05,LMC05}. In addition, the measurement-device-independent (MDI\xspace) method enables them to use imperfect apparatus that may be controlled by Eve to perform measurement~\cite{LCQ12}. The decoy-state technique has been extensively studied. In fact, this technique can be applied to many different QKD\xspace schemes~\cite{Wang05,LMC05,VV14,BMFBB16,DSB18}. Research on the effective use of a general number of decoys has been conducted~\cite{Hayashi07,HN14,Chau18,CN20}. The effect of finite raw key length on the key rate has been investigated~\cite{HN14,LCWXZ14,BMFBB16,Chau18,CN20}. Nonetheless, security and efficiency analyses on the combined use of the decoy-state and MDI\xspace techniques are less comprehensive. So far, they are restricted to at most four decoy states~\cite{MFR12,SGLL13,SGLL13e,CXCLTL14,XXL14,YZW15,ZYW16,ZZRM17,MZZZZW18,WBZJL19}. Furthermore, it is unclear how to extend these methods analytically to an arbitrary but fixed number of decoys. Along a slightly different line, the case of finite raw key length for the combined use of the decoy-state and MDI\xspace techniques has been studied. So far, these studies applied the Azuma, Hoeffding and Serfling inequalities as well as the Chernoff bound in a straightforward manner~\cite{CXCLTL14,YZW15,ZYW16,ZZRM17,MZZZZW18,WBZJL19}. Here I report the security analysis and a key rate formula for the BB84-based~\cite{BB84} MDI\xspace-QKD\xspace using passive partial Bell state detection for finite raw key length with the condition that Alice and Bob each uses an arbitrary but fixed number of decoys. One of the key ideas in this work is the repeated use of the analytical formula for the elements of the inverse of a Vandermonde matrix.
A tight bound on various yields and error rates for a general number of decoys can then be obtained through this analytical formula. (Actually, Yuan \emph{et al.} also used repeated Vandermonde matrix inversion to obtain upper and lower bounds of the so-called two-single-photon yield in case one of the photon intensities used is $0$~\cite{YZLM16}. Nevertheless, the bounds reported here are more general and powerful than theirs.) The other key idea used here is the application of a powerful generalization of the McDiarmid inequality in mathematical statistics recently proven in Ref.~\cite{CN20}. This inequality is effective in tackling the finite-size statistical fluctuations of certain error rates involved in the key rate formula. I compute the secure key rate for the MDI\xspace-version of the BB84 scheme using the setup and channel studied by Zhou \emph{et al.} in Ref.~\cite{ZYW16}. The best provably secure key rates for this setup before this work were reported by Mao \emph{et al.} in Ref.~\cite{MZZZZW18}. Compared to their work, in case the total number of photon pulse pairs sent by Alice and Bob is $10^{10}$, the provably secure key rate using this new method is increased by at least 125\%. Besides, the maximum transmission distance is increased from slightly less than 60~km to slightly greater than 130~km. This demonstrates the effectiveness of this new approach for MDI\xspace-QKD\xspace. \section{The MDI\xspace-QKD\xspace Protocol} \label{Sec:Protocol} In this paper, the polarizations of all photon pulses are prepared either in the ${\mathtt X}\xspace$-basis with photon intensity $\mu_{{\mathtt X}\xspace,i}$ (for $i=1,2,\cdots, k_{\mathtt X}\xspace$) or in the ${\mathtt Z}\xspace$-basis with photon intensity $\mu_{{\mathtt Z}\xspace,i}$ (for $i=1,2,\cdots,k_{\mathtt Z}\xspace$).
For simplicity, I label these photon intensities in descending order by $\mu_{{\mathtt X}\xspace,1} > \mu_{{\mathtt X}\xspace,2} > \cdots > \mu_{{\mathtt X}\xspace,k_{\mathtt X}\xspace} \ge 0$ and similarly for the $\mu_{{\mathtt Z}\xspace,i}$'s. I denote the probability of choosing the preparation basis ${\mathtt B}\xspace \in \{ {\mathtt X}\xspace,{\mathtt Z}\xspace \}$ by $p_{\mathtt B}\xspace$ and the probability of choosing photon intensity $\mu_{{\mathtt B}\xspace,i}$ given the preparation basis ${\mathtt B}\xspace$ by $p_{i\mid{\mathtt B}\xspace}$. Here I study the following MDI\xspace-QKD\xspace protocol, which is a BB84-based scheme originally studied in Refs.~\cite{LCQ12,CXCLTL14}. \begin{enumerate} \item Alice and Bob each has a phase-randomized Poissonian distributed source. Each of them randomly and independently prepares a photon pulse and sends it to the untrusted third party Charlie. They jot down the intensity and polarization used for each pulse. \label{Scheme:prepare} \item Charlie performs a partial Bell state measurement like the ones in Refs.~\cite{LCQ12,CXCLTL14,MR12}. He publicly announces the measurement result including non-detection and inconclusive events. \label{Scheme:measure} \item Alice and Bob reveal the basis and intensity they used for each of their prepared photon pulses. If the preparation bases of a pair of photon pulses they have sent to Charlie for Bell basis measurement disagree, they discard them. If both pulses are prepared in the ${\mathtt X}\xspace$-basis, they reveal their preparation polarizations. They also randomly reveal the preparation polarizations of a few pulses that they have both prepared in the ${\mathtt Z}\xspace$-basis. In this way, they can estimate the various yields and error rates to be defined in Sec.~\ref{Sec:Y_and_Q}.
\label{Scheme:estimate} \item They use the preparation information of their remaining photon pulses that have been conclusively measured by Charlie to generate their raw secret keys and then perform error correction and privacy amplification on these keys to obtain their final secret keys according to the MDI\xspace-QKD\xspace procedure reported in Refs.~\cite{LCQ12,MR12}. (Here I assume that Alice and Bob use forward reconciliation to establish the key. The case of reverse reconciliation can be studied in a similar manner.) \label{Scheme:post_process} \end{enumerate} \section{Bounds On Various Yields And Error Rates In The MDI\xspace-Setting} \label{Sec:Y_and_Q} I use the symbol $Q_{{\mathtt B}\xspace,i,j}$ to denote the yield given that both Alice and Bob prepare their photons in the ${\mathtt B}\xspace$-basis and that Alice (Bob) uses photon intensity $\mu_{{\mathtt B}\xspace,i}$ ($\mu_{{\mathtt B}\xspace,j}$) for ${\mathtt B}\xspace = {\mathtt X}\xspace, {\mathtt Z}\xspace$ and $i,j = 1,2,\cdots,k_{{\mathtt B}\xspace}$. More precisely, it is the fraction of photon pulse pairs prepared as described above for which Charlie declares a conclusive detection. Furthermore, I define the error rate $E_{{\mathtt B}\xspace,i,j}$ of these photon pairs as the fraction of the above conclusively detected pairs whose polarizations prepared by Alice and Bob are the same. I also set $\bar{E}_{{\mathtt B}\xspace,i,j} = 1 - E_{{\mathtt B}\xspace,i,j}$.
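To make the estimation in Step~\ref{Scheme:estimate} concrete, $Q_{{\mathtt B}\xspace,i,j}$ and $E_{{\mathtt B}\xspace,i,j}$ reduce to simple counting over the publicly announced events. The sketch below uses a hypothetical event-record format of my own devising; it is an illustration only and not part of the protocol specification.

```python
from collections import defaultdict

def estimate_gain_and_error(records):
    """Estimate Q_{B,i,j} and E_{B,i,j} by counting. Each record is a tuple
    (i, j, conclusive, same_polarization) for a pulse pair in which Alice
    used intensity index i and Bob used intensity index j (same basis B).
    An 'error' is a conclusively detected pair whose prepared polarizations
    coincide, following the definition in the text."""
    sent = defaultdict(int)
    detected = defaultdict(int)
    errors = defaultdict(int)
    for i, j, conclusive, same_pol in records:
        sent[i, j] += 1
        if conclusive:
            detected[i, j] += 1
            if same_pol:
                errors[i, j] += 1
    Q = {k: detected[k] / sent[k] for k in sent}
    E = {k: errors[k] / detected[k] for k in detected if detected[k] > 0}
    return Q, E
```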
Similar to the case of a standard (that is, non-MDI\xspace) implementation of QKD\xspace, for phase-randomized Poissonian photon sources~\cite{MR12}, \begin{equation} Q_{{\mathtt B}\xspace,i,j} = \sum_{a,b=0}^{+\infty} \frac{\mu_{{\mathtt B}\xspace,i}^a \mu_{{\mathtt B}\xspace,j}^b Y_{{\mathtt B}\xspace,a,b} \exp(-\mu_{{\mathtt B}\xspace,i}) \exp(-\mu_{{\mathtt B}\xspace,j})}{a!\ b!} \label{E:Q_mu_def} \end{equation} and \begin{align} & Q_{{\mathtt B}\xspace,i,j} E_{{\mathtt B}\xspace,i,j} \nonumber \\ ={} & \sum_{a,b=0}^{+\infty} \frac{\mu_{{\mathtt B}\xspace,i}^a \mu_{{\mathtt B}\xspace,j}^b Y_{{\mathtt B}\xspace,a,b} e_{{\mathtt B}\xspace,a,b} \exp(-\mu_{{\mathtt B}\xspace,i}) \exp(-\mu_{{\mathtt B}\xspace,j})}{a!\ b!} . \label{E:E_mu_def} \end{align} Here, $Y_{{\mathtt B}\xspace,a,b}$ is the probability of conclusive detection by Charlie given that the photon pulse sent by Alice (Bob) contains $a$ ($b$)~photons and $e_{{\mathtt B}\xspace,a,b}$ is the corresponding bit error rate of the raw key. Furthermore, I denote the yield conditioned on Alice preparing a vacuum state and Bob preparing in the ${\mathtt B}\xspace$-basis by the symbol $Y_{{\mathtt B}\xspace,0,\star}$. Clearly, $Y_{{\mathtt B}\xspace,0,\star}$ obeys \begin{equation} Y_{{\mathtt B}\xspace,0,\star} = \sum_{j=1}^{k_{\mathtt B}\xspace} p_{j\mid{\mathtt B}\xspace} \tilde{Y}_{{\mathtt B}\xspace,0,j} , \label{E:Y_0*_relation} \end{equation} where $\tilde{Y}_{{\mathtt B}\xspace,0,j}$ is the yield conditioned on Alice sending the vacuum state and Bob sending a photon pulse with intensity $\mu_{{\mathtt B}\xspace,j}$ in the ${\mathtt B}\xspace$-basis. I need to deduce the possible values of the $Y_{{\mathtt B}\xspace,a,b}$'s and $Y_{{\mathtt B}\xspace,a,b} e_{{\mathtt B}\xspace,a,b}$'s from Eqs.~\eqref{E:Q_mu_def} and~\eqref{E:E_mu_def}.
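As an illustration of Eq.~\eqref{E:Q_mu_def}, the gain can be evaluated numerically by truncating the double sum over photon numbers. In the sketch below, the yield model \texttt{toy\_yield} is a hypothetical stand-in of my own (a dark-count floor plus a transmittance-limited detection probability); it is not the channel model analyzed in this paper.

```python
import math

def gain_from_yields(mu_a, mu_b, yield_fn, cutoff=40):
    """Truncated evaluation of Eq. (Q_mu_def): the gain Q for a pair of
    phase-randomized Poissonian sources with intensities mu_a and mu_b,
    given a model yield function Y(a, b) for a- and b-photon pulses."""
    q = 0.0
    for a in range(cutoff):
        for b in range(cutoff):
            q += (mu_a**a * mu_b**b * yield_fn(a, b)
                  * math.exp(-mu_a) * math.exp(-mu_b)
                  / (math.factorial(a) * math.factorial(b)))
    return q

def toy_yield(a, b, eta=0.1, y0=1e-6):
    # Hypothetical toy model, NOT from the paper: each arm detects with
    # per-photon efficiency eta, on top of a small dark-count floor y0.
    return min(1.0, y0 + (1 - (1 - eta)**a) * (1 - (1 - eta)**b))
```

Setting the yield identically to one makes the truncated sum collapse to (nearly) the product of two normalized Poisson distributions, which is a quick consistency check on the implementation.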
One way to do it is to compute various lower and upper bounds of the $Y_{{\mathtt B}\xspace,a,b}$'s and $Y_{{\mathtt B}\xspace,a,b} e_{{\mathtt B}\xspace,a,b}$'s by brute-force optimization of truncated versions of Eqs.~\eqref{E:Q_mu_def} and~\eqref{E:E_mu_def} like the method reported in Refs.~\cite{MFR12,CXCLTL14,XXL14}. However, this approach is rather inelegant and ineffective. Further note that Alice and Bob have no control over the values of the $Y_{{\mathtt B}\xspace,a,b}$'s and $e_{{\mathtt B}\xspace,a,b}$'s since Charlie and Eve are not trustworthy. All they know is that these variables are between $0$ and $1$. Fortunately, in the case of phase-randomized Poissonian-distributed light sources, Corollaries~\ref{Cor:Y0*} and~\ref{Cor:bounds_on_Yxx} in the Appendix can be used to bound $Y_{{\mathtt B}\xspace,0,\star}, Y_{{\mathtt B}\xspace,1,1}, Y_{{\mathtt B}\xspace,1,1} e_{{\mathtt B}\xspace,1,1}$ and $Y_{{\mathtt B}\xspace,1,1} \bar{e}_{{\mathtt B}\xspace,1,1}$ analytically, where $\bar{e}_{{\mathtt B}\xspace,1,1} \equiv 1 - e_{{\mathtt B}\xspace,1,1}$. More importantly, these bounds are effective in analyzing the key rate formula to be reported in Sec.~\ref{Sec:Rate}.
Following the trick used in Refs.~\cite{Chau18,CN20}, by using the statistics of either all the $k_{{\mathtt B}\xspace}$ different photon intensities or all but the largest one used by Alice and Bob depending on the parity of $k_{{\mathtt B}\xspace}$, Corollaries~\ref{Cor:Y0*} and~\ref{Cor:bounds_on_Yxx} imply the following tight bounds \begin{subequations} \label{E:various_Y_and_e_bounds} \begin{equation} Y_{{\mathtt B}\xspace,0,\star} \ge \sum_{i,j=1}^{k_{\mathtt B}\xspace} p_{j\mid{\mathtt B}\xspace} {\mathcal A}_{{\mathtt B}\xspace,0,i}^{\text{e}} Q_{{\mathtt B}\xspace,i,j} , \label{E:Y0*_bound} \end{equation} \begin{equation} Y_{{\mathtt B}\xspace,1,1} \ge Y_{{\mathtt B}\xspace,1,1}^\downarrow \equiv \sum_{i,j=1}^{k_{\mathtt B}\xspace} {\mathcal A}_{{\mathtt B}\xspace,1,i}^{\text{o}} {\mathcal A}_{{\mathtt B}\xspace,1,j}^{\text{o}} Q_{{\mathtt B}\xspace,i,j} - C_{{\mathtt B}\xspace,2}^2 , \label{E:Y11_bound} \end{equation} \begin{equation} Y_{{\mathtt B}\xspace,1,1} e_{{\mathtt B}\xspace,1,1} \le \left( Y_{{\mathtt B}\xspace,1,1} e_{{\mathtt B}\xspace,1,1} \right)^\uparrow \equiv \sum_{i,j=1}^{k_{\mathtt B}\xspace} {\mathcal A}_{{\mathtt B}\xspace,1,i}^{\text{e}} {\mathcal A}_{{\mathtt B}\xspace,1,j}^{\text{e}} Q_{{\mathtt B}\xspace,i,j} E_{{\mathtt B}\xspace,i,j} , \label{E:Ye11_bound} \end{equation} \begin{align} Y_{{\mathtt B}\xspace,1,1} e_{{\mathtt B}\xspace,1,1} &\ge{} \left( Y_{{\mathtt B}\xspace,1,1} e_{{\mathtt B}\xspace,1,1} \right)^\downarrow \nonumber \\ &\equiv \sum_{i,j=1}^{k_{\mathtt B}\xspace} {\mathcal A}_{{\mathtt B}\xspace,1,i}^{\text{o}} {\mathcal A}_{{\mathtt B}\xspace,1,j}^{\text{o}} Q_{{\mathtt B}\xspace,i,j} E_{{\mathtt B}\xspace,i,j} - C_{{\mathtt B}\xspace,2}^2 , \label{E:Ye11_special_bound} \end{align} and \begin{align} Y_{{\mathtt B}\xspace,1,1} \bar{e}_{{\mathtt B}\xspace,1,1} &\ge \left( Y_{{\mathtt B}\xspace,1,1} \bar{e}_{{\mathtt B}\xspace,1,1} \right)^\downarrow \nonumber \\ &\equiv \sum_{i,j=1}^{k_{\mathtt B}\xspace} {\mathcal 
A}_{{\mathtt B}\xspace,1,i}^{\text{o}} {\mathcal A}_{{\mathtt B}\xspace,1,j}^{\text{o}} Q_{{\mathtt B}\xspace,i,j} \bar{E}_{{\mathtt B}\xspace,i,j} - C_{{\mathtt B}\xspace,2}^2 \label{E:Y11bare1_bound} \end{align} \end{subequations} for ${\mathtt B}\xspace = {\mathtt X}\xspace, {\mathtt Z}\xspace$. (The reason for using ${\text{e}}$ and ${\text{o}}$ as superscripts is that it will be self-evident from the discussion below that for fixed ${\mathtt B}\xspace$ and $j$, there are an even number of non-zero terms in ${\mathcal A}^{\text{e}}_{{\mathtt B}\xspace,j,i}$ and an odd number of non-zero terms in ${\mathcal A}^{\text{o}}_{{\mathtt B}\xspace,j,i}$.) For the above inequalities, in case $k_{\mathtt B}\xspace$ is even, \begin{subequations} \label{E:various_A_ai} \begin{equation} {\mathcal A}_{{\mathtt B}\xspace,j,i}^{\text{e}} = A_j(\mu_{{\mathtt B}\xspace,i},\{ \mu_{{\mathtt B}\xspace,1}, \mu_{{\mathtt B}\xspace,2}, \cdots, \mu_{{\mathtt B}\xspace,i-1}, \mu_{{\mathtt B}\xspace,i+1},\cdots, \mu_{{\mathtt B}\xspace,k_{\mathtt B}\xspace} \}) \label{E:A_xi_even_def_for_even} \end{equation} for $i=1,2,\cdots,k_{\mathtt B}\xspace$ and $j=0,1$. Furthermore, \begin{equation} {\mathcal A}_{{\mathtt B}\xspace,1,1}^{\text{o}} = 0 \label{E:A_11_odd_def_for_even} \end{equation} and \begin{equation} {\mathcal A}_{{\mathtt B}\xspace,1,i}^{\text{o}} = A_1(\mu_{{\mathtt B}\xspace,i},\{ \mu_{{\mathtt B}\xspace,2}, \mu_{{\mathtt B}\xspace,3},\cdots, \mu_{{\mathtt B}\xspace,i-1}, \mu_{{\mathtt B}\xspace,i+1},\cdots, \mu_{{\mathtt B}\xspace,k_{\mathtt B}\xspace} \}) \label{E:A_1i_odd_def_for_even} \end{equation} for $i=2,3,\cdots,k_{\mathtt B}\xspace$.
In addition, \begin{widetext} \begin{equation} C_{{\mathtt B}\xspace,2} = \left( \sum_{\ell=2}^{k_{\mathtt B}\xspace} \mu_{{\mathtt B}\xspace,2} \mu_{{\mathtt B}\xspace,3} \cdots \mu_{{\mathtt B}\xspace,\ell-1} \mu_{{\mathtt B}\xspace,\ell+1} \cdots \mu_{{\mathtt B}\xspace,k_{\mathtt B}\xspace} \right) \sum_{i=2}^{k_{\mathtt B}\xspace} \left\{ \frac{1}{\mu_{{\mathtt B}\xspace,i} \prod_{t\ne 1,i} (\mu_{{\mathtt B}\xspace,i} - \mu_{{\mathtt B}\xspace,t})} \left[ \exp(\mu_{{\mathtt B}\xspace,i}) - \sum_{j=0}^{k_{\mathtt B}\xspace-2} \frac{\mu_{{\mathtt B}\xspace,i}^j}{j!} \right] \right\} . \label{E:C2_def_even} \end{equation} \end{widetext} Here I use the convention that the term involving $1/\mu_{{\mathtt B}\xspace,i}$ in the above summands with dummy index $i$ is equal to $0$ if $\mu_{{\mathtt B}\xspace,i} = 0$. In case $k_{\mathtt B}\xspace$ is odd, \begin{equation} {\mathcal A}_{{\mathtt B}\xspace,1,i}^{\text{o}} = A_1(\mu_{{\mathtt B}\xspace,i},\{ \mu_{{\mathtt B}\xspace,1}, \mu_{{\mathtt B}\xspace,2}, \cdots, \mu_{{\mathtt B}\xspace,i-1}, \mu_{{\mathtt B}\xspace,i+1},\cdots, \mu_{{\mathtt B}\xspace,k_{\mathtt B}\xspace} \}) \label{E:A_1i_odd_def_for_odd} \end{equation} for $i=1,2,\cdots,k_{\mathtt B}\xspace$. Furthermore, \begin{equation} {\mathcal A}_{{\mathtt B}\xspace,j,1}^{\text{e}} = 0 \label{E:A_x1_even_def_for_odd} \end{equation} and \begin{equation} {\mathcal A}_{{\mathtt B}\xspace,j,i}^{\text{e}} = A_j(\mu_{{\mathtt B}\xspace,i},\{ \mu_{{\mathtt B}\xspace,2}, \mu_{{\mathtt B}\xspace,3},\cdots, \mu_{{\mathtt B}\xspace,i-1}, \mu_{{\mathtt B}\xspace,i+1},\cdots, \mu_{{\mathtt B}\xspace,k_{\mathtt B}\xspace} \}) \label{E:A_xi_even_def_for_odd} \end{equation} for $j=1,2$ and $i=2,3,\cdots,k_{\mathtt B}\xspace$.
In addition, \begin{widetext} \begin{equation} C_{{\mathtt B}\xspace,2} = \left( \sum_{\ell=1}^{k_{\mathtt B}\xspace} \mu_{{\mathtt B}\xspace,1} \mu_{{\mathtt B}\xspace,2} \cdots \mu_{{\mathtt B}\xspace,\ell-1} \mu_{{\mathtt B}\xspace,\ell+1} \cdots \mu_{{\mathtt B}\xspace,k_{\mathtt B}\xspace} \right) \sum_{i=1}^{k_{\mathtt B}\xspace} \left\{ \frac{1}{\mu_{{\mathtt B}\xspace,i} \prod_{t\ne i} (\mu_{{\mathtt B}\xspace,i} - \mu_{{\mathtt B}\xspace,t})} \left[ \exp(\mu_{{\mathtt B}\xspace,i}) - \sum_{j=0}^{k_{\mathtt B}\xspace-1} \frac{\mu_{{\mathtt B}\xspace,i}^j}{j!} \right] \right\} . \label{E:C2_def_odd} \end{equation} \end{widetext} \end{subequations} Note that in Eq.~\eqref{E:various_A_ai}, \begin{subequations} \label{E:generators_def} \begin{equation} A_0(\mu,S) = \frac{\displaystyle -\exp(\mu) \prod_{s\in S} s}{\displaystyle \prod_{s\in S} (\mu - s)} \label{E:A_00_template} \end{equation} and \begin{equation} A_1(\mu,S) = \frac{\displaystyle -\exp(\mu) \sum_{s\in S} \left( \prod_{\substack{s'\in S \\ s' \ne s}} s' \right)}{\displaystyle \prod_{s\in S} (\mu - s)} . \label{E:A_11_template} \end{equation} \end{subequations} Note that different upper and lower bounds for $Y_{{\mathtt B}\xspace,1,1}$ have been obtained using a similar Vandermonde matrix inversion technique in Ref.~\cite{YZLM16}. The differences between those bounds and the actual value of $Y_{{\mathtt B}\xspace,1,1}$ depend on the yields $Y_{{\mathtt B}\xspace,i,j}$ with $i,j \ge 1$. In contrast, the difference between the bound in Inequality~\eqref{E:Y11_bound} and the actual value of $Y_{{\mathtt B}\xspace,1,1}$ depends on the $Y_{{\mathtt B}\xspace,i,j}$'s with $i,j\ge k_{\mathtt B}\xspace$. Thus, Inequality~\eqref{E:Y11_bound} and similarly also Inequality~\eqref{E:Ye11_bound} give more accurate estimates of $Y_{{\mathtt B}\xspace,1,1}$ and $Y_{{\mathtt B}\xspace,1,1} e_{{\mathtt B}\xspace,1,1}$, respectively.
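The Vandermonde-inversion coefficients $A_0$ and $A_1$ of Eq.~\eqref{E:generators_def} are simple to evaluate; a minimal numerical sketch (the function names are mine) follows.

```python
import math

def A0(mu, S):
    """Eq. (A_00_template): A_0(mu, S) = -exp(mu) prod(S) / prod_{s in S} (mu - s)."""
    return -math.exp(mu) * math.prod(S) / math.prod(mu - s for s in S)

def A1(mu, S):
    """Eq. (A_11_template): the numerator sums, for each s in S, the product
    of the remaining elements of S (an empty product counts as 1)."""
    S = list(S)
    num = sum(math.prod(S[:k] + S[k + 1:]) for k in range(len(S)))
    return -math.exp(mu) * num / math.prod(mu - s for s in S)
```

As a hand-checkable sanity test with two intensities: for a gain function $Q(\mu) = e^{-\mu}(y_0 + y_1\mu)$ containing only vacuum and single-photon contributions, the combination $\sum_i A_0(\mu_i, \{\mu_{3-i}\})\, Q(\mu_i)$ recovers $y_0$ exactly, and the corresponding $A_1$ combination recovers $y_1$ up to an overall sign.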
Furthermore, the bounds in Ref.~\cite{YZLM16} only work for the case of $\mu_{{\mathtt B}\xspace,k_{{\mathtt B}\xspace}} = 0$. It is also not clear how to extend their method to bound yields other than the two-single-photon yield, such as those needed in computing the key rate for twin-field~\cite{LYDS18} and phase-matching~\cite{MZZ18} MDI\xspace-QKD\xspace{s}. \section{The Key Rate Formula} \label{Sec:Rate} The secure key rate $R$ is defined as the number of bits of secret key shared by Alice and Bob at the end of the protocol divided by the number of photon pulse pairs they have sent to Charlie. In fact, the derivation of the key rate formula in Refs.~\cite{LCWXZ14,TL17,Chau18,CN20} for the case of standard QKD\xspace can be easily modified to the case of MDI\xspace-QKD\xspace by making the following correspondences. (See also the key rate formula used in Ref.~\cite{MR12} for MDI\xspace-QKD\xspace.) The vacuum event in standard QKD\xspace is mapped to the event that both Alice and Bob send a vacuum photon pulse to Charlie. The single photon event is mapped to the event that both Alice and Bob send a single photon to Charlie. The multiple photon event is mapped to the event that Alice and Bob each send a photon pulse that is neither a vacuum nor a single-photon pulse to Charlie.
In the case of forward reconciliation, the result is \begin{widetext} \begin{equation} R \ge p_{\mathtt Z}\xspace^2 \left\{ \langle \exp(-\mu) \rangle_{{\mathtt Z}\xspace} Y_{{\mathtt Z}\xspace,0,\star} + \langle \mu \exp(-\mu) \rangle_{{\mathtt Z}\xspace}^2 Y_{{\mathtt Z}\xspace,1,1} [1-H_2(e_p)] - \Lambda_\text{EC} - \frac{\langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j}}{\ell_\text{raw}} \left[ 6\log_2 \left( \frac{\chi}{\epsilon_\text{sec}} \right) + \log_2 \left( \frac{2}{\epsilon_\text{cor}} \right) \right] \right\} , \label{E:key_rate_basic} \end{equation} \end{widetext} where $\langle f(\mu)\rangle_{{\mathtt Z}\xspace} \equiv \sum_{i=1}^{k_{\mathtt Z}\xspace} p_{i\mid{\mathtt Z}\xspace} f(\mu_{{\mathtt Z}\xspace,i})$, $\langle f({\mathtt Z}\xspace,i,j) \rangle_{i,j} \equiv \sum_{i,j=1}^{k_{\mathtt Z}\xspace} p_{i\mid{\mathtt Z}\xspace} p_{j\mid{\mathtt Z}\xspace} f({\mathtt Z}\xspace,i,j)$, $H_2(x) \equiv -x \log_2 x - (1-x) \log_2 (1-x)$ is the binary entropy function, $e_p$ is the phase error rate of the single photon events in the raw key, and $\Lambda_\text{EC}$ is the actual number of bits of information that leaks to Eve as Alice and Bob perform error correction on their raw bits. It is given by \begin{equation} \Lambda_\text{EC} = f_\text{EC} \langle Q_{{\mathtt Z}\xspace,i,j} H_2(E_{{\mathtt Z}\xspace,i,j}) \rangle_{i,j} \label{E:information_leakage} \end{equation} where $f_\text{EC} \ge 1$ measures the inefficiency of the error-correcting code used. In addition, $\ell_\text{raw}$ is the raw sifted key length measured in bits, $\epsilon_\text{cor}$ is the upper bound of the probability that the final keys shared between Alice and Bob are different, and $\epsilon_\text{sec} = (1- p_\text{abort}) \| \rho_\text{AE} - U_\text{A} \otimes \rho_\text{E} \|_1 / 2$. 
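For concreteness, two scheme-independent ingredients of Inequality~\eqref{E:key_rate_basic}, the binary entropy $H_2$ and the error-correction leakage $\Lambda_\text{EC}$ of Eq.~\eqref{E:information_leakage}, can be sketched as follows; the function names and the explicit probability-weighted double sum (my rendering of the $\langle \cdot \rangle_{i,j}$ notation) are my own.

```python
import math

def H2(x):
    """Binary entropy H_2(x) = -x log2 x - (1-x) log2 (1-x), with H_2(0) = H_2(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def error_correction_leakage(f_ec, p_i_given_Z, Q, E):
    """Eq. (information_leakage): Lambda_EC = f_EC <Q_{Z,i,j} H_2(E_{Z,i,j})>_{i,j},
    where the average is weighted by p_{i|Z} p_{j|Z}."""
    k = len(p_i_given_Z)
    return f_ec * sum(
        p_i_given_Z[i] * p_i_given_Z[j] * Q[i][j] * H2(E[i][j])
        for i in range(k) for j in range(k))
```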
Here $p_\text{abort}$ is the chance that the scheme aborts without generating a key, $\rho_\text{AE}$ is the classical-quantum state describing the joint state of Alice and Eve, $U_\text{A}$ is the uniform mixture of all the possible raw keys created by Alice, $\rho_\text{E}$ is the reduced density matrix of Eve, and $\| \cdot \|_1$ is the trace norm~\cite{Renner05,KGR05,RGK05}. In other words, Eve has at most $\epsilon_\text{sec}$~bits of information on the final secret key shared by Alice and Bob. (In the literature, this is often referred to as an $\epsilon_\text{cor}$-correct and $\epsilon_\text{sec}$-secure QKD\xspace scheme~\cite{TL17}.) Last but not least, $\chi$ is a QKD\xspace-scheme-specific factor which depends on the detailed security analysis used. In general, $\chi$ may also depend on other factors used in the QKD\xspace scheme such as the number of photon intensities $k_{{\mathtt X}\xspace}$ and $k_{{\mathtt Z}\xspace}$~\cite{LCWXZ14,Chau18,CN20}. In Inequality~\eqref{E:key_rate_basic}, the phase error rate of the raw key $e_p$ obeys~\cite{Chau18,FMC10} \begin{widetext} \begin{equation} e_p \le e_{{\mathtt X}\xspace,1,1} + \bar{\gamma} \left( \frac{\epsilon_\text{sec}}{\chi}, e_{{\mathtt X}\xspace,1,1}, \frac{s_{\mathtt X}\xspace Y_{{\mathtt X}\xspace,1,1} \langle \mu \exp(-\mu) \rangle_{{\mathtt X}\xspace}^2}{\langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j}}, \frac{s_{\mathtt Z}\xspace Y_{{\mathtt Z}\xspace,1,1} \langle \mu \exp(-\mu) \rangle_{{\mathtt Z}\xspace}^2}{\langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j}} \right) \label{E:e_p_bound} \end{equation} \end{widetext} with probability at least $1-\epsilon_\text{sec}/\chi$, where $\langle f(\mu) \rangle_{\mathtt X}\xspace \equiv \sum_{i=1}^{k_{\mathtt X}\xspace} p_{i\mid{\mathtt X}\xspace} f(\mu_{{\mathtt X}\xspace,i})$, \begin{equation} \bar{\gamma}(a,b,c,d) \equiv \sqrt{\frac{(c+d)(1-b)b}{c d} \ \ln \left[ \frac{c+d}{2\pi c d (1-b)b a^2} \right]} , \label{E:gamma_def} \end{equation} and
$s_{\mathtt B}\xspace$ is the number of bits that are prepared and measured in ${\mathtt B}\xspace$ basis. Clearly, $s_{\mathtt Z}\xspace = \ell_\text{raw}$ and $s_{\mathtt X}\xspace \approx p_{\mathtt X}\xspace^2 s_{\mathtt Z}\xspace \langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j} / (p_{\mathtt Z}\xspace^2 \langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j})$. I also remark that $\bar{\gamma}$ becomes complex if $a,c,d$ are too large. This is because in this case no $e_p \ge e_{{\mathtt X}\xspace,1,1}$ exists with failure probability $a$. In this work, all parameters are carefully picked so that $\bar{\gamma}$ is real. There are two ways to proceed. The most general way is to directly find a lower bound for $Y_{{\mathtt Z}\xspace,1,1}$. Specifically, by substituting Inequalities~\eqref{E:Y0*_bound}, \eqref{E:Y11_bound} and~\eqref{E:e_p_bound} into Inequality~\eqref{E:key_rate_basic}, I obtain the following lower bound of the key rate \begin{align} R &\ge \sum_{i,j=1}^{k_{\mathtt Z}\xspace} {\mathcal B}_{{\mathtt Z}\xspace,i,j} Q_{{\mathtt Z}\xspace,i,j} - p_{\mathtt Z}\xspace^2 \left\{ \vphantom{\frac{\chi}{\epsilon_\text{sec}}} \langle \mu \exp(-\mu) \rangle_{\mathtt Z}\xspace^2 C_{{\mathtt Z}\xspace,2}^2 [1 - H_2(e_p)] \right. \nonumber \\ &\quad \left. + \Lambda_\text{EC} + \frac{\langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j}}{\ell_\text{raw}} \left[ 6\log_2 \left( \frac{\chi}{\epsilon_\text{sec}} \right) + \log_2 \left( \frac{2}{\epsilon_\text{cor}} \right) \right] \right\} , \label{E:key_rate_asym} \end{align} where \begin{align} {\mathcal B}_{{\mathtt Z}\xspace,i,j} &= p_{\mathtt Z}\xspace^2 \left\{ \langle \exp(-\mu) \rangle_{{\mathtt Z}\xspace} {\mathcal A}_{{\mathtt Z}\xspace,0,i}^{\text{e}} p_{j\mid{\mathtt Z}\xspace} \right. \nonumber \\ &\quad \left. + \langle \mu \exp(-\mu) \rangle_{{\mathtt Z}\xspace}^2 {\mathcal A}_{{\mathtt Z}\xspace,1,i}^{\text{o}} {\mathcal A}_{{\mathtt Z}\xspace,1,j}^{\text{o}} [1-H_2(e_p)] \right\} . 
\label{E:b_n_def} \end{align} Here I would like to point out that unlike the corresponding key rate formulae for standard QKD\xspace in Refs.~\cite{Hayashi07,LCWXZ14,Chau18,CN20}, a distinctive feature of the key rate formula for MDI\xspace-QKD\xspace in Eq.~\eqref{E:key_rate_asym} is the presence of the $C_{{\mathtt Z}\xspace,2}^2$ term. From Eq.~\eqref{E:various_A_ai}, provided that the $\mu_{{\mathtt Z}\xspace,i} - \mu_{{\mathtt Z}\xspace,i+1}$'s are all greater than a fixed positive number, the value of $C_{{\mathtt Z}\xspace,2}^2$ decreases with $k_{\mathtt Z}\xspace$. This is the reason why the MDI\xspace version of a QKD\xspace scheme may require more decoys to attain a key rate comparable to that of the corresponding standard QKD\xspace scheme. There is an alternative way, discovered by Zhou \emph{et al.}~\cite{ZYW16}, to obtain the key rate formula; it works for BB84~\cite{BB84} and the six-state scheme~\cite{B98}. Suppose the photon pulses prepared by Alice and Bob in Step~\ref{Scheme:prepare} of the MDI\xspace-QKD\xspace protocol in Sec.~\ref{Sec:Protocol} both contain a single photon. Suppose further that they are prepared in the same basis. Then, from Charlie and Eve's point of view, these two-single-photon states are the same irrespective of their preparation basis. Consequently, $Y_{{\mathtt X}\xspace,1,1} = Y_{{\mathtt Z}\xspace,1,1}$ (even though $e_{{\mathtt X}\xspace,1,1}$ need not equal $e_{{\mathtt Z}\xspace,1,1}$). That is to say, the secure key rate in Inequality~\eqref{E:key_rate_basic} also holds if $Y_{{\mathtt Z}\xspace,1,1}$ there is replaced by $Y_{{\mathtt X}\xspace,1,1}$. (Here I stress that the key generation basis is still ${\mathtt Z}\xspace$. But as $Y_{{\mathtt X}\xspace,1,1} = Y_{{\mathtt Z}\xspace,1,1}$, I could use the bound on $Y_{{\mathtt X}\xspace,1,1}$ to obtain an alternative key rate formula for the same MDI\xspace-QKD\xspace scheme.)
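For reference, the correction term $\bar{\gamma}$ of Eq.~\eqref{E:gamma_def} is straightforward to evaluate numerically. The sketch below (my own helper, not part of the security proof) also flags the parameter regime in which $\bar{\gamma}$ would become complex, the situation remarked on earlier.

```python
import math

def gamma_bar(a, b, c, d):
    """Eq. (gamma_def). Raises ValueError when the expression under the
    square root is negative, i.e. when gamma_bar would become complex."""
    arg = (c + d) / (2 * math.pi * c * d * (1 - b) * b * a * a)
    val = (c + d) * (1 - b) * b / (c * d) * math.log(arg)
    if val < 0:
        raise ValueError("gamma_bar is complex for these parameters")
    return math.sqrt(val)
```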
Following the same procedure above, I get \begin{align} R &\ge \sum_{i,j=1}^{k_{\mathtt Z}\xspace} {\mathcal B}'_{{\mathtt Z}\xspace,i,j} Q_{{\mathtt Z}\xspace,i,j} + \sum_{i,j=1}^{k_{\mathtt X}\xspace} {\mathcal B}_{{\mathtt X}\xspace,i,j} Q_{{\mathtt X}\xspace,i,j} \nonumber \\ &\quad - p_{\mathtt Z}\xspace^2 \left\{ \vphantom{\frac{\chi}{\epsilon_\text{sec}}} \langle \mu \exp(-\mu) \rangle_{\mathtt Z}\xspace^2 C_{{\mathtt X}\xspace,2}^2 [1 - H_2(e_p)] + \Lambda_\text{EC} \right. \nonumber \\ &\quad \left. + \frac{\langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j}}{\ell_\text{raw}} \left[ 6\log_2 \left( \frac{\chi}{\epsilon_\text{sec}} \right) + \log_2 \left( \frac{2}{\epsilon_\text{cor}} \right) \right] \right\} , \label{E:key_rate_asym_alt} \end{align} where \begin{subequations} \begin{equation} {\mathcal B}'_{{\mathtt Z}\xspace,i,j} = p_{\mathtt Z}\xspace^2 \langle \exp(-\mu) \rangle_{{\mathtt Z}\xspace} {\mathcal A}_{{\mathtt Z}\xspace,0,i}^{\text{e}} p_{j\mid{\mathtt Z}\xspace} \label{E:bprime_n_def} \end{equation} and \begin{equation} {\mathcal B}_{{\mathtt X}\xspace,i,j} = p_{\mathtt Z}\xspace^2 \langle \mu \exp(-\mu) \rangle_{{\mathtt Z}\xspace}^2 {\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{o}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{o}} [1-H_2(e_p)] . \label{E:bX_n_def} \end{equation} \end{subequations} \section{Treatments Of Phase Error And Statistical Fluctuation Due To Finite Raw Key Length On The Secure Key Rate} \label{Sec:Finite_Size} In order to compute the lower bound on the key rate $R$ in Inequalities~\eqref{E:key_rate_asym} and~\eqref{E:key_rate_asym_alt}, I need to know the value of $e_{{\mathtt X}\xspace,1,1}$ through the Inequality~\eqref{E:e_p_bound}. More importantly, I need to take into consideration the effects of finite raw key length on the key rate $R$ due to the statistical fluctuations in $e_{{\mathtt X}\xspace,1,1}$ and $Q_{{\mathtt Z}\xspace,i,j}$'s. 
Here I do so by means of a McDiarmid-type inequality in statistics, first proven in Refs.~\cite{McDiarmid,McDiarmid1} and recently extended in Ref.~\cite{CN20}. Fluctuations of the first term in the R.H.S. of Inequality~\eqref{E:key_rate_asym} due to finite raw key length can be handled by the Hoeffding inequality for hypergeometrically distributed random variables~\cite{Hoeffding,LCWXZ14,Chau18,CN20}, which is a special case of the McDiarmid inequality. Using the technique reported in Refs.~\cite{Chau18,CN20}, the first term in the R.H.S. of Inequality~\eqref{E:key_rate_asym} can be regarded as a sum of $s_{\mathtt Z}\xspace$ hypergeometrically distributed random variables each taking on values from the set $\{ \langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j} {\mathcal B}_{{\mathtt Z}\xspace,i,j} / (p_{i\mid{\mathtt Z}\xspace} p_{j\mid{\mathtt Z}\xspace}) \}_{i,j=1}^{k_{\mathtt Z}\xspace}$. Using the Hoeffding inequality for hypergeometrically distributed random variables~\cite{Hoeffding}, I conclude that the measured value of $\sum_{i,j} {\mathcal B}_{{\mathtt Z}\xspace,i,j} Q_{{\mathtt Z}\xspace,i,j}$ minus its actual value is greater than $\langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j} \left[ \frac{\ln (\chi/\epsilon_\text{sec})}{2 s_{\mathtt Z}\xspace} \right]^{1/2} \Width \left( \left\{ \frac{{\mathcal B}_{{\mathtt Z}\xspace,i,j}}{p_{i\mid{\mathtt Z}\xspace} p_{j\mid{\mathtt Z}\xspace}} \right\}_{i,j=1}^{k_{\mathtt Z}\xspace} \right)$ with probability at most $\epsilon_\text{sec} / \chi$, where $\Width$ of a finite set of real numbers $S$ is defined as $\max S - \min S$. The treatment of $e_{{\mathtt X}\xspace,1,1}$ in the finite-sampling-size situation is more involved. Here I adapt the recent results in Ref.~\cite{CN20} to give four upper bounds on $e_{{\mathtt X}\xspace,1,1}$. Naturally, I pick the best of these four upper bounds in the key rate analysis.
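The Hoeffding fluctuation term just derived is easy to code up. The following sketch (with function names of my own) computes $\Width$ and the deviation bound $\langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j} [\ln(\chi/\epsilon_\text{sec})/(2 s_{\mathtt Z}\xspace)]^{1/2}\, \Width(\{ {\mathcal B}_{{\mathtt Z}\xspace,i,j}/(p_{i\mid{\mathtt Z}\xspace} p_{j\mid{\mathtt Z}\xspace}) \})$.

```python
import math

def width(values):
    """Width(S) = max S - min S, as defined in the text."""
    return max(values) - min(values)

def hoeffding_deviation(mean_gain, s_samples, chi_over_eps, coeffs):
    """Finite-size deviation term: with probability at most eps_sec/chi the
    measured sum exceeds its actual value by more than this amount
    (Hoeffding's inequality for hypergeometrically distributed variables)."""
    return mean_gain * math.sqrt(math.log(chi_over_eps) / (2 * s_samples)) * width(coeffs)
```

The $1/\sqrt{s_{\mathtt Z}\xspace}$ scaling makes explicit why a longer raw key shrinks this correction.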
The first step is to use the identities \begin{subequations} \label{E:e_Z11_identities} \begin{align} e_{{\mathtt X}\xspace,1,1} &= \frac{Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1}}{Y_{{\mathtt X}\xspace,1,1}} \label{E:e_Z11_identity1} \\ &= \frac{Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1}}{Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} + Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1}} . \label{E:e_Z11_identity2} \end{align} \end{subequations} To get the first two upper bounds of $e_{{\mathtt X}\xspace,1,1}$, I follow Ref.~\cite{CN20} by using Inequalities~\eqref{E:Y11_bound}, \eqref{E:Ye11_bound} and~\eqref{E:Y11bare1_bound} together with the Hoeffding inequality for hypergeometrically distributed random variables, applied to study the statistical fluctuations of $\sum_{i,j=1}^{k_{\mathtt X}\xspace} {\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{e}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{e}} Q_{{\mathtt X}\xspace,i,j} E_{{\mathtt X}\xspace,i,j}$, $\sum_{i,j=1}^{k_{\mathtt X}\xspace} {\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{o}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{o}} Q_{{\mathtt X}\xspace,i,j}$ and $\sum_{i,j=1}^{k_{\mathtt X}\xspace} {\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{o}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{o}} Q_{{\mathtt X}\xspace,i,j} \bar{E}_{{\mathtt X}\xspace,i,j}$. The result is \begin{equation} e_{{\mathtt X}\xspace,1,1} \le \frac{\left( Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right)^\uparrow + \Delta Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1}}{Y_{{\mathtt X}\xspace,1,1}^\downarrow - \Delta Y_{{\mathtt X}\xspace,1,1}} \label{E:e_Z11_bound1} \end{equation} and \begin{align} e_{{\mathtt X}\xspace,1,1} &\le \left[ \left( Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right)^\uparrow + \Delta Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right] \left[ \left( Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right)^\uparrow \right.
\nonumber \\ &\quad \left. + \left( Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} \right)^\downarrow + \Delta Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} - \Delta Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} \right]^{-1} \label{E:e_Z11_bound2} \end{align} each with probability at least $1-2\epsilon_\text{sec}/\chi$, where \begin{subequations} \label{E:finite-size_Ye_s} \begin{align} \Delta Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} &= \left[ \frac{\langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j} \langle Q_{{\mathtt X}\xspace,i,j} E_{{\mathtt X}\xspace,i,j} \rangle_{i,j} \ln (\chi/\epsilon_\text{sec})}{2 s_{\mathtt X}\xspace} \right]^{1/2} \times \nonumber \\ &\qquad \Width \left( \left\{ \frac{{\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{e}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{e}}}{p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace}} \right\}_{i,j=1}^{k_{\mathtt X}\xspace} \right) , \label{E:finite-size_Ye} \end{align} \begin{align} \Delta Y_{{\mathtt X}\xspace,1,1} &= \langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j} \left[ \frac{\ln (\chi/\epsilon_\text{sec})}{2 s_{\mathtt X}\xspace} \right]^{1/2} \times \nonumber \\ &\qquad \Width \left( \left\{ \frac{{\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{o}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{o}}}{p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace}} \right\}_{i,j=1}^{k_{\mathtt X}\xspace} \right) \label{E:finite-size_Y} \end{align} and \begin{align} \Delta Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} &= \left[ \frac{\langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j} \langle Q_{{\mathtt X}\xspace,i,j} \bar{E}_{{\mathtt X}\xspace,i,j} \rangle_{i,j} \ln (\chi/\epsilon_\text{sec})}{2 s_{\mathtt X}\xspace} \right]^{1/2} \times \nonumber \\ &\qquad \Width \left( \left\{ \frac{{\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{o}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{o}}}{p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace}} 
\right\}_{i,j=1}^{k_{\mathtt X}\xspace} \right) . \label{E:finite-size_Yebar} \end{align} \end{subequations} Note that in the above equations, $\langle f({\mathtt X}\xspace,i,j) \rangle_{i,j} \equiv \sum_{i,j=1}^{k_{\mathtt X}\xspace} p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace} f({\mathtt X}\xspace,i,j)$. Both the third and the fourth bounds of $e_{{\mathtt X}\xspace,1,1}$ use Eq.~\eqref{E:e_Z11_identity2}, Inequality~\eqref{E:Y11bare1_bound} and the modified McDiarmid inequality in Ref.~\cite{CN20}. For the third one, the result is \begin{align} e_{{\mathtt X}\xspace,1,1} &\le \frac{\left( Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right)^\uparrow}{\left( Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right)^\uparrow + \left( Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} \right)^\downarrow - \Delta Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1}} \nonumber \\ &\quad + \Delta e_{{\mathtt X}\xspace,1,1} \label{E:e_Z11_bound3} \end{align} with probability at least $1 - 2\epsilon_\text{sec} / \chi$, where \begin{widetext} \begin{align} & \Delta e_{{\mathtt X}\xspace,1,1} \nonumber \\ ={} & \left[ \frac{\langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j} \langle Q_{{\mathtt X}\xspace,i,j} E_{{\mathtt X}\xspace,i,j} \rangle_{i,j} \ln (\chi/\epsilon_\text{sec})}{2 s_{\mathtt X}\xspace} \right]^{1/2} \left[ \left( Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} \right)^\downarrow - \Delta Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} \right] \Width \left( \left\{ \frac{{\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{e}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{e}}}{p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace}} \right\}_{i,j=1}^{k_{\mathtt X}\xspace} \right) \nonumber \\ & \quad \times \left[ \left( Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} \right)^\downarrow - \Delta Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} + \left( Y_{{\mathtt X}\xspace,1,1} 
e_{{\mathtt X}\xspace,1,1} \right)^\downarrow \left( 1 - \frac{\langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j}}{s_{\mathtt X}\xspace \langle Q_{{\mathtt X}\xspace,i,j} E_{{\mathtt X}\xspace,i,j} \rangle_{i,j}} \right) + \frac{\langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j}^2}{s_{\mathtt X}\xspace^2 \langle Q_{{\mathtt X}\xspace,i,j} E_{{\mathtt X}\xspace,i,j} \rangle_{i,j}} \max_{i,j=1}^{k_{\mathtt X}\xspace} \left\{ \frac{{\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{e}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{e}}}{p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace}} \right\} \right]^{-1} \nonumber \displaybreak[1] \\ & \quad \times \left[ \left( Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} \right)^\downarrow - \Delta Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} + \left( Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right)^\downarrow \left( 1 - \frac{\langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j}}{s_{\mathtt X}\xspace \langle Q_{{\mathtt X}\xspace,i,j} E_{{\mathtt X}\xspace,i,j} \rangle_{i,j}} \right) + \frac{\langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j}^2}{s_{\mathtt X}\xspace^2 \langle Q_{{\mathtt X}\xspace,i,j} E_{{\mathtt X}\xspace,i,j} \rangle_{i,j}} \min_{i,j=1}^{k_{\mathtt X}\xspace} \left\{ \frac{{\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{e}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{e}}}{p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace}} \right\} \right]^{-1} . 
\label{E:finite-size_e_Z11_shift3} \end{align} And the fourth bound is \begin{equation} e_{{\mathtt X}\xspace,1,1} \le \frac{\left( Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right)^\uparrow}{\left( Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right)^\uparrow + \left( Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} \right)^\downarrow - \Delta Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1}} + \hat{r} \left[ \frac{\ln (\chi/\epsilon_\text{sec})}{2} \right]^{1/2} \label{E:e_Z11_bound4} \end{equation} with probability at least $1 - 3\epsilon_\text{sec} / \chi$, where \begin{align} \hat{r}^2 &\approx y^2 \sum_{m=1}^{k_{\mathtt X}\xspace^2} \frac{1}{w^{(m)}-x} \left( - \frac{1}{y+(t-\sum_{i<m} n^{(i)}+1)x+ \min {\mathcal W} + \sum_{i<m} n^{(i)} w^{(i)} + \mu[ w^{(m)}-x]} \right. \nonumber \\ & \qquad - \frac{1}{y+(t-\sum_{i<m} n^{(i)}+1)x+\max {\mathcal W} + \sum_{i<m} n^{(i)} w^{(i)} + \mu[ w^{(m)}-x]} \nonumber \\ & \qquad \left. \left. + \frac{2}{\Width({\mathcal W})} \ln \left\{ \frac{y+(t-\sum_{i<m} n^{(i)}+1)x+\max {\mathcal W} + \sum_{i<m} n^{(i)} w^{(i)} + \mu [w^{(m)}-x]}{y+(t-\sum_{i<m} n^{(i)}+1)x+\min {\mathcal W} + \sum_{i<m} n^{(i)} w^{(i)} + \mu [w^{(m)}-x]} \right\} \right) \right|_{\mu = 0}^{n^{(m)}} . 
\label{E:finite-size_e_Z11_shift4} \end{align} \end{widetext} In the above equation, \begin{subequations} \begin{equation} y = \left( Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} \right)^\downarrow - \Delta Y_{{\mathtt X}\xspace,1,1} \bar{e}_{{\mathtt X}\xspace,1,1} , \label{E:y_def_shift4} \end{equation} \begin{equation} t \approx \frac{s_{\mathtt X}\xspace \langle Q_{{\mathtt X}\xspace,i,j} E_{{\mathtt X}\xspace,i,j} \rangle_{i,j}}{\langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j}} , \label{E:t_def_shift4} \end{equation} \begin{equation} x = \frac{\left( Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1} \right)^\downarrow - \Delta Y_{{\mathtt X}\xspace,1,1} e_{{\mathtt X}\xspace,1,1}}{t} \label{E:x_def_shift4} \end{equation} and \begin{equation} {\mathcal W} = \left\{ \frac{\langle Q_{{\mathtt X}\xspace,i',j'} \rangle_{i',j'} {\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{e}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{e}}}{s_{\mathtt X}\xspace p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace}} \right\}_{i,j=1}^{k_{\mathtt X}\xspace} \label{E:W_def_shift4} \end{equation} \end{subequations} Last but not least, I need to define $w^{(m)}$ and $n^{(m)}$. Recall that by following the analysis in Ref.~\cite{CN20}, there is a one-one correspondence between a random variable in ${\mathcal W}$ taking the value of $\langle Q_{{\mathtt X}\xspace,i',j'} \rangle_{i',j'} {\mathcal A}_{{\mathtt X}\xspace,1,i}^{\text{e}} {\mathcal A}_{{\mathtt X}\xspace,1,j}^{\text{e}} / (s_{\mathtt X}\xspace p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace})$ and an event that a photon pulse pair is prepared by Alice (Bob) using intensity $\mu_{{\mathtt X}\xspace,i}$ ($\mu_{{\mathtt X}\xspace,j}$) both in basis ${\mathtt X}\xspace$ and that the Bell basis measurement result announced by Charlie is inconsistent with the photon states prepared by Alice and Bob. 
Now arrange the $k_{\mathtt X}\xspace^2$ elements of the set ${\mathcal W}$ in descending order as $\{ w^{(1)}, w^{(2)}, \cdots, w^{(k_{\mathtt X}\xspace^2)} \}$. Then, $n^{(i)}$ is the number of Bell basis measurement events that correspond to the value $w^{(i)} \in {\mathcal W}$. There is an important subtlety that requires attention. In almost all cases of interest, each summand in Eq.~\eqref{E:finite-size_e_Z11_shift4} consists of three terms. The first two are positive and the third one is negative. The sum of the first two terms is almost exactly equal to the magnitude of the third term. Hence, the truncation error is very serious if one directly uses Eq.~\eqref{E:finite-size_e_Z11_shift4} to compute $\hat{r}$ numerically. The solution is to expand each term in powers of $1/D_m$ and/or $1/E_m$ defined below. This gives \begin{align} & \hat{r}^2 \nonumber \\ \approx{}& \frac{y^2 \Width({\mathcal W})^2}{3} \sum_{m=1}^{k_{\mathtt X}\xspace^2} \frac{n^{(m)}}{D_m E_m} \left( \frac{1}{D_m^2} + \frac{1}{D_m E_m} + \frac{1}{E_m^2} \right) , \label{E:e_Z11_bound4_approx} \end{align} where \begin{subequations} \begin{equation} D_m = y+ (t - \sum_{i<m} n^{(i)} + 1)x + \min {\mathcal W} + \sum_{i<m} n^{(i)} w^{(i)} \label{E:D_m_def} \end{equation} and \begin{align} E_m &= y+ (t - \sum_{i<m} n^{(i)} + 1)x + \min {\mathcal W} + \sum_{i<m} n^{(i)} w^{(i)} \nonumber \\ &\quad + n^{(m)} (w^{(m)} - x) \nonumber \\ &= y+ (t - \sum_{i\le m} n^{(i)} + 1)x + \min {\mathcal W} + \sum_{i\le m} n^{(i)} w^{(i)} . \label{E:E_m_def} \end{align} \end{subequations} (Note that only the leading term is kept in Eq.~\eqref{E:e_Z11_bound4_approx}. This is acceptable because the next order term is about $1/100$ of the leading term in all cases of practical interest.)
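To illustrate the cancellation just described, the following Python sketch evaluates $\hat{r}^2$ both directly via Eq.~\eqref{E:finite-size_e_Z11_shift4} and via the leading-order expansion in Eq.~\eqref{E:e_Z11_bound4_approx}. The numerical values of $y$, $t$, $x$, ${\mathcal W}$ and the $n^{(m)}$'s used below are made up for illustration; they are not taken from any experiment.

```python
import math

def r_hat_sq_naive(y, t, x, w, n):
    """Direct evaluation of the sum in Eq. (E:finite-size_e_Z11_shift4).
    w: elements of the set W sorted in descending order; n: matching
    Bell-measurement event counts.  The three terms inside the bracket
    nearly cancel, so this form is numerically delicate."""
    w_min, w_max = w[-1], w[0]
    width = w_max - w_min            # Width(W) = max W - min W
    cum_n = cum_nw = 0.0             # running sums over i < m
    total = 0.0
    for wm, nm in zip(w, n):
        def bracket(mu):
            a = y + (t - cum_n + 1.0) * x + w_min + cum_nw + mu * (wm - x)
            b = y + (t - cum_n + 1.0) * x + w_max + cum_nw + mu * (wm - x)
            return -1.0 / a - 1.0 / b + (2.0 / width) * math.log(b / a)
        total += (bracket(nm) - bracket(0.0)) / (wm - x)
        cum_n += nm
        cum_nw += nm * wm
    return y * y * total

def r_hat_sq_stable(y, t, x, w, n):
    """Leading-order expansion, Eq. (E:e_Z11_bound4_approx): stable
    because the cancellation has been carried out analytically."""
    width = w[0] - w[-1]
    w_min = w[-1]
    cum_n = cum_nw = 0.0
    total = 0.0
    for wm, nm in zip(w, n):
        d = y + (t - cum_n + 1.0) * x + w_min + cum_nw   # D_m
        e = d + nm * (wm - x)                            # E_m
        total += nm / (d * e) * (1.0 / d**2 + 1.0 / (d * e) + 1.0 / e**2)
        cum_n += nm
        cum_nw += nm * wm
    return y * y * width * width * total / 3.0
```

In double precision the two evaluations agree up to the neglected next-order term; in lower precision, or when $\Width({\mathcal W}) \ll D_m$, only the expanded form remains reliable.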
With all the above discussions, to summarize, the secure key rate $R$ of this $\epsilon_\text{cor}$-correct and $\epsilon_\text{sec}$-secure QKD\xspace scheme in the finite raw key length situation is lower-bounded by \begin{widetext} \begin{subequations} \begin{align} R &\ge \sum_{i,j=1}^{k_{\mathtt Z}\xspace} {\mathcal B}_{{\mathtt Z}\xspace,i,j} Q_{{\mathtt Z}\xspace,i,j} - \langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j} \left[ \frac{\ln (\chi / \epsilon_\text{sec})}{2s_{\mathtt Z}\xspace} \right]^{1/2} \Width \left( \left\{ \frac{{\mathcal B}_{{\mathtt Z}\xspace,i,j}}{p_{i\mid{\mathtt Z}\xspace} p_{j\mid{\mathtt Z}\xspace}} \right\}_{i,j=1}^{k_{\mathtt Z}\xspace} \right) - p_{\mathtt Z}\xspace^2 \left\{ \vphantom{\frac{\epsilon_\text{sec}}{s_{\mathtt Z}\xspace}} \langle \mu \exp(-\mu) \rangle_{\mathtt Z}\xspace^2 C_{{\mathtt Z}\xspace,2}^2 [ 1 - H_2(e_p) ] \right. \nonumber \\ &\quad \left. + f_\text{EC} \langle Q_{{\mathtt Z}\xspace,i,j} H_s(E_{{\mathtt Z}\xspace,i,j}) \rangle_{i,j} + \frac{\langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j}}{\ell_\text{raw}} \left[ 6\log_2 \left( \frac{\chi}{\epsilon_\text{sec}} \right) + \log_2 \left( \frac{2}{\epsilon_\text{cor}} \right) \right] \right\} . 
\label{E:key_rate_finite-size} \end{align} and \begin{align} R &\ge \sum_{i,j=1}^{k_{\mathtt Z}\xspace} {\mathcal B}'_{{\mathtt Z}\xspace,i,j} Q_{{\mathtt Z}\xspace,i,j} + \sum_{i,j=1}^{k_{\mathtt X}\xspace} {\mathcal B}_{{\mathtt X}\xspace,i,j} Q_{{\mathtt X}\xspace,i,j} - \langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j} \left[ \frac{\ln (\chi / \epsilon_\text{sec})}{2s_{\mathtt Z}\xspace} \right]^{1/2} \Width \left( \left\{ \frac{{\mathcal B}'_{{\mathtt Z}\xspace,i,j}}{p_{i\mid{\mathtt Z}\xspace} p_{j\mid{\mathtt Z}\xspace}} \right\}_{i,j=1}^{k_{\mathtt Z}\xspace} \right) - \langle Q_{{\mathtt X}\xspace,i,j} \rangle_{i,j} \times \nonumber \\ &\qquad \left[ \frac{\ln (\chi / \epsilon_\text{sec})}{2 s_{\mathtt X}\xspace} \right]^{1/2} \Width \left( \left\{ \frac{{\mathcal B}_{{\mathtt X}\xspace,i,j}}{p_{i\mid{\mathtt X}\xspace} p_{j\mid{\mathtt X}\xspace}} \right\}_{i,j=1}^{k_{\mathtt X}\xspace} \right) - p_{\mathtt Z}\xspace^2 \left\{ \vphantom{\frac{\epsilon_\text{sec}}{s_{\mathtt Z}\xspace}} \langle \mu \exp(-\mu) \rangle_{\mathtt Z}\xspace^2 C_{{\mathtt X}\xspace,2}^2 [ 1 - H_2(e_p) ] + f_\text{EC} \langle Q_{{\mathtt Z}\xspace,i,j} H_s(E_{{\mathtt Z}\xspace,i,j}) \rangle_{i,j} \right. \nonumber \\ &\qquad \left. + \frac{\langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j}}{\ell_\text{raw}} \left[ 6\log_2 \left( \frac{\chi}{\epsilon_\text{sec}} \right) + \log_2 \left( \frac{2}{\epsilon_\text{cor}} \right) \right] \right\} . \label{E:key_rate_finite-size_alt} \end{align} \end{subequations} \end{widetext} I remark that the R.H.S. of the above inequalities implicitly depends on $e_{{\mathtt X}\xspace,1,1}$ whose upper bound obeys Inequalities~\eqref{E:e_Z11_bound1}, \eqref{E:e_Z11_bound2}, \eqref{E:e_Z11_bound3}, \eqref{E:e_Z11_bound4} and~\eqref{E:e_Z11_bound4_approx}. 
Furthermore, when using the key rate in Inequality~\eqref{E:key_rate_finite-size}, $\chi = 9 = 4+1+4$ for the first three inequalities concerning $e_{{\mathtt X}\xspace,1,1}$ and $\chi = 10$ for the last inequality concerning $e_{{\mathtt X}\xspace,1,1}$~\cite{CN20}. When using the key rate in Inequality~\eqref{E:key_rate_finite-size_alt} instead of Inequality~\eqref{E:key_rate_finite-size}, $\chi = 9, 10, 10, 11$ for Methods~A, B, C and D, respectively. (The reason why $\chi$ increases by 1, except for Method~A, upon switching the rate formula from Inequality~\eqref{E:key_rate_finite-size} to Inequality~\eqref{E:key_rate_finite-size_alt} is the inclusion of the finite-size statistical fluctuations of the lower bound on $Y_{{\mathtt X}\xspace,1,1}$.) Compared with the corresponding key rate formula for the standard QKD\xspace scheme, the most noticeable difference is the presence of additional terms and factors involving $C_{{\mathtt B}\xspace,2}^2$, which tend to lower the key rate. Fortunately, $C_{{\mathtt B}\xspace,2}$ roughly scales as $\mu_{{\mathtt B}\xspace,1}^{k_{\mathtt B}\xspace}$, so that in practice these terms and factors are negligible if $k_{\mathtt B}\xspace \gtrsim 2$ to $3$. Finally, I remark that in the limit of $s_{\mathtt Z}\xspace \to +\infty$, the key rate formulae in Inequalities~\eqref{E:key_rate_finite-size} and~\eqref{E:key_rate_finite-size_alt} are tight in the sense that these lower bounds are attainable, although the condition for attaining them is highly unlikely to occur in realistic channels. \section{Performance Analysis} \label{Sec:Performance} To study the key rate, I use the channel model reported by Ma and Razavi in Ref.~\cite{MR12}, which I call the MR\xspace channel.
For this channel, \begin{subequations} \label{E:Channel_Model} \begin{equation} Q_{{\mathtt X}\xspace,i,j} = 2\beta_{ij}^2 [1+2\beta_{ij}^2-4\beta_{ij} I_0(\alpha_{{\mathtt X}\xspace ij}) +I_0(2\alpha_{{\mathtt X}\xspace ij})] , \label{E:Channel_Q_Xij} \end{equation} \begin{equation} Q_{{\mathtt X}\xspace,i,j} E_{{\mathtt X}\xspace,i,j} = e_0 Q_{{\mathtt X}\xspace,i,j} - 2(e_0 - e_d) \beta_{ij}^2 [I_0(2\alpha_{{\mathtt X}\xspace ij}) - 1] , \label{E:Channel_QE_Xij} \end{equation} \begin{equation} Q_{{\mathtt Z}\xspace,i,j} = Q^{(E)}_{ij} + Q^{(C)}_{ij} \label{E:Channel_Q_Zij} \end{equation} and \begin{equation} Q_{{\mathtt Z}\xspace,i,j} E_{{\mathtt Z}\xspace,i,j} = e_d Q^{(C)}_{ij} + (1-e_d) Q^{(E)}_{ij} , \label{E:Channel_QE_Zij} \end{equation} where $I_0(\cdot)$ is the modified Bessel function of the first kind, \begin{equation} \alpha_{{\mathtt B}\xspace ij} = \frac{\sqrt{\eta_\text{A} \mu_{{\mathtt B}\xspace,i} \eta_\text{B} \mu_{{\mathtt B}\xspace,j}}}{2} , \label{E:Channel_alpha_def} \end{equation} \begin{equation} \beta_{ij} = (1-p_d) \exp \left( -\frac{\eta_\text{A} \mu_{{\mathtt X}\xspace,i} + \eta_\text{B} \mu_{{\mathtt X}\xspace,j}}{4} \right) , \label{E:Channel_beta_def} \end{equation} \begin{equation} e_0 = \frac{1}{2} , \label{E:Channel_e0_def} \end{equation} \begin{align} Q^{(C)}_{ij} &= 2(1-p_d)^2 \exp \left( -\frac{\eta_\text{A} \mu_{{\mathtt Z}\xspace,i} + \eta_\text{B} \mu_{{\mathtt Z}\xspace,j}}{2} \right) \times \nonumber \\ & \qquad \left[ 1 - (1-p_d) \exp \left( - \frac{\eta_\text{A} \mu_{{\mathtt Z}\xspace,i}}{2} \right) \right] \times \nonumber \\ & \qquad \left[ 1 - (1-p_d) \exp \left( - \frac{\eta_\text{B} \mu_{{\mathtt Z}\xspace,j}}{2} \right) \right] \label{E:Channel_QC_def} \end{align} and \begin{align} Q^{(E)}_{ij} &= 2p_d(1-p_d)^2 \exp \left( - \frac{\eta_\text{A} \mu_{{\mathtt Z}\xspace,i} + \eta_\text{B} \mu_{{\mathtt Z}\xspace,j}}{2} \right) \times \nonumber \\ & \qquad \left[ I_0(2\alpha_{{\mathtt Z}\xspace ij}) - (1-p_d) \exp
\left( - \frac{\eta_\text{A} \mu_{{\mathtt Z}\xspace,i} + \eta_\text{B} \mu_{{\mathtt Z}\xspace,j}}{2} \right) \right] . \label{E:Channel_QE_def} \end{align} Here $e_d$ is the misalignment probability and $p_d$ is the dark count rate per detector. Moreover, $\eta_\text{A}$ ($\eta_\text{B}$) is the transmittance of the channel between Alice (Bob) and Charlie. They are given by \begin{equation} \eta_\text{A} = \eta_d 10^{-\eta_\text{att} L_\text{A} / 10} \label{E:Channel_eta_d_def} \end{equation} \end{subequations} and similarly for $\eta_\text{B}$, where $L_\text{A}$ is the length of the fiber connecting Alice to Charlie, $\eta_d$ is the detection efficiency of a detector, and $\eta_\text{att}$ is the transmission fiber loss constant. I remark that the MR\xspace channel model assumes that the partial Bell state measurement is performed using linear optics with ideal beam and/or polarization beam splitters. It also assumes that all photon detectors are identical and that the dead time can be ignored. Moreover, this channel does not consider the use of quantum repeaters. \begin{figure} \caption{\label{F:Mao_cmp}} \end{figure} The state-of-the-art key rate formula for decoy-state MDI\xspace-QKD\xspace with finite raw key length is the one by Mao \emph{et al.} in Ref.~\cite{MZZZZW18}, which extended an earlier result by Zhou \emph{et al.} in Ref.~\cite{ZYW16}. (Note that even higher key rates have been reported by Xu \emph{et al.}~\cite{XXL14} and Zhou \emph{et al.}~\cite{ZYW16}. However, the first work applied brute force optimization as well as the Chernoff bound on a much longer raw key, so its effectiveness in handling the short raw key length situation is not apparent, whereas the second work assumed that the statistical fluctuation is Gaussian distributed, which is not justified in terms of unconditional security.)
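The channel model above is straightforward to code. The following Python sketch returns the gain $Q$ and the error gain $QE$ for a single intensity pair; the function and parameter names are mine (not from Ref.~\cite{MR12}), and the default parameter values follow the settings quoted later in the text.

```python
import numpy as np

def mr_channel(mu_a, mu_b, L_a, L_b, *, e_d=0.015, p_d=6.02e-6,
               eta_att=0.2, eta_det=0.145, basis="X"):
    """Gain Q_{B,i,j} and error gain Q_{B,i,j} E_{B,i,j} for one
    intensity pair (mu_a, mu_b) under the MR channel model,
    Eqs. (E:Channel_Model).  Distances in km, eta_att in dB/km."""
    eta_a = eta_det * 10 ** (-eta_att * L_a / 10)   # Eq. (E:Channel_eta_d_def)
    eta_b = eta_det * 10 ** (-eta_att * L_b / 10)
    alpha = np.sqrt(eta_a * mu_a * eta_b * mu_b) / 2
    if basis == "X":
        e0 = 0.5
        beta = (1 - p_d) * np.exp(-(eta_a * mu_a + eta_b * mu_b) / 4)
        q = 2 * beta**2 * (1 + 2 * beta**2 - 4 * beta * np.i0(alpha)
                           + np.i0(2 * alpha))
        qe = e0 * q - 2 * (e0 - e_d) * beta**2 * (np.i0(2 * alpha) - 1)
    else:  # "Z" basis
        decay = np.exp(-(eta_a * mu_a + eta_b * mu_b) / 2)
        q_c = (2 * (1 - p_d)**2 * decay
               * (1 - (1 - p_d) * np.exp(-eta_a * mu_a / 2))
               * (1 - (1 - p_d) * np.exp(-eta_b * mu_b / 2)))
        q_e = (2 * p_d * (1 - p_d)**2 * decay
               * (np.i0(2 * alpha) - (1 - p_d) * decay))
        q = q_c + q_e
        qe = e_d * q_c + (1 - e_d) * q_e
    return q, qe
```

Here `np.i0` is NumPy's modified Bessel function of the first kind of order zero, playing the role of $I_0(\cdot)$.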
To compare with the provably secure key rate reported in Ref.~\cite{MZZZZW18}, I use their settings by putting $e_d = 1.5\%$, $p_d = 6.02\times 10^{-6}$, $\eta_\text{att} = 0.2$~dB/km, $\eta_d = 14.5\%$, $f_\text{EC} = 1.16$, and $L_\text{A} = L_\text{B} = L/2$, where $L$ is the transmission distance between Alice and Bob. For the security parameters, I follow Ref.~\cite{MZZZZW18} by setting $\epsilon_\text{sec} / \chi = 10^{-10}$, although a more meaningful way is to set $\epsilon_\text{sec}$ divided by the length of the final key to a fixed small number~\cite{LCWXZ14}. As for $\epsilon_\text{cor}$, its value has not been specified in Ref.~\cite{MZZZZW18}. Fortunately, Inequality~\eqref{E:key_rate_finite-size} implies that the provably secure key rate does not depend sensitively on $\epsilon_\text{cor}$. Here I simply set it to $10^{-10}$. Fig.~\ref{F:Mao_cmp} compares the key rates when the total number of photon pulse pairs prepared by Alice and Bob, $N_t \approx \ell_\text{raw} / (p_{\mathtt Z}\xspace^2 \langle Q_{{\mathtt Z}\xspace,i,j} \rangle_{i,j})$, is set to $10^{10}$. For each of the curves, the numbers of photon intensities $k_{\mathtt X}\xspace$ ($k_{\mathtt Z}\xspace$) used for the ${\mathtt X}\xspace$ (${\mathtt Z}\xspace$) basis are fixed. The smallest photon intensities $\mu_{{\mathtt X}\xspace,k_{\mathtt X}\xspace}$ and $\mu_{{\mathtt Z}\xspace,k_{\mathtt Z}\xspace}$ are both set to $10^{-6}$. The optimized key rate is then calculated by varying the other $\mu_{{\mathtt X}\xspace,i}$'s and $\mu_{{\mathtt Z}\xspace,i}$'s as well as $p_{i|{\mathtt X}\xspace}$, $p_{i|{\mathtt Z}\xspace}$ and $p_{\mathtt Z}\xspace$ by a combination of random sampling (with a minimum of $10^7$~samples to a maximum of about $10^9$~samples per data point on each curve) and an adaptive gradient descent method (that is, the step size is adjusted dynamically to speed up the descent).
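The two-stage optimization strategy just described can be sketched as follows. This is a toy illustration only: `rate` stands in for the actual key rate function of Inequality~\eqref{E:key_rate_finite-size}, the sample and step counts are far smaller than the $10^7$ to $10^9$ samples quoted above, and the unit-hypercube box constraints on the parameters are hypothetical.

```python
import random

def optimize(rate, dim, n_samples=2000, n_steps=200, seed=7):
    """Maximize `rate` over (0, 1)^dim by random sampling followed by
    a coordinate search with a dynamically adjusted step size."""
    rng = random.Random(seed)
    best_x, best_r = None, float("-inf")
    # stage 1: random sampling over the parameter hypercube
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        r = rate(x)
        if r > best_r:
            best_x, best_r = x, r
    # stage 2: coordinate search; grow the step after a successful
    # sweep, shrink it after an unsuccessful one
    step = 0.1
    for _ in range(n_steps):
        improved = False
        for i in range(dim):
            for delta in (step, -step):
                y = list(best_x)
                y[i] = min(1.0, max(0.0, y[i] + delta))
                r = rate(y)
                if r > best_r:
                    best_x, best_r = y, r
                    improved = True
        step = step * 1.5 if improved else step / 2
    return best_x, best_r
```

In the real computation the objective is the full key rate formula, the search space is the set of intensities and basis/intensity probabilities, and the sampling budget is adapted per data point.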
For some of the curves, I introduce the additional constraints that $\mu_{{\mathtt X}\xspace,i} = \mu_{{\mathtt Z}\xspace,i}$ so as to reduce the number of different photon intensities used. To aid discussion, I refer to the unconstrained and constrained situations by $(k_{\mathtt X}\xspace,k_{\mathtt Z}\xspace)_G$ and $(k_{\mathtt X}\xspace,k_{\mathtt Z}\xspace)_R$, respectively. The $R-L$~graphs in Fig.~\ref{F:Mao_cmp} clearly show the advantage of using the method in this text to compute the provably secure (optimized) key rate. The black curve, which is the distance-rate graph of $(3,2)_G$ that uses four different photon intensities, is much better than the red one (which also uses four different photon intensities) originally reported in Ref.~\cite{MZZZZW18}. In fact, for any distance $L$ between Alice and Bob, the key rate of the $(3,2)_G$ method is at least $2.25$~times that of the state-of-the-art key rate reported in Ref.~\cite{MZZZZW18}. (I also mention in passing that the rate of the black curve is even higher than that of the two-decoy key rate reported in Ref.~\cite{XXL14} using a much longer raw key length of $\ell_\text{raw} = 10^{12}$.) Besides, the $(3,2)_G$ method extends the working distance between Alice and Bob from slightly less than 60~km to slightly over 130~km. The blue dashed curve is the key rate of $(3,3)_R$, which uses the same set of three different photon intensities for both preparation bases. Although it uses one less photon intensity, it still outperforms the key rate of the red curve when $L \gtrsim 45$~km.
\begin{figure} \caption{\label{F:Zhou_cmp}} \end{figure} To further illustrate the power of this method, I compare the key rates here with the ones obtained in Ref.~\cite{ZYW16}, in which they used four photon states and the following parameters: $e_d = 1.5\%$, $p_d = 10^{-7}$, $\eta_\text{att} = 0.2$~dB/km, $\eta_d = 40\%$, $f_\text{EC} = 1.16$, $L_\text{A} = L_\text{B} = L/2$ and $\epsilon_\text{sec} / \chi = \epsilon_\text{cor} = 10^{-7}$. The optimized key rates are then found using the same method that produces Fig.~\ref{F:Mao_cmp}. As shown in Fig.~\ref{F:Zhou_cmp}, the optimized key rate of $(3,2)_G$ (the black curve that uses four different photon intensities) is at least 90\% higher than those reported in Ref.~\cite{ZYW16}. Just like the previous comparison, the key rate of $(3,3)_R$, which uses only three different photon intensities, is better than the one reported in Ref.~\cite{ZYW16} when $L \gtrsim 60$~km. Last but not least, the maximum transmission distance increases from about 87~km to about 156~km for $(3,2)_G$ and 162~km for $(4,3)_G$ (the latter not shown in the figure to avoid curve crowding). \begin{figure} \caption{\label{F:kappa_rates}} \end{figure} In a sense, instead of $\epsilon_\text{sec} / \chi$, a fairer security parameter to use is $\kappa$, namely, $\epsilon_\text{sec}$ per bit of the final key~\cite{LCWXZ14}. Fig.~\ref{F:kappa_rates} depicts the $R-L$ curves of various methods when $\kappa = 10^{-15}$ and $\epsilon_\text{cor} = 10^{-10}$. Here, instead of fixing $N_t$, I keep $\ell_\text{raw} = 10^{10}$, which corresponds to a much greater value of $N_t$ in general. The blue dashed curve is the rate for $(3,3)_R$, which uses three photon intensities. It already achieves a non-zero key rate at a distance of slightly greater than 160~km. The black curve is the rate for $(3,2)_G$, which uses four photon intensities. It allows Alice and Bob to share a secret key over a distance close to 200~km.
This finding makes sense because a larger raw key length $\ell_\text{raw}$ implies smaller statistical fluctuations in the estimates of various yields and error rates, which in turn increase the provably secure key rate and the maximum transmission distance. Tables~\ref{T:keyrates_epsilon_sec_per_chi} and~\ref{T:keyrates_kappa} show the provably secure optimized key rates using various values of $k_{\mathtt X}\xspace$ and $k_{\mathtt Z}\xspace$ for the cases of fixing $\epsilon_\text{sec} / \chi$ and $\kappa$, respectively. The following points can be drawn from these figures and tables. First, for the unconstrained photon intensity situation, the optimized key rate increases as $k_{\mathtt X}\xspace$ increases. For instance, as shown in Table~\ref{T:keyrates_epsilon_sec_per_chi}, the key rate of $(4,2)_G$ is at least 39\% higher than that of $(3,2)_G$ when $\epsilon_\text{sec}/\chi$ is fixed. And from Table~\ref{T:keyrates_kappa}, the corresponding increase in key rate when $\kappa$ is fixed is about 18\% in the distance range from 0~km to 150~km. (I do not draw these curves in Figs.~\ref{F:Mao_cmp} and~\ref{F:kappa_rates} because they almost overlap with the $(3,2)_G$ curve using the same plotting scales.) Second, for the constrained photon intensity situation, the optimized key rate increases as $k_{\mathtt X}\xspace = k_{\mathtt Z}\xspace$ increases. These findings can be understood by the fact that the more decoy intensities used, the closer the various bounds on yields and error rates are to their actual values. Third, the constrained key rate is in general several times lower than the corresponding unconstrained one. So, using the same set of photon intensities for the two bases is not a good idea, at least for the MR\xspace channel.
\begin{table}[t] \centering \begin{tabular}{||c|c|c||} \hline\hline $L$/km & 0 & 50 \\ \hline $(3,2)_G$ & $7.49\times 10^{-5}$ & $1.50\times 10^{-6}$ \\ \hline $(3,3)_R$ & $9.65\times 10^{-6}$ & $1.25\times 10^{-7}$ \\ \hline $(3,3)_G$ & $8.51\times 10^{-5}$ & $1.82\times 10^{-6}$ \\ \hline $(4,2)_G$ & $1.04\times 10^{-4}$ & $2.22\times 10^{-6}$ \\ \hline $(4,3)_G$ & $1.04\times 10^{-4}$ & $2.24\times 10^{-6}$ \\ \hline $(4,4)_R$ & $3.10\times 10^{-5}$ & $3.75\times 10^{-7}$ \\ \hline $(4,4)_G$ & $1.04\times 10^{-4}$ & $2.23\times 10^{-6}$ \\ \hline\hline \end{tabular} \caption{Optimized secure key rates for $N_t = 10^{10}$ and $\epsilon_\text{sec} / \chi = \epsilon_\text{cor} = 10^{-10}$. \label{T:keyrates_epsilon_sec_per_chi}} \end{table} There is an interesting observation that requires an in-depth explanation. From Table~\ref{T:keyrates_kappa}, for the case of fixing $k_{\mathtt X}\xspace$ and $\kappa$, the increase in $R$ due to an increase in $k_{\mathtt Z}\xspace$ is insignificant. Moreover, Table~\ref{T:keyrates_epsilon_sec_per_chi} shows that for the case of fixing $\epsilon_\text{sec} / \chi$ and $k_{\mathtt X}\xspace$, a significant increase in key rate occurs only when $k_{\mathtt X}\xspace = 3$. The reason is that for the MR\xspace channel~\cite{MR12}, it turns out that the key rate computed by Inequality~\eqref{E:key_rate_finite-size_alt} is greater than that computed by Inequality~\eqref{E:key_rate_finite-size}. That is to say, the lower bound of $Y_{{\mathtt X}\xspace,1,1}$ is a better estimate of the single photon pair yield than the lower bound of $Y_{{\mathtt Z}\xspace,1,1}$. Thus, increasing $k_{\mathtt Z}\xspace$ only gives a better estimate of $Y_{{\mathtt Z}\xspace,0,\star}$. Since I fix the lowest photon intensity to $10^{-6}$, which is very close to the vacuum state, the major error in estimating $Y_{{\mathtt Z}\xspace,0,\star}$ comes from finite-size statistical fluctuation.
Consequently, by fixing a large enough raw key length $\ell_\text{raw}$, the use of more than two photon intensities for the ${\mathtt Z}\xspace$ basis does not improve the provably secure key rate in practice. In other words, the improvement in the provably secure key rate by increasing $k_{\mathtt Z}\xspace$ alone for the MR\xspace channel occurs only when $k_{\mathtt Z}\xspace$ is small, say, about 2 to 3, and when $\ell_\text{raw}$ is small. There is a systematic trend that is worth reporting. For the case of using unconstrained photon intensities, Method~D plus the use of $Y_{{\mathtt X}\xspace,1,1}$ to bound the single photon-pair yield gives the highest key rate over almost the whole range of distance $L$. Only close to the maximum transmission distance is the best rate computed using Method~C and $Y_{{\mathtt X}\xspace,1,1}$. For the case of constrained photon intensities, the best rate at short transmission distances is computed using Method~D plus $Y_{{\mathtt Z}\xspace,1,1}$, while at longer transmission distances the best rate is due to Method~B and $Y_{{\mathtt X}\xspace,1,1}$.
\begin{table}[t] \centering \begin{tabular}{||c|c|c|c|c||} \hline\hline $L$/km & 0 & 50 & 100 & 150 \\ \hline $(3,2)_G$ & $3.23\times 10^{-4}$ & $2.85\times 10^{-5}$ & $2.44\times 10^{-6}$ & $1.51\times 10^{-7}$ \\ \hline $(3,3)_R$ & $8.37\times 10^{-5}$ & $6.67\times 10^{-6}$ & $4.33\times 10^{-7}$ & $1.27\times 10^{-8}$ \\ \hline $(3,3)_G$ & $3.23\times 10^{-4}$ & $2.85\times 10^{-5}$ & $2.44\times 10^{-6}$ & $1.51\times 10^{-7}$ \\ \hline $(4,2)_G$ & $3.82\times 10^{-4}$ & $3.39\times 10^{-5}$ & $2.89\times 10^{-6}$ & $1.78\times 10^{-7}$ \\ \hline $(4,3)_G$ & $3.82\times 10^{-4}$ & $3.39\times 10^{-5}$ & $2.89\times 10^{-6}$ & $1.78\times 10^{-7}$ \\ \hline $(4,4)_R$ & $1.70\times 10^{-4}$ & $1.32\times 10^{-5}$ & $8.27\times 10^{-7}$ & $2.64\times 10^{-8}$ \\ \hline $(4,4)_G$ & $3.82\times 10^{-4}$ & $3.39\times 10^{-5}$ & $2.89\times 10^{-6}$ & $1.78\times 10^{-7}$ \\ \hline\hline \end{tabular} \caption{Optimized secure key rates for $\ell_\text{raw} = 10^{10}$, $\kappa = 10^{-15}$ and $\epsilon_\text{cor} = 10^{-10}$. \label{T:keyrates_kappa}} \end{table} \section{Summary} \label{Sec:Summary} In summary, using the BB84 scheme in the MR\xspace channel as an example, I have reported a key rate formula for MDI\xspace-QKD\xspace using Poissonian photon sources through repeated use of the inversion of the Vandermonde matrix and a McDiarmid-type inequality. This method gives a provably secure key rate that is at least 2.25~times that of the current state-of-the-art result. It also shows that using five photon intensities, more precisely the $(4,2)$-method, gives an additional 18\% increase in the key rate for the MR\xspace channel. This demonstrates once again the effectiveness of using McDiarmid-type inequalities in the statistical data analysis of physical problems. Note that the Vandermonde matrix inversion technique is rather general, provided that the photon source is sufficiently close to Poissonian.
As pointed out in Remark~\ref{Rem:C_property} in the Appendix, by modifying the proof of Lemma~\ref{Lem:C_property}, one can show that $C_{3i} \ge 0$ if $k$ is even and $C_{3i} < 0$ if $k$ is odd for all $i \ge k$. Thus, I can find the lower bound of $Y_{{\mathtt B}\xspace,0,2}$ and $Y_{{\mathtt B}\xspace,2,0}$. In other words, I can extend the key rate calculation to the case of twin-field~\cite{LYDS18} or phase-matching MDI\xspace-QKD\xspace~\cite{MZZ18}. Note further that Inequalities~\eqref{E:various_Y_and_e_bounds} are still valid by replacing the ${\mathcal A}_{{\mathtt B}\xspace,j,i}^{\text{e}}$'s and ${\mathcal A}_{{\mathtt B}\xspace,j,i}^{\text{o}}$'s by their perturbed expressions through standard matrix inversion perturbation as long as the photon sources are sufficiently close to Poissonian. In this regard, the theory developed here also applies to these sources. Interested readers are invited to fill in the details. Last but not least, it is instructive to extend this work to cover other MDI\xspace-QKD\xspace protocols as well as more realistic quantum channels that take dead time and imperfect beam splitters into account. \appendix \section{Auxiliary Results On Bounds Of Yields And Error Rates} \label{Sec:results_on_Y_and_e_bounds} I begin with the following lemma. \begin{Lem} Let $\mu_1,\mu_2,\cdots,\mu_k$ be $k \ge 2$ distinct real numbers. Then \begin{equation} \sum_{i=1}^k \frac{\mu_i^\ell}{\prod_{t\ne i} (\mu_i - \mu_t)} = 0 \label{E:auxiliary_sum} \end{equation} for $\ell = 0,1,\cdots,k-2$. \label{Lem:sum} \end{Lem} \begin{proof} Note that the L.H.S. of Eq.~\eqref{E:auxiliary_sum} is a symmetric function of the $\mu_i$'s. Moreover, only its first two terms involve the factor $\mu_1-\mu_2$ in the denominator. In fact, the sum of these two terms equals \begin{displaymath} \frac{\mu_1^\ell \prod_{t>2} (\mu_2 - \mu_t) - \mu_2^\ell \prod_{t>2} (\mu_1 - \mu_t)}{(\mu_1 - \mu_2) \prod_{t>2} [(\mu_1 - \mu_t) (\mu_2 - \mu_t)]} .
\end{displaymath} By applying the remainder theorem, I know that the numerator of the above expression is divisible by $\mu_1 - \mu_2$. Consequently, the L.H.S. of Eq.~\eqref{E:auxiliary_sum} is a homogeneous polynomial of degree $\le \ell - k + 1$. But as $\ell \le k-2$, this means the L.H.S. of Eq.~\eqref{E:auxiliary_sum} must be a constant. By putting $\mu_i = t^i$ for all $i$ and by taking the limit $t\to +\infty$, I conclude that this constant is $0$. This completes the proof. \end{proof} Following the notation in Ref.~\cite{Chau18}, I define \begin{equation} C_{a+1,i} = \frac{(-1)^{k-a} a!}{i!} \sum_{t=1}^k \frac{\mu_t^i S_{ta}}{\prod_{\ell\ne t} (\mu_t - \mu_\ell)} , \label{E:C_def} \end{equation} where \begin{equation} S_{ta} = \sideset{}{^{'}}{\sum} \mu_{t_1} \mu_{t_2} \cdots \mu_{t_{k-a-1}} \label{E:S_in_def} \end{equation} with the primed sum being over all $t_j$'s $\ne t$ obeying $1 \le t_1 < t_2 < \dots < t_{k-a-1} \le k$. The following lemma is an extension of a result in Ref.~\cite{Chau18}. \begin{Lem} Let $\mu_1 > \mu_2 > \cdots > \mu_k \ge 0$. Suppose $0\le i < k$. Then \begin{equation} C_{a+1,i} = \begin{cases} -1 & \text{if } a = i , \\ 0 & \text{otherwise} . \end{cases} \label{E:C_value1} \end{equation} Whereas if $i \ge k$, then \begin{equation} \begin{cases} C_{1i} \ge 0 \text{ and } C_{2i} < 0 & \text{if } k \text{ is even,} \\ C_{1i} \le 0 \text{ and } C_{2i} > 0 & \text{if } k \text{ is odd.} \end{cases} \label{E:C_value2} \end{equation} \label{Lem:C_property} \end{Lem} \begin{proof} Using the same argument as in the proof of Lemma~\ref{Lem:sum}, I conclude that $C_{a+1,i}$ is a homogeneous polynomial of degree $\le i-a$. Consider the case of $i\le a$ so that $C_{a+1,i}$ is a constant. By putting $\mu_t = \delta^t$ for all $t$ and then taking the limit $\delta\to 0^+$, it is straightforward to check that $C_{a+1,i} = 0$ if $i < a$ and $C_{a+1,i} = -1$ if $i = a$. It remains to consider the case of $a < i < k$.
I first consider the subcase of $a = 0$. Here $C_{1i}$ contains a common factor of $\prod_{t=1}^k \mu_t$, which is of degree $k > i$. Therefore, I could write $\prod_{t=1}^k \mu_t = C_{1i} F$ where $F$ is a homogeneous polynomial of degree $\ge i$. As a consequence, either $C_{1i}$ or $F$ contains $\mu_1$ and hence all $\mu_t$'s. Thus, $C_{1i}$ must be a constant for $i < k$. By setting $\mu_k = 0$, I know that $C_{1i} = 0$. Next, I consider the subcase of $a = 1$. Since $i > 1$, from the findings of the first subcase, I arrive at \begin{align} C_{2i} &= \frac{(-1)^{k-1}}{i!} \sum_{t=1}^k \frac{T_1 \mu_t^{i-1} - \left( \prod_{\ell=1}^k \mu_\ell \right) \mu_t^{i-2}}{\prod_{\ell\ne t} (\mu_t - \mu_\ell)} \nonumber \\ &= \frac{(-1)^{k-1} T_1}{i!} \sum_{t=1}^k \frac{\mu_t^{i-1}}{\prod_{\ell\ne t} (\mu_t - \mu_\ell)} - C_{1,i-1} \nonumber \\ &= \frac{(-1)^{k-1} T_1}{i!} \sum_{t=1}^k \frac{\mu_t^{i-1}}{\prod_{\ell\ne t} (\mu_t - \mu_\ell)} , \label{E:C_a=1} \end{align} where $T_1$ is the symmetric polynomial \begin{equation} T_1 = \sum_{t=1}^k \mu_1 \mu_2 \cdots \mu_{t-1} \mu_{t+1} \cdots \mu_k . \label{E:T1_def} \end{equation} By Lemma~\ref{Lem:sum}, I find that $C_{2i} = 0$ as $i < k$. The third subcase I consider is $a = 2$. As $i > 2$, \begin{equation} C_{3i} = \frac{2 (-1)^k}{i!} \sum_{t=1}^k \frac{T_2 \mu_t^{i-1} - T_1 \mu_t^{i-2} + \left( \prod_{\ell=1}^k \mu_\ell \right) \mu_t^{i-3}}{\prod_{\ell\ne t} (\mu_t - \mu_\ell)} , \label{E:C_a=2} \end{equation} where \begin{equation} T_2 = \sideset{}{^{'}}{\sum} \mu_{t_1} \mu_{t_2} \cdots \mu_{t_{k-2}} \label{E:T2_def} \end{equation} with the primed sum over all $t_j$'s with $1\le t_1 < t_2 < \dots < t_{k-2} \le k$. By Lemma~\ref{Lem:sum}, I get $C_{3i} = 0$ as $i < k$. By induction, the proof of the subcase $a=2$ can be extended to show the validity for all $a \le k$ and $a < i < k$. This shows the validity of Eq.~\eqref{E:C_value1} The proof of Eq.~\eqref{E:C_value2} can be found in Ref.~\cite{Chau18}. 
I reproduce here for easy reference. By expanding the $1/(\mu_1 - \mu_t)$'s in $C_{ai}$ as a power series of $\mu_1$ with all the other $\mu_t$'s fixed, I obtain \begin{align} C_{1i} &= \frac{(-1)^k}{i!} \left( \prod_{t=1}^k \mu_t \right) \left[ \mu_1^{i-k} \prod_{r=2}^k \left( 1 + \frac{\mu_r}{\mu_1} + \frac{\mu_r^2}{\mu_1^2} + \cdots \right) \right. \nonumber \\ &\qquad \left. \vphantom{\prod_{r=2}^k \left( 1 + \frac{\mu_r}{\mu_1} + \frac{\mu_r^2}{\mu_1^2} \right)} + f(\mu_2,\mu_3,\cdots,\mu_k) \right] + \BigOh(\frac{1}{\mu_1}) \label{E:C_1i_intermediate} \end{align} for some function $f$ independent of $\mu_1$. As $C_{1i}$ is a homogeneous polynomial of degree $\le i$, by equating terms in powers of $\mu_1$, I get \begin{equation} C_{1i} = \frac{(-1)^k}{i!} \left( \prod_{t=1}^k \mu_t \right) \sum_{\substack{t_1+t_2+\dots+t_k = i-k, \\ t_1,t_2,\cdots,t_k \ge 0}} \mu_1^{t_1} \mu_2^{t_2} \cdots \mu_k^{t_k} \label{E:C_1i_explicit} \end{equation} for all $i\ge k$. As all $\mu_t$'s are non-negative, $C_{1i} \ge 0$ if $k$ is even and $C_{1i} \le 0$ if $k$ is odd. By the same argument, I expand all the $1/(\mu_1 - \mu_t)$ terms in $C_{2i}$ in powers of $\mu_1$ to get \begin{align} C_{2i} &= \frac{(-1)^{k-1} T_1}{i!} \!\!\sum_{\substack{t_1+t_2+\cdots+t_k = i-k \\ t_1 > 0, t_2,\cdots,t_k \ge 0}} \mu_1^{t_1} \mu_2^{t_2} \cdots \mu_k^{t_k} \nonumber \\ & \quad + f'(\mu_2,\cdots,\mu_k) \label{E:C_2j_intermediate} \end{align} for some function $f'$ independent of $\mu_1$. By recursively expanding Eq.~\eqref{E:C_def} in powers of $\mu_2$ but with $\mu_1$ set to $0$, and then in powers of $\mu_3$ with $\mu_1,\mu_2$ set to $0$ and so on, I conclude that whenever $i \ge k$, then $C_{2i} < 0$ if $k$ is even and $C_{2i} > 0$ if $k$ is odd. This completes the proof. 
\end{proof} \begin{Rem} By the same technique of expanding each factor of $1/(\mu_1 - \mu_t)$ in $C_{a+1,i}$ in powers of $\mu_1$, it is straightforward to show that if $i \ge k$ and $j\ge 1$, then $C_{2j+1,i} \ge 0$ and $C_{2j,i} \le 0$ provided that $k$ is even, and that $C_{2j+1,i} \le 0$ and $C_{2j,i} \ge 0$ provided that $k$ is odd. \label{Rem:C_property} \end{Rem} The following theorem is an extension of a similar result reported in Ref.~\cite{Chau18} by means of an explicit expression of the inverse of a Vandermonde matrix. \begin{Thrm} Let $\mu_1 > \mu_2 > \cdots > \mu_k \ge 0$ and $\tilde{\mu}_1 > \tilde{\mu}_2 > \cdots > \tilde{\mu}_{\tilde{k}} \ge 0$. Suppose \begin{equation} \sum_{a,b=0}^{+\infty} \frac{\mu_i^a}{a!} \frac{\tilde{\mu}_j^b}{b!} A_{ab} \equiv \sum_{a,b=0}^{+\infty} M_{a+1,i} \tilde{M}_{b+1,j} A_{ab} = B_{ij} \label{E:A_B_relation} \end{equation} for all $i = 1,2,\cdots,k$ and $j = 1,2,\cdots,\tilde{k}$. Then, \begin{align} A_{ab} &= \sum_{i=1}^k \sum_{j=1}^{\tilde{k}} \left( M^{-1} \right)_{a+1,i} \left( \tilde{M}^{-1} \right)_{b+1,j} B_{ij} \nonumber \\ & \quad + \sum_{I=k}^{+\infty} C_{a+1,I} A_{Ib} + \sum_{J=\tilde{k}}^{+\infty} \tilde{C}_{b+1,J} A_{aJ} \nonumber \\ & \quad - \sum_{I=k}^{+\infty} \sum_{J=\tilde{k}}^{+\infty} C_{a+1,I} \tilde{C}_{b+1,J} A_{IJ} \label{E:explicit_A_relation} \end{align} for all $a = 0,1,\cdots,k-1$ and $b = 0,1,\cdots,\tilde{k}-1$. Here \begin{equation} \left( M^{-1} \right)_{a+1,i} = \frac{(-1)^{k-a-1} a! S_{ia}}{\prod_{t\ne i} (\mu_i - \mu_t)} \label{E:M_inverse} \end{equation} and similarly for $\left( \tilde{M}^{-1} \right)_{b+1,j}$. \label{Thrm:double_inversion} \end{Thrm} \begin{proof} Note that for any fixed $a = 0,1,\cdots,k-1$ and $j = 1,2,\cdots,\tilde{k}$, \begin{equation} \sum_{b=0}^{+\infty} \frac{\tilde{\mu}_{j}^b}{b!} A_{ab} = \sum_{i=1}^k \left( M^{-1} \right)_{a+1,i} \left( B_{ij} - \sum_{b=0}^{+\infty} \sum_{I=k}^{+\infty} \frac{\mu_i^I}{I!} \frac{\tilde{\mu}_j^b}{b!} A_{Ib} \right) .
\label{E:inversion_intermediate1} \end{equation} Here $M^{-1}$ is the inverse of the $k\times k$ matrix $(M_{a+1,i})_{a+1,i=1}^k$. From Ref.~\cite{Chau18}, the matrix elements of $M^{-1}$ are related to the inverse of a certain Vandermonde matrix and are given by the expression immediately after Eq.~\eqref{E:explicit_A_relation}. From Lemma~\ref{Lem:C_property}, Eq.~\eqref{E:inversion_intermediate1} can be rewritten as \begin{equation} \sum_{b=0}^{+\infty} \frac{\tilde{\mu}_j^b}{b!} A_{ab} = \sum_{i=1}^k \left( M^{-1} \right)_{a+1,i} B_{ij} + \sum_{b=0}^{+\infty} \sum_{I=k}^{+\infty} \frac{\tilde{\mu}_j^b}{b!} C_{a+1,I} A_{Ib} . \label{E:inversion_intermediate2} \end{equation} By repeating the above procedure, I find that for any fixed $a=0,1, \cdots,k-1$ and $b = 0,1,\cdots,\tilde{k}-1$, \begin{align} A_{ab} &= \sum_{i=1}^k \sum_{j=1}^{\tilde{k}} \left( M^{-1} \right)_{a+1,i} \left( \tilde{M}^{-1} \right)_{b+1,j} B_{ij} \nonumber \\ &\quad - \sum_{I=k}^{+\infty} \sum_{\tilde{t}=0}^{+\infty} C_{a+1,I} \tilde{C}_{b+1,\tilde{t}} A_{I\tilde{t}} + \sum_{J=\tilde{k}}^{+\infty} \tilde{C}_{b+1,J} A_{aJ} . \label{E:inversion_intermediate3} \end{align} Here the $\tilde{k}\times\tilde{k}$ matrix $\tilde{C}$ is defined in exactly the same way as the $k\times k$ matrix $C$ except that the variables $k$ and $\mu_t$ are replaced by $\tilde{k}$ and the corresponding $\tilde{\mu}_t$'s. Substituting Eq.~\eqref{E:C_value1} into the above equation gives Eq.~\eqref{E:explicit_A_relation}. \end{proof} Applying Lemma~\ref{Lem:C_property} and Theorem~\ref{Thrm:double_inversion}, in particular Inequality~\eqref{E:C_value2}, I arrive at the following two Corollaries. \begin{Cor} Suppose the conditions stated in Theorem~\ref{Thrm:double_inversion} are satisfied. Suppose further that $A_{ab} = 0$ for all $b > 0$ and $a\ge 0$; and $A_{a0} \in [0,1]$ for all $a$. Then \begin{equation} A_{00} \ge \sum_{i=1}^k \left( M^{-1} \right)_{1i} B_{i0} .
\label{E:A_0*_lower_bound} \end{equation} \label{Cor:Y0*} \end{Cor} \begin{Cor} Suppose the conditions stated in Theorem~\ref{Thrm:double_inversion} are satisfied. Suppose further that $A_{ab} \in [0,1]$ for all $a,b$. Then \begin{subequations} \begin{equation} A_{00} \ge \sum_{i=1}^k \sum_{j=1}^{\tilde{k}} \left( M^{-1} \right)_{1i} \left( \tilde{M}^{-1} \right)_{1j} B_{ij} - \sum_{I=k}^{+\infty} \sum_{J=\tilde{k}}^{+\infty} C_{1I} \tilde{C}_{1J} \label{E:A00_lower_bound} \end{equation} and \begin{equation} A_{11} \le \sum_{i=1}^k \sum_{j=1}^{\tilde{k}} \left( M^{-1} \right)_{2i} \left( \tilde{M}^{-1} \right)_{2j} B_{ij} \label{E:A11_upper_bound} \end{equation} provided both $k$ and $\tilde{k}$ are even. Furthermore, \begin{equation} A_{11} \ge \sum_{i=1}^k \sum_{j=1}^{\tilde{k}} \left( M^{-1} \right)_{2i} \left( \tilde{M}^{-1} \right)_{2j} B_{ij} - \sum_{I=k}^{+\infty} \sum_{J=\tilde{k}}^{+\infty} C_{2I} \tilde{C}_{2J} \label{E:A11_lower_bound} \end{equation} \end{subequations} if both $k$ and $\tilde{k}$ are odd. \label{Cor:bounds_on_Yxx} \end{Cor} \begin{Rem} Clearly, each of the bounds in the above Corollary is tight. Although the conditions for attaining the bound in Inequality~\eqref{E:A11_upper_bound} are not compatible with those for attaining the bounds in Inequalities~\eqref{E:A00_lower_bound} and~\eqref{E:A11_lower_bound}, the way I use these inequalities in Secs.~\ref{Sec:Rate} and~\ref{Sec:Finite_Size} ensures that it is possible to attain all these bounds in the key rate formula. \label{Rem:tightness} \end{Rem} \begin{acknowledgments} This work is supported by the RGC grant 17302019 of the Hong Kong SAR Government. \end{acknowledgments} \end{document}
/* * (c) Copyright 2015-2017 Micro Focus or one of its affiliates. * * Licensed under the MIT License (the "License"); you may not use this file * except in compliance with the License. * * The only warranties for products and services of Micro Focus and its affiliates * and licensors ("Micro Focus") are as may be set forth in the express warranty * statements accompanying such products and services. Nothing herein should be * construed as constituting an additional warranty. Micro Focus shall not be * liable for technical or editorial errors or omissions contained herein. The * information contained herein is subject to change without notice. */ package com.hp.autonomy.searchcomponents.idol.requests; import com.hp.autonomy.searchcomponents.core.search.QueryRestrictions; import com.hp.autonomy.searchcomponents.core.search.SuggestRequestTest; import com.hp.autonomy.searchcomponents.idol.search.IdolQueryRestrictions; import com.hp.autonomy.searchcomponents.idol.search.IdolSuggestRequest; import com.hp.autonomy.types.requests.idol.actions.query.params.PrintParam; import com.hp.autonomy.types.requests.idol.actions.query.params.SummaryParam; import com.hp.autonomy.types.requests.idol.actions.tags.params.SortParam; import org.apache.commons.io.IOUtils; import org.junit.Before; import java.io.IOException; import java.time.ZonedDateTime; public class IdolSuggestRequestTest extends SuggestRequestTest<IdolQueryRestrictions> { @Override @Before public void setUp() { super.setUp(); objectMapper.addMixIn(QueryRestrictions.class, IdolQueryRestrictionsMixin.class); } @Override protected IdolSuggestRequest constructObject() { return IdolSuggestRequestImpl.<String>builder() .reference("REFERENCE") .queryRestrictions(IdolQueryRestrictionsImpl.builder() .queryText("*") .fieldText("NOT(EMPTY):{FIELD}") .database("Database1") .minDate(ZonedDateTime.parse("2016-11-15T16:07:00Z[UTC]")) .maxDate(ZonedDateTime.parse("2016-11-15T16:07:01Z[UTC]")) .minScore(5) .languageType("englishUtf8") 
.anyLanguage(false) .stateMatchId("0-ABC") .stateDontMatchId("0-ABD") .build()) .start(1) .maxResults(50) .summary(SummaryParam.Concept.name()) .summaryCharacters(250) .sort(SortParam.Alphabetical.name()) .highlight(true) .print(PrintParam.Fields.name()) .printField("CATEGORY") .build(); } @Override protected String json() throws IOException { return IOUtils.toString(getClass().getResourceAsStream("/com/hp/autonomy/searchcomponents/idol/search/suggestRequest.json")); } }
Jalpurush to tour Himachal; Raj Bhavan to impart the mantra of saving water

Rajendra Singh, known worldwide as the "Jalpurush" (Waterman). (File photo)

Shimla. Rajendra Singh, famed worldwide as the Jalpurush, will be in Shimla on Tuesday to explain the value of every single drop of water and to impart the mantra of water conservation. At a seminar being organized at Himachal's Raj Bhavan, Rajendra Singh will deliver a lecture on methods of water conservation.
Audience: technical teams, engineers, and product development.

Configure your environment either to allow the OloAlohaService to send a command to log in your interface employee, OR to prevent ATG from logging your interface employee out.

1) Choose one of the following:
a) Add the SkipLogout lines to ATG's config file.
b) With your TAM, edit your configuration on the Olo side to AlwaysLoggedOn=false so that Olo performs a login whenever there is Olo work to do in Aloha.
2) Once confirmed, restart OloAlohaService so the new settings take effect.

Notes:
1) With Option b, you may want to discuss implementing a POS password for the interface employee.
2) The SkipLogout lines have been overwritten by refreshes in the past.
3) A possible workaround is to have Olo change your configuration to AlwaysLoggedIn=false. Discuss with your TAM or CSM.
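For Option a, the fragment below is a purely hypothetical sketch of what the ATG config change might look like. The SkipLogout setting name comes from this note, but the file, section name, and exact syntax shown here are illustrative assumptions only; confirm the real lines with your TAM before editing anything.

```ini
; HYPOTHETICAL example only - verify the exact file, section, and key names with your TAM.
; Option a: prevent ATG from logging the interface employee out.
[Settings]
SkipLogout=TRUE
```

Because refreshes have overwritten these lines in the past, re-verify them after any refresh.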
\begin{document} \begin{center} \LARGE {\bf \textsc{Posets of Geometric Graphs}} \end{center} \begin{center} \textsc{Debra L. Boutin} \textsc{Department of Mathematics} \textsc{Hamilton College, Clinton, NY 13323} \textsc {\sl [email protected]} \end{center} \begin{center} \textsc{Sally Cockburn} \textsc{Department of Mathematics} \textsc{Hamilton College, Clinton, NY 13323} \textsc {\sl [email protected]} \end{center} \begin{center} \textsc{Alice M. Dean} \textsc{Mathematics and Computer Science Department} \textsc{Skidmore College, Saratoga Springs, NY 12866} \textsc {\sl [email protected]} \end{center} \begin{center} \textsc{Andrei Margea} \textsc{Department of Computer Science} \textsc{University of Texas, Austin, TX 78712} \textsc {\sl [email protected]} \end{center} \begin{abstract} A {\it geometric graph} $\overline{G}$ is a simple graph drawn in the plane, on points in general position, with straight-line edges. We call $\overline{G}$ a {\it geometric realization} of the underlying abstract graph $G$. A {\it geometric homomorphism} $f:\overline{G} \to \overline{H}$ is a vertex map that preserves adjacencies and crossings (but not necessarily non-adjacencies or non-crossings). This work uses geometric homomorphisms to introduce a partial order on the set of isomorphism classes of geometric realizations of an abstract graph $G$. Set $\overline{G}\preceq \widehat{G}$ if $\overline{G}$ and $\widehat{G}$ are geometric realizations of $G$ and there is a vertex-injective geometric homomorphism $f:\overline{G} \to \widehat{G}$. This paper develops tools to determine when two geometric realizations are comparable. Further, for $3\leq n \leq 6$, this paper provides the isomorphism classes of geometric realizations of $P_n, C_n$ and $K_n$, as well as the Hasse diagrams of the geometric homomorphism posets (resp., ${\mathcal P}_n, {\mathcal C}_n, {\mathcal K}_n$) of these graphs. 
The paper also provides the following results for general $n$: each of ${\mathcal P}_n$ and ${\mathcal C}_n$ has a unique minimal element and a unique maximal element; if $k\leq n$ then ${\mathcal P}_k$ (resp., ${\mathcal C}_k$) is a subposet of ${\mathcal P}_n$ (resp., ${\mathcal C}_n$); and ${\mathcal K}_n$ contains a chain of length $n-2$. \end{abstract} \section{Introduction} The topic of graph homomorphisms has been a subject of growing interest; for an excellent survey of the area see \cite{HN}. In \cite{BC}, Boutin and Cockburn extend the theory of graph homomorphisms to geometric graphs. In this paper we use geometric homomorphisms to define and study posets of geometric realizations of a given abstract graph. Throughout this work the term graph means simple graph. A geometric graph $\overline{G}$ is a graph drawn in the plane, on points in general position, with straight-line edges. What we care about in a geometric graph is which pairs of vertices are adjacent and which pairs of edges cross. In particular, two geometric graphs are said to be isomorphic if there is a bijection between their vertex sets that preserves adjacencies, non-adjacencies, crossings, and non-crossings. A natural way to extend the idea of abstract graph homomorphism to the context of geometric graphs is to define a geometric homomorphism as a vertex map $f:\overline{G} \to \overline{H}$ that preserves both adjacencies and crossings (but not necessarily non-adjacencies or non-crossings). If such a map exists we write $\overline{G}\to\overline{H}$ and say that $\overline{G}$ is (geometrically) homomorphic to $\overline{H}$. There are many similarities between abstract graph homomorphisms and geometric graph homomorphisms, but there are also great contrasts. Results that are straightforward in the context of abstract graphs can become complex in the context of geometric graphs. 
In abstract graph homomorphism theory, two vertices cannot be identified under any homomorphism if and only if they are adjacent. However, in \cite{BC} we show that there are additional reasons why two vertices cannot be identified under any geometric homomorphism: if they are involved in a common edge crossing; if they are endpoints of an odd length path each edge of which is crossed by a common edge; if they are endpoints of a path of length two whose edges cross all edges of an odd length cycle. This list is likely not exhaustive. In abstract graph homomorphism theory, a graph is not homomorphic to a graph on fewer vertices if and only if it is a complete graph. In geometric homomorphism theory, there are many graphs other than complete graphs that are not homomorphic to any geometric graph on fewer vertices. For example, since vertices involved in a common crossing cannot be identified by any geometric homomorphism, there is no geometric homomorphism of a non-plane realization of $C_4$ into a geometric graph on fewer than four vertices. In abstract graph homomorphism theory, every graph on $n$ vertices is homomorphic to $K_n$. However, not all geometric graphs on $n$ vertices are homomorphic to a given realization of $K_n$. In fact, two different geometric realizations of $K_n$ are not necessarily homomorphic to each other. For example, consider the three geometric realizations of $K_6$ given in Figure~\ref{fig:three_k6}. Note that $\overline{G}_2$ has a vertex with all incident edges crossed, while $\overline{G}_3$ does not; this can be used to prove that there is no geometric homomorphism from $\overline{G}_2$ to $\overline{G}_3$. Also, $\overline{G}_3$ has more crossings than $\overline{G}_2$; this can be used to prove that there is no geometric homomorphism from $\overline{G}_3$ to $\overline{G}_2$. 
On the other hand we can easily argue that while there is no geometric homomorphism from $\overline{G}_2$ to $\overline{G}_1$, the map $f:\overline{G}_1\to \overline{G}_2$ implied by the given vertex numbering schemes is a geometric homomorphism. \begin{figure} \caption{Three geometric realizations of $K_6$} \label{fig:three_k6} \end{figure} Since homomorphisms are reflexive and transitive, it is natural to want to use them to induce a partial order. That is, we would like to define $\overline{G}\preceq \overline{H}$ when $\overline{G}\to \overline{H}$. However, homomorphisms are not necessarily antisymmetric. It is easy to find geometric (or abstract) graphs $\overline{G}$ and $\overline{H}$ so that $\overline{G}\to \overline{H}$ and $\overline{H}\to \overline{G}$ but $\overline{G}$ and $\overline{H}$ are not isomorphic. For example, let $\overline{H}$ be any geometric graph with a non-isolated vertex $z$. Add a vertex $x$ and edge $e$ between $x$ and $z$, positioned so that $e$ crosses no other edge of $\overline{H}$. Call this new graph $\overline{G}$. Identifying $x$ with any neighbor of $z$ gives us $\overline{G} \to \overline{H}$. The fact that $\overline{H}$ is a subgraph of $\overline{G}$ gives us $\overline{H}\to \overline{G}$. But clearly $\overline{G}$ and $\overline{H}$ are not isomorphic. Thus graph homomorphisms (whether abstract or geometric) do not induce a partial order since they are not antisymmetric. In \cite{HN}, Hell and Ne{\v{s}}et{\v{r}}il solve this problem for abstract graphs by using homomorphisms to define a partial order on the class of non-isomorphic cores of graphs. The {\it core} of an (abstract or geometric) graph is the smallest subgraph to which it is homomorphic. In the example above, $\overline{G}$ and $\overline{H}$ have isomorphic cores. In this paper we solve the problem by using geometric homomorphisms to define a partial order on the set of geometric realizations of a given abstract graph. 
That is to say, we let $\overline{G}\preceq \overline{H}$ if there is a geometric homomorphism $f:\overline{G}\to \overline{H}$ that induces an isomorphism on the underlying abstract graphs. This definition ensures that $\preceq$ is antisymmetric. The paper is organized as follows. In Section \ref{sect:basics} we give formal definitions and develop tools that help determine whether two geometric realizations are homomorphic. In Section \ref{sect:paths} we determine the isomorphism classes and resulting poset, ${\mathcal P}_n$, of realizations of the path $P_n$ with $2\leq n\leq 6$. Additionally we provide the following results: ${\mathcal P}_n$ has a unique minimal and a unique maximal element; if $k\leq n$, then ${\mathcal P}_k$ is a subposet of ${\mathcal P}_n$; for each positive integer $c$ less than or equal to the maximum number of crossings, there is at least one realization of $P_n$ with precisely $c$ crossings. In Section \ref{sect:cycles} we determine the isomorphism classes and resulting poset, ${\mathcal C}_n$, of realizations of the cycle $C_n$ with $3\leq n\leq 6$. We also show that ${\mathcal C}_n$ has a unique minimal element and a unique maximal element. Further, if $k\leq n$, then ${\mathcal C}_k$ is a subposet of ${\mathcal C}_n$. In Section \ref{sect:cliques} we determine the isomorphism classes and resulting poset, ${\mathcal K}_n$, of realizations of the complete graph $K_n$ with $3\leq n\leq 6$, and we prove that for all $n$, ${\mathcal K}_n$ contains a chain of length $n-2$. In Section \ref{sect:questions} we provide some open questions. \section{Basics, Tools, Examples}\label{sect:basics} A {\it geometric graph} $\overline{G}$ is a simple graph $G = \big( V(G), E(G) \big)$ together with a straight-line drawing of $G$ in the plane with vertices in general position (no three vertices are collinear and no three edges cross at a single point).
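Computationally, the crossing relation in such a drawing reduces to a standard orientation test on segment endpoints. The sketch below is our own illustration, not part of the paper; general position means no collinear triples, so strict inequalities suffice:

```python
def orient(p, q, r):
    # Sign of the cross product (q - p) x (r - p):
    # positive for a left turn, negative for a right turn.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    # Segments ab and cd cross iff the endpoints of each segment
    # strictly straddle the line through the other segment.
    return (orient(a, b, c) * orient(a, b, d) < 0
            and orient(c, d, a) * orient(c, d, b) < 0)

# Two edges of a crossed path drawing versus two disjoint edges:
print(segments_cross((0, 0), (2, 1), (1, -1), (1, 1)))  # True
print(segments_cross((0, 0), (1, 0), (2, 1), (3, 0)))   # False
```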
A geometric graph $\overline{G}$ with underlying abstract graph $G$ is called a {\it geometric realization} of $G$; the term {\em rectilinear drawing} is also used in the literature. Two geometric realizations of a graph are considered the same if they have the same vertex adjacencies and edge crossings. This is formalized below by extending the definition of graph isomorphism in a natural way to geometric graphs. \noindent{\bf Definition.} A geometric isomorphism, denoted $f:\overline{G} \to \overline{H}$, is a bijection $f:V(\overline G) \to V(\overline H)$ such that for all $u, v, x, y \in V(\overline G)$, \begin{enumerate} \item $uv \in E(\overline G)$ if and only if $f(u)f(v) \in E(\overline H)$, and \item $xy$ crosses $uv$ in $\overline{G}$ if and only if $f(x)f(y)$ crosses $f(u)f(v)$ in $\overline{H}$. \end{enumerate} If there exists a geometric isomorphism $f:\overline{G} \to \overline{H}$, we write $\overline{G} \cong \overline{H}$. Geometric isomorphism clearly defines an equivalence relation on the set of geometric realizations of a simple graph $G$. Note that in \cite{HH, HT} Harborth, et al., give a definition for isomorphism of geometric graphs that is stricter than the one given here. They require that a geometric isomorphism also preserve regions and parts of edges. Figure~\ref{fig:two_c6} shows two geometric realizations of $C_6$ that have the same crossings (and so are isomorphic by our definition) but have different regions (and so are not isomorphic in the sense of Harborth). One consequence of this is that for a given abstract graph there are (potentially) fewer isomorphism classes of realizations under our definition than under that of Harborth. \begin{figure} \caption{Realizations with the same crossings but different regions} \label{fig:two_c6} \end{figure} We similarly extend the definition of graph homomorphism to geometric graphs. 
\noindent{\bf Definition.} A geometric homomorphism, denoted $f:\overline{G} \to \overline{H}$, is a function $f:V(\overline G) \to V( \overline H)$ such that for all $u, v, x, y \in V(\overline G)$, \begin{enumerate} \item if $uv \in E(\overline G)$, then $ f(u)f(v) \in E(\overline H)$, and \item if $xy$ crosses $uv$ in $\overline{G}$, then $f(x)f(y)$ crosses $f(u)f(v)$ in $\overline{H}$. \end{enumerate} If there is a geometric homomorphism $f:\overline{G}\to \overline{H}$, we write $\overline{G} \stackrel{f}{\to} \overline{H}$ or simply $\overline{G}\to \overline{H}$, and we say that $\overline{G}$ is (geometrically) homomorphic to $\overline{H}$. \noindent{\bf Definition.} Let $\overline{G}$ and $ \widehat{G}$ be geometric realizations of a graph $G$. Set $\overline{G} \preceq \widehat{G}$ (or $\overline{G} \stackrel{f}{\preceq}\widehat{G}$) if there exists a (vertex) injective geometric homomorphism $f:\overline{G} \to \widehat{G}$. Note that since the abstract graphs underlying $\overline{G}$ and $\widehat{G}$ are the same, the fact that $f$ is injective and preserves adjacency means that $f$ induces an isomorphism from $G$ to itself. It is not difficult to see that this relation is reflexive, transitive, antisymmetric, and hence a partial order. \noindent{\bf Definition.} The {\it geometric homomorphism poset} of a graph is the set of geometric isomorphism classes of its realizations partially ordered by the relation $\preceq$. \subsection{The Edge Crossing and Line\slash Crossing Graphs}\label{subsec:LEX} Recall that the {\it line graph} of an abstract graph $G$, denoted $L(G)$, is the abstract graph whose vertices correspond to edges of $G$, with adjacency when the corresponding edges of $G$ are adjacent. In this section, we define the {\it edge crossing graph}, which is similar to the line graph except that it encodes edge crossings rather than edge adjacencies. 
In \cite{N} Ne{\v{s}}et{\v{r}}il, et al., proved a correspondence between graph homomorphisms and homomorphisms of their line graphs. We generalize this to geometric graphs and the union of their line and crossing graphs. \noindent{\bf Definition.} Let $\overline{G}$ be a geometric graph. Define the {\it edge crossing graph}, denoted $EX(\overline{G})$, to be the abstract graph whose vertices correspond to edges of $\overline{G}$, with adjacency when the corresponding edges of $\overline{G}$ cross. Define the {\it line\slash crossing graph}, denoted $LEX(\overline{G})$, to be the 2-edge-colored abstract graph whose vertices correspond to the edges of $\overline{G}$, with red edges in $LEX(\overline{G})$ corresponding to adjacent edge pairs in $\overline{G}$ and blue edges in $LEX(\overline{G})$ corresponding to crossing edge pairs in $\overline{G}$. In other words, the line\slash crossing graph of $\overline{G}$ is the union of the line graph of $G$ and the edge crossing graph of $\overline{G}$, with an added edge coloring to keep the meanings of these edges clear. Figure \ref{fig:P6} shows a geometric realization of $P_6$ and its line\slash crossing graph. The following theorem is well-known and important to our proofs. \begin{thm}\label{thm:linegraph}\rm \cite{W} If $G$ has more than four vertices, then $G\cong H \iff L(G)\cong L(H)$. \end{thm} \begin{prop} \label{prop:preceqiff}\rm Let $\overline{G}$ and $\overline{H}$ be geometric graphs on more than four vertices. Then $\overline{G}\preceq\overline{H}$ if and only if there exists a color-preserving graph homomorphism $\tilde f: LEX(\overline{G}) \to LEX(\overline{H})$ that restricts to an isomorphism from $L(G)$ to $L(H)$.\end{prop} Note that using Proposition~\ref{prop:preceqiff} requires that $\overline{G}$ and $\overline{H}$ have isomorphic underlying abstract graphs. Thus we may assume that $\overline{G}$ and $\overline{H}$ are geometric realizations of the same graph. 
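As a concrete companion to Proposition~\ref{prop:preceqiff}, the 2-edge-colored graph $LEX(\overline{G})$ can be assembled directly from vertex coordinates: red edges record shared endpoints (the line graph) and blue edges record segment crossings. The sketch below is our own; the coordinates are one illustrative placement of $P_6$ whose only crossing is $e_1\times e_3$, not the placement in the paper's figure:

```python
def orient(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    return (orient(a, b, c) * orient(a, b, d) < 0
            and orient(c, d, a) * orient(c, d, b) < 0)

def lex_graph(pos, edges):
    """Return (red, blue) pairs of edge indices: red pairs are adjacent
    in G, blue pairs cross in the drawing given by pos."""
    red, blue = set(), set()
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            u, v = edges[i], edges[j]
            if set(u) & set(v):
                red.add((i, j))
            elif segments_cross(pos[u[0]], pos[u[1]], pos[v[0]], pos[v[1]]):
                blue.add((i, j))
    return red, blue

pos = {1: (0, 0), 2: (3, 0), 3: (1, 1), 4: (2, -1), 5: (4, -1), 6: (5, 1)}
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]   # P_6
red, blue = lex_graph(pos, edges)
print(sorted(blue))  # [(0, 2)] : the single crossing e1 x e3
```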
An alternate way to phrase Proposition \ref{prop:preceqiff} is to say that if $\overline{G}$ and $\widehat{G}$ are geometric realizations of the abstract graph $G$, then $\overline{G}\preceq \widehat{G}$ if and only if there exists an isomorphism of the line graphs that induces a homomorphism of the edge crossing graphs. Thus if there is no injective homomorphism $\tilde f:EX(\overline{G}) \to EX(\widehat{G})$, then $\overline{G}\not\preceq \widehat{G}$. In particular, given $\overline{G}$ and $\widehat{G}$, if $EX(\overline{G})$ is not isomorphic to a subgraph of $EX(\widehat{G})$, then there is no geometric homomorphism $\overline{G} \to \widehat{G}$. However, Proposition \ref{prop:preceqiff} tells us something stronger. The proposition tells us that $\overline{G}\to \widehat{G}$ if and only if there is $\tilde f\in {\rm Aut}(L(G))$ so that $\tilde f (EX(\overline{G}))$ is a subgraph of $EX(\widehat{G})$. For graphs whose line graphs have small automorphism groups, this reduces the work significantly. \noindent{\bf Example.} In Figure \ref{fig:P6} we see a realization $\overline P_6$ of $P_6$. Note that $L(P_6) \cong P_5$ and $EX(\overline P_6) \cong P_2$. Each non-plane geometric realization of $P_6$ has an edge crossing graph with at least one edge, so theoretically there are many possible realizations $\widehat P_6$ to which ${\overline P}_6$ might be geometrically homomorphic. However, by Proposition \ref{prop:preceqiff} a vertex injective homomorphism $f:\overline P_6 \to \widehat P_6$, taking $EX(\overline P_6)$ to a subgraph of $EX(\widehat P_6)$, must induce an automorphism of $L(P_6)$. Recall that ${\overline P}_6$ as given in Figure \ref{fig:P6} has a single crossing that occurs between edges $e_1$ (with endpoints $1$ and $2$) and $e_3$ (with endpoints $3$ and $4$).
Applying the two automorphisms of $L(P_6)$ to this realization, we see that ${\overline P}_6 \to {\widehat P}_6$ if and only if ${\widehat P}_6$ has one of the crossings $e_1\times e_3$ or $e_3 \times e_5$. This restricts our search significantly. \begin{figure} \caption{$\overline{P}_6$ and its line\slash crossing graph} \label{fig:P6} \end{figure} \subsection{Parameters}\label{subsec:tools} We next define parameters that help determine whether there is a geometric homomorphism between two geometric realizations of the same graph. Proposition~\ref{prop:observe} lists several properties that follow easily from the definition of these parameters. \begin{figure} \caption{Vertex labels giving a geometric homomorphism from $\overline K_6$ to $\widehat K_6$} \label{fig:K6s} \end{figure} Figure~\ref{fig:K6s} shows two geometric realizations, $\overline K_6$ and $\widehat K_6$, of $K_6$. The vertex labeling gives a geometric homomorphism from $\overline K_6$ to $\widehat K_6$. Below we define several parameters for geometric embeddings, which we then use to demonstrate that there is no geometric homomorphism from $\widehat K_6$ to $\overline K_6$. In what follows we let $e_{i,j}$ denote the edge from vertex $i$ to vertex $j$. \noindent{\bf Definition.} If $\overline{G}$ is a geometric realization of a graph $G$, let $cr(\overline{G})$ denote the total number of crossings in $\overline{G}$. For $e\in E(\overline{G})$ let $cr(e)$ be the number of edges that cross the edge $e$ in $\overline{G}$. Let $E_0$ (resp., $E_\times$) denote the set of edges in $\overline{G}$ that have $cr(e) = 0$ \ (resp., $cr(e) > 0$). If $|E_\times| = 0$ we say that $\overline{G}$ is a {\it plane realization} of $G$. Let $G_0$ (resp., $G_\times $) denote the abstract graph that is the spanning subgraph of $G$ whose edge set is $E_0$ (resp., $E_\times$). A clique in $\overline{G}$ is called a {\em convex clique} if its vertices are in convex position.
The {\em convex clique number} of $\overline{G}$ is the maximum size of a convex clique in $\overline{G}$, denoted by $\widehat \omega(\overline{G})$. \begin{prop}\label{prop:observe}\rm Let $\overline{G}$ and $ \widehat{G}$ be geometric realizations of a graph $G$, and suppose $\overline{G}\stackrel{f} \preceq \widehat{G}$. Then each of the following conditions holds. \begin{enumerate} \item\label{a} $cr(\overline{G}) \leq cr(\widehat{G})$. \item\label{b} For each $e\in E(\overline{G})$, $cr(e) \leq cr(f(e))$. \item\label{c} $|E_0(\overline{G})| \geq |E_0(\widehat{G})|$ and $|E_\times(\overline{G})| \leq |E_\times(\widehat{G})|$. \item\label{d} $\widehat{G}_0$ is a subgraph of $\overline{G}_0$. \item\label{e} $\widehat{\omega}(\overline{G}) \leq \widehat{\omega}(\widehat{G})$. \end{enumerate} \end{prop} \noindent{\bf Example.} In the graphs in Figure \ref{fig:K6s}, $cr(\overline K_6) = 8$, $cr(\widehat K_6)=11$, $|E_0(\overline K_6)| = 7$, $|E_0(\widehat K_6)| = 6$, $\widehat{\omega}(\overline K_6) = 4$ and $\widehat{\omega}(\widehat K_6)=5$. Thus parts \ref{a}, \ref{c}, and \ref{e} of Proposition~\ref{prop:observe} each imply that there is no geometric homomorphism from $\widehat K_6$ to $\overline K_6$. \noindent{\bf Definition.} \rm Let $\overline{G}$ be a geometric realization of a graph $G$. For each $v\in V(\overline{G})$, let $d_0(v)$ be the number of uncrossed edges incident to $v$ and let $m(v)$ be the maximum number of times an edge that is incident to $v$ is crossed. \begin{prop}\label{prop:obs2} \rm If $\overline{G} \stackrel{f}{\preceq} \widehat{G}$ is a geometric homomorphism, then for each $v\in V(\overline{G})$, $d_0(v) \geq d_0(f(v))$ and $m(v) \leq m(f(v))$. \end{prop} \noindent{\bf Example.} In the example in Figure \ref{fig:K6s}, consider the vertex $v = 1$ in $\overline K_6$ and its image $f(v)$ in $\widehat K_6$. Then $d_0(v)=2$ and $m(v) = 2$, while $d_0(f(v))=2$ and $m(f(v))=3$. 
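These per-vertex parameters are mechanical to compute once a realization is summarized by its crossing pairs. The following small sketch is our own illustration, applied to a one-crossing realization of $P_6$ whose only crossing is $e_1\times e_3$:

```python
def parameters(edges, crossings):
    """edges: list of vertex pairs; crossings: set of pairs of edge indices.
    Returns the vectors D_0 and M, each sorted in non-increasing order."""
    cr = {i: 0 for i in range(len(edges))}   # cr(e) for each edge index
    for i, j in crossings:
        cr[i] += 1
        cr[j] += 1
    verts = sorted({v for e in edges for v in e})
    d0, m = {}, {}
    for v in verts:
        inc = [i for i, e in enumerate(edges) if v in e]
        d0[v] = sum(1 for i in inc if cr[i] == 0)   # uncrossed incident edges
        m[v] = max(cr[i] for i in inc)              # most-crossed incident edge
    return sorted(d0.values(), reverse=True), sorted(m.values(), reverse=True)

edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]  # P_6
crossings = {(0, 2)}                              # e1 x e3 (0-indexed)
print(parameters(edges, crossings))  # ([2, 1, 1, 1, 1, 0], [1, 1, 1, 1, 0, 0])
```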
An effective way to use Proposition~\ref{prop:obs2} is to compare the values of each parameter over all vertices at once. This motivates the following definitions. \noindent{\bf Definition.} For a geometric graph $\overline{G}$, let $D_0(\overline{G})$ (resp., $M(\overline{G})$) be the vector whose coordinates contain the values $\{d_0(v)\}_{v\in V(\overline{G})}$ (resp., $\{m(v)\}_{v\in V(\overline{G})}$), listed in non-increasing order. \noindent{\bf Definition.} \rm Given two vectors $\vec x$ and $\vec y$ in ${\mathbb Z}^n$, we say that $\vec x\leq \vec y$ if each coordinate of $\vec x$ is at most the corresponding coordinate of $\vec y$. Let $X$ and $Y$ be the vectors of values of $\vec x$ and $\vec y$, respectively, listed in non-increasing order. \begin{lem}\label{lem:downarrow} \rm Let $\vec x, \vec y \in \mathbb{Z}^n$. If $\vec x \leq \vec y$, then $X \leq Y$. \end{lem} \begin{proof} By definition, the first coordinate of $X$ is $X_1=\max \big\{ x_i \mid 1 \leq i \leq n\big \}$, and so $X_1 = x_{i_1}$ for some $i_1 \in \{1, \dots , n\}$. But by assumption, $X_1 = x_{i_1} \leq y_{i_1} \leq \max \big\{ y_i \mid 1 \leq i \leq n\big \} = Y_1.$ More generally, for all $2 \leq k \leq n$, the $k$-th entry of $X$ is less than or equal to at least $k$ entries of $\vec x$, say $x_{i_1},\ldots, x_{i_k}$. Further by assumption, for each $h$, $x_{i_h}\leq y_{i_h}$. Since there are (at least) $k$ coordinates of $\vec y$ that have value at least $X_k$, the value $Y_k$ must be at least that large. Thus the $k$-th entry of $X$ is less than or equal to at least $k$ coordinates of $\vec y$, and so $X_k \leq Y_k$. Thus $X \leq Y$.\end{proof} \begin{cor}\rm \label{cor:vector} \rm If $\overline{G} \stackrel{f}{\preceq} \widehat{G}$, then: \begin{enumerate} \item\label{cor1} $D_0(\overline{G}) \geq D_0(\widehat{G})$; \item\label{cor2} $M(\overline{G})\leq M(\widehat{G})$.
\end{enumerate} \end{cor} In Figure \ref{fig:K6s}, $D_0(\overline K_6) = (3, 3, 3, 2, 2, 1) \geq D_0(\widehat K_6)=(3, 2, 2, 2, 2, 1)$, while $M(\overline K_6)=(4, 4, 3, 3, 2, 2) \leq M(\widehat K_6)=(4, 4, 3, 3, 3, 2)$. \section{Posets for Geometric Paths}\label{sect:paths} We now determine ${\mathcal P}_n$, the geometric homomorphism poset of the path ${P}_n$ on $n$ vertices, for $n = 2, \ldots, 6$, and we state some properties of this poset for general $n$. Throughout this section, we denote the vertices of $P_n$ by $1, 2, \ldots, n$, and its edges by $e_i = \{i, i+1\}, i = 1, \ldots, n-1$. The following two lemmas are helpful in determining the geometric realizations of $P_n$. \begin{lem}\label{lem:p5crossingrule} \rm If a geometric graph $\overline{G}$ contains $P_5$ as a subgraph, with vertices and edges numbered as above, and if $e_1 \times e_3$ and $e_2 \times e_4$ are both crossings in $\overline{G}$, then so is $e_1 \times e_4$. \end{lem} \begin{proof} Suppose $\overline{G}$ has both of the crossings $e_1 \times e_3$ and $e_2 \times e_4$. Let $\ell$ be the line determined by edge $e_1$. Since $e_1$ crosses $e_3$, we may assume that vertex 3 lies above $\ell$ and vertex 4 lies below $\ell$, as indicated in Figure~\ref{fig:lemma3}. Let $C$ be the cone with vertex 4 and sides extending through vertices 1 and 2 (indicated by dashed lines in Figure~\ref{fig:lemma3}). For $e_3$ to cross $e_1$, both vertex 3 and edge $e_2$ must be inside $C$. For $e_4$ to cross $e_2$, both vertex 5 and edge $e_4$ must lie in the cone with vertex 4 and sides through 2 and 3, and vertex 5 must also lie above $e_2$. This forces $e_4$ to cross $e_1$ in addition to crossing $e_2$. \end{proof} \begin{figure} \caption{For use in the proof of Lemma \ref{lem:p5crossingrule}} \label{fig:lemma3} \end{figure} \begin{lem}\label{lem:pn_max} \rm A geometric realization of $P_n$ has at most ${(n-2)(n-3)}/{2}$ edge crossings.
Moreover, this bound is tight.\end{lem} \begin{proof} The only possible crossing edge pairs in any geometric realization of $P_n$ are of the form $e_i \times e_j$ with $j-i\ge 2$; thus for any $i = 1, \ldots, n-3$, there are $n-i-2$ higher-numbered edges that can cross $e_i$. Summing over $i$ gives the upper bound $$\sum_{i=1}^{n-3} (n-i-2) \;=\; \sum_{k=1}^{n-3} k \;=\; \frac{(n-2)(n-3)}{2}$$ on the number of edge crossings. Figure~\ref{fig:p7_maxcrossings} gives a geometric realization of $P_7$ that achieves this bound. \end{proof} \begin{figure} \caption{$\overline{P}_7$ achieving the maximum of 10 crossings} \label{fig:p7_maxcrossings} \end{figure} To determine up to isomorphism all the possible geometric realizations of $P_n$, $n = 2, \ldots, 5$, first list all sets whose elements are pairs of edges $e_i \times e_j$ with $j-i\ge 2$. Next eliminate any sets that violate Lemma~\ref{lem:p5crossingrule}, and identify any sets that are equivalent under an automorphism of $P_n$. Recall that the only two automorphisms of $P_n$ are the identity and the map that reverses the order of the vertices. Finally, check that each of the remaining sets corresponds to a geometric realization. To determine the structure of the geometric homomorphism poset, recall that by Proposition \ref{prop:preceqiff}, $\overline{P}_n \preceq {\widehat P}_n$ if and only if there is $\tilde f\in {\rm Aut}(L(P_n))$ that induces a graph homomorphism from $EX(\overline{P}_n)$ to $EX({\widehat P}_n)$. The graph $L(P_n) = P_{n-1}$ has only the two automorphisms mentioned above. Thus for each ordered pair of realizations, we need only check two automorphisms to see if they extend to color-preserving homomorphisms of the corresponding line\slash crossing graph. For $n = 2, \ldots, 5$, all sets that satisfy Lemma~\ref{lem:p5crossingrule} have geometric realizations. We state the poset results for $P_2, \ldots, P_5$ below. \begin{thm} \label{thm:p1top5posets} \rm Let ${\mathcal P_n}$ be the poset of geometric realizations of $P_n$.
\begin{enumerate} \item Each of ${\mathcal P}_2$ and ${\mathcal P}_3$ is trivial, containing only the plane realization. \item ${\mathcal P}_4$ is a chain of two elements, in which the plane realization is the unique minimal element, and the realization with crossing $e_1\times e_3$ is the unique maximal element. \item ${\mathcal P}_5$ has the following five non-isomorphic geometric realizations and Hasse diagram as given in Figure \ref{fig:P5_poset}: $0.1 = \emptyset$ (the plane realization); two 1-crossing realizations, $1.1 = \{e_1\times e_3\} \equiv \{e_2\times e_4\}$ and $1.2 = \{e_1\times e_4\}$; a single 2-crossing realization, $2.1 = \{e_1\times e_3, e_1\times e_4\} \equiv \{e_1\times e_4, e_2\times e_4\}$; and a single 3-crossing realization, $3.1 = \{e_1\times e_3, e_1\times e_4, e_2\times e_4\}$. \end{enumerate} \end{thm} \begin{proof} It is straightforward to find geometric realizations with the given sets of crossings. These realizations, together with their line/crossing graphs (which aid in determining the poset relations), appear in Appendix~\ref{app:paths}. It follows from Lemmas~\ref{lem:p5crossingrule} and \ref{lem:pn_max} that there are no other realizations. \end{proof} \begin{figure} \caption{The Hasse diagram for ${\mathcal P}_5$} \label{fig:P5_poset} \end{figure} For $P_6$, there is one set of crossing edge pairs that satisfies Lemma~\ref{lem:p5crossingrule} but does not correspond to a geometric realization of $P_6$, namely $\{e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_5, e_3\times e_5\}$. The following lemma shows that this set can also be eliminated. \begin{lem}\label{lem:p6rule} \rm Suppose a geometric graph $\overline{G}$ contains $P_6$ as a subgraph, with vertices and edges numbered in the standard way. If $e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_5$, and $e_3\times e_5$ are crossings in $\overline{G}$, then so is $e_2 \times e_4$.
\end{lem} \begin{proof} Since edges $e_1$ and $e_3$ cross, we may assume without loss of generality that $e_2$ is horizontal, and that $e_1$ and $e_3$ lie above $e_2$, as indicated in Figure~\ref{fig:claim2}. Since $\overline{G}$ contains the crossing $e_1\times e_4$, vertex 5 is in one of the regions $T, E$, or $S$ shown in Figure~\ref{fig:claim2}. If vertex 5 is in $E$ or $T$, then $e_5$ cannot cross all three of the edges $e_1, e_2,$ and $e_3$. Thus vertex 5 is in $S$, forcing the crossing $e_2 \times e_4$. \end{proof} \begin{figure} \caption{For use in the proof of Lemma~\ref{lem:p6rule}} \label{fig:claim2} \end{figure} We are now able to list all the non-isomorphic geometric realizations of $P_6$ and give the Hasse diagram for ${\mathcal P}_6$. \begin{thm}\label{thm:p6geoms} \rm The poset ${\mathcal P}_6$ has the following thirty-one non-isomorphic geometric realizations and has Hasse diagram as given in Figure \ref{fig:P6_poset}. {\em 0 crossings:} 0.1 = $\emptyset$; {\em 1 crossing:} 1.1 = $\{e_1 \times e_3\} \equiv \{e_3 \times e_5\}$, 1.2 = $\{e_1 \times e_4\} \equiv \{e_2 \times e_5\}$, 1.3 = $\{e_1 \times e_5\}$, 1.4 = $\{e_2 \times e_4\}$; {\em 2 crossings:} 2.1 = $\{e_1 \times e_3, e_1 \times e_4\}$, 2.2 = $\{e_1 \times e_3, e_1 \times e_5\}$, 2.3 = $\{e_1 \times e_3, e_2 \times e_5\}$, 2.4 = $\{e_1 \times e_3, e_3 \times e_5\}$, 2.5 = $\{e_1 \times e_4, e_1 \times e_5\}$, 2.6 = $\{e_1 \times e_4, e_2 \times e_4\}$, 2.7 = $\{e_1 \times e_4, e_2 \times e_5\}$, 2.8 = $\{e_1 \times e_5, e_2 \times e_4\}$; {\em 3 crossings:} 3.1 = $\{e_1 \times e_3, e_1 \times e_4, e_1 \times e_5\}$, 3.2 = $\{e_1 \times e_3, e_1 \times e_4, e_2 \times e_4\}$, 3.3 = $\{e_1 \times e_3, e_1 \times e_4, e_2 \times e_5\}$, 3.4 = $\{e_1 \times e_3, e_1 \times e_4, e_3 \times e_5\}$, 3.5 = $\{e_1 \times e_3, e_1 \times e_5, e_2 \times e_5\}$, 3.6 = $\{e_1 \times e_3, e_1 \times e_5, e_3 \times e_5\}$, 3.7 = $\{e_1 \times e_4, e_1 \times e_5, e_2 \times e_4\}$, 3.8 = $\{e_1 \times
e_4, e_1 \times e_5, e_2 \times e_5\}$, 3.9 = $\{e_1 \times e_4, e_2 \times e_4, e_2 \times e_5\}$; {\em 4 crossings:} 4.1 = $\{e_1 \times e_3, e_1 \times e_4, e_1 \times e_5, e_2 \times e_4\} \equiv \{e_1 \times e_5, e_2 \times e_4, e_2 \times e_5, e_3 \times e_5\}$, 4.2 = $\{e_1 \times e_3, e_1 \times e_4, e_1 \times e_5, e_2 \times e_5\} \equiv \{e_1 \times e_4, e_1 \times e_5, e_2 \times e_5, e_3 \times e_5\}$, 4.3 = $\{e_1 \times e_3, e_1 \times e_4, e_1 \times e_5, e_3 \times e_5\} \equiv \{e_1 \times e_3, e_1 \times e_5, e_2 \times e_5, e_3 \times e_5\}$, 4.4 = $\{e_1 \times e_3, e_1 \times e_4, e_2 \times e_4, e_2 \times e_5\} \equiv \{e_1 \times e_4, e_2 \times e_4, e_2 \times e_5, e_3 \times e_5\}$, 4.5 = $\{e_1 \times e_3, e_1 \times e_4, e_2 \times e_5, e_3 \times e_5\}$, 4.6 = $\{e_1 \times e_4, e_1 \times e_5, e_2 \times e_4, e_2 \times e_5\}$; {\em 5 crossings:} 5.1 = $\{e_1 \times e_3, e_1 \times e_4, e_1 \times e_5, e_2 \times e_4, e_2 \times e_5\} \equiv \{e_1 \times e_4, e_1 \times e_5, e_2 \times e_4, e_2 \times e_5, e_3 \times e_5\}$, 5.2 = $\{e_1 \times e_3, e_1 \times e_4, e_2 \times e_4, e_2 \times e_5, e_3 \times e_5\}$; {\em 6 crossings:} 6.1 = $\{e_1 \times e_3, e_1 \times e_4, e_1 \times e_5, e_2 \times e_4, e_2 \times e_5, e_3 \times e_5\}$. \end{thm} \begin{proof} It is straightforward to find geometric realizations with the given sets of crossings. These realizations, together with their line/crossing graphs (which aid in determining the poset relations), appear in Appendix~\ref{app:paths}. It follows from Lemmas~\ref{lem:p5crossingrule}, \ref{lem:pn_max}, and \ref{lem:p6rule} that there are no others. \end{proof} The following theorem lists some properties of ${\mathcal P}_n$ for $n \ge 3$. \begin{thm}\label{thm:pn_props} \rm For $n \ge 3$, ${\mathcal P}_n$ has the following properties. \begin{enumerate} \item\label{a1} There is a unique minimal element, corresponding to the plane realization of $P_n$. 
\item\label{a2} There is a unique maximal element, corresponding to the realization of $P_n$ with ${(n-2)(n-3)}/{2}$ crossings. \item\label{a3} ${\mathcal P}_n$ has a chain of size ${(n-2)(n-3)}/{2}+ 1$. In particular, for each $c$ with $0 \le c \le {(n-2)(n-3)}/{2}$, there is at least one realization of $P_n$ with exactly $c$ crossings. \item\label{a4} For $1 \le k \le n$, ${\mathcal P}_k$ is isomorphic to a sub-poset of ${\mathcal P}_n$. \end{enumerate} \end{thm} \begin{proof} Properties~\ref{a1} and \ref{a2} are easily seen to be true. For Property~\ref{a3}, consider a geometric realization ${\overline P}_n$ of $P_n$ with $c \ge 1$ crossings. Such a realization can be modified to create a new realization ${\widehat P}_n$ with $c-1$ crossings, and with ${\widehat P}_n \prec {\overline P}_n$, by sliding the vertex $n$ along edge $e_{n-1}$ until it passes over a crossing edge, and then erasing the section of $e_{n-1}$ that extends beyond this point. If $e_{n-1}$ has no crossings, we slide vertices $n$ and $n-1$ along edge $e_{n-2}$, erasing what remains of $e_{n-1}$ and $e_{n-2}$, and so on. We can continue in a similar manner to remove one crossing at a time until there are none left; the process is illustrated in Figure~\ref{fig:P7_embed}. Since Lemma~\ref{lem:pn_max} guarantees that $P_n$ has a realization with ${(n-2)(n-3)}/{2}$ crossings, Property~\ref{a3} follows. For Property~\ref{a4}, suppose we have some geometric realization of $P_k$. We can replace the uncrossed segment of edge $e_{k-1}$ nearest to vertex $k$ with a path from $k$ to $n$ to obtain a geometric realization of $P_n$ with the same crossings. By doing this for each realization of $P_k$, we see that its poset of geometric realizations is isomorphic to a sub-poset of the poset of geometric realizations of $P_n$. \end{proof} A {\em cover} of an element $x$ in a poset $\mathcal{P}$ is an element $y \in \mathcal{P}$ such that $x \prec y$ and no $z \in \mathcal{P}$ satisfies $x \prec z \prec y$. 
$\mathcal{P}$ is called a {\it graded} poset if there is a {\em rank} function $\rho:\mathcal{P}\to\mathbb{N}$ such that 1) all minimal elements have the same value under the rank function, 2) if $x \prec y$, then $\rho(x) < \rho(y)$, and 3) if $y$ covers $x$, then $\rho(y) = \rho(x) + 1$. Note that if a poset is graded, then all maximal chains between a given pair of elements must have the same length. A reasonable conjecture for geometric homomorphism posets is that the number of edge crossings acts as a rank function. Condition 1 holds trivially, and Condition 2 holds by Proposition~\ref{prop:observe}. However, Condition 3 fails to hold in exactly one instance in $\mathcal{P}_6$: realization 6.1 covers realization 4.3, yet has two more crossings. In fact, the poset $\mathcal{P}_6$ does not admit any rank function, because it has maximal chains between 0.1 and 6.1 of different lengths: $0.1 \prec 1.4 \prec 2.8 \prec 3.7 \prec 4.6 \prec 5.1 \prec 6.1$ and $0.1 \prec 1.1 \prec 2.2 \prec 3.5 \prec 4.3 \prec 6.1$. Hence, $\mathcal{P}_6$ is not a graded poset. It follows from Property~\ref{a4} that $\mathcal{P}_n$ is not a graded poset for any $n \geq 6$. \begin{figure} \caption{The Hasse diagram for ${\mathcal P}_6$} \label{fig:P6_poset} \end{figure} A {\it lattice} is a poset in which any two elements have a unique supremum (join) and a unique infimum (meet). Figure \ref{fig:P6_poset} shows that $\mathcal{P}_6$ is not a lattice because (for example) realizations 3.5 and 3.1 have both 4.3 and 4.2 as minimal upper bounds, and realizations 4.3 and 4.2 have both 3.5 and 3.1 as maximal lower bounds. It follows from Property~\ref{a4} that $\mathcal{P}_n$ is not a lattice for any $n \geq 6$.
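The enumeration procedure described earlier in this section is easily mechanized; the following sketch is ours, not the authors'. It lists all candidate crossing sets of pairs $e_i \times e_j$ with $j - i \ge 2$, discards those that violate Lemma~\ref{lem:p5crossingrule}, and identifies sets equivalent under the path-reversing automorphism $e_i \mapsto e_{n-i}$; for $P_5$ the five surviving classes match the five realizations of Theorem~\ref{thm:p1top5posets}.

```python
# Sketch (not from the paper): enumerating candidate crossing sets of P_5.
from itertools import combinations

n = 5
pairs = [(i, j) for i in range(1, n - 1) for j in range(i + 2, n)]
# pairs == [(1, 3), (1, 4), (2, 4)]

def ok(s):
    # Lemma "p5crossingrule": e_1 x e_3 and e_2 x e_4 force e_1 x e_4.
    return not ((1, 3) in s and (2, 4) in s and (1, 4) not in s)

def reverse(s):
    # The path-reversing automorphism sends e_i to e_{n-i}.
    return frozenset(tuple(sorted((n - i, n - j))) for i, j in s)

valid = [frozenset(s) for r in range(len(pairs) + 1)
         for s in combinations(pairs, r) if ok(frozenset(s))]
classes = {min(s, reverse(s), key=sorted) for s in valid}
print(len(valid), len(classes))  # 7 valid sets in 5 equivalence classes
```

For larger $n$ the same loop applies, with the additional filtering rules (e.g.\ Lemma~\ref{lem:p6rule}) added to `ok`.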
\begin{figure} \caption{Geometric realizations of $P_7$ with 10 crossings, 6 crossings, and 3 crossings} \label{fig:P7_embed} \end{figure} \section{Posets for Geometric Cycles}\label{sect:cycles} We now determine ${\mathcal C}_n$, the geometric homomorphism poset of the cycle ${C}_n$ on $n$ vertices, for $n = 3, \ldots, 6$, and we state some properties of this poset for general $n$. Throughout this section we denote the vertices of $C_n$ by $1, 2, \ldots, n$, and its edges by $e_i = \{i, i+1\}, i = 1, \ldots, n-1$, and $e_n = \{n, 1\}$. The maximum number of crossings in a geometric realization of $C_n$ was determined in 1977 by Furry and Kleitman~\cite{Furry77}; their results are summarized in the next lemma. \begin{lem} \rm {\cite{Furry77}}\label{lem:furry} For $n \geq 3$, a geometric realization of $C_n$ has at most ${n(n-3)}/{2}$ edge crossings if $n$ is odd and ${n(n-4)}/{2} + 1$ edge crossings if $n$ is even. Moreover, these bounds are tight. \end{lem} Figure~\ref{fig:c10_max} shows such a realization for $n=10$. \begin{figure} \caption{A realization of $C_{10}$ with the maximum number of crossings} \label{fig:c10_max} \end{figure} The techniques of the previous section can be used to find the elements of $\mathcal{C}_n$. For $3 \le n \le 5$, every set of crossing edge pairs that satisfies Lemma~\ref{lem:p5crossingrule} corresponds to a geometric realization. For $C_6$, this is the case for all sets of at most two crossings. For realizations with three or more crossings, some cases require additional lemmas. \begin{lem}\label{lem:insideoutside} \rm Suppose $\overline{C}_n$ is a geometric realization of the cycle $C_n$ that has crossings $e_i \times e_k$ and $e_j \times e_\ell$, where $i < j < k < \ell$. Then there is at least one additional crossing $e_\alpha \times e_\beta$ where $i \le \alpha \le k$ and $k \le \beta \le i$ (mod $n$) (and $\{\alpha, \beta\} \ne \{i, k\}$). \end{lem} \begin{proof} Suppose there is no such additional crossing.
Place a vertex $v$ at the crossing $e_i \times e_k$, subdividing each of those two edges. Starting at edge $\{v, i+1\}$ of the modified graph, follow the cycle in order of increasing vertex number, coloring each edge red, until the crossing $e_i \times e_k$ is reached again at edge $\{k, v\}$. Then follow the rest of the cycle, beginning at edge $\{v, k+1\}$, coloring each edge blue, until the final edge $\{i, v\}$ is reached. Note that edge $e_j$ is red and edge $e_\ell$ is blue. The red cycle is a closed, but not necessarily simple, polygonal curve in the plane. From the hypotheses of the lemma and our assumption that the conclusion is false, this red curve does not cross either of the edges $\{i, v\}$ and $\{v, k+1\}$, so these two edges lie in the same region of the plane determined by the red curve. If we now follow the blue curve starting at $k+1$, then the red-blue crossing $e_j \times e_\ell$ takes the blue curve into a different region of the plane determined by the red curve. But since we have assumed that the additional crossing of the lemma does not exist, the blue curve cannot return to end at vertex $i$, which is a contradiction. \end{proof} Lemma~\ref{lem:c6crossingrules} is a multi-part technical lemma. We prove the first part below; the proofs of the others, which are similar to the proof of Lemma~\ref{lem:p6rule}, appear in Appendix~\ref{app:cycles}. \begin{lem}\label{lem:c6crossingrules} \rm Let $\overline{C}_6$ be a geometric realization of the cycle $C_6$, with edges labeled consecutively, $e_1, e_2, \ldots, e_6$. \begin{enumerate} \item\label{cl1} If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4$, and $e_1\times e_5$, then it does not contain the crossing $e_2\times e_6$. \item\label{cl2} If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_4$, and $e_4\times e_6$, then it also contains the crossing $e_2\times e_5$.
\item\label{cl3} If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4, e_2\times e_4$, and $e_2\times e_5$, then it also contains at least one of the crossings $e_1\times e_5, e_3\times e_5, e_3\times e_6$. \item\label{cl4} If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4, e_2\times e_5$, and $e_4\times e_6$, then it also contains at least one of the crossings $e_2\times e_4, e_3\times e_5, e_3\times e_6$. \item\label{cl5} If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4, e_2\times e_5$, and $e_3\times e_6$, then it also contains at least one of the crossings $e_2\times e_4, e_2\times e_6, e_3\times e_5, e_4\times e_6$. \end{enumerate} \end{lem} \begin{proof} For part~\ref{cl1}, suppose $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4$, and $e_1\times e_5$. Let $h$ be the line through edge $e_1$. Since $e_1$ crosses every edge of the path joining vertices 3 and 6, the vertices 3 and 6 lie on opposite sides of $h$. Thus the edges $e_2=\{2, 3\}$ and $e_6=\{6, 1\}$ also lie on opposite sides of $h$ and so do not cross. \end{proof} Part~\ref{cl1} and its proof generalize to give us the following corollary. \begin{cor}\label{lem:cncrossingrule} \rm Let $\overline{C}_n$ be a geometric realization of the cycle $C_n$, where $n\ge 4$ is even. If $\overline{C}_n$ contains the crossings $e_1\times e_3, e_1\times e_4, \ldots, e_1\times e_{n-1}$, then it does not contain the crossing $e_2\times e_n$. \end{cor} To determine the elements of the poset ${\mathcal C}_6$, list all possible sets of crossing edge pairs, and delete the sets that do not satisfy Lemmas~\ref{lem:p5crossingrule}, \ref{lem:p6rule}, \ref{lem:insideoutside}, or~\ref{lem:c6crossingrules}. Next, identify those that are equivalent under an automorphism of $C_n$. There are $2n$ such automorphisms: the $n$ rotations and each of these composed with a reflection.
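For small cycles, the deletion-and-identification procedure just described can be checked mechanically. The sketch below (ours, not the authors') runs it for $C_5$, where applying Lemma~\ref{lem:p5crossingrule} to every $P_5$ subpath of the cycle is the only filter needed; the surviving classes have $0, 1, 2, 3$, and $5$ crossings, in agreement with Theorem~\ref{thm:c1toc6posets}.

```python
# Sketch (not from the paper): enumerating candidate crossing sets of C_5.
from itertools import combinations

n = 5
E = list(range(1, n + 1))                        # edges e_1 .. e_5, cyclic

def pair(a, b):
    return tuple(sorted((a, b)))

def shift(e, k):                                 # rotate edge labels by k
    return (e - 1 + k) % n + 1

cand = [pair(i, j) for i in E for j in E
        if i < j and j - i not in (1, n - 1)]    # non-adjacent edge pairs

def ok(s):
    # Apply the P_5 crossing rule to each subpath e_k, e_{k+1}, e_{k+2}, e_{k+3}:
    # e_k x e_{k+2} and e_{k+1} x e_{k+3} force e_k x e_{k+3}.
    for k in E:
        a, b, c, d = (shift(k, t) for t in range(4))
        if pair(a, c) in s and pair(b, d) in s and pair(a, d) not in s:
            return False
    return True

def canonical(s):
    # Canonical representative under the 2n dihedral automorphisms.
    images = []
    for k in range(n):
        rot = frozenset(pair(shift(i, k), shift(j, k)) for i, j in s)
        ref = frozenset(pair(n + 1 - i, n + 1 - j) for i, j in rot)
        images += [rot, ref]
    return min(images, key=sorted)

valid = [frozenset(s) for r in range(len(cand) + 1)
         for s in combinations(cand, r) if ok(frozenset(s))]
classes = {canonical(s) for s in valid}
print(sorted(len(c) for c in classes))  # [0, 1, 2, 3, 5]
```

Note in particular that no class with 4 crossings survives, which is why ${\mathcal C}_5$ is the five-element chain of Theorem~\ref{thm:c1toc6posets}.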
To determine the geometric homomorphisms among the remaining sets, recall that by Proposition~\ref{prop:preceqiff}, it suffices to look for automorphisms of the line graph $L(C_n)$ that extend to homomorphisms on the edge crossing graphs. Since $L(C_n) = C_n$, these automorphisms are precisely the $2n$ automorphisms mentioned above. Theorem~\ref{thm:c1toc6posets} lists the elements of the poset of geometric realizations of $C_n$ for $3 \le n \le 6$; all nontrivial realizations are given up to isomorphism. \begin{thm} \label{thm:c1toc6posets} \rm Let ${\mathcal C_n}$ be the poset of geometric realizations of $C_n$. \begin{enumerate} \item ${\mathcal C}_3$ is trivial, containing only the plane realization. \item ${\mathcal C}_4$ is a chain of two elements, in which the plane realization is the unique minimal element, and the realization with crossing $e_1\times e_3$ is the unique maximal element. \item ${\mathcal C}_5$ is a chain of five elements: the plane realization $0.1 = \emptyset$, $1.1 = \{e_1\times e_3\}$, $2.1 = \{e_1\times e_3, e_1\times e_4\}$, $3.1 = \{e_1\times e_3, e_1\times e_4, e_2\times e_4\}$, and $5.1 = \{e_1\times e_3, e_1\times e_4, e_2\times e_4, e_2\times e_5, e_3\times e_4\}$. 
\item ${\mathcal C}_6$ has the following twenty-six non-isomorphic geometric realizations and has Hasse diagram as given in Figure~\ref{fig:C6_poset}: {\em 0 crossings:} $0.1 = \emptyset$ (the plane realization); {\em 1 crossing:} $1.1 = \{e_1\times e_3\}$, $1.2 = \{e_1\times e_4\}$; {\em 2 crossings:} $2.1 = \{e_1\times e_3, e_1\times e_4\}, 2.2 = \{e_1\times e_3, e_1\times e_5\}$, $2.3 = \{e_1\times e_3, e_4\times e_6\}$; {\em 3 crossings:} $3.1 = \{e_1\times e_3, e_1\times e_4, e_1\times e_5\}, 3.2 = \{e_1\times e_3, e_1\times e_4, e_2\times e_4\}, 3.3 = \{e_1\times e_3, e_1\times e_4, e_2\times e_5\}, 3.4 = \{e_1\times e_3, e_1\times e_4, e_3\times e_5\}, 3.5 = \{e_1\times e_3, e_1\times e_4, e_3\times e_6\}, 3.6 = \{e_1\times e_3, e_1\times e_4, e_4\times e_6\}, 3.7 = \{e_1\times e_3, e_1\times e_5, e_3\times e_5\}, 3.8 = \{e_1\times e_4, e_2\times e_5, e_3\times e_6\}$; {\em 4 crossings:} $4.1 = \{e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_4\}, 4.2 = \{e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_5\}, 4.3 = \{e_1\times e_3, e_1\times e_4, e_1\times e_5, e_3\times e_5\}, 4.4 = \{e_1\times e_3, e_1\times e_4, e_2\times e_5, e_3\times e_5\}, 4.5 = \{e_1\times e_3, e_1\times e_4, e_3\times e_6, e_4\times e_6\}$; {\em 5 crossings:} $5.1 = \{e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_4, e_2\times e_5\}, 5.2 = \{e_1\times e_3, e_1\times e_4, e_2\times e_4, e_2\times e_5, e_3\times e_5\}, 5.3 = \{e_1\times e_3, e_1\times e_4, e_2\times e_4, e_2\times e_5, e_3\times e_6\}, 5.4 = \{e_1\times e_3, e_1\times e_4, e_2\times e_5, e_3\times e_6, e_4\times e_6\}$; {\em 6 crossings:} $6.1 = \{e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_4, e_2\times e_5, e_3\times e_5\}$, $6.2 = \{e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_4, e_2\times e_5, e_3\times e_6\}$; {\em 7 crossings:} $7.1 = \{e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_4, e_2\times e_5, e_3\times e_6, e_4\times e_6\}$. 
\end{enumerate} \end{thm} \begin{proof} It is straightforward to find geometric realizations with the given sets of crossings. These realizations, together with their line/crossing graphs (which aid in determining the poset relations), appear in Appendix~\ref{app:cycles}. It follows from Lemmas~\ref{lem:p5crossingrule}, \ref{lem:p6rule}, \ref{lem:furry}, \ref{lem:insideoutside}, and \ref{lem:c6crossingrules} that there are no other realizations. \end{proof} \begin{figure} \caption{The Hasse diagram for ${\mathcal C}_6$} \label{fig:C6_poset} \end{figure} Unlike the case of ${\mathcal P}_n$, we see from the geometric realizations of $C_5$ that not every possible number of crossings up to the maximum is necessarily achieved. On the other hand, $C_6$ has at least one realization with each number of crossings from 0 up to its maximum of 7. Furry and Kleitman have shown that this is representative of the geometric realizations of all odd and even cycles. This is stated in Theorem \ref{FurryKleitman} below. \begin{thm}\label{FurryKleitman} \rm \cite{Furry77} For $n$ even, $n\ge4$, $\overline{C}_n$ can have any number of crossings from 0 up to the maximum of $n(n-4)/2+1$. For $n$ odd, $n\ge3$, $\overline{C}_n$ can have any number of crossings from 0 up to the maximum of $n(n-3)/2$, except there is no geometric realization with $n(n-3)/2-1$ crossings. \end{thm} Finally we mention three more properties that hold in general for the poset ${\mathcal C}_n$. The first two are easily seen to be true, and the third follows since any realization of $C_n$ can be replaced by one of $C_{n+1}$, by subdividing the edge $e_n$ into two edges, $e_n$ and $e_{n+1}$, so that the new edge $e_{n+1}$ has no crossings. \begin{thm}\label{thm:cn_props} \rm For $n \ge 3$, the poset ${\mathcal C}_n$ has the following properties. \begin{enumerate} \item\label{b1} There is a unique minimal element, corresponding to the plane realization of $C_n$.
\item\label{b2} There is a unique maximal element, corresponding to the geometric realization with the maximum number of crossings, as given in Theorem~\ref{FurryKleitman}. \item\label{b3} For $3 \le k \le n$, the poset ${\mathcal C}_k$ is isomorphic to a sub-poset of ${\mathcal C}_n$. \end{enumerate} \end{thm} Note that $\mathcal{C}_6$ is not a graded poset because it has maximal chains between 0.1 and 6.1 of different lengths: $0.1 \prec 1.1 \prec 2.2 \prec 3.7 \prec 4.3 \prec 6.1$ and $0.1 \prec 1.1 \prec 2.2 \prec 3.1 \prec 4.1 \prec 5.1 \prec 6.1$. Thus by Property \ref{b3}, for all $n\geq 6$, $\mathcal{C}_n$ is not a graded poset. Also, $\mathcal{C}_6$ is not a lattice because, for example, realizations 2.1 and 2.2 do not have a unique minimal upper bound. By Property \ref{b3}, $\mathcal{C}_n$ is not a lattice for all $n\geq 6$. \section{Posets for Geometric Cliques}\label{sect:cliques} We now determine ${\mathcal K}_n$, the geometric homomorphism poset of the clique ${K}_n$ for $n = 3, \ldots, 6$, and we state some properties of this poset for general $n$. Throughout this section we denote the vertices of $K_n$ by $1, 2, \ldots, n$, and its edges by $e_{ij} = \{i, j\}, i \ne j \in \{1, \ldots, n\}$. In \cite{HT}, Harborth and Th\"{u}rmann give all non-isomorphic geometric realizations of $K_n$ for $3\leq n \leq 6$. Recall that their definition of geometric isomorphism is stricter than the one used here, so in general our set of non-isomorphic geometric realizations may be smaller than theirs; that is, two geometric realizations that Harborth and Th\"{u}rmann consider non-isomorphic may be isomorphic under our definition. However, for $K_3, K_4, K_5$, and $K_6$, all pairs that are non-isomorphic according to Harborth and Th\"{u}rmann are also non-isomorphic according to our definition. \begin{thm} \label{thm:k1tok6posets} \rm Let ${\mathcal K_n}$ be the poset of geometric realizations of $K_n$.
\begin{enumerate} \item ${\mathcal K}_3$ is trivial, containing only the plane realization. \item ${\mathcal K}_4$ is a chain of two elements, in which the plane realization is the unique minimal element, and the realization with crossing $e_{1,3}\times e_{2,4}$ is the unique maximal element. \item ${\mathcal K}_5$ is a chain of three elements: $1.1=\{e_{3,5}\times e_{2,4}\}$, $3.1=\{e_{1,4}\times e_{2,5}$, $e_{1,4}\times e_{3,5}$, $e_{2,4}\times e_{3,5}\}$, and $5.1 = \{e_{1,3}\times e_{2,4}$, $e_{1,3}\times e_{2,5}$, $e_{1,4}\times e_{2,5}$, $e_{1,4}\times e_{3,5}$, $e_{2,4}\times e_{3,5}\}$. \item ${\mathcal K}_6$ has Hasse diagram as given in Figure~\ref{fig:HasseK6poset} with fifteen non-isomorphic geometric realizations: {\em 3 crossings:} $3.1 = \{e_{1, 3}\times e_{2, 6}, e_{1, 4}\times e_{2, 5}, e_{3, 5}\times e_{4, 6}\}$; {\em 4 crossings:} $4.1 = \{e_{1, 3}\times e_{2, 6}, e_{1, 4}\times e_{3, 5}, e_{1, 4}\times e_{5, 6}, e_{3, 5}\times e_{4, 6}\}$; {\em 5 crossings:} $5.1 = \{e_{1, 3}\times e_{2,4}, e_{1, 3}\times e_{2, 6}, e_{1, 4}\times e_{2, 6}, e_{1, 4}\times e_{3, 6}, e_{2, 4}\times e_{3, 6}\}$, $5.2 =\{e_{1, 3}\times e_{2, 6}, e_{1, 4}\times e_{2, 6}, e_{1, 4}\times e_{3,5}, e_{1, 4}\times e_{3,6}, e_{3, 5} \times e_{4, 6} \}$; {\em 6 crossings:} $6.1 = \{e_{1, 4}\times e_{2, 5}, e_{1, 4}\times e_{2, 6}, e_{1, 4}\times e_{3, 6}, e_{2, 4}\times e_{3, 6}, e_{2, 5}\times e_{3, 6}, e_{2, 5}\times e_{4, 6} \}$; {\em 7 crossings:} $7.1 = \{e_{1, 3}\times e_{2, 5}, e_{1, 3}\times e_{2, 6}, e_{1, 4}\times e_{2, 5}, e_{1, 4}\times e_{3, 5}, e_{1, 4}\times e_{5, 6}, e_{1, 6}\times e_{2, 5}, e_{3, 5}\times e_{4, 6} \}$, $7.2 =\{e_{1, 3} \times e_{2, 6}, e_{1, 4} \times e_{2, 5}, e_{1, 4} \times e_{3, 5}, e_{1, 5} \times e_{4, 6}, e_{2, 4} \times e_{3, 5}, e_{2, 5} \times e_{4, 6}, e_{3, 5} \times e_{4, 6} \}$; {\em 8 crossings:} $8.1 = \{e_{1, 3}\times e_{2, 4}, e_{1, 3}\times e_{2, 6}, e_{1, 4}\times e_{3, 5}, e_{1, 4} \times e_{5, 6}, e_{1, 6}\times
e_{2,4}, e_{2, 4} \times e_{3, 5}, e_{2, 4} \times e_{5, 6}, e_{3, 5} \times e_{4, 6}\}$, $8.2 = \{e_{1, 3} \times e_{2,4}, e_{1, 3} \times e_{2, 6}, e_{1, 4} \times e_{2, 6}, e_{1, 4} \times e_{3, 6}, e_{1, 5} \times e_{2, 6}, e_{1, 5} \times e_{3,6}, e_{1, 5} \times e_{4, 6}, e_{2, 4} \times e_{3, 6}\}$; {\em 9 crossings:} $9.1=\{e_{1, 3} \times e_{2, 5}, e_{1, 3} \times e_{2, 6}, e_{1, 4} \times e_{2, 5}, e_{1, 4} \times e_{2, 6}, e_{1, 4} \times e_{3, 5}, e_{1, 4} \times e_{3, 6}, e_{2, 5} \times e_{3, 6}, e_{2, 5} \times e_{4, 6}, e_{3, 5} \times e_{4, 6} \}$, $9.2 = \{e_{1, 3} \times e_{2, 4}, e_{1, 3} \times e_{2, 5}, e_{1, 3} \times e_{2, 6}, e_{1, 4} \times e_{2, 5}, e_{1, 4} \times e_{2, 6}, e_{1, 4} \times e_{3, 6}, e_{2, 4} \times e_{3, 6}, e_{2, 5} \times e_{3, 6}, e_{2, 5} \times e_{4, 6}\}$; {\em 10 crossings:} $10.1 = \{e_{1, 3} \times e_{2, 4}, e_{1, 3} \times e_{2, 5}, e_{1, 3} \times e_{2, 6}, e_{1, 4} \times e_{2, 5}, e_{1, 4} \times e_{3, 5}, e_{1, 4} \times e_{5, 6}, e_{1, 6} \times e_{2, 5}, e_{2, 4} \times e_{3, 5}, e_{2, 4} \times e_{3, 6}, e_{3, 5} \times e_{4, 6}\}$; {\em 11 crossings:} $11.1 =\{e_{1, 3} \times e_{2,4}, e_{1, 3} \times e_{2,5}, e_{1, 3} \times e_{2, 6}, e_{1, 4} \times e_{2, 5}, e_{1, 4} \times e_{3, 5}, e_{1, 4} \times e_{5, 6}, e_{1, 6} \times e_{2, 4}, e_{1, 6} \times e_{2, 5}, e_{2, 4} \times e_{3, 5}, e_{2, 4} \times e_{5, 6}, e_{3, 5} \times e_{4, 6} \}$; {\em 12 crossings:} $12.1 =\{e_{1, 3} \times e_{2, 4}, e_{1, 3} \times e_{2, 5}, e_{1, 3} \times e_{2,6}, e_{1, 4} \times e_{2, 5}, e_{1, 4} \times e_{2, 6}, e_{1, 4} \times e_{3, 5}, e_{1, 4} \times e_{3, 6}, e_{2, 4} \times e_{3, 5}, e_{2, 4} \times e_{3, 6}, e_{2, 5} \times e_{3, 6}, e_{2, 5} \times e_{4, 6}, e_{3, 5} \times e_{4, 6}\}$; {\em 15 crossings:} $15.1 =\{e_{1, 3} \times e_{2, 4}, e_{1, 3} \times e_{2, 5}, e_{1, 3} \times e_{2, 6}, e_{1, 4} \times e_{2, 5}, e_{1, 4} \times e_{2, 6}, e_{1, 4} \times e_{3, 5}, e_{1, 4} \times e_{3, 6}, e_{1, 5} \times 
e_{2, 6}, e_{1, 5} \times e_{3, 6}, e_{1, 5} \times e_{4, 6}, e_{2, 4} \times e_{3, 5}, e_{2, 4} \times e_{3, 6}, e_{2, 5} \times e_{3, 6}, e_{2, 5} \times e_{4, 6}, e_{3, 5} \times e_{4, 6} \}$. \end{enumerate} \end{thm} \begin{proof} Since ${\rm Aut}(K_n)$ contains all possible permutations of the vertices, it does not make our job easier to first restrict our search for homomorphisms to those that are induced by automorphisms of the underlying abstract graph. Thus we use the tools of Subsection~\ref{subsec:tools} rather than those of Subsection~\ref{subsec:LEX}. Drawings of the realizations of $K_5$ and $K_6$, as well as justifications of the poset relations and non-relations, appear in Appendix \ref{app:cliques}. \end{proof} \begin{figure} \caption{The Hasse diagram for ${\mathcal K}_6$} \label{fig:HasseK6poset} \end{figure} Observe that $\mathcal{K}_6$ has five minimal elements and three maximal ones. As with cycles, not every possible number of crossings, from 3 up to the maximum of 15, is achieved; there are no realizations of $K_6$ with 13 or 14 crossings. Clearly, the number of edge crossings cannot act as a rank function. In fact, $\mathcal{K}_6$ is not a graded poset because it has maximal chains between 3.1 and 15.1 of different lengths: $3.1 \prec 7.2 \prec 15.1$ and $3.1 \prec 9.1 \prec 12.1 \prec 15.1$. Moreover, $\mathcal{K}_6$ is not a lattice, because realizations 3.1 and 4.1 do not have a unique minimal upper bound. Although $\mathcal{K}_6$ has no rank function, the function taking a realization to the number of vertices in the boundary of its convex hull is order-preserving. In Figure~\ref{fig:HasseK6poset}, all realizations displayed on the bottom level of the Hasse diagram have 3 vertices in the boundary of the convex hull, those on the second level have 4, those on the third level have 5, and realization 15.1 has 6. \begin{thm}\label{prop:chains} \rm For all $n \geq 3$, $\mathcal{K}_n$ contains a maximal chain of length $n-2$.
More precisely, $\mathcal{K}_n$ contains a chain of the form $$\overline H_3 \prec \overline H_4 \prec \cdots \prec \overline H_n$$ where $\overline{H}_k$ denotes a geometric realization of $K_n$ with $k$ vertices on the boundary of its convex hull. \end{thm} \begin{proof} We start with a template; consider the circle $x^2 + (y+1)^2 = 4$ in the $xy$ plane, together with the two tangent lines at $(- \sqrt{3}, 0)$ and $(\sqrt{3}, 0)$ that intersect at $(0, 3)$. Place $n-1$ vertices along the upper portion of the circle, starting at $(- \sqrt{3}, 0)$ and ending at $( \sqrt{3}, 0)$; they should be roughly evenly spaced, but in general position. Label the leftmost one $n$, and the remaining ones $2, 3, \dots, n-1$ from left to right. Add another vertex at $(0, 3)$ and label it $1$. To complete the template, for each $k \in \{2, 3, \dots, n-2\}$, add a ray from vertex $1$ through vertex $k$ and mark where it intersects the lower portion of the circle with $(n-k+1)^*$. See Figure \ref{fig:chains}. \begin{figure} \caption{Template for the proof of Theorem \ref{prop:chains}.} \label{fig:chains} \end{figure} Joining all pairs of vertices in the template with an edge gives us $\overline{H}_3$. Note that the boundary of its convex hull consists of vertices $1, n$ and $n-1$ and that all crossings in $\overline{H}_3$ occur in the geometric subgraph induced by the vertices $2, 3, \dots, n$. To get $\overline{H}_4$, slide vertex $n-1$ clockwise along the circle to position $(n-1)^*$. Then slide vertices $2$ through $n-2$ clockwise `one spot' along the upper portion of the circle. This is now a geometric realization of $K_n$ in which the boundary of the convex hull consists of the four vertices $1, n-2, n-1$ and $n$. Since vertices $2, 3, \dots, n$ are still in convex position, no crossings of $\overline{H}_3 $ have been lost. However, the edge $\{1, n-1\}$ now crosses edges $\{2, n\}, \{3, n\}, \dots, \{n-2, n\}$. Thus $\overline{H}_3 \prec \overline{H}_4$. 
To get $\overline{H}_5$, move vertex $n-2$ to position $(n-2)^*$ and shift vertices $2$ through $n-3$ clockwise another `one spot' along the circle. This gives a geometric realization in which the boundary of the convex hull consists of the vertices $1, n-3, n-2, n-1$ and $n$. Again, no edge crossings have been lost, but edge $\{1, n-2\}$ now crosses $\{2, n\}, \{3, n\}, \dots, \{n-3, n\}$. Iterating this process yields the chain described in the theorem. To prove the maximality of this chain, note that its final element $\overline{H}_n$ has all $n$ vertices in convex position and so has the maximum number of crossings of any realization of $K_n$ and therefore has no successor in $\mathcal{K}_n$. Next, suppose that $f: \overline{H}\to \overline{H}_3$ is an injective geometric homomorphism. By Proposition~\ref{prop:obs2}, all edges incident to vertex $v = f^{-1}(1)$ must be uncrossed. Let $w, x, y, z$ be any other four vertices in $\overline{H}$; if they are not in convex position, then one of them, say $w$, must lie in the interior of the convex hull of the other three. This would imply that the edge $\{v, w\}$ is crossed in $\overline{H}$, a contradiction. Hence all $n-1$ other vertices in $\overline{H}$ lie in convex position, implying that in fact $\overline{H} \cong \overline{H}_3$. \end{proof} Each of the posets $\mathcal{K}_4$ and $\mathcal{K}_5$ is precisely the chain given in Theorem \ref{prop:chains}. Within $\mathcal{K}_6$, the chain constructed in Theorem \ref{prop:chains} is $5.1 \prec 9.2 \prec 12.1 \prec 15.1$. \section{Open Questions}\label{sect:questions} \begin{enumerate} \item Are there (closed or recursive) formulas for the number of elements in $\mathcal{P}_n$ or $\mathcal{C}_n$? \item For $3 \leq k \leq n$, is $\mathcal{K}_k$ a sub-poset of $\mathcal{K}_n$? \item If $\overline{K}_n \prec \widehat{K}_n$, must the number of vertices in the convex hull of $\overline{K}_n$ be strictly less than that of $\widehat{K}_n$? 
If so, then the chain constructed in Theorem \ref{prop:chains} is a maxi\emph{mum} chain. \item We saw that $\mathcal{K}_6$ has a maximal chain of length 2. What is the length of a smallest possible maximal chain in $\mathcal{K}_n$? \item What is the geometric homomorphism poset for other common families of graphs? In \cite{Cockburn2010}, Cockburn has determined the geometric homomorphism poset $\mathcal{K}_{2,n}$ for one family of complete bipartite graphs. \end{enumerate} \begin{appendices} \cleardoublepage \appendixpage \section{Geometric Realizations and Posets for the paths $P_5$ and $P_6$}\label{app:paths} In this appendix we prove that $P_5$ and $P_6$ have the geometric realizations claimed in Theorems~\ref{thm:p1top5posets} and \ref{thm:p6geoms}, and we provide evidence to easily check the correctness of the Hasse diagrams for ${\mathcal P}_5$ and ${\mathcal P}_6$ given in Figures~\ref{fig:P5_poset} and \ref{fig:P6_poset}, respectively. Table~\ref{tab:p5} gives the four non-equivalent, non-plane, geometric realizations of $P_5$, using the labels given in Theorem~\ref{thm:p1top5posets}; it follows from Lemma~\ref{lem:p5crossingrule} that there are no other realizations. It is easy to see from Table~\ref{tab:p5} that the identity map is a geometric homomorphism from each realization to every other realization that has more crossings, justifying the Hasse diagram shown in Figure~\ref{fig:P5_poset}. \begin{table} \begin{center} \begin{tabular}{c} \centerline{\includegraphics[width=\textwidth,keepaspectratio]{p5_table}} \end{tabular} \caption{The four non-plane geometric realizations of $P_5$} \label{tab:p5} \end{center} \end{table} Table~\ref{tab:p6} shows the thirty non-plane geometric realizations of $P_6$, using the labels given in Theorem~\ref{thm:p6geoms}; it follows from Lemmas~\ref{lem:p5crossingrule} and \ref{lem:p6rule} that there are no others. Table~\ref{tab:p6x} shows the line/crossing graphs corresponding to each realization in Table~\ref{tab:p6}. 
The vertices, $e_i = \{i, i+1\}, i = 1, \ldots, 5$, run counterclockwise from the top left, and each vertex is labeled with the number of times the edge in the corresponding geometric realization of $P_6$ is crossed. The red, dashed edges belong to the line graph, and the black, solid edges belong to the crossing graph (so the vertex labels are the degrees in the crossing graph). As indicated in Proposition~\ref{prop:preceqiff}, given two geometric realizations of a graph $G$, $\overline{G}$ and $\widehat{G}$, the line/crossing graphs are useful tools to determine if $\overline{G}$ and $\widehat{G}$ are equivalent, or if $\overline{G} \prec \widehat{G}$. These tools are particularly useful when $G$ is a path $P_n$, because there are only two isomorphisms to check: the identity map $I_n$ and the map $R_n$ that reverses the order of the vertices. For $P_6$, we need only observe whether one of these two maps induces a color-preserving injection from the line/crossing graph of one realization ${\overline P}_6$ into the line/crossing graph of another realization $\widehat{P}_6$ to determine whether ${\overline P}_6 \preceq \widehat{P}_6$. For example, in Table~\ref{tab:p6x}, compare the line/crossing graph labeled 2.3 with the ones labeled 3.2, 3.3, and 3.4. It is easy to see that neither the identity nor the reversal map induces a color-preserving injection of 2.3 into 3.2, but the identity map is a color-preserving injection of 2.3 into 3.3, and the reversal map is a color-preserving injection of 2.3 into 3.4. Thus $2.3 \not\prec 3.2$, but $2.3 \prec 3.3$ and $2.3 \prec 3.4$. In this way Table~\ref{tab:p6x} can be used to verify the correctness of the Hasse diagram for ${\mathcal P}_6$ shown in Figure~\ref{fig:P6_poset}. 
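The identity/reversal check described above is easy to mechanize. The following sketch (ours; the crossing sets below are illustrative stand-ins, not the actual entries of Table~\ref{tab:p6x}) records each realization of $P_6$ by its set of crossing pairs $e_i \times e_j$ and tests whether a candidate edge map sends every crossing of one realization to a crossing of the other:

```python
def induces_injection(crossings_a, crossings_b, edge_map):
    """True if edge_map sends every crossing pair of A to a crossing pair of B,
    i.e. induces a color-preserving injection of crossing graphs."""
    return all(frozenset(map(edge_map, pair)) in crossings_b
               for pair in crossings_a)

# The only vertex isomorphisms of P_6 are the identity and the reversal
# v -> 7 - v, which acts on edge indices 1..5 as i -> 6 - i.
identity = lambda i: i
reversal = lambda i: 6 - i

# Hypothetical crossing sets (unordered pairs {i, j} meaning e_i x e_j):
A = {frozenset({1, 3}), frozenset({2, 4})}
B = {frozenset({1, 3}), frozenset({2, 4}), frozenset({2, 5})}
C = {frozenset({2, 4}), frozenset({3, 5}), frozenset({1, 4})}

print(induces_injection(A, B, identity))  # True: identity embeds A into B
print(induces_injection(A, C, reversal))  # True: reversal sends {1,3} to {3,5}
print(induces_injection(A, C, identity))  # False: {1,3} is not a crossing of C
```

Running the same check with the real crossing sets from Table~\ref{tab:p6x} reproduces the comparisons made in the text, e.g. that 2.3 embeds into 3.3 under the identity but not into 3.2 under either map.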
\begin{table} \begin{center} \begin{tabular}{c} \centerline{\includegraphics[width=.95\textwidth,keepaspectratio]{p6_table}}\end{tabular} \caption{The thirty non-plane geometric realizations of $P_6$} \label{tab:p6} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{c} \centerline{\includegraphics[width=0.98\textwidth,keepaspectratio]{p6x_table}} \end{tabular} \caption{The line/crossing graphs of the geometric realizations in Table~\ref{tab:p6}} \label{tab:p6x} \end{center} \end{table} \section{Geometric Realizations and Posets for the cycles $C_5$ and $C_6$}\label{app:cycles} In this appendix we prove that $C_5$ and $C_6$ have the geometric realizations claimed in Theorem~\ref{thm:c1toc6posets}, and we provide evidence to easily check the correctness of the Hasse diagram for ${\mathcal C}_6$ given in Figure~\ref{fig:C6_poset}. Table~\ref{tab:c5} lists the four non-equivalent, non-plane, geometric realizations of $C_5$, using the labels given in Theorem~\ref{thm:c1toc6posets}; it follows from Lemma~\ref{lem:p5crossingrule} that there are no other realizations. It is easy to see from Table~\ref{tab:c5} that the identity map is a geometric homomorphism from each realization to every other realization that has more crossings, making the Hasse diagram of ${\mathcal C}_5$ a chain of four elements. \begin{table} \begin{center} \begin{tabular}{c} \centerline{\includegraphics[width=0.75\textwidth,keepaspectratio]{c5_table}} \end{tabular} \caption{The four non-plane geometric realizations of $C_5$} \label{tab:c5} \end{center} \end{table} The structure of the poset $\mathcal{C}_6$ is based in part on Lemma~\ref{lem:c6crossingrules}. The proof of claim~\ref{cl1} appears in Section~\ref{sect:cycles}; for completeness, we include the proofs of the other claims below. \noindent {\bf Lemma~\ref{lem:c6crossingrules}.} Let $\overline{C}_6$ be a geometric realization of the cycle $C_6$, with edges labeled consecutively, $e_1, e_2, \ldots, e_6$. 
\begin{enumerate} \item If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4$, and $e_1\times e_5$, then it doesn't contain the crossing $e_2\times e_6$. \item If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4, e_1\times e_5, e_2\times e_4$, and $e_4\times e_6$, then it also contains the crossing $e_2\times e_5$. \item If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4, e_2\times e_4$, and $e_2\times e_5$, then it also contains at least one of the crossings $e_1\times e_5, e_3\times e_5, e_3\times e_6$. \item If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4, e_2\times e_5$, and $e_4\times e_6$, then it also contains at least one of the crossings $e_2\times e_4, e_3\times e_5, e_3\times e_6$. \item If $\overline{C}_6$ contains the crossings $e_1\times e_3, e_1\times e_4, e_2\times e_5$, and $e_3\times e_6$, then it also contains at least one of the crossings $e_2\times e_4, e_2\times e_6, e_3\times e_5, e_4\times e_6$. \end{enumerate} \begin{proof} The proof of claim 1 appears in Section~\ref{sect:cycles}. We prove claims 2-5 by contradiction, so for claim 2 assume that we do not have the crossing $e_2\times e_5$: Since we have the crossing $e_1\times e_3$, we may assume that the edge $e_2$ is horizontal and 1 and 4 lie above the line through this edge, as shown in Figure~\ref{pf:cl23}. Because we also have the crossings $e_1\times e_4$ and $e_2\times e_4$, it follows that vertex 5 is in the region labeled S, which is the part of the cone with vertex 4 and sides passing through 2 and 3 that lies below edge $e_2$. Since we have the crossing $e_1\times e_5$ but not $e_2\times e_5$, it follows that vertex 6 is in the region labeled N, which is the part of the cone with vertex 5 and sides passing through 1 and 3 that is above edge $e_1$. But then we cannot have the crossing $e_4\times e_6$, a contradiction. 
\begin{figure} \caption{For proof of claims 2 and 3} \label{pf:cl23} \end{figure} For claim 3, as in the proof of claim 2, having the crossings $e_1\times e_3, e_1\times e_4, e_2\times e_4$ implies that vertex 5 lies in the region of Figure~\ref{pf:cl23} labeled S. Since by assumption we have the crossing $e_2\times e_5$ but neither of the crossings $e_1\times e_5$ and $e_3\times e_5$, vertex 6 is forced to lie in the region labeled T, which in turn forces the crossing $e_3\times e_6$, a contradiction. For claim 4, by assumption we have the crossings $e_1\times e_3$ and $e_1\times e_4$ but not $e_2\times e_4$. It follows that vertex 5 is in one of the regions labeled T and E in Figure~\ref{pf:cl45}, in which the edge $e_2$ is horizontal. First suppose $5\in T$. Since we have crossing $e_2\times e_5$, vertex 6 lies below the horizontal line through edge $e_2$. But then the crossing $e_4\times e_6$ forces the crossing $e_3\times e_6$, a contradiction. So then $5 \in E$, but now it is impossible to have crossing $e_2\times e_5$ but not $e_3\times e_5$ or $e_3\times e_6$. \begin{figure} \caption{For proof of claims 4 and 5} \label{pf:cl45} \end{figure} For claim 5, as in the proof for claim 4, we may assume that vertex 5 is in one of the regions in Figure~\ref{pf:cl45} labeled T or E, since by assumption we have crossings $e_1\times e_3$ and $e_1\times e_4$ but not $e_2\times e_4$. If $5\in T$, then the crossing $e_2\times e_5$ implies that 6 lies below the horizontal line through edge $e_2$. But then the crossing $e_3\times e_6$ forces either $e_2\times e_6$ or $e_4\times e_6$, a contradiction in either case. So suppose that $5\in E$. In order to have the crossing $e_2\times e_5$ but not $e_3\times e_5$, edge $e_5$ must cross $e_2$ from below, i.e., 5 is below the horizontal line through $e_2$ and 6 is above that line. But then the crossing $e_3\times e_6$ forces the crossing $e_4\times e_6$, again a contradiction. 
\end{proof} Table~\ref{tab:c6} shows the twenty-five non-plane geometric realizations of $C_6$, using the labels given in Theorem~\ref{thm:c1toc6posets}; it follows from Lemmas~\ref{lem:p5crossingrule}, \ref{lem:p6rule}, \ref{lem:furry}, \ref{lem:insideoutside} and \ref{lem:c6crossingrules} that there are no other realizations. Table~\ref{tab:c6x} shows the line/crossing graphs corresponding to each realization in Table~\ref{tab:c6}. The vertices, $e_i = \{i, i+1\}, i = 1, \ldots, 6$, run counterclockwise from the top, and each vertex is labeled with the number of times the edge in the corresponding geometric realization of $C_6$ is crossed. The red, dashed edges belong to the line graph, and the black, solid edges belong to the crossing graph (so the vertex labels are the degrees in the crossing graph). As in Appendix~\ref{app:paths}, the line/crossing graphs are again useful tools to determine if ${\overline C}_6$ and $\widehat{C}_6$ are equivalent, or if ${\overline C}_6 \prec \widehat{C}_6$. The isomorphisms of $C_6$ are the six rotations, the reversal map, and the compositions of the rotations with the reversal map. We need only observe whether one of these maps induces a color-preserving injection of the line/crossing graph of one realization ${\overline C}_6$ to a subgraph of another realization $\widehat{C}_6$ to determine whether ${\overline C}_6 \preceq \widehat{C}_6$. For example, in Table~\ref{tab:c6x}, compare the line/crossing graph labeled 2.2 with the ones labeled 3.2, 3.4, and 3.5. It is easy to see that the identity map is a color-preserving injection of 2.2 into 3.5, a rotation is a color-preserving injection from 2.2 into 3.4, but there is no color-preserving injection from 2.2 into 3.2. 
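For cycles, the twelve symmetries of $C_6$ play the role that the identity and reversal play for paths. A small sketch (ours; the crossing sets below are illustrative, not entries of Table~\ref{tab:c6x}) enumerates the dihedral group as permutations of edge indices and checks the subset condition against each one:

```python
def dihedral_edge_maps(n):
    """All 2n symmetries of the cycle C_n, acting on edge indices 0..n-1
    (edge i joins vertices i and i+1 mod n)."""
    maps = []
    for k in range(n):
        maps.append(lambda i, k=k: (i + k) % n)      # rotation v -> v + k
        maps.append(lambda i, k=k: (k - i - 1) % n)  # reflection v -> k - v
    return maps

def precedes(crossings_a, crossings_b, n=6):
    """True if some symmetry of C_n sends every crossing pair of A
    to a crossing pair of B."""
    return any(all(frozenset(map(f, pair)) in crossings_b
                   for pair in crossings_a)
               for f in dihedral_edge_maps(n))

# Illustrative crossing sets (unordered pairs of 0-indexed edges):
A = {frozenset({0, 2})}
B = {frozenset({1, 3}), frozenset({2, 4})}
print(len(dihedral_edge_maps(6)), precedes(A, B), precedes(B, A))  # 12 True False
```

Since every symmetry preserves the cyclic distance between two edges, a crossing pair such as $\{0,2\}$ can never be carried onto a pair at a different distance, which the subset check detects automatically.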
\begin{table} \begin{center} \begin{tabular}{c} \centerline{\includegraphics[width=\textwidth,keepaspectratio]{c6_table}} \end{tabular} \caption{The twenty-five non-plane geometric realizations of $C_6$} \label{tab:c6} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{c} \centerline{\includegraphics[width=\textwidth,keepaspectratio]{c6x_table}} \end{tabular} \caption{The line/crossing graphs of the geometric realizations in Table~\ref{tab:c6}} \label{tab:c6x} \end{center} \end{table} \section{Geometric Realizations and Poset for the clique $K_6$}\label{app:cliques} In this appendix, we provide the details of the proof of Theorem~\ref{thm:k1tok6posets}. As noted there, the posets $\mathcal{K}_4$ and $\mathcal{K}_5$ are chains. Any vertex bijection is a geometric homomorphism from the plane realization of $K_4$ to the convex realization; with the vertex labeling on the different realizations of $K_5$ given in Figure~\ref{fig:K5Poset}, the identity is a geometric homomorphism from each realization to one with more crossings. \begin{figure} \caption{The three realizations of $K_5$.} \label{fig:K5Poset} \end{figure} Justifying the poset structure of $\mathcal{K}_6$, illustrated by the Hasse diagram in Figure~\ref{fig:HasseK6poset}, requires more work. Drawings of the realizations are given in Figure~\ref{fig:K6Poset}; with the vertex labelings shown, all covering relations are induced by the identity map, except those that are induced by the non-identity geometric homomorphisms listed in Table~\ref{tab:K6nonids}. 
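The maximal element 15.1 is the convex realization, and its crossing count can be checked mechanically: every 4 points in convex position contribute exactly one crossing, so a convex $K_6$ has $\binom{6}{4} = 15$ crossings. A brute-force sketch (ours; the segment-intersection helper is standard, not from the paper):

```python
from itertools import combinations
from math import cos, sin, pi, comb

def crosses(p, q, r, s):
    """True if open segments pq and rs cross (general-position assumption)."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

n = 6
pts = [(cos(2 * pi * k / n), sin(2 * pi * k / n)) for k in range(n)]
edges = list(combinations(range(n), 2))
crossings = sum(1 for (a, b), (c, d) in combinations(edges, 2)
                if len({a, b, c, d}) == 4            # skip edges sharing a vertex
                and crosses(pts[a], pts[b], pts[c], pts[d]))
print(crossings, comb(n, 4))  # 15 15
```

The same count for $n$ points in convex position gives the top element of each $\mathcal{K}_n$.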
\begin{table}[htdp] \caption{Non-identity geometric homomorphisms in $\mathcal{K}_6$.} \begin{center} \begin{tabular}{|c|c|} \hline 3.1 & 8.1\\ \hline 1 & 1\\ 2 & 6\\ 3 & 3\\ 4 & 4\\ 5 & 5\\ 6 & 2 \\ \hline \end{tabular} \quad \begin{tabular}{|c|c c c|} \hline 4.1 & 7.2 & 8.2 & 9.2 \\ \hline 1 & 3 & 6 & 1 \\ 2 & 6 & 5 & 5\\ 3 & 1 & 2 & 3 \\ 4 & 5 & 3 & 4 \\ 5 & 4 & 4 & 6 \\ 6 & 2 & 1 & 2 \\ \hline \end{tabular} \begin{tabular}{|c|c|} \hline 5.1 & 10.1 \\ \hline 1 & 1 \\ 2 & 2 \\ 3 & 3 \\ 4 & 4 \\ 5 & 6 \\ 6 & 5 \\ \hline \end{tabular} \quad \begin{tabular}{|c|c c|} \hline 5.2 & 8.1 & 8.2\\ \hline 1 & 2 & 6 \\ 2 & 3 & 5 \\ 3 & 6 & 4 \\ 4 & 4 & 3 \\ 5 & 5 & 2 \\ 6 & 1 & 1 \\ \hline \end{tabular} \quad \begin{tabular}{|c|c c|} \hline 8.2 & 11.1 & 12.1\\ \hline 1 & 1 & 3 \\ 2 & 5 & 2 \\ 3 & 4 & 1 \\ 4 & 6 & 6 \\ 5 & 3 & 5\\ 6 & 2 & 4\\ \hline \end{tabular} \end{center} \label{tab:K6nonids} \end{table} \begin{figure} \caption{The fifteen realizations of $K_6$.} \label{fig:K6Poset} \end{figure} We can justify the absence of covering relations in Figure~\ref{fig:HasseK6poset} by using the contrapositive of various results in Subsection~\ref{subsec:tools}. To do so, we record the relevant parameters for each element of $\mathcal{K}_6$ in Table~\ref{table:parameters}. Also, to ease the use of part (4) of Proposition~\ref{prop:observe}, we provide the subgraph of uncrossed edges for each of the realizations in Figure~\ref{fig:K60Poset}. 
\begin{table}[htdp] \caption{Parameters for elements of $\mathcal{K}_6$.} \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $\overline{G}$ & $cr(\overline{G})$ & $|E_0(\overline{G})|$ & $\widehat{\omega}(\overline{G})$ & $D_0(\overline{G})$ & $M(\overline{G})$ \\ \hline 3.1 & 3 & 9 & 4 & [3, 3, 3, 3, 3, 3] & [1, 1, 1, 1, 1, 1] \\ 4.1 & 4 & 9 & 4 & [4, 3, 3, 3, 3, 2] & [2, 2, 2, 2, 1, 1] \\ 5.1 & 5 & 10 & 5 & [5, 3, 3, 3, 3, 3] & [2, 2, 2, 2, 2, 0] \\ 5.2 & 5 & 9 & 4 & [4, 4, 3, 3, 2, 2] & [3, 3, 2, 2, 2, 2] \\ 6.1 & 6 & 9 & 4 & [4, 4, 4, 2, 2, 2] & [3, 3, 3, 3, 3, 3] \\ \hline 7.1 & 7 & 7 & 4 & [3, 3, 3, 2, 2, 1] & [3, 3, 3, 3, 2, 1] \\ 7.2 & 7 & 7 & 4 & [3, 3, 2, 2, 2, 2] & [3, 3, 3, 3, 2, 2] \\ 8.1 & 8 & 7 & 4 & [3, 3, 3, 2, 2, 1] & [4, 4, 3, 3, 2, 2] \\ 8.2 & 8 & 8 & 5 & [4, 3, 3, 2, 2, 2] & [3, 3, 3, 3, 3, 2] \\ 9.1 & 9 & 8 & 4 & [3, 3, 3, 3, 2, 2] & [4, 4, 4, 4, 2, 2] \\ 9.2 & 9 & 8 & 5 & [4, 3, 3, 2, 2, 2] & [4, 4, 3, 3, 3, 3] \\ \hline 10.1 & 10 & 5 & 5 & [2, 2, 2, 2, 2, 0] & [3, 3, 3, 3, 3, 1]\\ 11.1 & 11 & 6 & 5 & [3, 2, 2, 2, 2, 1] & [4, 4, 3, 3, 3, 2]\\ 12.1 & 12 & 7 & 5 & [3, 3, 2, 2, 2, 2] & [4, 4, 4, 4, 3, 3]\\ \hline 15.1 & 15 & 6 & 6 & [2, 2, 2, 2, 2, 2] & [4, 4, 4, 4, 4, 4]\\ \hline \end{tabular} \end{center} \label{table:parameters} \end{table} \begin{figure} \caption{The subgraphs of uncrossed edges of realizations of $K_6$.} \label{fig:K60Poset} \end{figure} Justifications for cases of nonprecedence in the poset $\mathcal{K}_6$ are recorded in Table \ref{table:nonprecedence}; each part of Proposition~\ref{prop:observe} and each part of Corollary~\ref{cor:vector} is used at least once. In some cases, there are several ways of justifying nonprecedence but only one is given. 
The following examples indicate how to read this table: \begin{itemize} \item the blank in the `$(4.1, 3.1)$' entry means that $4.1 \not \prec 3.1$ is justified simply by the total number of edge crossings (that is, using part (1) of Proposition~\ref{prop:observe}), \item ``\ref{cor:vector}(\ref{cor1})" in the `$(3.1, 4.1)$' entry means that $3.1 \not \prec 4.1$ is justified by an application of part (1) of Corollary \ref{cor:vector}; \item ``\ref{prop:observe}(\ref{c})" in the `$(3.1, 5.1)$' entry means that $3.1 \not \prec 5.1$ is justified by an application of part (3) of Proposition \ref{prop:observe}; \item ``$\prec$" in the `$(3.1, 8.1)$' entry means that $3.1$ is covered by $8.1$; that is, there is a geometric homomorphism $3.1 \to 8.1$ (this one is recorded in Table \ref{tab:K6nonids}), but no realization $\overline{K}_6$ such that $3.1 \prec \overline{K}_6 \prec 8.1$; \item ``$ \circ $'' in the `$(3.1, 10.1)$' entry means that a geometric homomorphism $3.1 \to 10.1$ can be obtained by composing two geometric homomorphisms (in this case, the two identity maps $3.1 \to 7.1$ and $7.1 \to 10.1$). 
\end{itemize} \begin{sidewaystable}[htdp] \renewcommand{\arraystretch}{1.5} \caption{Justifications for nonprecedence in $\mathcal{K}_6$.} \begin{center} \begin{tabular}{|c||c|c|c|c|c||c|c|c|c|c|c||c|c|c||c|} \hline & 3.1 & 4.1 & 5.1& 5.2 & 6.1 & 7.1 & 7.2 & 8.1& 8.2 & 9.1 & 9.2 & 10.1 & 11.1 & 12.1 & 15.1 \\ \hline \hline 3.1 & = & \ref{cor:vector}(\ref{cor1}) & \ref{prop:observe}(\ref{c}) & \ref{cor:vector}(\ref{cor1}) & \ref{cor:vector}(\ref{cor1}) & $\prec$ & $\prec$ & $\prec$ & \ref{cor:vector}(\ref{cor1}) & $\prec$ & \ref{cor:vector}(\ref{cor1}) & $\circ$ & $ \circ $ & $ \circ $ & $ \circ$ \\ \cline{2-16} 4.1& \multicolumn{1}{c |}{} & = & \ref{prop:observe}(\ref{c}) & \ref{cor:vector}(\ref{cor1}) & \ref{cor:vector}(\ref{cor1}) & $\prec$ & $\prec$ & $\prec$ & $\prec$ & \ref{prop:observe}(\ref{d}) & $\prec$ & $\circ$ & $ \circ $ & $ \circ $ & $ \circ$ \\ \cline{3-16} 5.1 & \multicolumn{2}{c |}{} & = & \ref{prop:observe}(\ref{e}) & \ref{prop:observe}(\ref{e}) & \ref{prop:observe}(\ref{e}) & \ref{prop:observe}(\ref{e}) & \ref{prop:observe}(\ref{e}) & $\prec$ & \ref{prop:observe}(\ref{e}) & $\prec$ & $\prec$ & $ \circ $ & $ \circ$ & $\ \circ$ \\ \cline{4-16} 5.2 & \multicolumn{2}{c |}{} & \ref{prop:observe}(\ref{c}) & = & \ref{cor:vector}(\ref{cor1}) & \ref{cor:vector}(\ref{cor2}) & \ref{prop:observe}(\ref{d})+(\ref{cor2}) & $\prec$ & $\prec$ & $\prec$ & \ref{prop:observe}(\ref{d}) & \ref{cor:vector}(\ref{cor2}) & $ \circ $ &$ \circ $ & $ \circ$ \\ \cline{4-16} 6.1 & \multicolumn{4}{c |}{} & = & \ref{cor:vector}(\ref{cor2}) & \ref{cor:vector}(\ref{cor2}) & \ref{cor:vector}(\ref{cor2}) & \ref{cor:vector}(\ref{cor2}) & \ref{cor:vector}(\ref{cor1}) & $\prec$ & \ref{cor:vector}(\ref{cor2}) & \ref{cor:vector}(\ref{cor2}) & $ \circ $ & $ \circ$ \\ \hline \hline 7.1 & \multicolumn{5}{c ||}{} & = & \ref{cor:vector}(\ref{cor1}) & \ref{prop:observe}(\ref{d}) & \ref{prop:observe}(\ref{c}) & \ref{prop:observe}(\ref{c})& \ref{prop:observe}(\ref{c}) & $\prec$ & $\prec$ & 
\ref{cor:vector}(\ref{cor1}) & \ref{cor:vector}(\ref{cor1}) \\ \cline{7-16} 7.2 & \multicolumn{5}{c ||}{} & \ref{cor:vector}(\ref{cor2}) & = & \ref{cor:vector}(\ref{cor1}) & \ref{prop:observe}(\ref{c}) & \ref{prop:observe}(\ref{c}) & \ref{prop:observe}(\ref{c}) & \ref{cor:vector}(\ref{cor2}) & \ref{prop:observe}(\ref{d}) & \ref{prop:observe}(\ref{d}) & $\prec$ \\ \cline{7-16} 8.1 & \multicolumn{7}{c |}{} & = & \ref{prop:observe}(\ref{c}) & \ref{prop:observe}(\ref{c}) & \ref{prop:observe}(\ref{c}) & \ref{cor:vector}(\ref{cor2}) & $\prec$ & \ref{prop:observe}(\ref{d}) & \ref{prop:observe}(\ref{d}) \\ \cline{9-16} 8.2 & \multicolumn{7}{c |}{} & \ref{cor:vector}(\ref{cor2}) & = & \ref{prop:observe}(\ref{e}) & \ref{prop:observe}(\ref{d}) & \ref{cor:vector}(\ref{cor2}) & $\prec$ & $\prec$ & $\circ$ \\ \cline{9-16} 9.1 & \multicolumn{9}{c |}{} & = & \ref{cor:vector}(\ref{cor2}) & \ref{cor:vector}(\ref{cor2}) & \ref{cor:vector}(\ref{cor2}) & $\prec$ & $ \circ $ \\ \cline{11-16} 9.2 & \multicolumn{9}{c |}{} & \ref{prop:observe}(\ref{e}) & = & \ref{cor:vector}(\ref{cor2}) & \ref{cor:vector}(\ref{cor2}) & $\prec$ & $\circ $ \\ \cline{10-16} \hline \hline 10.1 & \multicolumn{11}{c ||}{} & = & \ref{prop:observe}(\ref{c}) & \ref{prop:observe}(\ref{c}) & \ref{prop:observe}(\ref{c}) \\ \cline{13-16} 11.1 & \multicolumn{12}{c |}{} & = & \ref{prop:observe}(\ref{c}) & \ref{prop:observe}(\ref{d}) \\ \cline{14-16} 12.1 & \multicolumn{13}{c |}{} & = & $\prec$ \\ \cline{15-16} 15.1 & \multicolumn{14}{c ||}{} & = \\ \hline \end{tabular} \end{center} \label{table:nonprecedence} \end{sidewaystable} One entry in this table deserves further elaboration. To show $5.2 \not \prec 7.2$, we first look at the uncrossed subgraphs in Figure \ref{fig:K60Poset}. Note that the uncrossed subgraph of $7.2$ consists of a 6-cycle with one `diameter' chord (the edge $e_{3,6}$); the uncrossed subgraph of $5.2$ has only one 6-cycle, namely $(1,2,3,4,5,6)$, which has only one `diameter' chord, namely $e_{2,5}$. 
Hence, the only possible pre-image of the uncrossed four-cycle $(1, 2, 3, 6)$ of $7.2$ must be either the uncrossed 4-cycle $(2, 3, 4, 5)$ or the uncrossed 4-cycle $(1, 2, 5, 6)$ of $5.2$. Now, in $7.2$, $e_{1,3}$ and $e_{2,6}$ cross only each other, so $cr(e_{1,3}) = cr(e_{2,6}) = 1$. However, in $5.2$, there are only two possible pre-images of this pair of edges, namely $e_{2,4}, e_{3,5}$ or $e_{1,5}, e_{2,6}$. Since $cr (e_{3,5}) =cr (e_{2,6}) = 2$, by part (2) of Proposition \ref{prop:observe}, $5.2 \not \prec 7.2$. \end{appendices} \cleardoublepage \end{document}
\begin{document} \title{Quantum Statistical Mechanics on a Quantum Computer} \section{Introduction} Recent theoretical work has shown that a Quantum Computer (QC) has the potential of solving certain computationally hard problems such as factoring integers and searching databases much faster than a conventional computer. \cite{SHORone,CHUANGone,KITAEV,GROVERzero,GROVERone,GROVERtwo} The idea that a QC might be more powerful than an ordinary computer is based on the notion that a quantum system can be in any superposition of states and that interference of these states allows exponentially many computations to be done in parallel.\cite{AHARONOVone} This hypothetical power of a QC might be used to solve other difficult problems as well, such as for example the calculation of the physical properties of quantum many-body systems.\cite{CERFone,ZALKAone,TERHALone} As a matter of fact, part of Feynman's original motivation to consider QC's was that they might be used as a vehicle to perform exact simulations of quantum mechanical phenomena.\cite{FEYNMAN} For future applications it is clearly of interest to address the question of how to program a QC such that it performs a simulation of specific physical systems. In this paper we describe a quantum algorithm (QA) to compute the equilibrium properties of quantum systems by making a few statistically uncorrelated runs, starting from random initial states. Exploiting the intrinsic parallelism of the hypothetical QC, this QA executes in polynomial time. En route this QA computes the density of states (DOS), from which all the eigenvalues of the model Hamiltonian may be determined. We test our QA on a software implementation of a 21-qubit QC by explicit calculations for an antiferromagnetic Heisenberg model on a triangular lattice. \section{Theory} Consider the problem of computing the physical properties of a quantum system, described by a Hamiltonian $H$, in thermal equilibrium at a temperature $T$. 
In the canonical ensemble this equilibrium state is characterised by the partition function $Z= {\rm Tr} \exp(-\beta H)$ where $\beta=1/k_B T$ and $k_B$ is Boltzmann's constant (we put $k_B=1$ and $\hbar=1$ in the rest of this paper). The dimension of the Hilbert space of physical states will be denoted by $D$. If all the eigenvalues $\{E_i\,;\,i=1,\ldots,D\}$ of $H$ are known, we can make use of the fact that $Z=\sum_{i=1}^D\exp(-\beta E_i)$ to reduce the computation of the physical properties to a classical statistical mechanical problem which can be solved by standard probabilistic methods.\cite{HAMMERSLEY} For a non-trivial quantum many-body system the determination of the eigenvalues is a difficult computational problem itself, in practice as difficult as the calculation of $Z$. In view of this we will from now on assume that the eigenvalues of $H$ are not known. The DOS \begin{equation} \label{epsilon} {\cal D}(\epsilon)=\sum_i \delta(\epsilon-E_i) = {1\over 2\pi}\int_{-\infty}^{+\infty} e^{it\epsilon}\, {\rm Tr} e^{-itH}\, dt , \end{equation} determines the equilibrium state of the system. Indeed, once ${\cal D}(\epsilon)$ is known, $Z$ is easy to calculate: \begin{equation} \label{integral} Z= \int_{-\infty}^{+\infty} e^{-\beta \epsilon} {\cal D}(\epsilon)\, d\epsilon . \end{equation} Integral~(\ref{integral}) exists whenever the spectrum of $H$ has a lower bound, i.e. ${\cal D}(\epsilon)=0$ for $-\infty<\epsilon < \min_i E_i$. Note that ${\cal D}(\epsilon)$ is a real-valued function. With suitable (time-dependent) modifications of $H$, the partition function plays the role of a generating function from which all physical quantities of interest can be obtained. 
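Once the DOS, or equivalently the spectrum, is in hand, the integral~(\ref{integral}) collapses to a finite sum over the delta peaks, and thermal expectation values follow the same way. A minimal NumPy sketch (ours, independent of the paper's QA; the energy shift is a standard numerical-stability trick):

```python
import numpy as np

def thermo_from_spectrum(E_i, beta):
    """Z, E = <H>, and C = beta^2 (<H^2> - <H>^2) from the eigenvalues,
    i.e. the DOS integrals reduced to sums over delta peaks."""
    w = np.exp(-beta * (E_i - E_i.min()))          # shifted Boltzmann weights
    Zs = w.sum()
    E = (E_i * w).sum() / Zs                       # shift cancels in ratios
    C = beta**2 * ((E_i**2 * w).sum() / Zs - E**2)
    Z = Zs * np.exp(-beta * E_i.min())             # undo the shift for Z itself
    return Z, E, C

# Sanity check on a two-level system H = diag(0, 1), where Z = 1 + e^{-beta}:
Z, E, C = thermo_from_spectrum(np.array([0.0, 1.0]), beta=2.0)
```

For the two-level system this reproduces $E = e^{-\beta}/(1+e^{-\beta})$ and a positive specific heat, as it must.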
Physical quantities such as the energy and specific heat are given by \begin{equation} \label{min9} E= \EXPECT{H} = {1\over Z}\int_{-\infty}^{+\infty} \epsilon e^{-\beta\epsilon} {\cal D}(\epsilon)\, d\epsilon , \end{equation} and \begin{equation} \label{min8} C= \beta^2(\EXPECT{H^2}-\EXPECT{H}^2) = \beta^2\left( {1\over Z} \int_{-\infty}^{+\infty} \epsilon^2 e^{-\beta\epsilon} {\cal D}(\epsilon)\, d\epsilon - E^2\right) , \end{equation} respectively. \magweg{Time-dependent properties, e.g. dynamic structure factors can be obtained in this way as well.\cite{PEDROone} } From~(\ref{epsilon}) it is clear that the computation of ${\cal D}(\epsilon)$ consists of two parts: 1) Compute the trace of $e^{-itH}$ for many values of $t$ and 2) perform a (Fast) Fourier Transform of this data. It is known how to carry out part 2 on a QC\cite{AHARONOVone,DIVINCENZOone,ECKERTone,CLEVEone} so we focus on part 1. The QA described below computes $e^{-itH}\KET{\psi}$ for any $\psi$ in ${\cal O} \left({\log D}\right)$ operations. We now argue that in practice it will usually be sufficient to determine a small fraction ($\approx {\cal O}\left( \log D\right)$) of the $D$ diagonal matrix elements of $e^{-itH}$. Instead of computing diagonal matrix elements with respect to a chosen complete set of basis states $\{\phi_n; n=1,\ldots,D\}$, we generate random numbers $\{a_n; n=1,\ldots,D\}$ and construct the new state \begin{equation} \KET{\Phi}=\sum_{n=1}^D a_n \KET{\phi_n} . \end{equation} The corresponding diagonal element of the time-evolution operator reads \begin{equation} \BRACKET{\Phi}{e^{-itH}\Phi}= \sum_{n,m=1}^D a_n^* a_m^{\phantom{*}} \BRACKET{\phi_n}{e^{-itH}\phi_m} . \end{equation} If we now generate the $a_i$'s such that $\overline{a^*_n a_m}=\delta_{n,m}$ ($\overline{x}$ denotes the average of $x$ over statistically independent realizations) then \begin{equation} \overline{\BRACKET{\Phi}{e^{-itH}\Phi}}= \sum_{n=1}^D \BRACKET{\phi_n}{e^{-itH}\phi_n}= {\rm Tr} e^{-itH} . 
\end{equation} In other words, the trace of the time-evolution operator can be estimated by random sampling of the states $\KET{\Phi}$.\cite{ALBEN} A QC computes $\KET{e^{-itH}\Phi}$ just as easily as $\KET{e^{-itH}\phi_m}$ and in practice there would be no need to have a random state generator: switching the QC off and on will put the QC in some random initial state. However, to compute $\BRACKET{\Phi}{e^{-itH}\Phi}$ we would have to store the initial state ($\KET{\Phi}$) in the QC and project $\KET{e^{-itH}\Phi}$ onto it, a rather complicated procedure. Instead it is more effective to apply to $\KET{\phi_1}$ a random sequence of Controlled-NOT operations to construct e.g. an entangled random state $\KET{\Phi}=D^{-1/2}(\pm\phi_1\pm\phi_2\ldots\pm\phi_D)$.\cite{ECKERTone,VEDRALone} We then calculate $\KET{e^{-itH/2}\Phi} = \sum_{n=1}^D b_n(t/2)\KET{\phi_n}$ (and likewise the coefficients $b_n(-t/2)$ of $\KET{e^{itH/2}\Phi}$) and use \begin{equation} \BRACKET{\Phi}{e^{-itH}\Phi}=\BRACKET{e^{itH/2}\Phi}{e^{-itH/2}\Phi}=\sum_{n=1}^D b_n^*(-t/2)\, b_n^{\phantom{*}}(t/2) \end{equation} to obtain the diagonal matrix element. Each of these steps executes very efficiently on a QC. There remains the question of how many samples $S$ are needed to compute the energy and specific heat to high accuracy. According to the central limit theorem the statistical error on the results vanishes as $1/\sqrt{S}$. However, as we demonstrate below, the application of our QA to a highly non-trivial quantum many-body system provides strong evidence that this error also {\bf decreases} with the system size. For systems of 15 qubits or more, we find that taking $S=20$ samples already gives very accurate results. Our experimental finding that the statistical error on $\BRACKET{\Phi}{e^{-itH}\Phi}$ for randomly chosen $\KET{\Phi}$ decreases with $D$ gives an extra boost to the efficiency of the QA. \section{Soft Quantum Computer and Quantum Algorithm} The method described above has been tested on our Soft Quantum Computer (SQC).
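As a purely classical illustration of this random-state trace estimation (the dense $64\times64$ Hermitian matrix and the sample count $S=200$ below are illustrative assumptions; the matrix exponential is obtained by full diagonalization, a step a QC never needs), one can check that averaging $D\,\BRACKET{\Phi}{e^{-itH}\Phi}$ over random states with amplitudes $a_n=\pm D^{-1/2}$ reproduces ${\rm Tr}\,e^{-itH}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Hermitian stand-in for H (assumption: a random dense matrix,
# not the spin Hamiltonian of the paper).
D = 64
A = rng.normal(size=(D, D))
H = (A + A.T) / 2

# e^{-itH} from the full eigendecomposition (classically; a QC avoids this).
w, V = np.linalg.eigh(H)
t = 0.7
Ut = (V * np.exp(-1j * t * w)) @ V.conj().T
exact = np.trace(Ut)

# Random states |Phi> = sum_n a_n |phi_n> with a_n = +-1/sqrt(D): the average
# of a_n a_m over realizations is delta_{nm}/D, so D <Phi|e^{-itH}|Phi> is an
# unbiased estimator of Tr e^{-itH}.
S = 200
samples = []
for _ in range(S):
    a = rng.choice([-1.0, 1.0], size=D) / np.sqrt(D)
    samples.append(D * (a @ Ut @ a))
estimate = np.mean(samples)

# Central limit theorem: the statistical error shrinks like 1/sqrt(S).
assert abs(estimate - exact) < 8.0
```

With the deterministic seed above the estimate lands well within a few standard errors of the exact trace; increasing $S$ tightens the agreement as $1/\sqrt{S}$.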
The SQC used to compute the results presented in the present paper is a hard-coded version, derived from a more versatile SQC discussed elsewhere.\cite{XXXXXX} Our SQC solves the time-dependent Schr\"odinger equation (TDSE) \begin{equation} \label{schrod} i{\partial \over\partial t} \KET{\Psi(t)}= H \KET{\Psi(t)} , \end{equation} for a quantum many-body system described by the spin-1/2 Hamiltonian \begin{equation} \label{hamiltonian} H=-\sum_{i,j=1}^L\sum_{\alpha=x,y,z} J_{i,j}^\alpha S_i^\alpha S_j^\alpha -\sum_{i=1}^L\sum_{\alpha=x,y,z} h_{i}^\alpha S_i^\alpha , \end{equation} where the first sum runs over all pairs $P$ of spins (qubits), $S_i^\alpha$ ($\alpha=x,y,z$) denotes the $\alpha$-th component of the spin-1/2 operator representing the $i$-th spin, $J_{i,j}^\alpha$ determines the strength of the interaction between the spins at sites $i$ and $j$, and $h_{i}^\alpha$ is the (local magnetic) field acting on the $i$-th spin. The number of qubits is $L$ and $D=2^L$. Hamiltonian~(\ref{hamiltonian}) is sufficiently general to capture the salient features of most physical models of QC's (our SQC also deals with time-dependent external fields). According to~(\ref{schrod}) the QC will evolve in time through the $D\times D$ unitary transformation $U(t)=e^{-itH}$. We now describe the QA that computes $U(t)\KET{\Phi}$ for arbitrary $\KET{\Phi}$. \magweg{If all eigenvalues and eigenvectors are known then $\KET{\Psi(t+\tau)}=U(\tau)\KET{\Psi(t)}$ can be calculated directly. However we have assumed that this data is not available, so we take another route\cite{HDRCPR} to set up an algorithm to solve the TDSE~(\ref{schrod}).
} Using the semi-group property of $U(t)$ to write $U(t)=U(\tau)^m$ where $t=m\tau$, the main step is to replace $U(\tau)$ by a symmetrized product-formula approximation.\cite{HDRCPR} For the case at hand it is expedient to take \begin{equation} U(\tau)\approx {\widetilde U(\tau)} = e^{-i\tau H_z/2} e^{-i\tau H_y/2} e^{-i\tau H_x} e^{-i\tau H_y/2} e^{-i\tau H_z/2} , \end{equation} where \begin{equation} H_\alpha=-\sum_{i,j=1}^L J_{i,j}^\alpha S_i^\alpha S_j^\alpha -\sum_{i=1}^L h_{i}^\alpha S_i^\alpha \quad;\quad\alpha=x,y,z . \end{equation} \magweg{Other decompositions\cite{PEDROone,SUZUKItwo} work equally well but are somewhat less efficient.} Evidently ${\widetilde U(\tau)}$ is unitary and hence the algorithm to solve the TDSE is unconditionally stable.\cite{HDRCPR} \magweg{It can be shown that $\Vert U(\tau) - {\widetilde U(\tau)}\Vert \le c \tau^3$, implying that the algorithm is correct to second order in the time-step $\tau$.\cite{HDRCPR} Usually it is not difficult to choose $\tau$ so small that for all practical purposes the results obtained can be considered as being ``exact''. Moreover, if necessary, ${\widetilde U(\tau)}$ can be used as a building block to construct higher-order algorithms.\cite{HDRone,SUZUKItwo2,HDRKRMone,SUZUKIthree} } As basis states $\{\KET{\phi_n}\}$ we take the direct product of the eigenvectors of the $S_i^z$ (i.e. spin-up $\KET{\uparrow_i}$ and spin-down $\KET{\downarrow_i}$). In this basis, $e^{-i\tau H_z/2}$ changes the input state by altering the phase of each of the basis vectors. As $H_z$ is a sum of pair interactions it is trivial to rewrite this operation as a direct product of $4\times4$ diagonal matrices (containing the interaction-controlled phase shifts) and $4\times4$ unit matrices. Still working in the same representation, the action of $e^{-i\tau H_y/2}$ can be written in a similar manner but the matrices that contain the interaction-controlled phase shifts have to be replaced by non-diagonal matrices.
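The accuracy of the symmetrized product formula is easy to verify numerically. The sketch below uses a reduced two-part splitting $H=H_x+H_z$ for just two spins (an illustrative simplification of the five-factor $x,y,z$ splitting above) and checks that halving $\tau$ reduces $\Vert U(\tau)-{\widetilde U(\tau)}\Vert$ by about $2^3=8$, as expected for an error of ${\cal O}(\tau^3)$ per time step:

```python
import numpy as np
from functools import reduce

# Two-spin toy model split into two non-commuting parts (assumption: this
# minimal H = Hx + Hz, not the full x,y,z splitting of the paper).
sx = np.array([[0, 1], [1, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def kron(*ops):
    return reduce(np.kron, ops)

Hx = -kron(sx, sx)
Hz = -kron(sz, sz) - 0.3 * (kron(sz, np.eye(2)) + kron(np.eye(2), sz))

def expmH(M, t):
    # e^{-itM} for Hermitian M via the eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def U_exact(tau):
    return expmH(Hx + Hz, tau)

def U_tilde(tau):
    # Symmetrized split: e^{-i tau Hz/2} e^{-i tau Hx} e^{-i tau Hz/2}.
    half = expmH(Hz, tau / 2)
    return half @ expmH(Hx, tau) @ half

def err(tau):
    return np.linalg.norm(U_exact(tau) - U_tilde(tau), 2)

# Error per step is O(tau^3): halving tau shrinks it by roughly 2^3 = 8.
ratio = err(0.2) / err(0.1)
assert 6.0 < ratio < 10.0
```

The same scaling test carries over to the five-factor formula; the symmetric ordering is what cancels the ${\cal O}(\tau^2)$ term.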
Although this does not present a real problem, it is more efficient and systematic to proceed as follows. Let us denote by $X$ ($Y$) the rotation by $\pi/2$ of each spin about the $x$($y$)-axis. As \begin{equation} \label{last} e^{-i\tau H_y/2}=XX^\dagger e^{-i\tau H_y/2}XX^\dagger =X e^{-i\tau H_z^\prime/2}X^\dagger , \end{equation} it is clear that the action of $e^{-i\tau H_y/2}$ can be computed by applying to each qubit the inverse of $X$, followed by an interaction-controlled phase-shift, and then $X$ itself. The prime in~(\ref{last}) indicates that $J_{i,j}^z$ and $h_{i}^z$ in $H_z$ have to be replaced by $J_{i,j}^y$ and $h_{i}^y$ respectively. A similar procedure is used to compute the action of $e^{-i\tau H_x}$. \magweg{We only have to replace $X$ by $Y$.} Our SQC carries out ${\cal O} \left(P\,2^L\right)$ operations to perform the transformation $e^{-i\tau H_z/2}$ but a QC operates on all qubits simultaneously and would therefore only need \ORDER{P} operations. The operation counts for $e^{-i\tau H_x}$ (or $e^{-i\tau H_y}$) are \ORDER{(P+2) 2^L} and \ORDER{P+2} for the SQC and QC respectively. On a QC the total operation count per time-step is \ORDER{3\,P+4}. \magweg{Finally we estimate how many time-steps $N$ are needed to obtain $E$ and $C$ with acceptable accuracy. First we note that the number of read-outs of the QC can be minimized by using the Nyquist sampling theorem and the fact that $H$ is bounded from above and below. Indeed, it is sufficient to measure at intervals $\Delta t=\pi/E_{max}$ where $E_{max}=\max_i |E_i|$. If $\tau_0$ denotes the maximum value of $\tau$ that yields an accurate approximation ${\widetilde U(\tau)}$ we set $\tau=\min\{\tau_0,\Delta t\}$ and solve the TDSE for $N$ time steps. $N$ determines the resolution $\Delta \epsilon=\pi/N\tau$ of the discrete Fourier transform (note that $E_{max}$ increases linearly with $L$ whereas the number of eigenvalues in the interval $[-E_{max},E_{max}]$ increases exponentially with $L$).
Taking a very pessimistic point of view we may expect to obtain accurate values for $E$ and $C$ as long as $T>\Delta\epsilon$. It is important to note that increasing this accuracy by a factor of two only requires running the QC twice as long: the error decreases linearly with computation time. } \section{Application} The QA described above has been tested on our SQC by simulating the antiferromagnetic spin-1/2 Heisenberg model with $J=-1$ \magweg{(i.e. $J_{i,j}^\alpha =-1$ if $i$ and $j$ are nearest neighbors and $J_{i,j}^\alpha =0$ otherwise, $h_i^\alpha=0$)} on triangular lattices of $L=6,10,15,21$ sites, subject to free boundary conditions. The ground-state properties of this model can be computed by standard sparse-matrix techniques, see e.g. Ref.~\citen{DEUTCHER}. The low-temperature properties of this model are difficult to compute by conventional Quantum Monte Carlo (QMC) methods.\cite{HATANOone} The presence of frustrated interactions leads to the sign problem\cite{HATANOone} that is very often encountered in QMC work.\cite{HDRQMC,HDRSD} In Fig.~\ref{fig:specheat} we present some SQC results for the specific heat per site $C/L$ as a function of the temperature. The number of samples is $S=20$ in all cases. Also shown is data obtained by exact diagonalization of $H$ for $L=6,10$.\footnote{The calculation of all 32768 (2097152) eigenvalues of the $L=15$ ($L=21$) system by standard linear algebra methods requires tremendous computational resources.} The SQC and exact results differ significantly for temperatures where the specific heat exhibits a sharp peak. This is related to the presence of a gap in the low-energy part of the spectrum and the random choice of the $\KET{\Phi}$'s. In this low-temperature regime, where only a few of the lowest eigenvalues contribute, random fluctuations can have a large effect.
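For orientation, the thermodynamics of this model can be obtained exactly on a very small cluster by ordinary full diagonalization. The sketch below treats the smallest frustrated cluster, a single triangle of $L=3$ sites with $J=-1$ (far smaller than the $L=6,\ldots,21$ lattices studied here; the cluster choice is purely illustrative), and evaluates the specific heat per site from the complete spectrum:

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

def site_op(op, i, L):
    # Embed a single-site operator at site i of an L-site chain.
    ops = [I2] * L
    ops[i] = op
    return reduce(np.kron, ops)

# H = -J sum_<ij> S_i . S_j with J = -1 (paper's convention) on a triangle.
L, J = 3, -1.0
H = np.zeros((2**L, 2**L), dtype=complex)
for (i, j) in [(0, 1), (1, 2), (0, 2)]:   # all three bonds of the triangle
    for s in (sx, sy, sz):
        H += -J * site_op(s, i, L) @ site_op(s, j, L)

E_i = np.linalg.eigvalsh(H)

def specific_heat_per_site(T):
    beta = 1.0 / T
    w = np.exp(-beta * (E_i - E_i.min()))  # shift spectrum for numerical safety
    Z = w.sum()
    E = np.sum(E_i * w) / Z
    E2 = np.sum(E_i**2 * w) / Z
    return beta**2 * (E2 - E**2) / L

# The gap above the (degenerate) ground state suppresses C/L at low T.
assert specific_heat_per_site(0.01) < 1e-6
assert specific_heat_per_site(0.5) > 0.05
```

The triangle's spectrum (two degenerate $S=1/2$ doublets below an $S=3/2$ quartet) already shows the low-energy gap responsible for the low-temperature behaviour discussed above; clusters beyond a dozen or so sites quickly outgrow this dense-matrix approach.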
However this is not really a problem: knowing the DOS, it is not difficult to determine the precise values of these few eigenvalues and compute $C$ directly, without invoking~(\ref{min8}). \magweg{We do not present such an analysis here because the emphasis of this paper is on the QA, not on advanced methods of processing DOS data.} \begin{figure} \caption{Specific heat per site as a function of the temperature.} \label{fig:specheat} \end{figure} \begin{figure} \caption{Standard deviation on the specific heat per site.} \label{fig:stddev} \end{figure} In Fig.~\ref{fig:stddev} we show results for the standard deviation (SD) on $C/L$, calculated from the same data\magweg{ set of $S=20$ samples}. The most remarkable feature of the SD is its dependence on the system size: the larger the system, the smaller the SD. Unfortunately we cannot yet offer a theoretical basis for this observed decrease. At very low temperature the SD on $C/L$ goes to zero because only one eigenstate (i.e. the ground state) effectively contributes. The large values of the $L=6,10$ low-temperature SD data reflect the fact that in this regime the SQC and exact results differ considerably and indicate that for these system sizes more than $S=20$ samples are necessary to obtain accurate results. On the other hand, except for test purposes, we would not use a QC to simulate a 10-site system because it can easily be solved exactly on an ordinary computer. \magweg{The decrease of the SD with system size gives an extra boost to the efficiency of our QA. Obviously for a non-interacting system, a randomly chosen initial state will lead to the correct answer, with a SD that decreases with the system size.
Although our results strongly suggest that a similar phenomenon is also at work in the case of an interacting system, we believe that more work is necessary to establish, and perhaps prove, that this is a general feature of the QA discussed above.} \section{Summary} We have described a quantum algorithm to determine the distribution of eigenvalues of a quantum many-body system in polynomial time. From these data, thermal equilibrium properties of the system can be computed directly. The approach has been illustrated by numerical calculations on a software emulator of a physical model of a quantum computer. Excellent results have been obtained, suggesting a new route for simulating experiments on quantum systems on a quantum computer. However, implicit in the formulation of this physical model of the quantum computer is the assumption that each physical spin represents one qubit. If this were not the case, the quantum computer would operate with much less efficiency. \end{document}
While acknowledging that no one can influence another person's sympathies, people who maintain relationships can work to build their own sympathy for the organizations or institutions they represent.
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package com.sumzerotrading.bitmex.market.data;

import com.sumzerotrading.bitmex.client.BitmexRestClient;
import com.sumzerotrading.bitmex.client.BitmexWebsocketClient;
import com.sumzerotrading.bitmex.common.api.BitmexClientRegistry;
import com.sumzerotrading.bitmex.entity.BitmexQuote;
import com.sumzerotrading.bitmex.entity.BitmexTrade;
import com.sumzerotrading.bitmex.listener.IQuoteListener;
import com.sumzerotrading.bitmex.listener.ITradeListener;
import com.sumzerotrading.data.Ticker;
import com.sumzerotrading.marketdata.Level1Quote;
import com.sumzerotrading.marketdata.Level1QuoteListener;
import com.sumzerotrading.marketdata.QuoteEngine;
import com.sumzerotrading.marketdata.QuoteType;
import java.math.BigDecimal;
import java.time.ZonedDateTime;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

/**
 *
 * @author RobTerpilowski
 */
public class BitmexLevel1QuoteEngine extends QuoteEngine implements IQuoteListener, ITradeListener {

    protected BitmexRestClient restClient;
    protected BitmexWebsocketClient websocketClient;
    protected boolean isStarted = false;
    protected Map<String, Ticker> tickerMap = new HashMap<>();

    @Override
    public void startEngine() {
        websocketClient = BitmexClientRegistry.getInstance().getWebsocketClient();
        isStarted = true;
    }

    @Override
    public void startEngine(Properties props) {
    }

    @Override
    public void stopEngine() {
        // Mark the engine as stopped so started()/isConnected() report correctly.
        isStarted = false;
    }

    @Override
    public Date getServerTime() {
        return new Date();
    }

    @Override
    public boolean started() {
        return isStarted;
    }

    @Override
    public boolean isConnected() {
        return isStarted;
    }

    @Override
    public void useDelayedData(boolean useDelayed) {
    }

    @Override
    public void subscribeLevel1(Ticker ticker, Level1QuoteListener listener) {
        super.subscribeLevel1(ticker, listener);
        tickerMap.put(ticker.getSymbol(), ticker);
        websocketClient.subscribeQuotes(ticker, this);
        websocketClient.subscribeTrades(ticker, this);
    }

    @Override
    public void quoteUpdated(BitmexQuote quoteData) {
        logger.debug("Received from Bitmex: " + quoteData);
        Ticker ticker = tickerMap.get(quoteData.getSymbol());
        Map<QuoteType, BigDecimal> quoteMap = new HashMap<>();
        // BigDecimal.valueOf() avoids the binary-expansion artifacts of new BigDecimal(double).
        quoteMap.put(QuoteType.ASK, BigDecimal.valueOf(quoteData.getAskPrice()));
        quoteMap.put(QuoteType.BID, BigDecimal.valueOf(quoteData.getBidPrice()));
        quoteMap.put(QuoteType.BID_SIZE, BigDecimal.valueOf(quoteData.getBidSize()));
        quoteMap.put(QuoteType.ASK_SIZE, BigDecimal.valueOf(quoteData.getAskSize()));
        quoteMap.put(QuoteType.MIDPOINT, BigDecimal.valueOf((quoteData.getAskPrice() + quoteData.getBidPrice()) / 2.0));
        ZonedDateTime timestamp = getTimestamp(quoteData.getTimestamp());
        Level1Quote quote = new Level1Quote(ticker, timestamp, quoteMap);
        fireLevel1Quote(quote);
    }

    @Override
    public void tradeUpdated(BitmexTrade trade) {
        Ticker ticker = tickerMap.get(trade.getSymbol());
        Map<QuoteType, BigDecimal> quoteMap = new HashMap<>();
        quoteMap.put(QuoteType.LAST, BigDecimal.valueOf(trade.getPrice()));
        quoteMap.put(QuoteType.LAST_SIZE, BigDecimal.valueOf(trade.getSize()));
        ZonedDateTime timestamp = getTimestamp(trade.getTimestamp());
        Level1Quote quote = new Level1Quote(ticker, timestamp, quoteMap);
        fireLevel1Quote(quote);
    }

    public int getMessageProcessorQueueSize() {
        return websocketClient.getMessageProcessorCount();
    }

    protected ZonedDateTime getTimestamp(String timestamp) {
        return ZonedDateTime.parse(timestamp);
    }
}
What are the Bad Credit Alternatives for those With Poor Credit Scores?

Credit cards and other unsecured debt, including medical bills, legal bills and personal loans, can land you in trouble if you don't keep up with your monthly payments. Much of this stems from a financial model that encourages us to spend lavishly first and think about paying the cost later. Some people never get out of that mode, even as the debt levels and outstanding credit card balances in their names reach alarming levels and their credit scores take a beating as a result. Things can turn ugly when you need an urgent loan but your bad credit keeps you from getting one, and astronomical credit card APRs mean you pay punishingly high interest rates as well.

But all is not lost if you have bad credit and are in quick need of money. Two options worth looking at are payday loans and bad credit loans. If you are tired of asking for loans and constantly being turned down, look for banks that offer these products; you will in fact find quite a few, and if you meet their criteria you can get a loan very quickly. Most payday loans and bad credit loans are instant-approval loans: it rarely takes more than a day or two for them to be approved, as it should be when someone is in such urgent need of money. Once you fill in the online application, the bank immediately looks for lenders who are fine with bad credit as long as certain conditions are satisfied. First, you must have a bank account and be over 18 years of age. Most banks offer these bad credit loans without any security if you have a job and have already received your first paycheck, meaning you have been employed for more than a month.

If you do, you should have no problem getting the loan; the money will be deposited into your account within the next couple of days. Payday loans are mostly 15-day or one-month loans suited to people who need money urgently but have a source of income to pay off the loan by the end of the month. Because of the risk involved, the interest rates on these loans are somewhat high.
4 Mar 2019 Wanna know more about Egretia and the Founder's story?
20 Apr 2019 Scatter connects you to Egretia in a simple and straightforward package. It is internationalized, sleek, and powerful.
13 Mar 2019 Hello Egretia! How is development going?
/**************************************************************************** * arch/avr/src/avr32/up_usestack.c * * Copyright (C) 2010, 2013 Gregory Nutt. All rights reserved. * Author: Gregory Nutt <[email protected]> * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * 3. Neither the name NuttX nor the names of its contributors may be * used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. 
* ****************************************************************************/ /**************************************************************************** * Included Files ****************************************************************************/ #include <nuttx/config.h> #include <sys/types.h> #include <stdint.h> #include <sched.h> #include <debug.h> #include <nuttx/kmalloc.h> #include <nuttx/arch.h> #include "up_internal.h" /**************************************************************************** * Private Types ****************************************************************************/ /**************************************************************************** * Private Function Prototypes ****************************************************************************/ /**************************************************************************** * Global Functions ****************************************************************************/ /**************************************************************************** * Name: up_use_stack * * Description: * Setup up stack-related information in the TCB using pre-allocated stack * memory. This function is called only from task_init() when a task or * kernel thread is started (never for pthreads). * * The following TCB fields must be initialized: * * - adj_stack_size: Stack size after adjustment for hardware, * processor, etc. This value is retained only for debug * purposes. * - stack_alloc_ptr: Pointer to allocated stack * - adj_stack_ptr: Adjusted stack_alloc_ptr for HW. The * initial value of the stack pointer. * * Inputs: * - tcb: The TCB of new task * - stack_size: The allocated stack size. * * NOTE: Unlike up_stack_create() and up_stack_release, this function * does not require the task type (ttype) parameter. The TCB flags will * always be set to provide the task type to up_use_stack() if it needs * that information. 
* ****************************************************************************/ int up_use_stack(struct tcb_s *tcb, void *stack, size_t stack_size) { size_t top_of_stack; size_t size_of_stack; /* Is there already a stack allocated? */ if (tcb->stack_alloc_ptr) { /* Yes.. Release the old stack allocation */ up_release_stack(tcb, tcb->flags & TCB_FLAG_TTYPE_MASK); } /* Save the new stack allocation */ tcb->stack_alloc_ptr = stack; /* The AVR32 uses a push-down stack: the stack grows * toward lower addresses in memory. The stack pointer * register points to the lowest valid work address * (the "top" of the stack). Items on the stack are * referenced as positive word offsets from sp. */ top_of_stack = (size_t)tcb->stack_alloc_ptr + stack_size - 4; /* The AVR32 stack must be aligned at word (4 byte) * boundaries. If necessary top_of_stack must be rounded * down to the next boundary */ top_of_stack &= ~3; size_of_stack = top_of_stack - (size_t)tcb->stack_alloc_ptr + 4; /* Save the adjusted stack values in the struct tcb_s */ tcb->adj_stack_ptr = (FAR void *)top_of_stack; tcb->adj_stack_size = size_of_stack; return OK; }
The Primavera Hardwood Viola Bow, Round 12'' is a bow ideal for musicians starting the viola. The stick is crafted from quality hardwood and is rounded for a comfortable grip when performing. Similar to more expensive bows, the Primavera hardwood viola bow features a full ebony frog, pearl eye, bright lapping, comfortable thumb grip and a three-piece end screw for a premium look. The Primavera hardwood viola bow uses natural horse hair and is suitable for 12'' (1/2 size) violas.
To the extent that many religious firms compete with one another, they tend to specialize in, and cater to, the particular needs of certain segments of religious consumers.
Patiala, JNN. Punjab electricity rates: Electricity consumers in Punjab are soon in for a shock, as Powercom prepares to raise power tariffs in the state. The payment of Rs 1,426 crore that Powercom has made to thermal plants may fall on electricity consumers. To stave off a financial crisis, Powercom has filed a petition with the Regulatory Commission seeking a tariff increase, and will submit its Annual Revenue Requirement (ARR) report to the Commission next week.

In the dispute between Powercom and the Rajpura and Talwandi Sabo thermal plants over payment for coal washing, the Supreme Court ordered Powercom to pay Rs 1,420 crore, and it is to recover this payment from consumers that Powercom has filed the tariff-hike petition. In the petition before the Regulatory Commission, Powercom's Chief Engineer (ARR & TR) stated that two private power companies, Nabha Power Limited (NPL) and Talwandi Sabo Power Limited (TPL), had taken Powercom to the Supreme Court over certain issues, and that the court ordered Rs 1,420 crore to be paid to them. Following the order, Powercom has paid Rs 421.77 crore to NPL and Rs 1,002.05 crore to TPL. The petition asks that this amount be treated as expenditure incurred by Powercom on purchasing power from the two companies, that the tariffs fixed for the various consumer categories be raised accordingly, and that the recovery be applied retrospectively. It also urges a quick decision on the petition.

The Regulatory Commission has invited public objections to Powercom's petition and fixed December 5 as the date of hearing; objections can be sent to the Commission within 21 days of the notice published on November 9.
But today let us not fight.
Masthead sloop with adjustable backstay and runners. The boom furling system has recently been rebuilt. The wheel shown is the cruising wheel; there is also a larger-diameter "Regatta Wheel". The saloon settee converts quickly into an additional double, and there is also a single berth cabin aft currently used as storage. This Northshore 46, designed by Hank Kauffman and built by Northshore Yachts Sydney, is a safe and comfortable cruising boat and a capable offshore racer; with her lead keel and centerboard she can also get into those shallower and more sheltered anchorages. Her boom furling makes sail handling a breeze, and she has been predominantly sailed single-handed by both her current and previous owners. She comfortably sleeps five in two doubles and a pilot berth. A major refit in 2017 included a new mast and rigging, interior upgrades, high-output solar panels, an electric head, new Furuno instruments and much more, along with many other recent upgrades and additions. This Northshore (Australia) 46 Centerboard Sloop has been personally photographed and viewed by Alan Giles of Boatshed Phuket.
Harsamachar: Seeds of entrepreneurship to be sown in schoolchildren from Class 6. The seeds of business will now be sown in children from the age of 11 or 12, so that the coming generation does not depend on jobs alone. The government has drawn up a major plan to implement this in schools: under it, children will receive vocational education from Class 6 onwards, with the course designed to spark their interest in pursuing this direction. At present, vocational education in schools begins in Class 9. Under the plan, the course could start in Kendriya Vidyalayas this very session, although a final round of discussion between the NCERT and the ministry is still pending.

The reasoning behind introducing the vocational course from Class 6 is that by Class 9 most children are already mentally set on their studies and future field, and later simply continue in that direction, so vocational education has little impact on them. According to sources, the NCERT has taken these aspects into account in planning to begin the teaching from Class 6. During this stage, children will be given only introductory knowledge of the subject in order to develop an inclination towards vocational fields, and the NCERT is designing a new curriculum for Classes 6 to 8 for the purpose.

Notably, the government's current focus is on linking young people, after their studies, with ventures such as startups. Prime Minister Narendra Modi has also repeatedly told the youth to become job givers rather than job seekers.
The Apple iPhone 11 Pro has been launched in three variants, with 64GB, 256GB and 512GB of storage, although the company has announced the price of the base variant only, which starts at Rs 99,900.

US tech giant Apple has finally unveiled the 2019 iPhones: the iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max. We have already published the launch coverage and Indian prices of these smartphones on BGR.in Hindi. The iPhone 11 is the upgraded version of last year's iPhone XR, while the iPhone 11 Pro upgrades the iPhone XS and the 11 Pro Max upgrades the iPhone XS Max. Here we compare the Apple iPhone 11 Pro with the Apple iPhone XS.

The iPhone 11 Pro comes in three storage variants (64GB, 256GB and 512GB), though only the base-variant price of Rs 99,900 has been announced, and the storage cannot be expanded. This matches the launch price of the iPhone XS, whose 64GB base variant also cost Rs 99,900, rising to Rs 134,900 for the 512GB variant.

The displays of the iPhone 11 Pro and iPhone XS are nearly identical. The 11 Pro has a 5.8-inch Super Retina XDR display with an OLED panel, an 82.1 percent screen-to-body ratio and a resolution of 1,125x2,436 pixels. Last year's iPhone XS came with 3D Touch, but this time the company has removed it. In design, too, there is little difference: the iPhone 11 Pro has a similar display and front notch. It does, however, gain a triple rear camera, and the Apple logo is now centred on the back panel. The 11 Pro weighs 188 grams against the XS's 177 grams.

The iPhone 11 Pro ships with the latest iOS version, iOS 13, and the Apple A13 Bionic SoC; all variants reportedly come with 6GB of RAM, although Apple has not officially confirmed this. New additions this time include spatial audio, Wi-Fi 6 and 18W charging.

The iPhone 11 Pro has a triple rear camera, and the front camera sensor has also been improved. The first two rear sensors are the same as on the iPhone XS, but the third is an ultra-wide lens with an f/2.4 aperture. This time the iPhone 11 Pro gets a 12-megapixel sensor, whereas the iPhone XS had a 7-megapixel one.
# ManipulaSom Source code for Manipula#Som @CCB 2017, including Arduino/MakeyMakey code, MaxMSP and Ableton Live projects
Jeff Pollack is an associate professor in the NC State Poole College of Management. He received his undergraduate degree at Northwestern University, his master’s in organizational communication at NC State University, and his doctorate in management at Virginia Commonwealth University. Over the past 15 years, Jeff has accumulated a variety of entrepreneurial experiences from working in small family-owned firms, to buying and selling businesses, to investing in businesses. These experiences shape his perspective on research in the domain of entrepreneurship. Jeff’s research has been published in journals including Academy of Management Journal, Psychological Bulletin, Journal of Business Venturing, Entrepreneurship Theory and Practice, Journal of Management Studies, Journal of Management, Journal of Small Business Management, Journal of Organizational Behavior, Family Business Review, Journal of Managerial Psychology, Basic and Applied Social Psychology, and Journal of Leadership Studies. Jeff has taught courses, at the undergraduate and graduate levels, on entrepreneurship, business opportunity analysis and planning, new venture creation, social entrepreneurship, organizational design, industrial and organizational psychology, organizational behavior, and public speaking. His research focuses on the determinants of performance at both the firm and individual levels.
Bigg Boss 12: For the first time, Salman Khan comes out in the house in support of Sreesanth; Rohit gets a stern scolding. On the controversial show Bigg Boss, Sreesanth often gets an earful from Salman Khan, but this time Salman Khan was seen supporting Sreesanth for the first time. For your information, this time the Weekend Ka Vaar episode will air from Friday onwards. A new promo of the show has surfaced, in which Salman Khan took Rohit Suchanti to task and supported Sreesanth. Indeed, Rohit Suchanti has this season been tagged the most ill-mannered contestant in the house. He is often seen teasing and provoking Sreesanth, Dipika Kakar, Megha Dhade and Jasleen, and it is this behaviour of the actor that has drawn Salman Khan's anger. In the promo released on Colors, Salman is seen scolding Rohit. Provoking Sreesanth appeared to cost Rohit dearly, and Salman advised Rohit to stay within his limits. Salman Khan says, "Rohit, do you have no other talent? Is provoking people your only talent? Do you know what all you have said to Sreesanth? I have been through those same things myself; it takes a great deal of courage to come out of all this. This is the mark of a small man; you have no right to belittle anyone." Seeing Salman Khan take his side, Sreesanth becomes emotional; he starts crying and thanks Salman Khan. Going by the promo, Friday's episode looks set to be explosive. Later in the show, on Friday, Bigg Boss will give Surbhi, as captain, the right to choose any 3 members as inmates of the dungeon. Announcing her decision, Surbhi names Sreesanth, after which Sreesanth gets angry and locks himself in the washroom. Sreesanth has had to go to the house's jail many times; what decision he takes after this, and how he reacts to Surbhi, will only be known once the episode airs.
\begin{document} \begin{abstract} We construct a framework for studying clustering algorithms, which includes two key ideas: {\em persistence} and {\em functoriality}. The first encodes the idea that the output of a clustering scheme should carry a multiresolution structure, the second the idea that one should be able to compare the results of clustering algorithms as one varies the data set, for example by adding points or by applying functions to it. We show that within this framework, one can prove a theorem analogous to one of J. Kleinberg \cite{kleinberg}, in which one obtains an existence and uniqueness theorem instead of a non-existence result. We explore further properties of this unique scheme; in particular, its stability and convergence are established. \end{abstract} \title{Persistent Clustering and a Theorem of J. Kleinberg} \section{Introduction} Clustering techniques play a very central role in various parts of data analysis. They can give important clues to the structure of data sets, and therefore suggest results and hypotheses in the underlying science. There are many interesting methods of clustering available, which have been applied to good effect in dealing with many datasets of interest, and they are regarded as important methods in exploratory data analysis. Despite being one of the most commonly used tools for unsupervised exploratory data analysis, and despite its extensive literature, very little is known about the theoretical foundations of clustering methods. The general question of which methods are ``best", or most appropriate for a particular problem, or of how significant a particular clustering is, has not been addressed as frequently. One problem is that many methods involve particular choices to be made at the outset, for example how many clusters there should be, or the value of a particular thresholding quantity. In addition, some methods depend on artifacts in the data, such as the particular order in which the elements are listed.
In \cite{kleinberg}, J. Kleinberg proves a very interesting impossibility result for the problem of even defining a clustering scheme with some rather mild invariance properties. He also points out that his results shed light on the trade-offs one has to make in choosing clustering algorithms. In this paper, we produce a variation on this theme, which we believe also has implications for how one thinks about and applies clustering algorithms. In addition, we study the precise quantitative (or metric) stability and convergence/consistency of one particular clustering scheme which is characterized by one of our results. We summarize the two main points in our approach. \noindent {{\bf Persistence:} We believe that the output of clustering algorithms should not be a single set of clusters, but rather a more structured object which encodes ``multiscale" or ``multiresolution" information about the underlying dataset. The reason is that data can often intrinsically possess structure at various different scales, as in Figure \ref{multiscale} below. Clustering techniques should reflect this structure, and provide methods for representing and analyzing it. \begin{figure} \caption{Dataset with multiscale structure and its corresponding dendrogram.} \label{multiscale} \end{figure} \noindent Ideally, users should be presented with a readily computable and presentable object which gives them the option of choosing the proper scale for the analysis, or perhaps of interpreting the multiscale invariant directly, rather than being asked to choose a scale in advance or having it chosen for them. It is widely accepted that clustering is ultimately itself a tool for exploratory data analysis \cite{ulrike_general}. In some sense, it is therefore entirely acceptable to provide this multiscale invariant whenever available, and to let the user pick different scale thresholds that will yield different partitions of the data.
Once we accept this, we can concentrate on answering theoretical questions regarding schemes that output this kind of information. Our analysis will not, however, rule out clustering methods that provide a one-scale view of the data, since, formally, one can consider such a scheme as one that gives the same information at all scales, cf. Example \ref{example:scale-free}. We choose a particular way of representing this multiscale information: we use the formalism of \emph{persistent sets}, which is introduced in Section \ref{persistence}, Definition \ref{def:persistent-sets}. The idea of showing the multiscale clustering view of the dataset is widely used in gene expression data analysis, where it takes the form of \emph{dendrograms}.} \noindent {{\bf Functoriality:} As our replacement for the constraints discussed in \cite{kleinberg}, we will use instead the notion of {\em functoriality}, which has been a very useful framework for the discussion of a variety of problems within mathematics over the last few decades. For a discussion of categories and functors, see \cite{maclane}. Our idea is that clusters should be viewed as the stochastic analogue of the mathematical concept of {\em path components}. Recall (see, e.g. \cite{munkres}) that the path components of a topological space $X$ are the equivalence classes of points in the space under the equivalence relation $\sim _{path}$, where, for $x,y \in X$, we have $x \sim _{path} y$ if and only if there is a continuous map $\varphi: [0,1] \rightarrow X$ so that $\varphi (0) = x$ and $\varphi (1) = y$. In other words, two points in $X$ are in the same path component if they are connected by a continuous path in $X$. This set of components is denoted by $\pi _0 (X)$.
The assignment $X \rightarrow \pi _0 (X)$ is said to be functorial, in that given a continuous map $f :X \rightarrow Y$ (a \emph{morphism} of topological spaces), there is a natural map of sets $\pi _0 (f) : \pi _0(X) \rightarrow \pi _0 (Y)$, which is defined by requiring that $\pi _0 (f)$ carries the path component of a point $x \in X$ to the path component of $f(x) \in Y$. This notion has been critical in many aspects of geometry; it provides the basis for the methods of organizing geometric objects combinatorially which are referred to as combinatorial or simplicial topology. The input to clustering algorithms is not, of course, a topological space. Rather, it is typically {\em point cloud data}: finite sets of points lying in a Euclidean space of some dimension, or perhaps in some other metric space, such as a tree or a collection of words in some alphabet equipped with a metric. We will therefore think of it as a finite metric space (see \cite{munkres} for a discussion of metric spaces). There is a natural notion of what is meant by a map of metric spaces, which one can think of as loosely analogous to continuity. This notion has been used in other contexts in the past, see for example \cite{isbell}. Similarly, we define a natural notion of what is meant by a morphism of the persistent sets defined above, and require \emph{functoriality} for the clustering algorithms we consider in terms of these notions of morphisms. For the time being, the reader not familiar with the concept can think of functoriality as a notion of coarse stability/consistency. By varying the richness of the class of morphisms between metric spaces one can control how stringent the conditions imposed on the clustering algorithms are.
Functoriality can therefore be interpreted as a notion of \emph{coarse stability} of these clustering algorithms.} In \cite{mccullaugh}, the idea of using categorical and functorial ideas in statistics has been proposed as a formalism for defining what is meant by statistical models. One aspect of our work is to show that the same ideas, which are so powerful in many other aspects of mathematics, can be used to understand the nature of algorithms for accomplishing statistical tasks. We summarize the main features of our point of view. (a) It makes explicit the notion of multiscale representation of the set of clusters. (b) By varying the degree of functoriality (i.e. by considering different notions of morphism on the domain of point cloud data) one can reason about the existence and properties of various schemes. We illustrate this possibility in Section \ref{results}. In particular, we are able to prove a \emph{uniqueness} theorem for clustering algorithms with one natural notion of functoriality. (c) Beyond the conceptual advantages cited above, functoriality can be directly useful in analyzing datasets. The property can be used to study qualitative geometric properties of point cloud data, including more subtle geometric information than clustering, such as the presence of ``loopy" behavior or higher dimensional analogues. See e.g. \cite{mumford} for an example of this point of view. We will also present an example in Subsection \ref{intrinsic}. In addition, the functoriality property can be used to analyze functions on the datasets, by studying the behavior of sublevel sets of the function under clustering. One version of this idea builds probabilistic versions of the {\em Reeb graph}. See \cite{mapper} for a number of examples of how this can work. Other, different, notions of stability of clustering schemes have appeared in the literature, see \cite{raghavan,sober} and references therein. We touch upon similar concepts in Section \ref{sec:stab}.
The organization of the paper is as follows. In Section \ref{persistence} we introduce the main objects that model the output of clustering algorithms, together with some important examples. Section \ref{cats} introduces the concepts of categories and functors, and the idea of \emph{functoriality} is discussed. We present our main characterization results in Section \ref{results}. The quantitative study of stability and consistency is presented in Section \ref{sec:stab}. Further applications of the concept of functoriality are discussed in Section \ref{sec:zigzags}, and concluding remarks are presented in Section \ref{various}. \section{Persistence}\label{persistence} In this section we define the objects which are the output of the clustering algorithms we will be working with. These objects will encode the notion of ``multiscale" or ``multiresolution" sets discussed in the introduction. Let ${\mathcal P}(X)$ denote the set of partitions of the (finite) set $X$. \begin{Definition}\label{def:persistent-sets} A {\em persistent set} is a pair $(X, \theta)$, where $X$ is a finite set, and $\theta$ is a function from the non-negative real line $[0,+ \infty)$ to ${\mathcal P}(X)$ so that the following properties hold. \begin{enumerate} \item{ If $r \leq s$, then $\theta (r)$ refines $\theta (s)$. } \item{For any $r$, there is a number $\epsilon > 0$ so that $\theta (r^{\prime}) = \theta (r)$ for all $r^\prime\in[r,r+\epsilon]$.} \end{enumerate} If in addition there exists $t>0$ s.t. $\theta(r)$ consists of the single block partition for all $r\geq t$, then we say that $(X,\theta)$ is a \emph{dendrogram}.\footnote{In the paper we will be using the word dendrogram to refer both to the object defined here and to the standard graphical representation of them.} \end{Definition} \noindent The intuition is that the set of blocks of the partition $\theta (r)$ should be regarded as $X$ viewed at scale $r$. \begin{Example}\label{rips}{\em Let $(X,d)$ be a finite metric space.
Then we can associate to $(X,d)$ the persistent set whose underlying set is $X$, and where blocks of the partition $\theta (r)$ consist of the equivalence classes under the equivalence relation $\sim _r$, where $x \sim _r x^{\prime}$ if and only if there is a sequence $x_0, x_1, \ldots , x_t \in X$ so that $x_0 = x, x_t = x^{\prime}$, and $d(x_i, x_{i+1}) \leq r$ for all $i$.} \end{Example} \begin{Example}\label{example:scale-free} {\em A more trivial example is one in which $\theta (r) $ is constant, i.e. consists of a single partition. This is the scale-free notion of clustering. Examples are $k$-means clustering and spectral clustering.} \end{Example} \begin{Example}{\em \label{hierarchical} Here we consider the family of Agglomerative Hierarchical clustering techniques, \cite{clusteringref}. We (re)define these by the recursive procedure described next. Let $X=\{x_1,\ldots,x_n\}$ and let $\mathcal L$ denote a family of \emph{linkage functions}, i.e. functions which one uses for defining the distance between two clusters. Fix $l\in {\mathcal L}$. For each $R>0$ consider the equivalence relation $\sim_{l,R}$ on blocks of a partition $\Pi\in{\mathcal P}(X)$, given by \mbox{${\mathcal B}\sim_{l,R} {\mathcal B}'$} if and only if there is a sequence of blocks ${\mathcal B}={\mathcal B}_1,\ldots,{\mathcal B}_s={\mathcal B}'$ in $\Pi$ with $l({\mathcal B}_k,{\mathcal B}_{k+1})\leq R$ for $k=1,\ldots,s-1$. Consider the sequences $r_1,r_2,\ldots\in [0,\infty)$ and $\Theta_1,\Theta_2,\ldots\in \mathcal{P}(X)$ given by the partition into singletons $\Theta_1:=\{\{x_1\},\ldots,\{x_n\}\}$ and, for $i\geq 1$, $\Theta_{i+1}=\Theta_i/\sim_{l,r_i}$ where $r_i:= \min\{l({\mathcal B},{\mathcal B}'),\,{\mathcal B},{\mathcal B}'\in\Theta_i,\,{\mathcal B}\neq {\mathcal B}'\}.$ Finally, we define $\theta^l:[0,\infty)\rightarrow \mathcal{P}(X)$ by $r\mapsto \theta^l(r):=\Theta_{i(r)}$ where $i(r):=\max\{i|r_i\leq r\}$.
Standard choices for $l$ are \emph{single linkage}: $l({\mathcal B},{\mathcal B}')=\min_{x\in {\mathcal B}}\min_{x'\in {\mathcal B}'}d(x,x');$ \emph{complete linkage}: $l({\mathcal B},{\mathcal B}')=\max_{x\in {\mathcal B}}\max_{x'\in {\mathcal B}'}d(x,x');$ and \emph{average linkage}: $l({\mathcal B},{\mathcal B}')=\frac{\sum_{x\in {\mathcal B}}\sum_{x'\in {\mathcal B}'}d(x,x')}{\#{\mathcal B} \cdot \#{\mathcal B}'}.$ It is easily verified that the notion discussed in Example \ref{rips} is \emph{equivalent} to $\theta^l$ when $l$ is the single linkage function. Note that, unlike the usual definition of agglomerative hierarchical clustering, at each step of the inductive definition we allow for more than two clusters to be merged.} \end{Example} \noindent We will be using the persistent sets which arise out of Example \ref{rips}. It is of course the case that the persistent set carries much more information than a single set of clusters. One can ask whether it carries too much information, in the sense that either (a) one cannot obtain useful interpretations from it or (b) it is computationally intractable. We claim that it can usually be usefully interpreted, and can be effectively and efficiently computed. One can observe this as follows. Since there are only a finite number of partitions of $X$, a persistent set ${\mathcal Q}$ gives a partition of $\mathbb{R}^+$ into a finite collection ${\mathcal I}$ of intervals of the form $[r,r^{\prime})$, together with one interval of the form $[r, + \infty)$. For each such interval, every number in the interval corresponds to the same partition of $X$. We claim that knowledge of these intervals is a key piece of information about the persistent sets arising from Examples \ref{rips} and \ref{hierarchical} above. The reason is that long intervals in ${\mathcal I}$ correspond to large ranges of values of the scale parameter in which the associated cluster decomposition doesn't change.
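The persistent set of Example \ref{rips} and its interval decomposition lend themselves to a direct computation. The following is a minimal illustrative sketch, not part of the paper's formal development; the function names and the union-find representation are our own choices:

```python
import itertools

def theta(points, dist, r):
    # Partition of `points` at scale r: x ~_r x' iff x and x' are joined
    # by a chain of points with consecutive distances <= r (Example "rips",
    # i.e. single linkage).
    parent = {p: p for p in points}

    def find(p):  # union-find with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    for x, y in itertools.combinations(points, 2):
        if dist(x, y) <= r:
            parent[find(x)] = find(y)

    blocks = {}
    for p in points:
        blocks.setdefault(find(p), set()).add(p)
    return sorted((frozenset(b) for b in blocks.values()), key=min)

def breakpoints(points, dist):
    # The scales at which theta can change are among the pairwise
    # distances; these delimit the intervals of the collection I.
    return sorted({dist(x, y) for x, y in itertools.combinations(points, 2)})
```

For instance, on the four real numbers 0, 1, 2 and 10 with the usual distance, `theta` at scale 1 yields the blocks {0, 1, 2} and {10}, and this two-block partition persists over the whole interval [1, 8) of scales, a comparatively long interval in the sense discussed above.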
One would then regard the partition into clusters corresponding to that interval as likely to represent significant structure present at the given range of scales. If there is only one long interval (aside from the infinite interval of the form $[r, + \infty)$) in ${\mathcal I}$, then one is led to believe that there is only one interesting range of scales, with a unique decomposition into clusters. However, if there is more than one long interval, then this suggests that the object has significant multiscale behavior, see Figure \ref{multiscale}. Of course, the determination of what is ``long" and what is ``short" will be problem dependent, but choosing thresholds for the length of the intervals will give definite ranges of scales. As for the computability, the persistent sets associated to a finite metric space can be readily computed using (conveniently modified) hierarchical clustering techniques, or the methods of persistent homology (see \cite{persistence}). \section{Categories, functors and functoriality}\label{cats} \subsection{Definitions and Examples} In this section, we will give a brief description of the theory of categories and functors, which will be the framework in which we state the constraints we will require of our clustering algorithms. An excellent reference for these ideas is \cite{maclane}. Categories are useful mathematical constructs that encode the nature of certain objects of interest together with a set of admissible/interesting/useful maps between them. This formalism is extremely useful for studying classes of mathematical objects which share a common structure, such as sets, groups, vector spaces, or topological spaces. The definition is as follows. \begin{Definition} A category $\underline{C}$ consists of \begin{itemize} \item{ A collection of objects $ob(\underline{C})$ (e.g.
sets, groups, vector spaces, etc.)} \item{ For each pair of objects $X,Y \in ob(\underline{C})$, a set\\ $Mor_{\underline{C}} (X,Y)$, the morphisms from $X$ to $Y$ (e.g. maps of sets from $X$ to $Y$, homomorphisms of groups from $X$ to $Y$, linear transformations from $X$ to $Y$, etc. respectively)} \item{ Composition operations:\\ $\mbox{\hspace{.05cm}}\raisebox{.04cm}{\tiny {$\circ$ }}: Mor_{\underline{C}} (X,Y) \times Mor_{\underline{C}} (Y,Z) \rightarrow Mor_{\underline{C}} (X,Z)$, corresponding to composition of set maps, group homomorphisms, linear transformations, etc. } \item{For each object $X \in \underline{C}$, a distinguished element $id_X\in Mor_{\underline{C}}(X,X)$} \end{itemize} The composition is assumed to be associative in the obvious sense, and for any $f \in Mor_{\underline{C}} (X,Y)$, it is assumed that $id_Y \mbox{\hspace{.05cm}}\raisebox{.04cm}{\tiny {$\circ$ }}f = f$ and $f \mbox{\hspace{.05cm}}\raisebox{.04cm}{\tiny {$\circ$ }}id_X = f$. \end{Definition} Here are the relevant examples for this paper. \begin{Example} {\em We will construct three categories $\underline{{\mathcal M}}^{iso}$, $\underline{{\mathcal M}}^{mon}$, and $\underline{{\mathcal M}}^{gen}$, whose collections of objects will all consist of the collection of finite metric spaces. Let $(X,d_X)$ and $(Y, d_Y)$ denote finite metric spaces. A set map $f: X \rightarrow Y$ is said to be {\em distance non-increasing} if for all $x,x^{\prime} \in X$, we have $d_Y(f(x),f(x^{\prime})) \leq d_X(x,x^{\prime})$. It is easy to check that compositions of distance non-increasing maps are also distance non-increasing, and it is also clear that $id_X$ is always distance non-increasing. We therefore have the category $\underline{{\mathcal M}}^{gen}$, whose objects are finite metric spaces, and so that for any objects $X$ and $Y$, $Mor_{\underline{{\mathcal M}}^{gen}}(X,Y)$ is the set of distance non-increasing maps from $X$ to $Y$, cf. \cite{isbell} for another use of this class of maps.
We say that a distance non-increasing map is {\em monic} if it is an inclusion as a set map. It is clear that compositions of monic maps are monic, and that all identity maps are monic, so we have the new category $\underline{{\mathcal M}}^{mon}$, in which $Mor_{\underline{{\mathcal M}}^{mon}}(X,Y)$ consists of the monic distance non-increasing maps. Finally, if $(X,d_X)$ and $(Y,d_Y)$ are finite metric spaces, $f:X \rightarrow Y$ is an {\em isometry} if $f$ is bijective and $d_Y(f(x), f(x^{\prime})) = d_X(x,x^{\prime})$ for all $x$ and $x^{\prime}$. It is clear that, as above, one can form a category $\underline{{\mathcal M}}^{iso}$ whose objects are finite metric spaces and whose morphisms are the isometries. It is clear that we have inclusions \begin{equation}\label{eq:inclu-cats} \underline{{\mathcal M}}^{iso} \subseteq \underline{{\mathcal M}}^{mon} \subseteq \underline{{\mathcal M}}^{gen} \end{equation} of subcategories (defined as in \cite{maclane}). Note that although the inclusions are bijections on object sets, they are proper inclusions on morphism sets, i.e. they are not in general surjective. } \end{Example} We will also construct a category of \emph{persistent sets}. \begin{Example}\label{example:P} {\em Let $(X, \theta), (Y, \eta)$ be persistent sets. For any partition $\Pi$ of a set $Y$, and any set map $f: X \rightarrow Y$, we define $f^*(\Pi)$ to be the partition of $X$ whose blocks are the sets $f^{-1}({\mathcal B})$, as ${\mathcal B}$ ranges over the blocks of $\Pi$. A map of sets $f: X \rightarrow Y$ is said to be {\em persistence preserving} if for each $r \in \mathbb{R}$, we have that $\theta (r)$ is a refinement of $f^*(\eta(r))$.
It is easily verified that the composite of persistence preserving maps is persistence preserving and that any identity map is persistence preserving; it is therefore clear that we may define a category $\underline{{\mathcal P}}$ whose objects are persistent sets, and where $Mor_{\underline{{\mathcal P}}}((X,\theta), (Y, \eta))$ consists of the set maps from $X$ to $Y$ which are persistence preserving. A simple example is shown in Figure \ref{fig:pers-preserv}. } \begin{figure} \caption{Two persistent sets $(X,\theta)$ and $(Y,\eta)$ represented by their dendrograms; on the left, the one defined on the set $X = \{A,B,C\}$.} \label{fig:pers-preserv} \end{figure} \end{Example} We next introduce the key concept in our discussion, that of a \emph{functor}. We give the formal definition first. \begin{Definition}\label{def:functor} Let $\underline{C}$ and $\underline{D}$ be categories. Then a {\em functor} from $\underline{C}$ to $\underline{D}$ consists of \begin{itemize} \item{A map of sets $F:ob(\underline{C}) \rightarrow ob(\underline{D})$} \item{For every pair of objects $X,Y \in \underline{C}$ a map of sets $\Phi(X,Y):Mor_{\underline{C}}(X,Y) \rightarrow Mor_{\underline{D}}(FX,FY)$ so that \begin{enumerate} \item{ $\Phi(X,X)(id_X) = id_{F(X)}$ for all $X \in ob(\underline{C})$} \item{$ \Phi (X,Z)(g \mbox{\hspace{.05cm}}\raisebox{.04cm}{\tiny {$\circ$ }}f) = \Phi (Y,Z)(g) \mbox{\hspace{.05cm}}\raisebox{.04cm}{\tiny {$\circ$ }}\Phi (X,Y)(f)$ for all $f \in Mor _{\underline{C}}(X,Y)$ and $g \in Mor _{\underline{C}}(Y,Z)$} \end{enumerate} } \end{itemize} \end{Definition} \begin{Remark} In the interest of clarity, we often refer to the pair $(F,\Phi)$ by the single letter $F$. See diagram (\ref{eq:diagram}) in Example \ref{ex:key} below for an example.
\end{Remark} A morphism $f:X \rightarrow Y$ which has a two-sided inverse $g: Y\rightarrow X$, so that $f \mbox{\hspace{.05cm}}\raisebox{.04cm}{\tiny {$\circ$ }}g = id_Y$ and $g \mbox{\hspace{.05cm}}\raisebox{.04cm}{\tiny {$\circ$ }}f = id_X$, is called an {\em isomorphism}. Two objects which are isomorphic are intuitively thought of as ``structurally indistinguishable" in the sense that they are identical except for naming or choice of coordinates. For example, in the category of sets, the sets $\{ 1,2,3 \}$ and $\{ A,B,C \}$ are isomorphic, since they are identical except for the choices made in labelling the elements. We illustrate this definition with some examples. \begin{Example}{\bf (Forgetful functors)} {\em When one has two categories $\underline{C}$ and $\underline{D}$, where the objects in $\underline{C}$ are objects in $\underline{D}$ equipped with some additional structure and the morphisms in $\underline{C}$ are simply the morphisms in $\underline{D}$ which preserve that structure, then we obtain the ``forgetful functor" from $\underline{C}$ to $\underline{D}$, which carries an object in $\underline{C}$ to the same object, regarded as an object of $\underline{D}$ without the additional structure. For example, a group can be regarded as a set with the additional structure of multiplication and inverse maps, and the group homomorphisms are simply the set maps which respect that structure. Accordingly, we have the functor from the category of groups to the category of sets which ``forgets the multiplication and inverse". Similarly, we have the forgetful functor from $\underline{{\mathcal P}}$ to the category of sets, which forgets the presence of $\theta$ in the persistent set $(X, \theta)$. } \end{Example} \begin{Example} {\em The inclusions $ \underline{{\mathcal M}}^{iso} \subseteq \underline{{\mathcal M}}^{mon} \subseteq \underline{{\mathcal M}}^{gen} $ are both functors.
} \end{Example} Any given clustering scheme is a procedure $F$ which takes as input a finite metric space $(X,d_X)$, that is, an object in $ob(\underline{\mathcal M}^{gen})$, and delivers as output a persistent set, that is, an object in $ob(\underline{\mathcal P})$. The concept of \underline{functoriality} refers to the additional condition that the clustering procedure maps a pair of input objects into a pair of output objects in a manner which is consistent/stable with respect to the morphisms attached to the input and output spaces. When this happens, we say that the clustering scheme is \underline{functorial}. This notion of consistency/stability is made precise in Definition \ref{def:functor} and described by diagram (\ref{eq:diagram}). Now, the idea is to regard clustering algorithms (that output a persistent set) as functors. Assume, for instance, that we want to consider ``stability'' with respect to all distance non-increasing maps. Then the correct category of inputs (finite metric spaces) is $\underline{{\mathcal M}}^{gen}$ and the category of outputs is $\underline{\mathcal P}$. According to Definition \ref{def:functor}, in order to view a clustering scheme as a functor we need to specify (1) how it maps objects of $\underline{{\mathcal M}}^{gen}$ (finite metric spaces) into objects of $\underline{\mathcal P}$ (persistent sets), and (2) how a valid morphism/map $f:(X,d_X)\rightarrow (Y,d_Y)$ between two objects $(X,d_X)$ and $(Y,d_Y)$ in the input space/category $\underline{{\mathcal M}}^{gen}$ induces a map in the output category $\underline{\mathcal P}$; see diagram (\ref{eq:diagram}) below. We exemplify this through the construction of the key example for this paper. \begin{Example} \label{ex:key}{\em We define a functor $${\mathcal R}^{gen}: \underline{{\mathcal M}}^{gen} \rightarrow \underline{{\mathcal P}}$$ as follows.
For a finite metric space $(X, d_X)$, we define ${\mathcal R}^{gen}(X,d_X)$ to be the persistent set $(X, \theta ^{d_X})$, where $\theta ^{d_X}(r)$ is the partition associated to the equivalence relation $\sim _r$ defined in Example \ref{rips}. This is clearly an object in $\underline{{\mathcal P}}$. We also define how ${\mathcal R}^{gen}$ acts on maps $f : (X,d_X) \rightarrow (Y,d_Y)$: The value of ${\mathcal R}^{gen}(f)$ is simply the set map $f$ regarded as a morphism from $(X,\theta ^{d_X})$ to $(Y,\theta ^{d_Y})$ in $\underline{{\mathcal P}}$. That it is a morphism in $\underline{{\mathcal P}}$ is easy to check. This functorial construction is represented through the diagram below: \begin{equation}\label{eq:diagram} \xymatrix{ (X,d_X) \ar[r]^{{\mathcal R}^{gen}} \ar[d]_{f} & (X,\theta^{d_X}) \ar[d]^{{\mathcal R}^{gen}(f)} \\ (Y,d_Y) \ar[r]^{{\mathcal R}^{gen}} & (Y,\theta^{d_Y})} \end{equation} where ${\mathcal R}^{gen}(f)$ is persistence preserving.} \end{Example} \begin{Example}{ \em By restricting ${\mathcal R}^{gen}$ to the subcategories $\underline{{\mathcal M}}^{iso} \mbox{ and } \underline{{\mathcal M}}^{mon} $, we obtain functors\\ ${\mathcal R}^{iso}: \underline{{\mathcal M}}^{iso} \rightarrow \underline{{\mathcal P}}$ and ${\mathcal R}^{mon}: \underline{{\mathcal M}}^{mon} \rightarrow \underline{{\mathcal P}}$.} \end{Example} \begin{Example}\label{scaling} {\em Let $\lambda $ be any positive real number. Then we define a functor $\sigma _{\lambda} : \underline{{\mathcal M}}^{gen} \rightarrow \underline{{\mathcal M}}^{mon}$ on objects by $$ \sigma _{\lambda}(X,d_X) = (X,\lambda d_X) $$ and on morphisms by $\sigma _{\lambda} (f) = f$. One easily verifies that if $f$ satisfies the conditions for being a morphism in $\underline{{\mathcal M}}^{gen}$ from $(X,d_X)$ to $(Y,d_Y)$, then it readily satisfies the conditions to be a morphism from $(X,\lambda d_X)$ to $(Y, \lambda d_Y)$. 
Similarly, we define a functor $s_{\lambda} : \underline{{\mathcal P}} \rightarrow \underline{{\mathcal P}}$ by setting $s_{\lambda}(X, \theta ) = (X, \theta ^{\lambda})$, where $\theta ^{\lambda}(r) = \theta (\frac{r}{\lambda})$. } \end{Example} In Section \ref{sec:results} we will present our main results. We now make a brief digression to discuss other situations in which, in our opinion, the concept of functoriality can be useful. \subsection{Intrinsic Value of Functoriality}\label{intrinsic} \noindent By studying functorial methods of clustering, it is possible to recover qualitative aspects of the geometric structure of a dataset. We illustrate this idea with a ``toy" example. We suppose that we have point cloud data which is concentrated around the unit circle. We consider the projection of the data onto the $x$-axis, and cover the axis with two (overlapping) intervals $U$ and $V$, pictured on the left in Figure \ref{hocolim} below as being red and yellow, with orange intersection. By considering those portions of the dataset whose $x$-coordinates lie in $U$ and $V$ respectively, we obtain the red and yellow subsets of the dataset pictured on the right below. Their intersection is pictured as orange, and the arrows indicate that we have inclusions of the intersection into each of the pieces. \begin{figure} \caption{\emph{Left:} the covering of the $x$-axis by the intervals $U$ and $V$ and the corresponding subsets of the dataset; \emph{center:} the resulting diagram of clusters; \emph{right:} the space reconstructed by the homotopy colimit construction.} \label{hocolim} \end{figure} \noindent Next, we note that if we are dealing with a functorial clustering scheme, and cluster each of these subsets, we obtain the diagram of clusters in the center of Figure \ref{hocolim}. This is now a very simple combinatorial object. \noindent There is a topological construction known as the {\em homotopy colimit}, which, given a diagram of sets of any shape, reconstructs a simplicial set (a slightly more flexible version of the notion of simplicial complex), and in particular a space.
To first approximation, one builds a vertex for every element in any set in the diagram, and an edge between any two elements which are connected by a map in the diagram, and then attaches higher-order simplices according to a well-defined procedure. In the case of the diagram above, this constructs the space given in the rightmost part of Figure \ref{hocolim}. \noindent The details of the theory of simplicial sets and homotopy colimits are beyond the scope of this paper. A thorough exposition is given in \cite{bk}. \noindent Functoriality is also quite useful when one is interested in studying the qualitative behavior of a real-valued function $f$ on a dataset, for example the output of a density estimator. Then it is useful to study the set of clusters in sublevel and superlevel sets of $f$, and understanding how the clusters behave under changes in the thresholds can reveal the presence of saddle points and higher index critical points of the function. One example of this is two-parameter persistence constructions, \cite{gsan}. In this case, there is more structure than just persistent sets (trees/dendrograms) as defined in this paper. We will elaborate on another application of functoriality in Section \ref{bootstrap}. \section{Results}\label{results} \label{sec:results} We now study different clustering algorithms using the idea of functoriality. We have three possible ``input'' categories ordered by inclusion (\ref{eq:inclu-cats}). The idea is that studying functoriality over a larger category will be more stringent/demanding than requiring functoriality over a smaller one. Below we consider several clustering algorithms and study whether they are functorial over each choice of input category.
We start by analyzing functoriality over the least demanding one, $\underline{\mathcal M}^{iso}$; then we prove a uniqueness result for functoriality over $\underline{\mathcal M}^{gen}$; and finally we study how relaxing the conditions imposed by the morphisms in $\underline{\mathcal M}^{gen}$, namely by restricting ourselves to the smaller, intermediate category $\underline{\mathcal M}^{mon}$, permits more functorial clustering algorithms. \subsection{Functoriality over $\underline{{\mathcal M}}^{iso}$} This is the smallest category we will deal with. The morphisms in $\underline{{\mathcal M}}^{iso}$ are simply the bijective maps between datasets which preserve the distance function. As such, functoriality of a clustering algorithm over $\underline{{\mathcal M}}^{iso}$ simply means that the output of the scheme does not depend on any artifacts in the dataset, such as the way the points are named or the way in which they are ordered. Here are some examples which illustrate the idea. \begin{itemize} \item{The {\em $k$-means algorithm} (see \cite{clusteringref}) is in principle allowed by our framework since $ob(\underline{\mathcal P})$ contains all constant persistent sets. However, it is not functorial on any of our input categories. It depends both on a parameter $k$ (the number of clusters) and on an initial choice of means, and therefore does not depend on the metric structure alone.} \item{{\em Agglomerative hierarchical clustering}, in standard form, as described for example in \cite{clusteringref}, begins with point cloud data and constructs a \emph{binary} tree (or dendrogram) which describes the merging of clusters as a threshold is increased. The lack of functoriality comes from the fact that when a single threshold value corresponds to more than one data point, one is forced to choose an ordering in order to decide which points to ``agglomerate'' first. This can easily be remedied by relaxing the requirement that the tree be binary.
This is what we did in Example \ref{hierarchical}. In this case, one can view these methods as functorial on $\underline{{\mathcal M}}^{iso}$, where the functor takes its values in arbitrary rooted trees. It is understood that in this case, the notion of morphism for the output ($\underline{\mathcal P}$) is simply isomorphism of rooted trees. In contrast, we see next that amongst these methods, if we impose that they be functorial over the larger (more demanding) category $\underline{\mathcal M}^{gen}$, then only one of them passes the test.\footnote{The result in Theorem \ref{thm:uniq} is actually more powerful in that it states that there is a unique functor from $\underline{\mathcal M}^{gen}$ to $\underline{\mathcal P}$ that satisfies certain natural conditions.} } \item{{\em Spectral clustering}. As described in \cite{ulrike_spectral}, spectral methods typically consist of two different layers. They first define a Laplacian matrix out of the dissimilarity matrix (given by $d_X$ in our case) and then find eigenvalues and eigenvectors of this operator. The second layer is as follows: a natural number $k$ must be specified, a projection to $\R^k$ is performed using the eigenfunctions, and clusters are found by an application of the $k$-means clustering algorithm. Clearly, operations in the second layer will fail to be functorial as they do not depend on the metric alone. However, the procedure underlying the first layer is clearly functorial on $\underline{\mathcal M}^{iso}$, as eigenvalue computations are changed by a permutation in a well-defined, natural way.} \end{itemize} \subsection{Functoriality over $\underline{\mathcal M}^{gen}$: a uniqueness theorem} In this section, as an example application of the conceptual framework of functoriality, we will prove a theorem of the same flavor as the main theorem of \cite{kleinberg}, except that in our context we prove existence and uniqueness on $\underline{\mathcal M}^{gen}$ instead of impossibility.
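The refinement condition that functoriality over $\underline{\mathcal M}^{gen}$ imposes is easy to check mechanically. The following Python sketch is our own illustration (the three-point spaces and the map $f$ are hypothetical, not taken from the paper): it computes the single linkage partitions $\theta^{d_X}(r)$ of Example \ref{rips} and verifies that $\theta^{d_X}(r)$ refines $f^*(\theta^{d_Y}(r))$ for a distance non-increasing map $f$.

```python
from itertools import combinations

def partition_at(points, d, r):
    """Blocks of ~_r: x ~_r x' iff some chain x = x_0, ..., x_n = x'
    has d(x_i, x_{i+1}) <= r (union-find over edges of length <= r)."""
    parent = {p: p for p in points}
    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p
    for x, y in combinations(points, 2):
        if d(x, y) <= r:
            parent[find(x)] = find(y)
    blocks = {}
    for p in points:
        blocks.setdefault(find(p), set()).add(p)
    return [frozenset(b) for b in blocks.values()]

def refines(P, Q):
    """P refines Q iff every block of P is contained in some block of Q."""
    return all(any(b <= c for c in Q) for b in P)

def pullback(Q, f, points):
    """f^*(Q): x, x' share a block iff f(x), f(x') do."""
    return [frozenset(x for x in points if f[x] in c) for c in Q
            if any(f[x] in c for x in points)]

# Hypothetical three-point spaces; f is distance non-increasing,
# hence a morphism in M^gen.
X, Y = ['A', 'B', 'C'], ['a', 'b', 'c']
dX = {('A', 'B'): 4.0, ('A', 'C'): 3.0, ('B', 'C'): 5.0}
dY = {('a', 'b'): 2.0, ('a', 'c'): 3.0, ('b', 'c'): 4.0}
dx = lambda u, v: dX[tuple(sorted((u, v)))]
dy = lambda u, v: dY[tuple(sorted((u, v)))]
f = {'A': 'a', 'B': 'b', 'C': 'c'}
for r in [0.0, 1.0, 2.5, 3.0, 3.5, 4.5, 6.0]:
    assert refines(partition_at(X, dx, r), pullback(partition_at(Y, dy, r), f, X))
```

Replacing single linkage here by complete linkage makes the check fail on examples such as the one discussed below (Figure \ref{cexample}).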
Before stating and proving our theorem, it is interesting to point out why complete linkage and average linkage (agglomerative) clustering, as defined in Example \ref{hierarchical}, are not functorial on $\underline{\mathcal M}^{gen}$. A simple example explains this: consider the metric spaces $X=\{A,B,C\}$ with metric given by the edge lengths $\{4,3,5\}$ and $Y=\{A',B',C'\}$ with metric given by the edge lengths $\{4,3,2\}$, as given in Figure \ref{cexample}. Obviously the map $f$ from $X$ to $Y$ with $f(A)=A'$, $f(B)=B'$ and $f(C)=C'$ is a morphism in $\underline{\mathcal M}^{gen}$. Note that, for example, for $r=3.5$ (shaded regions of the dendrograms in Figure \ref{cexample}) we have that the partition of $X$ is $\Pi_X = \{\{A,C\},\{B\}\}$ whereas the partition of $Y$ is $\Pi_Y = \{\{A',B'\},\{C'\}\}$ and thus $f^*(\Pi_Y)=\{\{A,B\},\{C\}\}$. Therefore $\Pi_X$ does not refine $f^*(\Pi_Y)$ as required by functoriality. The same construction yields a counter-example for average linkage. \begin{figure} \caption{An example that shows why complete linkage fails to be functorial on $\underline{\mathcal M}^{gen}$.} \label{cexample} \end{figure} \begin{Theorem} \label{thm:uniq} Let $\Psi: \underline{{\mathcal M}}^{gen} \rightarrow \underline{{\mathcal P}} $ be a functor which satisfies the following conditions. \begin{description} \item[(I)] {Let $\alpha : \underline{{\mathcal M}}^{gen} \rightarrow \underline{Sets}$ and $\beta :\underline{ {\mathcal P}}\rightarrow \underline{Sets}$ be the forgetful functors $(X,d_X) \mapsto X$ and $(X,\theta) \mapsto X$, which forget the metric and partition respectively, and only ``remember'' the underlying sets $X$. Then we assume that $\beta \compcirc \Psi = \alpha$. This means that the underlying set of the persistent set associated to a metric space is just the underlying set of the metric space.
} \item[(II)] {For $\delta\geq 0$ let $Z(\delta)=(\{p,q\},\dmtwo{\delta})$ denote the two-point metric space with underlying set $\{ p,q \}$, and where $\mbox{dist}(p,q) = \delta$. Then $\Psi (Z(\delta))$ is the persistent set $(\{p,q\},\theta ^{Z(\delta)})$ whose underlying set is $\{p,q \}$ and where $\theta ^{Z(\delta)} (t) $ is the partition with one-element blocks when $t < \delta$ and the partition with a single two-point block when $t \geq \delta$. } \item[(III)] {Given a finite metric space $(X,d_X)$, let $$\mbox{sep}(X) := \min_{x \neq x^{\prime}}d_X(x,x^{\prime}).$$ Write $\Psi(X,d_X)=(X,\theta^{\Psi})$; then for any $t < \mbox{sep}(X)$, the partition $\theta^{\Psi}(t)$ is the discrete partition with one-element blocks. } \end{description} Then $\Psi $ is equal to the functor ${\mathcal R}^{gen}$. \end{Theorem} \begin{proof} Let $\Psi(X,d_X) = (X,\theta^{\Psi})$. For each $r\geq 0$ we will prove that \textbf{(a)} $\theta^{d_X}(r)$ is a refinement of $\theta^{\Psi}(r)$ and \textbf{(b)} $\theta^{\Psi}(r)$ is a refinement of $\theta^{d_X}(r)$. It will then follow that $\theta^{d_X}(r)=\theta^{\Psi}(r)$ for all $r\geq 0$, which shows that the objects are the same. Since this is a situation where, given any pair of objects, there is at most one morphism between them, this also determines the effect of the functor on morphisms. Fix $r\geq 0$. In order to obtain (a) we need to prove that whenever $x,x'\in X$ lie in the same block of the partition $\theta^{d_X}(r)$, that is $x\sim_r x'$, then they both lie in the same block of $\theta^{\Psi}(r)$. It is enough to prove the following \underline{\textbf{Claim}}: whenever $d_X(x,x')\leq r$, then $x$ and $x'$ lie in the same block of $\theta^{\Psi}(r)$. Indeed, if the claim is true and $x\sim_r x'$, then one can find $x_0,x_1,\ldots,x_n$ with $x_0=x$, $x_n=x'$ and $d_X(x_i,x_{i+1})\leq r$ for $i=0,1,2,\ldots,n-1$.
Then, invoking the claim for all pairs $(x_i,x_{i+1})$, $i=0,\ldots,n-1$, one would find that: ${x}=x_0$ and $x_1$ lie in the same block of $\theta^{\Psi}(r)$, $x_1$ and $x_2$ lie in the same block of $\theta^{\Psi}(r)$, $\ldots$, $x_{n-1}$ and $x_n=x'$ lie in the same block of $\theta^{\Psi}(r)$. Hence, $x$ and $x'$ lie in the same block of $\theta^{\Psi}(r)$. We now prove the claim. Assume $d_X(x,x')\leq r$; then the function given by $p \mapsto x$, $q \mapsto x^{\prime}$ is a morphism $g:Z(r) \rightarrow ( X, d_X) $ in $\underline{{\mathcal M}}^{gen}$. This means that we obtain a morphism $$\Psi (g) : \Psi (Z(r)) \rightarrow \Psi (X,d_X)$$ in $\underline{{\mathcal P}}$. But $p$ and $q$ lie in the same block of the partition $\theta ^{Z(r)}(r)$ by definition of $Z(r)$, and functoriality therefore guarantees that $\Psi(g)$ is persistence preserving (recall Example \ref{example:P}), and hence the elements $g(p) = x$ and $g(q) = x^{{\prime}}$ lie in the same block of $ \theta ^{\Psi} (r)$. This concludes the proof of (a). For condition (b), assume that $x$ and $x'$ belong to the same block of the partition $\theta^{\Psi}(r)$. We will prove that necessarily $x\sim_r x'$. This of course will imply that $x$ and $x'$ belong to the same block of $\theta^{d_X}(r)$. Consider the metric space $(X[r],d_{[r]})$ whose points are the equivalence classes of $X$ under the equivalence relation $\sim _r$, and where the metric $d_{[r]}:X[r]\times X[r]\rightarrow \R^+$ is defined to be the maximal metric pointwise less than or equal to $W$, where for two points ${\mathcal B}$ and ${\mathcal B}' $ in $X[r]$ (equivalence classes of $X$ under $\sim_r$), $W({\mathcal B},{\mathcal B}') = \min_{x \in {\mathcal B}}\min _{x^{\prime} \in {\mathcal B}'} d_X (x,x^{\prime}).$\footnote{See Section \ref{sec:stab} for a similar, explicit construction.} It follows from the definition of $\sim _r$ that if two equivalence classes are distinct, then the distance between them is $> r$.
This means that $\mbox{sep}(X[r])>r.$ Write $\Psi (X[r],d_{[r]})=(X[r],\theta^{[r]})$. Since $\mbox{sep}(X[r])>r$, hypothesis (III) now directly shows that the blocks of the partition $\theta^{[r]}(r)$ are exactly the equivalence classes of $X$ under the equivalence relation $\sim _r$; that is, under the obvious identification, $\theta^{[r]}(r) = \theta^{d_X}(r)$. Finally, consider the morphism $$\pi_r: (X,d_X) \rightarrow (X[r],d_{[r]})$$ in $\underline{{\mathcal M}}^{gen}$ given on elements $x \in X$ by $\pi_r (x) =[x]_r$, where $[x]_r$ denotes the equivalence class of $x$ under $\sim_r$. By functoriality, $\Psi(\pi_r):(X,\theta^{\Psi})\rightarrow (X[r],\theta^{[r]})$ is persistence preserving, and therefore $\theta^{\Psi}(r)$ is a refinement of $\theta^{[r]}(r)=\theta^{d_X}(r)$. This is depicted as follows: \begin{displaymath} \xymatrix{ (X,d_X) \ar[r]^\Psi \ar[d]_{\pi_r} & (X,\theta^{\Psi}) \ar[d]^{\Psi(\pi_r)} \\ (X[r],d_{[r]}) \ar[r]^{\Psi} & (X[r],\theta^{[r]}) } \end{displaymath} This concludes the proof of (b). \end{proof} We should point out that another characterization of single linkage has been obtained in the book \cite{math-taxo}. \subsubsection{Comments on Kleinberg's conditions}\label{sec:klein-conds} \noindent We conclude this section by observing that analogues of the three (axiomatic) properties considered by Kleinberg in \cite{kleinberg} hold for ${\mathcal R}^{gen}$. Kleinberg's \underline{first condition} was {\em scale-invariance}, which asserted that if the distances in the underlying point cloud data were multiplied by a constant positive multiple $\lambda$, then the resulting clustering decomposition should be identical. In our case, this is replaced by the condition that ${\mathcal R}^{gen}\compcirc\sigma _{\lambda}(X,d_X) = s_{\lambda}\compcirc{\mathcal R}^{gen}(X,d_X)$, which is trivially satisfied.
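Numerically, this identity reduces to the observation that $\lambda d_X(x,x')\le r$ if and only if $d_X(x,x')\le r/\lambda$. The short Python sketch below (our illustration, on a hypothetical six-point example with random symmetric dissimilarities; note that the check never uses the triangle inequality) confirms it:

```python
import random
from itertools import combinations

def blocks(points, d, r):
    """Partition theta^d(r): connected components of the graph with an
    edge between x and y whenever d(x, y) <= r."""
    comp, seen = [], set()
    for s in points:
        if s in seen:
            continue
        stack, c = [s], set()
        while stack:
            x = stack.pop()
            if x in c:
                continue
            c.add(x)
            stack.extend(y for y in points if y not in c and d(x, y) <= r)
        seen |= c
        comp.append(frozenset(c))
    return set(comp)

random.seed(0)
pts = list(range(6))
D = {frozenset(p): random.uniform(0.5, 3.0) for p in combinations(pts, 2)}
d = lambda x, y: 0.0 if x == y else D[frozenset((x, y))]
lam = 2.5
d_scaled = lambda x, y: lam * d(x, y)  # sigma_lam applied to the space
for r in [0.4, 1.0, 1.7, 2.6, 5.0]:
    # R^gen(sigma_lam(X))(r) == s_lam(R^gen(X))(r) = theta^d(r / lam)
    assert blocks(pts, d_scaled, r) == blocks(pts, d, r / lam)
```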
Kleinberg's \underline{second condition}, {\em richness}, asserts that any partition of a dataset can be obtained as the result of the given clustering scheme for some metric on the dataset. In our context, partitions are replaced by persistent sets. Assume that there exists $t\in \Rr$ s.t. $\theta(t)$ is the single-block partition, i.e., impose that the persistent set is a dendrogram (cf. Definition \ref{def:persistent-sets}). In this case, it is easy to check that any such persistent set can be obtained as ${\mathcal R}^{gen}$ evaluated for some (pseudo)metric on some dataset. Indeed,\footnote{We only prove the triangle inequality.} let $(X,\theta)\in{ob}(\underline{\mathcal P})$. Let $\epsilon_1,\ldots,\epsilon_k$ be the (finitely many) transition/discontinuity points of $\theta$. For $x,x'\in X$ define $d_X(x,x')$ to be the minimal $\epsilon_i$ s.t. $x,x'$ belong to the same block of $\theta(\epsilon_i)$. This is a pseudometric on $X$. Indeed, pick points $x,x'$ and $x''$ in $X$. Let $\epsilon_1$ and $\epsilon_2$ be minimal s.t. $x,x'$ belong to the same block of $\theta(\epsilon_1)$ and $x',x''$ belong to the same block of $\theta(\epsilon_2)$. Let $\epsilon_{12}:=\max(\epsilon_1,\epsilon_2)$. Since $(X,\theta)$ is a persistent set (Definition \ref{def:persistent-sets}), $\theta(\epsilon_{12})$ must have a block $\mathcal B$ s.t. $x,x'$ and $x''$ all lie in $\mathcal B$. Hence $d_X(x,x'')\leq \epsilon_{12}\leq \epsilon_1+\epsilon_2=d_X(x,x')+d_X(x',x'')$. Finally, Kleinberg's \underline{third condition}, \emph{consistency}, could be viewed as a rudimentary example of functoriality. His morphisms are similar to the ones in $\underline{\mathcal M}^{gen}$. \subsection{Functoriality over $\underline{{\mathcal M}}^{mon}$} In this section, we illustrate how relaxing the functoriality requirement permits more clustering algorithms.
In other words, we will restrict ourselves to $\underline{{\mathcal M}}^{mon}$, which is smaller (less stringent) than $\underline{{\mathcal M}}^{gen}$ but larger (more stringent) than $\underline{{\mathcal M}}^{iso}$. We consider the restriction of ${\mathcal R}^{gen}$ to the category $\underline{{\mathcal M}}^{mon}$. For any metric space and every value of the persistence parameter $r$, we obtain a partition of the underlying set $X$ of the metric space in question, namely the set of equivalence classes under $\sim _r$. For any $x \in X$, let $[x]_r$ be the equivalence class of $x$ under the equivalence relation $\sim _r$, and define $c(x) = \# [x]_r$. For any integer $m$, we now define $X_m \subseteq X$ by $X_m = \{ x \in X| c(x) \geq m\} $. We note that for any morphism $f: X \rightarrow Y$ in $\underline{{\mathcal M}}^{mon}$, we find that $f(X_m) \subseteq Y_m$. This property clearly does not hold for more general morphisms. For every $r$, we can now define a new equivalence relation $\sim ^m_r $ on $X$, which refines $\sim _r$, by requiring that each equivalence class of $\sim _r$ which has cardinality $\geq m$ is an equivalence class of $\sim _r^m$, and that any $x $ for which $c(x) < m$ defines a singleton equivalence class in $\sim ^m_r$. We now obtain a new persistent set $(X,\theta ^m )$, where $\theta ^m (r)$ will denote the partition associated to the equivalence relation $\sim ^m_r$. It is readily checked that $(X,d_X) \mapsto (X, \theta ^m )$ is functorial on $\underline{{\mathcal M}}^{mon}$. This scheme could be motivated by the intuition that one does not regard clusters of small cardinality as significant, and therefore makes points lying in small clusters into singletons, which one can then remove as representing ``outliers''. \section{Metric stability and convergence properties of ${\mathcal R}^{gen}$} \label{sec:stab} In this section we briefly discuss some further properties of ${\mathcal R}^{gen}$ (single linkage dendrograms).
We will provide quantitative results on the \emph{stability} and \emph{convergence/consistency} properties of this functor (algorithm). To the best of our knowledge, the only other related results obtained for this algorithm appear in \cite{hartigan}. The issues of stability and convergence/consistency of clustering algorithms have recently been brought back into attention, see \cite{ulrike_general,sober} and references therein. In Theorem \ref{theo:conv}, besides proving stability, we prove convergence in a simple setting. Given finite metric spaces $(X,d_X)$ and $(Y,d_Y)$, our goal is to define a distance between the persistence objects $\theta^{d_X}$ and $\theta^{d_Y}$ respectively produced by ${\mathcal R}^{gen}$. We know that this functor actually outputs dendrograms (rooted trees), which have a natural metric structure attached to them. Moreover, it is well known that rooted trees are uniquely characterized by their distance matrix, \cite{book-trees}. For a finite metric space $(X,d_X)$ consider the derived metric space $(X,\varepsilon_X)$ with the same underlying set and metric \beq{def:varepsilon-X} \varepsilon_X(x,x'):=\min\{\varepsilon\geq 0|\,x\sim_\varepsilon x'\}. \end{equation} Note that $(X,\theta^{d_X})$ can therefore obviously be regarded as the metric space $(X,\varepsilon_X)$; cf. the construction of the metric in Section \ref{sec:klein-conds}. We now check that $\varepsilon_X$ indeed defines a metric on $X$. \begin{Proposition} For any finite metric space $(X,d_X)$, $(X,\varepsilon_X)$ is also a metric space. \end{Proposition} \begin{proof} (a) Since $d_X$ is a metric on $X$, it is obvious that $\varepsilon_X(x,x')=0$ implies $x=x'$. (b) Symmetry is also obvious since $\sim_\varepsilon$ is an equivalence relation. (c) Triangle inequality: Pick $x,x',x''\in X$. Let $\varepsilon_X(x,x')=\varepsilon_1$ and $\varepsilon_X(x',x'')=\varepsilon_2$.
Then, there exist points $a_0,a_1,\ldots,a_j$ and $b_0,b_1,\ldots,b_k$ in $X$ with $a_0=x$, $a_j=x'=b_0$, $b_k=x''$ and $d_X(a_i,a_{i+1})\leq \varepsilon_1$ for $i=0,\ldots,j-1$ and $d_X(b_i,b_{i+1})\leq \varepsilon_2$ for $i=0,\ldots,k-1$. Consider the points $ \{c_i\}_{i=0}^{j+k}=\{a_0,\ldots,a_j,b_1,\ldots,b_k\}.$ Then $d_X(c_i,c_{i+1})\leq \max(\varepsilon_1,\varepsilon_2)\leq \varepsilon_1+\varepsilon_2$. Hence \mbox{$x\sim_{\varepsilon_{12}} x''$} with $\varepsilon_{12}=\varepsilon_1+\varepsilon_2$, and then by definition (\ref{def:varepsilon-X}), $\varepsilon_X(x,x'')\leq \varepsilon_X(x,x')+\varepsilon_X(x',x'').$ \end{proof} In order to compare the outputs of ${\mathcal R}^{gen}$ on two different finite metric spaces $(X,d_X)$ and $(X',d_{X'})$ we will instead compare the metric space representations of those outputs, $(X,\varepsilon_X)$ and $(X',\varepsilon_{X'})$, respectively. For this purpose, we choose to work with the Gromov-Hausdorff distance, which we now define, \cite{burago-book}. \begin{Definition}[Correspondence] For sets $A$ and $B$, a subset $R\subset A\times B$ is a \emph{correspondence} (between $A$ and $B$) if and only if \begin{itemize} \item $\forall$ $a\in A$, there exists $b\in B$ s.t. $(a,b)\in R$ \item $\forall$ $b\in B$, there exists $a\in A$ s.t. $(a,b)\in R$ \end{itemize} \end{Definition} Let ${\mathcal R}(A,B)$ denote the set of all possible correspondences between sets $A$ and $B$. Consider finite metric spaces $(X,d_X)$ and $(Y,d_Y)$. Let $\Gamma_{X,Y}:X\times Y \times X\times Y\rightarrow \R^+$ be given by $$(x,y,x',y')\mapsto |d_X(x,x')-d_Y(y,y')|.$$ Then, the \textbf{Gromov-Hausdorff} distance between $X$ and $Y$ is given by \beq{dgh} \dgro{X}{Y} := \inf_{R\in{\mathcal R}(X,Y)}\,\,\max_{(x,y),(x',y')\in R}\Gamma_{X,Y}(x,y,x',y') \end{equation} \begin{Remark} This expression defines a \textbf{metric} on the set of (isometry classes of) finite metric spaces, \cite{burago-book} (Theorem 7.3.30).
\end{Remark} One has: \begin{Proposition}\label{prop:varep} For any finite metric spaces $(X,d_X)$ and $(Y,d_Y)$, $$\dgro{(X,d_X)}{(Y,d_Y)}\geq \dgro{(X,\varepsilon_X)}{(Y,\varepsilon_Y)}.$$ \end{Proposition} \begin{proof} Let $\eta=\dgro{(X,d_X)}{(Y,d_Y)}$ and $R\in{\mathcal R}(X,Y)$ s.t. $|d_X(x,x')-d_Y(y,y')|\leq \eta$ for all $(x,y),(x',y')\in R$. Fix $(x,y)$ and $(x',y')\in R$. Let $x_0,\ldots,x_m\in X$ be s.t. $x_0=x$, $x_m=x'$ and $d_X(x_i,x_{i+1})\leq \varepsilon_X(x,x')$ for all $i=0,\ldots,m-1$. Let $y=y_0,y_1,\ldots,y_{m-1},y_m=y'\in Y$ be s.t. $(x_i,y_i)\in R$ for all $i=0,\ldots,m$ (this is possible by definition of $R$). Then, $d_Y(y_i,y_{i+1})\leq d_X(x_i,x_{i+1})+\eta\leq \varepsilon_X(x,x')+\eta$ for all $i=0,\ldots,m-1$, and hence $\varepsilon_Y(y,y')\leq \varepsilon_X(x,x')+\eta$. By exchanging the roles of $X$ and $Y$ one obtains the inequality $\varepsilon_X(x,x')\leq \varepsilon_Y(y,y')+\eta$. This means $|\varepsilon_X(x,x')-\varepsilon_Y(y,y')|\leq \eta$. Since $(x,y),(x',y')\in R$ were arbitrary, upon recalling the definition of the Gromov-Hausdorff distance we obtain the desired conclusion. \end{proof} Proposition \ref{prop:varep} will allow us to quantify \emph{stability} and \emph{convergence}. We provide deterministic arguments. Essentially the same construction yields similar results under the assumption that $(Z,d_Z)$ is enriched with a (Borel) probability measure and one takes i.i.d. samples w.r.t. this probability measure. Assume $(Z,d_Z)$ is an underlying (perhaps ``continuous'') metric space from which different finite samples are drawn. We would like to see, quantitatively, (1) how the results yielded by ${\mathcal R}^{gen}$ differ when applied to those different sample sets (which possibly contain different numbers of points); this is \emph{stability}; and (2) that when the underlying metric space is partitioned, there is \emph{convergence} and \emph{consistency} in a precise sense.
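Proposition \ref{prop:varep} also lends itself to direct numerical verification on very small spaces. In the Python sketch below (our illustration; the two $3\times 3$ distance matrices are hypothetical), $\varepsilon_X$ is computed by a minimax Floyd--Warshall recursion, and the Gromov--Hausdorff distance in the convention of (\ref{dgh}) by brute force over all correspondences:

```python
from itertools import product

def minimax(d):
    """epsilon_X of (def:varepsilon-X): the smallest r with x ~_r x',
    i.e. the minimax path cost, via a Floyd-Warshall style recursion."""
    n = len(d)
    e = [row[:] for row in d]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                e[i][j] = min(e[i][j], max(e[i][k], e[k][j]))
    return e

def gh(dX, dY):
    """Gromov-Hausdorff distance in the convention of (dgh) (no 1/2
    factor), by brute force over all correspondences; tiny inputs only."""
    nX, nY = len(dX), len(dY)
    pairs = list(product(range(nX), range(nY)))
    best = float('inf')
    for mask in range(1, 1 << len(pairs)):
        R = [p for i, p in enumerate(pairs) if mask >> i & 1]
        if {x for x, _ in R} != set(range(nX)):
            continue  # not surjective onto X
        if {y for _, y in R} != set(range(nY)):
            continue  # not surjective onto Y
        dis = max(abs(dX[x][xp] - dY[y][yp])
                  for x, y in R for xp, yp in R)
        best = min(best, dis)
    return best

# Hypothetical 3-point metric spaces (triangle inequality holds).
dX = [[0, 1, 3], [1, 0, 2], [3, 2, 0]]
dY = [[0, 2, 4], [2, 0, 2], [4, 2, 0]]
assert gh(dX, dY) >= gh(minimax(dX), minimax(dY))
```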
Assume $A$ is a finite set and let $W:A\times A\rightarrow \R^+$ be a symmetric map. Using the usual path-length construction, we endow $A$ with the (pseudo)metric $$d_A(a,a'):=\min\sum_{k=0}^{m-1}W(a_{k},a_{k+1})$$ where the minimum is taken over $m$ and all sets of $m+1$ points $a_0,\ldots,a_m$ such that $a_0=a$ and $a_m=a'$. We denote $d_A = {\mathcal L}(W)$. This is a standard construction, see \cite{bridson} \S1.24. For a compact metric space $(Z,d_Z)$ and any two of its compact subsets $Z_1,Z_2$ let $$D_Z(Z_1,Z_2)=\min_{z_1\in Z_1}\min_{z_2\in Z_2}d_Z(z_1,z_2).$$ For $(Z,d_Z)$ compact and $X,X'\subset Z$ compact, let $d_{\mathcal H}^Z(X,X')$ denote the Hausdorff distance (in $Z$) between $X$ and $X'$, \cite{burago-book}. For any $X\subset Z$ let $R(X):=d_{\mathcal H}^Z(X,Z)$. Intuitively this number measures how well $X$ approximates $Z$. One says that $X$ is an $R(X)$-\emph{covering} of $Z$ or an $R(X)$-net of $Z$. The following theorem summarizes our main results regarding metric stability and convergence/consistency. The situation described by the theorem is depicted in Figure \ref{fig:theoconv}. \begin{Theorem}\label{theo:conv} Assume $(Z,d_Z)$ is a compact metric space. Let $X$ and $X'$ be any two finite sets of points sampled from $Z$. Endow these two sets with the (restricted) metric $d_Z$. Then, \begin{enumerate} \item (Finite Stability) $\dgro{(X,\varepsilon_X)}{(X',\varepsilon_{X'})}\leq 2(R(X)+R(X')).$ \item (Asymptotic Stability) As $\max\left(R(X),R(X')\right)\rightarrow 0$ one has $\dgro{(X,\varepsilon_X)}{(X',\varepsilon_{X'})}\rightarrow 0.$ \item (Convergence/consistency) Assume in addition that $Z=\cup_{\alpha\in A}Z_\alpha$ where $A$ is a finite index set and $Z_\alpha$ are compact, disjoint and path-connected sets. 
Let $(A,d_A)$ be the finite metric space with underlying set $A$ and metric given by $d_A:={\mathcal L}(W)$ where $W(\alpha,\alpha'):=D_Z(Z_{\alpha},Z_{\alpha'})$ for $\alpha,\alpha'\in A$.\footnote{Since the $Z_\alpha$ are disjoint, $d_A$ is a true metric on $A$.} Then, as $R(X)\rightarrow 0$ one has $$\dgro{(X,\varepsilon_X)}{(A,\varepsilon_A)}\rightarrow 0.$$ \end{enumerate} \end{Theorem} \begin{figure} \caption{Explanation of Theorem \ref{theo:conv}.} \label{fig:theoconv} \end{figure} \begin{proof} Let $\delta>0$ be s.t. $\min_{\alpha\neq \beta}D_Z(Z_\alpha,Z_\beta)\geq \delta.$ Claim 1 follows from Proposition \ref{prop:varep}: let $d_X$ (resp. $d_{X'}$) equal the restriction of $d_Z$ to $X\times X$ (resp. $X'\times X'$). Then, by the triangle inequality for the Gromov-Hausdorff distance, $$\dgro{X}{Z} + \dgro{X'}{Z}\geq \dgro{(X,\varepsilon_X)}{(X',\varepsilon_{X'})}.$$ Now, the claim follows from the fact that for any compact $X\subset Z$, $\dgro{X}{Z}\leq 2d_{\mathcal H}^{Z}(X,Z)=2R(X)$, \cite{burago-book}, \S7.3. Claim 2 follows directly from Claim 1. We now prove the third claim. For each $x\in X$ let $\alpha(x)$ denote the index of the path-connected component of $Z$ s.t. $x\in Z_{\alpha(x)}$. Assume $R(X)< \frac{\delta}{2}$. Then, it is clear that $\#\left(Z_\alpha\cap X\right)\geq 1$ for all $\alpha\in A$. Then it follows that $R=\{(x,\alpha(x))|x\in X\}$ belongs to ${\mathcal R}(X,A)$. We prove below that for all $x,x'\in X$ $$\varepsilon_A(\alpha(x),\alpha(x')) \stackrel{(1)}{\leq} \varepsilon_X(x,x') \stackrel{(2)}{\leq} \varepsilon_A(\alpha(x),\alpha(x')) + 2R(X).$$ It follows immediately from the definition of $W$ that for all $y,y'\in X$, $W(\alpha(y),\alpha(y'))\leq d_X(y,y')$. From the definition of $d_A$ it follows that $W(\alpha,\alpha')\geq d_A(\alpha,\alpha')$. Then, in order to prove (1), pick $x_0,\ldots,x_m$ in $X$ with $x_0=x$, $x_m=x'$ and $d_X(x_i,x_{i+1})\leq \varepsilon_X(x,x')$.
Consider the points in $A$ given by $\alpha(x)=\alpha(x_0),\ldots, \alpha(x_m)=\alpha(x')$. Then, $d_A(\alpha(x_i),\alpha(x_{i+1}))\leq W(\alpha(x_i),\alpha(x_{i+1}))\leq d_X(x_i,x_{i+1})\leq \varepsilon_X(x,x')$ for $i=0,\ldots,m-1$ by the claim above. Hence (1) follows. We now prove (2). Assume first that $\alpha(x)=\alpha(x')=\alpha$. Fix $\epsilon_0>0$ small. Let $\gamma:[0,1]\rightarrow Z_\alpha$ be a continuous path s.t. $\gamma(0)=x$ and $\gamma(1)=x'$. Let $z_0,z_1,\ldots,z_m$ be points on $\mbox{image}(\gamma)$ s.t. $z_0 = x$, $z_m = x'$ and $d_Z(z_i,z_{i+1}) \leq {\epsilon_0}$, $i=0,\ldots,m-1$. By hypothesis, one can find $x=x_0,x_1,\ldots,x_{m-1},x_m=x'$ in $X$ s.t. $d_Z(x_i,z_i)\leq R(X)$. Hence $d_X(x_i,x_{i+1})\leq {\epsilon_0}+2R(X)$, and hence $\varepsilon_X(x,x')\leq {\epsilon_0}+2R(X)$. Let $\epsilon_0\rightarrow 0$ to obtain the desired result. Now if $\alpha=\alpha(x)\neq \alpha(x')=\beta$, let $\alpha_0,\alpha_1,\ldots,\alpha_l\in A$ be s.t. $\alpha_0=\alpha(x)$, $\alpha_l=\alpha(x')$ and $d_A(\alpha_j,\alpha_{j+1})\leq \varepsilon_A(\alpha(x),\alpha(x'))$ for $j=0,\ldots,l-1$. By definition of $d_A$, for each $j=0,\ldots,l-1$ one can find a path ${\mathcal C}_j=\{\alpha_j^{(0)}, \ldots, \alpha_j^{(r_j)}\}$ s.t. $\alpha_j^{(0)}=\alpha_j$, $\alpha_{j}^{(r_j)}=\alpha_{j+1}$ and $\sum_{i=0}^{r_j-1}W(\alpha_{j}^{(i)},\alpha_{j}^{(i+1)})=d_A(\alpha_j,\alpha_{j+1})\leq \varepsilon_A(\alpha,\beta).$ It follows that $W(\alpha_{j}^{(i)},\alpha_{j}^{(i+1)})\leq \varepsilon_A(\alpha,\beta)$ for $i=0,\ldots,r_{j}-1$. Consider the path ${\mathcal C}=\{\widehat{\alpha}_0,\ldots, \widehat{\alpha}_s\}$ in $A$ joining $\alpha$ to $\beta$ given by the concatenation of all the ${\mathcal C}_j$. By eliminating repeated consecutive elements in ${\mathcal C}$ if necessary, one can assume that $\widehat{\alpha}_i\neq \widehat{\alpha}_{i+1}$.
By construction $W(\widehat{\alpha}_i,\widehat{\alpha}_{i+1})\leq \varepsilon_A(\alpha,\beta)$ and $\widehat{\alpha}_0=\alpha$, $\widehat{\alpha}_s=\beta$. We will now lift $\mathcal C$ into a path in $Z$ joining $x$ to $x'$. Note that by compactness, for all $\nu,\mu\in A$, $\nu\neq \mu$, there exist $z_{\nu,\mu}^\nu\in Z_\nu$ and $z_{\nu,\mu}^\mu\in Z_\mu$ s.t. $W(\nu,\mu)=d_Z(z_{\nu,\mu}^\nu,z_{\nu,\mu}^\mu)$. Consider the path $\mathcal G$ in $Z$ given by $${\mathcal G}=\{x,z_{\widehat{\alpha}_0,\widehat{\alpha}_1}^{\widehat{\alpha}_0},z_{\widehat{\alpha}_0,\widehat{\alpha}_1}^{\widehat{\alpha}_1},\ldots,z_{\widehat{\alpha}_{s-1},\widehat{\alpha}_{s}}^{\widehat{\alpha}_s},x'\}.$$ For each point $g\in \mathcal G$ pick a point $x(g)\in X$ s.t. $d_Z(g,x(g))\leq R(X)$. This is possible by definition of $R(X)$. Let ${\mathcal G}' = \{x_0,x_1,\ldots,x_t\}$ be the resulting path in $X$. Notice that if $\alpha(x_t)\neq\alpha(x_{t+1})$ then $d_X(x_t,x_{t+1})\leq 2R(X)+W(\alpha(x_t),\alpha(x_{t+1}))$ by the triangle inequality. Also, by construction, $(*)\hspace{.5in}W(\alpha(x_t),\alpha(x_{t+1}))\leq \varepsilon_A(\alpha,\beta).$ Now, we claim that $$\varepsilon_X(x,x')\leq \max_{t}W(\alpha(x_t),\alpha(x_{t+1}))+2R(X).$$ This claim will follow from the simple observation that $\varepsilon_X(x,x')\leq \max_t \varepsilon_X(x_t,x_{t+1}).$ If $\alpha(x_t)=\alpha(x_{t+1})$, we already proved that $\varepsilon_X(x_t,x_{t+1})\leq 2R(X)$. If on the other hand $\alpha(x_t)\neq \alpha(x_{t+1})$, then $\varepsilon_X(x_t,x_{t+1})\leq 2R(X)+W(\alpha(x_t),\alpha(x_{t+1}))$, and hence the claim. Combine this fact with $(*)$ to conclude the proof of (2). Putting (1) and (2) together we have $$\dgro{(X,\varepsilon_X)}{(A,\varepsilon_A)}\leq 2R(X)$$ and the conclusion follows by letting $R(X)\rightarrow 0$.
\end{proof} \section{Functoriality and bootstrap clustering}\label{sec:zigzags}\label{bootstrap} In the previous section, we observed that encoding the output of a clustering scheme as a diagram (i.e., as a persistent set or dendrogram) allows one to assess the stability of the clustering obtained from the scheme. In this section, we will demonstrate another way in which functoriality can be used to assess the stability of clustering schemes whose output is simply a partition of the underlying point cloud. We begin by recalling the basics of the {\em bootstrap} method developed by B. Efron \cite{efron}. The bootstrap considers a set of point cloud data $\Bbb{X}$, and repeatedly samples (with replacement) collections of (say) $n$ elements from $\Bbb{X}$. For each sample, measures of central tendency such as means, medians, and variances are computed, and the distributions of these measures are studied. It is understood that such computations are more informative than the measures computed a single time on the full set $\Bbb{X}$. We wish to perform a similar analysis for clustering. The difficulty is that the output of clustering is not a single numerical statistic, but is rather a structural, qualitative output. We will now show how functoriality can be used to assess compatibility of clusterings of subsamples, and thereby obtain a method for confirming that clustering is a significant feature of the data rather than an artifact. In the context of clustering, these bootstrapping ideas arise when dealing with massive datasets: one is forced to analyze several smaller, more manageable random subsamples of the original data to produce partial pictures of the underlying clustering structure. The problem then is how to agglomerate all this information together. In this section, for us, a clustering scheme will denote any rule $C$ which assigns to every finite metric space $S$ a partition ${\mathcal P}_{C}(S)$.
We write ${\mathcal B}_C(S)$ for the set of blocks of the partition ${\mathcal P}_C(S)$. If we are given two finite metric spaces $S,T$, an \underline{embedding} from $S$ to $T$ is an injective set map $\iota: S \hookrightarrow T$ such that $d_T(\iota (x), \iota (x^{\prime})) = d_S(x,x^{\prime})$. Given any partition ${\mathcal P}$ of a metric space $T$, and given any set map $\varphi : S \rightarrow T$, we write $\varphi ^*({\mathcal P})$ for the partition of $S$ which places $s,s^{\prime} \in S$ in the same block if and only if $\varphi (s) $ and $\varphi (s^{\prime}) $ lie in the same block of ${\mathcal P}$. The clustering scheme $C$ is now said to be {\bf I-functorial} if ${\mathcal P}_C(S) $ refines $\iota ^* ({\mathcal P}_C(T))$ for any embedding $\iota: S \rightarrow T$. Note that for any I-functorial clustering scheme $C$, there is an induced map ${\mathcal B}_C(\iota): {\mathcal B}_C(S) \rightarrow {\mathcal B}_C(T)$ for any embedding $\iota : S \hookrightarrow T$. An example of an I-functorial clustering scheme is single linkage clustering for a fixed threshold $\epsilon$. Now let $\Bbb{X}$ be a set of point cloud data, equipped with a metric $d$. We build collections of samples $S_i \subseteq \Bbb{X}$ of size $n$ from $\Bbb{X}$, with replacement, for $1 \leq i \leq N$. We assume we are given an I-functorial clustering scheme $C$. We note that each of the samples $S_i$ and the sets $S_i \cup S_{i+1}$ are finite metric spaces in their own right, and that the natural inclusions $S_i \hookrightarrow S_i \cup S_{i+1}$ and $S_{i+1} \hookrightarrow S_i \cup S_{i+1}$ are embeddings of finite metric spaces. It follows from the I-functoriality of the clustering scheme $C$ that we obtain a diagram of sets in Figure \ref{diagram-g} \begin{figure} \caption{Diagram of sets obtained via I-functoriality of the clustering scheme.} \label{diagram-g} \end{figure} We will refer to such a diagram as a {\em zig zag}.
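The induced maps ${\mathcal B}_C(\iota)$ and the zig zag they assemble into can be illustrated in a few lines of code. The sketch below is an informal illustration only, not part of the paper: it uses single linkage at a fixed threshold (one I-functorial scheme, as noted above) on hypothetical one-dimensional samples, where every inclusion of subsets of the line is an embedding, and then extracts the compatible chains of clusters spanning all samples.

```python
from itertools import combinations

def single_linkage_blocks(points, dist, eps):
    """Single linkage at a fixed threshold eps (one I-functorial scheme):
    blocks are connected components of the graph joining pairs at distance <= eps."""
    parent = {p: p for p in points}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for a, b in combinations(points, 2):
        if dist(a, b) <= eps:
            parent[find(a)] = find(b)
    groups = {}
    for p in points:
        groups.setdefault(find(p), set()).add(p)
    return [frozenset(g) for g in groups.values()]

def induced_map(blocks_S, blocks_T):
    """For an inclusion S <= T, I-functoriality gives B_C(S) -> B_C(T):
    each block of S is contained in exactly one block of T."""
    return {bS: next(bT for bT in blocks_T if bS <= bT) for bS in blocks_S}

# Hypothetical 1-dimensional samples; inclusions of subsets of R are embeddings.
dist = lambda a, b: abs(a - b)
eps = 1.0
samples = [[0.0, 0.5, 10.0], [0.2, 0.7, 10.3], [0.4, 10.1]]

# Clusters of each sample, and of each union of consecutive samples.
B = [single_linkage_blocks(set(s), dist, eps) for s in samples]
BU = [single_linkage_blocks(set(samples[v]) | set(samples[v + 1]), dist, eps)
      for v in range(len(samples) - 1)]

# x_v and x_{v+1} are compatible when their images in B(S_v u S_{v+1}) agree;
# the chains below are the compatible families spanning every sample.
chains = [[x] for x in B[0]]
for v in range(len(samples) - 1):
    up = induced_map(B[v], BU[v])
    down = induced_map(B[v + 1], BU[v])
    chains = [c + [y] for c in chains for y in B[v + 1] if up[c[-1]] == down[y]]
```

On this toy data two chains survive across all three samples, matching the two visible clusters near $0$ and near $10$.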
In an intuitive sense, this diagram now carries information about the stability or the significance of the clustering. The informal idea is that sequences of the form $\{ x_{\nu} \}_{\nu = s}^{t}$ (formed by consecutive elements), with $x_{\nu} \in {\mathcal B}_C(S_{\nu})$ and with ${\mathcal B}_C(i^+_{\nu})(x_{\nu}) ={\mathcal B}_C(i^-_{\nu +1})(x_{\nu + 1})$, should describe small scale pictures of a clustering of $\Bbb{X}$, where $i^+_{\nu}: S_{\nu} \hookrightarrow S_{\nu} \cup S_{\nu +1 }$ and $i^-_{\nu +1 }: S_{\nu +1 } \hookrightarrow S_{\nu} \cup S_{\nu +1 }$ are the inclusions. Informally, the idea is that ``compatible families" of clusterings of the samples $S_{\nu}$ should correspond to clusterings of the entire set $\Bbb{X}$. Of course, the length of the sequence ($t-s+1$) matters: a single pair of compatible clusters is not as significant as a long sequence. The problem with this idea as stated is that it is very hard to make the definitions of such sequences precise, and to describe them. Unlike the case of ordinary persistent sets, where dendrograms provide a straightforward visualization of all such structure, we believe that in the case of zig zags of sets no such simple representation is possible. However, there turns out to be (see below) a readily computable analogue of the \emph{persistence barcode} \cite{ghrist,persistence}. We now see how this works. We note first that this situation has certain things in common with dendrograms. Rooted trees can be viewed as diagrams of sets of the form $$ X_0 \stackrel{f_0}{\rightarrow} X_1 \stackrel{f_1}{\rightarrow} \cdots X_{n-1} \stackrel{f_{n-1}}{\rightarrow} X_{n} \rightarrow \cdots$$ for which there is an integer $N$ so that $X_k$ consists of one element for all $k \geq N$. The smallest such $N$ will be called the depth of the tree, $d$.
One constructs a tree from such a diagram by forming the disjoint union $\coprod_{i=0}^{d} X_i \times [0,1]$, and then forming the quotient by the equivalence relation generated by the equivalences $x \times 1 \simeq f_s(x) \times 0$ for all $x \in X_s$ and $s < d$. The set $X_l$ will now correspond to the nodes of depth $d-l$ in the tree. The tree turns out to be a useful representation of the structure of the sets of clusters as they vary with a threshold parameter. Given instead a zig zag diagram as above, it is again possible to construct a graph which represents the data, but it is harder to make useful sense of it, since it is a fairly general graph. Nonetheless, it turns out that it is possible to obtain a useful partial description using algebraic techniques. One begins with a field $k$ (typically $\Bbb{F}_2$, the field with two elements), and constructs for each of the sets ${\mathcal B}(S_i )$ and ${\mathcal B}(S_{i} \cup S_{i +1})$ in the zig zag the corresponding vector spaces $k[{\mathcal B}(S_i )]$ and $k[{\mathcal B}(S_{i} \cup S_{i +1})]$, i.e. vector spaces with the given sets as bases. The zig zag diagram now gives rise to a diagram of vector spaces and linear transformations of the same shape. It turns out that there is an algebraic classification of such diagrams up to isomorphism. To describe this classification, we will describe every zig zag diagram as a family of vector spaces $\{ V_i \}_{i }$, equipped with linear transformations $\lambda _i : V_{2i} \rightarrow V_{2i + 1}$ and $\mu _i : V_{2i} \rightarrow V_{2i -1 }$. Given integers $a \leq b$, we denote by $Z[a,b]$ the zig zag diagram for which $V_i = k$ for all $a \leq i \leq b$, and $V_i = \{ 0 \} $ for $i \notin [a,b]$, and for which every possible non-zero linear transformation is equal to the identity.
For example, $Z[3,6]$ is the diagram $$ \cdots \{ 0 \} \rightarrow V_3 \stackrel{id}{\leftarrow} V_4 \stackrel{id}{\rightarrow} V_5 \stackrel{id}{\leftarrow} V_6 \rightarrow \{ 0 \} \cdots $$ where $V_3=V_4=V_5=V_6=k$. Note that these diagrams are parametrized by closed intervals with integer endpoints. We now have the following theorem of Gabriel (see \cite{gabriel}). \begin{Theorem} Every zig zag diagram is isomorphic to a direct sum of diagrams of the form $Z[a_i, b_i]$, and the decomposition is unique up to reordering of the summands. \end{Theorem} \section{Discussion} \label{various} We have presented the ideas of functoriality and persistence as useful organizing principles for clustering algorithms. We have made particular choices of category structures on the collection of finite metric spaces, as well as for the notion of multiscale/multiresolution sets. One can imagine different notions of morphisms of metric spaces and of persistent sets. For example, the idea of {\em multidimensional persistence} (see \cite{multid}) could provide methods which, in addition to the parameter $r$, track density as estimated by some estimator, giving a more informative picture of the dataset. It also appears likely that from the point of view described here, it will in many cases be possible, given a collection of constraints on a clustering functor, to determine the {\em universal} one satisfying the constraints. One could therefore use sets of constraints as the definition of clustering functors. We believe that the conceptual framework presented here can be a useful tool in reasoning about clustering algorithms. We have also shown that clustering methods which have some degree of functoriality admit the possibility of a certain kind of qualitative geometric analysis of datasets which can be quite valuable.
The general idea that the morphisms between mathematical objects (together with the notion of functoriality) are critical in many situations is well-established in many areas of mathematics, and we would argue that it is valuable in this statistical situation as well. We have also discussed how to obtain quantitative stability, consistency and convergence results using a metric space representation of the output of clustering algorithms. We believe these tools can contribute to the understanding of theoretical questions about clustering as well. Finally, we would like to comment on the fact that the functoriality ideas and the metric based study complement each other, in the sense that using functoriality one can first reason about the global stability or rigidity of methods in order to identify a class of them that is sensible, and then, by applying metric tools, one can understand their behaviour/convergence as, say, the number of samples goes to infinity, or quantify the error in approximating the underlying reality when only finitely many samples are used. \end{document}
Our line of 100% solids tank linings is designed to maximize corrosion and abrasion protection while reducing application costs and extending the service life of assets. SPC’s single-coat, novolac epoxy linings are self-priming, thick-film systems designed to conform to irregular shapes while maintaining excellent edge retention. These single-coat systems reduce man-hours by eliminating the need for multiple coats, which in turn prevents potential intercoat adhesion concerns. The quick-curing properties of our linings allow minimal downtime and significant savings on the additional costs associated with controlling environmental conditions.
\begin{document} \begin{center} {\Large \bf On Colorings of graph fractional powers}\\ \vspace*{0.5cm} {\bf Moharram N. Iradmusa}\\ {\it Department of Mathematical Sciences}\\ {\it Shahid Beheshti University, G.C.}\\ {\it P.O. Box {\rm 19839-63113}, Tehran, Iran}\\ {\it m\[email protected]}\\ \vspace*{0.5cm} \emph{Dedicated to Professor Cheryl Praeger}\\ \emph{on her 60th birthday}\\ \end{center} \begin{abstract} \noindent For any $k\in \mathbb{N}$, the $k-$subdivision of a graph $G$ is a simple graph $G^{\frac{1}{k}}$, which is constructed by replacing each edge of $G$ with a path of length $k$. In this paper we introduce the $m$th power of the $n-$subdivision of $G$, as a fractional power of $G$, denoted by $G^{\frac{m}{n}}$. In this regard, we investigate the chromatic number and clique number of fractional powers of graphs. Also, we conjecture that $\chi(G^{\frac{m}{n}})=\omega(G^\frac{m}{n})$ provided that $G$ is a connected graph with $\Delta(G)\geq3$ and $\frac{m}{n}<1$. It is also shown that this conjecture is true in some special cases. \begin{itemize} \item[]{{\footnotesize {\bf Key words:}\ Chromatic number, Subdivision of a graph, Power of a graph.}} \item[]{ {\footnotesize {\bf Subject classification: 05C} .}} \end{itemize} \end{abstract} \section{Introduction and Preliminaries} In this paper we only consider simple graphs which are finite, undirected, with no loops or multiple edges. We mention some of the definitions which are referred to throughout the paper. As usual, we denote by $\Delta(G)$, or simply by $\Delta$, the maximum degree of the vertices of the graph $G$, and by $\omega(G)$ the maximum size of a clique of $G$, where a clique of $G$ is a set of mutually adjacent vertices. In addition $N_G(v)$, called the neighborhood of $v$ in $G$, denotes the set of vertices of $G$ which are adjacent to the vertex $v$ of $G$. For other necessary definitions and notation we refer the reader to the textbook [4].\\ Let $G$ be a graph and $k$ be a positive integer.
The \emph{$k-$power of $G$}, denoted by $G^k$, is defined on the vertex set $V(G)$ by connecting any two distinct vertices $x$ and $y$ with distance at most $k$ [1,8]. In other words, $E(G^k)=\{xy:1\leq d_G(x,y)\leq k\}$. Also the \emph{$k-$subdivision of $G$}, denoted by $G^\frac{1}{k}$, is constructed by replacing each edge $ij$ of $G$ with a path of length $k$, say $P_{ij}$. These $k-$paths are called \emph{hyperedges} and any new vertex is called an \emph{internal vertex}, or briefly an \emph{i-vertex}, and is denoted by $i(l)j$ if it belongs to the hyperedge $P_{ij}$ and has distance $l$ from the vertex $i$, where $l\in\{1,2,\ldots,k-1\}$. Note that $i(l)j=j(k-l)i$, $i(0)j\mbox {$\ \stackrel{\rm def}{=} \ $} i$ and $i(k)j\mbox {$\ \stackrel{\rm def}{=} \ $} j$. Also any vertex $i=i(0)j$ of $G^\frac{1}{k}$ is called a \emph{terminal vertex}, or briefly a \emph{t-vertex}.\\ Note that for $k=1$, we have $G^\frac{1}{1}=G^1=G$, and if the graph $G$ has $v$ vertices and $e$ edges, then the graph $G^\frac{1}{k}$ has $v+(k-1)e$ vertices and $ke$ edges.\\ Now we can define the fractional power of a graph as follows. \begin{defin} {Let $G$ be a graph and $m,n\in \mathbb{N}$. The graph $G^{\frac{m}{n}}$ is defined by the $m-$power of the $n-$subdivision of $G$. In other words $G^{\frac{m}{n}}\mbox {$\ \stackrel{\rm def}{=} \ $} (G^{\frac{1}{n}})^m$. } \end{defin} Note that the graphs $(G^{\frac{1}{n}})^m$ and $(G^{m})^{\frac{1}{n}}$ are different graphs and in this paper we only consider the graphs $(G^{\frac{1}{n}})^m$ for all positive integers $m$ and $n$. Furthermore, we assume that $V(G^{\frac{m}{n}})=V(G^{\frac{1}{n}})$.\\ As is well known, for any graph $G$ we have $\chi(G)\leq\chi(G^2)\leq\chi(G^3)\leq\ldots\leq\chi(G^d)=|V(G)|$, where $d$ is the diameter of $G$. In other words, the chromatic number of a graph $G$ increases when we replace $G$ by its powers.
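The constructions above are easy to implement. The following Python sketch is an illustration only (the tuple naming of i-vertices is an ad hoc convention, not the paper's notation): it builds the adjacency of $G^{\frac{1}{n}}$ by replacing each edge with a path, and then of $G^{\frac{m}{n}}$ by a breadth-first search truncated at depth $m$.

```python
from collections import deque

def subdivision(edges, n):
    """Adjacency of G^{1/n}: replace each edge ij by a path of length n.
    The i-vertices are named (i, l, j) for 0 < l < n, with endpoints ordered
    so each hyperedge gets a single consistent naming."""
    adj = {}
    def link(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for i, j in edges:
        a, b = min(i, j), max(i, j)
        path = [a] + [(a, l, b) for l in range(1, n)] + [b]
        for u, v in zip(path, path[1:]):
            link(u, v)
    return adj

def graph_power(adj, m):
    """Adjacency of the m-th power: join each vertex to everything within
    distance m, found by breadth-first search truncated at depth m."""
    padj = {}
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            if dist[u] < m:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
        padj[s] = set(dist) - {s}
    return padj

def fractional_power(edges, m, n):
    """G^{m/n} = (G^{1/n})^m, as defined above."""
    return graph_power(subdivision(edges, n), m)
```

For example, $C_3^{\frac{2}{3}}=C_9^2$ comes out $4$-regular, and the vertex and edge counts of $G^{\frac{1}{k}}$ agree with the formulas $v+(k-1)e$ and $ke$ above.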
The problem of coloring of graph powers, especially planar graph powers, is studied in the literature (see for example [1, 2, $5-11$]).\\ On the other hand, it is easy to verify that by replacing a graph by its subdivisions, the chromatic number decreases. In other words, $\chi(G)\geq\chi(G^\frac{1}{k})$ for any $k\in \mathbb{N}$. Now the following question arises naturally.\\ \textbf{Question.} \emph{What happens to the chromatic number when we consider a fractional power of a graph, especially a power less than (or greater than) one?}\\ In this paper we answer this question for powers less than one and find the chromatic number of $G^{\frac{m}{n}}$ for rational numbers $\frac{m}{n}<1$.\\ Another motivation for us to define the fractional power of a graph is the \emph{Total Coloring Conjecture}. Vizing (1964) and, independently, Behzad (1965) conjectured that the total chromatic number of a simple graph $G$ never exceeds $\Delta+2$ (and thus $\Delta+1\leq\chi''(G)\leq\Delta+2$) [3,12]. By virtue of Definition 1, the total chromatic number of a simple graph $G$ is equal to $\chi(G^{\frac{2}{2}})$, and $\omega(G^{\frac{2}{2}})=\Delta+1$ when $\Delta\geq2$. Thus, the Total Coloring Conjecture can be reformulated as follows. \\ \emph{\textbf{Total Coloring Conjecture.}} \emph{Let $G$ be a simple graph with $\Delta(G)\geq2$. Then $\chi(G^{\frac{2}{2}})\leq\omega(G^{\frac{2}{2}})+1$.} \\ In the next section, we first calculate the clique number of $G^{\frac{m}{n}}$ when $\frac{m}{n}<1$. Next, the chromatic number of $G^{\frac{m}{n}}$ is calculated when $\Delta(G)=2$. Later, we show that $\chi(G^{\frac{2}{n}})=\omega(G^{\frac{2}{n}})$ for any positive integer $n\geq 3$ when $\Delta(G)\geq3$.
Finally we show that $\chi(G^{\frac{m}{m+1}})=\omega(G^{\frac{m}{m+1}})$ when $G$ is a connected graph with $\Delta(G)\geq3$ and $m\in \mathbb{N}$, which leads us to the following conjecture.\\ \emph{\noindent\textbf{Conjecture A.} Let $G$ be a connected graph with $\Delta(G)\geq3$, $n,m\in \mathbb{N}$ and $1<m<n$. Then \[\chi(G^{\frac{m}{n}})=\omega(G^\frac{m}{n}).\]} \section{Main Results} Here, we find the clique number of $G^{\frac{m}{n}}$ when $\frac{m}{n}<1$. \begin{thm} {Let $G$ be a graph, $n,m\in \mathbb{N}$ and $m<n$. Then \[\omega(G^{\frac{m}{n}})=\left\{\begin{array}{ll}m+1&\Delta(G)=1\\\frac{m}{2} \Delta(G)+1&\Delta(G)\geq2,m\equiv 0\medspace(mod\thinspace2)\\\frac{m-1}{2}\Delta(G)+2&\Delta(G)\geq2,m\equiv 1\medspace(mod\thinspace2).\end{array}\right.\] } \end{thm} \begin{proof} {We consider three cases:\\ \emph{Case 1.} $\Delta(G)=1$\\ In this case, each connected component of $G$ is $K_2$ or $K_1$. Therefore, $\omega(G^{\frac{m}{n}})=\omega(K_2^{\frac{m}{n}})=\omega(P_{n+1}^m)$. It is clear that $\omega(P_{n+1}^m)=m+1$.\\ \emph{Case 2.} $\Delta(G)\geq2$, $m\equiv 0\medspace(mod\thinspace2)$\\ Choose a t-vertex $x$ of $G^{\frac{m}{n}}$ such that $d_G(x)=\Delta(G)$. Consider the set $S=\{y:d_{G^{\frac{m}{n}}}(y,x)\leq \frac{m}{2}\}$. Obviously, this set is a clique with $\frac{m}{2} \Delta(G)+1$ vertices. Now, we show that any clique of $G^{\frac{m}{n}}$ has at most $\frac{m}{2} \Delta(G)+1$ vertices. Let $C$ be a maximal clique of $G^{\frac{m}{n}}$. The distance between any two t-vertices in $G^{\frac{m}{n}}$ is at least two. Therefore, $C$ contains at most one t-vertex. First, suppose that $C$ contains no t-vertex. Thus, all vertices of $C$ belong to a single hyperedge and hence $|C|\leq m+1\leq \frac{m}{2} \Delta(G)+1$. Now, suppose that $C$ contains a t-vertex $x$. Let $y$ have the greatest distance from $x$ among the other vertices of $C$.
If $d_{G^{\frac{m}{n}}}(y,x)=d>\frac{m}{2}$, then $C$ has at most $m-d$ vertices in common with any hyperedge that is adjacent to $x$ and does not contain $y$. Therefore, \[|C|\leq(d(x)-1)(m-d)+d+1\leq(\Delta(G)-2)(m-d)+m+1\] \[\leq(\Delta(G)-2)\frac{m}{2}+m+1=\Delta(G)\frac{m}{2}+1.\] In the other case, if $d\leq \frac{m}{2}$, then $C$ has at most $\frac{m}{2}$ vertices in common with any hyperedge that is adjacent to $x$. So $|C|\leq \frac{m}{2}\Delta(G)+1$.\\ \emph{Case 3.} $\Delta(G)\geq2$, $m\equiv 1\medspace(mod\thinspace2)$\\ Choose a t-vertex $x$ of $G^{\frac{m}{n}}$ such that $d_G(x)=\Delta(G)$. Consider the sets $S_1=\{y:d_{G^{\frac{m}{n}}}(y,x)\leq \frac{m-1}{2}\}$ and $S_2=\{z:d_{G^{\frac{m}{n}}}(z,x)=\frac{m+1}{2}\}$. Obviously, each element of $S_2$ together with the vertices of $S_1$ forms a clique of size $\frac{m-1}{2}\Delta(G)+2$. Now, similar to Case 2, one can prove that any clique of $G^{\frac{m}{n}}$ has at most $\frac{m-1}{2} \Delta(G)+2$ vertices.} \end{proof} F. Kramer and H. Kramer gave in [9] the following characterization of the graphs for which $\chi(G^m)=m+1$. \begin{alphthm} {\emph{[9]} Let $G=(V,E)$ be a simple and connected graph and $m\in \mathbb{N}$. We have $\chi(G^m)=m+1$ if and only if the graph $G$ satisfies one of the following conditions:\\ $(a)$ $|V|=m+1$,\\ $(b)$ $G$ is a path of length greater than $m$,\\ $(c)$ $G$ is a cycle of length a multiple of $m+1$.} \end{alphthm} To find the chromatic number of $G^{\frac{m}{n}}$ in the case $\frac{m}{n}<1$, we only consider connected graphs, in the two cases $\Delta(G)=2$ and $\Delta(G)>2$. Theorem A can be considered as a consequence of Theorems 1 and 2. \begin{thm} {Let $m,k\in \mathbb{N}$ and $k\geq3$. Then\\ $(a)$ $\chi(C_k^{m})=\left\{\begin{array}{ll}k&m\geq\lfloor\frac{k}{2}\rfloor\\ \lceil\frac{k}{\lfloor\frac{k}{m+1}\rfloor}\rceil&m<\lfloor\frac{k}{2}\rfloor,\end{array}\right.$\\ $(b)$ $\chi(P_k^{m})=min\{m+1,k\}$. } \end{thm} \begin{proof} {First, we prove part $(a)$.
It is easy to see that $\chi(C_k^{m})=\chi(K_k)=k$ provided that $m\geq\lfloor\frac{k}{2}\rfloor$. Thus, suppose that $m<\lfloor\frac{k}{2}\rfloor$. It is easy to see that $\omega(C_k^{m})=m+1$; in fact, any set of $m+1$ consecutive vertices of $C_k^{m}$ is a clique of maximum size. Therefore, $\chi(C_k^{m})\geq m+1$. Let $V(C_k)=\{1,2,\ldots,k\}$ and $E(C_k)=\{\{1,2\},\{2,3\},\ldots,\{k-1,k\},\{k,1\}\}$. If $m+1|k$, then $\lceil\frac{k}{\lfloor\frac{k}{m+1}\rfloor}\rceil=m+1$. Hence, to obtain a proper coloring of $C_k^{m}$ with $m+1$ colors, it suffices to color the vertices $i+j(m+1)$ with the color $i$ when $1\leq i\leq m+1$ and $0\leq j\leq \frac{k}{m+1}-1$. Finally, suppose that $m+1\nmid k$. Because any set of $m+1$ consecutive vertices of $C_k^{m}$ is a clique, any stable set, and in particular any color class of a proper coloring, has at most $\lfloor\frac{k}{m+1}\rfloor$ vertices. Therefore, $\chi(C_k^{m})\lfloor\frac{k}{m+1}\rfloor\geq k$, which yields $\chi(C_k^{m})\geq \lceil\frac{k}{\lfloor\frac{k}{m+1}\rfloor}\rceil$. Now we show that $\lceil\frac{k}{\lfloor\frac{k}{m+1}\rfloor}\rceil$ colors are enough. Let $k=(m+1)q+r$ and $1\leq r<m+1$. Then we have $\lceil\frac{k}{\lfloor\frac{k}{m+1}\rfloor}\rceil=m+1+\lceil\frac{r}{q}\rceil$. Suppose that $r=qq_{1}+r_1$ and $0\leq r_1<q$. In what follows, we partition $V(C_k^{m})$ into $q$ subsets, such that each of these subsets contains $m+1+\lceil\frac{r}{q}\rceil$ or $m+1+\lfloor\frac{r}{q}\rfloor$ consecutive vertices, and then we color the vertices of each subset with the colors $\{1,2,\ldots,\chi\}$ consecutively.\\ Precisely, to obtain a proper coloring of $C_k^{m}$ with $m+1+\lceil\frac{r}{q}\rceil$ colors, we consider two cases.\\ \emph{\textbf{Case 1.}} If $1\leq r_1<q$, it suffices to color the vertices $i+j(m+1+\lceil\frac{r}{q}\rceil)$ with the color $i$ when $1\leq i\leq m+1+\lceil\frac{r}{q}\rceil$ and $0\leq j\leq r_1-1$.
Also, color the vertices $i+j(m+1+\lfloor\frac{r}{q}\rfloor)+r_1(m+1+\lceil\frac{r}{q}\rceil)$ with the color $i$ when $1\leq i\leq m+1+\lfloor\frac{r}{q}\rfloor$ and $0\leq j\leq q-r_1-1$.\\ \emph{\textbf{Case 2.}} If $r_1=0$, then $\lfloor\frac{r}{q}\rfloor=\lceil\frac{r}{q}\rceil$ and it suffices to color the vertices $i+j(m+1+\lceil\frac{r}{q}\rceil)$ with the color $i$ when $1\leq i\leq m+1+\lceil\frac{r}{q}\rceil$ and $0\leq j\leq q-1$.\\ Now we prove part $(b)$. We know that $d_{P_k}(x,y)\leq k-1$. So if $m+1\geq k$, then $P_k^{m}=K_k$ and $\chi(P_k^{m})=k$. Now suppose that $m+1<k$. We show that $m+1$ colors are enough. Let $l=s(m+1)>k+m+1$ and consider the graph $C_l^m$. In view of the first part, this graph has a proper coloring with $m+1$ colors. Because any $k$ consecutive vertices of $C_l^m$ induce a subgraph isomorphic to $P_k^{m}$, any proper coloring of $C_l^m$ gives us a proper coloring of $P_k^{m}$.} \end{proof} The following corollary is a direct consequence of Theorem 2. \begin{cor} {Let $m,n,k\in \mathbb{N}$ and $k\geq3$. Then\\ $(a)$ $\chi(C_k^{\frac{m}{n}})=\left\{\begin{array}{ll}nk&m\geq\frac{nk}{2}\\ \lceil\frac{nk}{\lfloor\frac{nk}{m+1}\rfloor}\rceil&m<\frac{nk}{2},\end{array}\right.$\\ $(b)$ $\chi(P_k^{\frac{m}{n}})=min\{m+1,(k-1)n+1\}$. } \end{cor} \begin{proof} { Note that $C_k^{\frac{m}{n}}=C_{kn}^m$ and $P_k^{\frac{m}{n}}=P_{(k-1)n+1}^m$. Hence, in view of Theorem 2, the corollary follows easily.} \end{proof} In Theorem 2 and Corollary 1 we considered graphs with maximum degree 2. Hereafter, we focus on graphs with maximum degree greater than 2. \begin{rem} {Let $G$ be a connected graph and $n$ be a positive integer greater than 1. Then one can easily see that at most three colors are enough for a proper coloring of $G^{\frac{1}{n}}$.
Precisely, \[\chi(G^{\frac{1}{n}})=\left\{\begin{array}{cl}3&n\equiv 1(mod\thinspace2)\ \& \ \chi(G)\geq3\\2&otherwise.\end{array}\right.\] } \end{rem} In Lemmas $1-4$, we develop the steps of the proof of Theorem 3, which states that $\chi(G^{\frac{2}{n}})=\omega(G^{\frac{2}{n}})$ for any positive integer $n$ greater than 2. \begin{lem} {Let $G$ be a graph, $n,m\in \mathbb{N}$ and $m<n$. If $\chi(G^{\frac{m}{n}})=\omega(G^{\frac{m}{n}})$, then $\chi(G^{\frac{m}{n+m+1}})=\omega(G^{\frac{m}{n+m+1}})$. } \end{lem} \begin{proof} {Note that $\omega(G^{\frac{m}{n}})=\omega(G^{\frac{m}{n+m+1}})$. Hence, it is sufficient to show that $\chi(G^{\frac{m}{n+m+1}})\leq\chi(G^{\frac{m}{n}})$. Let $f:V(G^{\frac{m}{n}})\rightarrow [\chi]$ be a proper coloring of $G^{\frac{m}{n}}$ with $\chi=\chi(G^{\frac{m}{n}})$ colors. For any $i<j$, choose the edge incident to the vertex $i$ in the hyperedge $P_{ij}$. Then replace each of these edges by an $(m+2)-$path, to make $G^{\frac{m}{n+m+1}}$. Now we color the new vertices on the hyperedge $P_{ij}$ by the set of colors $S_{ij}=\{f(v)|v\in P_{ij}\}$, such that the resulting coloring ($f':V(G^{\frac{m}{n+m+1}})\rightarrow [\chi]$) is a proper coloring of $G^{\frac{m}{n+m+1}}$. Precisely, suppose that the edge between $i$ and $i(1)j$ is replaced by the $(m+2)-$path $(i,i_1,i_2,\ldots,i_{m+1},i(1)j)$. Now use the color of $i(k)j$ to color the new vertex $i_k$ when $1\leq k\leq m+1$. We check that this coloring is proper. Consider $f'(N_m(i_k)\setminus \{i_k\})$, that is, the colors of all vertices with distance $d\in\{1,2,\ldots,m\}$ from $i_k$. Because $f'(i_l)=f(i(l)j)$, we have $f'(N_m(i_k)\setminus \{i_k\})\subseteq f(N_m(i(k)j)\setminus \{i(k)j\})$. Hence the color of $i_k$ is different from the color of any other vertex in $N_m(i_k)$.
In addition, for any two vertices $x$ and $y$ of $G^{\frac{m}{n}}$ with $f(x)=f(y)$, we have \[d_{G^{\frac{m}{n+m+1}}}(x,y)\geq d_{G^{\frac{m}{n}}}(x,y)>m.\] Hence, the coloring $f'$ is a proper coloring of $G^{\frac{m}{n+m+1}}$. } \end{proof} \begin{lem} {Let $G$ be a connected graph of order $n$. Then there is a labeling $f:V(G)\rightarrow[n]$ such that\\ $(a)$ for any two distinct vertices $i$ and $j$ of $G$, $f(i)\neq f(j)$, and\\ $(b)$ for any vertex $x$, if $f(x)\neq n$, then $f(x)<f(y)$ for some vertex $y\in N(x)$.} \end{lem} \begin{proof} {A labeling $f$ of $V(G)$ is called a \emph{coloring order} of $G$ if it satisfies the two properties $(a)$ and $(b)$. The proof is by induction on $n$. Obviously, the lemma is correct for $n=1$. Assume that there is a coloring order for any connected graph of order $n-1$. Because $G$ is connected, there is a vertex $x$ such that $G[V(G)\setminus \{x\}]$ is connected. Then $G[V(G)\setminus \{x\}]$ has a coloring order $f:V(G)\setminus \{x\}\rightarrow[n-1]$. Now we can define a coloring order for $G$ as follows:\\ \[f'(i)=\left\{\begin{array}{ll}1&i=x\\f(i)+1&i\neq x.\end{array}\right.\] Clearly, $f'$ is a coloring order for $G$. } \end{proof} \begin{lem} {Let $G$ be a connected graph with $\Delta(G)\geq3$. Then $\chi(G^{\frac{2}{3}})=\Delta(G)+1=\omega(G^{\frac{2}{3}})$.} \end{lem} \begin{proof} {It is well known that any connected graph with maximum degree $\Delta$ is a subgraph of a connected $\Delta-$regular graph. Hence, it is sufficient to prove the lemma for a connected $\Delta-$regular graph $G$ of order $n$.\\ Color all t-vertices with the color 0. We show that the colors of $C=\{1,2,\ldots,\Delta\}$ are enough to color the i-vertices. Suppose that $f$ is a coloring order of $G$ and $f^{-1}(i)$ is denoted by $f_i$.
We color the i-vertices of $N(f_1)$, $N(f_2)$, $\ldots$ and $N(f_n)$ consecutively in $n$ steps:\\ \emph{Step 1.} We color the i-vertices of $N(f_1)$ with the colors of $C$ arbitrarily.\\ \emph{Step 2.} If the two vertices $f_1$ and $f_2$ are not adjacent in $G$, then we can color the i-vertices of $N(f_2)$ arbitrarily. If $f_1$ and $f_2$ are adjacent in $G$, then first we color the i-vertex $f_2(1)f_1$ with a color $c_1$ which is different from the color of $f_1(1)f_2$. Then we color the other i-vertices of $N(f_2)$ with the colors of $C\setminus\{c_1\}$ arbitrarily.\\ \emph{Step $k$ $(3\leq k\leq n-1)$.} Taking into account the vertices colored in the previous steps, there are $\Delta$ or $\Delta-1$ admissible colors for each vertex of $N(f_k)$. Now by using Hall's Theorem [4, page 419], we can find a perfect matching between the vertices of $N(f_k)$ and the colors $C=\{1,2,\ldots,\Delta\}$, which gives a coloring of the vertices of $N(f_k)$.\\ \emph{Step $n$.} Let $N_G(f_n)=\{f_{i_1},f_{i_2},\ldots,f_{i_{\Delta}}\}$. Suppose that $c_j$ is the color of $f_{i_j}(1)f_n$ for any $1\leq j\leq \Delta$. If $|\{c_j|1\leq j\leq \Delta\}|=k\geq2$, then we select $k$ vertices of \[\{f_{i_1}(1)f_n,f_{i_2}(1)f_n,\ldots,f_{i_{\Delta}}(1)f_n\}\] with different colors and color their neighbors in $N(f_n)$ with a derangement of those $k$ colors. After that, we can color the remaining i-vertices of $N(f_n)$ with the colors of $C\setminus\{c_1,c_2,\ldots,c_{\Delta}\}$.\\ Now let $k=1$; in other words, assume that $c_1=c_2=\ldots=c_{\Delta}=a$.\\ To color the i-vertices of $N(f_n)$ in this case, we need to change the color of one i-vertex of $F=\{f_{i_1}(1)f_n,f_{i_2}(1)f_n,\ldots,f_{i_{\Delta}}(1)f_n\}$.\\ Fix a color $b\in C\setminus\{a\}$, consider the subgraph of $G^{\frac{2}{3}}$ induced by the vertices of colors $a$ and $b$, and let $H$ be the connected component of this subgraph containing the vertex $f_{i_1}(1)f_n$. Clearly, we have $1\leq d_H(x)\leq 2$ for each $x\in V(H)$ and $d_H(f_{i_1}(1)f_n)=1$.
Therefore, $H$ is a path, and by interchanging the colors $a$ and $b$ along $H$ we change $k$ to 2; then we can color the i-vertices of $N(f_n)$ as above. } \end{proof} \begin{lem} {Let $G$ be a connected graph with $\Delta(G)\geq3$. Then\\ $(a)$ $\chi(G^{\frac{2}{4}})=\Delta(G)+1=\omega(G^{\frac{2}{4}})$,\\ $(b)$ $\chi(G^{\frac{2}{5}})=\Delta(G)+1=\omega(G^{\frac{2}{5}})$.} \end{lem} \begin{proof} {First, we prove part $(a)$. We use the proper coloring of $G^{\frac{2}{3}}$ which is defined in Lemma 3. Suppose that $f:V(G^{\frac{2}{3}})\rightarrow C$ is a proper coloring of $G^{\frac{2}{3}}$ where $C=\{0,1,2,\ldots,\Delta(G)\}$. At most three colors appear on each hyperedge of $G^{\frac{2}{3}}$. Because $\Delta(G)+1\geq 4$, for each hyperedge $P_{ij}$ there is at least one color, denoted by $c_{ij}$, which is not used in the coloring of the vertices of $P_{ij}$. Now on each hyperedge $P_{ij}$, replace the edge between $i(1)j$ and $j(1)i$ by a $2-$path and color the new vertex by $c_{ij}$. In other words, we have a proper coloring $f'$ of $G^{\frac{2}{4}}$ as follows:\\ \[f'(x)=\left\{\begin{array}{cl}c_{ij}&x=i(2)j=j(2)i\\f(x)&otherwise.\end{array}\right.\] One can easily check that $f'$ is a proper coloring of $G^{\frac{2}{4}}$.\\ Now, we prove part $(b)$. Let $f:E(G)\rightarrow C$ be a proper edge coloring of $G$, where $C=\{0,1,2,\ldots,\Delta(G)\}$; such a coloring exists by Vizing's theorem. First, we color the i-vertices $i(1)j$ and $j(1)i$ with the color $f(ij)$. Then we color all t-vertices of $G^{\frac{2}{5}}$: color the t-vertex $i$ with a color of $C\setminus\{f(ij)\,|\,d_G(i,j)=1\}$, which is non-empty. Finally, we color the two remaining i-vertices of each hyperedge. There are two cases for each hyperedge $P_{ij}$:\\ \emph{Case 1}. The two vertices $i$ and $j$ have the same color.\\ In this case, only two colors are used on the hyperedge $P_{ij}$.
Because $\Delta(G)+1\geq 4$, there are at least two colors which do not appear on the vertices of $P_{ij}$, and we can color the two uncolored i-vertices of $P_{ij}$ with these remaining colors.\\ \emph{Case 2.} The two vertices $i$ and $j$ have different colors.\\ In this case, we color the i-vertex $i(2)j$ with the color of $j$ and the i-vertex $j(2)i$ with the color of $i$. Note that $d_{G^\frac{2}{5}}(i(2)j,j)=2=d_{G^\frac{2}{5}}(j(2)i,i)$ and $d_{G^\frac{2}{5}}(i(2)j,i)=1=d_{G^\frac{2}{5}}(j(2)i,j)$. One can easily check that this coloring is a proper coloring of $G^{\frac{2}{5}}$.} \end{proof} Now we can prove the following theorem inductively by applying Theorem 1 and Lemmas 1, 3 and 4. \begin{thm} {Let $G$ be a connected graph with $\Delta(G)\geq3$ and $n$ be a positive integer greater than 2. Then $\chi(G^{\frac{2}{n}})=\Delta(G)+1=\omega(G^{\frac{2}{n}})$. } \end{thm} Here, we show that Conjecture A is true for some rational numbers $\frac{m}{n}<1$. \begin{thm} {Let $G$ be a connected graph with $\Delta(G)\geq3$ and $m\in \mathbb{N}$. Then $\chi(G^{\frac{m}{m+1}})=\omega(G^{\frac{m}{m+1}})$. } \end{thm} \begin{proof} {We prove this theorem by induction on $m$. Remark 1 and Lemma 3 show that the assertion holds for $m=1,2$. We prove that if the assertion holds for $m=2k\in \mathbb{N}$, then it is correct for $m=2k+1$ and $m=2k+2$. Suppose that $\chi(G^{\frac{2k}{2k+1}})=\omega(G^{\frac{2k}{2k+1}})=k\Delta(G)+1$. To prove $\chi(G^{\frac{2k+1}{2k+2}})=k\Delta(G)+2$, we follow the same lines as in the proof of Lemma 4. Assume that $f:V(G^{\frac{2k}{2k+1}})\rightarrow C$ is a proper coloring of $G^{\frac{2k}{2k+1}}$ where $C=\{0,1,2,\ldots,k\Delta(G)\}$.
Then we can define a proper coloring $f':V(G^{\frac{2k+1}{2k+2}})\rightarrow C\cup\{k\Delta(G)+1\}$ as follows:\\ \[f'(x)=\left\{\begin{array}{cl}k\Delta(G)+1&x=i(k+1)j\\f(x)&x=i(l)j,\thinspace l\leq k.\end{array}\right.\] In fact, one i-vertex with the color $k\Delta(G)+1$ is added to each hyperedge $P_{ij}$ of $G^{\frac{2k}{2k+1}}$, between the two central i-vertices of $P_{ij}$, to construct $G^{\frac{2k+1}{2k+2}}$. We show that $f'$ is a proper coloring of $G^{\frac{2k+1}{2k+2}}$.\\ Suppose that $x$ and $y$ are two adjacent vertices of $G^{\frac{2k+1}{2k+2}}$. Therefore, $d_{G^{\frac{1}{2k+2}}}(x,y)\leq 2k+1$. It is easy to check that $f'(x)\neq f'(y)$ when $f'(x)=k\Delta(G)+1$ or $f'(y)=k\Delta(G)+1$. Hence, we assume that $f'(x)\neq k\Delta(G)+1\neq f'(y)$. If $d_{G^{\frac{1}{2k+2}}}(x,y)\leq 2k$, then before adding the new i-vertices we must have $d_{G^{\frac{1}{2k+1}}}(x,y)\leq 2k$. Then $x$ and $y$ are adjacent in $G^{\frac{2k}{2k+1}}$ and their colors are different in $f$. Now suppose that $d_{G^{\frac{1}{2k+2}}}(x,y)=2k+1$. Therefore, there is a path of length $2k+1$ between $x$ and $y$ which contains a new i-vertex with the color $k\Delta(G)+1$. Hence, the distance of $x$ to $y$ in $G^{\frac{1}{2k+1}}$ is at most $2k$. Thus, $x$ and $y$ are adjacent in $G^{\frac{2k}{2k+1}}$ and their colors are different. Consequently, $f'$ is a proper coloring of $G^{\frac{2k+1}{2k+2}}$.\\ To prove $\chi(G^{\frac{2k+2}{2k+3}})=(k+1)\Delta(G)+1$, suppose that $f:V(G^{\frac{2k}{2k+1}})\rightarrow C$ is a proper coloring of $G^{\frac{2k}{2k+1}}$ such that $C=\{0,1,2,\ldots,k\Delta(G)\}$ and all t-vertices have the same color 0. To construct $G^{\frac{2k+2}{2k+3}}$ from $G^{\frac{2k}{2k+1}}$, subdivide the central edge $\{i(k)j,j(k)i\}$ of any hyperedge $P_{ij}$ into three edges $\{i(k)j,i[k+1]j\}$, $\{i[k+1]j,i[k+2]j\}$ and $\{i[k+2]j,j(k)i\}$, which contain two new i-vertices $i[k+1]j$ and $i[k+2]j$. Now let $S$ be the set of the new i-vertices and all t-vertices.
One can easily check that $G^{\frac{2k+2}{2k+3}}[S]$ is isomorphic to $G^{\frac{2}{3}}$. Thus, in view of Lemma 3, we have $\chi(G^{\frac{2k+2}{2k+3}}[S])=\Delta(G)+1$. Now let $f':V(G^{\frac{2k+2}{2k+3}}[S])\rightarrow\{0,c_1,c_2,\ldots,c_{\Delta}\}$ be a proper coloring of $G^{\frac{2k+2}{2k+3}}[S]$ such that the color of all t-vertices is 0 and $\{0,c_1,c_2,\ldots,c_{\Delta}\}\cap C=\{0\}$. Now we can define a proper coloring $f''$ of $G^{\frac{2k+2}{2k+3}}$ as follows:\\ \[f''(x)=\left\{\begin{array}{cl}f'(x)&x\in S\\f(x)&x=i(l)j,\thinspace l\leq k.\end{array}\right.\] Suppose that $x$ and $y$ are two adjacent vertices of $G^{\frac{2k+2}{2k+3}}$. If $x,y\in S$, then they are adjacent in $G^{\frac{2k+2}{2k+3}}[S]$. Hence, $f''(x)=f'(x)\neq f'(y)=f''(y)$. If only one of them is in $S$, we obviously have $f''(x)=f'(x)\neq f''(y)$ or $f''(y)=f'(y)\neq f''(x)$. Now suppose that $x,y\in V(G^{\frac{2k+2}{2k+3}})\setminus S$. Therefore, $d_{G^{\frac{1}{2k+3}}}(x,y)\leq 2k+2$. If $d_{G^{\frac{1}{2k+3}}}(x,y)\leq 2k$, then before adding the two central i-vertices to each hyperedge we have $d_{G^{\frac{1}{2k+1}}}(x,y)\leq 2k$. Then $x$ and $y$ are adjacent in $G^{\frac{2k}{2k+1}}$ and their colors are different. Now suppose that $d_{G^{\frac{1}{2k+3}}}(x,y)\geq 2k+1$. Then there is a path of length $2k+1$ or $2k+2$ between $x$ and $y$ which contains two new i-vertices of $S$. Hence, the distance between $x$ and $y$ in $G^{\frac{1}{2k+1}}$ is at most $2k$. Thus, $x$ and $y$ are adjacent in $G^{\frac{2k}{2k+1}}$ and their colors are different.} \end{proof} \begin{cor} {Let $G$ be a connected graph with $\Delta(G)\geq3$ and $k,m\in \mathbb{N}$. Then $\chi(G^{\frac{m}{k(m+1)}})=\omega(G^{\frac{m}{k(m+1)}})$. } \end{cor} \begin{proof} {This corollary follows from Lemma 1 and Theorem 4. 
} \end{proof} \begin{thm} {Let $G$ be a connected graph and $1<m\in \mathbb{N}$.\\ $(a)$ If $m$ is an even integer and $\Delta(G)\geq4$, then for any integer $n\geq 2m+2$ \[\chi(G^{\frac{m}{n}})=\omega(G^\frac{m}{n}).\] $(b)$ If $m$ is an odd integer and $\Delta(G)\geq5$, then for any integer $n\geq 2m+2$ \[\chi(G^{\frac{m}{n}})=\omega(G^\frac{m}{n}).\] } \end{thm} \begin{proof} {The proofs of parts $(a)$ and $(b)$ are similar. Thus, we only prove the first part.\\ The proof is by induction on $n$. Corollary 2 shows that $(a)$ holds for $n=2m+2$. Suppose that $\chi(G^{\frac{m}{n}})=\omega(G^\frac{m}{n})$ for some $n\geq 2m+2$. Let $f:V(G^{\frac{m}{n}})\rightarrow [\chi]$ be a proper coloring of $G^{\frac{m}{n}}$, where $\chi=\chi(G^{\frac{m}{n}})$. We extend this coloring to a proper coloring of $G^{\frac{m}{n+1}}$. On each hyperedge $P_{ij}$, consider $2m$ consecutive i-vertices $i(1)j,i(2)j,\ldots,i(2m)j$ and let $V_{ij}=\{i(1)j,i(2)j,\ldots,i(2m)j\}$. So $|f(V_{ij})|\leq 2m$.\\ We have $\chi(G^{\frac{m}{n}})=\omega(G^\frac{m}{n})=\frac{m}{2}\Delta(G)+1\geq 2m+1$. Thus, for any hyperedge $P_{ij}$, there is a color that is not assigned to the vertices of $V_{ij}$; denote it by $c_{ij}$. Subdivide the edge $\{i(m)j,i(m+1)j\}$ of each hyperedge $P_{ij}$ into two edges and color the new i-vertex with $c_{ij}$. It is easy to check that we obtain a proper coloring of $G^{\frac{m}{n+1}}$.} \end{proof} We can divide Conjecture A into the following conjectures.\\ \emph{\noindent\textbf{Conjecture $A(m)$.} Let $G$ be a connected graph with $\Delta(G)\geq3$ and $m$ be a positive integer greater than 1. Then for any positive integer $n\geq m$, we have $\chi(G^{\frac{m}{n}})=\omega(G^\frac{m}{n})$.}\\ In view of Theorem 3, Conjecture $A(2)$ holds. In addition, by applying Lemma 1, we can show that $A(m)$ holds provided it is correct for every $n\in \{m+1,m+2,\ldots,2m+1\}$. 
Conjectures $A$ and $A(m)$ remain open for every $m\geq3$.\\ \noindent {\bf Acknowledgment}\\ We would like to thank Professor Hossein Hajiabolhassan, M. Alishahi, A. Taherkhani and S. Shaebani for their useful comments. \end{document}
math
2007 Honda Odyssey Fuse Diagram is a good choice for anyone looking for a pleasant reading experience. We hope you enjoy visiting our website. Please read our description and our privacy and policy page. Finally, I get this ebook; thanks to all of this, I can get 2007 Honda Odyssey Fuse Diagram now!
english
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package com.datatorrent.lib.appdata.query.serde; import java.io.IOException; import com.datatorrent.lib.appdata.schemas.Message; /** * This is an interface for a message deserializer. Classes implementing this interface should have a public * no-arg constructor. * @since 3.0.0 */ public interface CustomMessageDeserializer { /** * This method is called to convert a json string into a Message object. * @param json The JSON to deserialize. * @param message The Class object of the type of message to deserialize to. * @param context Any additional contextual information required to deserialize the message. * @return The deserialized message. * @throws IOException */ public abstract Message deserialize(String json, Class<? extends Message> message, Object context) throws IOException; }
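A concrete implementation of this contract might look like the following sketch. The `Message`, `EchoMessage`, and `EchoDeserializer` names here are simplified stand-ins invented for illustration (not the real AppData schema classes); only the method shape and the public no-arg-constructor requirement mirror the interface above. A real deserializer would typically parse the JSON with a library such as Jackson; a naive string scan keeps the sketch self-contained.

```java
import java.io.IOException;

public class DeserializerSketch {

  /** Simplified stand-in for the Message type. */
  public interface Message {
    String getType();
  }

  /** A concrete message type carrying a single payload string. */
  public static class EchoMessage implements Message {
    public final String payload;
    public EchoMessage(String payload) { this.payload = payload; }
    @Override public String getType() { return "echo"; }
  }

  /** Follows the CustomMessageDeserializer contract: public no-arg constructor. */
  public static class EchoDeserializer {
    public EchoDeserializer() {}

    public Message deserialize(String json, Class<? extends Message> clazz, Object context)
        throws IOException {
      // Naive extraction of the "payload" field; a real deserializer would use
      // a JSON parser and honor the requested target class.
      String key = "\"payload\":\"";
      int start = json.indexOf(key);
      if (start < 0) {
        throw new IOException("missing payload field in: " + json);
      }
      start += key.length();
      int end = json.indexOf('"', start);
      return new EchoMessage(json.substring(start, end));
    }
  }

  public static void main(String[] args) throws Exception {
    // A deserializer registry would instantiate the class reflectively via its
    // no-arg constructor -- which is why the contract requires one.
    EchoDeserializer d = EchoDeserializer.class.getDeclaredConstructor().newInstance();
    Message m = d.deserialize("{\"payload\":\"hello\"}", EchoMessage.class, null);
    System.out.println(m.getType() + ":" + ((EchoMessage) m).payload); // prints "echo:hello"
  }
}
```

The reflective instantiation in `main` illustrates why the Javadoc insists on a public no-arg constructor: the framework only knows the implementing class, not how to construct it with arguments.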
code
The Datalogic Gryphon I GBT4100 is available at a low price with an official Datalogic warranty. For information on the Datalogic Gryphon I GBT4100 wireless barcode scanner, contact a Datalogic dealer in Jakarta or Surabaya, or a Datalogic Gryphon I GBT4100 reseller anywhere in Indonesia. The Datalogic Gryphon I GBT4100 Bluetooth cordless barcode scanner offers the richest feature set for general purpose barcode scanning applications. The durable Datalogic Gryphon I GBT4100 Bluetooth cordless barcode scanners are the perfect solution for applications in retail and light industrial environments where Bluetooth communications and mobility are desired for improved productivity. Featuring Bluetooth Wireless Technology, the Gryphon I GBT4100 Bluetooth cordless barcode scanners eliminate the need for cables that limit operator movement and create safety concerns in the workplace. The reader can transmit data to the host through its base station as well as to any commercial or embedded Bluetooth v2.0 compliant device. Datalogic's unique 2-position cradle provides multiple features to meet different functional and reduced-space requirements. When in the "up" position, this imager can be used as a hands-free or presentation-style reader. With the Scan-While-Charging feature, there is never any concern about depleted or dead batteries, ensuring constant up-time for increased productivity. The Datalogic Gryphon I GBT4100 Bluetooth cordless barcode scanner's batch mode capability allows more than 1,200 barcodes to be stored in memory when the operator is out of range to transmit barcode data. This feature, combined with a Lithium-Ion battery rated for 33,000 scans per charge, allows unlimited mobility and reliable data collection when out of range of the base station.
english
Experience the craftsmanship of early 20th century architectural details: hand-laid, Arts and Crafts style mosaic floors, carved-wood banisters and wrought-iron balustrades marked by the flowery emblems of the Art Nouveau style. With bright, updated interiors and natural hardwood floors throughout, these apartments are an ideal space in which to make your home. Large windows gathering light illuminate the interior and provide natural ventilation. I'm interested in 4455 Greenwood Avenue. Please let me know when this listing is available on ApartmentGuide.
english
A selection of images from the Linda Cooke Words and Images collection. English bluebells in Chaddesley Woods in Worcestershire. Soft focus. Shot on transparency film with diffuser on lens.
english
// Copyright (c) Hercules Dev Team, licensed under GNU GPL. // See the LICENSE file // Portions Copyright (c) Athena Dev Teams #ifndef _MAP_SKILL_H_ #define _MAP_SKILL_H_ #include "../common/mmo.h" // MAX_SKILL, struct square #include "../common/db.h" #include "map.h" // struct block_list /** * Declarations **/ struct map_session_data; struct homun_data; struct skill_unit; struct skill_unit_group; struct status_change_entry; struct square; /** * Defines **/ #define MAX_SKILL_DB MAX_SKILL #define MAX_SKILL_PRODUCE_DB 270 #define MAX_PRODUCE_RESOURCE 10 #define MAX_SKILL_ARROW_DB 140 #define MAX_ARROW_RESOURCE 5 #define MAX_SKILL_ABRA_DB 210 #define MAX_SKILL_IMPROVISE_DB 30 #define MAX_SKILL_LEVEL 10 #define MAX_SKILL_UNIT_LAYOUT 45 #define MAX_SQUARE_LAYOUT 5 // 11*11 Placement of a maximum unit #define MAX_SKILL_UNIT_COUNT ((MAX_SQUARE_LAYOUT*2+1)*(MAX_SQUARE_LAYOUT*2+1)) #define MAX_SKILLTIMERSKILL 15 #define MAX_SKILLUNITGROUP 25 #define MAX_SKILL_ITEM_REQUIRE 10 #define MAX_SKILLUNITGROUPTICKSET 25 #define MAX_SKILL_NAME_LENGTH 30 // (Epoque:) To-do: replace this macro with some sort of skill tree check (rather than hard-coded skill names) #define skill_ischangesex(id) ( \ ((id) >= BD_ADAPTATION && (id) <= DC_SERVICEFORYOU) || ((id) >= CG_ARROWVULCAN && (id) <= CG_MARIONETTE) || \ ((id) >= CG_LONGINGFREEDOM && (id) <= CG_TAROTCARD) || ((id) >= WA_SWING_DANCE && (id) <= WM_UNLIMITED_HUMMING_VOICE)) #define MAX_SKILL_SPELLBOOK_DB 17 #define MAX_SKILL_MAGICMUSHROOM_DB 23 //Walk intervals at which chase-skills are attempted to be triggered. 
#define WALK_SKILL_INTERVAL 5 /** * Enumerations **/ //Constants to identify the skill's inf value: enum e_skill_inf { INF_ATTACK_SKILL = 0x01, INF_GROUND_SKILL = 0x02, INF_SELF_SKILL = 0x04, // Skills casted on self where target is automatically chosen // 0x08 not assigned INF_SUPPORT_SKILL = 0x10, INF_TARGET_TRAP = 0x20, }; //Constants to identify a skill's nk value (damage properties) //The NK value applies only to non INF_GROUND_SKILL skills //when determining skill castend function to invoke. enum e_skill_nk { NK_NO_DAMAGE = 0x01, NK_SPLASH = 0x02|0x04, // 0x4 = splash & split NK_SPLASHSPLIT = 0x04, NK_NO_CARDFIX_ATK = 0x08, NK_NO_ELEFIX = 0x10, NK_IGNORE_DEF = 0x20, NK_IGNORE_FLEE = 0x40, NK_NO_CARDFIX_DEF = 0x80, }; //A skill with 3 would be no damage + splash: area of effect. //Constants to identify a skill's inf2 value. enum e_skill_inf2 { INF2_QUEST_SKILL = 0x0001, INF2_NPC_SKILL = 0x0002, // NPC skills are those that players can't have in their skill tree. INF2_WEDDING_SKILL = 0x0004, INF2_SPIRIT_SKILL = 0x0008, INF2_GUILD_SKILL = 0x0010, INF2_SONG_DANCE = 0x0020, INF2_ENSEMBLE_SKILL = 0x0040, INF2_TRAP = 0x0080, INF2_TARGET_SELF = 0x0100, // Refers to ground placed skills that will target the caster as well (like Grandcross) INF2_NO_TARGET_SELF = 0x0200, INF2_PARTY_ONLY = 0x0400, INF2_GUILD_ONLY = 0x0800, INF2_NO_ENEMY = 0x1000, INF2_NOLP = 0x2000, // Spells that can ignore Land Protector INF2_CHORUS_SKILL = 0x4000, // Chorus skill }; // Flags passed to skill_attack/skill_area_sub enum e_skill_display { SD_LEVEL = 0x1000, // skill_attack will send -1 instead of skill level (affects display of some skills) SD_ANIMATION = 0x2000, // skill_attack will use '5' instead of the skill's 'type' (this makes skills show an animation) SD_SPLASH = 0x4000, // skill_area_sub will count targets in skill_area_temp[2] SD_PREAMBLE = 0x8000, // skill_area_sub will transmit a 'magic' damage packet (-30000 dmg) for the first target selected }; enum { UF_DEFNOTENEMY = 0x0001, 
// If 'defunit_not_enemy' is set, the target is changed to 'friend' UF_NOREITERATION = 0x0002, // Spell cannot be stacked UF_NOFOOTSET = 0x0004, // Spell cannot be cast near/on targets UF_NOOVERLAP = 0x0008, // Spell effects do not overlap UF_PATHCHECK = 0x0010, // Only cells with a shootable path will be placed UF_NOPC = 0x0020, // May not target players UF_NOMOB = 0x0040, // May not target mobs UF_SKILL = 0x0080, // May target skills UF_DANCE = 0x0100, // Dance UF_ENSEMBLE = 0x0200, // Duet UF_SONG = 0x0400, // Song UF_DUALMODE = 0x0800, // Spells should trigger both ontimer and onplace/onout/onleft effects. UF_RANGEDSINGLEUNIT = 0x2000 // Hack for ranged layout, only display center }; //Returns the cast type of the skill: ground cast, castend damage, castend no damage enum { CAST_GROUND, CAST_DAMAGE, CAST_NODAMAGE }; enum wl_spheres { WLS_FIRE = 0x44, WLS_WIND, WLS_WATER, WLS_STONE, }; enum { ST_NONE, ST_HIDING, ST_CLOAKING, ST_HIDDEN, ST_RIDING, ST_FALCON, ST_CART, ST_SHIELD, ST_SIGHT, ST_EXPLOSIONSPIRITS, ST_CARTBOOST, ST_RECOV_WEIGHT_RATE, ST_MOVE_ENABLE, ST_WATER, ST_RIDINGDRAGON, ST_WUG, ST_RIDINGWUG, ST_MADO, ST_ELEMENTALSPIRIT, ST_POISONINGWEAPON, ST_ROLLINGCUTTER, ST_MH_FIGHTING, ST_MH_GRAPPLING, ST_PECO, }; enum e_skill { NV_BASIC = 1, SM_SWORD, SM_TWOHAND, SM_RECOVERY, SM_BASH, SM_PROVOKE, SM_MAGNUM, SM_ENDURE, MG_SRECOVERY, MG_SIGHT, MG_NAPALMBEAT, MG_SAFETYWALL, MG_SOULSTRIKE, MG_COLDBOLT, MG_FROSTDIVER, MG_STONECURSE, MG_FIREBALL, MG_FIREWALL, MG_FIREBOLT, MG_LIGHTNINGBOLT, MG_THUNDERSTORM, AL_DP, AL_DEMONBANE, AL_RUWACH, AL_PNEUMA, AL_TELEPORT, AL_WARP, AL_HEAL, AL_INCAGI, AL_DECAGI, AL_HOLYWATER, AL_CRUCIS, AL_ANGELUS, AL_BLESSING, AL_CURE, MC_INCCARRY, MC_DISCOUNT, MC_OVERCHARGE, MC_PUSHCART, MC_IDENTIFY, MC_VENDING, MC_MAMMONITE, AC_OWL, AC_VULTURE, AC_CONCENTRATION, AC_DOUBLE, AC_SHOWER, TF_DOUBLE, TF_MISS, TF_STEAL, TF_HIDING, TF_POISON, TF_DETOXIFY, ALL_RESURRECTION, KN_SPEARMASTERY, KN_PIERCE, KN_BRANDISHSPEAR, KN_SPEARSTAB, 
KN_SPEARBOOMERANG, KN_TWOHANDQUICKEN, KN_AUTOCOUNTER, KN_BOWLINGBASH, KN_RIDING, KN_CAVALIERMASTERY, PR_MACEMASTERY, PR_IMPOSITIO, PR_SUFFRAGIUM, PR_ASPERSIO, PR_BENEDICTIO, PR_SANCTUARY, PR_SLOWPOISON, PR_STRECOVERY, PR_KYRIE, PR_MAGNIFICAT, PR_GLORIA, PR_LEXDIVINA, PR_TURNUNDEAD, PR_LEXAETERNA, PR_MAGNUS, WZ_FIREPILLAR, WZ_SIGHTRASHER, WZ_FIREIVY, WZ_METEOR, WZ_JUPITEL, WZ_VERMILION, WZ_WATERBALL, WZ_ICEWALL, WZ_FROSTNOVA, WZ_STORMGUST, WZ_EARTHSPIKE, WZ_HEAVENDRIVE, WZ_QUAGMIRE, WZ_ESTIMATION, BS_IRON, BS_STEEL, BS_ENCHANTEDSTONE, BS_ORIDEOCON, BS_DAGGER, BS_SWORD, BS_TWOHANDSWORD, BS_AXE, BS_MACE, BS_KNUCKLE, BS_SPEAR, BS_HILTBINDING, BS_FINDINGORE, BS_WEAPONRESEARCH, BS_REPAIRWEAPON, BS_SKINTEMPER, BS_HAMMERFALL, BS_ADRENALINE, BS_WEAPONPERFECT, BS_OVERTHRUST, BS_MAXIMIZE, HT_SKIDTRAP, HT_LANDMINE, HT_ANKLESNARE, HT_SHOCKWAVE, HT_SANDMAN, HT_FLASHER, HT_FREEZINGTRAP, HT_BLASTMINE, HT_CLAYMORETRAP, HT_REMOVETRAP, HT_TALKIEBOX, HT_BEASTBANE, HT_FALCON, HT_STEELCROW, HT_BLITZBEAT, HT_DETECTING, HT_SPRINGTRAP, AS_RIGHT, AS_LEFT, AS_KATAR, AS_CLOAKING, AS_SONICBLOW, AS_GRIMTOOTH, AS_ENCHANTPOISON, AS_POISONREACT, AS_VENOMDUST, AS_SPLASHER, NV_FIRSTAID, NV_TRICKDEAD, SM_MOVINGRECOVERY, SM_FATALBLOW, SM_AUTOBERSERK, AC_MAKINGARROW, AC_CHARGEARROW, TF_SPRINKLESAND, TF_BACKSLIDING, TF_PICKSTONE, TF_THROWSTONE, MC_CARTREVOLUTION, MC_CHANGECART, MC_LOUD, AL_HOLYLIGHT, MG_ENERGYCOAT, NPC_PIERCINGATT, NPC_MENTALBREAKER, NPC_RANGEATTACK, NPC_ATTRICHANGE, NPC_CHANGEWATER, NPC_CHANGEGROUND, NPC_CHANGEFIRE, NPC_CHANGEWIND, NPC_CHANGEPOISON, NPC_CHANGEHOLY, NPC_CHANGEDARKNESS, NPC_CHANGETELEKINESIS, NPC_CRITICALSLASH, NPC_COMBOATTACK, NPC_GUIDEDATTACK, NPC_SELFDESTRUCTION, NPC_SPLASHATTACK, NPC_SUICIDE, NPC_POISON, NPC_BLINDATTACK, NPC_SILENCEATTACK, NPC_STUNATTACK, NPC_PETRIFYATTACK, NPC_CURSEATTACK, NPC_SLEEPATTACK, NPC_RANDOMATTACK, NPC_WATERATTACK, NPC_GROUNDATTACK, NPC_FIREATTACK, NPC_WINDATTACK, NPC_POISONATTACK, NPC_HOLYATTACK, NPC_DARKNESSATTACK, NPC_TELEKINESISATTACK, 
NPC_MAGICALATTACK, NPC_METAMORPHOSIS, NPC_PROVOCATION, NPC_SMOKING, NPC_SUMMONSLAVE, NPC_EMOTION, NPC_TRANSFORMATION, NPC_BLOODDRAIN, NPC_ENERGYDRAIN, NPC_KEEPING, NPC_DARKBREATH, NPC_DARKBLESSING, NPC_BARRIER, NPC_DEFENDER, NPC_LICK, NPC_HALLUCINATION, NPC_REBIRTH, NPC_SUMMONMONSTER, RG_SNATCHER, RG_STEALCOIN, RG_BACKSTAP, RG_TUNNELDRIVE, RG_RAID, RG_STRIPWEAPON, RG_STRIPSHIELD, RG_STRIPARMOR, RG_STRIPHELM, RG_INTIMIDATE, RG_GRAFFITI, RG_FLAGGRAFFITI, RG_CLEANER, RG_GANGSTER, RG_COMPULSION, RG_PLAGIARISM, AM_AXEMASTERY, AM_LEARNINGPOTION, AM_PHARMACY, AM_DEMONSTRATION, AM_ACIDTERROR, AM_POTIONPITCHER, AM_CANNIBALIZE, AM_SPHEREMINE, AM_CP_WEAPON, AM_CP_SHIELD, AM_CP_ARMOR, AM_CP_HELM, AM_BIOETHICS, AM_BIOTECHNOLOGY, AM_CREATECREATURE, AM_CULTIVATION, AM_FLAMECONTROL, AM_CALLHOMUN, AM_REST, AM_DRILLMASTER, AM_HEALHOMUN, AM_RESURRECTHOMUN, CR_TRUST, CR_AUTOGUARD, CR_SHIELDCHARGE, CR_SHIELDBOOMERANG, CR_REFLECTSHIELD, CR_HOLYCROSS, CR_GRANDCROSS, CR_DEVOTION, CR_PROVIDENCE, CR_DEFENDER, CR_SPEARQUICKEN, MO_IRONHAND, MO_SPIRITSRECOVERY, MO_CALLSPIRITS, MO_ABSORBSPIRITS, MO_TRIPLEATTACK, MO_BODYRELOCATION, MO_DODGE, MO_INVESTIGATE, MO_FINGEROFFENSIVE, MO_STEELBODY, MO_BLADESTOP, MO_EXPLOSIONSPIRITS, MO_EXTREMITYFIST, MO_CHAINCOMBO, MO_COMBOFINISH, SA_ADVANCEDBOOK, SA_CASTCANCEL, SA_MAGICROD, SA_SPELLBREAKER, SA_FREECAST, SA_AUTOSPELL, SA_FLAMELAUNCHER, SA_FROSTWEAPON, SA_LIGHTNINGLOADER, SA_SEISMICWEAPON, SA_DRAGONOLOGY, SA_VOLCANO, SA_DELUGE, SA_VIOLENTGALE, SA_LANDPROTECTOR, SA_DISPELL, SA_ABRACADABRA, SA_MONOCELL, SA_CLASSCHANGE, SA_SUMMONMONSTER, SA_REVERSEORCISH, SA_DEATH, SA_FORTUNE, SA_TAMINGMONSTER, SA_QUESTION, SA_GRAVITY, SA_LEVELUP, SA_INSTANTDEATH, SA_FULLRECOVERY, SA_COMA, BD_ADAPTATION, BD_ENCORE, BD_LULLABY, BD_RICHMANKIM, BD_ETERNALCHAOS, BD_DRUMBATTLEFIELD, BD_RINGNIBELUNGEN, BD_ROKISWEIL, BD_INTOABYSS, BD_SIEGFRIED, BD_RAGNAROK, BA_MUSICALLESSON, BA_MUSICALSTRIKE, BA_DISSONANCE, BA_FROSTJOKER, BA_WHISTLE, BA_ASSASSINCROSS, BA_POEMBRAGI, BA_APPLEIDUN, 
DC_DANCINGLESSON, DC_THROWARROW, DC_UGLYDANCE, DC_SCREAM, DC_HUMMING, DC_DONTFORGETME, DC_FORTUNEKISS, DC_SERVICEFORYOU, NPC_RANDOMMOVE, NPC_SPEEDUP, NPC_REVENGE, WE_MALE, WE_FEMALE, WE_CALLPARTNER, ITM_TOMAHAWK, NPC_DARKCROSS, NPC_GRANDDARKNESS, NPC_DARKSTRIKE, NPC_DARKTHUNDER, NPC_STOP, NPC_WEAPONBRAKER, NPC_ARMORBRAKE, NPC_HELMBRAKE, NPC_SHIELDBRAKE, NPC_UNDEADATTACK, NPC_CHANGEUNDEAD, NPC_POWERUP, NPC_AGIUP, NPC_SIEGEMODE, NPC_CALLSLAVE, NPC_INVISIBLE, NPC_RUN, LK_AURABLADE, LK_PARRYING, LK_CONCENTRATION, LK_TENSIONRELAX, LK_BERSERK, LK_FURY, HP_ASSUMPTIO, HP_BASILICA, HP_MEDITATIO, HW_SOULDRAIN, HW_MAGICCRASHER, HW_MAGICPOWER, PA_PRESSURE, PA_SACRIFICE, PA_GOSPEL, CH_PALMSTRIKE, CH_TIGERFIST, CH_CHAINCRUSH, PF_HPCONVERSION, PF_SOULCHANGE, PF_SOULBURN, ASC_KATAR, ASC_HALLUCINATION, ASC_EDP, ASC_BREAKER, SN_SIGHT, SN_FALCONASSAULT, SN_SHARPSHOOTING, SN_WINDWALK, WS_MELTDOWN, WS_CREATECOIN, WS_CREATENUGGET, WS_CARTBOOST, WS_SYSTEMCREATE, ST_CHASEWALK, ST_REJECTSWORD, ST_STEALBACKPACK, CR_ALCHEMY, CR_SYNTHESISPOTION, CG_ARROWVULCAN, CG_MOONLIT, CG_MARIONETTE, LK_SPIRALPIERCE, LK_HEADCRUSH, LK_JOINTBEAT, HW_NAPALMVULCAN, CH_SOULCOLLECT, PF_MINDBREAKER, PF_MEMORIZE, PF_FOGWALL, PF_SPIDERWEB, ASC_METEORASSAULT, ASC_CDP, WE_BABY, WE_CALLPARENT, WE_CALLBABY, TK_RUN, TK_READYSTORM, TK_STORMKICK, TK_READYDOWN, TK_DOWNKICK, TK_READYTURN, TK_TURNKICK, TK_READYCOUNTER, TK_COUNTER, TK_DODGE, TK_JUMPKICK, TK_HPTIME, TK_SPTIME, TK_POWER, TK_SEVENWIND, TK_HIGHJUMP, SG_FEEL, SG_SUN_WARM, SG_MOON_WARM, SG_STAR_WARM, SG_SUN_COMFORT, SG_MOON_COMFORT, SG_STAR_COMFORT, SG_HATE, SG_SUN_ANGER, SG_MOON_ANGER, SG_STAR_ANGER, SG_SUN_BLESS, SG_MOON_BLESS, SG_STAR_BLESS, SG_DEVIL, SG_FRIEND, SG_KNOWLEDGE, SG_FUSION, SL_ALCHEMIST, AM_BERSERKPITCHER, SL_MONK, SL_STAR, SL_SAGE, SL_CRUSADER, SL_SUPERNOVICE, SL_KNIGHT, SL_WIZARD, SL_PRIEST, SL_BARDDANCER, SL_ROGUE, SL_ASSASIN, SL_BLACKSMITH, BS_ADRENALINE2, SL_HUNTER, SL_SOULLINKER, SL_KAIZEL, SL_KAAHI, SL_KAUPE, SL_KAITE, SL_KAINA, SL_STIN, 
SL_STUN, SL_SMA, SL_SWOO, SL_SKE, SL_SKA, SM_SELFPROVOKE, NPC_EMOTION_ON, ST_PRESERVE, ST_FULLSTRIP, WS_WEAPONREFINE, CR_SLIMPITCHER, CR_FULLPROTECTION, PA_SHIELDCHAIN, HP_MANARECHARGE, PF_DOUBLECASTING, HW_GANBANTEIN, HW_GRAVITATION, WS_CARTTERMINATION, WS_OVERTHRUSTMAX, CG_LONGINGFREEDOM, CG_HERMODE, CG_TAROTCARD, CR_ACIDDEMONSTRATION, CR_CULTIVATION, ITEM_ENCHANTARMS, TK_MISSION, SL_HIGH, KN_ONEHAND, AM_TWILIGHT1, AM_TWILIGHT2, AM_TWILIGHT3, HT_POWER, GS_GLITTERING, GS_FLING, GS_TRIPLEACTION, GS_BULLSEYE, GS_MADNESSCANCEL, GS_ADJUSTMENT, GS_INCREASING, GS_MAGICALBULLET, GS_CRACKER, GS_SINGLEACTION, GS_SNAKEEYE, GS_CHAINACTION, GS_TRACKING, GS_DISARM, GS_PIERCINGSHOT, GS_RAPIDSHOWER, GS_DESPERADO, GS_GATLINGFEVER, GS_DUST, GS_FULLBUSTER, GS_SPREADATTACK, GS_GROUNDDRIFT, NJ_TOBIDOUGU, NJ_SYURIKEN, NJ_KUNAI, NJ_HUUMA, NJ_ZENYNAGE, NJ_TATAMIGAESHI, NJ_KASUMIKIRI, NJ_SHADOWJUMP, NJ_KIRIKAGE, NJ_UTSUSEMI, NJ_BUNSINJYUTSU, NJ_NINPOU, NJ_KOUENKA, NJ_KAENSIN, NJ_BAKUENRYU, NJ_HYOUSENSOU, NJ_SUITON, NJ_HYOUSYOURAKU, NJ_HUUJIN, NJ_RAIGEKISAI, NJ_KAMAITACHI, NJ_NEN, NJ_ISSEN, MB_FIGHTING, MB_NEUTRAL, MB_TAIMING_PUTI, MB_WHITEPOTION, MB_MENTAL, MB_CARDPITCHER, MB_PETPITCHER, MB_BODYSTUDY, MB_BODYALTER, MB_PETMEMORY, MB_M_TELEPORT, MB_B_GAIN, MB_M_GAIN, MB_MISSION, MB_MUNAKKNOWLEDGE, MB_MUNAKBALL, MB_SCROLL, MB_B_GATHERING, MB_M_GATHERING, MB_B_EXCLUDE, MB_B_DRIFT, MB_B_WALLRUSH, MB_M_WALLRUSH, MB_B_WALLSHIFT, MB_M_WALLCRASH, MB_M_REINCARNATION, MB_B_EQUIP, SL_DEATHKNIGHT, SL_COLLECTOR, SL_NINJA, SL_GUNNER, AM_TWILIGHT4, DA_RESET, DE_BERSERKAIZER, DA_DARKPOWER, DE_PASSIVE, DE_PATTACK, DE_PSPEED, DE_PDEFENSE, DE_PCRITICAL, DE_PHP, DE_PSP, DE_RESET, DE_RANKING, DE_PTRIPLE, DE_ENERGY, DE_NIGHTMARE, DE_SLASH, DE_COIL, DE_WAVE, DE_REBIRTH, DE_AURA, DE_FREEZER, DE_CHANGEATTACK, DE_PUNISH, DE_POISON, DE_INSTANT, DE_WARNING, DE_RANKEDKNIFE, DE_RANKEDGRADIUS, DE_GAUGE, DE_GTIME, DE_GPAIN, DE_GSKILL, DE_GKILL, DE_ACCEL, DE_BLOCKDOUBLE, DE_BLOCKMELEE, DE_BLOCKFAR, DE_FRONTATTACK, 
DE_DANGERATTACK, DE_TWINATTACK, DE_WINDATTACK, DE_WATERATTACK, DA_ENERGY, DA_CLOUD, DA_FIRSTSLOT, DA_HEADDEF, DA_SPACE, DA_TRANSFORM, DA_EXPLOSION, DA_REWARD, DA_CRUSH, DA_ITEMREBUILD, DA_ILLUSION, DA_NUETRALIZE, DA_RUNNER, DA_TRANSFER, DA_WALL, DA_ZENY, DA_REVENGE, DA_EARPLUG, DA_CONTRACT, DA_BLACK, DA_DREAM, DA_MAGICCART, DA_COPY, DA_CRYSTAL, DA_EXP, DA_CARTSWING, DA_REBUILD, DA_JOBCHANGE, DA_EDARKNESS, DA_EGUARDIAN, DA_TIMEOUT, ALL_TIMEIN, DA_ZENYRANK, DA_ACCESSORYMIX, NPC_EARTHQUAKE, NPC_FIREBREATH, NPC_ICEBREATH, NPC_THUNDERBREATH, NPC_ACIDBREATH, NPC_DARKNESSBREATH, NPC_DRAGONFEAR, NPC_BLEEDING, NPC_PULSESTRIKE, NPC_HELLJUDGEMENT, NPC_WIDESILENCE, NPC_WIDEFREEZE, NPC_WIDEBLEEDING, NPC_WIDESTONE, NPC_WIDECONFUSE, NPC_WIDESLEEP, NPC_WIDESIGHT, NPC_EVILLAND, NPC_MAGICMIRROR, NPC_SLOWCAST, NPC_CRITICALWOUND, NPC_EXPULSION, NPC_STONESKIN, NPC_ANTIMAGIC, NPC_WIDECURSE, NPC_WIDESTUN, NPC_VAMPIRE_GIFT, NPC_WIDESOULDRAIN, ALL_INCCARRY, NPC_TALK, NPC_HELLPOWER, NPC_WIDEHELLDIGNITY, NPC_INVINCIBLE, NPC_INVINCIBLEOFF, NPC_ALLHEAL, GM_SANDMAN, CASH_BLESSING, CASH_INCAGI, CASH_ASSUMPTIO, ALL_CATCRY, ALL_PARTYFLEE, ALL_ANGEL_PROTECT, ALL_DREAM_SUMMERNIGHT, NPC_CHANGEUNDEAD2, ALL_REVERSEORCISH, ALL_WEWISH, ALL_SONKRAN, NPC_WIDEHEALTHFEAR, NPC_WIDEBODYBURNNING, NPC_WIDEFROSTMISTY, NPC_WIDECOLD, NPC_WIDE_DEEP_SLEEP, NPC_WIDESIREN, NPC_VENOMFOG, NPC_MILLENNIUMSHIELD, NPC_COMET, NPC_WIDEWEB, NPC_WIDESUCK, NPC_STORMGUST2, NPC_FIRESTORM, NPC_REVERBERATION, NPC_REVERBERATION_ATK, NPC_LEX_AETERNA, KN_CHARGEATK = 1001, CR_SHRINK, AS_SONICACCEL, AS_VENOMKNIFE, RG_CLOSECONFINE, WZ_SIGHTBLASTER, SA_CREATECON, SA_ELEMENTWATER, HT_PHANTASMIC, BA_PANGVOICE, DC_WINKCHARM, BS_UNFAIRLYTRICK, BS_GREED, PR_REDEMPTIO, MO_KITRANSLATION, MO_BALKYOUNG, SA_ELEMENTGROUND, SA_ELEMENTFIRE, SA_ELEMENTWIND, RK_ENCHANTBLADE = 2001, RK_SONICWAVE, RK_DEATHBOUND, RK_HUNDREDSPEAR, RK_WINDCUTTER, RK_IGNITIONBREAK, RK_DRAGONTRAINING, RK_DRAGONBREATH, RK_DRAGONHOWLING, RK_RUNEMASTERY, RK_MILLENNIUMSHIELD, 
RK_CRUSHSTRIKE, RK_REFRESH, RK_GIANTGROWTH, RK_STONEHARDSKIN, RK_VITALITYACTIVATION, RK_STORMBLAST, RK_FIGHTINGSPIRIT, RK_ABUNDANCE, RK_PHANTOMTHRUST, GC_VENOMIMPRESS, GC_CROSSIMPACT, GC_DARKILLUSION, GC_RESEARCHNEWPOISON, GC_CREATENEWPOISON, GC_ANTIDOTE, GC_POISONINGWEAPON, GC_WEAPONBLOCKING, GC_COUNTERSLASH, GC_WEAPONCRUSH, GC_VENOMPRESSURE, GC_POISONSMOKE, GC_CLOAKINGEXCEED, GC_PHANTOMMENACE, GC_HALLUCINATIONWALK, GC_ROLLINGCUTTER, GC_CROSSRIPPERSLASHER, AB_JUDEX, AB_ANCILLA, AB_ADORAMUS, AB_CLEMENTIA, AB_CANTO, AB_CHEAL, AB_EPICLESIS, AB_PRAEFATIO, AB_ORATIO, AB_LAUDAAGNUS, AB_LAUDARAMUS, AB_EUCHARISTICA, AB_RENOVATIO, AB_HIGHNESSHEAL, AB_CLEARANCE, AB_EXPIATIO, AB_DUPLELIGHT, AB_DUPLELIGHT_MELEE, AB_DUPLELIGHT_MAGIC, AB_SILENTIUM, WL_WHITEIMPRISON = 2201, WL_SOULEXPANSION, WL_FROSTMISTY, WL_JACKFROST, WL_MARSHOFABYSS, WL_RECOGNIZEDSPELL, WL_SIENNAEXECRATE, WL_RADIUS, WL_STASIS, WL_DRAINLIFE, WL_CRIMSONROCK, WL_HELLINFERNO, WL_COMET, WL_CHAINLIGHTNING, WL_CHAINLIGHTNING_ATK, WL_EARTHSTRAIN, WL_TETRAVORTEX, WL_TETRAVORTEX_FIRE, WL_TETRAVORTEX_WATER, WL_TETRAVORTEX_WIND, WL_TETRAVORTEX_GROUND, WL_SUMMONFB, WL_SUMMONBL, WL_SUMMONWB, WL_SUMMON_ATK_FIRE, WL_SUMMON_ATK_WIND, WL_SUMMON_ATK_WATER, WL_SUMMON_ATK_GROUND, WL_SUMMONSTONE, WL_RELEASE, WL_READING_SB, WL_FREEZE_SP, RA_ARROWSTORM, RA_FEARBREEZE, RA_RANGERMAIN, RA_AIMEDBOLT, RA_DETONATOR, RA_ELECTRICSHOCKER, RA_CLUSTERBOMB, RA_WUGMASTERY, RA_WUGRIDER, RA_WUGDASH, RA_WUGSTRIKE, RA_WUGBITE, RA_TOOTHOFWUG, RA_SENSITIVEKEEN, RA_CAMOUFLAGE, RA_RESEARCHTRAP, RA_MAGENTATRAP, RA_COBALTTRAP, RA_MAIZETRAP, RA_VERDURETRAP, RA_FIRINGTRAP, RA_ICEBOUNDTRAP, NC_MADOLICENCE, NC_BOOSTKNUCKLE, NC_PILEBUNKER, NC_VULCANARM, NC_FLAMELAUNCHER, NC_COLDSLOWER, NC_ARMSCANNON, NC_ACCELERATION, NC_HOVERING, NC_F_SIDESLIDE, NC_B_SIDESLIDE, NC_MAINFRAME, NC_SELFDESTRUCTION, NC_SHAPESHIFT, NC_EMERGENCYCOOL, NC_INFRAREDSCAN, NC_ANALYZE, NC_MAGNETICFIELD, NC_NEUTRALBARRIER, NC_STEALTHFIELD, NC_REPAIR, NC_TRAININGAXE, NC_RESEARCHFE, 
NC_AXEBOOMERANG, NC_POWERSWING, NC_AXETORNADO, NC_SILVERSNIPER, NC_MAGICDECOY, NC_DISJOINT, SC_FATALMENACE, SC_REPRODUCE, SC_AUTOSHADOWSPELL, SC_SHADOWFORM, SC_TRIANGLESHOT, SC_BODYPAINT, SC_INVISIBILITY, SC_DEADLYINFECT, SC_ENERVATION, SC_GROOMY, SC_IGNORANCE, SC_LAZINESS, SC_UNLUCKY, SC_WEAKNESS, SC_STRIPACCESSARY, SC_MANHOLE, SC_DIMENSIONDOOR, SC_CHAOSPANIC, SC_MAELSTROM, SC_BLOODYLUST, SC_FEINTBOMB, LG_CANNONSPEAR = 2307, LG_BANISHINGPOINT, LG_TRAMPLE, LG_SHIELDPRESS, LG_REFLECTDAMAGE, LG_PINPOINTATTACK, LG_FORCEOFVANGUARD, LG_RAGEBURST, LG_SHIELDSPELL, LG_EXEEDBREAK, LG_OVERBRAND, LG_PRESTIGE, LG_BANDING, LG_MOONSLASHER, LG_RAYOFGENESIS, LG_PIETY, LG_EARTHDRIVE, LG_HESPERUSLIT, LG_INSPIRATION, SR_DRAGONCOMBO, SR_SKYNETBLOW, SR_EARTHSHAKER, SR_FALLENEMPIRE, SR_TIGERCANNON, SR_HELLGATE, SR_RAMPAGEBLASTER, SR_CRESCENTELBOW, SR_CURSEDCIRCLE, SR_LIGHTNINGWALK, SR_KNUCKLEARROW, SR_WINDMILL, SR_RAISINGDRAGON, SR_GENTLETOUCH, SR_ASSIMILATEPOWER, SR_POWERVELOCITY, SR_CRESCENTELBOW_AUTOSPELL, SR_GATEOFHELL, SR_GENTLETOUCH_QUIET, SR_GENTLETOUCH_CURE, SR_GENTLETOUCH_ENERGYGAIN, SR_GENTLETOUCH_CHANGE, SR_GENTLETOUCH_REVITALIZE, WA_SWING_DANCE = 2350, WA_SYMPHONY_OF_LOVER, WA_MOONLIT_SERENADE, MI_RUSH_WINDMILL = 2381, MI_ECHOSONG, MI_HARMONIZE, WM_LESSON = 2412, WM_METALICSOUND, WM_REVERBERATION, WM_REVERBERATION_MELEE, WM_REVERBERATION_MAGIC, WM_DOMINION_IMPULSE, WM_SEVERE_RAINSTORM, WM_POEMOFNETHERWORLD, WM_VOICEOFSIREN, WM_DEADHILLHERE, WM_LULLABY_DEEPSLEEP, WM_SIRCLEOFNATURE, WM_RANDOMIZESPELL, WM_GLOOMYDAY, WM_GREAT_ECHO, WM_SONG_OF_MANA, WM_DANCE_WITH_WUG, WM_SOUND_OF_DESTRUCTION, WM_SATURDAY_NIGHT_FEVER, WM_LERADS_DEW, WM_MELODYOFSINK, WM_BEYOND_OF_WARCRY, WM_UNLIMITED_HUMMING_VOICE, SO_FIREWALK = 2443, SO_ELECTRICWALK, SO_SPELLFIST, SO_EARTHGRAVE, SO_DIAMONDDUST, SO_POISON_BUSTER, SO_PSYCHIC_WAVE, SO_CLOUD_KILL, SO_STRIKING, SO_WARMER, SO_VACUUM_EXTREME, SO_VARETYR_SPEAR, SO_ARRULLO, SO_EL_CONTROL, SO_SUMMON_AGNI, SO_SUMMON_AQUA, SO_SUMMON_VENTUS, SO_SUMMON_TERA, 
SO_EL_ACTION, SO_EL_ANALYSIS, SO_EL_SYMPATHY, SO_EL_CURE, SO_FIRE_INSIGNIA, SO_WATER_INSIGNIA, SO_WIND_INSIGNIA, SO_EARTH_INSIGNIA, GN_TRAINING_SWORD = 2474, GN_REMODELING_CART, GN_CART_TORNADO, GN_CARTCANNON, GN_CARTBOOST, GN_THORNS_TRAP, GN_BLOOD_SUCKER, GN_SPORE_EXPLOSION, GN_WALLOFTHORN, GN_CRAZYWEED, GN_CRAZYWEED_ATK, GN_DEMONIC_FIRE, GN_FIRE_EXPANSION, GN_FIRE_EXPANSION_SMOKE_POWDER, GN_FIRE_EXPANSION_TEAR_GAS, GN_FIRE_EXPANSION_ACID, GN_HELLS_PLANT, GN_HELLS_PLANT_ATK, GN_MANDRAGORA, GN_SLINGITEM, GN_CHANGEMATERIAL, GN_MIX_COOKING, GN_MAKEBOMB, GN_S_PHARMACY, GN_SLINGITEM_RANGEMELEEATK, AB_SECRAMENT = 2515, WM_SEVERE_RAINSTORM_MELEE, SR_HOWLINGOFLION, SR_RIDEINLIGHTNING, LG_OVERBRAND_BRANDISH, LG_OVERBRAND_PLUSATK, ALL_ODINS_RECALL = 2533, RETURN_TO_ELDICASTES, ALL_BUYING_STORE, ALL_GUARDIAN_RECALL, ALL_ODINS_POWER, BEER_BOTTLE_CAP, NPC_ASSASSINCROSS, NPC_DISSONANCE, NPC_UGLYDANCE, ALL_TETANY, ALL_RAY_OF_PROTECTION, MC_CARTDECORATE, RL_GLITTERING_GREED = 2551, RL_RICHS_COIN, RL_MASS_SPIRAL, RL_BANISHING_BUSTER, RL_B_TRAP, RL_FLICKER, RL_S_STORM, RL_E_CHAIN, RL_QD_SHOT, RL_C_MARKER, RL_FIREDANCE, RL_H_MINE, RL_P_ALTER, RL_FALLEN_ANGEL, RL_R_TRIP, RL_D_TAIL, RL_FIRE_RAIN, RL_HEAT_BARREL, RL_AM_BLAST, RL_SLUGSHOT, RL_HAMMER_OF_GOD, RL_R_TRIP_PLUSATK, RL_B_FLICKER_ATK, RL_GLITTERING_GREED_ATK, KO_YAMIKUMO = 3001, KO_RIGHT, KO_LEFT, KO_JYUMONJIKIRI, KO_SETSUDAN, KO_BAKURETSU, KO_HAPPOKUNAI, KO_MUCHANAGE, KO_HUUMARANKA, KO_MAKIBISHI, KO_MEIKYOUSISUI, KO_ZANZOU, KO_KYOUGAKU, KO_JYUSATSU, KO_KAHU_ENTEN, KO_HYOUHU_HUBUKI, KO_KAZEHU_SEIRAN, KO_DOHU_KOUKAI, KO_KAIHOU, KO_ZENKAI, KO_GENWAKU, KO_IZAYOI, KG_KAGEHUMI, KG_KYOMU, KG_KAGEMUSYA, OB_ZANGETSU, OB_OBOROGENSOU, OB_OBOROGENSOU_TRANSITION_ATK, OB_AKAITSUKI, ECL_SNOWFLIP = 3031, ECL_PEONYMAMY, ECL_SADAGUI, ECL_SEQUOIADUST, ECLAGE_RECALL, GC_DARKCROW = 5001, RA_UNLIMIT, GN_ILLUSIONDOPING, RK_DRAGONBREATH_WATER, RK_LUXANIMA, NC_MAGMA_ERUPTION, WM_FRIGG_SONG, SO_ELEMENTAL_SHIELD, SR_FLASHCOMBO, SC_ESCAPE, 
AB_OFFERTORIUM, WL_TELEKINESIS_INTENSE, LG_KINGS_GRACE, ALL_FULL_THROTTLE, SR_FLASHCOMBO_ATK_STEP1, SR_FLASHCOMBO_ATK_STEP2, SR_FLASHCOMBO_ATK_STEP3, SR_FLASHCOMBO_ATK_STEP4, HLIF_HEAL = 8001, HLIF_AVOID, HLIF_BRAIN, HLIF_CHANGE, HAMI_CASTLE, HAMI_DEFENCE, HAMI_SKIN, HAMI_BLOODLUST, HFLI_MOON, HFLI_FLEET, HFLI_SPEED, HFLI_SBR44, HVAN_CAPRICE, HVAN_CHAOTIC, HVAN_INSTRUCT, HVAN_EXPLOSION, MUTATION_BASEJOB, MH_SUMMON_LEGION, MH_NEEDLE_OF_PARALYZE, MH_POISON_MIST, MH_PAIN_KILLER, MH_LIGHT_OF_REGENE, MH_OVERED_BOOST, MH_ERASER_CUTTER, MH_XENO_SLASHER, MH_SILENT_BREEZE, MH_STYLE_CHANGE, MH_SONIC_CRAW, MH_SILVERVEIN_RUSH, MH_MIDNIGHT_FRENZY, MH_STAHL_HORN, MH_GOLDENE_FERSE, MH_STEINWAND, MH_HEILIGE_STANGE, MH_ANGRIFFS_MODUS, MH_TINDER_BREAKER, MH_CBC, MH_EQC, MH_MAGMA_FLOW, MH_GRANITIC_ARMOR, MH_LAVA_SLIDE, MH_PYROCLASTIC, MH_VOLCANIC_ASH, MS_BASH = 8201, MS_MAGNUM, MS_BOWLINGBASH, MS_PARRYING, MS_REFLECTSHIELD, MS_BERSERK, MA_DOUBLE, MA_SHOWER, MA_SKIDTRAP, MA_LANDMINE, MA_SANDMAN, MA_FREEZINGTRAP, MA_REMOVETRAP, MA_CHARGEARROW, MA_SHARPSHOOTING, ML_PIERCE, ML_BRANDISH, ML_SPIRALPIERCE, ML_DEFENDER, ML_AUTOGUARD, ML_DEVOTION, MER_MAGNIFICAT, MER_QUICKEN, MER_SIGHT, MER_CRASH, MER_REGAIN, MER_TENDER, MER_BENEDICTION, MER_RECUPERATE, MER_MENTALCURE, MER_COMPRESS, MER_PROVOKE, MER_AUTOBERSERK, MER_DECAGI, MER_SCAPEGOAT, MER_LEXDIVINA, MER_ESTIMATION, MER_KYRIE, MER_BLESSING, MER_INCAGI, EL_CIRCLE_OF_FIRE = 8401, EL_FIRE_CLOAK, EL_FIRE_MANTLE, EL_WATER_SCREEN, EL_WATER_DROP, EL_WATER_BARRIER, EL_WIND_STEP, EL_WIND_CURTAIN, EL_ZEPHYR, EL_SOLID_SKIN, EL_STONE_SHIELD, EL_POWER_OF_GAIA, EL_PYROTECHNIC, EL_HEATER, EL_TROPIC, EL_AQUAPLAY, EL_COOLER, EL_CHILLY_AIR, EL_GUST, EL_BLAST, EL_WILD_STORM, EL_PETROLOGY, EL_CURSED_SOIL, EL_UPHEAVAL, EL_FIRE_ARROW, EL_FIRE_BOMB, EL_FIRE_BOMB_ATK, EL_FIRE_WAVE, EL_FIRE_WAVE_ATK, EL_ICE_NEEDLE, EL_WATER_SCREW, EL_WATER_SCREW_ATK, EL_TIDAL_WEAPON, EL_WIND_SLASH, EL_HURRICANE, EL_HURRICANE_ATK, EL_TYPOON_MIS, EL_TYPOON_MIS_ATK, EL_STONE_HAMMER, 
EL_ROCK_CRUSHER, EL_ROCK_CRUSHER_ATK, EL_STONE_RAIN, }; /// The client view ids for land skills. enum { UNT_SAFETYWALL = 0x7e, UNT_FIREWALL, UNT_WARP_WAITING, UNT_WARP_ACTIVE, UNT_BENEDICTIO, //TODO UNT_SANCTUARY, UNT_MAGNUS, UNT_PNEUMA, UNT_DUMMYSKILL, //These show no effect on the client UNT_FIREPILLAR_WAITING, UNT_FIREPILLAR_ACTIVE, UNT_HIDDEN_TRAP, //TODO UNT_TRAP, //TODO UNT_HIDDEN_WARP_NPC, //TODO UNT_USED_TRAPS, UNT_ICEWALL, UNT_QUAGMIRE, UNT_BLASTMINE, UNT_SKIDTRAP, UNT_ANKLESNARE, UNT_VENOMDUST, UNT_LANDMINE, UNT_SHOCKWAVE, UNT_SANDMAN, UNT_FLASHER, UNT_FREEZINGTRAP, UNT_CLAYMORETRAP, UNT_TALKIEBOX, UNT_VOLCANO, UNT_DELUGE, UNT_VIOLENTGALE, UNT_LANDPROTECTOR, UNT_LULLABY, UNT_RICHMANKIM, UNT_ETERNALCHAOS, UNT_DRUMBATTLEFIELD, UNT_RINGNIBELUNGEN, UNT_ROKISWEIL, UNT_INTOABYSS, UNT_SIEGFRIED, UNT_DISSONANCE, UNT_WHISTLE, UNT_ASSASSINCROSS, UNT_POEMBRAGI, UNT_APPLEIDUN, UNT_UGLYDANCE, UNT_HUMMING, UNT_DONTFORGETME, UNT_FORTUNEKISS, UNT_SERVICEFORYOU, UNT_GRAFFITI, UNT_DEMONSTRATION, UNT_CALLFAMILY, UNT_GOSPEL, UNT_BASILICA, UNT_MOONLIT, UNT_FOGWALL, UNT_SPIDERWEB, UNT_GRAVITATION, UNT_HERMODE, UNT_KAENSIN, //TODO UNT_SUITON, UNT_TATAMIGAESHI, UNT_KAEN, UNT_GROUNDDRIFT_WIND, UNT_GROUNDDRIFT_DARK, UNT_GROUNDDRIFT_POISON, UNT_GROUNDDRIFT_WATER, UNT_GROUNDDRIFT_FIRE, UNT_DEATHWAVE, //TODO UNT_WATERATTACK, //TODO UNT_WINDATTACK, //TODO UNT_EARTHQUAKE, //TODO UNT_EVILLAND, UNT_DARK_RUNNER, //TODO UNT_DARK_TRANSFER, //TODO UNT_EPICLESIS, UNT_EARTHSTRAIN, UNT_MANHOLE, UNT_DIMENSIONDOOR, UNT_CHAOSPANIC, UNT_MAELSTROM, UNT_BLOODYLUST, UNT_FEINTBOMB, UNT_MAGENTATRAP, UNT_COBALTTRAP, UNT_MAIZETRAP, UNT_VERDURETRAP, UNT_FIRINGTRAP, UNT_ICEBOUNDTRAP, UNT_ELECTRICSHOCKER, UNT_CLUSTERBOMB, UNT_REVERBERATION, UNT_SEVERE_RAINSTORM, UNT_FIREWALK, UNT_ELECTRICWALK, UNT_NETHERWORLD, UNT_PSYCHIC_WAVE, UNT_CLOUD_KILL, UNT_POISONSMOKE, UNT_NEUTRALBARRIER, UNT_STEALTHFIELD, UNT_WARMER, UNT_THORNS_TRAP, UNT_WALLOFTHORN, UNT_DEMONIC_FIRE, UNT_FIRE_EXPANSION_SMOKE_POWDER, 
UNT_FIRE_EXPANSION_TEAR_GAS, UNT_HELLS_PLANT, UNT_VACUUM_EXTREME, UNT_BANDING, UNT_FIRE_MANTLE, UNT_WATER_BARRIER, UNT_ZEPHYR, UNT_POWER_OF_GAIA, UNT_FIRE_INSIGNIA, UNT_WATER_INSIGNIA, UNT_WIND_INSIGNIA, UNT_EARTH_INSIGNIA, UNT_POISON_MIST, UNT_LAVA_SLIDE, UNT_VOLCANIC_ASH, UNT_ZENKAI_WATER, UNT_ZENKAI_LAND, UNT_ZENKAI_FIRE, UNT_ZENKAI_WIND, UNT_MAKIBISHI, UNT_VENOMFOG, UNT_ICEMINE, UNT_FLAMECROSS, UNT_HELLBURNING, UNT_MAGMA_ERUPTION, UNT_KINGS_GRACE, UNT_GLITTERING_GREED, UNT_B_TRAP, UNT_FIRE_RAIN, /** * Guild Auras **/ UNT_GD_LEADERSHIP = 0xc1, UNT_GD_GLORYWOUNDS = 0xc2, UNT_GD_SOULCOLD = 0xc3, UNT_GD_HAWKEYES = 0xc4, UNT_MAX = 0x190 }; /** * Structures **/ struct skill_condition { int weapon,ammo,ammo_qty,hp,sp,zeny,spiritball,mhp,state; int itemid[MAX_SKILL_ITEM_REQUIRE],amount[MAX_SKILL_ITEM_REQUIRE]; }; // Database skills struct s_skill_db { unsigned short nameid; char name[MAX_SKILL_NAME_LENGTH]; char desc[40]; int range[MAX_SKILL_LEVEL],hit,inf,element[MAX_SKILL_LEVEL],nk,splash[MAX_SKILL_LEVEL],max; int num[MAX_SKILL_LEVEL]; int cast[MAX_SKILL_LEVEL],walkdelay[MAX_SKILL_LEVEL],delay[MAX_SKILL_LEVEL]; #ifdef RENEWAL_CAST int fixed_cast[MAX_SKILL_LEVEL]; #endif int upkeep_time[MAX_SKILL_LEVEL],upkeep_time2[MAX_SKILL_LEVEL],cooldown[MAX_SKILL_LEVEL]; int castcancel,cast_def_rate; int inf2,maxcount[MAX_SKILL_LEVEL],skill_type; int blewcount[MAX_SKILL_LEVEL]; int hp[MAX_SKILL_LEVEL],sp[MAX_SKILL_LEVEL],mhp[MAX_SKILL_LEVEL],hp_rate[MAX_SKILL_LEVEL],sp_rate[MAX_SKILL_LEVEL],zeny[MAX_SKILL_LEVEL]; int weapon,ammo,ammo_qty[MAX_SKILL_LEVEL],state,spiritball[MAX_SKILL_LEVEL]; int itemid[MAX_SKILL_ITEM_REQUIRE],amount[MAX_SKILL_ITEM_REQUIRE]; int castnodex[MAX_SKILL_LEVEL], delaynodex[MAX_SKILL_LEVEL]; int nocast; int unit_id[2]; int unit_layout_type[MAX_SKILL_LEVEL]; int unit_range[MAX_SKILL_LEVEL]; int unit_interval; int unit_target; int unit_flag; }; struct s_skill_unit_layout { int count; int dx[MAX_SKILL_UNIT_COUNT]; int dy[MAX_SKILL_UNIT_COUNT]; }; struct 
skill_timerskill { int timer; int src_id; int target_id; int map; short x,y; uint16 skill_id,skill_lv; int type; // a BF_ type (NOTE: some places use this as general-purpose storage...) int flag; }; struct skill_unit_group { int src_id; int party_id; int guild_id; int bg_id; int map; int target_flag; //Holds BCT_* flag for battle_check_target int bl_flag; //Holds BL_* flag for map_foreachin* functions int64 tick; int limit,interval; uint16 skill_id,skill_lv; int val1,val2,val3; char *valstr; int unit_id; int group_id; int unit_count,alive_count; int item_id; //store item used. struct skill_unit *unit; struct { unsigned ammo_consume : 1; unsigned song_dance : 2; //0x1 Song/Dance, 0x2 Ensemble unsigned guildaura : 1; } state; }; struct skill_unit { struct block_list bl; struct skill_unit_group *group; int limit; int val1,val2; short alive,range; }; struct skill_unit_group_tickset { int64 tick; int id; }; // Create Database item struct s_skill_produce_db { int nameid, trigger; int req_skill,req_skill_lv,itemlv; int mat_id[MAX_PRODUCE_RESOURCE],mat_amount[MAX_PRODUCE_RESOURCE]; }; // Creating database arrow struct s_skill_arrow_db { int nameid, trigger; int cre_id[MAX_ARROW_RESOURCE],cre_amount[MAX_ARROW_RESOURCE]; }; // Abracadabra database struct s_skill_abra_db { uint16 skill_id; int req_lv; int per; }; //GCross magic mushroom database struct s_skill_magicmushroom_db { uint16 skill_id; }; struct skill_cd_entry { int duration;//milliseconds #if PACKETVER >= 20120604 int total;/* used for display on newer clients */ #endif short skidx;//the skill index entries belong to int64 started;/* gettick() of when it started, used vs duration to measure how much left upon logout */ int timer;/* timer id */ uint16 skill_id;//skill id }; /** * Skill Cool Down Delay Saving * Struct skill_cd is not a member of struct map_session_data * to keep cooldowns in memory between player log-ins. * All cooldowns are reset when server is restarted. 
**/ struct skill_cd { struct skill_cd_entry *entry[MAX_SKILL_TREE]; unsigned char cursor; }; /** * Skill Unit Persistency during endack routes (mostly for songs see bugreport:4574) **/ struct skill_unit_save { uint16 skill_id, skill_lv; }; struct s_skill_improvise_db { uint16 skill_id; short per;//1-10000 }; struct s_skill_changematerial_db { int itemid; short rate; int qty[5]; short qty_rate[5]; }; struct s_skill_spellbook_db { int nameid; uint16 skill_id; int point; }; typedef int (*SkillFunc)(struct block_list *src, struct block_list *target, uint16 skill_id, uint16 skill_lv, int64 tick, int flag); /** * Skill.c Interface **/ struct skill_interface { int (*init) (bool minimal); int (*final) (void); void (*reload) (void); void (*read_db) (bool minimal); /* */ DBMap* cd_db; // char_id -> struct skill_cd DBMap* name2id_db; DBMap* unit_db; // int id -> struct skill_unit* DBMap* usave_db; // char_id -> struct skill_unit_save DBMap* group_db;// int group_id -> struct skill_unit_group* /* */ struct eri *unit_ers; //For handling skill_unit's [Skotlex] struct eri *timer_ers; //For handling skill_timerskills [Skotlex] struct eri *cd_ers; // ERS Storage for skill cool down managers [Ind/Hercules] struct eri *cd_entry_ers; // ERS Storage for skill cool down entries [Ind/Hercules] /* */ struct s_skill_db db[MAX_SKILL_DB]; struct s_skill_produce_db produce_db[MAX_SKILL_PRODUCE_DB]; struct s_skill_arrow_db arrow_db[MAX_SKILL_ARROW_DB]; struct s_skill_abra_db abra_db[MAX_SKILL_ABRA_DB]; struct s_skill_magicmushroom_db magicmushroom_db[MAX_SKILL_MAGICMUSHROOM_DB]; struct s_skill_improvise_db improvise_db[MAX_SKILL_IMPROVISE_DB]; struct s_skill_changematerial_db changematerial_db[MAX_SKILL_PRODUCE_DB]; struct s_skill_spellbook_db spellbook_db[MAX_SKILL_SPELLBOOK_DB]; bool reproduce_db[MAX_SKILL_DB]; struct s_skill_unit_layout unit_layout[MAX_SKILL_UNIT_LAYOUT]; /* */ int enchant_eff[5]; int deluge_eff[5]; int firewall_unit_pos; int icewall_unit_pos; int earthstrain_unit_pos; int 
area_temp[8]; int unit_temp[20]; // temporary storage for tracking skill unit skill ids as players move in/out of them int unit_group_newid; /* accessors */ int (*get_index) ( uint16 skill_id ); int (*get_type) ( uint16 skill_id ); int (*get_hit) ( uint16 skill_id ); int (*get_inf) ( uint16 skill_id ); int (*get_ele) ( uint16 skill_id, uint16 skill_lv ); int (*get_nk) ( uint16 skill_id ); int (*get_max) ( uint16 skill_id ); int (*get_range) ( uint16 skill_id, uint16 skill_lv ); int (*get_range2) (struct block_list *bl, uint16 skill_id, uint16 skill_lv); int (*get_splash) ( uint16 skill_id, uint16 skill_lv ); int (*get_hp) ( uint16 skill_id, uint16 skill_lv ); int (*get_mhp) ( uint16 skill_id, uint16 skill_lv ); int (*get_sp) ( uint16 skill_id, uint16 skill_lv ); int (*get_state) (uint16 skill_id); int (*get_spiritball) (uint16 skill_id, uint16 skill_lv); int (*get_zeny) ( uint16 skill_id, uint16 skill_lv ); int (*get_num) ( uint16 skill_id, uint16 skill_lv ); int (*get_cast) ( uint16 skill_id, uint16 skill_lv ); int (*get_delay) ( uint16 skill_id, uint16 skill_lv ); int (*get_walkdelay) ( uint16 skill_id, uint16 skill_lv ); int (*get_time) ( uint16 skill_id, uint16 skill_lv ); int (*get_time2) ( uint16 skill_id, uint16 skill_lv ); int (*get_castnodex) ( uint16 skill_id, uint16 skill_lv ); int (*get_delaynodex) ( uint16 skill_id ,uint16 skill_lv ); int (*get_castdef) ( uint16 skill_id ); int (*get_weapontype) ( uint16 skill_id ); int (*get_ammotype) ( uint16 skill_id ); int (*get_ammo_qty) ( uint16 skill_id, uint16 skill_lv ); int (*get_unit_id) (uint16 skill_id,int flag); int (*get_inf2) ( uint16 skill_id ); int (*get_castcancel) ( uint16 skill_id ); int (*get_maxcount) ( uint16 skill_id, uint16 skill_lv ); int (*get_blewcount) ( uint16 skill_id, uint16 skill_lv ); int (*get_unit_flag) ( uint16 skill_id ); int (*get_unit_target) ( uint16 skill_id ); int (*get_unit_interval) ( uint16 skill_id ); int (*get_unit_bl_target) ( uint16 skill_id ); int
(*get_unit_layout_type) ( uint16 skill_id ,uint16 skill_lv ); int (*get_unit_range) ( uint16 skill_id, uint16 skill_lv ); int (*get_cooldown) ( uint16 skill_id, uint16 skill_lv ); int (*tree_get_max) ( uint16 skill_id, int b_class ); const char* (*get_name) ( uint16 skill_id ); const char* (*get_desc) ( uint16 skill_id ); /* check */ void (*chk) (uint16* skill_id); /* whether its CAST_GROUND, CAST_DAMAGE or CAST_NODAMAGE */ int (*get_casttype) (uint16 skill_id); int (*get_casttype2) (uint16 index); bool (*is_combo) (int skill_id); int (*name2id) (const char* name); int (*isammotype) (struct map_session_data *sd, int skill_id); int (*castend_id) (int tid, int64 tick, int id, intptr_t data); int (*castend_pos) (int tid, int64 tick, int id, intptr_t data); int (*castend_map) ( struct map_session_data *sd,uint16 skill_id, const char *mapname); int (*cleartimerskill) (struct block_list *src); int (*addtimerskill) (struct block_list *src, int64 tick, int target, int x, int y, uint16 skill_id, uint16 skill_lv, int type, int flag); int (*additional_effect) (struct block_list* src, struct block_list *bl, uint16 skill_id, uint16 skill_lv, int attack_type, int dmg_lv, int64 tick); int (*counter_additional_effect) (struct block_list* src, struct block_list *bl, uint16 skill_id, uint16 skill_lv, int attack_type, int64 tick); int (*blown) (struct block_list* src, struct block_list* target, int count, int8 dir, int flag); int (*break_equip) (struct block_list *bl, unsigned short where, int rate, int flag); int (*strip_equip) (struct block_list *bl, unsigned short where, int rate, int lv, int time); struct skill_unit_group* (*id2group) (int group_id); struct skill_unit_group *(*unitsetting) (struct block_list* src, uint16 skill_id, uint16 skill_lv, short x, short y, int flag); struct skill_unit *(*initunit) (struct skill_unit_group *group, int idx, int x, int y, int val1, int val2); int (*delunit) (struct skill_unit *su); struct skill_unit_group *(*init_unitgroup) (struct 
block_list* src, int count, uint16 skill_id, uint16 skill_lv, int unit_id, int limit, int interval); int (*del_unitgroup) (struct skill_unit_group *group, const char* file, int line, const char* func); int (*clear_unitgroup) (struct block_list *src); int (*clear_group) (struct block_list *bl, int flag); int (*unit_onplace) (struct skill_unit *src, struct block_list *bl, int64 tick); int (*unit_ondamaged) (struct skill_unit *src, struct block_list *bl, int64 damage, int64 tick); int (*cast_fix) ( struct block_list *bl, uint16 skill_id, uint16 skill_lv); int (*cast_fix_sc) ( struct block_list *bl, int time); int (*vf_cast_fix) ( struct block_list *bl, double time, uint16 skill_id, uint16 skill_lv); int (*delay_fix) ( struct block_list *bl, uint16 skill_id, uint16 skill_lv); int (*check_condition_castbegin) (struct map_session_data *sd, uint16 skill_id, uint16 skill_lv); int (*check_condition_castend) (struct map_session_data *sd, uint16 skill_id, uint16 skill_lv); int (*consume_requirement) (struct map_session_data *sd, uint16 skill_id, uint16 skill_lv, short type); struct skill_condition (*get_requirement) (struct map_session_data *sd, uint16 skill_id, uint16 skill_lv); int (*check_pc_partner) (struct map_session_data *sd, uint16 skill_id, uint16* skill_lv, int range, int cast_flag); int (*unit_move) (struct block_list *bl, int64 tick, int flag); int (*unit_onleft) (uint16 skill_id, struct block_list *bl, int64 tick); int (*unit_onout) (struct skill_unit *src, struct block_list *bl, int64 tick); int (*unit_move_unit_group) ( struct skill_unit_group *group, int16 m,int16 dx,int16 dy); int (*sit) (struct map_session_data *sd, int type); void (*brandishspear) (struct block_list* src, struct block_list* bl, uint16 skill_id, uint16 skill_lv, int64 tick, int flag); void (*repairweapon) (struct map_session_data *sd, int idx); void (*identify) (struct map_session_data *sd,int idx); void (*weaponrefine) (struct map_session_data *sd,int idx); int (*autospell) (struct 
map_session_data *md,uint16 skill_id); int (*calc_heal) (struct block_list *src, struct block_list *target, uint16 skill_id, uint16 skill_lv, bool heal); bool (*check_cloaking) (struct block_list *bl, struct status_change_entry *sce); int (*enchant_elemental_end) (struct block_list *bl, int type); int (*not_ok) (uint16 skill_id, struct map_session_data *sd); int (*not_ok_hom) (uint16 skill_id, struct homun_data *hd); int (*not_ok_mercenary) (uint16 skill_id, struct mercenary_data *md); int (*chastle_mob_changetarget) (struct block_list *bl,va_list ap); int (*can_produce_mix) ( struct map_session_data *sd, int nameid, int trigger, int qty); int (*produce_mix) ( struct map_session_data *sd, uint16 skill_id, int nameid, int slot1, int slot2, int slot3, int qty ); int (*arrow_create) ( struct map_session_data *sd,int nameid); int (*castend_nodamage_id) (struct block_list *src, struct block_list *bl, uint16 skill_id, uint16 skill_lv, int64 tick, int flag); int (*castend_damage_id) (struct block_list* src, struct block_list *bl, uint16 skill_id, uint16 skill_lv, int64 tick,int flag); int (*castend_pos2) (struct block_list *src, int x, int y, uint16 skill_id, uint16 skill_lv, int64 tick, int flag); int (*blockpc_start) (struct map_session_data *sd, uint16 skill_id, int tick); int (*blockhomun_start) (struct homun_data *hd, uint16 skill_id, int tick); int (*blockmerc_start) (struct mercenary_data *md, uint16 skill_id, int tick); int (*attack) (int attack_type, struct block_list* src, struct block_list *dsrc, struct block_list *bl, uint16 skill_id, uint16 skill_lv, int64 tick, int flag); int (*attack_area) (struct block_list *bl,va_list ap); int (*area_sub) (struct block_list *bl, va_list ap); int (*area_sub_count) (struct block_list *src, struct block_list *target, uint16 skill_id, uint16 skill_lv, int64 tick, int flag); int (*check_unit_range) (struct block_list *bl, int x, int y, uint16 skill_id, uint16 skill_lv); int (*check_unit_range_sub) (struct block_list *bl, 
va_list ap); int (*check_unit_range2) (struct block_list *bl, int x, int y, uint16 skill_id, uint16 skill_lv); int (*check_unit_range2_sub) (struct block_list *bl, va_list ap); void (*toggle_magicpower) (struct block_list *bl, uint16 skill_id); int (*magic_reflect) (struct block_list* src, struct block_list* bl, int type); int (*onskillusage) (struct map_session_data *sd, struct block_list *bl, uint16 skill_id, int64 tick); int (*cell_overlap) (struct block_list *bl, va_list ap); int (*timerskill) (int tid, int64 tick, int id, intptr_t data); int (*trap_splash) (struct block_list *bl, va_list ap); int (*check_condition_mercenary) (struct block_list *bl, int skill_id, int lv, int type); struct skill_unit_group *(*locate_element_field) (struct block_list *bl); int (*graffitiremover) (struct block_list *bl, va_list ap); int (*activate_reverberation) ( struct block_list *bl, va_list ap); int (*dance_overlap_sub) (struct block_list* bl, va_list ap); int (*dance_overlap) (struct skill_unit* su, int flag); struct s_skill_unit_layout *(*get_unit_layout) (uint16 skill_id, uint16 skill_lv, struct block_list* src, int x, int y); int (*frostjoke_scream) (struct block_list *bl, va_list ap); int (*greed) (struct block_list *bl, va_list ap); int (*destroy_trap) ( struct block_list *bl, va_list ap ); int (*icewall_block) (struct block_list *bl,va_list ap); struct skill_unit_group_tickset *(*unitgrouptickset_search) (struct block_list *bl, struct skill_unit_group *group, int64 tick); bool (*dance_switch) (struct skill_unit* su, int flag); int (*check_condition_char_sub) (struct block_list *bl, va_list ap); int (*check_condition_mob_master_sub) (struct block_list *bl, va_list ap); void (*brandishspear_first) (struct square *tc, uint8 dir, int16 x, int16 y); void (*brandishspear_dir) (struct square* tc, uint8 dir, int are); int (*get_fixed_cast) ( uint16 skill_id ,uint16 skill_lv ); int (*sit_count) (struct block_list *bl, va_list ap); int (*sit_in) (struct block_list *bl, va_list 
ap); int (*sit_out) (struct block_list *bl, va_list ap); void (*unitsetmapcell) (struct skill_unit *src, uint16 skill_id, uint16 skill_lv, cell_t cell, bool flag); int (*unit_onplace_timer) (struct skill_unit *src, struct block_list *bl, int64 tick); int (*unit_effect) (struct block_list* bl, va_list ap); int (*unit_timer_sub_onplace) (struct block_list* bl, va_list ap); int (*unit_move_sub) (struct block_list* bl, va_list ap); int (*blockpc_end) (int tid, int64 tick, int id, intptr_t data); int (*blockhomun_end) (int tid, int64 tick, int id, intptr_t data); int (*blockmerc_end) (int tid, int64 tick, int id, intptr_t data); int (*split_atoi) (char *str, int *val); int (*unit_timer) (int tid, int64 tick, int id, intptr_t data); int (*unit_timer_sub) (DBKey key, DBData *data, va_list ap); void (*init_unit_layout) (void); bool (*parse_row_skilldb) (char* split[], int columns, int current); bool (*parse_row_requiredb) (char* split[], int columns, int current); bool (*parse_row_castdb) (char* split[], int columns, int current); bool (*parse_row_castnodexdb) (char* split[], int columns, int current); bool (*parse_row_unitdb) (char* split[], int columns, int current); bool (*parse_row_producedb) (char* split[], int columns, int current); bool (*parse_row_createarrowdb) (char* split[], int columns, int current); bool (*parse_row_abradb) (char* split[], int columns, int current); bool (*parse_row_spellbookdb) (char* split[], int columns, int current); bool (*parse_row_magicmushroomdb) (char* split[], int column, int current); bool (*parse_row_reproducedb) (char* split[], int column, int current); bool (*parse_row_improvisedb) (char* split[], int columns, int current); bool (*parse_row_changematerialdb) (char* split[], int columns, int current); /* save new unit skill */ void (*usave_add) (struct map_session_data * sd, uint16 skill_id, uint16 skill_lv); /* trigger saved unit skills */ void (*usave_trigger) (struct map_session_data *sd); /* load all stored skill cool downs */ 
void (*cooldown_load) (struct map_session_data * sd); /* run spellbook of nameid id */ int (*spellbook) (struct map_session_data *sd, int nameid); /* */ int (*block_check) (struct block_list *bl, enum sc_type type, uint16 skill_id); int (*detonator) (struct block_list *bl, va_list ap); bool (*check_camouflage) (struct block_list *bl, struct status_change_entry *sce); int (*magicdecoy) (struct map_session_data *sd, int nameid); int (*poisoningweapon) ( struct map_session_data *sd, int nameid); int (*select_menu) (struct map_session_data *sd,uint16 skill_id); int (*elementalanalysis) (struct map_session_data *sd, int n, uint16 skill_lv, unsigned short *item_list); int (*changematerial) (struct map_session_data *sd, int n, unsigned short *item_list); int (*get_elemental_type) (uint16 skill_id, uint16 skill_lv); void (*cooldown_save) (struct map_session_data * sd); int (*get_new_group_id) (void); bool (*check_shadowform) (struct block_list *bl, int64 damage, int hit); }; struct skill_interface *skill; void skill_defaults(void); #endif /* _MAP_SKILL_H_ */
Find out how much worse life is for children in Delhi's slums - Big News

The condition of children living in Delhi's slums is dire. A survey by Delhi Forces Neenv has brought this startling picture to light, producing several striking figures. The survey covered young children in 1200 families across 120 Delhi slums, and it also turned up surprising figures about women. The organization released the data on Saturday.

A look at the survey's findings.

Condition of young children: 47% of children are underweight for their age; 32% have not been fully immunized; 34% defecate in the open; 47% of people are unaware of Anganwadi services.

Condition of women: 21% of women go out of the home for work, and half of them leave their young children in the care of older siblings or neighbours; 50% of women do not receive postnatal health services.

20 children go missing in Delhi every day. Amrita Jain, advisor to Delhi Forces Neenv, says that if the government implemented its policies properly, things could be set right. Around 20 children go missing in Delhi every day. The organization also tried to find out where exactly the failure lies; at present there is a wide gap between its figures and the government's. Several women were also present and described their problems. The government now needs to act on these issues and do something that improves the lives of the children living in these slums.
AAI Recruitment 2018: Recruitment for 49 Airport Operations and Engineer posts

If you have been waiting for a good government job, we have good news for you. The Airports Authority of India (AAI) has recently invited applications for recruitment to several posts. There are 49 posts in total, covering Airport Operations, Engineer and CNS Personnel. Note that these posts are open only to retired officials. If you want to apply for these posts, you can do so until 20 July 2018. Before applying, be sure to check the official details of this vacancy.

Name of the post: Airport Operations, Engineer and CNS Personnel (49 posts).

If you meet the qualifications required for these posts, you can apply online until 20 July 2018. To apply, you must mail the application form to the given email address, and also send a hard copy to the address below.

Also read - Recruitment for 667 teacher posts here, apply like this
using System; using System.Collections.Generic; using System.Text; using System.IO; using System.Threading; using System.Diagnostics; using System.Runtime.InteropServices; using System.Management; using Microsoft.Win32; using System.ComponentModel; namespace Dfo.Controlling { /// <summary> /// Represents a state of DFO. /// </summary> public enum LaunchState { /// <summary> /// The game has not been started. /// </summary> None, /// <summary> /// The user is being logged in. /// </summary> Login, /// <summary> /// The game is in the process of starting. /// </summary> Launching, /// <summary> /// The main game window has been created. /// </summary> GameInProgress } /// <summary> /// This class allows launching of DFO with a variety of options. /// </summary> public class DfoLauncher : IDisposable { private object m_syncHandle = new object(); private Thread m_dfoMonitorThread = null; // Not needed? Maybe hold onto it in case we ever want it private AutoResetEvent m_monitorCancelEvent = new AutoResetEvent( false ); // tied to m_cancelMonitorThread private AutoResetEvent m_monitorFinishedEvent = new AutoResetEvent( false ); // I guess it would have been easier to do a Join on the thread, but oh well, this is done and works private AutoResetEvent m_launcherDoneEvent = new AutoResetEvent( false ); // Set when the launcher process that checks for patches terminates private Process m_launcherProcess = null; private bool m_disposed = false; private LaunchParams m_workingParams = null; /// <summary> /// Gets or sets m_workingParams in a thread-safe manner. Properties of WorkingParams should not be set. /// It should be treated as read-only. 
/// </summary> private LaunchParams WorkingParams { get { lock ( m_syncHandle ) { return m_workingParams; } } set { lock ( m_syncHandle ) { m_workingParams = value; } } } private IntPtr m_gameWindowHandle = IntPtr.Zero; private IntPtr GameWindowHandle { get { lock ( m_syncHandle ) { return m_gameWindowHandle; } } set { lock ( m_syncHandle ) { m_gameWindowHandle = value; } } } private LaunchParams m_params = new LaunchParams(); /// <summary> /// Gets or sets the parameters to use when launching the game. /// Changes will not affect an existing launch. /// </summary> /// <exception cref="ArgumentNullException">Attempted to set the value to null.</exception> public LaunchParams Params { get { return m_params; } set { value.ThrowIfNull( "Params" ); m_params = value; } } /// <summary> /// Raised when the State property changes. The event may be raised inside a method called by the caller or /// from another thread. Only the State property may be safely accessed in the event handler. /// No other properties or methods may be called without synchronizing access to this object. /// /// Only a caller-initiated launch can take the state out of <c>LaunchState.None</c>. /// </summary> public event EventHandler<EventArgs> LaunchStateChanged { add { lock ( m_LaunchStateChangedLock ) { m_LaunchStateChangedDelegate += value; } } remove { lock ( m_LaunchStateChangedLock ) { m_LaunchStateChangedDelegate -= value; } } } #region thread-safe event stuff private object m_LaunchStateChangedLock = new object(); private EventHandler<EventArgs> m_LaunchStateChangedDelegate; /// <summary> /// Raises the <c>LaunchStateChanged</c> event.
/// </summary> /// <param name="e">An <c>EventArgs</c> that contains the event data.</param> protected virtual void OnLaunchStateChanged( EventArgs e ) { EventHandler<EventArgs> currentDelegate; lock ( m_LaunchStateChangedLock ) { currentDelegate = m_LaunchStateChangedDelegate; } if ( currentDelegate != null ) { currentDelegate( this, e ); } } #endregion /// <summary> /// Raised when the window mode setting could not be used. If the <c>Cancel</c> property of the event args /// is set to true, the launch is cancelled. /// </summary> public event EventHandler<CancelErrorEventArgs> WindowModeFailed { add { lock ( m_WindowModeFailedLock ) { m_WindowModeFailedDelegate += value; } } remove { lock ( m_WindowModeFailedLock ) { m_WindowModeFailedDelegate -= value; } } } #region thread-safe event stuff private object m_WindowModeFailedLock = new object(); private EventHandler<CancelErrorEventArgs> m_WindowModeFailedDelegate; /// <summary> /// Raises the <c>WindowModeFailed</c> event. /// </summary> /// <param name="e">A <c>CancelErrorEventArgs</c> that contains the event data.</param> protected virtual void OnWindowModeFailed( CancelErrorEventArgs e ) { EventHandler<CancelErrorEventArgs> currentDelegate; lock ( m_WindowModeFailedLock ) { currentDelegate = m_WindowModeFailedDelegate; } if ( currentDelegate != null ) { currentDelegate( this, e ); } else { Logging.Log.ErrorFormat( "Error while applying the window mode setting: {0}", e.Error.Message ); Logging.Log.DebugFormat( "Exception detail: {0}", e.Error ); } } #endregion /// <summary> /// Raised when a file requested to be switched could not be switched or switched back.
/// </summary> public event EventHandler<ErrorEventArgs> FileSwitchFailed { add { lock ( m_FileSwitchFailedLock ) { m_FileSwitchFailedDelegate += value; } } remove { lock ( m_FileSwitchFailedLock ) { m_FileSwitchFailedDelegate -= value; } } } #region thread-safe event stuff private object m_FileSwitchFailedLock = new object(); private EventHandler<ErrorEventArgs> m_FileSwitchFailedDelegate; /// <summary> /// Raises the <c>FileSwitchFailed</c> event. /// </summary> /// <param name="e">An <c>ErrorEventArgs</c> that contains the event data.</param> protected virtual void OnFileSwitchFailed( ErrorEventArgs e ) { EventHandler<ErrorEventArgs> currentDelegate; lock ( m_FileSwitchFailedLock ) { currentDelegate = m_FileSwitchFailedDelegate; } if ( currentDelegate != null ) { currentDelegate( this, e ); } else { Logging.Log.ErrorFormat( "Error while switching files: {0}", e.Error.Message ); Logging.Log.DebugFormat( "Exception details: {0}", e.Error ); } } #endregion /// <summary> /// Raised when there is an error while trying to automatically close the popup at the end of the game. /// </summary> public event EventHandler<ErrorEventArgs> PopupKillFailed { add { lock ( m_PopupKillFailedLock ) { m_PopupKillFailedDelegate += value; } } remove { lock ( m_PopupKillFailedLock ) { m_PopupKillFailedDelegate -= value; } } } #region thread-safe event stuff private object m_PopupKillFailedLock = new object(); private EventHandler<ErrorEventArgs> m_PopupKillFailedDelegate; /// <summary> /// Raises the <c>PopupKillFailed</c> event.
/// </summary> /// <param name="e">An <c>ErrorEventArgs</c> that contains the event data.</param> protected virtual void OnPopupKillFailed( ErrorEventArgs e ) { EventHandler<ErrorEventArgs> currentDelegate; lock ( m_PopupKillFailedLock ) { currentDelegate = m_PopupKillFailedDelegate; } if ( currentDelegate != null ) { currentDelegate( this, e ); } else { Logging.Log.WarnFormat( "Error while trying to kill the ad popup: {0}", e.Error.Message ); Logging.Log.DebugFormat( "Exception details: {0}", e.Error ); } } #endregion // If state is None, the monitor thread either does not exist or is on its way out. private LaunchState m_state = LaunchState.None; /// <summary> /// Gets the current state of launching. This property is thread-safe. /// </summary> public LaunchState State { get { LaunchState state; lock ( m_syncHandle ) { state = m_state; } return state; } private set { bool stateChanged = false; LaunchState oldState = LaunchState.None; LaunchState newState = LaunchState.None; lock ( m_syncHandle ) { if ( m_state != value ) { oldState = m_state; newState = value; m_state = value; stateChanged = true; } } if ( stateChanged ) { // Do logging outside of the lock Logging.Log.DebugFormat( "State change from {0} to {1}.", oldState, newState ); OnLaunchStateChanged( EventArgs.Empty ); } } } /// <summary> /// Constructs a new <c>DfoLauncher</c> with default parameters. /// </summary> public DfoLauncher() { ; } /// <summary> /// Launches DFO using the parameters in the <c>Params</c> property.
/// </summary> /// /// <exception cref="System.InvalidOperationException">The game has already been launched.</exception> /// <exception cref="System.ArgumentNullException">Params.Username, Params.Password, or /// Params.DfoDir is null, or Params.SwitchSoundpacks is true and Params.SoundpackDir, /// Params.CustomSoundpackDir, or Params.TempSoundpackDir is null.</exception> /// <exception cref="System.ArgumentOutOfRangeException">Params.LoginTimeoutInMs was negative.</exception> /// <exception cref="System.Security.SecurityException">The caller does not have permission to connect to the DFO /// URI.</exception> /// <exception cref="System.Net.WebException">A timeout occurred.</exception> /// <exception cref="Dfo.Controlling.DfoAuthenticationException">Either the username/password is incorrect /// or a change was made to the way the authentication token is given to the browser, in which case /// this function will not work.</exception> /// <exception cref="Dfo.Controlling.DfoLaunchException">The game could not be launched.</exception> /// <exception cref="System.ObjectDisposedException">This object has been Disposed of.</exception> public void Launch() { Logging.Log.DebugFormat( "Starting game. Launch parameters:{0}{1}", Environment.NewLine, Params ); if ( State != LaunchState.None ) { throw new InvalidOperationException( "The game has already been launched" ); } try { StartDfo(); } catch ( Exception ) { Reset(); // TODO: Filter the exceptions to only the relevant ones? throw; } } /// <summary> /// Launches DFO.
/// </summary> /// <exception cref="System.ArgumentNullException">Params.Username or Params.Password was null.</exception> /// <exception cref="System.ArgumentOutOfRangeException">Params.LoginTimeoutInMs was negative.</exception> /// <exception cref="System.Security.SecurityException">The caller does not have permission to connect to the DFO /// URI.</exception> /// <exception cref="System.Net.WebException">A timeout occurred.</exception> /// <exception cref="Dfo.Controlling.DfoAuthenticationException">Either the username/password is incorrect /// or a change was made to the way the authentication token is given to the browser, in which case /// this function will not work.</exception> /// <exception cref="Dfo.Controlling.DfoLaunchException">The game could not be launched.</exception> /// <exception cref="System.ObjectDisposedException">This object has been Disposed of.</exception> private void StartDfo() { if ( m_disposed ) { throw new ObjectDisposedException( "DfoLauncher" ); } if ( Params.LoginTimeoutInMs < 0 ) { throw new ArgumentOutOfRangeException( "LoginTimeoutInMs cannot be negative." ); } Params.Username.ThrowIfNull( "Params.Username" ); Params.Password.ThrowIfNull( "Params.Password" ); Params.FilesToSwitch.ThrowIfNull( "Params.FilesToSwitch" ); bool ok = EnforceWindowedSetting(); if ( !ok ) { Logging.Log.DebugFormat( "Error while applying window mode setting; aborting launch." ); return; } State = LaunchState.Login; // We are now logging in try { string dfoArg = DfoLogin.GetDfoArg( Params.Username, Params.Password, Params.LoginTimeoutInMs ); // Log in //string dfoArg = "abc"; // DEBUG State = LaunchState.Launching; // If we reach this line, we successfully logged in. Now we're launching. 
m_monitorCancelEvent.Reset(); m_monitorFinishedEvent.Reset(); m_launcherDoneEvent.Reset(); // Make sure to reset this before starting the launcher process // Start the launcher process lock ( m_syncHandle ) { m_launcherProcess = new Process(); } m_launcherProcess.StartInfo.FileName = Params.DfoLauncherExe; m_launcherProcess.StartInfo.Arguments = dfoArg; // This argument contains the authentication token we got from logging in m_launcherProcess.EnableRaisingEvents = true; m_launcherProcess.Exited += LauncherProcessExitedHandler; // Use async notification instead of synchronous waiting so we can cancel while the launcher process is going Logging.Log.DebugFormat( "Starting game process '{0}' with arguments '{1}'", m_launcherProcess.StartInfo.FileName, m_launcherProcess.StartInfo.Arguments.HideSensitiveData( SensitiveData.LoginCookies ) ); m_launcherProcess.Start(); lock ( m_syncHandle ) { // Start the thread that monitors the state of DFO m_dfoMonitorThread = new Thread( BackgroundThreadEntryPoint ); m_dfoMonitorThread.IsBackground = true; m_dfoMonitorThread.Name = "DFO monitor"; Logging.Log.DebugFormat( "Starting monitor thread." ); // Give it a copy of the launch params so the caller can change the Params property while // the game is running with no effects for the next time they launch WorkingParams = Params.Clone(); m_dfoMonitorThread.Start(); } } catch ( System.Security.SecurityException ex ) { throw new System.Security.SecurityException( string.Format( "This program does not have the permissions needed to log in to the game. {0}", ex.Message ), ex ); } catch ( System.Net.WebException ex ) { throw new System.Net.WebException( string.Format( "There was a problem connecting. Check your Internet connection. 
Details: {0}", ex.Message ), ex ); } catch ( DfoAuthenticationException ex ) { throw new DfoAuthenticationException( string.Format( "Error while authenticating: {0}", ex.Message ), ex ); } catch ( System.ComponentModel.Win32Exception ex ) { throw new DfoLaunchException( string.Format( "Error while starting DFO using {0}: {1}", Params.DfoLauncherExe, ex.Message ), ex ); } } /// <summary> /// /// </summary> /// <returns>True if everything went ok or if there was an error and the caller's event handler did not /// tell us to stop.</returns> private bool EnforceWindowedSetting() { Logging.Log.DebugFormat( "Applying window mode setting." ); string magicWindowModeDirectoryName = "zo3mo4"; string magicWindowModeDirectoryPath = Path.Combine( Params.GameDir, magicWindowModeDirectoryName ); Exception error = null; if ( Params.LaunchInWindowed ) { Logging.Log.DebugFormat( "Forcing window mode by creating directory {0}.", magicWindowModeDirectoryPath ); try { Directory.CreateDirectory( magicWindowModeDirectoryPath ); Logging.Log.DebugFormat( "{0} created.", magicWindowModeDirectoryPath ); } catch ( Exception ex ) { if ( ex is System.IO.IOException || ex is System.UnauthorizedAccessException || ex is System.ArgumentException || ex is System.IO.PathTooLongException || ex is System.IO.DirectoryNotFoundException || ex is System.NotSupportedException ) { error = new IOException( string.Format( "Error while trying to create directory {0}: {1}", magicWindowModeDirectoryPath, ex.Message ), ex ); } else { throw; } } } else { Logging.Log.DebugFormat( "Forcing full-screen mode by deleting directory {0}.", magicWindowModeDirectoryPath ); try { Directory.Delete( magicWindowModeDirectoryPath, true ); Logging.Log.DebugFormat( "{0} removed.", magicWindowModeDirectoryPath ); } catch ( DirectoryNotFoundException ) { // It's ok if the directory doesn't exist Logging.Log.DebugFormat( "{0} does not exist.", magicWindowModeDirectoryPath ); } catch ( Exception ex ) { if ( ex is System.IO.IOException 
|| ex is System.UnauthorizedAccessException || ex is System.ArgumentException || ex is System.IO.PathTooLongException ) { error = new IOException( string.Format( "Error while trying to remove directory {0}: {1}", magicWindowModeDirectoryPath, ex.Message ), ex ); } else { throw; } } } if ( error != null ) { CancelErrorEventArgs e = new CancelErrorEventArgs( error ); OnWindowModeFailed( e ); if ( e.Cancel ) { return false; // false = not ok } else { return true; // true = ok } } else { return true; } } /// <summary> /// Resets to an unattached state. This function may block if it needs to do any cleanup. /// </summary> public void Reset() { Logging.Log.DebugFormat( "Resetting to an unattached state." ); if ( m_disposed ) { Logging.Log.DebugFormat( "Already disposed so already unattached." ); return; } // Send cancel signal to monitor thread if it's running and wait for it to finish terminating if ( MonitorThreadIsRunning() ) { Logging.Log.DebugFormat( "Monitor thread is running, sending it a reset signal and waiting for it to finish." ); m_monitorCancelEvent.Set(); m_monitorFinishedEvent.WaitOne(); // m_dfoMonitorThread got set to null as it was terminating itself // m_launcherProcess got disposed of and set to null as monitor thread was exiting if it hadn't done it already Logging.Log.DebugFormat( "Monitor thread reported completion." ); } else { Logging.Log.DebugFormat( "Monitor thread not running. Setting state to None." ); // If an exception happened while launching but before the monitor thread is started, // we need to set the state back to None. Normally the monitor thread does that as it's exiting. State = LaunchState.None; WorkingParams = null; } lock ( m_syncHandle ) { // Need to set m_launcherProcess to null if there was an exception while launching. if ( m_launcherProcess != null ) { Logging.Log.DebugFormat( "Disposing of the launcher process." 
); m_launcherProcess.Dispose(); m_launcherProcess = null; } } Logging.Log.DebugFormat( "Reset complete." ); } private bool MonitorThreadIsRunning() { lock ( m_syncHandle ) { if ( m_dfoMonitorThread != null ) { return true; } else { return false; } } } // Was going to implement this, but it doesn't seem like it'd be very useful //public void AttachToExistingDfo() //{ //} /// <summary> /// Entry point for the monitor thread. /// </summary> private void BackgroundThreadEntryPoint() { Logging.Log.DebugFormat( "Monitor thread started." ); LaunchParams copiedParams = WorkingParams; // Once this thread is created, it has full control over the State property. No other thread can // change it until this thread sets it back to None. // This thread can be canceled by calling the Reset() method. Reset() will put m_monitorCancelEvent // in the Set state. bool canceled = false; Logging.Log.DebugFormat( "Waiting for the game launcher process to stop or a cancel signal." ); // Wait for launcher process to end or a cancel notice // We have a process handle to the launcher process and if we get an async notification that it completed, // m_launcherDoneEvent is set. Better than polling. :) int setHandleIndex = WaitHandle.WaitAny( new WaitHandle[] { m_launcherDoneEvent, m_monitorCancelEvent } ); lock ( m_syncHandle ) { m_launcherProcess.Dispose(); m_launcherProcess = null; } if ( setHandleIndex == 1 ) // canceled { Logging.Log.DebugFormat( "Got cancel signal." ); canceled = true; } else { Logging.Log.DebugFormat( "Launcher process ended." ); } List<SwitchedFile> switchedFiles = new List<SwitchedFile>(); if ( !canceled ) { // Switch any files we need to foreach ( FileSwitcher fileToSwitch in copiedParams.FilesToSwitch ) { switchedFiles.Add( SwitchFile( fileToSwitch ) ); } } IntPtr dfoMainWindowHandle = IntPtr.Zero; if ( !canceled ) { Logging.Log.DebugFormat( "Waiting for DFO window to be created AND be visible, the DFO process to not exist, or a cancel notice." 
); // Wait for DFO window to be created AND be visible, the DFO process to not exist, or a cancel notice. Pair<IntPtr, bool> pollResults = PollUntilCanceled<IntPtr>( copiedParams.GameWindowCreatedPollingIntervalInMs, () => { IntPtr dfoWindowHandle = GetGameWindowHandle( copiedParams.GameToLaunch ); if ( dfoWindowHandle != IntPtr.Zero ) { if ( DfoWindowIsOpen( dfoWindowHandle ) ) { return new Pair<IntPtr, bool>( dfoWindowHandle, true ); // Window exists and is visible, done polling } else { return new Pair<IntPtr, bool>( dfoWindowHandle, false ); // Window exists but is not visible yet, keep polling } } else { // Check if the DFO process is running. Under normal conditions, it certainly would be // by now because it gets started by the launcher process and the launcher process has ended. // If it is not, that means the launcher ended unsuccessfully or the DFO process // ended very quickly. In either case, our job is done here. Process[] dfoProcesses = Process.GetProcessesByName( Path.GetFileNameWithoutExtension( copiedParams.DfoExe ) ); if ( dfoProcesses.Length > 0 ) { return new Pair<IntPtr, bool>( dfoWindowHandle, false ); // Window does not exist, keep polling } else { return new Pair<IntPtr, bool>( dfoWindowHandle, true ); // DFO process doesn't exist anymore, treat this like a cancel request } } } ); if ( pollResults.Second ) { Logging.Log.DebugFormat( "Received a cancel signal." ); } else { if ( pollResults.First != IntPtr.Zero ) { Logging.Log.DebugFormat( "Game window created and visible." ); } else { Logging.Log.DebugFormat( "Game process does not exist." ); } } canceled = pollResults.Second; if ( !canceled ) { if ( pollResults.First == IntPtr.Zero ) { canceled = true; // Treat a premature closing of the DFO process the same as a cancel request. } } GameWindowHandle = pollResults.First; } if ( !canceled ) { // Game is up. 
State = LaunchState.GameInProgress; if ( copiedParams.LaunchInWindowed && ( copiedParams.WindowHeight.HasValue || copiedParams.WindowWidth.HasValue ) ) { canceled = ResizeDfoWindow( copiedParams.WindowWidth, copiedParams.WindowHeight ); } } if ( !canceled ) { Logging.Log.DebugFormat( "Waiting for the main game window to not exist or to not be visible, or for a cancel signal." ); // Wait for DFO game window to be closed or a cancel notice // Note that there is a distinction between a window existing and a window being visible. // When the popup is displayed, the DFO window still "exists", but it is hidden Pair<IntPtr, bool> pollResults = PollUntilCanceled<IntPtr>( copiedParams.GameDonePollingIntervalInMs, () => { IntPtr dfoWindowHandle = GetGameWindowHandle( copiedParams.GameToLaunch ); if ( dfoWindowHandle == IntPtr.Zero ) { return new Pair<IntPtr, bool>( dfoWindowHandle, true ); // Window does not exist, done polling } else { if ( !DfoWindowIsOpen( dfoWindowHandle ) ) { return new Pair<IntPtr, bool>( dfoWindowHandle, true ); // Window "exists" but is not visible, done polling } else { return new Pair<IntPtr, bool>( dfoWindowHandle, false ); // Window still open, keep polling } } } ); canceled = pollResults.Second; if ( !canceled ) { if ( pollResults.First != IntPtr.Zero ) { Logging.Log.DebugFormat( "Game window exists but is not visible." ); } else { Logging.Log.DebugFormat( "Game window does not exist." ); } } } GameWindowHandle = IntPtr.Zero; if ( !canceled ) { if ( copiedParams.ClosePopup ) { // Kill the DFO process to kill the popup. Logging.Log.DebugFormat( "Killing the game process to kill the popup." ); // A normal Process.Kill gets a Win32Exception with "Access is denied", possibly because // of HackShield. // This WMI stuff works, although I'm not entirely sure why. 
// // http://stackoverflow.com/questions/2069157/what-is-the-difference-between-these-two-methods-of-killing-a-process // - "WMI calls are not performed within the security context of your process. They are // handled in another process (I'm guessing the Winmgmt service). This service runs under // the SYSTEM account, and HackShield may be allowing the termination continue due to this." // // Thanks to Tomato (author of DFOAssist) for his help with this! try { ConnectionOptions options = new ConnectionOptions(); options.Impersonation = ImpersonationLevel.Impersonate; ManagementScope scope = new ManagementScope( @"\\.\root\cimv2", options ); scope.Connect(); ObjectQuery dfoProcessQuery = new ObjectQuery( string.Format( "Select * from Win32_Process Where Name = '{0}'", Path.GetFileName( copiedParams.DfoExe ) ) ); using ( ManagementObjectSearcher dfoProcessSearcher = new ManagementObjectSearcher( scope, dfoProcessQuery ) ) using ( ManagementObjectCollection dfoProcessCollection = dfoProcessSearcher.Get() ) { foreach ( ManagementObject dfoProcess in dfoProcessCollection ) { try { using ( dfoProcess ) { Logging.Log.DebugFormat( "Killing a game process." ); object ret = dfoProcess.InvokeMethod( "Terminate", new object[] { } ); } } catch ( ManagementException ex ) { OnPopupKillFailed( new ErrorEventArgs( new ManagementException( string.Format( "Could not kill {0}: {1}", Path.GetFileName( copiedParams.DfoExe ), ex.Message ), ex ) ) ); } } } } catch ( ManagementException ex ) { OnPopupKillFailed( new ErrorEventArgs( new ManagementException( string.Format( "Error while doing WMI stuff: {0}", ex.Message ), ex ) ) ); } Logging.Log.DebugFormat( "Done killing popup." ); } } // Done, clean up. // Switch back any switched files. 
if ( switchedFiles.Count > 0 ) { // Wait for DFO process to end, otherwise the OS won't let us move the files that are used by the game string gameProcessName = Path.GetFileNameWithoutExtension( copiedParams.DfoExe ); Logging.Log.DebugFormat( "Waiting for there to be no processes called '{0}' so that switched files can be switched back.", gameProcessName ); Process[] dfoProcesses; do { dfoProcesses = Process.GetProcessesByName( gameProcessName ); if ( dfoProcesses.Length > 0 ) { Thread.Sleep( copiedParams.GameDeadPollingIntervalInMs ); } } while ( dfoProcesses.Length > 0 ); Logging.Log.DebugFormat( "No more processes called '{0}'; switching files back now.", gameProcessName ); foreach ( SwitchedFile switchedFile in switchedFiles ) { SwitchBackFile( switchedFile ); switchedFile.Dispose(); } } lock ( m_syncHandle ) { m_dfoMonitorThread = null; } State = LaunchState.None; WorkingParams = null; m_monitorFinishedEvent.Set(); Logging.Log.DebugFormat( "Monitor thread exiting." ); } /// <summary> /// For use by the monitor thread only. Poll for some value until the value is acceptable or we are /// canceled. 
/// </summary> /// <typeparam name="TReturn"></typeparam> /// <param name="pollingIntervalInMs"></param> /// <param name="pollingFunction"></param> /// <returns>A Pair&lt;<typeparamref name="TReturn"/>, bool&gt; containing the acceptable polled value or /// the default value of the type if the thread is canceled in the first value of the pair and a boolean /// that is true if the thread is canceled, false if not.</returns> private Pair<TReturn, bool> PollUntilCanceled<TReturn>( int pollingIntervalInMs, Func<Pair<TReturn, bool>> pollingFunction ) { bool canceled = m_monitorCancelEvent.WaitOne( 0 ); TReturn polledValue = default( TReturn ); bool polledValueAcceptable = false; while ( !polledValueAcceptable && !canceled ) { Pair<TReturn, bool> poll = pollingFunction(); // first = polled value, second = whether value is acceptable polledValue = poll.First; polledValueAcceptable = poll.Second; if ( !polledValueAcceptable ) { // Sleep for a bit or until canceled canceled = m_monitorCancelEvent.WaitOne( pollingIntervalInMs ); } } if ( canceled ) { Logging.Log.DebugFormat( "Got a cancel signal." ); return new Pair<TReturn, bool>( default( TReturn ), true ); } else { return new Pair<TReturn, bool>( polledValue, false ); } } /// <summary> /// Gets a window handle to the window of the given game. /// </summary> /// <param name="game"></param> /// <returns></returns> private IntPtr GetGameWindowHandle( Game game ) { switch ( game ) { case Game.DFO: return Interop.FindWindow( "DFO", null ); //return Interop.FindWindow( null, "DFO" ); // DEBUG default: throw new Exception( "Oops, missed a game type." ); } } private bool DfoWindowIsOpen( IntPtr dfoWindowHandle ) { return Interop.IsWindowVisible( dfoWindowHandle ); } /// <summary> /// Switches the given FileSwitcher. Returns the resulting SwitchedFile on success, calls /// OnFileSwitchFailed() and returns null on failure. 
/// </summary> /// <param name="fileToSwitch"></param> /// <returns></returns> private SwitchedFile SwitchFile( FileSwitcher fileToSwitch ) { if ( fileToSwitch == null ) { return null; } try { return fileToSwitch.Switch(); } catch ( IOException ex ) { OnFileSwitchFailed( new ErrorEventArgs( ex ) ); return null; } } private void SwitchBackFile( SwitchedFile fileToSwitchBack ) { if ( fileToSwitchBack == null ) { return; } try { fileToSwitchBack.SwitchBack(); } catch ( IOException ex ) { OnFileSwitchFailed( new ErrorEventArgs( ex ) ); } } private void LauncherProcessExitedHandler( object sender, EventArgs e ) { bool launcherDoneSet = false; lock ( m_syncHandle ) { if ( m_launcherProcess == sender ) { m_launcherDoneEvent.Set(); launcherDoneSet = true; } } if ( launcherDoneSet ) { Logging.Log.DebugFormat( "Launcher process exited, launcher done event set." ); } else { Logging.Log.DebugFormat( "A launcher process exited but does not match the saved launcher process." ); } } /// <summary> /// Helper function to reduce size of main monitor thread method. At least one of the parameters must /// be non-null. /// </summary> /// <param name="possibleWidth"></param> /// <param name="possibleHeight"></param> /// <returns>True if we received a cancel signal, false if not.</returns> private bool ResizeDfoWindow( int? possibleWidth, int? 
possibleHeight ) { int width = 0; int height = 0; if ( !possibleHeight.HasValue ) { width = possibleWidth.Value; height = GetHeightFromWidth( width ); } else if ( !possibleWidth.HasValue ) { height = possibleHeight.Value; width = GetWidthFromHeight( height ); } else { width = possibleWidth.Value; height = possibleHeight.Value; } Logging.Log.DebugFormat( "Resizing the game window on game startup to {0}x{1}.", width, height ); if ( width == DefaultGameWindowWidth && height == DefaultGameWindowHeight ) { Logging.Log.DebugFormat( "Skipping because the game starts as {0}x{1}.", DefaultGameWindowWidth, DefaultGameWindowHeight ); return false; } Logging.Log.DebugFormat( "Waiting for game window to be {0}x{1}.", DefaultGameWindowWidth, DefaultGameWindowHeight ); // Wait until the game window is 640x480 to resize it. The game starts at some other size, then // later resizes to 640x480. If we do a resize and it goes through before the game resizes itself // to 640x480, our resize gets stomped on. 
Pair<Interop.RECT, bool> pollResults = PollUntilCanceled<Interop.RECT>( 1000, () => { Interop.RECT gameWindowRect; bool success = Interop.GetWindowRect( GameWindowHandle, out gameWindowRect ); if ( !success ) { Win32Exception error = new Win32Exception(); // This is just a helper method, no need to throw an exception because we can // "handle" it here Logging.Log.WarnFormat( "Could not get game window size: {0}", error.Message ); return new Pair<Interop.RECT, bool>( new Interop.RECT(), true ); } else { int currentWidth = gameWindowRect.Right - gameWindowRect.Left; int currentHeight = gameWindowRect.Bottom - gameWindowRect.Top; if ( currentWidth == DefaultGameWindowWidth && currentHeight == DefaultGameWindowHeight ) { return new Pair<Interop.RECT, bool>( gameWindowRect, true ); } else { return new Pair<Interop.RECT, bool>( gameWindowRect, false ); } } } ); if ( pollResults.Second ) { return true; // canceled } int windowCurrentWidth = pollResults.First.Right - pollResults.First.Left; int windowCurrentHeight = pollResults.First.Bottom - pollResults.First.Top; if ( windowCurrentWidth == 0 && windowCurrentHeight == 0 ) { return false; // Getting the window size failed; window probably doesn't exist anymore. } Logging.Log.DebugFormat( "Game window is now {0}x{1}, doing the resize now.", windowCurrentWidth, windowCurrentHeight ); try { ResizeDfoWindow( width, height ); } catch ( Win32Exception ex ) { Logging.Log.ErrorFormat( "Could not resize the game window to {0}x{1}: {2}", width, height, ex.Message ); Logging.Log.Debug( "Exception details:", ex ); } return false; // not canceled } /// <summary> /// Resizes the game window. 
/// </summary> /// <param name="width">Width, in pixels, to resize to.</param> /// <param name="height">Height, in pixels, to resize to.</param> /// <exception cref="System.ArgumentOutOfRangeException"><paramref name="width"/> or <paramref name="height"/> /// is not a positive number.</exception> /// <exception cref="System.InvalidOperationException">This object is not currently attached to a /// game instance (State is not GameInProgress) or the game window does not exist.</exception> /// <exception cref="System.ComponentModel.Win32Exception">The resize operation failed.</exception> public void ResizeDfoWindow( int width, int height ) { Logging.Log.DebugFormat( "Resizing game window to {0}x{1}.", width, height ); if ( width <= 0 ) { throw new ArgumentOutOfRangeException( "width", "Window width must be positive." ); } if ( height <= 0 ) { throw new ArgumentOutOfRangeException( "height", "Window height must be positive." ); } IntPtr gameWindowHandle = GameWindowHandle; if ( gameWindowHandle == IntPtr.Zero ) { if ( WorkingParams == null ) { throw new InvalidOperationException( "Not attached to a game instance." ); } else { throw new InvalidOperationException( "The game window does not exist." ); } } ResizeWindow( gameWindowHandle, width, height ); Logging.Log.DebugFormat( "Game window resized." ); } /// <summary> /// Resizes the window with the given window handle to the given size and centers it on the screen /// that the window is mostly on. 
/// </summary> /// <param name="windowHandle"></param> /// <param name="x"></param> /// <param name="y"></param> /// <exception cref="System.ComponentModel.Win32Exception">The resize operation failed.</exception> private void ResizeWindow( IntPtr windowHandle, int width, int height ) { System.Drawing.Rectangle centeredPosition = GetCenteredPosition( windowHandle, width, height ); bool success = Interop.SetWindowPos( windowHandle, IntPtr.Zero, centeredPosition.Left, centeredPosition.Top, width, height, Interop.SWP_ASYNCWINDOWPOS | Interop.SWP_NOACTIVATE | Interop.SWP_NOOWNERZORDER | Interop.SWP_NOZORDER ); if ( !success ) { throw new Win32Exception(); // Magically gets the Windows error message from the SetWindowPos call } } private static System.Drawing.Rectangle GetCenteredPosition( IntPtr windowHandle, int futureWidth, int futureHeight ) { System.Windows.Forms.Screen windowScreen = System.Windows.Forms.Screen.FromHandle( windowHandle ); if ( windowScreen.BitsPerPixel == 0 ) // means getting the screen from the handle failed { throw new Win32Exception( "Could not get the screen that the window is on." ); } // Cap future width and height at the size of the screen, otherwise centering it could put the // window off-screen futureWidth = Math.Min( futureWidth, windowScreen.Bounds.Width ); futureHeight = Math.Min( futureHeight, windowScreen.Bounds.Height ); int x = ( 2 * windowScreen.Bounds.Left + windowScreen.Bounds.Width - futureWidth ) / 2; int y = ( 2 * windowScreen.Bounds.Top + windowScreen.Bounds.Height - futureHeight ) / 2; return new System.Drawing.Rectangle( x, y, futureWidth, futureHeight ); } /// <summary> /// Frees unmanaged resources. This function may block. /// </summary> public void Dispose() { Logging.Log.DebugFormat( "Disposing a DfoLauncher object." ); if ( !m_disposed ) { Reset(); Logging.Log.DebugFormat( "Disposing of synchronization objects." 
); m_monitorCancelEvent.Close(); m_monitorFinishedEvent.Close(); m_launcherDoneEvent.Close(); m_disposed = true; Logging.Log.DebugFormat( "DfoLauncher Dispose complete." ); } else { Logging.Log.DebugFormat( "Already disposed." ); } } // XXX: This is only for DFO public static int DefaultGameWindowWidth { get { return 640; } } public static int DefaultGameWindowHeight { get { return 480; } } public static int GetHeightFromWidth( int width ) { int height = (int)Math.Round( width * ( (double)DefaultGameWindowHeight / DefaultGameWindowWidth ) ); return Math.Max( height, 1 ); } public static int GetWidthFromHeight( int height ) { int width = (int)Math.Round( height * ( (double)DefaultGameWindowWidth / DefaultGameWindowHeight ) ); return Math.Max( width, 1 ); } /// <summary> /// Tries to figure out where the game directory for a given game is. The returned path is /// guaranteed to be a valid path string (no invalid characters). /// </summary> /// <exception cref="System.IO.IOException">The game directory could not be detected.</exception> public static string AutoDetectGameDir( Game game ) { object gameRoot = null; string keyName; string valueName; if ( game == Game.DFO ) { keyName = @"HKEY_LOCAL_MACHINE\SOFTWARE\Nexon\DFO"; valueName = "RootPath"; } else { throw new Exception( "Oops, missed a game." ); } Logging.Log.DebugFormat( "Detecting {0} directory by getting registry value '{1}' in registry key '{2}'", game, valueName, keyName ); try { gameRoot = Registry.GetValue( keyName, valueName, null ); } catch ( System.Security.SecurityException ex ) { ThrowAutoDetectException( keyName, valueName, ex ); } catch ( IOException ex ) { ThrowAutoDetectException( keyName, valueName, ex ); } if ( gameRoot == null ) { ThrowAutoDetectException( keyName, valueName, "The registry value does not exist." 
); } string gameRootDir = gameRoot.ToString(); if ( Utilities.PathIsValid( gameRootDir ) ) { Logging.Log.DebugFormat( "Game directory is {0}", gameRootDir ); return gameRootDir; } else { throw new IOException( string.Format( "Registry value {0} in {1} is not a valid path.", valueName, keyName ) ); } } private static void ThrowAutoDetectException( string keyname, string valueName, Exception ex ) { throw new IOException( string.Format( "Could not read registry value {0} in {1}. {2}", valueName, keyname, ex.Message ), ex ); } private static void ThrowAutoDetectException( string keyname, string valueName, string message ) { throw new IOException( string.Format( "Could not read registry value {0} in {1}. {2}", valueName, keyname, message ) ); } } } /* Copyright 2010 Greg Najda Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */
code
4 youths forcibly took friend's sister to Noida, raped her by turns for 2 days; this is how she got home. In Etawah, UP, a teenage girl from a village in the Bakewar police station area has accused her brother's friends of forcibly taking her to Noida and gang-raping her. On Sunday the girl, accompanied by her brother, went to the police station and recounted her ordeal. Police are investigating the matter; for now no case has been registered. The class 10 student from a village in the Bakewar police station area said that four days earlier she was on her way to Ureng to get medicine for her mother. On the way, four people, including her brother's friend, were riding in a Bolero. They forced her into the vehicle and took her to Noida, where they locked her in a room and raped her by turns for two days. They then dropped her off near Bharthana. The girl reached home and told her family what had happened. She said that her brother works a private job in Noida, and the youth who raped her had earlier lived and worked with him; about 20 days ago he had begun living separately. The brother's friend and the others are residents of a village in the Bharthana area, which is why they also frequented the girl's village. Acting on the written complaint of the girl's brother, the police are investigating the matter. Station in-charge Sameer Kumar Singh said the matter is being investigated and action will be taken after the inquiry. Previous article: Expressway accident: car overturns after hitting divider, 1 dead. Next article: Congress CM to be dismissed from his post in 15 minutes. On its second day the Swaraj Yatra reached 14 villages. Decomposed body of a youth found in the toilet of the guest house on Girihinda hill... A glance at the day's news from Gopalganj. Rudauli block pramukh Shilpi Singh levels allegations against the police. Hours of high-voltage drama at the wedding pavilion; villagers... the wedding party...
hindi
/* * Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 only, as * published by the Free Software Foundation. Oracle designates this * particular file as subject to the "Classpath" exception as provided * by Oracle in the LICENSE file that accompanied this code. * * This code is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * version 2 for more details (a copy is included in the LICENSE file that * accompanied this code). * * You should have received a copy of the GNU General Public License version * 2 along with this work; if not, write to the Free Software Foundation, * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. * * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA * or visit www.oracle.com if you need additional information or have any * questions. */ package com.outerthoughts.javadoc.iframed.formats.html; /** * Enum representing various section names of generated API documentation. * * <p><b>This is NOT part of any supported API. * If you write code that depends on this, you do so at your own risk. 
* This code and its internal interfaces are subject to change or * deletion without notice.</b> * * @author Bhavesh Patel */ public enum SectionName { ANNOTATION_TYPE_ELEMENT_DETAIL("annotation.type.element.detail"), ANNOTATION_TYPE_FIELD_DETAIL("annotation.type.field.detail"), ANNOTATION_TYPE_FIELD_SUMMARY("annotation.type.field.summary"), ANNOTATION_TYPE_OPTIONAL_ELEMENT_SUMMARY("annotation.type.optional.element.summary"), ANNOTATION_TYPE_REQUIRED_ELEMENT_SUMMARY("annotation.type.required.element.summary"), CONSTRUCTOR_DETAIL("constructor.detail"), CONSTRUCTOR_SUMMARY("constructor.summary"), ENUM_CONSTANT_DETAIL("enum.constant.detail"), ENUM_CONSTANTS_INHERITANCE("enum.constants.inherited.from.class."), ENUM_CONSTANT_SUMMARY("enum.constant.summary"), FIELD_DETAIL("field.detail"), FIELDS_INHERITANCE("fields.inherited.from.class."), FIELD_SUMMARY("field.summary"), METHOD_DETAIL("method.detail"), METHODS_INHERITANCE("methods.inherited.from.class."), METHOD_SUMMARY("method.summary"), NAVBAR_BOTTOM("navbar.bottom"), NAVBAR_BOTTOM_FIRSTROW("navbar.bottom.firstrow"), NAVBAR_TOP("navbar.top"), NAVBAR_TOP_FIRSTROW("navbar.top.firstrow"), NESTED_CLASSES_INHERITANCE("nested.classes.inherited.from.class."), NESTED_CLASS_SUMMARY("nested.class.summary"), OVERVIEW_DESCRIPTION("overview.description"), PACKAGE_DESCRIPTION("package.description"), PROPERTY_DETAIL("property.detail"), PROPERTIES_INHERITANCE("properties.inherited.from.class."), PROPERTY_SUMMARY("property.summary"), SKIP_NAVBAR_BOTTOM("skip.navbar.bottom"), SKIP_NAVBAR_TOP("skip.navbar.top"), UNNAMED_PACKAGE_ANCHOR("unnamed.package"); private final String value; SectionName(String sName) { this.value = sName; } public String getName() { return this.value; } }
code
This entry was posted on Tuesday, November 8th, 2016 at 2:15 pm and is filed under Art, Campaigns, Elections, History, Voting. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site. 2 Responses to What can we learn from election-day art? What a great post! We are all still in this together. I suppose elections have always been divisive but this one has been uncivil in a mean way. I can’t see Trump conceding like a gentleman if she wins and his followers accepting her as president and we all go back to being friends. Growing up in Duval County in South Texas I remember my parents always voting and supporting a local county party called the Freedom Party.
english
Logan now marks another ending for the Marvel franchise. Although he didn't know it until recently, star Patrick Stewart has revealed that Logan is his last X-Men movie. Speaking alongside costar Hugh Jackman and director James Mangold, Stewart told a Sirius XM Town Hall crowd (via EW) that his time as Professor Charles Xavier has come to an end. The news may be a surprise to some, as Stewart had hinted as recently as last week that he wasn't necessarily done with the role. In fact, it wasn't until a recent screening of Logan's final cut that Stewart decided on his franchise exit. RELATED: Watch the Logan Trailer! Set in the near future, the film follows a weary Logan (Jackman) as he cares for an ailing Professor X (Stewart) in a hideout on the Mexican border. But Logan's attempts to hide from the world and his legacy are up-ended when a young mutant arrives, pursued by dark forces. Also starring Dafne Keen, Eriq La Salle, Stephen Merchant, Elise Neal and Elizabeth Rodriguez, Logan is coming to theaters on March 3, 2017. Directed by James Mangold (Walk the Line, The Wolverine) and scripted by David James Kelly, the 20th Century Fox film will mark the ninth (and said to be final) time that Jackman has played the Marvel Comics character on the big screen. What do you think of Logan being the last X-Men movie for Patrick Stewart?
Among the English-language newspapers published daily in Assam are The Assam Tribune, The Sentinel, The Telegraph, The Times of India, The North East Times, the Eastern Chronicle and The Hills Times.
All eyes on today's dinner diplomacy — Swaraj Khabar. In politics, a shared meal is a game of its own, and a lot is wrapped up in it: the pleasure of eating together, and the diplomacy of weakening the person sitting opposite. Since politics rests on deception and falsehood, the actor-politicians who run political outfits keep arranging such feasts. It is also true that, politics having no fixed character, no leader needs a certificate of conduct: the country assumes all politicians are of the same character, and the only distinction left is who serves falsehood to the public most elegantly. With barely a year left before the Lok Sabha elections, the opposition parties are meeting one another and closing ranks with an eye on the future. That consolidation is what will be on display at the dinner hosted by former Congress president Sonia Gandhi. The opposition is working out a strategy of unity to halt the Bharatiya Janata Party's victory chariot; if this consolidation in the name of a shared meal suits everyone, a broad organisation will be formed and the game of stopping the BJP will begin. The BJP is watching this closely: its emissaries have reached all the parties and are reading each one's inclinations and reporting back to the high command. Strategy for the 2019 Lok Sabha elections may be discussed at this dinner, which leaders of about 17 opposition parties are expected to attend. Besides these parties, several other faces are being watched, among them Babulal Marandi and Jitan Ram Manjhi; both may join Sonia's dinner on Tuesday evening. The likely participants include the Congress, SP, BSP, TMC, CPM, CPI, DMK, JMM, Hindustani Awam Morcha, RJD, JD(S), Kerala Congress, Indian Union Muslim League, RSP, NCP, National Conference, AIUDF and RLD, and several parties may send representatives instead. Doubt persists over whether NCP chief Sharad Pawar will attend. West Bengal Chief Minister Mamata Banerjee, for her part, will not attend, though her party will certainly send representatives. In fact, Sharad Pawar has himself hosted a dinner for the opposition parties, which is why his attendance remains uncertain; Pawar has sent the Congress an invitation to his own dinner and may extend one to the TRS as well. Clearly, through this dinner diplomacy Sonia wants to hit two targets with one arrow: by calling the opposition leaders to dinner, she wants to establish that the leadership of any coalition built as an alternative to Modi will rest with the Congress. It remains to be seen what this dinner politics prepares for the politics ahead — what views the parties form, what each party says about the dinner, and what it all means.
Soft, melt-in-the-mouth malai koftas stuffed with cashews and raisins, and served in a white makhani gravy of cashews, curd and cream. Even if you are not hungry, you will want to eat them. Whenever there is a special occasion at home, make this special Mughlai malai kofta curry; you and everyone else will love it.

For the koftas:
- Paneer – 1/2 cup, grated (100 g)
- Mawa (khoya) – 1/2 cup, grated (100 g)
- Corn flour – 1/3 cup (40 g)
- Cashews – 8
- Raisins – 1 tbsp
- Green cardamom – 2
- Salt – 1/2 tsp
- Black pepper – 1/4 tsp
- Oil – for deep-frying the koftas

For the gravy:
- Oil – 2 tbsp
- Cashews – 1/3 cup (50 g)
- Melon seeds – 1/3 cup (50 g)
- Cream – 1 cup (200 g)
- Fresh curd – 1/2 cup (100 g)
- Green coriander – 2-3 tbsp, finely chopped
- Whole garam masala: 2 black cardamoms, 6 black peppercorns, 2 cloves, a half-inch piece of cinnamon
- Ginger – a 1-inch piece, grated, or 1 tsp paste
- Green chilli – 1-2, finely chopped
- Coriander powder – 1/2 tsp
- Red chilli powder – 1/4 tsp
- Butter – 1 tbsp

Making the koftas: Put the paneer and mawa on a plate, add half the corn flour along with the salt and black pepper, and mix and knead everything until it is as smooth as chapati dough. The kofta mixture is ready.

Prepare the stuffing: cut each cashew into 7-8 pieces; remove the stalks from the raisins and wipe them clean with a cloth; peel the cardamoms and grind the seeds to a powder. Mix the chopped cashews, raisins and cardamom powder; the stuffing for the koftas is ready.

Break off a portion of the kofta mixture about the size of a small lime, roll it round and flatten it, press a small hollow in the middle, and place 3-4 pieces of cashew and 2 raisins in the hollow. Lift the mixture up around the filling and seal it so that the kofta cannot split open, then roll it round again. Coat the ball well in corn flour and set it on a plate; shape all the koftas in the same way.

Heat oil in a kadhai; it should be neither very hot nor cool. Slip 2-3 koftas into medium-hot oil and, splashing hot oil over them, fry them until light brown, then lift them out; fry all the koftas this way. The koftas are ready; now make the gravy.

First make the cashew and melon-seed paste: soak the cashews and melon seeds in hot water for half an hour (or in plain water for 2 hours), then grind them to a fine paste. Prepare the whole garam masala: peel the black cardamoms and take out the seeds, then crush them very coarsely together with the peppercorns, cloves and cinnamon. Heat a pan with the oil, let the cumin crackle, add the crushed garam masala and sauté briefly, add the ginger and green chilli and sauté a little more, then add the cashew and melon-seed paste and fry until the oil begins to separate from the paste. Add the cream and, stirring, fry the masala again until the oil separates. Stir in the coriander powder and red chilli powder, add the curd and, stirring continuously, cook until the masala comes to a proper boil. The masala is ready; now add water according to how thin you want the gravy — about 1 to 1 1/2 cups — and cook, stirring, until the gravy comes to a boil. Add salt, sugar, butter and green coriander, mix, and let it simmer on a low flame for 2 minutes.

The gravy is ready. Just before serving, drop the koftas into the hot gravy, cover for 2 minutes, and serve this delicious malai kofta with chapatis, parathas, pooris or naan. You can also serve it this way: keep the gravy and koftas separate, place 2 koftas in a serving bowl, pour 2 ladles of hot gravy over them, garnish with a little green coriander and serve.

If you do not want any tang in this curry, use milk at room temperature instead of curd; add the milk while stirring and cook the gravy just as you would when adding the curd. If you like onion in the gravy, cut 2 onions into thick pieces, drop them into boiling water, let them boil for 3-4 minutes, and when cool grind them to a fine paste. After frying the cumin and garam masala in the oil, add the onion paste and sauté until it turns the faintest pink, then add all the spices as above and finish the gravy.

2020 Mirchi Facts.com
<!DOCTYPE html>
<html>
<head>
<title>Colorbox</title>
<link rel="stylesheet" href="colorbox.css" />
<style>
    #images { overflow: hidden; }
    .flickr_image { float: left; }
    .flickr_image>img {
        width: 200px;
        height: 120px;
        float: left;
    }
</style>
<script src="http://code.jquery.com/jquery-1.10.2.js"></script>
<script src="jquery.colorbox.js"></script>
<script>
    $(document).ready(function () {
        $('#search_form').submit(function (event) {
            // Perform the Ajax request.
            var url = '';
            url += 'http://api.flickr.com/services/feeds/photos_public.gne';
            url += '?jsoncallback=?';
            $.getJSON(url, $(this).serialize(), function (data) {
                // Empty #images.
                $('#images').empty();
                // Add the images to #images.
                $.each(data.items, function (i, item) {
                    // Create an img tag.
                    var $image = $('<img />').attr({ 'src': item.media.m });
                    // Create an a tag.
                    $('<a></a>').attr({
                        'class': 'flickr_image',
                        'href': item.media.m,
                        'rel': 'colorbox'
                    }).html($image).appendTo('#images');
                });
                // Apply the Colorbox plugin.
                $('a.flickr_image').colorbox();
            });
            // Stop event propagation and the default action.
            return false;
        });
    });
</script>
</head>
<body>
<h1>Flickr Image Album jQuery Colorbox</h1>
<form id="search_form">
    <input type="text" name="tags"/>
    <input type="hidden" name="format" value="json"/>
    <input type="submit" value="Search"/>
</form>
<div id="images">
</div>
</body>
</html>
Postal workers strike over DA — Kaushambi news in Hindi — Amar Ujala. Manjhanpur. Postal employees went on strike on Wednesday over demands such as removing pay anomalies and merging the DA (dearness allowance) into basic pay. Post offices were locked as soon as they opened in the morning, and with postal staff off the job, people faced all sorts of difficulties; worst hit were applicants who could not send their application forms by registered post. At the call of the National Federation of Postal Employees, workers at post offices across the district — Manjhanpur, Bharwari, Sirathu, Karari, Sarai Akil, Chail, Manauri, Muratganj and others — stopped work on Wednesday over an 18-point charter of demands. On reaching the post offices in the morning, the employees announced the strike and shut down all work; they then locked the offices and returned home. The postal workers demand that a Seventh Pay Commission be constituted to revise the pay of central employees, including the GDS (Gramin Dak Sevaks); that the ban on compassionate appointments be lifted; and that the GDS cadre be departmentalised and all employees regularised. Anirudh Kumar, Dhanraj, Rajendra Singh, S. N. Dwivedi, Balram, C. L. Prajapati, Ajay Kumar, Gyanchandra, K. P. Singh and others were present on the occasion.
\begin{document} \author{Vadim E. Levit and Eugen Mandrescu \\ Department of Computer Science\\ Holon Academic Institute of Technology\\ 52 Golomb Str., P.O. Box 305\\ Holon 58102, ISRAEL\\ \{levitv, eugen\_m\}@barley.cteh.ac.il} \title{Bipartite graphs with uniquely restricted maximum matchings and their corresponding greedoids} \date{} \maketitle \begin{abstract} A \textit{maximum stable set} in a graph $G$ is a stable set of maximum size. $S$ is a \textit{local maximum stable set} of $G$, and we write $S\in \Psi (G)$, if $S$ is a maximum stable set of the subgraph spanned by $S\cup N(S)$, where $N(S)$ is the neighborhood of $S$. A matching $M$ is \textit{uniquely restricted} if its saturated vertices induce a subgraph which has a unique perfect matching, namely $M$ itself. Nemhauser and Trotter Jr. \cite{NemhTro} proved that any $S\in \Psi (G)$ is a subset of a maximum stable set of $G$. In \cite{LevMan2} we have shown that the family $\Psi (T)$ of a forest $T$ forms a greedoid on its vertex set. In this paper we demonstrate that for a bipartite graph $G$, $\Psi (G)$ is a greedoid on its vertex set if and only if all its maximum matchings are uniquely restricted. \end{abstract} \section{Introduction} Throughout this paper $G=(V,E)$ is a simple graph (i.e., finite, undirected, loopless, and without multiple edges) with vertex set $V=V(G)$ and edge set $E=E(G)$. If $X\subset V$, then $G[X]$ is the subgraph of $G$ spanned by $X$. By $G-W$ we mean the subgraph $G[V-W]$, for $W\subset V(G)$. We also denote by $G-F$ the partial subgraph of $G$ obtained by deleting the edges of $F$, for $F\subset E(G)$, and we write shortly $G-e$ whenever $F=\{e\}$. If $X,Y\subset V$ are disjoint and non-empty, then by $(X,Y)$ we mean the set $\{xy:xy\in E,x\in X,y\in Y\}$. The \textit{neighborhood} of a vertex $v\in V$ is the set $N(v)=\{w:w\in V\ \textit{and}\ vw\in E\}$.
If $\left| N(v)\right| =1$, then $v$ is a \textit{pendant vertex} of $G$; by $\mathrm{pend}(G)$ we designate the set of all pendant vertices of $G$. We denote the \textit{neighborhood} of $A\subset V$ by $N_{G}(A)=\{v\in V-A:N(v)\cap A\neq \emptyset \}$ and its \textit{closed neighborhood} by $N_{G}[A]=A\cup N(A)$, or shortly $N(A)$ and $N[A]$, when no ambiguity arises. $K_{n},C_{n}$ denote, respectively, the complete graph on $n\geq 1$ vertices and the chordless cycle on $n\geq 3$ vertices. By $G=(A,B,E)$ we mean a bipartite graph having $\{A,B\}$ as its standard bipartition. A \textit{stable} set in $G$ is a set of pairwise non-adjacent vertices. A stable set of maximum size will be referred to as a \textit{maximum stable set} of $G$, and the \textit{stability number} of $G$, denoted by $\alpha (G)$, is the cardinality of a maximum stable set in $G$. Let $\Omega (G)$ stand for the set of all maximum stable sets of $G$. A set $A\subseteq V(G)$ is a \textit{local maximum stable set} of $G$ if $A$ is a maximum stable set in the subgraph spanned by $N[A]$, i.e., $A\in \Omega (G[N[A]])$, \cite{LevMan2}. In the sequel, by $\Psi (G)$ we denote the set of all local maximum stable sets of the graph $G$. For instance, any set $S\subseteq \mathrm{pend}(G)$ belongs to $\Psi (G)$, while the converse is not generally true; e.g., $\{a\},\{e,d\}\in \Psi (G)$ and $\{e,d\}\cap \mathrm{pend}(G)=\emptyset $, where $G$ is the graph in Figure \ref{fig10}. \begin{figure} \caption{A graph with diverse local maximum stable sets\label{fig10}} \end{figure} Not every stable set of a graph $G$ is included in some maximum stable set of $G$. For example, there is no $S\in \Omega (G)$ such that $\{c,f\}\subset S$, where $G$ is the graph depicted in Figure \ref{fig2929}. The following theorem, due to Nemhauser and Trotter Jr. \cite{NemhTro}, shows that some special stable sets can be enlarged to maximum stable sets.
\begin{theorem} \cite{NemhTro}\label{th1} Any local maximum stable set of a graph is a subset of a maximum stable set. \end{theorem} Let us notice that the converse of Theorem \ref{th1} is not generally true. For instance, $C_{n}$, $n\geq 4$, has no proper local maximum stable set. The graph $G$ in Figure \ref{fig10} shows another counterexample: any $S\in \Omega (G)$ contains some local maximum stable set, but these local maximum stable sets are of different cardinalities. As examples, $\{a,d,f\}\in \Omega (G)$ and $\{a\},\{d,f\}\in \Psi (G)$, while for $\{b,e,g\}\in \Omega (G)$ only $\{e,g\}\in \Psi (G)$. In \cite{LevMan2} we have proved the following result: \begin{theorem} \label{th2}The family of local maximum stable sets of a forest of order at least two forms a greedoid on its vertex set. \end{theorem} Theorem \ref{th2} is not specific to forests. For instance, the family $\Psi (G)$ of the graph $G$ in Figure \ref{fig101} is a greedoid. \begin{figure} \caption{A graph $G$ whose family $\Psi (G)$ forms a greedoid\label{fig101}} \end{figure} The definition of greedoids we use in the sequel is as follows. \begin{definition} \cite{BjZiegler}, \cite{KorLovSch} A \textit{greedoid} is a pair $(E,\mathcal{F})$, where $\mathcal{F}\subseteq 2^{E}$ is a set system satisfying the following conditions: \setlength {\parindent}{0.0cm}(\textit{Accessibility}) for every non-empty $X\in \mathcal{F}$ there is an $x\in X$ such that $X-\{x\}\in \mathcal{F}$; \setlength {\parindent}{3.45ex} \setlength {\parindent}{0.0cm}(\textit{Exchange}) for $X,Y\in \mathcal{F},\left| X\right| =\left| Y\right| +1$, there is an $x\in X-Y$ such that $Y\cup \{x\}\in \mathcal{F}$.\setlength {\parindent}{3.45ex} \end{definition} Clearly, $\Omega (G)\subseteq \Psi (G)$ holds for any graph $G$.
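Both axioms can be tested mechanically on small set systems. The following brute-force sketch is our own illustration (not part of the paper); it is exponential in general, but adequate for the tiny families discussed here:

```python
def is_greedoid(family):
    """Brute-force check of the two greedoid axioms.

    `family` is an iterable of sets; it is normalized to frozensets.
    Illustration only: exponential in the size of the family.
    """
    fam = {frozenset(s) for s in family}
    # Accessibility: every non-empty feasible X has some x with X - {x} feasible.
    for X in fam:
        if X and not any(X - {x} in fam for x in X):
            return False
    # Exchange: if |X| = |Y| + 1, some x in X - Y extends Y to a feasible set.
    for X in fam:
        for Y in fam:
            if len(X) == len(Y) + 1 and not any(Y | {x} in fam for x in X - Y):
                return False
    return True
```

For instance, the family $\{\emptyset ,\{a\},\{b\},\{a,b\}\}$ satisfies both axioms, while $\{\emptyset ,\{a,b\}\}$ fails accessibility.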
It is worth observing that if $\Psi (G)$ is a greedoid and $S\in \Psi (G)$, $\left| S\right| =k\geq 2$, then by the accessibility property there is a chain \[ \{x_{1}\}\subset \{x_{1},x_{2}\}\subset ...\subset \{x_{1},...,x_{k-1}\}\subset \{x_{1},...,x_{k-1},x_{k}\}=S \] such that $\{x_{1},x_{2},...,x_{j}\}\in \Psi (G)$, for all $j\in \{1,...,k-1\}$. Such a chain we call an \textit{accessibility chain} for $S$. As an example, for $S=\{a,c,e\}\in \Psi (G)$, where $G$ is the graph in Figure \ref{fig101}, an accessibility chain is $\{a\}\subset \{a,e\}\subset S$. A \textit{matching} in a graph $G=(V,E)$ is a set of edges $M\subseteq E$ having the property that no two edges of $M$ share a common vertex. We denote the size of a \textit{maximum matching} (a matching of maximum cardinality) by $\mu (G)$. A \textit{perfect matching} is a matching saturating all the vertices of the graph. Let us recall that $G$ is a \textit{K\"{o}nig-Egerv\'{a}ry graph} provided $\alpha (G)+\mu (G)=\left| V(G)\right| $, \cite{eger}, \cite{koen}. As a well-known example, any bipartite graph is a K\"{o}nig-Egerv\'{a}ry graph. Some non-bipartite K\"{o}nig-Egerv\'{a}ry graphs are presented in Figure \ref{fig27}. A matching $M=\{a_{i}b_{i}:a_{i},b_{i}\in V(G),1\leq i\leq k\}$ of a graph $G$ is called \textit{a uniquely restricted matching} if $M$ is the unique perfect matching of the subgraph $G[\{a_{i},b_{i}:1\leq i\leq k\}]$, \cite{GolHiLew} (for bipartite graphs, this kind of matching first appeared in \cite{HerSchn} under the name ``\textit{constrained matching}''). Let $\mu _{r}(G)$ be the maximum size of a uniquely restricted matching in $G$. Clearly, $0\leq \mu _{r}(G)\leq \mu (G)$ holds for any graph $G$. For instance, $n-1=\mu _{r}(C_{2n})<n=\mu (C_{2n})$, while $\mu _{r}(C_{2n+1})=\mu (C_{2n+1})=n$. In this paper we characterize the bipartite graphs whose families of local maximum stable sets form greedoids.
Namely, we prove that for a bipartite graph $G$, the family $\Psi (G)$ is a greedoid on the vertex set of $G$ if and only if all its maximum matchings are uniquely restricted. Golumbic, Hirst and Lewenstein have shown in \cite{GolHiLew} that $\mu _{r}(G)=\mu (G)$ holds when $G$ is a tree or has only odd cycles. Our findings reveal another class of graphs enjoying this property. \section{Preliminary results} An edge $e$ of a graph $G$ is $\alpha $\textit{-critical} ($\mu $\textit{-critical}) if $\alpha (G)<\alpha (G-e)$ ($\mu (G)>\mu (G-e)$, respectively). Let us observe that there is no general connection between the $\alpha $-critical and the $\mu $-critical edges of a graph. For instance, the edge $e$ of the graph $G_{1}$ in Figure \ref{fig2023} is $\mu $-critical and non-$\alpha $-critical, while the edge $e$ of the graph $G_{2}$ in the same figure is $\alpha $-critical and non-$\mu $-critical. \begin{figure} \caption{Non-K\"{o}nig-Egerv\'{a}ry graphs $G_{1}$ and $G_{2}$\label{fig2023}} \end{figure} Nevertheless, for K\"{o}nig-Egerv\'{a}ry graphs, and especially for bipartite graphs, there is a close relationship between these two kinds of edges. \begin{lemma} \cite{LevMan3}\label{critical} In a K\"{o}nig-Egerv\'{a}ry graph, $\alpha $-critical edges are also $\mu $-critical, and the two notions coincide in a bipartite graph. \end{lemma} In a K\"{o}nig-Egerv\'{a}ry graph, maximum matchings have a very specific property, emphasized by the following statement: \begin{lemma} \cite{LevMan1}\label{match} Any maximum matching $M$ of a K\"{o}nig-Egerv\'{a}ry graph $G$ is contained in each $(S,V(G)-S)$ and $\left| M\right| =\left| V(G)-S\right| $, where $S\in \Omega (G)$. \end{lemma} Clearly, not any matching of a graph is contained in a maximum matching. For example, there is no maximum matching of the graph $G$ in Figure \ref{fig101} that includes the matching $M=\{ab,cf\}$. Let us observe that $M$ is a maximum matching in $G[N[\{a,f\}]]$, $\{a,f\}$ is stable in $G$, but $\{a,f\}\notin \Psi (G)$.
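The defining property of a uniquely restricted matching can be verified directly on small graphs by enumerating the perfect matchings of the induced subgraph. The sketch below is our own illustration, not an algorithm from the paper; vertex labels are arbitrary and the enumeration is exponential in general:

```python
def perfect_matchings(vertices, edges):
    """Recursively enumerate all perfect matchings of the graph
    induced on `vertices` (edges given as unordered pairs of labels)."""
    vs = sorted(vertices)
    if not vs:
        yield frozenset()
        return
    v = vs[0]
    for u in vs[1:]:
        if (v, u) in edges or (u, v) in edges:
            for m in perfect_matchings(set(vs) - {v, u}, edges):
                yield m | {(v, u)}

def is_uniquely_restricted(matching, edges):
    """M is uniquely restricted iff the subgraph induced by its
    saturated vertices has M as its only perfect matching."""
    saturated = {v for e in matching for v in e}
    induced = {(a, b) for (a, b) in edges
               if a in saturated and b in saturated}
    return sum(1 for _ in perfect_matchings(saturated, induced)) == 1
```

On $C_{4}$ with vertices $a,b,c,d$, the perfect matching $\{ab,cd\}$ fails the test, since the induced subgraph is $C_{4}$ itself and has two perfect matchings, while the single edge $ab$ passes it.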
The following result shows that, under certain conditions, a matching of a bipartite graph can be extended to a maximum matching. \begin{lemma} \label{lem3}If $G$ is a bipartite graph, $\widehat{S}\in \Psi (G)$, and $\widehat{M}$ is a maximum matching in $G[N[\widehat{S}]]$, then there exists a maximum matching $M$ in $G$ such that $\widehat{M}\subseteq M$. \end{lemma} \setlength {\parindent}{0.6cm}\textbf{Proof.} Let $W=N(\widehat{S})$, $H=G[N[\widehat{S}]]$, and let $S^{\prime }$ be a stable set in $G$ such that $S=\widehat{S}\cup S^{\prime }\in \Omega (G)$ (such an $S^{\prime }$ exists according to Theorem \ref{th1}).\setlength {\parindent}{3.45ex} Since $H$ is bipartite and $\widehat{M}$ is a maximum matching in $H$, it follows that \[ \left| \widehat{S}\right| +\left| \widehat{M}\right| =\alpha (H)+\mu (H)=\left| V(H)\right| =\left| \widehat{S}\right| +\left| W\right| . \] Let $M$ be a maximum matching in $G$. Then, by Lemma \ref{match}, $M\subseteq (S,V(G)-S)$, because $S\in \Omega (G)$, and \[ \left| M\right| =\left| V(G)-S\right| =\left| N(\widehat{S})\right| +\left| N(S^{\prime })-N(\widehat{S})\right| =\left| \widehat{M}\right| +\left| V(G)-S-W\right| . \] Let $M^{\prime }$ be the subset of $M$ containing the edges having an endpoint in $V(G)-S-W$. Since no edge joins a vertex of $\widehat{S}$ to some vertex in $V(G)-S-W$, it follows that $M^{\prime }$ is the restriction of $M$ to $G[V(G)-S-W]$. Consequently, $\widehat{M}\cup M^{\prime }$ is a matching in $G$ that contains $\widehat{M}$, and because $\left| \widehat{M}\cup M^{\prime }\right| =\left| \widehat{M}\right| +\left| V(G)-S-W\right| =\left| M\right| $, we see that $\widehat{M}\cup M^{\prime }$ is a maximum matching in $G$. \rule{2mm}{2mm}\newline Let us notice that Lemma \ref{lem3} cannot be generalized to non-bipartite graphs.
For instance, the graph $G$ presented in Figure \ref{fig2929} has $\widehat{S}=\{a,d\}\in \Psi (G)$, $\widehat{M}=\{ac,df\}$ is a maximum matching in $G[N[\widehat{S}]]$, but there is no maximum matching in $G$ that includes $\widehat{M}$. \begin{figure} \caption{$\widehat{M}=\{ac,df\}$ is a maximum matching of $G[N[\widehat{S}]]$ that no maximum matching of $G$ includes\label{fig2929}} \end{figure} \begin{lemma} \label{lem1}If $G=(A,B,E)$ is a connected bipartite graph having a unique perfect matching, then $A\cap \mathrm{pend}(G)\neq \emptyset $ and $B\cap \mathrm{pend}(G)\neq \emptyset $. \end{lemma} \setlength {\parindent}{0.6cm}\textbf{Proof. }Let $M=\{a_{i}b_{i}:1\leq i\leq n,a_{i}\in A,b_{i}\in B\}$ be the unique perfect matching of $G$. Clearly, $\left| A\right| =\left| B\right| $. Suppose that $B\cap \mathrm{pend}(G)=\emptyset $. Hence, $\left| N(b_{i})\right| \geq 2$ for any $b_{i}\in B$. Under these conditions, we shall build a cycle $C$ having half of its edges contained in $M$, and this allows us to find a new perfect matching in $G$, which contradicts the uniqueness of $M$. We begin with the edge $a_{1}b_{1}$. Since $\left| N(b_{1})\right| \geq 2$, there is some $a\in (A-\{a_{1}\})\cap N(b_{1})$, say $a_{2}$. We continue with $a_{2}b_{2}\in M$. Further, $N(b_{2})$ contains some $a\in (A-\{a_{2}\})$. If $a_{1}\in N(b_{2})$, we are done, because $G[\{a_{1},a_{2},b_{1},b_{2}\}]=C_{4}$. Otherwise, we may suppose that $a=a_{3}$, and we add to the growing cycle the edge $a_{3}b_{3}$. Since $G$ has a finite number of vertices, after a number of edges from $M$, we must find some edge $a_{j}b_{k}$ with $1\leq j<k$. So, the cycle $C$ we found has \[ V(C)=\{a_{i},b_{i}:j\leq i\leq k\},\ E(C)=\{a_{i}b_{i}:j\leq i\leq k\}\cup \{b_{i}a_{i+1}:j\leq i<k\}\cup \{a_{j}b_{k}\}. \] Clearly, half of the edges of $C$ are contained in $M$. \setlength {\parindent}{3.45ex} Similarly, we can show that also $A\cap \mathrm{pend}(G)\neq \emptyset $.
\rule{2mm}{2mm}\newline The following proposition presents a recursive structure of bipartite graphs having unique perfect matchings, which generalizes the recursive structure of trees with a perfect matching due to Fricke, Hedetniemi, Jacobs and Trevisan, \cite{FrHedJaTre}. \begin{proposition} \label{prop1}$K_{2}$ is a bipartite graph, and it has a unique perfect matching. If $G$ is a bipartite graph with a unique perfect matching, then $G+K_{2}$ is also a bipartite graph having a unique perfect matching. Moreover, any bipartite graph containing a unique perfect matching can be obtained in this way. By $G+K_{2}$ we mean the graph comprising the disjoint union of $G$ and $K_{2}$, together with additional edges joining at most one of the endpoints of $K_{2}$ to vertices belonging to only one color class of $G$. \end{proposition} \setlength {\parindent}{0.6cm}\textbf{Proof. }Let $G=(A,B,E)$ be a bipartite graph having a unique perfect matching, say $M=\{a_{i}b_{i}:1\leq i\leq n,a_{i}\in A,b_{i}\in B\}$. If $K_{2}=(\{x,y\},\{xy\})$, then $H=G+K_{2}$ is also bipartite and $M\cup \{xy\}$ is the unique perfect matching in $H$, since $M$ was unique in $G$ and at least one of $x,y$ is pendant in $H$. \setlength {\parindent}{3.45ex} Conversely, let $G$ be a bipartite graph with a unique perfect matching. By Lemma \ref{lem1}, it follows that $G$ has at least one pendant vertex, say $x$. If $y\in N(x)$, then, clearly, $G=(G-\{x,y\})+K_{2}$. \rule{2mm}{2mm} \newline \section{Main results} \begin{proposition} \label{prop2}If $G$ is a bipartite graph of order $2n$ having a perfect matching $M$, then $M$ is unique if and only if for some $S\in \Omega (G)$ there exists an accessibility chain. \end{proposition} \setlength {\parindent}{0.6cm}\textbf{Proof. }Since $\mu (G)=n$, every set of more than $n$ vertices contains a pair of adjacent vertices, and hence $\alpha (G)=n$.\setlength {\parindent}{3.45ex} Suppose that $G$ is a bipartite graph of order $2n$ with a unique perfect matching.
We prove, by induction on $n$, that for some $S\in \Omega (G)$ there exists an accessibility chain. For $n=2$, let $S=\{x_{1},x_{2}\}\in \Omega (G),N(S)=\{y_{1},y_{2}\}$ and $ x_{1}y_{1},x_{2}y_{2}\in M$, where $M$ is its unique perfect matching. Then, at least one of $x_{1},x_{2}$ is pendant, say $x_{1}$. Hence, $ \{x_{1}\}\subset \{x_{1},x_{2}\}=S$ is an accessibility chain. Suppose that the assertion is true for $k<n$. Let $G=(A,B,E)$ be of order $ 2n $ and $M=\{a_{i}b_{i}:1\leq i\leq n,a_{i}\in A,b_{i}\in B\}$ be its unique perfect matching. According to Proposition \ref{prop1}, $G=H+K_{2}$. Consequently, we may assume that: $K_{2}=(\{a_{1},b_{1}\},\{a_{1}b_{1}\})$ and $a_{1}\in \mathrm{pend}(G)$. Clearly, $H$ is a bipartite graph containing a unique perfect matching, namely $M_{H}=M-\{a_{1}b_{1}\}$. \emph{Case 1.} $a_{1}\in S$. Hence, $S_{n-1}=S-\{a_{1}\}\in \Omega (H)$, and by induction hypothesis, there is a chain \[ \{x_{1}\}\subset \{x_{1},x_{2}\}\subset ...\subset \{x_{1},x_{2},...,x_{n-2}\}\subset \{x_{1},x_{2},...,x_{n-1}\}=S_{n-1} \] such that $\{x_{1},x_{2},...,x_{k}\}\in \Psi (H)$ for any $k\in \{1,...,n-1\} $. Since $N(a_{1})=\{b_{1}\}$, it follows that $ N_{G}(\{x_{1},x_{2},...,x_{k}\}\cup \{a_{1}\})=N_{H}(\{x_{1},x_{2},...,x_{k}\})\cup \{b_{1}\}$, and therefore $ \{x_{1},x_{2},...,x_{k}\}\cup \{a_{1}\}\in \Psi (G)$ for any $k\in \{1,...,n-1\}$. Clearly, $\{a_{1}\}\in \Psi (G)$, and consequently, we have the chain: \begin{eqnarray*} \{a_{1}\} &\subset &\{a_{1},x_{1}\}\subset \{a_{1},x_{1},x_{2}\}\subset ...\subset \{a_{1},x_{1},x_{2},...,x_{n-2}\}\subset \\ &\subset &\{a_{1},x_{1},x_{2},...,x_{n-1}\}=\{a_{1}\}\cup S_{n-1}=S, \end{eqnarray*} where $\{a_{1},x_{1},x_{2},...,x_{k}\}\in \Psi (G)$, for all $k\in \{1,...,n-1\}$. \emph{Case 2.} $b_{1}\in S$. Hence, $S_{n-1}=S-\{b_{1}\}\in \Omega (H)$ and also $S_{n-1}\in \Psi (G)$, because $N_{G}[S_{n-1}]=A\cup B-\{a_{1},b_{1}\}$ . 
By induction hypothesis, there is a chain \[ \{x_{1}\}\subset \{x_{1},x_{2}\}\subset ...\subset \{x_{1},x_{2},...,x_{n-2}\}\subset \{x_{1},x_{2},...,x_{n-1}\}=S_{n-1} \] such that $\{x_{1},x_{2},...,x_{k}\}\in \Psi (H)$ for any $k\in \{1,...,n-1\}$. Since none of $a_{1},b_{1}$ is contained in $N_{G}(\{x_{1},x_{2},...,x_{k}\})$, it follows that $\{x_{1},x_{2},...,x_{k}\}\in \Psi (G)$, for any $k\in \{1,...,n-1\}$. Consequently, we have the chain \[ \{x_{1}\}\subset \{x_{1},x_{2}\}\subset ...\subset \{x_{1},x_{2},...,x_{n-1}\}=S_{n-1}\subset S_{n-1}\cup \{b_{1}\}=S, \] where $\{x_{1},x_{2},...,x_{k}\}\in \Psi (G)$, for all $k\in \{1,...,n-1\}$. Conversely, let $M=\{x_{i}y_{i}:1\leq i\leq n\}$ be a perfect matching in $G$, and suppose that for $S\in \Omega (G)$ there exists a chain of local maximum stable sets \[ \{x_{1}\}\subset \{x_{1},x_{2}\}\subset ...\subset \{x_{1},x_{2},...,x_{n-1}\}\subset \{x_{1},x_{2},...,x_{n-1},x_{n}\}=S. \] We show, by induction on $k=\left| \{x_{1},x_{2},...,x_{k}\}\right| $, that $H_{k}=G[N[\{x_{1},x_{2},...,x_{k}\}]]$ owns a unique perfect matching. For $k=1$, the assertion is true, because $\{x_{1}\}\in \Psi (G)$ ensures that $x_{1}$ is pendant, and therefore $H_{1}=G[N[\{x_{1}\}]]$ has a unique perfect matching, consisting of the unique edge issuing from $x_{1}$, namely $x_{1}y_{1}$. Assume that $H_{k}$ has a unique perfect matching, say $M_{k}$. We may assert that $M_{k}\subseteq M$, because $M_{k}$ is unique and included in $H_{k}$, and also $M$ matches $x_{1},x_{2},...,x_{k}$ onto vertices belonging to $N(\{x_{1},x_{2},...,x_{k}\})$. Hence, $M_{k+1}=M_{k}\cup \{x_{k+1}y_{k+1}\}$ is a maximum matching in $H_{k+1}$. If $M_{k+1}$ is not unique in $H_{k+1}$, then there exists some $z\in N(x_{k+1})-N[\{x_{1},x_{2},...,x_{k}\}]$ such that $z\neq y_{k+1}$.
Therefore, we infer that the set $\{x_{1},x_{2},...,x_{k}\}\cup \{z,y_{k+1}\}$ is stable in $H_{k+1}$ and larger than $\{x_{1},x_{2},...,x_{k+1}\}$, which contradicts the fact that $\{x_{1},x_{2},...,x_{k+1}\}\in \Psi (G)$. Consequently, $M_{k+1}$ is unique and also perfect in $H_{k+1}$. \rule{2mm}{2mm}\newline If one of the maximum matchings of a bipartite graph is uniquely restricted, this is not necessarily true for all of its maximum matchings. For instance, let us consider the bipartite graph $G$ presented in Figure \ref{fig24}. The set of edges $M_{1}=\{ab,ce\}$ is one of the uniquely restricted maximum matchings of $G$, while $M_{2}=\{bd,cf\}$ is one of its maximum matchings, but it is not uniquely restricted. \begin{figure} \caption{Not all maximum matchings of $G$ are uniquely restricted\label{fig24}} \end{figure} \begin{theorem} \label{th10}If $G$ is a bipartite graph, then the following assertions are equivalent: ($\mathit{i}$) there exists some $S\in \Omega (G)$ having an accessibility chain; ($\mathit{ii}$) there exists a uniquely restricted maximum matching in $G$; ($\mathit{iii}$) each $S\in \Omega (G)$ has an accessibility chain. \end{theorem} \setlength {\parindent}{0.6cm}\textbf{Proof. }($\mathit{i}$) $\Rightarrow $ ($\mathit{ii}$) Let us consider an accessibility chain of $S\in \Omega (G)$ \[ \emptyset \subset \{x_{1}\}\subset \{x_{1},x_{2}\}\subset ...\subset \{x_{1},x_{2},...,x_{\alpha -1}\}\subset \{x_{1},x_{2},...,x_{\alpha }\}=S, \] for which we define $S_{i}=\{x_{1},x_{2},...,x_{i}\}$ and $S_{0}=\emptyset $. \setlength {\parindent}{3.45ex} Since $S_{i-1}\in \Psi (G)$, $S_{i}=S_{i-1}\cup \{x_{i}\}\in \Psi (G)$ and $G$ is bipartite, it follows that $\left| N(x_{i})-N[S_{i-1}]\right| \leq 1$, because otherwise, if $\{a,b\}\subset N(x_{i})-N[S_{i-1}]$, then the set $\{a,b\}\cup S_{i-1}$ is stable in $N[S_{i-1}\cup \{x_{i}\}]$ and larger than $S_{i}=S_{i-1}\cup \{x_{i}\}$, in contradiction with the fact that $S_{i}\in \Psi (G)$.
Let $\{y_{i_{j}}:1\leq j\leq \mu \}$ be such that $\{y_{i_{j}}\}=N(x_{i_{j}})-N[S_{i_{j}-1}]$, for all $i\in \{1,...,\alpha \}$ with $\left| N(x_{i})-N[S_{i-1}]\right| =1$. Hence, $M=\{x_{i_{j}}y_{i_{j}}:1\leq j\leq \mu \}$ is a matching in $G$. \begin{itemize} \item \emph{Claim 1.} $\mu =\mu (G)$, i.e., $M$ is a maximum matching in $G$. \end{itemize} Since $\left| N(x_{i})-N[S_{i-1}]\right| \leq 1$ holds for all $i\in \{1,...,\alpha \}$, where $S_{0}=N[S_{0}]=\emptyset $, and $\{y_{i_{j}}\}=N(x_{i_{j}})-N[S_{i_{j}-1}]$, for all $i\in \{1,...,\alpha \}$ satisfying $\left| N(x_{i})-N[S_{i-1}]\right| =1$, it follows that $N(S)=\{y_{i_{j}}:1\leq j\leq \mu \}$, and this ensures that $M$ is a maximal matching in $G$, i.e., it is impossible to add an edge to $M$ and obtain a new matching. In addition, we have \[ \left| V(G)\right| =\left| N[S]\right| =\left| S\right| +\left| N(S)\right| =\left| S\right| +\left| \{y_{i_{j}}:1\leq j\leq \mu \}\right| =\alpha (G)+\left| M\right| , \] and because $\left| V(G)\right| =\alpha \left( G\right) +\mu \left( G\right) $, we infer that $\left| M\right| =\mu \left( G\right) $. In other words, $M$ is a maximum matching in $G$. \begin{itemize} \item \emph{Claim 2.} $M$ is a uniquely restricted maximum matching in $G$. \end{itemize} We use induction on $k=\left| S_{k}\right| $ to show that the restriction of $M$ to $H_{k}=G[N[S_{k}]]$, which we denote by $M_{k}$, is a uniquely restricted maximum matching in $H_{k}$. For $k=1$, $S_{1}=\{x_{1}\}\in \Psi (G)$, and this implies that $N(x_{1})=\{y_{i_{1}}\}$. Clearly, $M_{1}=\{x_{1}y_{i_{1}}\}$ is a uniquely restricted maximum matching in $H_{1}$. Suppose that the assertion is true for all $j\leq k-1$. Let us observe that \[ N[S_{k}]=N[S_{k-1}]\cup (N(x_{k})-N[S_{k-1}])\cup \{x_{k}\}, \] because $S_{k}=S_{k-1}\cup \{x_{k}\}$. Further, we distinguish between two different situations, depending on the number of new vertices that the set $N(x_{k})$ brings to the set $N[S_{k-1}]$.
\emph{Case 1.} $N(x_{k})-N[S_{k-1}]=\emptyset $. Hence, we obtain: \[ \left| V(H_{k})\right| =\left| S_{k-1}\cup \{x_{k}\}\right| +\left| M_{k-1}\right| =\left| S_{k}\right| +\left| M_{k-1}\right| =\alpha (H_{k})+\left| M_{k-1}\right| . \] Since $\left| V(H_{k})\right| =\alpha (H_{k})+\mu \left( H_{k}\right) $, the equality $\left| V(H_{k})\right| =\alpha (H_{k})+\left| M_{k-1}\right| $ ensures that $M_{k-1}$ is a maximum matching of $H_{k}$. Therefore, $M_{k-1}$ is a uniquely restricted maximum matching in $H_{k}$. \emph{Case 2.} $N(x_{k})-N[S_{k-1}]=\{y_{i_{k}}\}$. Then we have: \[ \left| V(H_{k})\right| =\left| S_{k-1}\cup \{x_{k}\}\right| +\left| M_{k-1}\cup \{x_{k}y_{i_{k}}\}\right| =\left| S_{k}\right| +\left| M_{k}\right| =\alpha (H_{k})+\left| M_{k}\right| , \] and this ensures that $M_{k}=M_{k-1}\cup \{x_{k}y_{i_{k}}\}$ is a maximum matching in $H_{k}$. The edge $e=x_{k}y_{i_{k}}$ is $\alpha $-critical in $H_{k}$, because $\{y_{i_{k}}\}=N(x_{k})-N[S_{k-1}]$. According to Lemma \ref{critical}, $e$ is also $\mu $-critical in $H_{k}$. Therefore, any maximum matching of $H_{k}$ contains $e$, and since $M_{k}=M_{k-1}\cup \{e\}$ and $M_{k-1}$ is a uniquely restricted maximum matching in $H_{k-1}=H_{k}-\{x_{k},y_{i_{k}}\}$, it follows that $M_{k}$ is a uniquely restricted maximum matching in $H_{k}$. ($\mathit{ii}$) $\Rightarrow $ ($\mathit{iii}$) Let $M$ be a uniquely restricted maximum matching in $G$. According to Lemma \ref{match}, $M\subseteq (S,V(G)-S)$ and $\left| M\right| =\left| V(G)-S\right| =\mu (G)$. Therefore, $M$ is the unique perfect matching in $H=G[N[S_{\mu }]]$, where \[ S_{\mu }=\{x\in S:x\mbox{ is an endpoint of an edge in }M\}. \] It is clear that $S_{\mu }$ is a maximum stable set in $H$, because $N(S_{\mu })=V(G)-S$ and $S_{\mu }$ is stable. In other words, $S_{\mu }\in \Psi (G)$.
Since $H$ is bipartite and $M$ is its unique perfect matching, Proposition \ref{prop2} implies that there exists a chain \[ \{x_{1}\}\subset \{x_{1},x_{2}\}\subset ...\subset \{x_{1},x_{2},...,x_{\mu -1}\}\subset \{x_{1},x_{2},...,x_{\mu -1},x_{\mu }\}=S_{\mu }, \] such that all $S_{k}=\{x_{1},x_{2},...,x_{k}\},1\leq k\leq \mu $, are local maximum stable sets in $H$. The equality $N_{H}[S_{k}]=N_{G}[S_{k}]$ explains why $S_{k}\in \Psi (G)$ for all $k\in \{1,...,\mu (G)\}$. Now let $x\in S-S_{\mu }$. Then $N(x)\subseteq V(G)-S$, and therefore, $N(S_{\mu }\cup \{x\})=V(G)-S$. Since $S_{\mu }$ is a maximum stable set in $H$ and $S_{\mu }\cup \{x\}$ is stable in $G[N[S_{\mu }\cup \{x\}]]$, we get that $S_{\mu }\cup \{x\}$ is a maximum stable set in $G[N[S_{\mu }\cup \{x\}]]$, i.e., $S_{\mu +1}=S_{\mu }\cup \{x\}\in \Psi (G)$. If there still exists some $y\in S-S_{\mu +1}$, in the same manner as above we infer that $S_{\mu +2}=S_{\mu +1}\cup \{y\}\in \Psi (G)$. In this way we build the following accessibility chain \[ \{x_{1}\}\subset \{x_{1},x_{2}\}\subset ...\subset \{x_{1},x_{2},...,x_{\mu }\}\subset S_{\mu +1}\subset S_{\mu +2}\subset ...\subset S_{\alpha }=S. \] Clearly, ($\mathit{iii}$) $\Rightarrow $ ($\mathit{i}$), and this completes the proof. \rule{2mm}{2mm}\newline As an example of the process of building a uniquely restricted maximum matching with the help of an accessibility chain, let us consider the bipartite graph $G$ presented in Figure \ref{fig2444}. The accessibility chain \[ \{h\}\subset \{h,d\}\subset \{h,d,f\}\subset \{h,d,f,c\}\subset \{h,d,f,c,a\}\in \Psi (G) \] gives rise to the uniquely restricted maximum matching $M=\{hg,de,cb\}$. Notice that $\Psi (G)$ is not a greedoid, because $\{d,f\}\in \Psi (G)$, while $\{d\},\{f\}\notin \Psi (G)$.
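The greedy extraction of a matching from an accessibility chain, as in the proof of Theorem \ref{th10}, can be sketched in a few lines of Python. The adjacency list below is a hypothetical encoding of the graph of Figure \ref{fig2444} (the vertex names follow the text, but the edge set is an assumption chosen to be consistent with the stated facts); each vertex of the chain is matched to the unique vertex of $N(x_{i})-N[S_{i-1}]$, when that set is nonempty.

```python
# Sketch of Theorem th10, (i) => (ii): extracting a uniquely restricted
# maximum matching from an accessibility chain. The adjacency list is a
# hypothetical encoding of the graph of Figure fig2444.
adj = {
    'h': {'g'}, 'd': {'e', 'g'}, 'f': {'e', 'g'}, 'c': {'b', 'e'}, 'a': {'b'},
    'g': {'h', 'd', 'f'}, 'e': {'d', 'f', 'c'}, 'b': {'c', 'a'},
}

def matching_from_chain(adj, chain):
    """Match each x_i of the chain to the unique vertex of N(x_i) - N[S_{i-1}]."""
    closed = set()      # the closed neighborhood N[S_{i-1}] built so far
    matching = []
    for x in chain:
        new = adj[x] - closed          # N(x_i) - N[S_{i-1}]
        assert len(new) <= 1           # holds whenever every prefix is in Psi(G)
        if new:
            matching.append((x, new.pop()))
        closed |= adj[x] | {x}
    return matching

print(matching_from_chain(adj, ['h', 'd', 'f', 'c', 'a']))
# -> [('h', 'g'), ('d', 'e'), ('c', 'b')], i.e. the matching M = {hg, de, cb}
```

Under the assumed adjacency, the chain $\{h\}\subset \{h,d\}\subset ...$ indeed reproduces $M=\{hg,de,cb\}$: the vertices $f$ and $a$ contribute no new neighbor and are simply absorbed into the closed neighborhood.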
\begin{figure} \caption{Building a uniquely restricted maximum matching along an accessibility chain} \label{fig2444} \end{figure} The following theorem shows another reason why the family $\Psi (G)$ of the graph $G$ from Figure \ref{fig2444} is not a greedoid, namely $\{bc,de,fg\}$ is a maximum matching, but not a uniquely restricted one. \begin{theorem} \label{th11}If $G$ is a bipartite graph, then $\Psi (G)$ is a greedoid if and only if all its maximum matchings are uniquely restricted. \end{theorem} \setlength {\parindent}{0.6cm}\textbf{Proof.} Assume that $\Psi (G)$ is a greedoid. Let $M$ be a maximum matching in $G$. According to Lemma \ref{match}, we have that $M\subseteq (S,V(G)-S)$ and $\left| M\right| =\left| V(G)-S\right| $ for any $S\in \Omega (G)$. Let $S_{\mu }$ contain the vertices of some $S\in \Omega (G)$ matched by $M$ with the vertices of $V(G)-S$. Since $M$ is a perfect matching in $G[N[S_{\mu }]]$ and $\left| S_{\mu }\right| =\left| M\right| $, it follows that $S_{\mu }$ is a maximum stable set in $G[N[S_{\mu }]]$, i.e., $S_{\mu }\in \Psi (G)$. Hence, there exists an accessibility chain of the following structure: \[ \{x_{1}\}\subset \{x_{1},x_{2}\}\subset ...\subset \{x_{1},x_{2},...,x_{\mu }\}=S_{\mu }\subset S_{\mu }\cup \{x_{\mu +1}\}\subset ...\subset S. \] While the existence of the first part of this chain, i.e., $\{x_{1}\},\{x_{1},x_{2}\},...,\{x_{1},x_{2},...,x_{\mu }\}$, is based on the accessibility property of the family $\Psi (G)$, the existence of the second part of the same chain, namely $S_{\mu },S_{\mu }\cup \{x_{\mu +1}\},...,S$, stems from the exchange property of $\Psi (G)$. Now, according to Proposition \ref{prop2}, we may conclude that the perfect matching $M$ is unique in $G[N[S_{\mu }]]$. Hence, $M$ is a uniquely restricted maximum matching in $G$.\setlength {\parindent}{3.45ex} Conversely, suppose that all maximum matchings of $G$ are uniquely restricted. Let $\widehat{S}\in \Psi (G)$, $H=G[N[\widehat{S}]]$, and let $\widehat{M}$ be a maximum matching in $H$.
The graph $H$ is bipartite, being a subgraph of a bipartite graph. By Lemma \ref{lem3}, there exists a maximum matching in $G$, say $M$, such that $\widehat{M}\subseteq M$. Since $M$ is uniquely restricted in $G$, it follows that $\widehat{M}$ is uniquely restricted in $H$. According to Theorem \ref{th10}, there exists an accessibility chain of $\widehat{S}$ in $H$ \[ S_{1}\subset S_{2}\subset ...\subset S_{q-1}\subset S_{q}=\widehat{S}. \] Since $N_{H}[S_{k}]=N_{G}[S_{k}]$, we infer that $S_{k}\in \Psi (G)$, for any $k\in \{1,...,q\}$. To complete the proof, we have to show that, in addition to the accessibility property, $\Psi (G)$ also satisfies the exchange property. Let $X,Y\in \Psi (G)$ and $\left| Y\right| =\left| X\right| +1=m+1$. Hence, there is an accessibility chain \[ \{y_{1}\}\subset \{y_{1},y_{2}\}\subset ...\subset \{y_{1},...,y_{m}\}\subset \{y_{1},...,y_{m},y_{m+1}\}=Y. \] Since $Y$ is stable, $X\in \Psi (G)$, and $\left| X\right| <\left| Y\right| $, it follows that there exists some $y\in Y-X$ such that $y\notin N[X]$ (otherwise $Y$ would be a stable set included in $N[X]$ and larger than $X$, contradicting $X\in \Psi (G)$). Let $M_{X}$ be a maximum matching in $H=G[N[X]]$. Since $H$ is bipartite, $X$ is a maximum stable set in $H$, and $M_{X}$ is a maximum matching in $H$, it follows that \[ \left| X\right| +\left| M_{X}\right| =\left| N[X]\right| =\left| X\right| +\left| N(X)\right| ,\ i.e.,\ \left| M_{X}\right| =\left| N(X)\right| . \] Let $y_{k+1}\in Y$ be the first vertex in $Y$ satisfying the conditions: $y_{1},...,y_{k}\in N[X]$ and $y_{k+1}\notin N[X]$. Since $\{y_{1},...,y_{k}\}$ is a stable set included in $N[X]$, there is $\{x_{1},...,x_{k}\}\subseteq X$ such that for any $i\in \{1,...,k\}$ either $x_{i}=y_{i}$ or $x_{i}y_{i}\in M_{X}$. Now we show that $X\cup \{y_{k+1}\}\in \Psi (G)$. \emph{Case 1}. $N[X\cup \{y_{k+1}\}]=N[X]\cup \{y_{k+1}\}$.
Clearly, $X\cup \{y_{k+1}\}$ is stable in $G[N[X\cup \{y_{k+1}\}]]$, and $\left| X\cup \{y_{k+1}\}\right| =\left| X\right| +1$ ensures that $X\cup \{y_{k+1}\}\in \Psi (G)$, because $X\in \Psi (G)$ too. \emph{Case 2}. $N[X\cup \{y_{k+1}\}]\neq N[X]\cup \{y_{k+1}\}$. Suppose there are $a,b\in N(y_{k+1})-N[X]$. Hence, it follows that $\{a,b,x_{1},...,x_{k}\}$ is a stable set included in $N[\{y_{1},...,y_{k+1}\}]$ and larger than $\{y_{1},...,y_{k+1}\}$, in contradiction with the fact that $\{y_{1},...,y_{k+1}\}\in \Psi (G)$. Therefore, there exists a unique $a\in N(y_{k+1})-N[X]$. Consequently, \[ N[X\cup \{y_{k+1}\}]=N[X]\cup N[y_{k+1}]=N[X]\cup \{a,y_{k+1}\}, \] and since $ay_{k+1}\in E(G)$, we obtain that $X\cup \{y_{k+1}\}$ is a maximum stable set in $G[N[X\cup \{y_{k+1}\}]]$, i.e., $X\cup \{y_{k+1}\}\in \Psi (G)$. \rule{2mm}{2mm}\newline As an immediate consequence of Theorem \ref{th11}, we obtain the following: \begin{corollary} \label{cor1}For any bipartite graph $G$ having a perfect matching, $\Psi (G)$ is a greedoid if and only if $G$ has a unique perfect matching. \end{corollary} Corollary \ref{cor1} and, consequently, Theorem \ref{th11} are not valid for non-bipartite graphs. For example, the graph $C_{5}+e$ in Figure \ref{fig27} is a non-bipartite graph having only uniquely restricted maximum matchings (in fact, it has a unique perfect matching), but $\Psi (C_{5}+e)$ is not a greedoid, because $\{u,v\}\in \Psi (C_{5}+e)$, while $\{u\},\{v\}\notin \Psi (C_{5}+e)$. \begin{figure} \caption{Non-bipartite graphs with unique perfect matchings} \label{fig27} \end{figure} However, there are non-bipartite graphs with a unique perfect matching whose $\Psi (G)$ is a greedoid. For instance, while the graph $C_{5}+3e$ in Figure \ref{fig27} is a non-bipartite graph with a unique perfect matching, the family $\Psi (C_{5}+3e)$ is a greedoid.
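Theorem \ref{th11} reduces the greedoid question for a bipartite graph to checking that every maximum matching is uniquely restricted. A minimal Python sketch of the underlying test, based on the alternating-cycle characterization of Golumbic, Hirst and Lewenstein (Theorem \ref{th4}): for a bipartite graph, build a digraph whose nodes are the matched edges, with an arc from $xy$ to $x'y'$ whenever the non-matching edge $xy'$ exists; the matching is uniquely restricted exactly when this digraph is acyclic. The edge list below is a hypothetical encoding of the graph of Figure \ref{fig2444}, an assumption for illustration only.

```python
# Uniquely-restricted test for a matching in a bipartite graph via the
# alternating-cycle characterization (Theorem th4): a directed cycle in the
# digraph on matched edges corresponds to an M-alternating even cycle.
def is_uniquely_restricted(edges, matching):
    """matching: list of pairs (x, y) with x taken from one colour class."""
    E = {frozenset(e) for e in edges}
    M = list(matching)
    # arc i -> j whenever the non-matching edge x_i -- y_j exists in G
    succ = {i: [j for j, (x2, y2) in enumerate(M)
                if j != i and frozenset((M[i][0], y2)) in E]
            for i in range(len(M))}
    state = {}
    def dfs(i):
        state[i] = 'open'
        for j in succ[i]:
            if state.get(j) == 'open' or (j not in state and dfs(j)):
                return True          # directed cycle = alternating cycle
        state[i] = 'done'
        return False
    return not any(i not in state and dfs(i) for i in range(len(M)))

# Hypothetical encoding of the graph of Figure fig2444:
edges = [('h', 'g'), ('d', 'e'), ('d', 'g'), ('f', 'e'),
         ('f', 'g'), ('c', 'b'), ('c', 'e'), ('a', 'b')]
print(is_uniquely_restricted(edges, [('h', 'g'), ('d', 'e'), ('c', 'b')]))  # True
print(is_uniquely_restricted(edges, [('c', 'b'), ('d', 'e'), ('f', 'g')]))  # False
```

Under the assumed edge set, the sketch confirms the two examples from the text: $\{hg,de,cb\}$ is uniquely restricted, while $\{bc,de,fg\}$ admits the alternating cycle $d$-$e$-$f$-$g$-$d$ and is not.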
Let us also notice that there exist both bipartite and non-bipartite graphs without a perfect matching whose family of local maximum stable sets is a greedoid. For instance, neither $G_{1}$ nor $G_{2}$ in Figure \ref{fig20} has a perfect matching, $G_{1}$ is bipartite, and $\Psi (G_{1}),\Psi (G_{2})$ are greedoids. \begin{figure} \caption{$\Psi (G_{1})$ and $\Psi (G_{2})$ are greedoids} \label{fig20} \end{figure} Since any forest, by definition, has no cycles, the following Lemma \ref{lem4} ensures that all matchings of a forest are uniquely restricted. \begin{lemma} \label{lem4}\cite{LevMan4} If a bipartite graph has two perfect matchings $M_{1}$ and $M_{2}$, then any of its vertices that is saturated by different edges of $M_{1}$ and $M_{2}$ belongs to some cycle that is alternating with respect to at least one of $M_{1}$, $M_{2}$. \end{lemma} It is also interesting to note that Golumbic, Hirst and Lewenstein have proved the following generalization of Lemma \ref{lem4}. \begin{theorem} \label{th4}\cite{GolHiLew} A matching $M$ in a graph $G$ is uniquely restricted if and only if there is no even-length cycle with edges alternating between matched and non-matched edges. \end{theorem} Now, restricting Theorem \ref{th11} to forests, we immediately obtain that the family of local maximum stable sets of a forest forms a greedoid on its vertex set, which gives a new proof of the main finding from \cite{LevMan2}, namely Theorem \ref{th2}. \section{Conclusions} We have shown that having all maximum matchings uniquely restricted is necessary and sufficient for a bipartite graph $G$ to enjoy the property that $\Psi (G)$ is a greedoid. We have also described all the bipartite graphs having a perfect matching whose $\Psi (G)$ is a greedoid, namely those having a unique perfect matching. It seems interesting to describe a recursive structure of all bipartite graphs whose $\Psi (G)$ is a greedoid.
A linear time algorithm to decide whether a matching in a bipartite graph is uniquely restricted is presented in \cite{GolHiLew}. It is also shown there that the problem of finding a maximum uniquely restricted matching is \textbf{NP}-complete for bipartite graphs. These results motivate us to propose another open problem, namely: how to recognize bipartite graphs whose $\Psi (G)$ is a greedoid? \end{document}
=> true [webp_lossless_support] => true [post_method_support] => true [basic_authentication_support] => true [empty_option_value_support] => true [emptyok] => false [nokia_voice_call] => false [wta_voice_call] => false [wta_phonebook] => false [wta_misc] => false [wta_pdc] => false [https_support] => true [phone_id_provided] => false [max_data_rate] => 3600 [wifi] => true [sdio] => false [vpn] => false [has_cellular_radio] => true [max_deck_size] => 2000000 [max_url_length_in_requests] => 256 [max_url_length_homepage] => 0 [max_url_length_bookmark] => 0 [max_url_length_cached_page] => 0 [max_no_of_connection_settings] => 0 [max_no_of_bookmarks] => 0 [max_length_of_username] => 0 [max_length_of_password] => 0 [max_object_size] => 0 [downloadfun_support] => false [directdownload_support] => true [inline_support] => false [oma_support] => true [ringtone] => false [ringtone_3gpp] => false [ringtone_midi_monophonic] => false [ringtone_midi_polyphonic] => false [ringtone_imelody] => false [ringtone_digiplug] => false [ringtone_compactmidi] => false [ringtone_mmf] => false [ringtone_rmf] => false [ringtone_xmf] => false [ringtone_amr] => false [ringtone_awb] => false [ringtone_aac] => false [ringtone_wav] => false [ringtone_mp3] => false [ringtone_spmidi] => false [ringtone_qcelp] => false [ringtone_voices] => 1 [ringtone_df_size_limit] => 0 [ringtone_directdownload_size_limit] => 0 [ringtone_inline_size_limit] => 0 [ringtone_oma_size_limit] => 0 [wallpaper] => false [wallpaper_max_width] => 0 [wallpaper_max_height] => 0 [wallpaper_preferred_width] => 0 [wallpaper_preferred_height] => 0 [wallpaper_resize] => none [wallpaper_wbmp] => false [wallpaper_bmp] => false [wallpaper_gif] => false [wallpaper_jpg] => false [wallpaper_png] => false [wallpaper_tiff] => false [wallpaper_greyscale] => false [wallpaper_colors] => 2 [wallpaper_df_size_limit] => 0 [wallpaper_directdownload_size_limit] => 0 [wallpaper_inline_size_limit] => 0 [wallpaper_oma_size_limit] => 0 [screensaver] => 
false [screensaver_max_width] => 0 [screensaver_max_height] => 0 [screensaver_preferred_width] => 0 [screensaver_preferred_height] => 0 [screensaver_resize] => none [screensaver_wbmp] => false [screensaver_bmp] => false [screensaver_gif] => false [screensaver_jpg] => false [screensaver_png] => false [screensaver_greyscale] => false [screensaver_colors] => 2 [screensaver_df_size_limit] => 0 [screensaver_directdownload_size_limit] => 0 [screensaver_inline_size_limit] => 0 [screensaver_oma_size_limit] => 0 [picture] => false [picture_max_width] => 0 [picture_max_height] => 0 [picture_preferred_width] => 0 [picture_preferred_height] => 0 [picture_resize] => none [picture_wbmp] => false [picture_bmp] => false [picture_gif] => false [picture_jpg] => false [picture_png] => false [picture_greyscale] => false [picture_colors] => 2 [picture_df_size_limit] => 0 [picture_directdownload_size_limit] => 0 [picture_inline_size_limit] => 0 [picture_oma_size_limit] => 0 [video] => false [oma_v_1_0_forwardlock] => false [oma_v_1_0_combined_delivery] => false [oma_v_1_0_separate_delivery] => false [streaming_video] => true [streaming_3gpp] => true [streaming_mp4] => true [streaming_mov] => false [streaming_video_size_limit] => 0 [streaming_real_media] => none [streaming_flv] => false [streaming_3g2] => false [streaming_vcodec_h263_0] => 10 [streaming_vcodec_h263_3] => -1 [streaming_vcodec_mpeg4_sp] => 2 [streaming_vcodec_mpeg4_asp] => -1 [streaming_vcodec_h264_bp] => 3.0 [streaming_acodec_amr] => nb [streaming_acodec_aac] => lc [streaming_wmv] => none [streaming_preferred_protocol] => rtsp [streaming_preferred_http_protocol] => apple_live_streaming [wap_push_support] => false [connectionless_service_indication] => false [connectionless_service_load] => false [connectionless_cache_operation] => false [connectionoriented_unconfirmed_service_indication] => false [connectionoriented_unconfirmed_service_load] => false [connectionoriented_unconfirmed_cache_operation] => false 
[connectionoriented_confirmed_service_indication] => false [connectionoriented_confirmed_service_load] => false [connectionoriented_confirmed_cache_operation] => false [utf8_support] => true [ascii_support] => false [iso8859_support] => false [expiration_date] => false [j2me_cldc_1_0] => false [j2me_cldc_1_1] => false [j2me_midp_1_0] => false [j2me_midp_2_0] => false [doja_1_0] => false [doja_1_5] => false [doja_2_0] => false [doja_2_1] => false [doja_2_2] => false [doja_3_0] => false [doja_3_5] => false [doja_4_0] => false [j2me_jtwi] => false [j2me_mmapi_1_0] => false [j2me_mmapi_1_1] => false [j2me_wmapi_1_0] => false [j2me_wmapi_1_1] => false [j2me_wmapi_2_0] => false [j2me_btapi] => false [j2me_3dapi] => false [j2me_locapi] => false [j2me_nokia_ui] => false [j2me_motorola_lwt] => false [j2me_siemens_color_game] => false [j2me_siemens_extension] => false [j2me_heap_size] => 0 [j2me_max_jar_size] => 0 [j2me_storage_size] => 0 [j2me_max_record_store_size] => 0 [j2me_screen_width] => 0 [j2me_screen_height] => 0 [j2me_canvas_width] => 0 [j2me_canvas_height] => 0 [j2me_bits_per_pixel] => 0 [j2me_audio_capture_enabled] => false [j2me_video_capture_enabled] => false [j2me_photo_capture_enabled] => false [j2me_capture_image_formats] => none [j2me_http] => false [j2me_https] => false [j2me_socket] => false [j2me_udp] => false [j2me_serial] => false [j2me_gif] => false [j2me_gif89a] => false [j2me_jpg] => false [j2me_png] => false [j2me_bmp] => false [j2me_bmp3] => false [j2me_wbmp] => false [j2me_midi] => false [j2me_wav] => false [j2me_amr] => false [j2me_mp3] => false [j2me_mp4] => false [j2me_imelody] => false [j2me_rmf] => false [j2me_au] => false [j2me_aac] => false [j2me_realaudio] => false [j2me_xmf] => false [j2me_wma] => false [j2me_3gpp] => false [j2me_h263] => false [j2me_svgt] => false [j2me_mpeg4] => false [j2me_realvideo] => false [j2me_real8] => false [j2me_realmedia] => false [j2me_left_softkey_code] => 0 [j2me_right_softkey_code] => 0 
[j2me_middle_softkey_code] => 0 [j2me_select_key_code] => 0 [j2me_return_key_code] => 0 [j2me_clear_key_code] => 0 [j2me_datefield_no_accepts_null_date] => false [j2me_datefield_broken] => false [receiver] => false [sender] => false [mms_max_size] => 0 [mms_max_height] => 0 [mms_max_width] => 0 [built_in_recorder] => false [built_in_camera] => true [mms_jpeg_baseline] => false [mms_jpeg_progressive] => false [mms_gif_static] => false [mms_gif_animated] => false [mms_png] => false [mms_bmp] => false [mms_wbmp] => false [mms_amr] => false [mms_wav] => false [mms_midi_monophonic] => false [mms_midi_polyphonic] => false [mms_midi_polyphonic_voices] => 0 [mms_spmidi] => false [mms_mmf] => false [mms_mp3] => false [mms_evrc] => false [mms_qcelp] => false [mms_ota_bitmap] => false [mms_nokia_wallpaper] => false [mms_nokia_operatorlogo] => false [mms_nokia_3dscreensaver] => false [mms_nokia_ringingtone] => false [mms_rmf] => false [mms_xmf] => false [mms_symbian_install] => false [mms_jar] => false [mms_jad] => false [mms_vcard] => false [mms_vcalendar] => false [mms_wml] => false [mms_wbxml] => false [mms_wmlc] => false [mms_video] => false [mms_mp4] => false [mms_3gpp] => false [mms_3gpp2] => false [mms_max_frame_rate] => 0 [nokiaring] => false [picturemessage] => false [operatorlogo] => false [largeoperatorlogo] => false [callericon] => false [nokiavcard] => false [nokiavcal] => false [sckl_ringtone] => false [sckl_operatorlogo] => false [sckl_groupgraphic] => false [sckl_vcard] => false [sckl_vcalendar] => false [text_imelody] => false [ems] => false [ems_variablesizedpictures] => false [ems_imelody] => false [ems_odi] => false [ems_upi] => false [ems_version] => 0 [siemens_ota] => false [siemens_logo_width] => 101 [siemens_logo_height] => 29 [siemens_screensaver_width] => 101 [siemens_screensaver_height] => 50 [gprtf] => false [sagem_v1] => false [sagem_v2] => false [panasonic] => false [sms_enabled] => true [wav] => false [mmf] => false [smf] => false [mld] => false 
[midi_monophonic] => false [midi_polyphonic] => false [sp_midi] => false [rmf] => false [xmf] => false [compactmidi] => false [digiplug] => false [nokia_ringtone] => false [imelody] => false [au] => false [amr] => false [awb] => false [aac] => true [mp3] => true [voices] => 1 [qcelp] => false [evrc] => false [flash_lite_version] => [fl_wallpaper] => false [fl_screensaver] => false [fl_standalone] => false [fl_browser] => false [fl_sub_lcd] => false [full_flash_support] => false [css_supports_width_as_percentage] => true [css_border_image] => webkit [css_rounded_corners] => webkit [css_gradient] => none [css_spriting] => true [css_gradient_linear] => none [is_transcoder] => false [transcoder_ua_header] => user-agent [rss_support] => false [pdf_support] => true [progressive_download] => true [playback_vcodec_h263_0] => 10 [playback_vcodec_h263_3] => -1 [playback_vcodec_mpeg4_sp] => 0 [playback_vcodec_mpeg4_asp] => -1 [playback_vcodec_h264_bp] => 3.0 [playback_real_media] => none [playback_3gpp] => true [playback_3g2] => false [playback_mp4] => true [playback_mov] => false [playback_acodec_amr] => nb [playback_acodec_aac] => none [playback_df_size_limit] => 0 [playback_directdownload_size_limit] => 0 [playback_inline_size_limit] => 0 [playback_oma_size_limit] => 0 [playback_acodec_qcelp] => false [playback_wmv] => none [hinted_progressive_download] => true [html_preferred_dtd] => html4 [viewport_supported] => true [viewport_width] => device_width_token [viewport_userscalable] => no [viewport_initial_scale] => [viewport_maximum_scale] => [viewport_minimum_scale] => [mobileoptimized] => false [handheldfriendly] => false [canvas_support] => full [image_inlining] => true [is_smarttv] => false [is_console] => false [nfc_support] => true [ux_full_desktop] => false [jqm_grade] => A [is_sencha_touch_ok] => false [controlcap_is_smartphone] => default [controlcap_is_ios] => default [controlcap_is_android] => default [controlcap_is_robot] => default [controlcap_is_app] => 
default [controlcap_advertised_device_os] => default [controlcap_advertised_device_os_version] => default [controlcap_advertised_browser] => default [controlcap_advertised_browser_version] => default [controlcap_is_windows_phone] => default [controlcap_is_full_desktop] => default [controlcap_is_largescreen] => default [controlcap_is_mobile] => default [controlcap_is_touchscreen] => default [controlcap_is_wml_preferred] => default [controlcap_is_xhtmlmp_preferred] => default [controlcap_is_html_preferred] => default [controlcap_form_factor] => default [controlcap_complete_device_name] => default ) ) </code></pre></p> </div> <div class="modal-footer"> <a href="#!" class="modal-action modal-close waves-effect waves-green btn-flat ">close</a> </div> </div> </td></tr></table> </div> <div class="section"> <h1 class="header center orange-text">About this comparison</h1> <div class="row center"> <h5 class="header light"> The primary goal of this project is simple<br /> I wanted to know which user agent parser is the most accurate in each part - device detection, bot detection and so on...<br /> <br /> The secondary goal is to provide a source for all user agent parsers to improve their detection based on this results.<br /> <br /> You can also improve this further, by suggesting ideas at <a href="https://github.com/ThaDafinser/UserAgentParserComparison">ThaDafinser/UserAgentParserComparison</a><br /> <br /> The comparison is based on the abstraction by <a href="https://github.com/ThaDafinser/UserAgentParser">ThaDafinser/UserAgentParser</a> </h5> </div> </div> <div class="card"> <div class="card-content"> Comparison created <i>2016-02-13 13:29:48</i> | by <a href="https://github.com/ThaDafinser">ThaDafinser</a> </div> </div> </div> <script src="https://code.jquery.com/jquery-2.1.4.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.97.3/js/materialize.min.js"></script> <script 
src="http://cdnjs.cloudflare.com/ajax/libs/list.js/1.1.1/list.min.js"></script> <script> $(document).ready(function(){ // the "href" attribute of .modal-trigger must specify the modal ID that wants to be triggered $('.modal-trigger').leanModal(); }); </script> </body> </html>
code
Nat Khabar Express: 04 August 2017. Kamlesh Pandey, who played a police inspector in 'Crime Patrol', dies by suicide. Actor Kamlesh Pandey, who played a police officer in the TV serial 'Crime Patrol', died by suicide on Monday night by shooting himself. He had also worked in several other TV serials. The 38-year-old actor shot himself in the chest at the house of his wife's sister's husband in Sai Colony... TV artist Kamlesh Pandey dies by suicide by shooting himself. Jabalpur (MP), 14 December (Bhasha): Kamlesh Pandey, who played an inspector in the television serial Crime Patrol, shot himself and died by suicide here. He was 38 years old. In Haldwani, martyred Major Kamlesh Pandey was given a final farewell with tearful eyes...
hindi
Mumbai. Bollywood's "Khiladi Kumar" Akshay Kumar is feeling proud of his upcoming film Mission Mangal. The trailer of Akshay's upcoming film Mission Mangal has been released. Actresses such as Vidya Balan, Sonakshi Sinha, Taapsee Pannu and Nithya Menen play important roles in Mission Mangal. The film will showcase the achievements of the Mars Orbiter Mission. Directed by Jagan Shakti, the film will also, for the first time, give the planet Mars itself an important role. Earlier, Taapsee Pannu had said that space-related films should be promoted in the country. Akshay said that before working on the film, we did not have much understanding of the Mars Orbiter Mission, but gradually we learned a great deal, and that was possible thanks to the film's director Jagan. The budget of my film 2.0 alone was Rs 500 crore, while this mission (India's Mars mission) was completed in just Rs 450 crore, so I feel very proud of this project. It is reported that Akshay plays the director of the mission, Rakesh Dhawan, in the film. He leads the project in the film and tries to inspire the others that even this very difficult mission can succeed. About the film, Akshay has said that it is a film about women.
hindi
Need Expert Computer Services in Ft Worth, Texas? Computer upgrades and repairs: we can do this at your place, or you can come to our offices. Examples include keyboard replacement, fixing systems that hang, memory upgrades and software installations. Wireless and wired networking: we work across the different topologies, covering network design, setup and support. We then implement and maintain the network. Recovering lost data: we retrieve data lost through the formatting and deletion of computer files. We can even retrieve data from a formatted computer and back up your data so that no information is lost in the future.
english
\begin{document} \title[Fluctuations of ergodic averages]{Fluctuations of ergodic averages\\ for Actions of Groups of Polynomial Growth.} \author[Nikita Moriakov]{Nikita Moriakov} \address{Delft Institute of Applied Mathematics, Delft University of Technology, P.O. Box 5031, 2600 GA Delft, The Netherlands} \email{[email protected]} \subjclass{Primary 28D05, 28D15} \renewcommand{\subjclassname}{\textup{2000} Mathematics Subject Classification} \date{\today} \begin{abstract} It was shown by S. Kalikow and B. Weiss that, given a measure-preserving action of $\mathbb{Z}^d$ on a probability space $\mathrm{X}$ and a nonnegative measurable function $f$ on $\mathrm{X}$, the probability that the sequence of ergodic averages $$ \frac 1 {(2k+1)^d} \sum\limits_{g \in [-k,\dots,k]^d} f(g \cdot x) $$ has at least $n$ fluctuations across an interval $(\alpha,\beta)$ can be bounded from above by $c_1 c_2^n$ for some universal constants $c_1 \in \mathbb{R}$ and $c_2 \in (0,1)$, which depend only on $d,\alpha,\beta$. The purpose of this article is to generalize this result to measure-preserving actions of groups of polynomial growth. As the main tool we develop a generalization of the effective Vitali covering theorem for groups of polynomial growth. \end{abstract} \maketitle \section{Introduction} Given an integer $n \in \mathbb{Z}_+$ and some numbers $\alpha,\beta \in \mathbb{R}$ such that $\alpha<\beta$, a sequence of real numbers $(a_i)_{i = 1}^k$ is said to \textbf{fluctuate at least $n$ times} across the interval $(\alpha,\beta)$ if there are indices $1 \leq i_0 < i_1 < \dots < i_n \leq k$ such that \begin{aufziii} \item if $j$ is odd, then $a_{i_j} < \alpha$; \item if $j$ is even, then $a_{i_j} > \beta$.
\end{aufziii} In this case it is clear that for every even $j$ we have \[ a_{i_j} > \beta \quad \text{and} \quad a_{i_{j+1}} < \alpha, \] i.e., $(a_i)_{i=1}^k$ has at least $\lceil \frac n 2 \rceil$ \textbf{downcrossings} from $\beta$ to $\alpha$ and at least $\lfloor \frac n 2 \rfloor$ \textbf{upcrossings} from $\alpha$ to $\beta$. If $(a_i)_{i \geq 1}$ is an infinite sequence of real numbers, we use the same terminology and say that $(a_i)_{i \geq 1}$ fluctuates at least $n$ times across the interval $(\alpha,\beta)$ if some initial segment $(a_i)_{i=1}^k$ of the sequence fluctuates at least $n$ times across $(\alpha,\beta)$. We denote the set of all real-valued sequences having at least $n$ fluctuations across an interval $(\alpha, \beta)$ by $\mathcal{F}_{(\alpha,\beta)}^n$; it will be clear from the context whether we are talking about finite or infinite sequences. The main result of this article is the following theorem, which generalizes the results in \cite{kw1999} about fluctuations of averages of nonnegative functions. \begin{thmn} Let $\Gamma$ be a group of polynomial growth and let $(\alpha, \beta) \subset \mathbb{R}_{>0}$ be some nonempty interval. Then there are constants $c_1,c_2 \in \mathbb{R}_{>0}$ with $c_2<1$, which depend only on $\Gamma$, $\alpha$ and $\beta$, such that the following assertion holds. For any probability space $\mathrm X=(X, \mathcal{B}, \mu)$, any measure-preserving action of $\Gamma$ on $\mathrm X$ and any measurable $f \geq 0$ on $X$ we have \[ \mu(\{ x: (\avg{g \in \mathrm B(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < c_1 c_2^N \] for all $N \geq 1$. \end{thmn} The paper is structured as follows. We provide some background on groups of polynomial growth in Section \ref{ss.grouppolgr}, discuss some special properties of averages on groups of polynomial growth and a transference principle in Section \ref{ss.avgongrp}, and prove the effective Vitali covering theorem in Section \ref{ss.vitcov}.
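As a quick illustration of the notions of fluctuations, downcrossings and upcrossings introduced above, we record a small concrete example (the particular sequence is chosen purely for illustration):

\begin{exa}
Let $(\alpha,\beta) = (1,2)$ and consider the finite sequence $(a_i)_{i=1}^4 = (3, 0, 3, 0)$. Choosing the indices $i_0 = 1 < i_1 = 2 < i_2 = 3 < i_3 = 4$, we have $a_{i_j} > \beta$ for every even $j$ and $a_{i_j} < \alpha$ for every odd $j$, so the sequence fluctuates at least $n = 3$ times across $(1,2)$, i.e., $(a_i)_{i=1}^4 \in \mathcal{F}_{(1,2)}^3$. Accordingly, it has $\lceil 3/2 \rceil = 2$ downcrossings from $\beta$ to $\alpha$ (the pairs $(i_0,i_1)$ and $(i_2,i_3)$) and $\lfloor 3/2 \rfloor = 1$ upcrossing from $\alpha$ to $\beta$ (the pair $(i_1,i_2)$).
\end{exa}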
The main theorem of this paper is Theorem \ref{t.expdec}, which is proved in Section \ref{s.upcrineq}. This research was done during the author's PhD studies under the supervision of Markus Haase. I would like to thank him for his support and advice. \section{Preliminaries} \subsection{Groups of Polynomial Growth} \label{ss.grouppolgr} Let $\Gamma$ be a finitely generated group and $\{ \gamma_1,\dots,\gamma_k\}$ be a fixed generating set. Each element $\gamma \in \Gamma$ can be represented as a product $\gamma_{i_1}^{p_1} \gamma_{i_2}^{p_2} \dots \gamma_{i_l}^{p_l}$ for some indices $i_1,i_2,\dots,i_l \in \{1,\dots,k\}$ and some integers $p_1,p_2,\dots,p_l \in \mathbb{Z}$. We define the \textbf{norm} of an element $\gamma \in \Gamma$ by \[ \| \gamma \|:=\inf\{ \sum\limits_{i=1}^l |p_i|: \gamma = \gamma_{i_1}^{p_1} \gamma_{i_2}^{p_2} \dots \gamma_{i_l}^{p_l} \}, \] where the infimum is taken over all representations of $\gamma$ as a product of the generating elements. The norm $\| \cdot \|$ on $\Gamma$, in general, does depend on the generating set. However, it is easy to show \cite[Corollary 6.4.2]{ceccherini2010} that two different generating sets produce equivalent norms. We will always say which generating set is used in the definition of a norm, but we will omit an explicit reference to the generating set later on. For every $n \in \mathbb{R}_+$ let \[ \mathrm B(n):= \{ \gamma \in \Gamma: \| \gamma \| \leq n\} \] be the closed ball of radius $n$. The norm $\| \cdot \|$ yields a right invariant metric on $\Gamma$ defined by \[ d_R(x,y):=\| x y^{-1}\| \quad (x,y \in \Gamma), \] and a left invariant metric on $\Gamma$ defined by \[ d_L(x,y):=\| x^{-1} y\| \quad (x,y \in \Gamma), \] which we call the \textbf{word metrics}. The right invariance of $d_R$ means that the right multiplication \[ R_g: \Gamma \to \Gamma, \quad x \mapsto x g \quad ( x \in \Gamma) \] is an isometry for every $g \in \Gamma$ with respect to $d_R$.
Similarly, the left invariance of $d_L$ means that the left multiplications are isometries with respect to $d_L$. We let $d:=d_R$ and view $\Gamma$ as a metric space with the metric $d$. For $x\in \Gamma$, $r \in \mathbb{R}_+$ let \[ \mathrm B(x,r):=\{ y \in \Gamma: d(x,y) \leq r\} \] be the closed ball of radius $r$ with center $x$. Using the right invariance of the metric $d$, it is easy to see that \[ \cntm{\mathrm B(x,r)} = \cntm{\mathrm B(y,r)} \quad \text{ for all } x,y \in \Gamma. \] Let $\mathrm{e} \in \Gamma$ be the neutral element. It is clear that \[ \mathrm B(n) = \{ \gamma: d_R(\mathrm{e},\gamma) \leq n\} = \{ \gamma: d_L(\mathrm{e},\gamma) \leq n\}, \] i.e., the ball $\mathrm B(n)$ is precisely the ball $\mathrm B(\mathrm{e}, n)$ with respect to the left and the right word metric. It is important to understand how fast the balls $\mathrm B(n)$ in the group $\Gamma$ grow as $n \to \infty$. The \textbf{growth function} $\gamma: \mathbb{N} \to \mathbb{N}$ is defined by \[ \gamma(n):=\cntm{\mathrm B(n)} \quad (n \in \mathbb{N}). \] We say that the group $\Gamma$ is of \textbf{polynomial growth} if there are constants $C,d>0$ such that for all $n \geq 1$ we have \[ \gamma(n) \leq C(n^d+1). \] \begin{exa} \label{ex.zdex} Consider the group $\mathbb{Z}^d$ for $d \in \mathbb{N}$ and let $\gamma_1,\dots,\gamma_d \in \mathbb{Z}^d$ be the standard basis elements of $\mathbb{Z}^d$. That is, $\gamma_i$ is defined by \[ \gamma_i(j):=\delta_i^j \quad (j=1,\dots, d) \] for all $i=1,\dots,d$. We consider the generating set given by elements $\sum\limits_{k \in I} (-1)^{\varepsilon_k}\gamma_k$ for all subsets $I \subseteq [1,d]$ and all functions $\varepsilon_{\cdot} \in \{ 0,1\}^I$. Then it is easy to see by induction on dimension that $\mathrm B(n) = [-n,\dots,n]^d$, hence \[ \cntm{\mathrm B(n)} = (2n+1)^d \quad \text{ for all } n \in \mathbb{N} \] with respect to this generating set, i.e., $\mathbb{Z}^d$ is a group of polynomial growth.
\end{exa} Let $d \in \mathbb{Z}_+$. We say that the group $\Gamma$ has \textbf{polynomial growth of degree $d$} if there is a constant $C>0$ such that \[ \frac 1 C n^d \leq \gamma(n) \leq C n^d \quad \text{ for all } n \in \mathbb{N}. \] It was shown in \cite{bass1972} that, if $\Gamma$ is a finitely generated nilpotent group, then $\Gamma$ has polynomial growth of some degree $d \in \mathbb{Z}_+$. Furthermore, one can show \cite[Proposition 6.6.6]{ceccherini2010} that if $\Gamma$ is a group and $\Gamma' \leq \Gamma$ is a finite index, finitely generated nilpotent subgroup, having polynomial growth of degree $d \in \mathbb{Z}_+$, then the group $\Gamma$ has polynomial growth of degree $d$ as well. A surprising fact is that the converse is true as well. Namely, it was proved in \cite{gromov1981} that, if $\Gamma$ is a group of polynomial growth, then there is a finite index, finitely generated nilpotent subgroup $\Gamma' \leq \Gamma$. It follows that if $\Gamma$ is a group of polynomial growth with the growth function $\gamma$, then there is a constant $C>0$ and an integer $d\in \mathbb{Z}_+$, called the \textbf{degree of polynomial growth}, such that \[ \frac 1 C n^d \leq \gamma(n) \leq C n^d \quad \text{ for all } n \in \mathbb{N}. \] An even stronger result was obtained in \cite{pansu1983}, where it is shown that, if $\Gamma$ is a group of polynomial growth of degree $d \in \mathbb{Z}_+$, then the limit \begin{equation} \label{eq.pansu} c_{\Gamma}:=\lim\limits_{n \to \infty} \frac{\gamma(n)}{n^d} \end{equation} exists. As a consequence, one can show that groups of polynomial growth are amenable. \begin{prop} Let $\Gamma$ be a group of polynomial growth. Then $(\mathrm B(n))_{n \geq 1}$ is a F{\o}lner sequence in $\Gamma$. \end{prop} \begin{proof} We want to show that for every $g \in \Gamma$ \[ \lim\limits_{n \to \infty} \frac{\cntm{g \mathrm B(n) \triangle \mathrm B(n)}}{\cntm{\mathrm B(n)}} = 0. \] Let $m:=d(g,\mathrm{e}) \in \mathbb{Z}_+$.
Then $g \mathrm B(n) \subseteq \mathrm B(n+m)$, hence \[ \frac{\cntm{g \mathrm B(n) \triangle \mathrm B(n)}}{\cntm{\mathrm B(n)}} \leq \frac{\cntm{\mathrm B(n+m)} - \cntm{ \mathrm B(n)}} {\cntm{\mathrm B(n)}} \to 0, \] where we use the existence of the limit in Equation \eqref{eq.pansu}. \end{proof} It will be useful later to have a special notion for the points which are `close enough' to the boundary of a ball in $\Gamma$. Let $W:=\mathrm B(y,s)$ be some ball in $\Gamma$. For a given $r \in \mathbb{R}_{>0}$ the \textbf{$r$-interior} of $W$ is defined as \[ \intr{r}(W):=\mathrm B(y,(1-5/r)s). \] The \textbf{$r$-boundary} of $W$ is defined as \[ \bdr{r}(W):=W \setminus \intr{r}(W). \] If a set $\mathcal{C}$ is a disjoint collection of balls in $\Gamma$, we define the $r$-interior and the $r$-boundary of $\mathcal{C}$ as \[ \intr{r}(\mathcal{C}):=\bigsqcup\limits_{W \in \mathcal{C}} \intr{r}(W) \] and \[ \bdr{r}(\mathcal{C}):=\bigsqcup\limits_{W \in \mathcal{C}} \bdr{r}(W) \] respectively. It will be essential to know that the $r$-boundary becomes small (respectively, the $r$-interior becomes large) for large enough balls and large enough $r$. More precisely, we state the following lemma, whose proof follows from the result of Pansu (see Equation \eqref{eq.pansu}). \begin{lemma} \label{l.smallbdr} Let $\Gamma$ be a group of polynomial growth and $\delta \in (0,1)$ be some constant. Then there exist constants $n_0, r_0 \in \mathbb{N}$, depending only on $\Gamma$ and $\delta$, such that the following holds. If $\mathcal{C}$ is a finite collection of disjoint balls with radii greater than $n_0$, then for all $r > r_0$ \[ \cntm{\intr{r} ( \mathcal{C} )} > (1-\delta) \cntm{\bigsqcup\limits_{W \in \mathcal{C}} W} \] and \[ \cntm{\bdr{r} ( \mathcal{C} )} < \delta \cntm{\bigsqcup\limits_{W \in \mathcal{C}} W}.
\] \end{lemma} \subsection{Averages on Groups of Polynomial Growth and a Transference Principle} \label{ss.avgongrp} We collect some useful results about averages on groups of polynomial growth in this subsection. At the end of the subsection we will discuss a transference principle, which will become essential later in Section \ref{s.upcrineq}. We start with a preliminary lemma, whose proof is straightforward. \begin{lemma} \label{l.ballgrowth} Let $f$ be a nonnegative function on a group of polynomial growth $\Gamma$. Let $\{ B_1, \dots, B_k\}$ be some disjoint balls in $\Gamma$ such that \[ \avg{g \in B_i} f(g) > \beta \quad \text{for each } i=1,\dots,k. \] Let $B$ be a ball in $\Gamma$, containing all $B_i$'s, such that \begin{equation*} \avg{g \in B} f(g) < \alpha. \end{equation*} Then \[ \frac{\sum\limits_{i=1}^k \cntm{B_i}}{\cntm{B}} < \frac{\alpha}{\beta}. \] \end{lemma} \noindent We refine this result as follows. \begin{lemma} \label{l.uskip} Let $\varepsilon \in (0,1)$. There is $n_0 \in \mathbb{N}$, depending only on the group of polynomial growth $\Gamma$ and $\varepsilon$, such that the following assertion holds. Given a nonnegative function $f$ on $\Gamma$, the condition \begin{equation} \label{eq.fluctcond} \avg{g \in \mathrm B(n)} f( g ) > \beta \quad \text{and} \quad \avg{g \in \mathrm B(m)} f( g ) < \alpha \end{equation} for some $n_0 \leq n < m$ and an interval $(\alpha,\beta) \subset \mathbb{R}_{>0}$ implies that \[ \frac m n > (1-\varepsilon) \left(\frac {\beta}{\alpha} \right)^{1/d}. \] \end{lemma} \begin{proof} First of all, note that condition \eqref{eq.fluctcond} implies that \[ \frac{\cntm{\mathrm B(m)}}{\cntm{\mathrm B(n)}} > \frac{\beta}{\alpha} \] for \emph{all} indices $n<m$ (see the previous lemma).
Using the result of Pansu (Equation \eqref{eq.pansu}), we deduce that there is $n_0$ depending only on $\Gamma$ and $\varepsilon$ such that for all $n_0 \leq n<m$ we have \[ \frac{m^d}{n^d}>(1-\varepsilon)^d \frac{\cntm{\mathrm B(m)}}{\cntm{\mathrm B(n)}}. \] This implies that \[ \frac m n > (1-\varepsilon) \left( \frac{\beta}{\alpha}\right)^{1/d}, \] and the proof of the lemma is complete. \end{proof} \noindent Lemma \ref{l.uskip} has the following straightforward corollary. \begin{cor} \label{c.skip} For a constant $\varepsilon \in (0,1)$ and a group of polynomial growth $\Gamma$ let $n_0:=n_0(\varepsilon)$ be given by Lemma \ref{l.uskip}. Given a measure-preserving action of $\Gamma$ on a probability space $\mathrm X$, a nonnegative function $f$ on $X$ and $x \in X$, the condition that the sequence \[ \left( \avg{g \in \mathrm B(i)} f(g \cdot x) \right)_{i=n}^m \] fluctuates at least $k$ times across an interval $(\alpha, \beta) \subset \mathbb{R}_{>0}$ with $n>n_0$ implies that \[ \frac m n > (1-\varepsilon)^{\lceil \frac k 2 \rceil} \left( \frac{\beta}{\alpha} \right)^{{\lceil \frac k 2 \rceil} \cdot \frac 1 d}. \] \end{cor} Finally, we will need an adapted version of the `easy direction' in Calder\'{o}n's transference principle for groups of polynomial growth. Suppose that a group $\Gamma$ of polynomial growth acts on a probability space $\mathrm X=(X,\mathcal{B},\mu)$ by measure-preserving transformations and that we want to estimate the size of a measurable set $E$. Fix an integer $m \in \mathbb{Z}_+$. For an integer $L \in \mathbb{N}$ and a point $x \in X$ we define the set \[ B_{L,m,x}:= \{ g: \ g \cdot x \in E \text{ and } \| g \| \leq L-m \} \subseteq \mathrm B(L). \] The lemma below tells us that each universal upper bound on the density of $B_{L,m,x}$ in $\mathrm B(L)$ bounds the measure of $E$ from above as well.
\begin{lemma}[Transference principle] \label{l.caldtrans} Suppose that for a given constant $t \in \mathbb{R}_+$ the following holds: there is some $L_0 \in \mathbb{N}$ such that for all $L \geq L_0$ and for $\mu$-almost all $x \in X$ we have \[ \frac 1 {\cntm{\mathrm B(L)}} \cntm{B_{L,m,x}} \leq t. \] Then \[ \mu(E) \leq t. \] \end{lemma} \begin{proof} Indeed, since $\Gamma$ acts on $\mathrm X$ by measure-preserving transformations, we have \[ \sum\limits_{g \in \mathrm B(L)} \int\limits_{\mathrm X} \independenti{E}(g \cdot x) d \mu = \cntm{\mathrm B(L)} \mu(E). \] Then \begin{align*} \mu(E) &= \int\limits_{\mathrm X} \left( \frac 1 {\cntm{\mathrm B(L)}} \sum\limits_{g \in \mathrm B(L)} \independenti{E} (g \cdot x) \right) d \mu \leq \\ &\leq \int\limits_{\mathrm X} \left( \frac{\cntm{B_{L,m,x}} + \cntm{\mathrm B(L) \setminus \mathrm B(L-m)}}{\cntm{\mathrm B(L)}} \right) d \mu, \end{align*} and the proof is complete: for $L \geq L_0$ the first summand in the integrand is at most $t$, while $\cntm{\mathrm B(L) \setminus \mathrm B(L-m)}/\cntm{\mathrm B(L)} \to 0$ as $L \to \infty$ since $\Gamma$ is a group of polynomial growth (see Equation \eqref{eq.pansu}). \end{proof} \subsection{Vitali Covering Lemma} \label{ss.vitcov} In this section we discuss the generalization of the Effective Vitali Covering lemma from \cite{kw1999} to groups of polynomial growth. We fix some notation first. Given a number $t \in \mathbb{R}_+$ and a ball $B=\mathrm B(x,r) \subseteq \mathrm X$ in a metric space $\mathrm X$, we denote by $t \cdot B$ the $t$-enlargement of $B$, i.e., the ball $\mathrm B(x,rt)$. We state the basic finitary Vitali covering lemma first, whose proof is well-known. \begin{lemma} \label{l.fvc} Let $\mathcal{B}:=\{ B_1,\dots,B_n \}$ be a finite collection of balls in a metric space $\mathrm X$. Then there is a finite subset $\{ B_{j_1},\dots, B_{j_m}\} \subseteq \mathcal{B}$ consisting of pairwise disjoint balls such that \[ \bigcup\limits_{i=1}^n B_i \subseteq \bigcup\limits_{l=1}^m 3 \cdot B_{j_l}.
\] \end{lemma} The infinite version of this lemma is used, for example, in the proof of the standard Vitali covering theorem, which can be generalized to arbitrary doubling measure spaces. However, the standard Vitali covering theorem is not sufficient for our purposes. It was shown in \cite{kw1999} that the groups $\mathbb{Z}^d$ for $d \in \mathbb{N}$, which are of course doubling measure spaces when endowed with the counting measure and the word metric, enjoy a particularly useful `effective' version of the theorem. We prove a generalization of this result to groups of polynomial growth below. \begin{thm}[Effective Vitali covering] \label{t.evc} Let $\Gamma$ be a group of polynomial growth of degree $d$. Let $C \geq 1$ be a constant such that \[ \frac 1 C m^d \leq \gamma(m) \leq C m^d \quad \text{ for all } m \in \mathbb{N} \] and let $c:=3^d C^2$. Let $R,n,r>2$ be some fixed natural numbers and $X \subseteq \mathrm B(R)$ be a subset of the ball $\mathrm B(R) \subset \Gamma$. Suppose that to each $p \in X$ there are associated balls $A_1(p),\dots,A_n(p)$ such that the following assertions hold: \begin{aufzi} \item $p \in A_i(p) \subseteq \mathrm B(R)$ for $i=1,\dots,n$; \item For all $i=1,\dots,n-1$ the $r$-enlargement of $A_i(p)$ is contained in $A_{i+1}(p)$. \end{aufzi} Let \[ S_i:=\bigcup\limits_{p \in X} A_i(p) \quad (i=1,\dots,n). \] There is a disjoint subcollection $\mathcal{C}$ of $\{ A_i(p) \}_{p \in X, i=1,\dots,n}$ such that the following conclusions hold: \begin{aufzi} \item The union of $\left( 1+ \frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ together with the set $S_n \setminus S_1$ covers all but at most $\left( \frac {c-1} c \right)^n$ of $S_n$; \item The measure of the union of $\left( 1+ \frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ is at least $\left(1 - \left( \frac {c-1} c \right)^n\right)$ times the measure of $S_1$.
\end{aufzi} \end{thm} \begin{rem} \label{r.maxball} Prior to proceeding to the proof of the theorem we make the following remarks. Firstly, we do not require the balls $A_i(p)$ from the theorem to be centered around $p$. Secondly, the balls of the form $A_i(p)$ for $i=1,\dots,n$ and $p \in X$ will be called \textbf{$i$-th level balls}. An $i$-th level ball $A_i(p)$ is called \textbf{maximal} if it is not contained in any other $i$-th level ball. It is clear that each $S_i$ is the union of maximal $i$-th level balls as well. It will follow from the proof below that the balls in $\mathcal{C}$ can be chosen to be maximal. \end{rem} \begin{proof} To simplify the notation, let \[ s:=1+\frac 4 {r-2} \] be the scaling factor that is used in the theorem. The main idea of the proof is to cover a positive fraction of $S_n$ by a disjoint union of $n$-level balls via Lemma \ref{l.fvc}, then cover a positive fraction of what remains in $S_{n-1}$ by a disjoint union of $(n-1)$-level balls and so on. Thus we begin by covering a fraction of $S_n$ by $n$-level balls. Let $\mathcal{C}_n \subseteq \{ A_n(p) \}_{p \in X}$ be the collection of disjoint balls obtained by applying Lemma \ref{l.fvc} to the collection of all $n$-th level \emph{maximal} balls. For every ball $B=\mathrm B(p,m) \in \mathcal{C}_n$ we have \[ \cntm{3 \cdot B} \leq C (3m)^d \leq C^2 3^d \cntm{B}, \] hence \[ \cntm{S_n} \leq \cntm{\bigcup\limits_{B \in \mathcal{C}_n} 3 \cdot B} \leq \sum\limits_{B \in \mathcal{C}_n} c \cntm{B} \] and so \[ \cntm{ \bigsqcup\limits_{B \in \mathcal{C}_n} B} \geq \frac 1 c \cntm{S_n}. \] Let $U_n:=\bigsqcup\limits_{B \in \mathcal{C}_n} B$. The computation above shows that \begin{equation} \label{eq.stepa} U_n \text{ covers at least a } \frac 1 c \text{-fraction of } S_n \end{equation} and \begin{equation} \label{eq.stepb} \cntm{S_1} - \cntm{U_n} \leq \cntm{S_1} - \frac 1 c \cntm{S_1} = \frac{c-1} c \cntm{S_1}.
\end{equation} We proceed by restricting to $(n-1)$-level balls. Assume for the moment that the following claim is true. \begin{claimn} If a ball $A_{n-1}(p)$ has a nonempty intersection with $U_n$, then $A_{n-1}(p)$ is contained in the $s$-enlargement of the ball in $\mathcal{C}_n$ that it intersects. \end{claimn} \noindent Let \begin{align*} \widetilde{\mathcal{C}}_{n-1}:=\{ A_{n-1}(p): \ &A_{n-1}(p) \text{ is a maximal } (n-1)-\text{level ball} \\ &\text{ such that } A_{n-1}(p) \cap U_n = \varnothing\} \end{align*} be the collection of all maximal $(n-1)$-level balls disjoint from $U_n$ and let $\widetilde U_{n-1}$ be its union. We apply Lemma \ref{l.fvc} once again to obtain a collection $\mathcal{C}_{n-1} \subseteq \widetilde{\mathcal{C}}_{n-1}$ of pairwise disjoint maximal balls such that \[ \cntm{ \bigsqcup\limits_{B \in \mathcal{C}_{n-1}} B} \geq \frac 1 c \cntm{\widetilde U_{n-1}}. \] Let $U_{n-1}:=\bigsqcup\limits_{B \in \mathcal{C}_{n-1}} B$. In order to show that \begin{equation} \label{eq.s1est} \cntm{S_1} - \cntm{ \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup U_{n-1}} \leq \left( \frac{c-1} c \right)^2 \cntm{S_1} \end{equation} it suffices to prove that \begin{equation} \label{eq.snm1} \cntm{ \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup U_{n-1}} \geq \cntm{U_n} + \frac 1 c \cntm{S_{n-1} \setminus U_n}, \end{equation} due to the obvious inequalities \begin{align*} \cntm{S_{n-1} \setminus U_n} \geq &\cntm{S_{n-1}} - \cntm{U_n} \geq \cntm{S_1} - \cntm{U_n},\\ &\cntm{U_n} \geq \frac 1 c \cntm{S_1}. \end{align*} We decompose the set $S_{n-1} \setminus U_n$ as follows: \[ S_{n-1} \setminus U_n = \widetilde U_{n-1} \sqcup \left( S_{n-1} \setminus (U_n \cup \widetilde U_{n-1})\right). \] The part $S_{n-1} \setminus (U_n \cup \widetilde U_{n-1})$ is covered by the $(n-1)$-level balls intersecting $U_n$.
Hence, if Claim 1 above is true, the set $S_{n-1} \setminus (U_n \cup \widetilde U_{n-1})$ is covered by the $s$-enlargements of balls in $\mathcal{C}_n$. Next, $U_{n-1}$ covers at least a $\frac 1 c$-fraction of $\widetilde U_{n-1}$. It follows that the set $\bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup U_{n-1}$ covers the set $U_n$ and at least a $\frac 1 c$-fraction of the set $S_{n-1} \setminus U_n$. Thus we have proved inequalities \eqref{eq.snm1} and \eqref{eq.s1est}. A similar argument shows that \begin{align} \label{eq.2ndstepa} \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup & \bigcup\limits_{B \in \mathcal{C}_{n-1}} \left(s \cdot B \right) \cup (S_n \setminus S_{n-1}) \text{ covers all but} \\ &\text{ at most } \left( 1 - \frac 1 c\right)^2 \text{ of } S_n. \nonumber \end{align} Comparing Equations \eqref{eq.2ndstepa} and \eqref{eq.s1est} to the statements $\mathrm{(a)}$ and $\mathrm{(b)}$ of the theorem, we see that the proof would be complete, apart from Claim 1, if $n$ were equal to $2$. So we proceed further to $(n-2)$-level balls and use the following claim. \begin{claimn} If a ball $A_{n-2}(p)$ has a nonempty intersection with $U_n \cup U_{n-1}$, then $A_{n-2}(p)$ is contained in the $s$-enlargement of the ball in $\mathcal{C}_n \cup \mathcal{C}_{n-1}$ that it intersects. \end{claimn} We let $\widetilde{\mathcal{C}}_{n-2}$ be the collection of all maximal $(n-2)$-level balls disjoint from $U_n \cup U_{n-1}$ and let $\widetilde U_{n-2}$ be its union. We apply Lemma \ref{l.fvc} once again to obtain a collection $\mathcal{C}_{n-2} \subseteq \widetilde{\mathcal{C}}_{n-2}$ of pairwise disjoint balls such that \[ \cntm{ \bigsqcup\limits_{B \in \mathcal{C}_{n-2}} B} \geq \frac 1 c \cntm{\widetilde U_{n-2}} \] and let $U_{n-2}:=\bigsqcup\limits_{B \in \mathcal{C}_{n-2}} B$.
Similar arguments show that \begin{equation*} \cntm{S_1} - \cntm{ \bigcup\limits_{B \in \mathcal{C}_n} \left(s \cdot B \right) \cup \bigcup\limits_{B \in \mathcal{C}_{n-1}} \left(s \cdot B \right) \cup U_{n-2}} \leq \left( \frac{c-1} c \right)^3 \cntm{S_1} \end{equation*} and that the union of $s$-enlargements of balls in $\mathcal{C}_n$, $\mathcal{C}_{n-1}$ and $\mathcal{C}_{n-2}$, together with $S_n \setminus S_{n-2}$, covers all but at most $\left( 1 - \frac 1 c\right)^3$ of $S_n$. It is obvious that one can continue in this way down to the $1$-st level balls, using the obvious generalization of Claim 2. This would yield a collection of maximal balls \[ \mathcal{C}:=\bigcup\limits_{i=1}^n \mathcal{C}_i \] such that the union of $s$-enlargements of balls in $\mathcal{C}$ together with $S_n \setminus S_1$ covers all but at most $\left( 1 -\frac 1 c\right)^n$ of $S_n$ and such that the measure of the union of these $s$-enlargements is at least $\left( 1- \left( 1 - \frac 1 c\right)^n\right)$ times the measure of $S_1$. We conclude that the proof is complete once we prove the claims above and their generalizations. For this it suffices to prove the following statement: \begin{claimn} If $1 \leq i < j \leq n$ and $A_j(q)$ is a maximal ball, then for all $p \in X$ \[ A_i(p) \cap A_j(q) \neq \varnothing \Rightarrow A_{i}(p) \subseteq s \cdot A_j(q). \] \end{claimn} Suppose this is not the case. Let $x,y$ be the centers and $r_1,r_2$ be the radii of $A_{i}(p)$ and $A_j(q)$ respectively. Recall that $s = 1+\frac 4 {r-2}$. Since the $s$-enlargement of $A_j(q)$ does not contain $A_i(p)$, it follows that $\frac {4 r_2} {r-2} \leq 2 r_1$, hence \[ r r_1 \geq 2 r_1 + 2 r_2. \] The intersection of $A_i(p)$ and $A_j(q)$ is nonempty, hence $d(x,y) \leq r_1 + r_2$. This implies that \[ r r_1 \geq d(x,y)+r_1+r_2, \] so the $r$-enlargement of the ball $A_i(p)$ contains $A_j(q)$. Since $r \cdot A_i(p) \subseteq A_{i+1}(p)$, we conclude that the ball $A_j(q)$ is not maximal. Contradiction.
\end{proof} \begin{cor} \label{c.evccor} Suppose that in addition to all the assumptions of Theorem \ref{t.evc} we have \[ \cntm{S_n} \leq (c+1) \cntm{S_1}, \] where $c$ is the constant defined in Theorem \ref{t.evc}. Then there is a disjoint subcollection $\mathcal{C}$ of maximal balls such that the union of $\left( 1+ \frac 4 {r-2}\right)$-enlargements of balls in $\mathcal{C}$ covers at least $\left( 1-(c+1) \left( \frac{c-1}{c}\right)^n \right)$ of $S_1$. \end{cor} \begin{proof} From the proof of Theorem \ref{t.evc} it follows that one can find a disjoint collection $\mathcal{C}$ of maximal balls satisfying assertions \textrm{(a)} and \textrm{(b)} of the theorem. The statement of the corollary is an easy consequence of \textrm{(a)}. \end{proof} As the main application, we will use the corollary above in the proof of Theorem \ref{t.expdec}. It will be essential to know that one can ensure that the extra $\left( 1+\frac 4 {r-2}\right)$-enlargement does not change the size of the union of the balls too much. \begin{lemma} \label{l.smallenl} Let $\Gamma$ be a group of polynomial growth and let $\delta \in (0,1)$ be some constant. Then there exist integers $n_0, r_0 > 2$, depending only on $\Gamma$ and $\delta$, such that the following assertion holds. If $\mathcal{C}$ is a finite collection of disjoint balls with radii greater than $n_0$, then for all $r \geq r_0$ we have \[ \cntm{\bigsqcup\limits_{W \in \mathcal{C}} W} \geq (1-\delta) \cntm{\bigcup\limits_{W\in \mathcal{C}} \left( 1+\frac 4 {r-2} \right) \cdot W}. \] \end{lemma} \noindent The proof of the lemma follows from the result of Pansu (see Equation \eqref{eq.pansu}). \section{Fluctuations of Averages of Nonnegative Functions} \label{s.upcrineq} The purpose of this section is to prove the following theorem. \begin{thm} \label{t.expdec} Let $\Gamma$ be a group of polynomial growth of degree $d \in \mathbb{Z}_{+}$ and let $(\alpha, \beta) \subset \mathbb{R}_{>0}$ be some nonempty interval.
Then there are constants $c_1,c_2 \in \mathbb{R}_{>0}$ with $c_2<1$, which depend only on $\Gamma$, $\alpha$ and $\beta$, such that the following assertion holds. For any probability space $\mathrm X=(X, \mathcal{B}, \mu)$, any measure-preserving action of $\Gamma$ on $\mathrm X$ and any measurable $f \geq 0$ on $X$ we have \[ \mu(\{ x: (\avg{g \in \mathrm B(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < c_1 c_2^N \] for all $N \geq 1$. \end{thm} To simplify the presentation we use the adjective \textbf{universal} to talk about constants determined by $\Gamma$ and $(\alpha,\beta)$. When a constant $c$ is determined by $\Gamma$, $(\alpha, \beta)$ and a parameter $\delta$, we say that $c$ is \textbf{$\delta$-universal}. Prior to proceeding to the proof of Theorem \ref{t.expdec}, we make some straightforward observations. \begin{rem} \label{r.fluctbd} It is easy to see how one can generalize the theorem above to arbitrary functions bounded from below. If a measurable function $f$ on $X$ is greater than $-m$ for some constant $m \in \mathbb{R}_{+}$, then \[ \mu(\{ x: (\avg{ g \in \mathrm B(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < \widetilde{c}_1 \widetilde{c}_2^N, \] where the constants $\widetilde{c}_1, \widetilde{c}_2$ are given by applying Theorem \ref{t.expdec} to the function $f+m$ and the interval $(\alpha+m,\beta+m)$. \end{rem} \begin{rem} \label{r.abclose} Recall that $\gamma: \mathbb{Z}_{+} \to \mathbb{Z}_{+}$ is the growth function of the group $\Gamma$. Let $C \geq 1$ be a constant such that \[ \frac 1 C r^d \leq \gamma(r) \leq C r^d \quad \text{ for all } r\in \mathbb{N} \] and let $c:=3^d C^2$. Then it suffices to prove Theorem \ref{t.expdec} only for intervals $(\alpha,\beta)$ such that \[ \frac{\beta}{\alpha} \leq \frac{c+1}{c}. \] If the interval does not satisfy this condition, we replace it with a sufficiently small subinterval (fluctuations across $(\alpha,\beta)$ are, in particular, fluctuations across any subinterval) and apply Theorem \ref{t.expdec} to that subinterval.
The importance of this observation will be apparent later. \end{rem} \begin{rem} \label{r.largen} Instead of proving the original assertion of Theorem \ref{t.expdec}, we will prove the following weaker assertion, which is clearly sufficient to deduce Theorem \ref{t.expdec}. \emph{There is a universal integer $\widetilde N_0 \in \mathbb{N}$ such that for any probability space $\mathrm X=(X, \mathcal{B}, \mu)$, any measure-preserving action of $\Gamma$ on $\mathrm X$ and any measurable $f \geq 0$ on $\mathrm X$ we have \[ \mu(\{ x: (\avg{g \in \mathrm B(k)} f(g \cdot x))_{k \geq 1} \in \mathcal{F}_{(\alpha,\beta)}^N \}) < c_1 c_2^N \] for all $N \geq \widetilde N_0$.} \end{rem} The upcrossing inequalities given by Theorem \ref{t.expdec} and Remark \ref{r.fluctbd} allow for a short proof of the pointwise ergodic theorem on $\Ell{\infty}$ for actions of groups of polynomial growth. \begin{thm} \label{t.polergthm} Let $\Gamma$ be a group of polynomial growth acting on a probability space $\mathrm X=(X, \mathcal{B}, \mu)$ by measure-preserving transformations. Then for every $f \in \Ell{\infty}(\mathrm X)$ the limit \[ \lim\limits_{n \to \infty} \avg{g \in \mathrm B(n)} f(g \cdot x) \] exists almost everywhere. \end{thm} \begin{proof} Let \[ X_0:= \{x \in X: \lim\limits_{n \to \infty} \avg{g \in \mathrm B(n)} f(g \cdot x) \text{ does not exist} \} \] be the set of points in $\mathrm X$ where the ergodic averages do not converge. Let $((a_i,b_i))_{i \geq 1}$ be a sequence of nonempty intervals such that each nonempty interval $(u,v) \subset \mathbb{R}$ contains some interval $(a_i,b_i)$.
Then it is clear that if $x \in X_0$, then there is some interval $(a_i,b_i)$ such that the sequence of averages $\left( \avg{g \in \mathrm B(n)} f(g \cdot x) \right)_{n \geq 1}$ fluctuates over $(a_i,b_i)$ infinitely often, i.e., \[ X_0 \subseteq \{ x \in X: \left(\avg{g \in \mathrm B(n)} f(g \cdot x)\right)_{n \geq 1} \in \bigcup\limits_{i \geq 1} \bigcap\limits_{k \geq 1} \mathcal{F}_{(a_i,b_i)}^k\}. \] By Theorem \ref{t.expdec} and Remark \ref{r.fluctbd} we have for every interval $(a_i,b_i)$ that \[ \mu(\{ x \in X: \left(\avg{g \in \mathrm B(n)} f(g \cdot x)\right)_{n \geq 1} \in \bigcap\limits_{k \geq 1} \mathcal{F}_{(a_i,b_i)}^k\}) = 0, \] hence $\mu(X_0) = 0$ and the proof is complete. \end{proof} We now begin the proof of Theorem \ref{t.expdec}; namely, we will prove the assertion in Remark \ref{r.largen}. Assume from now on that the group $\Gamma$ of polynomial growth of degree $d \in \mathbb{Z}_{+}$ and the interval $(\alpha, \beta) \subset \mathbb{R}_{>0}$ are \emph{fixed}. Given a measure-preserving action of $\Gamma$ on a probability space $\mathrm X =(X,\mathcal{B},\mu)$, let \[ E_N:=\{ x: \left( \avg{g \in \mathrm B(k)} f(g \cdot x) \right)_{k \geq 1} \in \mathcal{F}_{(\alpha, \beta)}^N\} \] be the set of all points $x \in X$ where the ergodic averages fluctuate at least $N \geq \widetilde N_0$ times across the interval $(\alpha,\beta)$. Here $\widetilde N_0$ is a universal constant, which will be determined later. For $m \geq 1$ define, furthermore, the set \[ E_{N,m}:=\{ x: \left( \avg{g \in \mathrm B(k)} f(g \cdot x) \right)_{k=1}^m \in \mathcal{F}_{(\alpha, \beta)}^N\} \] of all points such that the finite sequence $\left(\avg{g \in \mathrm B(k)} f(g \cdot x) \right)_{k=1}^m$ fluctuates at least $N$ times across $(\alpha,\beta)$. Then, clearly, $(E_{N,m})_{m \geq 1}$ is a monotone increasing sequence of sets and \[ E_N = \bigcup\limits_{m \geq N} E_{N,m}. \] We will complete the proof by giving a universal estimate for $\mu(E_{N,m})$ for all $m \geq N$.
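The passage from the finitary sets $E_{N,m}$ back to $E_N$ is worth recording explicitly; it is nothing more than continuity of the measure from below.

```latex
% Since (E_{N,m})_{m \geq N} increases to E_N, continuity of \mu from below
% gives
%     \mu(E_N) = \lim_{m \to \infty} \mu(E_{N,m}),
% so a bound \mu(E_{N,m}) \leq c_1 c_2^N that is uniform in m transfers
% verbatim to E_N.
```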
For that we use the transference principle (Lemma \ref{l.caldtrans}), i.e., for an integer $L > m$ and a point $x \in X$ we let \[ B_{L,m,x}:=\{g: \ g \cdot x \in E_{N,m} \text{ and } \| g \| \leq L-m \}. \] The goal is to show that the density of the set \[ B_0:=B_{L,m,x} \subset \mathrm B(L) \] can be estimated by $c_1 c_2^N$ for some universal constants $c_1,c_2$. The main idea is as follows. For every point $z \in B_0$ the sequence of averages \[ k \mapsto \avg{g \in \mathrm B(k)} f( (gz) \cdot x), \quad k = 1,\dots,m \] fluctuates at least $N$ times. Since the word metric $d=d_R$ on $\Gamma$ is right-invariant, the set $\mathrm B(k)z$ is in fact a ball of radius $k$ centered at $z$ for each $k=1,\dots,m$. Given a parameter $\delta \in (0, 1-\sqrt{{\alpha}/{\beta}})$, we will pick some of these balls and apply the effective Vitali covering theorem (Theorem \ref{t.evc}) multiple times to replace $B_0$ by a sequence \[ B_1,B_2,\dots, B_{\lfloor (N-N_0)/T \rfloor} \] of subsets of $\mathrm B(L)$, for some $\delta$-universal integers $T, N_0 \in \mathbb{N}$, which satisfies the condition \begin{equation} \label{eq.oddstep} B_{2i+1} \text{ covers at least a } \left( 1-\delta \right)\text{-fraction of } B_{2i} \quad \text{ for all indices } i \geq 0 \end{equation} at `odd' steps and the condition \begin{equation} \label{eq.evenstep} \cntm{B_{2i}} \geq \frac{\beta}{\alpha}(1-\delta) \cntm{B_{2i-1}} \quad \text{ for all indices } i \geq 1 \end{equation} at `even' steps. Each $B_i$ is, furthermore, a union \[ \bigsqcup\limits_{B \in \mathcal{C}_i} B \] of some family $\mathcal{C}_i$ of disjoint balls with centers in $B_0$.
If such a sequence of sets $B_1,\dots,B_{\lfloor (N-N_0)/T \rfloor}$ exists, then \begin{align*} \cntm{\mathrm B(L)} \geq \cntm{B_{\lfloor (N-N_0)/T \rfloor}} \geq \left( \frac{\beta}{\alpha}(1-\delta)^2 \right)^{\lfloor \frac {N-N_0}{2 T} \rfloor } \cntm{B_0}, \end{align*} which gives the required exponential bound on the density of $B_0$ with \[ c_2:=\left( \frac{\alpha}{\beta}(1-\delta)^{-2} \right)^{ 1 / 2T} \] and a suitable $\delta$-universal $c_1$. To ensure that conditions \eqref{eq.oddstep} and \eqref{eq.evenstep} hold, one has to pick sufficiently large $\delta$-universal parameters $r$ and $n$ for the effective Vitali covering theorem. We make this precise at the end of the proof; for now we assume that $r$ and $n$ are `large enough'. In order to force the sufficient growth rate of the balls (condition \textrm{(b)} of Theorem \ref{t.evc}), we employ the following argument. Let $K>0$ be the smallest integer such that \[ \left(1-\frac{1-(\alpha/\beta)^{1/d}}{2} \right)^{\lceil \frac K 2 \rceil} \left( \frac{\beta}{\alpha} \right)^{{\lceil \frac K 2 \rceil} \cdot \frac 1 d} \geq r. \] Then, applying Corollary \ref{c.skip}, we obtain a universal integer $n_0 \in \mathbb{N}$ such that if a sequence \[ (\avg{g \in \mathrm B(i)} f((gz) \cdot x))_{i=n}^m \quad \text{ for some } n>n_0, z \in B_0 \] fluctuates at least $K$ times across the interval $(\alpha,\beta)$, then \begin{equation} \label{eq.evccondb} \frac m n > \left(1-\frac{1-(\alpha/\beta)^{1/d}}{2}\right)^{\lceil \frac K 2 \rceil} \left( \frac{\beta}{\alpha} \right)^{{\lceil \frac K 2 \rceil} \cdot \frac 1 d} \geq r. \end{equation} Let $n$ be large enough for use in the effective Vitali covering theorem. We define $T:=2nK$ and let $N_0 \geq n_0$ be sufficiently large (this will be made precise later). The first $N_0$ fluctuations are skipped to ensure that the balls have large enough radius, and the rest are divided into $\lfloor (N-N_0) / T \rfloor$ groups of $T$ consecutive fluctuations.
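The constant $c_1$ can be extracted from the display above; the following is a sketch of one admissible choice (the text itself does not fix $c_1$ explicitly), assuming the chain $B_1,\dots,B_{\lfloor (N-N_0)/T \rfloor}$ has been constructed.

```latex
% Write q := (\alpha/\beta)(1-\delta)^{-2} < 1 (recall that
% \delta < 1 - \sqrt{\alpha/\beta}). The display above gives
%     \cntm{B_0} / \cntm{\mathrm B(L)} \leq q^{\lfloor (N-N_0)/(2T) \rfloor},
% and since \lfloor (N-N_0)/(2T) \rfloor \geq N/(2T) - N_0/(2T) - 1 and
% q < 1, this is at most c_1 c_2^N with
%     c_2 := q^{1/(2T)},    c_1 := q^{-(N_0/(2T) + 1)},
% both \delta-universal. The transference principle (Lemma
% \ref{l.caldtrans}) then turns this density bound into the bound on
% \mu(E_{N,m}).
```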
The $i$-th group of consecutive fluctuations is used to construct the set $B_i$ for $i=1,\dots,\lfloor (N-N_0)/T \rfloor$ as follows. We distinguish between the `odd' and the `even' steps. \noindent \textbf{Odd step:} First, let us describe the procedure for odd $i$'s. For each point $z \in B_{i-1}$ we do the following. By induction we assume that $z \in B_{i-1}$ belongs to some unique ball $\mathrm B(u,s)$ from the $(i-1)$-th step with $u \in B_0$. If $i=1$, then $z \in B_0$. Let $A_1(z)$ be the $(K+1)$-th ball $\mathrm B(u,s_1)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_1(z)} f(g \cdot x) > \beta, \] let $A_2(z)$ be the $(2K+1)$-th ball $\mathrm B(u,s_2)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_2(z)} f(g \cdot x) > \beta \] and so on up to $A_n(z)$. It is clear that the $r$-enlargement of $A_j(z)$ is contained in $A_{j+1}(z)$ for all indices $j<n$ and that the balls defined in this manner are contained in $\mathrm B(L)$. Thus the assumptions of Theorem \ref{t.evc} are satisfied. There are two further possibilities: either this collection satisfies the additional assumption in Corollary \ref{c.evccor}, i.e., \begin{equation} \label{eq.corcond} \cntm{S_n} \leq (c+1) \cntm{S_1}, \end{equation} or not. If \eqref{eq.corcond} holds, then by virtue of Corollary \ref{c.evccor} we obtain a disjoint collection $\mathcal{C}$ of maximal balls such that the union of $\left( 1+\frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ covers at least $\left( 1 - (c+1) \left(\frac{c-1}{c} \right)^n \right)$ of $S_1$. We let \[ B_{i}:=\bigsqcup\limits_{B \in \mathcal{C}} B \] and $\mathcal{C}_i:=\mathcal{C}$. Condition \eqref{eq.oddstep} is satisfied if $r$ and $n$ are large enough, and we proceed to the following `even' step.
If, on the contrary, \[ \cntm{S_n} > (c+1) \cntm{S_1}, \] then we apply the standard Vitali covering lemma to the collection of maximal $n$-th level balls and obtain a disjoint subcollection $\mathcal{C}$ such that \begin{equation} \label{eq.10cincr} \cntm{\bigsqcup\limits_{B \in \mathcal{C}} B} \geq \frac{1}{c}\cntm{S_n} > \frac{c+1}{c}\cntm{S_1}. \end{equation} We assume without loss of generality that $\frac{\beta}{\alpha} \leq \frac{c+1}{c}$ (see Remark \ref{r.abclose}). We let \begin{align*} B_i&:=B_{i-1}, \\ B_{i+1}&:=\bigsqcup\limits_{B \in \mathcal{C}} B \end{align*} and \begin{align*} \mathcal{C}_i&:=\mathcal{C}_{i-1}, \\ \mathcal{C}_{i+1}&:=\mathcal{C}. \end{align*} Conditions \eqref{eq.oddstep} and \eqref{eq.evenstep} are satisfied and we proceed to the next `odd' step. \noindent \textbf{Even step:} We now describe the procedure for even $i$'s. For each point $z \in B_{i-1}$ we do the following. By induction we assume that $z \in B_{i-1}$ belongs to some unique ball $\mathrm B(u,s)$ from the $(i-1)$-th step with $u \in B_0$. Let $A_1(z)$ be the $(K+1)$-th ball $\mathrm B(u,s_1)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_1(z)} f(g \cdot x) < \alpha, \] let $A_2(z)$ be the $(2K+1)$-th ball $\mathrm B(u,s_2)$ in the $i$-th group of fluctuations such that \[ \avg{g \in A_2(z)} f(g \cdot x) < \alpha \] and so on up to $A_n(z)$. It is clear that the $r$-enlargement of $A_j(z)$ is contained in $A_{j+1}(z)$ for all indices $j<n$ and that the balls defined in this manner are contained in $\mathrm B(L)$. Thus the assumptions of Theorem \ref{t.evc} are satisfied. There are two further possibilities: either this collection satisfies the additional assumption in Corollary \ref{c.evccor}, i.e., \begin{equation} \label{eq.corcond1} \cntm{S_n} \leq (c+1) \cntm{S_1}, \end{equation} or not.
If \[ \cntm{S_n} > (c+1) \cntm{S_1}, \] then we apply the standard Vitali covering lemma to the collection of maximal $n$-th level balls and obtain a disjoint subcollection $\mathcal{C}$ such that \begin{equation} \label{eq.10cincr1} \cntm{\bigsqcup\limits_{B \in \mathcal{C}} B} \geq \frac{1}{c}\cntm{S_n} > \frac{c+1}{c}\cntm{S_1}. \end{equation} We assume without loss of generality that $\frac{\beta}{\alpha} \leq \frac{c+1}{c}$ (see Remark \ref{r.abclose}). We let \[ B_i:=\bigsqcup\limits_{B \in \mathcal{C}} B \] and proceed to the following `odd' step. If \eqref{eq.corcond1} holds, then by virtue of Corollary \ref{c.evccor} we obtain a disjoint collection $\mathcal{C}$ of maximal balls such that the union of $\left( 1+\frac 4 {r-2} \right)$-enlargements of balls in $\mathcal{C}$ covers at least $\left( 1 - (c+1) \left(\frac{c-1}{c} \right)^n \right)$ of $S_1$. We let \[ B_{i}:=\bigsqcup\limits_{B \in \mathcal{C}} B \] and $\mathcal{C}_i:=\mathcal{C}$. The goal is to prove that condition \eqref{eq.evenstep} is satisfied. If the balls from $\mathcal{C}_{i-1}$ were completely contained in the balls from $\mathcal{C}_i$, the proof would be completed by applying Lemma \ref{l.ballgrowth}. This, in general, might not be the case, so we argue as follows. First, we prove the following lemma. \begin{lemma} \label{l.bdrint} If a ball $W_1$ from $\mathcal{C}_{i-1}$ intersects $\intr{r}(W_2)$ for some ball $W_2 \in \mathcal{C}_i$, then $W_1 \subseteq W_2$. \end{lemma} \begin{proof} Let $W_1 = \mathrm B(y_1,s_1)$ and $W_2=\mathrm B(y_2,s_2)$ for some $y_1,y_2 \in B_0$. Since $W_1$ intersects $\intr{r}(W_2)$, we have \[ d(y_1,y_2) \leq s_2(1-5/r)+s_1. \] If $W_1$ is not contained in $W_2$, then $d(y_1,y_2) > s_2-s_1$. From these inequalities it follows that \[ s_1 \geq d(y_1,y_2)-s_2(1-5/r) >s_2-s_1-s_2+\frac{5s_2}{r}, \] hence $s_2<\frac{2rs_1}{5}$. We deduce that the $r$-enlargement of $W_1$ contains $W_2$.
This is a contradiction, since $W_2$ is maximal and the $r$-enlargement of $W_1$ is contained in the $n$-th level ball $A_n(y_1)$. \end{proof} From the lemma above it follows that the set $B_{i-1}$ can be decomposed as \begin{align*} B_{i-1} = \left( \bigsqcup\limits_{W \in \mathcal{C}_{i-1}'} W \right) \sqcup (\bdr{r}(\mathcal{C}_i) \cap B_{i-1}) \sqcup (B_{i-1} \setminus B_{i}), \end{align*} where \[ \mathcal{C}_{i-1}':=\{ W \in \mathcal{C}_{i-1}: \ W \cap \intr{r}(V) \neq \varnothing \text{ for some } V \in \mathcal{C}_i \}. \] The rest of the argument depends on how much of $B_{i-1}$ is contained in $\bdr{r}(\mathcal{C}_i)$, so let \[ \Delta:=\frac{\cntm{\bdr{r}(\mathcal{C}_i) \cap B_{i-1}}}{\cntm{B_{i-1}}}. \] There are two possibilities. First, suppose that $\Delta>\frac{\delta} 3$. Then $\cntm{B_{i-1}} \leq \frac{\cntm{\bdr{r}(\mathcal{C}_i)}}{\delta/3}$. Let $r$ and the radii of the balls in $\mathcal{C}_i$ be large enough (see Lemma \ref{l.smallbdr}) so that \[ \frac{\cntm{\bdr{r}(\mathcal{C}_i)}}{\cntm{B_i}} < \frac{\alpha}{\beta} \frac{\delta} 3 (1-\delta)^{-1}. \] It is then easy to see that condition \eqref{eq.evenstep} is satisfied. Suppose, on the other hand, that $\Delta \leq \frac{\delta} 3$. Then, if $n$ and $r$ are large enough so that $\cntm{B_{i-1} \setminus B_i}$ is small compared to $\cntm{B_{i-1}}$, we obtain \begin{align*} \cntm{B_{i-1}} &\leq \frac{\alpha}{\beta}\cntm{B_i}+\cntm{\bdr{r}(\mathcal{C}_i) \cap B_{i-1}}+\cntm{B_{i-1} \setminus B_i} \leq \\ &\leq \frac{\alpha}{\beta}\cntm{B_i}+\frac{\delta}{3} \cntm{B_{i-1}}+\frac{\delta} 3 \cntm{B_{i-1}}, \end{align*} which implies that \[ \cntm{B_i} \geq \frac{\beta}{\alpha}\left(1-\frac{2 \delta} 3\right) \cntm{B_{i-1}}, \] i.e., condition \eqref{eq.evenstep} is satisfied as well. We proceed to the following `odd' step. The proof of the theorem is essentially complete. To finish it we only need to say how one can choose the constants $N_0, r, n$ and $\widetilde N_0$.
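As a cross-check on the parameter choices made below, here is an informal sketch of how the odd-step condition \eqref{eq.oddstep} follows from them; the square roots are chosen precisely so that the two losses compound to $\delta/4$.

```latex
% Choose n with (c+1)(1 - 1/c)^n \leq 1 - \sqrt{1 - \delta/4}. Then, at an
% odd step, Corollary \ref{c.evccor} makes the s-enlargements of the balls
% in \mathcal{C} cover at least a \sqrt{1-\delta/4}-fraction of
% S_1 \supseteq B_{i-1}, while Lemma \ref{l.smallenl} (with parameter
% 1 - \sqrt{1-\delta/4}) makes the disjoint union B_i retain at least a
% \sqrt{1-\delta/4}-fraction of the measure of that cover. The two factors
% compound to
%     \cntm{B_i} \geq (1 - \delta/4) \cntm{B_{i-1}}
%                \geq (1 - \delta) \cntm{B_{i-1}},
% which is the content of \eqref{eq.oddstep}.
```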
Recall that $\delta \in (0, 1- \left( \alpha / \beta \right)^{1/2})$ is an arbitrary parameter. First, the integer $n \in \mathbb{N}$ is chosen so that \[ (c+1) \left( 1- \frac 1 c\right) ^n \leq 1-\sqrt{1-\delta / 4}. \] Next, we choose $r$ as the maximum of \begin{aufziii} \item the integer $r_0$ given by Lemma \ref{l.smallbdr} with the parameter $\frac{\alpha}{\beta}\frac{\delta} 3 (1-\delta)^{-1}$; \item the integer $r_0$ given by Lemma \ref{l.smallenl} with the parameter $1-\sqrt{1-\delta/4}$. \end{aufziii} The integer $K>0$ is picked so that condition \eqref{eq.evccondb} is satisfied. We choose $N_0$ as the maximum of \begin{aufziii} \item the integer $n_0$ given by Lemma \ref{l.smallbdr} with the parameter $\frac{\alpha}{\beta}\frac{\delta} 3 (1-\delta)^{-1}$; \item the integer $n_0$ given by Lemma \ref{l.smallenl} with the parameter $1-\sqrt{1-\delta/4}$; \item the integer $n_0$ given by Corollary \ref{c.skip} with the parameter $\frac{1-(\alpha/\beta)^{1/d}}{2}$. \end{aufziii} Finally, we define $\widetilde N_0$ as $\widetilde N_0:=N_0+4nK+1$. A straightforward computation shows that this choice of constants satisfies all requirements. We do not assert, however, that this choice yields \emph{optimal} constants $c_1$ and $c_2$. \qed \printbibliography \end{document}
Learn how to play this Italian song on the guitar. This is the sheet music, in standard notation, for my fingerstyle guitar arrangement of this song by Angelo Branduardi. Free sheet music, chords and video tutorial.
Whoop — you know it's nearly Christmas when you go to the village carols in the park. I haven't been to the village one in 7-8 years, and it was lovely to see how busy it was, but sadly we could only stay for 30 minutes because Liam was bored and pushed over a girl who kept bugging him. I'd better say, in case people from my FB read my blogs: I'm sticking to what I said about not being on the site for a month, but my Instagram photos are still gonna pop up (also using Instagram stories). 1) Shops are doing nothing about the 5th in their windows or even in the shops. 2) Schools don't talk about why we have Bonfire Night. 3) Supermarkets are too scared to even sell fireworks because of dumb people. Liam hates fireworks, so we are staying in tonight; if you are out tonight, be safe and don't do anything stupid. I've got the sodding cold and flu and it's driving me crazy. My nose is running and my eyes are killing me. I'd better say sorry for not doing a blog for a bit. Sadly I've had a lovely bug that kept me ill for 4 days. I went to bed on Monday night with a headache, but the next day I could not lift up my head; I felt cold, had belly pains and could not talk. I now feel 90% better and have started to eat again. I've also been feeling kind of poop. Liam has been home for 2 weeks and it's been cool, but this week I feel kind of bad because we have not been able to go out and explore because of that dumb bug. I've also been feeling kind of lonely, and before anyone says "you've got a 5-year-old"… yeah, but it would be nice to talk to someone about stuff that is not about Star Wars and Legos.
Khyber Pakhtunkhwa is a province in Pakistan. Its earlier name was Sarhad (North-West Frontier) Province.
Has Sushant Singh Rajput really become a 'star'... Thousands of Buddhist monks take out a 'peace march' in Vijayawada. Why such pomp and splendour in research on subjects related to Buddhism? The High Court has questioned the lavish government spending at the International Research Institute of Buddhist Studies, set up for the study of subjects related to Buddhism, and ... 300 Dalits of Gujarat adopt Buddhism — Ahmedabad, 12 October (Bhasha): More than 200 members of Gujarat's Dalit community embraced Buddhism on the occasion of Vijaya Dashami at three separate events organized here by Buddhist organizations. Another 90 people from the state converted in Nagpur, so that in all more than 300 Dalits embraced Buddhism. 150 Dalits of Gujarat adopt Buddhism — On Tuesday, on the occasion of Vijayadashami, about 150 Dalits of the state adopted Buddhism. At an event organized by the Gujarat Buddhist Academy and other religious organizations, more than 70 people accepted Buddhism in the Danilimda area of the city. See how PM Modi took on the colours of the Buddha in Vietnam. BBAU to publish a 'unique' thesis for the first time — Babasaheb Bhimrao Ambedkar University (BBAU) is going to publish, for the first time, the thesis of a student who is no longer in this world. This is how China is erasing Tibet's identity — On the hills of Larung Gar, labourers are busy demolishing the largest centre of Buddhist learning. Amid the growl of machines, the chanting of a Buddhist monk over a loudspeaker from a nearby monastery seems to be drowned out. Ever since its founding in 1980, Larung Gar has been known for its fine monasteries, its houses rising and falling along the slopes, and its stupas. From this largest centre of Buddhist learning, the message of Buddhism has been carried across the whole region. Amreli custodial death case: Dalits threaten to adopt Buddhism — Ahmedabad, 5 July (Bhasha): Accusing the police of not conducting an impartial investigation into the death of a Dalit youth in judicial custody two weeks ago, 200 Dalits of Amreli district have threatened to adopt Buddhism. Pragyanand was among those who initiated Babasaheb into Buddhism — He is 90 years old. Now his disciples look after him. Bhadanta Pragyanand spends most of the day lying in bed. He communicates in gestures, and sometimes writes things down. But the moment Babasaheb Bhimrao Ambedkar is mentioned, some unknown force has him speaking clearly and loudly. Buddhism yatra reaches Greater Noida — With the aim of taking Buddhism to the people, Buddhist rath yatras are being taken out across the country. In the district, the yatra was led by Buddhist Bhante... The PM's hometown Vadnagar was known as 'Anandpur' in ancient times; seals found in excavations mention the name — According to the Archaeological Survey of India, PM Modi's hometown Vadnagar was named Anandpur, and Buddhism had considerable influence in the city. Two seals recovered in recent excavations by the archaeology department mention 'Anandpur'... On the second day of his Sri Lanka visit, Prime Minister Modi took part in an event on Buddhism. L&T's consolidated net profit up 46 per cent in the first quarter — Mumbai, 28 July (Bhasha): The consolidated net profit of engineering major Larsen & Toubro (L&T) rose 46.41 per cent to Rs 892.54 crore in the quarter ended 30 June. Tibet's security is essential for India: Sangay — Dr Lobsang Sangay, PM of the Tibetan government-in-exile, says India will be secure only when Tibet is secure. With tension on the border, the whole world's eyes are fixed on India and China. Dr Lobsang arrived in Banaras on Friday. Angered by the Una incident, many Dalits plan to adopt Buddhism — Ahmedabad, 26 July (Bhasha): In the wake of the beating of Dalits in Una, at least 1,000 members of the community in Banaskantha district have expressed the wish to adopt Buddhism. They say that if they are not treated as equals, there is no point in remaining in Hinduism. Buddhahood is a tradition not only of Tibet but of India too: Rijiju. Rohit Vemula's mother and brother adopt Buddhism. Rohit Vemula's family adopts Buddhism — The mother and brother of Rohit Vemula, the Dalit scholar of Hyderabad University, took initiation into Buddhism in Mumbai. Prakash Ambedkar, grandson of Babasaheb Bhimrao Ambedkar, gave Rohit's family initiation into Buddhism on Thursday at Ambedkar Bhavan in Dadar, Mumbai. The family had already announced that it would adopt Buddhism. Rohit Vemula's mother and brother to accept Buddhism — Prakash Ambedkar, grandson of Babasaheb Bhimrao Ambedkar, said on Wednesday that the mother and brother of Rohit Vemula, the Dalit student of Hyderabad Central University, will accept Buddhism on Thursday. On being contacted, Rohit's younger brother Raja Vemula confirmed this. US Navy's war exercise; China issues 'threat' of attack.
hindi
\begin{document} \begin{center} {\bf{\LARGE{ Bellman Residual Orthogonalization \\ for Offline Reinforcement Learning}}} \vspace*{0.5in} \begin{tabular}{lcl} Andrea Zanette$^\star$ && Martin J. Wainwright$^{\star,\dagger}$ \\ \texttt{[email protected]} && \texttt{[email protected]} \end{tabular} \vspace*{0.10in} \begin{tabular}{c} Department of Electrical Engineering and Computer Sciences$^{\star}$ \\ Department of Statistics$^\dagger$ \\ UC Berkeley, Berkeley, CA \\ \\ Department of Electrical Engineering and Computer Sciences$^{\dagger}$ \\ Department of Mathematics$^\dagger$ \\ Massachusetts Institute of Technology, Cambridge, MA \end{tabular} \vspace*{0.5in} \begin{abstract} We propose and analyze a reinforcement learning principle that approximates the Bellman equations by enforcing their validity only along a user-defined space of test functions. Focusing on applications to model-free offline RL with function approximation, we exploit this principle to derive confidence intervals for off-policy evaluation, as well as to optimize over policies within a prescribed policy class. We prove an oracle inequality on our policy optimization procedure in terms of a trade-off between the value and uncertainty of an arbitrary comparator policy. Different choices of test function spaces allow us to tackle different problems within a common framework. We characterize the loss of efficiency in moving from on-policy to off-policy data using our procedures, and establish connections to concentrability coefficients studied in past work. We examine in depth the implementation of our methods with linear function approximation, and provide theoretical guarantees with polynomial-time implementations even when Bellman closure does not hold. \end{abstract} \end{center} \section{Introduction} Markov decision processes (MDPs) provide a general framework for optimal decision-making in sequential settings (e.g.,~\cite{puterman1994markov,Bertsekas_dyn1,Bertsekas_dyn2}).
Reinforcement learning refers to a general class of procedures for estimating near-optimal policies based on data from an unknown MDP (e.g.,~\cite{bertsekas1996neuro,sutton2018reinforcement}). Different classes of problems can be distinguished depending on our access to the data-generating mechanism. Many modern applications of RL involve learning based on a pre-collected or offline dataset. Moreover, the state-action spaces are often sufficiently complex that it becomes necessary to implement function approximation. In this paper, we focus on model-free offline reinforcement learning (RL) with function approximation, where prior knowledge about the MDP is encoded via the value function. In this setting, we focus on two fundamental problems: (1) offline policy evaluation---namely, the task of accurately predicting the value of a target policy; and (2) offline policy optimization, which is the task of finding a high-performance policy. There are various broad classes of approaches to off-policy evaluation, including importance sampling~\cite{precup2000eligibility,thomas2016data,jiang2016doubly,liu2018breaking}, as well as regression-based methods~\cite{lagoudakis2003least,munos2008finite,chen2019information}. Many methods for offline policy optimization build on these techniques, with a recent line of papers incorporating pessimism~\cite{jin2021pessimism,xie2021bellman,zanette2021provable}. We provide a more detailed summary of the literature in~\cref{sec:Literature}. In contrast, this work investigates a distinct model-free principle---one based on neither importance sampling nor regression---to learn from an offline dataset. It belongs to the class of weight learning algorithms, which leverage an auxiliary function class to encode either the marginalized importance weights of the target policy~\cite{liu2018breaking,xie2020Q} or estimates of the Bellman errors~\cite{antos2008learning,chen2019information,xie2020Q}.
Some work has considered kernel classes~\cite{feng2020accountable} or other weight classes to construct off-policy estimators~\cite{uehara2020minimax} as well as confidence intervals at the population level~\cite{jiang2020minimax}. However, these works do not examine in depth the statistical aspects of the problem, nor elaborate upon the design of the weight function classes.\footnote{For instance, the paper~\cite{feng2020accountable} only shows validity of their intervals, not a performance bound; on the other hand, the paper~\cite{jiang2020minimax} gives analyses at the population level, and so does not address the alignment of weight functions with respect to the dataset in the construction of the empirical estimator, which we do via self-normalization and regularization. This precludes obtaining the same type of guarantees that we present here.} The last two considerations are essential to obtaining data-dependent procedures accompanied by rigorous guarantees, and to providing guidance on the choice of weight class, which are key contributions of this paper. For space reasons, we motivate our approach in the idealized case where the Bellman operator is known in~\cref{sec:appBRO}, and compare with the weight learning literature at the population level in~\cref{sec:WeightLearning}. Let us summarize our main contributions in the following three paragraphs. \paragraph{Conceptual contributions} Our paper makes two novel contributions of conceptual nature: \begin{enumerate}[leftmargin=*] \item We propose a method, based on \emph{approximate empirical orthogonalization} of the Bellman residual along test functions, to construct confidence intervals and to perform policy optimization. \item We propose a sample-based approximation of this principle, based on \emph{self-normalization} and \emph{regularization}, and obtain general guarantees for parametric as well as non-parametric problems.
\end{enumerate} The construction of the estimator, its statistical analysis, and the concrete consequences (described in the next paragraph) are the major distinctions with respect to past work on weight learning methods~\cite{uehara2020minimax,jiang2020minimax}. Our analysis highlights the statistical trade-offs in the choice of the test functions. (See~\cref{sec:WeightLearning} for comparison with past work at the population level.) \paragraph{Domain-specific results} In order to illustrate the broad effectiveness and applicability of our general method and analysis, we consider several domains of interest. We show how to recover various results from past work---and to obtain novel ones---by making appropriate choices of the test functions and invoking our main result. Among these consequences, we discuss the following: \begin{enumerate}[leftmargin=*] \item When marginalized importance weights are available, they can be used as the test class. In this case we recover results similar to those of the paper~\cite{xie2020Q}; however, here we only require concentrability with respect to a comparator policy instead of over all policies in the class. \item When some knowledge of the Bellman error class is available, it can be used as the test class. Similar results have appeared previously either with stronger concentrability~\cite{chen2019information} or in the special case of Bellman closure~\cite{xie2021bellman}. \item We provide a test class that projects the Bellman residual along the error space of the $\Q$ class. The resulting procedure is an extension of the LSTD algorithm \cite{bradtke1996linear} to non-linear spaces, which makes it a natural approach if no domain-specific knowledge is available. A related result is the lower bound by \cite{foster2021offline}, which proves that without Bellman closure learning is hard even with small density ratios. In contrast, our work shows that learning is still possible even with large density ratios.
\item Finally, our procedure inherits some form of ``multiple robustness''. For example, the two test classes corresponding to Bellman completeness and marginalized importance weights can be used together, and guarantees will be obtained if \emph{either} Bellman completeness holds or the importance weights are correct. We examine this issue in \cref{sec:MultipleRobustness}. \end{enumerate} \paragraph{Linear setting} We examine in depth an application to the linear setting, where we propose the first \emph{computationally tractable} policy optimization procedure \emph{without assuming Bellman completeness}. The closest result here is given in the paper~\cite{zanette2021provable}, which holds under Bellman closure. Our procedure can be thought of as making use of LSTD-type estimates so as to establish confidence intervals for the projected Bellman equations, and then using an iterative scheme for policy improvement. \section{Background and set-up} \label{sec:background} We begin with some notation used throughout the paper. For a given probability distribution $\Distribution$ over a space $\mathcal{X}$, we define the $L^2(\Distribution)$-inner product and semi-norm as $\innerprodweighted{f_1}{f_2}{\Distribution} = \ensuremath{\mathbb{E}}ecti{[f_1f_2]}{\Distribution}, $ and $ \norm{f_1}{\Distribution} = \sqrt{\innerprodweighted{f_1}{f_1}{\Distribution}}{}$. The constant function that returns one for every input is denoted by $\1$. We frequently use notation such as $c, c', \ensuremath{\tilde{c}}, c_1,c_2$ etc. to denote constants that can take on different values in different sections of the paper. \subsection{Markov decision processes and Bellman errors} We focus on infinite-horizon discounted Markov decision processes~\cite{puterman1994markov,bertsekas1996neuro,sutton2018reinforcement} with discount factor $\discount \in [0,1)$, state space $\StateSpace$, and action set $ASpace$.
For each state-action pair $\psa$, there is a reward distribution $R\psa$ supported in $[0,1]$ with mean $\reward\psa$, and a transition $\Pro(\cdot \mid \state, \action)$. A (stationary) stochastic policy $\policyicy$ maps states to actions. For a given policy, its $Q$-function is the discounted sum of future rewards based on starting from the pair $\psa$, and then following the policy $\policyicy$ in all future time steps: $Q^{\pi}\psa = \reward\psa + \sum_{\hstep = 1}^{\infty} \discount^{\hstep} \ensuremath{\mathbb{E}} [ \reward({\SState}_\hstep, {\AAction}_\hstep) \mid (\SState_0, \AAction_0) = \psa], $ where the expectation is taken over trajectories with $ \AAction_{\hstep} \sim \policy(\cdot \mid \SState_\hstep) \quad \mbox{for $\hstep = 1, 2, \ldots$,} \quad \mbox{and} \quad \SState_{\hstep+1} \sim \Pro(\cdot \mid \SState_\hstep, \AAction_\hstep) \quad \mbox{for $\hstep = 0, 1, 2, \ldots$.} $ We also use $\Qpi{\policyicy}(\state, \policyicy) = \ensuremath{\mathbb{E}}ecti{\Qpi{\policyicy}(\state, \AAction)}{\AAction \sim \policyicy(\cdot \mid \state)}$ and define the \emph{Bellman evaluation operator} as $ (\BellmanEvaluation{\policyicy}Q)\psa = \reward\psa + \ensuremath{\mathbb{E}}ecti{Q(\SState^+ ,\policyicy)} {\SState^+ \sim \Pro(\cdot \mid \state, \action)}. $ The value function satisfies $ \Vpi{\policyicy}(\state) = \Qpi{\policyicy}(\state,\policyicy). $ In our analysis, we assume that policies have action-value functions that satisfy the uniform bound $\sup_{\psa} \abs{\Qpi{\policyicy}\psa} \leq 1$. We are also interested in approximating optimal policies, whose value and action-value functions are defined as $\Vstar(\state) = V^{\pistar}(\state) = \sup_{\policyicy} \Vpi{\policyicy}(\state)$ and $\Qstar\psa = Q^{\pistar}\psa = \sup_{\policyicy} Q^{\policyicy}\psa$. We assume that the starting state $\SState_0$ is drawn according to $\nu_{\text{start}}$ and study $ \Vpi{\policyicy} = \ensuremath{\mathbb{E}}_{\SState_0 \sim \nu_{\text{start}}}[\Vpi{\policyicy}(\SState_0)] $.
We define the \emph{discounted occupancy measure} associated with a policy $\policyicy$ as the distribution over the state-action space $ \DistributionOfPolicy{\policyicy}\psa = (1 - \discount ) \sum_{\hstep=0}^\infty \discount^\hstep \Pro_\hstep[ (\SState_\hstep,\AAction_\hstep) = \psa ]. $ We adopt the shorthand notation $\ensuremath{\mathbb{E}}_\policyicy$ for expectations over $\DistributionOfPolicy{\policyicy}$. For any functions $f, g: \StateSpace \times ASpace \rightarrow \R$, we make frequent use of the shorthands $ \ensuremath{\mathbb{E}}_\policyicy[f] \defeq \ensuremath{\mathbb{E}}_{(\SState, \AAction) \sim \DistributionOfPolicy{\policyicy}}[f(\SState, \AAction)], \quad \mbox{and} \quad \inprod{f}{g}_\policyicy \defeq \ensuremath{\mathbb{E}}_{(\SState, \AAction) \sim \DistributionOfPolicy{\policyicy}} \big[f(\SState, \AAction) \, g(\SState, \AAction) \big]. $ Note moreover that we have $\inprod{\1}{f}_\policyicy = \ensuremath{\mathbb{E}}_\policyicy[f]$, where $\1$ denotes the constant function equal to one. For a given $Q$-function and policy $\policyicy$, let us define the \emph{temporal difference error} (or TD error) associated with the sample $\sarsizNp{} = \sars{}$, and the \emph{Bellman error} at $\psa$: \begin{align*} (\TDError{Q}{\policyicy}{})\sarsiz{} \defeq Q\psa - \reward - \gamma Q(\successorstate,\policyicy), \qquad (\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa \defeq Q\psa - \reward\psa - \gamma \E_{\successorstate\sim\Pro\psa}Q(\successorstate,\policyicy). \end{align*} The TD error is a random variable, being a function of the sample $\sarsizNp{}$, while the Bellman error is its conditional expectation with respect to the immediate reward and successor state at $\psa$.
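To make this distinction concrete, the following numerical sketch (a hypothetical two-state, single-action example of our own construction; all variable names are illustrative and not part of the paper) checks that averaging sampled TD errors recovers the Bellman error, its conditional mean:

```python
import random

# Illustrative sketch (hypothetical toy MDP, not the paper's code): in a
# two-state MDP with a single action, the Bellman error at a state equals
# the expectation of the sampled TD errors drawn at that state.

random.seed(0)
gamma = 0.9
P = [0.3, 0.7]      # transition probabilities out of state 0
r = 1.0             # deterministic reward at state 0
Q = [0.5, 2.0]      # a candidate predictor (one action, so Q is state-indexed)

def bellman_error():
    # B(Q)(s=0) = Q(0) - r - gamma * E_{s' ~ P}[Q(s')]
    return Q[0] - r - gamma * sum(p * q for p, q in zip(P, Q))

def sampled_td_error():
    # delta(Q)(s=0, r, s') = Q(0) - r - gamma * Q(s') for a sampled successor s'
    s_next = 0 if random.random() < P[0] else 1
    return Q[0] - r - gamma * Q[s_next]

n = 200_000
avg_td = sum(sampled_td_error() for _ in range(n)) / n
print(round(bellman_error(), 3), round(avg_td, 3))  # agree up to sampling error
```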
Many of our bounds involve the quantity $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \ensuremath{\mathbb{E}}ecti{\big[\ensuremath{\mathcal{B}}or{Q}{\policyicy}{} \PSA \big]} {\PSA \sim \DistributionOfPolicy{\policyicy}}.$ Finally, we introduce the data generation mechanism. A more general sampling model is described in \cref{sec:GeneralGuarantees}. \begin{assumption}[I.i.d. dataset] \label{asm:IIDDataset} An i.i.d. dataset is a collection $\Dataset = \{ \sarsi{i} \}_{i=1}^n$ such that for each $i = 1, \ldots, n$ we have $(\state_i, \action_i, o_i) \sim \mu$ and conditioned on $(\state_i,\action_i,o_i )$, we observe a noisy reward $\reward_i = \reward(\state_i,\action_i) + \eta_i$ with $\E[\eta_i \mid \mathcal F_{i} ] = 0, \; |\reward_i| \leq 1$ and the next state $\successorstate_i\sim \Pro(\state_i,\action_i)$. \end{assumption} \subsection{Function Spaces and Weak Representation} Our methods involve three different types of function spaces, corresponding to policies, action-value functions, and test functions. A test function $\TestFunction{}$ is a mapping $(\state,\action,\identifier) \mapsto \TestFunction{}(\state,\action,\identifier)$ such that \mbox{$\sup_{(\state,\action,\identifier)}\abs{\TestFunction{}(\state,\action,\identifier)} \leq 1$}, where $o$ is an optional identifier containing side information. Our methodology involves the following three function classes: \begin{carlist} \item a \emph{policy class} $\PolicyClass$ that contains all policies $\policyicy$ of interest (for evaluation or optimization); \item for each $\policyicy$, the \emph{predictor class} $\Qclass{\policyicy}$ of action-value functions $Q$ that we permit; and \item for each $\policyicy$, the \emph{test function class} $\TestFunctionClass{\policyicy}$ that we use to enforce the Bellman residual constraints. 
\end{carlist} We use the shorthands $\Qclass{} = \cup_{\policyicy \in \PolicyClass}\Qclass{\policyicy}$ and \mbox{$\TestFunctionClass{} = \cup_{\policyicy \in \PolicyClass} \TestFunctionClass{\policyicy}$}. We assume \emph{weak realizability}: \begin{assumption}[Weak Realizability] \label{asm:WeakRealizability} For a given policy $\policyicy$, the predictor class $\Qclass{\policyicy}$ is weakly realizable with respect to the test space $\TestFunctionClass{\policyicy}$ and the measure $\mu$ if there exists a predictor $\QpiWeak{\policyicy} \in \Qclass{\policyicy}$ such that \begin{align} \label{EqnWeakRealizable} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}} {\mu} = 0 \; \text{for all } \TestFunction{} \in \TestFunctionClass{\policyicy} \qquad \text{and} \qquad \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = 0. \end{align} \end{assumption} The first condition requires the predictor to satisfy the Bellman equations \emph{on average}. The second condition amounts to requiring that the predictor returns the value of $\policyicy$ at the start distribution: using \cref{lem:Simulation} stated in the sequel, we have \begin{align*} \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} = \ensuremath{\mathbb{E}}ecti{[\QpiWeak{\policyicy}-\Qpi{\policyicy}](S ,\policyicy)}{S \sim \nu_{\text{start}}} = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \frac{1}{1-\discount} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = 0. \end{align*} This weak notion should be contrasted with \emph{strong realizability}, which requires a function $\Qpi{\policyicy} \in \Qclass{\policyicy}$ that satisfies the Bellman equation in all state-action pairs. 
A stronger assumption that we sometimes use is Bellman closure, which requires that $ \BellmanEvaluation{\policyicy}(Q) \in \Qclass{\policyicy} \; \mbox{for all $Q \in \Qclass{\policyicy}$.} $ The corresponding `weak' version is given in \cref{sec:appWeakClosure}. \section{Policy Estimates via the Weak Bellman Equations} \label{sec:Algorithms} In this section, we introduce our high-level approach, first at the population level and then in terms of regularized/normalized sample-based approximations. \subsection{Weak Bellman equations, empirical approximations and confidence intervals} We begin by noting that the predictor $\Qpi{\policyicy}$ satisfies the Bellman equations everywhere in the state-action space, i.e., $\ensuremath{\mathcal{B}}or{\Qpi{\policyicy}}{\policyicy}{} = 0$. However, if our dataset is ``small'' relative to the complexity of the class of functions on the state-action space, then it is unrealistic to enforce such a stringent condition. Instead, the idea is to control the Bellman error in a weighted-average sense, where the weights are given by a set of \emph{test functions}. At the idealized population level (corresponding to an infinite sample size), we consider predictors that satisfy the conditions \begin{align} \label{eqn:PoP} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0, \qquad \text{for all} \; \TestFunction{} \in \TestFunctionClass{\policyicy}, \end{align} where $\TestFunctionClass{\policyicy}$ is a user-defined set of test functions. The two main challenges here are how to use data to enforce an approximate version of such constraints (along with rigorous data-dependent guarantees), and how to design the test function space. We begin with the former challenge.
\paragraph{Construction of the empirical set} Given a dataset $\Dataset = \{(\state_i, \action_i, \reward_i, \successorstate_i, o_i) \}_{i=1}^n$, we can approximate the Bellman errors by a linear combination of the temporal difference errors: \begin{align} \label{eqn:WeightedTemporaResidual} \int \TestFunction{}\psa \underbrace{[Q\psa - (\BellmanEvaluation{\policyicy} Q)\psa]}_{= \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}\psa } d\mu \approx \frac{1}{n} \sum_{i=1}^n \TestFunction{}(\state_i, \action_i) \underbrace{ [Q(\state_i, \action_i) - \reward_i - \discount Q(\successorstate_i,\policyicy) ]}_{= \TDError{Q}{\policyicy}{}\sarsi{i} }. \end{align} Note that the approximation~\eqref{eqn:WeightedTemporaResidual} corresponds to a weighted linear combination of temporal differences. Written more compactly in inner product notation, equation~\eqref{eqn:WeightedTemporaResidual} reads $\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \approx \innerprodweighted{\TestFunction{}}{\TDError{Q}{\policyicy}{}}{n}$, where $ \innerprodweighted{f}{g}{n} = \frac{1}{n}\sum_{\sarsi{} \in \Dataset} (fg)\sarsi{}. $ In general, the action value function $\Qpi{\policyicy}$ does not satisfy $\inprod{\TestFunction{}}{ \TDError{\Qpi{\policyicy}}{\policyicy}{}}_n = 0$ because the empirical approximation~\eqref{eqn:WeightedTemporaResidual} involves sampling error. For these reasons, in order to (approximately) identify $\Qpi{\policyicy}$, we impose only inequalities. 
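As a concrete illustration of the approximation~\eqref{eqn:WeightedTemporaResidual}, the following hedged sketch (a toy tabular example with one action and deterministic rewards; all names are ours, not the paper's) compares the population inner product $\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}$ against its sample counterpart $\innerprodweighted{\TestFunction{}}{\TDError{Q}{\policyicy}{}}{n}$:

```python
import random

# Hedged sketch (hypothetical tabular setup, not the paper's code): compare
# the population inner product <f, B(Q)>_mu with the empirical quantity
# (1/n) sum_i f(s_i) * [Q(s_i) - r_i - gamma * Q(s'_i)].

random.seed(1)
gamma = 0.9
mu = [0.4, 0.6]                      # data-generating distribution over states
P = {0: [0.2, 0.8], 1: [0.5, 0.5]}   # transition rows
r = [1.0, 0.0]                       # deterministic rewards
Q = [1.5, 0.7]                       # a candidate predictor
f = [1.0, -1.0]                      # one test function

def bellman_err(s):
    return Q[s] - r[s] - gamma * sum(p * q for p, q in zip(P[s], Q))

pop = sum(mu[s] * f[s] * bellman_err(s) for s in range(2))   # <f, B(Q)>_mu

def draw():
    s = 0 if random.random() < mu[0] else 1
    s2 = 0 if random.random() < P[s][0] else 1
    return s, r[s], s2

n = 100_000
emp = 0.0
for _ in range(n):
    s, rew, s2 = draw()
    emp += f[s] * (Q[s] - rew - gamma * Q[s2])
emp /= n
print(round(pop, 3), round(emp, 3))  # close for large n
```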
Given a class of test functions $\TestFunctionClass{\policyicy}$, a radius parameter $\ensuremath{\rho} \geq 0$ and regularization parameter $\ensuremath{\lambda} \geq 0$, we define the set \begin{align} \label{eqn:EmpiricalBellmanGalerkinEquations} \SuperEmpiricalFeasible \defeq \left \{ Q \in \Qclass{\policyicy} \quad \text{such that} \quad \frac{ \abs{\innerprodweighted{\TestFunction{}}{\TDError{Q}{\pi}{}}{n}} } { \TestNormaRegularizerEmp{} } \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$}\right \}. \end{align} When the choices of $(\ensuremath{\rho}, \ensuremath{\lambda})$ are clear from the context, we adopt the shorthand $\EmpiricalFeasibleSet{\policyicy}( \TestFunctionClass{\policyicy})$, or $\EmpiricalFeasibleSet{\policyicy}$ when the function class $\TestFunctionClass{\policyicy}$ is also clear. If $\TestFunctionClass{\policyicy}$ and $\Qclass{\policyicy}$ have finite cardinality, $\ensuremath{\rho} \approx \ln |\TestFunctionClass{\policyicy}||\Qclass{\policyicy}| + \ln 1/\FailureProbability$, where $\FailureProbability$ is a prescribed failure probability. Our definition of the empirical constraint set~\eqref{eqn:EmpiricalBellmanGalerkinEquations} has two key components: first, the division by $\TestNormaRegularizerEmp{}$ corresponds to a form of \emph{self-normalization}; second, the addition of $\lambda$ corresponds to a form of \emph{regularization}. Self-normalization is needed so that the constraints remain suitably scale-invariant. More importantly---in conjunction with the regularization---it ensures that test functions that have poor coverage under the dataset do not have major effects on the solution. In particular, the empirical norm $\EmpNorm{f}^2$ in the self-normalization measures how well the given test function is covered by the dataset.
Any test function with poor coverage (i.e., $\EmpNorm{f}^2 \approx 0$) will not yield useful information, and the regularization counteracts its influence. In our guarantees, the choices of $\lambda$ and $\rho$ are critical; as shown in our theory, we typically have $\lambda = \rho/n$, where $\rho$ scales with the metric entropy of the predictor, test and policy spaces. Disregarding $\rho$, the right-hand side of the constraint decays as $1/\sqrt{n}$, so that the constraints are enforced more tightly as the sample size increases. \paragraph{Confidence bounds and policy optimization:} First, for any fixed policy $\policyicy$, we can use the feasibility set~\eqref{eqn:EmpiricalBellmanGalerkinEquations} to compute the lower and upper estimates \begin{align} \label{eqn:ConfidenceIntervalEmpirical} \VminEmp{\policyicy} \defeq \min_{Q \in \SuperEmpiricalFeasible} \ensuremath{\mathbb{E}}ecti{\big[Q(S ,\policyicy)\big]}{S \sim \nu_{\text{start}}}, \; \text{and} \quad \VmaxEmp{\policyicy} \defeq \max_{Q \in \SuperEmpiricalFeasible} \ensuremath{\mathbb{E}}ecti{\big[Q(S ,\policyicy)\big]}{S \sim \nu_{\text{start}}}, \end{align} corresponding to estimates of the minimum and maximum value that the policy $\policyicy$ can take at the initial distribution. The family of lower estimates can be used to perform policy optimization over the class $\PolicyClass$, in particular by solving the \emph{max-min} problem \begin{align} \label{eqn:MaxMinEmpirical} \max_{\policy \in \PolicyClass} \Big [\min_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \Big], \qquad \text{or equivalently} \qquad \max_{\policy \in \PolicyClass}\VminEmp{\policyicy}. \end{align} \paragraph{Form of guarantees} Let us now specify and discuss the types of guarantees that we establish for our estimators~\eqref{eqn:ConfidenceIntervalEmpirical} and~\eqref{eqn:MaxMinEmpirical}. 
All of our theoretical guarantees involve a $\mu$-based counterpart $\PopulationFeasibleSet{\policyicy}$ of the data-dependent set $\EmpiricalFeasibleSet{\policyicy}$. More precisely, we define the population set \begin{align} \label{eqn:ApproximateBellmanGalerkingEquations} \SuperPopulationFeasible \defeq \biggl\{ Q \in \Qclass{\policyicy} \quad \text{such that} \quad \frac{ \abs{\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\pi}{}}{\mu}} } { \TestNormaRegularizerPop{} } \leq \sqrt{\frac{4 \ensuremath{\rho}}{n}} \qquad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$} \biggr\}, \end{align} where $\inprod{f}{g}_\mu \defeq \int f \psa g \psa d \mu$ is the inner product induced by a distribution\footnote{See Section~\ref{SecDataGen} for a precise definition of the relevant $\mu$ for a fairly general sampling model.} $\mu$ over $\psa$. As before, we use the shorthand notation $\PopulationFeasibleSet{\policyicy}$ when the underlying arguments are clear from context. Moreover, in the sequel, we generally ignore the constant $4$ in the definition~\eqref{eqn:ApproximateBellmanGalerkingEquations} by assuming that $\ensuremath{\rho}$ is rescaled appropriately---e.g., that we use a factor of $\frac{1}{4}$ in defining the empirical set. It should be noted that, in contrast to the set $\EmpiricalFeasibleSet{\policyicy}$, the set $\PopulationFeasibleSet{\policyicy}$ is \emph{non-random}: it is defined in terms of the distribution $\mu$ and the input space $\InputFunctionalSpace$. It relaxes the orthogonality constraints in the weak Bellman formulation~\eqref{eqn:PoP}. Our guarantees for off-policy confidence intervals take the following form: \begin{subequations} \label{EqnPolEval} \begin{align} \label{EqnCoverage} \mbox{\underline{Coverage guarantee:}} & \qquad \big[ \VminEmp{\policyicy}, \VmaxEmp{\policyicy} \big] \ni \Vpi{\policyicy}.
\\ \label{EqnWidthBound} \mbox{\underline{Width bound:}} & \qquad \max \Big \{ |\VminEmp{\policyicy} - \Vpi{\policyicy} |, \; |\VmaxEmp{\policyicy} - \Vpi{\policyicy} | \Big \} \leq \frac{1}{1-\discount} \max_{Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{\policyicy})} |\PolComplexGen{\policyicy}|. \end{align} \end{subequations} Turning to policy optimization, let $\widetilde \pi$ be a solution to the max-min criterion~\eqref{eqn:MaxMinEmpirical}. Then we prove a result of the following type: \begin{align} \label{EqnOracle} \mbox{\underline{Oracle inequality:}} \qquad \Vpi{\widetilde \pi} \geq \max_{\policyicy \in \PolicyClass} \Big \{ \underbrace{ \vphantom{ \frac{1}{1 - \discount} \max_{Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{})} |\PolComplexGen{\policyicy}| } \Vpi{\policyicy}}_{\mbox{\tiny{Value}}} - \underbrace{\frac{1}{1 - \discount} \max_{Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{})} |\PolComplexGen{\policyicy}|}_{\mbox{\tiny{Evaluation uncertainty}}} \Big \}. \end{align} Note that this result guarantees that the estimator competes against an oracle that can search over all policies, and select one based on the optimal trade-off between its value and evaluation uncertainty. \subsection{High-probability guarantees} In this section, we present some high-probability guarantees. So as to facilitate understanding under space constraints, we state here results under simplifying assumptions: (a) the dataset originates from a fixed distribution, and (b) the classes $\PolicyClass{}, \TestFunctionClass{}$ and $\Qclass{}$ have finite cardinality. We emphasize that~\cref{sec:GeneralGuarantees} provides a far more general version of this result, with an extremely flexible sampling model, and involving metric entropies of parametric or non-parametric function classes. 
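We note that in the finite-class setting the max-min estimator~\eqref{eqn:MaxMinEmpirical} is directly computable by enumeration. The following schematic sketch (a hypothetical toy MDP and toy classes of our own construction, which also ignores the normalization $\sup_{\psa}|Q\psa| \leq 1$; it is not the paper's implementation) builds the empirical feasible set~\eqref{eqn:EmpiricalBellmanGalerkinEquations} and solves the max-min problem by brute force:

```python
import math, random

# Schematic illustration (hypothetical toy problem): keep the Q's whose
# self-normalized averaged TD errors pass the constraint, take the minimum
# start-state value over the surviving Q's, then maximize over policies.

random.seed(0)
gamma, n, rho = 0.9, 5000, 10.0
lam = rho / n

def reward(s, a):          # deterministic toy reward: action 0 pays 1
    return 1.0 if a == 0 else 0.0

def step(s, a):            # uniform transitions over two states
    return random.randint(0, 1)

# dataset under a uniform behavior distribution mu
data = []
for _ in range(n):
    s, a = random.randint(0, 1), random.randint(0, 1)
    data.append((s, a, reward(s, a), step(s, a)))

policies = {"pi0": 0, "pi1": 1}        # deterministic: always play that action
# per-policy finite Q-classes; first entry is the true Q^pi, second is biased
Qclass = {
    "pi0": [lambda s, a: reward(s, a) + 9.0, lambda s, a: reward(s, a) + 6.0],
    "pi1": [lambda s, a: reward(s, a),       lambda s, a: reward(s, a) + 5.0],
}
test_fns = [lambda s, a: 1.0]          # a single constant test function

def feasible(Q, act):
    for f in test_fns:
        num = sum(f(s, a) * (Q(s, a) - r - gamma * Q(s2, act))
                  for (s, a, r, s2) in data) / n
        den = math.sqrt(sum(f(s, a) ** 2 for (s, a, r, s2) in data) / n + lam)
        if abs(num) / den > math.sqrt(rho / n):
            return False
    return True

def v_min(name):
    act = policies[name]
    vals = [Q(0, act) for Q in Qclass[name] if feasible(Q, act)]  # start state 0
    return min(vals) if vals else -float("inf")

best = max(policies, key=v_min)
print(best, v_min(best))
```

In this toy instance the biased predictors have constant Bellman error and are rejected by the constant test function, so the surviving values are the true ones and the max-min policy is the higher-value one.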
\begin{theorem}[Guarantees for finite classes] \label{thm:NewPolicyEvaluationFiniteClasses} Consider a triple $\InputFunctionalSpace$ that is weakly Bellman realizable (\cref{asm:WeakRealizability}); an i.i.d. dataset (\Cref{asm:IIDDataset}); and the choices $\ensuremath{\rho} = c \big \{ \log (|\TestFunctionClass{}| |\PolicyClass | |\Qclass{}|) + \log (1/\FailureProbability) \big \}$ and $\ensuremath{\lambda} = c' \ensuremath{\rho}/n$ for some constants $c, c'$. Then, with probability at least $1 - \FailureProbability$: \begin{carlist} \item \underline{Policy evaluation:} For any $\pi \in \PolicyClass$, the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ specify a confidence interval satisfying the coverage~\eqref{EqnCoverage} and width bounds~\eqref{EqnWidthBound}. \item \underline{Policy optimization:} Any max-min policy $\widetilde \pi$ from~\eqref{eqn:MaxMinEmpirical} satisfies the oracle inequality~\eqref{EqnOracle}. \end{carlist} \end{theorem} \section{Concentrability Coefficients and Test Spaces} \label{sec:Applications} In this section, we develop some connections to concentrability coefficients that have been used in past work, and discuss various choices of the test class. Like the predictor class $\Qclass{\policyicy}$, the test class $\TestFunctionClass{\policyicy}$ encodes domain knowledge, and thus its choice is delicate. Different from the predictor class, the test class does not require a `realizability' condition. As a general principle, the test functions should be chosen as orthogonal as possible with respect to the Bellman residual, so as to enable rapid progress towards the solution; at the same time, they should be sufficiently ``aligned'' with the dataset, meaning that $\norm{\TestFunction{}}{\mu}$ or its empirical counterpart $\norm{\TestFunction{}}{n}$ should be large.
Given a test class, each additional test function posits a new constraint that helps identify the correct predictor; at the same time, it increases the metric entropy (parameter $\ensuremath{\rho}$), which loosens each individual constraint. In summary, there are trade-offs to be made in the selection of the test class $\TestFunctionClass{}$, much as for $\Qclass{}$. In order to assess the statistical cost that we pay for off-policy data, it is natural to define the \emph{off-policy cost coefficient} (OPC) as \begin{align} \label{EqnConcSimple} K^\policy(\PopulationFeasibleSet{\policyicy}, \ensuremath{\rho}, \ensuremath{\lambda}) & \defeq \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{|\PolComplexGen{\policyicy}|^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} = \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{} }{\policyicy} ^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}}. \end{align} With this notation, our off-policy width bound~\eqref{EqnWidthBound} can be re-expressed as \begin{subequations} \begin{align} \label{eqn:ConcreteCI} \abs{\VminEmp{\policyicy} - \VmaxEmp{\policyicy}} \leq 2 \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{K^\policy \frac{ \ensuremath{\rho}}{n}}, \end{align} while the oracle inequality~\eqref{EqnOracle} for policy optimization can be re-expressed in the form \begin{align} \label{EqnConcSimpleBound} \Vpi{\widetilde \pi} \geq \max_{\policyicy \in \PolicyClass} \Big\{ \Vpi{\policyicy} - \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{K^\policy \frac{ \ensuremath{\rho}}{n}} \Big \}. \end{align} \end{subequations} Since $\ensuremath{\lambda} \sim \ensuremath{\rho}/n$, the factor $\sqrt{1 + \ensuremath{\lambda}}$ can be bounded by a constant in the typical case $n \geq \ensuremath{\rho}$. We now offer concrete examples of the OPC coefficient, while deferring further examples to \cref{sec:appConc}.
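As a quick numerical illustration of the re-expressed width bound above, the following sketch evaluates its right-hand side; all numeric values here ($K^\policy$, $\ensuremath{\rho}$, $n$, $\discount$) are hypothetical choices for illustration, not values taken from our results.

```python
import numpy as np

def ci_width(K, rho, n, gamma, lam=None):
    """Right-hand side of the width bound:
    2 * sqrt(1 + lam) / (1 - gamma) * sqrt(K * rho / n).
    By default, lam is set to rho / n, mirroring the scaling in the text."""
    if lam is None:
        lam = rho / n
    return 2.0 * np.sqrt(1.0 + lam) / (1.0 - gamma) * np.sqrt(K * rho / n)

# Illustrative values: OPC coefficient K, radius rho, sample size n.
width = ci_width(K=4.0, rho=10.0, n=10_000, gamma=0.9)
```

The $1/\sqrt{n}$ decay of the interval width is visible directly: quadrupling $n$ halves the width, up to the $\sqrt{1 + \ensuremath{\lambda}}$ factor.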
\subsection{Likelihood ratios} Our broader goal is to obtain small Bellman error along the distribution induced by $\policyicy$. Assume that one constructs a test function class $\TestFunctionClass{\policyicy}$ of possible likelihood ratios. \begin{proposition}[Likelihood ratio bounds] \label{prop:LikeRatio} Assume that for some constant $\scaling{\policyicy}$, the test function defined as $\TestFunction{}^*\psa = \frac{1}{\scaling{\policyicy}} \frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa}$ belongs to $\TestFunctionClass{\policyicy}$ and satisfies $\norm{\TestFunction{}^*}{\infty} \leq 1$. Then the OPC coefficient satisfies \begin{align} \label{EqnLikeRatioBound} \ConcentrabilityGeneric{\policyicy} \stackrel{(i)}{\leq} \frac{ \ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big] }{\policyicy} + \scalingsq{\policyicy} \ensuremath{\lambda}} {1 + \ensuremath{\lambda}} \; \stackrel{(ii)}{\leq} \frac{\scaling{\policyicy} \big(1 + \scaling{\policyicy} \ensuremath{\lambda} \big)}{1 + \ensuremath{\lambda}}. \end{align} \end{proposition} Here $\scaling{\policyicy}$ is a scaling parameter that ensures $\norm{\TestFunction{}^*}{\infty} \leq 1$. Concretely, one can take $\scaling{\policyicy} = \sup_{(\state,\action)} \frac{\DistributionOfPolicy{\policyicy}(\state,\action)}{\mu(\state,\action)}$. The proof is in \cref{sec:LikeRatio}. Since $\ensuremath{\lambda} = \ensuremath{\lambda}_n \rightarrow 0$ as $n$ increases, the OPC coefficient is bounded by a multiple of the expected ratio $\ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big]}{\policyicy}{}$. Up to an additive offset, this expectation is equivalent to the $\chi^2$-divergence between the policy-induced occupation measure $\DistributionOfPolicy{\policyicy}$ and the data-generating distribution $\mu$.
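To make the last statement concrete, writing $d^{\pi}$ for the occupation measure $\DistributionOfPolicy{\policyicy}$ (generic notation used only in this display), the equivalence is the standard identity

```latex
\mathbb{E}_{d^{\pi}}\Big[ \frac{d^{\pi}(s,a)}{\mu(s,a)} \Big]
  \;=\; \mathbb{E}_{\mu}\Big[ \Big( \frac{d^{\pi}(s,a)}{\mu(s,a)} \Big)^{2} \Big]
  \;=\; \chi^{2}\big( d^{\pi} \,\|\, \mu \big) + 1,
```

where the first equality is a change of measure, and the second uses the definition $\chi^{2}(p \,\|\, q) = \mathbb{E}_{q}\big[(p/q)^{2}\big] - 1$.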
The concentrability coefficient can be plugged back into \cref{eqn:ConcreteCI,EqnConcSimpleBound} to obtain a concrete policy optimization bound. In this case, we recover a result similar to \cite{xie2020Q}, but with a much milder concentrability coefficient that involves only the chosen comparator policy. \subsection{The error test space} \label{sec:ErrorTestSpace} We now turn to the discussion of a choice for the test space that extends the LSTD algorithm to non-linear spaces. A simplification to the linear setting is presented later in \cref{sec:Linear}. As is well known, the LSTD algorithm \cite{bradtke1996linear} can be seen as minimizing the Bellman error projected onto the linear prediction space $Q$. Define the transition operator $ (\TransitionOperator{\policyicy}Q)\psa = \ensuremath{\mathbb{E}}ecti{Q(\successorstate,\policyicy ) } {\successorstate \sim \Pro\psa} $, and the prediction error $QErr = Q - \QpiWeak{\policyicy}$, where $\QpiWeak{\policyicy}$ is a $Q$-function from the definition of weak realizability. The Bellman error can be re-written as $ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{} = (\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr $. When realizability holds, in the linear setting and at the population level, the LSTD solution seeks to satisfy the projected Bellman equations \begin{align} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} = 0, \quad \text{for all $\TestFunction{} \in \QclassErrCentered{\policyicy}$}. \end{align} In the linear case, $\QclassErrCentered{\policyicy} $ is the class of linear functions $\Qclass{\policyicy}$ used as predictors; when $\Qclass{\policyicy}$ is non-linear, we can extend the LSTD method by using the (nonlinear) error test space $\TestFunctionClass{\policyicy} = \QclassErrCentered{\policyicy} = \{Q - \QpiWeak{\policyicy} \}$. 
Since $\QclassErrCentered{\policyicy}$ is unknown (as it depends on the weak solution $\QpiWeak{\policyicy}$), we choose instead the larger class \begin{align*} \QclassErr{\policyicy} = \{ Q - Q' \mid Q,Q' \in \Qclass{\policyicy} \}, \end{align*} which contains $\QclassErrCentered{\policyicy}$. The resulting approach can be seen as performing a projection of the Bellman operator $\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}$ into the error space $\QclassErrCentered{\policyicy}$, much like LSTD does in the linear setting. However, different from LSTD, our procedure returns confidence intervals as opposed to a point estimator. This choice of the test space is related to the Bubnov-Galerkin method~\cite{repin2017one} for linear spaces; it selects the test space $\TestFunctionClass{\policyicy}$ to be identical to the trial space $\QclassErrCentered{\policyicy}$ that contains all possible solution errors. \begin{lemma}[OPC coefficient from prediction error] \label{lem:PredictionError} For any test function class $\TestFunctionClass{\policyicy} \supseteq \QclassErr{\policyicy}$, we have \begin{align} \label{EqnPredErrorBound} K^\policy & \leq \max_{Q \in \Qclass{\policyicy}} \big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2 } { \innerprodweighted{QErr} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu}^2 } \big \} = \max_{QErr \in \QclassErrCentered{\policyicy}} \big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\mu}^2 } \big \}. 
\end{align} \end{lemma} The above coefficient measures the ratio between the Bellman error along the distribution of the target policy $\policyicy$ and that projected onto the error space $\QclassErrCentered{\policyicy}$ defined by $\Qclass{\policyicy}$. It is a concentrability coefficient that \emph{always} applies, as the choice of the test space does not require domain knowledge. See~\cref{sec:PredictionError} for the proof, and \cref{sec:appBubnov} for further comments and insights, as well as a simplification in the special case of Bellman closure. \subsection{The Bellman test space} \label{sec:DomainKnowledge} In the prior section, we controlled the projected Bellman error. Another longstanding approach in reinforcement learning is to control the Bellman error itself, for example by minimizing the squared Bellman residual. In general, this cannot be done if only an offline dataset is available, due to the well-known \emph{double sampling} issue. However, in some cases we can use a helper class to capture the Bellman error. Such a class needs to be a superset of the class of \emph{Bellman test functions} given by \begin{align} \label{EqnBellmanTest} \TestFunctionClassBubnov{\policyicy} & \defeq \{ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} \mid Q \in \Qclass{\policyicy} \}. \end{align} Any test class that contains the above allows us to control the Bellman residual, as we show next.
\begin{lemma}[Bellman Test Functions] \label{lem:BellmanTestFunctions} For any test function class $ \TestFunctionClass{\policyicy}$ that contains $\TestFunctionClassBubnov{\policyicy}$, we have \begin{subequations} \begin{align} \label{EqnBellBound} \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \leq c_1 \sqrt{\frac{\ensuremath{\rho}}{n}} \qquad \mbox{for any $Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{\policyicy})$.} \end{align} Moreover, the {off-policy cost coefficient} is upper bounded as \begin{align} \label{EqnBellBoundConc} K^\policy & \stackrel{(i)}{\leq} c_1 \sup_{Q \in \Qclass{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2} \stackrel{(ii)}{\leq} c_1 \sup_{Q \in \Qclass{\policyicy}} \frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2} \stackrel{(iii)}{\leq} c_1 \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa} {\mu\psa}. \end{align} \end{subequations} \end{lemma} \noindent See~\cref{sec:BellmanTestFunctions} for the proof of this claim. Consequently, whenever the test class includes the Bellman test functions, the {off-policy cost coefficient} is at most the ratio between the squared Bellman residuals along the target distribution and the data-generating distribution. If Bellman closure holds, then the prediction error space $\QclassErr{\policyicy}$ introduced in \cref{sec:ErrorTestSpace} contains the Bellman test functions: for $Q \in \Qclass{\policyicy}$, we can write $ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q \in \QclassErr{\policyicy} $. This fact allows us to recover a result in the recent paper~\cite{xie2021bellman} in the special case of Bellman closure, although the approach presented here is more general.
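Step (iii) of the lemma rests on the elementary comparison between weighted norms, $\norm{f}{\policyicy}^2 \leq \big( \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa} \big) \norm{f}{\mu}^2$. A toy numerical check of this step on a finite state-action space (random vectors and distributions; everything below is an illustrative construction, not part of our formal development):

```python
import numpy as np

def norm_ratio_check(seed):
    """Check ||f||_pi^2 <= (sup_x pi(x)/mu(x)) * ||f||_mu^2 on a finite space."""
    rng = np.random.default_rng(seed)
    m = 6                                   # number of state-action pairs
    f = rng.normal(size=m)                  # arbitrary (Bellman-residual) values
    mu = rng.dirichlet(np.ones(m))          # data-generating distribution
    pi = rng.dirichlet(np.ones(m))          # target occupation measure
    lhs = np.sum(pi * f**2) / np.sum(mu * f**2)   # ||f||_pi^2 / ||f||_mu^2
    rhs = np.max(pi / mu)                         # sup likelihood ratio
    return lhs, rhs
```

For every seed, the norm ratio on the left is dominated by the sup likelihood ratio on the right, which is exactly the step used to pass from (ii) to (iii).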
\subsection{Combining test spaces} \label{sec:MultipleRobustness} Often, it is natural to construct a test space that is a union of several simpler classes. A simple but valuable observation is that the resulting procedure inherits the best of the OPC coefficients. Suppose that we are given a collection $\{ \TestFunctionClass{\policyicy}_m \}_{m=1}^M$ of $M$ different test function classes, and define the union $\TestFunctionClass{\policyicy} = \bigcup_{m=1}^M \TestFunctionClass{\policyicy}_m$. For each $m = 1, \ldots, M$, let $K^\policy_m$ be the OPC coefficient defined by the function class $\TestFunctionClass{\policyicy}_m$ and radius $\ensuremath{\rho}$, and let $K^\policy(\TestFunctionClass{})$ be the OPC coefficient associated with the full class. Then we have the following guarantee: \begin{lemma}[Multiple test classes] \label{prop:MultipleRobustness} $ \label{EqnMinBound} K^\policy(\TestFunctionClass{}) \leq \min_{m = 1, \ldots, M} K^\policy_m. $ \end{lemma} \noindent This guarantee is a straightforward consequence of our construction of the feasibility sets: in particular, we have $\PopulationFeasibleSet{\policyicy}(\TestFunctionClass{}) = \cap_{m =1}^M \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{}_m)$, and consequently, by the variational definition of the {off-policy cost coefficient} $K^\policy(\TestFunctionClass{})$ as optimization over $\PopulationFeasibleSet{\policyicy}(\TestFunctionClass{})$, the bound~\eqref{EqnMinBound} follows. In words, when multiple test spaces are combined, then our algorithms inherit the best (smallest) OPC coefficient over all individual test spaces. While this behavior is attractive, one must note that there is a statistical cost to using a union of test spaces: the choice of $\ensuremath{\rho}$ scales as a function of $\TestFunctionClass{}$ via its metric entropy. 
This increase in $\ensuremath{\rho}$ must be balanced with the benefits of using multiple test spaces.\footnote{For space reasons, we defer to \cref{sec:IS2BC} an application in which we construct a test function space as a union of subclasses, and thereby obtain a method that automatically leverages Bellman closure when it holds, falls back to importance sampling if closure fails, and falls back to a worst-case bound in general.} \section{Linear Setting} \label{sec:Linear} In this section, we turn to a detailed analysis of our estimators using function classes that are linear in a feature map. Let $\phi: \StateSpace \times ASpace \rightarrow \R^\dim$ be a given feature map, and consider linear expansions $g_{\CriticPar{}} \psa \defeq \inprod{\CriticPar{}}{\phi \psa} \; = \; \sum_{j=1}^d \CriticPar{j} \phi_j \psa$. The class of \emph{linear functions} takes the form \begin{align} \label{EqnLinClass} \ensuremath{\mathcal{L}} & \defeq \{ \psa \mapsto g_{\CriticPar{}} \psa \mid \CriticPar{} \in \R^{\dim}, \; \norm{\CriticPar{}}{2} \leq 1 \}. \end{align} Throughout our analysis, we assume that $\norm{\phi(\state, \action)}{2} \leq 1$ for all state-action pairs. Following the approach in \cref{sec:ErrorTestSpace}, which is based on the LSTD method, we should choose the test function class $\TestFunctionClass{\policyicy} = \LinSpace$, as in the linear case the prediction error is linear. In order to obtain a computationally efficient implementation, we need to use a test class that is a ``simpler'' subset of $\LinSpace$. In particular, for linear functions, it is not hard to show that the estimates $\VminEmp{\policyicy}$ and $\VmaxEmp{\policyicy}$ from equation~\eqref{eqn:ConfidenceIntervalEmpirical} can be computed by solving a quadratic program, with two linear constraints for each test function. (See~\cref{sec:LinearConfidenceIntervals} for the details.) Consequently, the computational complexity scales linearly with the number of test functions. 
Thus, if we restrict ourselves to a finite test class contained within $\LinSpace$, we will obtain a computationally efficient approach. \subsection{A computationally friendly test class and OPC coefficients} Define the empirical covariance matrix $\widehat \Sigma = \frac{1}{n} \sum_{i=1}^{n} \phiEmpirical{i} \phiEmpirical{i}^T$ \mbox{where $\phiEmpirical{i} \defeq \phi(\state_i,\action_i)$.} Let $\{\EigenVectorEmpirical{j} \}_{j=1}^\dim$ be the eigenvectors of the empirical covariance matrix $\widehat \Sigma$, and suppose that they are normalized to have unit $\ell_2$-norm. We use these normalized eigenvectors to define the finite test class \begin{align} \label{eqn:LinTestFunction} \TestFunctionClassRandom{\policyicy} \defeq \{ f_j, j = 1, \ldots, \dim \} \quad \mbox{where $f_j \psa \defeq \inprod{\EigenVectorEmpirical{j}}{\phi \psa}$.} \end{align} A few observations are in order: \begin{carlist} \item This test class has only $\dim$ functions, so that our QP implementation has $2 \dim$ constraints, and can be solved in polynomial time. (Again, see \cref{sec:LinearConfidenceIntervals} for details.) \item Since $\TestFunctionClassRandom{\policyicy}$ is a subset of $\LinSpace$, the choice of radius $\ensuremath{\rho} = c( \frac{\dim}{n} + \log 1/\delta)$ is valid for some constant $c$. \end{carlist} \newcommand{\ConcSimple(\TestFunctionClassRandom{\policy})}{K^\policy(\TestFunctionClassRandom{\policyicy})} \paragraph{Concentrability} When weak Bellman closure does not hold, our analysis needs to take into account how errors propagate via the dynamics. In particular, we define the \emph{next-state feature extractor} $\phiBootstrap{\policyicy} \psa \defeq \ensuremath{\mathbb{E}}ecti{\phi(\successorstate,\policyicy)}{\successorstate \sim \Pro\psa}$, along with the population covariance matrix $\Sigma \defeq \E_\mu \big[\phi \psa \phi^\top \psa \big]$, and its $\ensuremath{\lambda}$-regularized version $\SigmaReg \defeq \Sigma + \TestFunctionReg \Identity$.
We also define the matrices \begin{align*} \CovarianceBootstrap{\policyicy} \defeq \ensuremath{\mathbb{E}}ecti{[\phi(\phiBootstrap{\policyicy})^\top]}{ \mu}, \quad \CovarianceWithBootstrapReg{\policyicy} \defeq (\SigmaReg^{\frac{1}{2}} - \discount \SigmaReg^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy} )^\top (\SigmaReg^{\frac{1}{2}} - \discount \SigmaReg^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy}). \end{align*} The matrix $\CovarianceBootstrap{\policyicy}$ is the cross-covariance between successive states, whereas the matrix $\CovarianceWithBootstrapReg{\policyicy}$ is a suitably renormalized and symmetrized version of the matrix $\Sigma^{\frac{1}{2}} - \discount \Sigma^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy}$, which arises naturally from the policy evaluation equation. We refer to quantities that contain evaluations at the next state (e.g., $\phiBootstrap{\policyicy}$) as bootstrapping terms, and now bound the OPC coefficient in the presence of such terms: \begin{proposition}[OPC bounds with bootstrapping] \label{prop:LinearConcentrability} Under weak realizability, we have \begin{align} \label{EqnOPCBootstrap} \ConcSimple(\TestFunctionClassRandom{\policy}) & \leq c \; \dim \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\policyicy}} {(\CovarianceWithBootstrapReg{\policyicy})^{-1}}^2 \qquad \mbox{with probability at least $1-\FailureProbability$.} \end{align} \end{proposition} \noindent See~\cref{sec:LinearConcentrability} for the proof. The bound~\eqref{EqnOPCBootstrap} takes a familiar form, as it involves the same matrices used to define the LSTD solution. This is expected: our approach here is essentially equivalent to the LSTD method, except that LSTD returns only a point estimate rather than confidence intervals. Both are derived from the same principle, namely the Bellman equations projected along the predictor (error) space.
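The construction of the finite test class~\eqref{eqn:LinTestFunction} and of the regularized covariance $\SigmaReg$ can be sketched in a few lines of NumPy; this is a minimal illustration under our own naming conventions, with the eigenvectors playing the role of the test functions:

```python
import numpy as np

def eigenvector_test_class(Phi_data, lam):
    """Build the finite test class of unit-norm eigenvectors of the
    empirical covariance, plus the lambda-regularized covariance.
    Phi_data: array of shape (n, d), row i holding phi(s_i, a_i)."""
    n, d = Phi_data.shape
    Sigma_hat = Phi_data.T @ Phi_data / n       # empirical covariance (d, d)
    _, V = np.linalg.eigh(Sigma_hat)            # columns: unit eigenvectors
    # Test function j on the data: f_j(s_i, a_i) = <v_j, phi(s_i, a_i)>.
    F = Phi_data @ V                            # (n, d) test-function values
    Sigma_reg = Sigma_hat + lam * np.eye(d)     # lambda-regularized covariance
    return V, F, Sigma_reg
```

Since `np.linalg.eigh` returns orthonormal eigenvectors, the resulting $\dim$ test functions already have unit $\ell_2$-normalized directions, as required by the construction above.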
The bound quantifies how the feature extractor $\phi$ together with the bootstrapping term $\phiBootstrap{\policyicy}$, averaged along the target policy $\policyicy$, interact with the covariance matrix with bootstrapping $\CovarianceWithBootstrapReg{\policyicy}$. It is an approximation to the OPC coefficient bound derived in~\cref{lem:PredictionError}. The bootstrapping terms capture the temporal difference correlations that can arise in reinforcement learning when strong assumptions like Bellman closure do not hold. As a consequence, such an OPC coefficient being small is a \emph{sufficient} condition for reliable off-policy prediction. This bound on the OPC coefficient always applies, and it reduces to the simpler one~\eqref{EqnOPCClosure} when weak Bellman closure holds, with no need to inform the algorithm of the simplified setting; see \cref{sec:LinearConcentrabilityBellmanClosure} for the proof. \begin{proposition}[OPC bounds under weak Bellman Closure] \label{prop:LinearConcentrabilityBellmanClosure} Under Bellman closure, we have \begin{align} \label{EqnOPCClosure} \ConcSimple(\TestFunctionClassRandom{\policy}) & \leq c \; \dim \norm{\ensuremath{\mathbb{E}}ecti{ \phi}{\policyicy}}{\SigmaReg^{-1}}^2 \qquad \mbox{with probability at least $1 - \delta$.} \end{align} \end{proposition} \subsection{Actor-critic scheme for policy optimization} \label{sec:LinearApproximateOptimization} Having described a practical procedure to compute $\VminEmp{\policyicy}$, we now turn to the computation of the max-min estimator for policy optimization. We define the \emph{soft-max policy class} \begin{align} \label{eqn:SoftMax} \PolicyClass_{\text{lin}} \defeq \Big\{ \psa \mapsto \frac{e^{\innerprod{\phi\psa}{\ActorPar{}}}}{\sum_{\successoraction \in ASpace} e^{\innerprod{\phi(\state,\successoraction)}{\ActorPar{}}}} \mid \norm{\ActorPar{}}{2} \leq \nIter, \; \ActorPar{} \in \R^{\dim} \Big\}. 
\end{align} In order to compute the max-min solution~\eqref{eqn:MaxMinEmpirical} over this policy class, we implement an actor-critic method, in which the actor performs a variant of mirror descent.\footnote{Strictly speaking, it is mirror ascent, but we use the conventional terminology.} \begin{carlist} \item At each iteration $t = 1, \ldots, \nIter$, the policy $\ActorPolicy{t} \in \PolicyClass_{\text{lin}}$ can be identified with a parameter $\ActorPar{t} \in \R^\dim$. The sequence is initialized with $\ActorPar{1} = 0$. \item Using the finite test function class~\eqref{eqn:LinTestFunction} based on normalized eigenvectors, the pessimistic value estimate $\VminEmp{\ActorPolicy{t}}$ is computed by solving a quadratic program, as previously described. This computation returns the weight vector $\CriticPar{t}$ of the associated optimal action-value function. \item Using the action-value vector $\CriticPar{t}$, we update the actor's parameter as \begin{align} \label{eqn:LinearActorUpdate} \ActorPar{t+1} = \ActorPar{t} + \eta \CriticPar{t} \qquad \mbox{where $\eta = \sqrt{\frac{\log\abs{ASpace}}{2T}}$ is a stepsize parameter. } \end{align} \end{carlist} We now state a guarantee on the behavior of this procedure, based on two OPC coefficients: \begin{align} \label{EqnNewConc} \ConcentrabilityGenericSub{\widetilde \policy}{1} = \dim \l \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}}^2, \quad \mbox{and} \quad \ConcentrabilityGenericSub{\widetilde \policy}{2} = \dim \; \sup_{\policyicy \in \PolicyClass} \Big \{ \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\widetilde \policy}} {(\CovarianceWithBootstrapReg{\policyicy})^{-1}}^2 \Big \}. 
\end{align} Moreover, in making the following assertion, we assume that every weak solution $\QpiWeak{\policyicy}$ can be evaluated against the distribution of a comparator policy $\widetilde \policy \in \PolicyClass$, i.e., $\innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\widetilde \policy} = 0$ for all $\policyicy \in \PolicyClass $. (This assumption is still weaker than strong realizability). \begin{theorem}[Approximate Guarantees for Linear Soft-Max Optimization] \label{thm:LinearApproximation} Under the above conditions, running the procedure for $T$ rounds returns a policy sequence $\{\ActorPolicy{t}\}_{t=1}^T$ such that, for any comparator policy $\widetilde \policy\in \PolicyClass$, \begin{align} \label{EqnMirrorBound} \frac{1}{T} \sumiter \big \{ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \big \} & \leq \frac{c_1}{1-\discount} \biggr \{ \underbrace{\sqrt{\frac{\log \abs{ASpace}}{T}} \vphantom{\sqrt{\ConcentrabilityGenericSub{\widetilde \policy}{\cdot}\frac{\dim \log(nT) + \log \frac{n}{\FailureProbability} }{n}}} }_{\text{Optimization error}} + \underbrace{\sqrt{\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} \frac{\dim \log(nT) + \log \big(\frac{n }{\FailureProbability}\big) }{n}}}_{\text{Statistical error}} \biggr \}, \end{align} with probability at least $1 - \FailureProbability$. This bound always holds with \mbox{$\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} = \ConcentrabilityGenericSub{\widetilde \policy}{2}$,} and moreover, it holds with \mbox{$\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} = \ConcentrabilityGenericSub{\widetilde \policy}{1}$} when weak Bellman closure is in force. \end{theorem} \noindent See~\cref{sec:LinearApproximation} for the proof. 
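In code, the actor loop described above takes the following shape. This is a sketch under our own simplifying assumptions: the pessimistic critic, which in our procedure is a quadratic program, is abstracted as a callback `critic(pi)` returning the weight vector $\CriticPar{t}$ of the critic's action-value estimate.

```python
import numpy as np

def softmax_policy(theta, Phi):
    """Soft-max policy pi(a|s) proportional to exp(<phi(s,a), theta>).
    Phi: array of shape (num_states, num_actions, d)."""
    logits = Phi @ theta                        # (num_states, num_actions)
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def actor_critic(Phi, critic, T):
    """Mirror-ascent actor loop: theta_{t+1} = theta_t + eta * w_t,
    where w_t = critic(pi_t) stands in for the pessimistic QP step."""
    num_states, num_actions, d = Phi.shape
    theta = np.zeros(d)                         # theta_1 = 0
    eta = np.sqrt(np.log(num_actions) / (2 * T))  # stepsize from the text
    policies = []
    for _ in range(T):
        pi = softmax_policy(theta, Phi)
        policies.append(pi)
        w = critic(pi)                          # critic step (QP in our method)
        theta = theta + eta * w                 # mirror-ascent update
    return policies
```

The loop returns the full policy sequence $\{\ActorPolicy{t}\}_{t=1}^T$, matching the averaged guarantee of the theorem, which bounds the mean regret over the iterates rather than that of the final iterate.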
Whenever Bellman closure holds, the result automatically inherits the more favorable concentrability coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{1}$, as originally derived in \cref{prop:LinearConcentrabilityBellmanClosure}. The resulting bound is only $\sqrt{\dim}$ worse than the lower bound recently established in the paper~\cite{zanette2021provable}. However, the method proposed here is robust, in that it provides guarantees even when Bellman closure does not hold. In this case, we have a guarantee in terms of the OPC coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{2}$. Note that it is a uniform version of the one derived previously in~\cref{prop:LinearConcentrability}, in that there is an additional supremum over the policy class. This supremum arises due to the use of a gradient-based method, which implicitly searches over policies in bootstrapping terms; see \cref{sec:LinearDiscussion} for a more detailed discussion of this issue. \section*{Acknowledgment} AZ was partially supported by NSF-FODSI grant 2023505. In addition, this work was partially supported by NSF-DMS grant 2015454, NSF-IIS grant 1909365, as well as Office of Naval Research grant DOD-ONR-N00014-18-1-2640 to MJW. The authors are grateful to Nan Jiang and Alekh Agarwal for pointing out further connections with the existing literature, as well as to the reviewers for pointing out clarity issues. \tableofcontents \section{Additional Discussion and Results} \input{3p1p1-BRO} \subsection{Comparison with Weight Learning Methods} \label{sec:WeightLearning} The work closest to ours is \cite{jiang2020minimax}. That work also uses an auxiliary weight function class, which is comparable to our test class.
However, the test classes are used in different ways; we compare the two approaches in this section at the population level.\footnote{ The empirical estimator in \cite{jiang2020minimax} does not take into account the `alignment' of each weight function with respect to the dataset, which we do through self-normalization and regularization in the construction of the empirical estimator. This precludes obtaining the same type of strong finite-time guarantees that we are able to derive here.} Let us assume that weak realizability holds and that $\TestFunctionClass{}$ is symmetric, i.e., if $\TestFunction{} \in \TestFunctionClass{}$ then $-\TestFunction{} \in \TestFunctionClass{}$ as well. At the population level, our program seeks to solve \begin{align} \label{eqn:PopProgComparison} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} \quad \text{s.t. } \quad \sup_{\TestFunction{} \in \TestFunctionClass{}} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0, \end{align} which is equivalent for any $w \in \TestFunctionClass{}$ to \begin{align*} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \quad \text{s.t. } \quad \sup_{\TestFunction{} \in \TestFunctionClass{}} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0. \end{align*} Removing the constraints leads to the upper bound \begin{align*} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}.
\end{align*} Since this is a valid upper bound for any $w{} \in \TestFunctionClass{}$, minimizing over $w$ must still yield an upper bound, which reads \begin{align*} \inf_{w \in \TestFunctionClass{}}\sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} This is the population program for ``weight learning'', as described in \cite{jiang2020minimax}. It follows that Bellman residual orthogonalization always produces tighter confidence intervals than ``weight learning'' at the population level. Another interesting comparison is with ``value learning'', also described in \cite{jiang2020minimax}. In this case, assuming symmetric $\TestFunctionClass{}$, we can equivalently express the population program \eqref{eqn:PopProgComparison} using a Lagrange multiplier as follows \begin{align} \label{eqn:LagProgComparison} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \sup_{\lambda \geq 0 ,\TestFunction{} \in \TestFunctionClass{}} \lambda\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align} Rearranging, we obtain \begin{align*} \sup_{Q \in \Qclass{\policyicy}} \inf_{\lambda \geq 0 ,\TestFunction{} \in \TestFunctionClass{}}& \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \lambda\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} The ``value learning'' program proposed in \cite{jiang2020minimax} has a similar formulation to ours but differs in two key aspects. The first---and most important---is that \cite{jiang2020minimax} ignores the Lagrange multiplier; this means that ``value learning'' is no longer associated with a constrained program.
While the Lagrange multiplier could be ``incorporated'' into the test class $\TestFunctionClass{}$, doing so would cause the entropy of $\TestFunctionClass{}$ to be unbounded. Another point of difference is that ``value learning'' uses this expression with $\lambda = 1$ to derive the confidence interval \emph{lower bound}, while we use it to construct the confidence interval \emph{upper bound}. While this may seem like a contradiction, we notice that the expression is derived using different assumptions: we assume weak realizability of $Q$, while \cite{jiang2020minimax} assumes realizability of the density ratios between $\mu$ and the discounted occupancy measure of $\policyicy$. \subsection{Additional Literature} \label{sec:Literature} Here we summarize some additional literature. The efficiency of off-policy tabular RL has been investigated in the papers~\cite{yin2020near,yin2020asymptotically,yin2021towards}. For empirical studies on offline RL, see the papers~\cite{laroche2019safe,jaques2019way,wu2019behavior,agarwal2020optimistic,wang2020critic,siegel2020keep,nair2020accelerating,yang2021pessimistic,kumar2021should,buckman2020importance,kumar2019stabilizing,kidambi2020morel,yu2020mopo}. Some of the classical RL algorithms are presented in the papers~\cite{munos2003error,munos2005error,antos2007fitted,antos2008learning,farahmand2010error,farahmand2016regularized}. For a more modern analysis, see~\cite{chen2019information}. These works generally make additional assumptions on top of realizability. Alternatively, one can use importance sampling \cite{precup2000eligibility,thomas2016data,jiang2016doubly,farajtabar2018more}. A more recent idea is to look at the distributions themselves~\cite{liu2018breaking,nachum2019algaedice,xie2019towards,zhang2020gendice,zhang2020gradientdice,yang2020off,kallus2019efficiently}.
Offline policy optimization with pessimism has been studied in the papers~\cite{liu2020provably,rashidinejad2021bridging,jin2021pessimism,xie2021bellman,zanette2021provable,yinnear,uehara2021pessimistic}. There exists a fairly extensive literature on lower bounds with linear representations, including the two papers~\cite{zanette2020exponential,wang2020statistical} that concurrently derived the first exponential lower bounds for the offline setting; moreover, \cite{foster2021offline} proves that realizability and coverage alone are insufficient. In the context of off-policy optimization, several works have investigated methods that assume only realizability of the optimal policy~\cite{xie2020batch,xie2020Q}. Related work includes the papers~\cite{duan2020minimax,duan2021risk,jiang2020minimax,uehara2020minimax,tang2019doubly,nachum2020reinforcement,voloshin2021minimax,hao2021bootstrapping,zhang2022efficient,uehara2021finite,chen2022well,lee2021model}. Among concurrent works, we note \cite{zhan2022offline}. \subsection{Definition of Weak Bellman Closure} \label{sec:appWeakClosure} \begin{definition}[Weak Bellman Closure] The Bellman operator $\BellmanEvaluation{\policyicy}$ is \emph{weakly closed} with respect to the triple $\big( \Qclass{\policyicy}, \TestFunctionClass{\policyicy}, \mu \big)$ if for any $Q \in \Qclass{\policyicy}$, there exists a predictor $\QpiProj{\policyicy}{Q} \in \Qclass{\policyicy}$ such that \begin{align} \innerprodweighted{\TestFunction{}}{\QpiProj{\policyicy}{Q}}{\mu} = \innerprodweighted{\TestFunction{}} {\BellmanEvaluation{\policyicy}(Q)} {\mu} \qquad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$.} \end{align} \end{definition} \subsection{Additional results on the concentrability coefficients} \label{sec:appConc} \subsubsection{Testing with the identity function} Suppose that the identity function $\1$ belongs to the test class. Doing so amounts to requiring that the Bellman error is controlled in an average sense over all the data.
When this choice is made, we can derive some generic upper bounds on $K^\policy$, which we state and prove here: \begin{lemma} If $\1 \in \TestFunctionClass{\policyicy}$, then we have the upper bounds \begin{align} \label{eqn:InEq} K^\policy & \stackrel{(i)}{\leq} \frac{\max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolComplexGen{\policyicy}|^2}{ \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolMuComplex|^2} \; \stackrel{(ii)}{\leq} K^\policy_* \defeq \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{|\PolComplexGen{\policyicy}|^2}{|\PolMuComplex|^2}. \end{align} \end{lemma} \begin{proof} Since $\1 \in \TestFunctionClass{}$, the definition of $\PopulationFeasibleSet{\policyicy}$ implies that \begin{align*} \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolMuComplex|^2 & \leq \big( \munorm{\1}^2 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n} = \big( 1 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}. \end{align*} The upper bound (i) then follows from the definition of $K^\policy$. The upper bound (ii) follows since the right hand side is the maximum ratio. \end{proof} Note that large values of $K^\policy_*$ (defined in \cref{eqn:InEq}) can arise when there exist $Q$-functions in the set $\PopulationFeasibleSet{\policyicy}$ that have low average Bellman error under the data-generating distribution $\mu$, but relatively large values under $\policyicy$. Of course, the likelihood of such unfavorable choices of $Q$ is reduced when we use a larger test function class, which then reduces the size of $\PopulationFeasibleSet{\policyicy}$. However, we pay a price in choosing a larger test function class, since the choice~\eqref{EqnRadChoice} of the radius $\ensuremath{\rho}$ needed for \cref{thm:NewPolicyEvaluation} depends on its complexity. \subsubsection{Mixture distributions} Now suppose that the dataset consists of a collection of trajectories collected by different protocols. 
More precisely, for each $j = 1, \ldots, \nConstraints$, let $\Dpi{j}$ be a particular protocol for generating a trajectory. Suppose that we generate data by first sampling a random index $J \in [\nConstraints]$ according to a probability distribution $\{\Frequencies{j} \}_{j=1}^{\nConstraints}$, and conditioned on $J = j$, we sample $(\state,\action,\identifier)$ according to $\Dpi{j}$. The resulting data follows a mixture distribution, where we set $o = j$ to tag the protocol used to generate the data. To be clear, for each sample $i = 1, \ldots, n$, we sample $J$ as described, and then draw a single sample $(\state,\action,\identifier) \sim \Dpi{j}$. Following the intuition given in the previous section, it is natural to include test functions that code for the protocol---that is, the binary indicator functions
\begin{align}
f_j (\state,\action,\identifier) & = \begin{cases} 1 & \mbox{if $o=j$} \\ 0 & \mbox{otherwise.} \end{cases}
\end{align}
Each such test function, when included in the weak formulation, enforces the Bellman evaluation equations for the policy $\policyicy \in \PolicyClass$ under consideration along the distribution induced by the corresponding data-generating policy $\Dpi{j}$.
\begin{lemma}[Mixture Policy Concentrability]
\label{lem:MixturePolicyConcentrability}
Suppose that $\mu$ is an $\nConstraints$-component mixture, and that the indicator functions $\{f_j \}_{j=1}^\nConstraints$ are included in the test class.
Then we have the upper bounds \begin{align} \label{EqnMixturePolicyUpper} K^\policy & \stackrel{(i)}{\leq} \frac{1 + \nConstraints \ensuremath{\lambda}}{1 + \ensuremath{\lambda}} \; \frac{\max \limits_{Q\in \PopulationFeasibleSet{\policyicy}} [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2} {\max \limits_{Q \in \PopulationFeasibleSet{\policyicy}} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2} \; \stackrel{(ii)}{\leq} \; \frac{1 + \nConstraints \ensuremath{\lambda}}{1 + \ensuremath{\lambda}} \; \max_{Q\in \PopulationFeasibleSet{\policyicy}} \left \{ \frac{ [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2} { \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2} \right \}. \end{align} \end{lemma} \begin{proof} From the definition of $K^\policy$, it suffices to show that \begin{align*} \max \limits_{Q \in \PopulationFeasibleSet{\policyicy}} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2 \leq \frac{\ensuremath{\rho}}{n} \; \big(1 + \nConstraints \ensuremath{\lambda} \big). \end{align*} A direct calculation yields $\innerprodweighted{\TestFunction{j}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = \ensuremath{\mathbb{E}}ecti{\Indicator \{o = j \} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = \Frequencies{j} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}$. Moreover, since each $\TestFunction{j}$ belongs to the test class by assumption, we have the upper bound $\Big|\Frequencies{j} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}} \Big| \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \; \sqrt{ \munorm{\TestFunction{j}}^2 + \ensuremath{\lambda}}$. 
Squaring each term and summing over the constraints yields
\begin{align*}
\sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2 \leq \frac{\ensuremath{\rho}}{n} \sum_{j=1}^\nConstraints \big( \munorm{\TestFunction{j}}^2 + \ensuremath{\lambda} \big) = \frac{\ensuremath{\rho}}{n} \big(1 + \nConstraints \ensuremath{\lambda} \big),
\end{align*}
where the final equality follows since $\sum_{j=1}^{\nConstraints} \munorm{\TestFunction{j}}^2 = 1$.
\end{proof}
As shown by the upper bound, the off-policy coefficient $K^\policy$ provides a measure of how the squared average Bellman errors along the policies $\{ \Dpi{j} \}_{j=1}^\nConstraints$, weighted by their probabilities $\{ \Frequencies{j} \}_{j=1}^{\nConstraints}$, transfer to the evaluation policy $\policyicy$. Note that since the regularization parameter $\ensuremath{\lambda}$ decays as a function of the sample size---e.g., as $1/n$ in \cref{thm:NewPolicyEvaluation}---the factor $(1 + \nConstraints \TestFunctionReg)/(1 + \TestFunctionReg)$ approaches one as $n$ increases (for a fixed number $\nConstraints$ of mixture components).
\subsubsection{Bellman Rank for off-policy evaluation}
In this section, we show how more refined bounds can be obtained when---in addition to a mixture condition---additional structure is imposed on the problem. In particular, we consider a notion similar to that of Bellman rank~\cite{jiang17contextual}, but suitably adapted\footnote{The original definition essentially takes $\widetilde {\PolicyClass}$ as the set of all greedy policies with respect to $\widetilde {\Qclass{}}$. Since a dataset need not originate from greedy policies, the definition of Bellman rank is adapted in a natural way.} to the off-policy setting.
Given a policy class $\widetilde {\PolicyClass}$ and a predictor class $\widetilde {\Qclass{}}$, we say that the pair has Bellman rank $\dim$ if there exist two maps $\BRleft{} : \widetilde {\PolicyClass} \rightarrow \R^\dim$ and $\BRright{}: \widetilde {\Qclass{}} \rightarrow \R^\dim$ such that
\begin{align}
\label{eqn:BellmanRank}
\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \smlinprod{\BRleft{\policyicy}}{\BRright{Q}}_{\R^d}, \qquad \text{for all } \; \policyicy \in \widetilde {\PolicyClass} \; \text{and} \; Q \in \widetilde {\Qclass{}}.
\end{align}
In words, the average Bellman error of any predictor $Q$ along any given policy $\policyicy$ can be expressed as the Euclidean inner product between two $\dim$-dimensional vectors, one for the policy and one for the predictor. As in the previous section, we assume that the data is generated by a mixture of $\nConstraints$ different distributions (or equivalently policies) $\{\Dpi{j} \}_{j=1}^\nConstraints$. In the off-policy setting, we require that the policy class $\widetilde {\PolicyClass}$ contains all of these policies as well as the target policy---viz. $\{\Dpi{j} \} \cup \{ \policyicy \} \subseteq \widetilde {\PolicyClass}$. Moreover, the predictor class $\widetilde {\Qclass{}}$ should contain the predictor class for the target policy, i.e., $\Qclass{\policyicy} \subseteq \widetilde {\Qclass{}}$. We also assume weak realizability for this discussion. Our result depends on a positive semidefinite matrix determined by the mixture weights $\{\Frequencies{j} \}_{j=1}^\nConstraints$ along with the embeddings $\{\BRleft{\Dpi{j}}\}_{j=1}^\nConstraints$ of the associated policies that generated the data. In particular, we define
\begin{align*}
\BRCovariance{} = \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 \BRleft{\Dpi{j}} \BRleft{\Dpi{j}}^\top.
\end{align*}
Assuming that this matrix is positive definite,\footnote{If not, one can prove a result for a suitably regularized version.} we define the norm $\|u\|_{\BRCovariance{}^{-1}} = \sqrt{u^\top (\BRCovariance{})^{-1} u}$. With this notation, we have the following bound.
\begin{lemma}[Concentrability with Bellman Rank]
\label{lem:ConcentrabilityBellmanRank}
For a mixture data-generation process and under the Bellman rank condition~\eqref{eqn:BellmanRank}, we have the upper bound
\begin{align}
K^\policy & \leq \; \frac{1 + \nConstraints \TestFunctionReg}{1 + \TestFunctionReg} \; \norm{\BRleft{\policyicy}}{\BRCovariance{}^{-1}}^2.
\end{align}
\end{lemma}
\begin{proof}
Our proof exploits the upper bound (ii) from the claim~\eqref{EqnMixturePolicyUpper} in~\cref{lem:MixturePolicyConcentrability}. We first evaluate the ratio appearing in this upper bound. Weak realizability coupled with the Bellman rank condition~\eqref{eqn:BellmanRank} implies that there exists some $\QpiWeak{\policyicy}$ such that
\begin{align*}
0 & = \innerprodweighted{\TestFunction{j}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\mu} = \Frequencies{j} \ensuremath{\mathbb{E}}ecti {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\Dpi{j}} = \Frequencies{j} \inprod{\BRleft{\Dpi{j}}}{\BRright{\QpiWeak{\policyicy}}}, \qquad \mbox{for all $j = 1, \ldots, \nConstraints$, and} \\
0 & = \innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \ensuremath{\mathbb{E}}ecti {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \inprod{\BRleft{\policyicy}}{\BRright{\QpiWeak{\policyicy}}}.
\end{align*}
Therefore, we have the identities $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}} = \inprod{\BRleft{\Dpi{j}}}{ (\BRright{Q} - \BRright{\QpiWeak{\policyicy}})}$ for all $j = 1, \ldots, \nConstraints$, as well as $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \inprod{\BRleft{\policyicy}}{(\BRright{Q} - \BRright{\QpiWeak{\policyicy}})}$. Introducing the shorthand $\BRDelta{Q} = \BRright{Q} - \BRright{\QpiWeak{\policyicy}}$, we can bound the ratio as follows:
\begin{align*}
\sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ (\inprod{\BRleft{\policyicy}}{\BRDelta{Q}})^2 }{ \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 (\inprod{\BRleft{\Dpi{j}}}{ \BRDelta{Q}})^2 } \Big \}& = \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ (\inprod{\BRleft{\policyicy}}{ \BRDelta{Q}})^2 }{\BRDelta{Q}^\top \Big( \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 \BRleft{\Dpi{j}} \BRleft{\Dpi{j}}^\top \Big) \BRDelta{Q} } \Big \} \\
& = \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ ( \smlinprod{\BRleft{\policyicy}}{ \BRCovariance{}^{-\frac{1}{2}} \BRDeltaCoord{Q}})^2 }{ \|\BRDeltaCoord{Q}\|_2^2} \Big \} \qquad \mbox{where $\BRDeltaCoord{Q} = \BRCovariance{}^{\frac{1}{2}}\BRDelta{Q}$}\\
& \leq \norm{\BRleft{\policyicy}}{\BRCovariance{}^{-1}}^2,
\end{align*}
where the final step follows from the Cauchy--Schwarz inequality.
\end{proof}
Thus, when performing off-policy evaluation with a mixture distribution under the Bellman rank condition, the coefficient $K^\policy$ is bounded by the alignment between the target policy $\policyicy$ and the data-generating distribution $\mu$, as measured in the embedded space guaranteed by the Bellman rank condition. The structure of this upper bound is similar to a result that we derive in the sequel for linear approximation under Bellman closure (see~\cref{prop:LinearConcentrabilityBellmanClosure}).
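The final Cauchy--Schwarz step can be checked numerically: for any positive definite matrix $\BRCovariance{}$ and vector $u$, the ratio $(\inprod{\BRleft{\policyicy}}{u})^2/(u^\top \BRCovariance{} u)$ never exceeds $\norm{\BRleft{\policyicy}}{\BRCovariance{}^{-1}}^2$, with equality at $u = \BRCovariance{}^{-1}\BRleft{\policyicy}$. The following Python sketch (all variable names are illustrative, not the paper's notation) verifies this with random mixture weights and embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 6
alphas = rng.uniform(0.1, 1.0, size=m)  # mixture weights alpha_j (illustrative)
nus = rng.normal(size=(m, d))           # embeddings of the data-generating policies
target = rng.normal(size=d)             # embedding of the target policy

# Lambda = sum_j alpha_j^2 nu_j nu_j^T  (positive definite for generic vectors, m > d)
Lam = sum(a**2 * np.outer(x, x) for a, x in zip(alphas, nus))
bound = target @ np.linalg.solve(Lam, target)  # squared Lambda^{-1}-norm of target

# random directions never exceed the bound ...
ratios = [(target @ u) ** 2 / (u @ Lam @ u)
          for u in rng.normal(size=(10_000, d))]
assert max(ratios) <= bound + 1e-9

# ... and u* = Lambda^{-1} target attains it (Cauchy--Schwarz is tight)
u_star = np.linalg.solve(Lam, target)
attained = (target @ u_star) ** 2 / (u_star @ Lam @ u_star)
assert np.isclose(attained, bound)
```

The same computation gives a quick sanity check of \cref{lem:ConcentrabilityBellmanRank} on synthetic embeddings.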
\subsection{Further comments on the prediction error test space}
\label{sec:appBubnov}
A few comments on the bound in \cref{lem:PredictionError}: as in our previous results, the pre-factor $\frac{\norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg }$ serves as a normalization factor. Disregarding this leading term, the second ratio measures how the prediction error $QErr = Q - \QpiWeak{\policyicy}$ along $\mu$ transfers to $\policyicy$, as measured via the operator $\IdentityOperator - \discount\TransitionOperator{\policyicy}$. This interaction is complex, since it includes the \emph{bootstrapping term} $-\discount\TransitionOperator{\policyicy}$. (Notably, such a term is not present for standard prediction or bandit problems, in which case $\discount = 0$.) This term reflects the dynamics intrinsic to reinforcement learning, and plays a key role in proving ``hard'' lower bounds for offline RL (e.g., see the work~\cite{zanette2020exponential}). Observe that the bound in \cref{lem:PredictionError} requires only weak realizability, and thus it always applies. This fact is significant in light of a recent lower bound~\cite{foster2021offline}, showing that without Bellman closure, off-policy learning is challenging even under strong concentrability assumptions (such as bounds on density ratios). \cref{lem:PredictionError} gives a sufficient condition without Bellman closure, but with a different measure that accounts for bootstrapping.
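The effect of the bootstrapping term can be seen in a toy computation. Take a prediction error that is constant across states: the operator $\IdentityOperator - \discount\TransitionOperator{\policyicy}$ maps it to a residual of size $(1-\discount)$, so the transfer ratio is $1/(1-\discount)$ even when $\mu$ and the occupancy of $\policyicy$ coincide. A minimal Python sketch (the uniform kernel and distributions are illustrative assumptions, not objects from the paper):

```python
import numpy as np

gamma, n_states = 0.9, 5
P = np.full((n_states, n_states), 1.0 / n_states)  # transition kernel under pi
mu = np.full(n_states, 1.0 / n_states)             # data distribution
d_pi = np.full(n_states, 1.0 / n_states)           # occupancy of the target policy

delta = np.ones(n_states)               # prediction error Q - Q^pi, constant in s
residual = delta - gamma * (P @ delta)  # (I - gamma P^pi) delta = (1 - gamma) delta

err_pi = np.sqrt(d_pi @ delta**2)       # norm of delta under the target policy
res_mu = np.sqrt(mu @ residual**2)      # norm of the residual under the data
print(err_pi / res_mu)                  # approximately 1 / (1 - gamma) = 10
```

Even with identical distributions, the bootstrapping term inflates the ratio by $1/(1-\discount)$; this cancellation effect is absent in the bandit case $\discount = 0$.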
\\
\noindent If, in fact, (weak) Bellman closure holds, then~\cref{lem:PredictionError} takes the following simplified form:
\begin{lemma}[OPC coefficient under Bellman closure]
\label{lem:PredictionErrorBellmanClosure}
If $\QclassErr{\policyicy} \subseteq \TestFunctionClass{\policyicy}$ and weak Bellman closure holds, then
\begin{align*}
K^\policy \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg } { 1 + \TestFunctionReg } \, \cdot \, \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \} \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm {QErrNC} {\policyicy}^2 } { \norm{QErrNC} {\mu}^2 } \Big \}.
\end{align*}
\end{lemma}
\noindent See \cref{sec:PredictionErrorBellmanClosure} for the proof.
\\
In this case, the concentrability coefficient measures the increase in the discrepancy $Q - Q'$ of the feasible predictors when moving from the dataset distribution $\mu$ to the distribution of the target policy $\policyicy$. In \cref{sec:DomainKnowledge}, we give another bound under weak Bellman closure, and thereby recover a recent result due to Xie et al.~\cite{xie2021bellman}. Finally, in~\cref{sec:Linear}, we provide some applications of this concentrability factor to the linear setting.
\subsection{From Importance Sampling to Bellman Closure}
\label{sec:IS2BC}
Let us show an application of \cref{prop:MultipleRobustness} in an example with just two test spaces. Suppose that we suspect that Bellman closure holds, but rather than committing to such an assumption, we wish to fall back on an importance sampling estimator if Bellman closure does not hold. In order to streamline the presentation of the idea, let us introduce the following setup. Let $\policyicyBeh{}$ be a behavioral policy that generates the dataset, i.e., such that each state-action pair $\psa$ in the dataset is sampled from its discounted state distribution $\DistributionOfPolicy{\policyicyBeh{}}$.
Next, let the identifier $o$ contain the trajectory from $\nu_{\text{start}}$ up to the state-action pair $\psa$ recorded in the dataset. That is, each tuple $\sarsi{}$ in the dataset $\Dataset$ is such that $\psa \sim \DistributionOfPolicy{\policyicyBeh{}}$ and $o$ contains the trajectory up to $\psa$. We now define the test spaces. The first one is denoted by $\TestFunctionISClass{\policyicy}$ and leverages importance sampling. It contains a single test function, defined as the importance sampling estimator
\begin{align}
\label{eqn:ImportanceSampling}
\TestFunctionISClass{\policyicy} = \{ \TestFunction{\policyicy}\}, \qquad \text{where} \; \TestFunction{\policyicy}(\state,\action,\identifier) = \frac{1}{\scaling{\policyicy}} \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) }.
\end{align}
The above product is over the random trajectory contained in the identifier $o$. The normalization factor $\scaling{\policyicy} \in \R$ is connected to the maximum range of the importance sampling estimator, and ensures that $\sup_{(\state,\action,\identifier)} \TestFunction{\policyicy}(\state,\action,\identifier) \leq 1$. The second test space is the prediction error test space $\QclassErr{\policyicy}$ defined in \cref{sec:ErrorTestSpace}. With this choice, let us define three concentrability coefficients: $\ConcentrabilityGenericSub{\policyicy}{1}$ arises from importance sampling, $\ConcentrabilityGenericSub{\policyicy}{2}$ from the prediction error test space when Bellman closure holds, and $\ConcentrabilityGenericSub{\policyicy}{3}$ from the prediction error test space when just weak realizability holds.
They satisfy the bounds
\begin{align*}
\ConcentrabilityGenericSub{\policyicy}{1} \leq \sqrt{ \scaling{\policyicy} \frac{(1 + \TestFunctionReg\scaling{\policyicy})}{1+\TestFunctionReg} }, \qquad \ConcentrabilityGenericSub{\policyicy}{2} \leq \max_{QErr \in \QclassErrCentered{\policyicy}} \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu}^2 } \times \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg }, \qquad \ConcentrabilityGenericSub{\policyicy}{3} \leq c_1 \frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2}.
\end{align*}
\begin{lemma}[From Importance Sampling to Bellman Closure]
\label{lem:IS}
The choice $\TestFunctionClass{\policyicy} = \TestFunctionISClass{\policyicy} \cup \QclassErr{\policyicy} \; \text{for all } \policyicy\in\PolicyClass$ ensures that, with probability at least $1-\FailureProbability$, the oracle inequality~\eqref{EqnOracle} holds with $\ConcentrabilityGeneric{\policyicy} \leq \min\{ \ConcentrabilityGenericSub{\policyicy}{1}, \ConcentrabilityGenericSub{\policyicy}{2}, \ConcentrabilityGenericSub{\policyicy}{3} \}$ if weak Bellman closure holds, and $\ConcentrabilityGeneric{\policyicy} \leq \min\{ \ConcentrabilityGenericSub{\policyicy}{1}, \ConcentrabilityGenericSub{\policyicy}{2} \}$ otherwise.
\end{lemma}
\begin{proof}
Let us calculate the {off-policy cost coefficient} associated with $\TestFunctionISClass{\policyicy}$.
The unbiasedness of the importance sampling estimator gives us the following population constraint (here $\mu = \DistributionOfPolicy{\policyicyBeh{}}$):
\begin{align*}
\abs{ \innerprodweighted{\TestFunction{\policyicy}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} } = \abs{ \ensuremath{\mathbb{E}}ecti{\TestFunction{\policyicy}\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} } = \frac{1}{\scaling{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} } = \frac{1}{\scaling{\policyicy}} \abs{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} } \leq \frac{\ConfidenceInterval{\FailureProbability}}{\sqrt{n}} \sqrt{\norm{\TestFunction{\policyicy}}{2}^2 + \TestFunctionReg}.
\end{align*}
The norm of the test function reads (notice that $\mu$ generates $(\state,\action,\identifier)$ here)
\begin{align*}
\norm{\TestFunction{\policyicy}}{\mu}^2 = \ensuremath{\mathbb{E}}ecti{\TestFunction{\policyicy}^2}{\mu} = \frac{1}{\scaling{\policyicy}^2} \ensuremath{\mathbb{E}}ecti{ \Bigg[ \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) } \Bigg]^2 }{\mu} = \frac{1}{\scaling{\policyicy}^2} \ensuremath{\mathbb{E}}ecti{ \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) } }{\policyicy} \leq \frac{1}{\scaling{\policyicy}}.
\end{align*}
Together with the prior display, we obtain
\begin{align*}
\frac{\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2} {\scaling{\policyicy}^2(\norm{\TestFunction{\policyicy}}{2}^2 + \TestFunctionReg)} \leq \frac{\ensuremath{\rho}}{n}.
\end{align*}
The resulting concentrability coefficient is therefore
\begin{align*}
\ConcentrabilityGeneric{\policyicy} \leq \max_{Q\in \PopulationFeasibleSet{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {1 + \TestFunctionReg} \times \frac{n}{\ensuremath{\rho}} \leq \max_{Q\in \PopulationFeasibleSet{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {1 + \TestFunctionReg} \times \frac{\scaling{\policyicy}^2(\norm{\TestFunction{\policyicy}}{2}^2 + \TestFunctionReg)} {\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2} \leq \scaling{\policyicy} \frac{(1 + \TestFunctionReg\scaling{\policyicy})}{1+\TestFunctionReg}.
\end{align*}
Chaining the above result with \cref{lem:BellmanTestFunctions,lem:PredictionError}, using \cref{prop:MultipleRobustness}, and plugging back into \cref{thm:NewPolicyEvaluation} completes the proof.
\end{proof}
\subsection{Implementation for Off-Policy Predictions}
\label{sec:LinearConfidenceIntervals}
In this section, we describe a computationally efficient way to compute the upper/lower estimates~\eqref{eqn:ConfidenceIntervalEmpirical}. Given a finite set of $n_{\TestFunctionClass{}}$ test functions, it involves solving a quadratic program with $2 n_{\TestFunctionClass{}} + 1$ constraints. Let us first work out a concise description of the constraints defining membership in $\EmpiricalFeasibleSet{\policyicy}$. Introduce the shorthand $\nLinEffSq{\TestFunction{}} \defeq \norm{\TestFunction{}}{n}^2 + \TestFunctionReg$.
We then define the empirical average feature vector $\phiEmpiricalExpecti{\TestFunction{}}$, the empirical average reward $\LinEmpiricalReward{\TestFunction{}}$, and the average next-state feature vector $\phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}$ as
\begin{align*}
\phiEmpiricalExpecti{\TestFunction{}} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\sarsi{} \in \Dataset} \TestFunction{}\psa \phi\psa, \qquad \LinEmpiricalReward{\TestFunction{}} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\sarsi{} \in \Dataset} \TestFunction{}\psa\reward, \\
\phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\sarsi{} \in \Dataset} \TestFunction{}\psa \phi(\successorstate,\policyicy).
\end{align*}
In terms of this notation, each empirical constraint defining $\EmpiricalFeasibleSet{\policyicy}$ can be written in the more compact form
\begin{align*}
\frac{\abs{\innerprodweighted{\TestFunction{}} {\TDError{Q}{\policyicy}{}}{n}} } {\nLinEff{\TestFunction{}}} & = \Big | \smlinprod{\phiEmpiricalExpecti{\TestFunction{}} - \discount \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}}{\CriticPar{}} - \LinEmpiricalReward{\TestFunction{}} \Big| \leq \sqrt{\frac{\ensuremath{\rho}}{n}}.
\end{align*}
Then the set of empirical constraints can be written as a set of constraints linear in the critic parameter $\CriticPar{}$, coupled with the assumed regularity bound on $\CriticPar{}$:
\begin{align}
\label{eqn:LinearConstraints}
\EmpiricalFeasibleSet{\policyicy} = \Big\{ \CriticPar{} \in \R^d \mid \norm{\CriticPar{}}{2} \leq 1, \quad \mbox{and} \quad - \sqrt{\frac{\ensuremath{\rho}}{n}} \leq \smlinprod{\phiEmpiricalExpecti{\TestFunction{}} - \discount \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}}{\CriticPar{}} - \LinEmpiricalReward{\TestFunction{}} \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$} \Big\}.
\end{align}
Thus, the estimates $\VminEmp{\policy}$ (respectively $\VmaxEmp{\policy}$) can be computed by minimizing (respectively maximizing) the linear objective function $w \mapsto \inprod{[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathbb{E}}ecti{\phi(\state,\action)} {\action \sim \policyicy}}{\state \sim \nu_{\text{start}}}]}{\CriticPar{}}$ subject to the $2 n_{\TestFunctionClass{}} + 1$ constraints in equation~\eqref{eqn:LinearConstraints}. Therefore, the estimates can be computed in polynomial time for any test function class whose cardinality grows polynomially in the problem parameters.
\subsection{Discussion of Linear Approximate Optimization}
\label{sec:LinearDiscussion}
Here we discuss the presence of the supremum over policies in the coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{1}$ from equation~\eqref{EqnNewConc}. In particular, it arises because our actor-critic method iteratively approximates the maximum in the max-min estimate~\eqref{eqn:MaxMinEmpirical} using a gradient-based scheme. The ability of a gradient-based method to make progress is related to the estimation accuracy of the gradient, which is given by the $Q$-function estimate of the actor's current policy $\ActorPolicy{t}$; more specifically, the gradient is the $Q$-function parameter $\CriticPar{t}$. In the general case, the estimation error of the gradient $\CriticPar{t}$ depends on the policy under consideration through the matrix $\CovarianceWithBootstrapReg{\ActorPolicy{t}}$, while it is independent of the policy in the special case of Bellman closure (since it then depends only on $\Sigma$). As the actor's policies are random, this leads to the introduction of a $\sup_{\policyicy\in\PolicyClass}$ in the general bound. Notice that the method still competes with the best comparator $\widetilde \policy$ by measuring the errors along the distribution of the comparator (through the operator $\ensuremath{\mathbb{E}}ecti{}{\widetilde \policy}$).
To be clear, the $\sup_{\policyicy\in\PolicyClass}$ may not arise with approximate solution methods that do not rely only on the gradient to make progress (such as second-order methods); we leave this for future research. Reassuringly, when Bellman closure holds, the approximate solution method recovers the standard guarantees established in the paper~\cite{zanette2021provable}.
\section{General Guarantees}
\label{sec:GeneralGuarantees}
\subsection{A deterministic guarantee}
We begin our analysis by stating a deterministic set of sufficient conditions for our estimators to satisfy the guarantees~\eqref{EqnPolEval} and~\eqref{EqnOracle}. This formulation is useful because it reveals the structural conditions that underlie the success of our estimators, and in particular the connection to weak realizability. In Section~\ref{SecHighProb}, we exploit this deterministic result to show that, under a fairly general sampling model, our estimators enjoy these guarantees with high probability. In the previous section, we introduced the population-level set $\PopulationFeasibleSet{\policyicy}$ that arises in the statement of our guarantees. Also central to our analysis is the infinite-data limit of this set. More specifically, for any fixed $(\ensuremath{\rho}, \ensuremath{\lambda})$, if we take the limit $n \rightarrow \infty$, then $\PopulationFeasibleSet{\policyicy}$ reduces to the set of all solutions to the weak formulation~\eqref{eqn:WeakFormulation}---that is,
\begin{align}
\PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) = \{ Q \in \Qclass{\policyicy} \mid \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\pi}{}} {\mu} = 0 \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$} \}.
\end{align}
As before, we omit the dependence on the test function class $\TestFunctionClass{\policyicy}$ when it is clear from context.
By construction, we have the inclusion $\PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) \subseteq \SuperPopulationFeasible$ for any non-negative pair $(\ensuremath{\rho}, \ensuremath{\lambda})$. Our first set of guarantees holds when the random set $\EmpiricalFeasibleSet{\policyicy}$ satisfies the \emph{sandwich relation}
\begin{align}
\label{EqnSandwich}
\PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) \subseteq \EmpiricalFeasibleSet{\policyicy}(\ensuremath{\rho}, \ensuremath{\lambda}; \TestFunctionClass{\policyicy}) \subseteq \PopulationFeasibleSet{\policyicy}(4 \ensuremath{\rho}, \ensuremath{\lambda}; \TestFunctionClass{\policyicy}).
\end{align}
To provide intuition as to why this sandwich condition is natural, observe that it has two important implications:
\begin{enumerate}
\item[(a)] Recalling the definition of weak realizability~\eqref{EqnWeakRealizable}, the weak solution $\QpiWeak{\policyicy}$ belongs to the empirical constraint set $\EmpiricalFeasibleSet{\policyicy}$ for any choice of test function space. This important property follows because $\QpiWeak{\policyicy}$ must satisfy the constraints~\eqref{eqn:WeakFormulation}, and thus it belongs to $\PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}$.
\item[(b)] All solutions in $\EmpiricalFeasibleSet{\policyicy}$ also belong to $\PopulationFeasibleSet{\policyicy}$, which means that they approximately satisfy the weak Bellman equations in a way quantified by $\PopulationFeasibleSet{\policyicy}$.
\end{enumerate}
By leveraging these facts in the appropriate way, we can establish the following guarantee:
\\
\begin{proposition}
\label{prop:Deterministic}
The following two statements hold.
\begin{enumerate}
\item[(a)] \underline{Policy evaluation:} If the set $\EmpiricalFeasibleSet{\policyicy}$ satisfies the sandwich relation~\eqref{EqnSandwich}, then the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ satisfy the width bound~\eqref{EqnWidthBound}. If, in addition, weak Bellman realizability for $\policyicy$ is assumed, then the coverage condition~\eqref{EqnCoverage} holds.
\item[(b)] \underline{Policy optimization:} If the sandwich relation~\eqref{EqnSandwich} and weak Bellman realizability hold for all $\policyicy \in \PolicyClass$, then any max-min~\eqref{eqn:MaxMinEmpirical} optimal policy $\widetilde \pi$ satisfies the oracle inequality~\eqref{EqnOracle}.
\end{enumerate}
\end{proposition}
\noindent See Section~\ref{SecProofPropDeterministic} for the proof of this claim.
\\
In summary, \Cref{prop:Deterministic} ensures that, when weak realizability is in force, the sandwich relation~\eqref{EqnSandwich} is a sufficient condition for both the policy evaluation~\eqref{EqnPolEval} and optimization~\eqref{EqnOracle} guarantees to hold. Accordingly, the next phase of our analysis focuses on deriving sufficient conditions for the sandwich relation to hold with high probability.
\subsection{Some high-probability guarantees}
\label{SecHighProb}
As stated, \Cref{prop:Deterministic} is a ``meta-result'', in that it applies to any choice of set $\EmpiricalFeasibleSet{\policyicy} \equiv \SuperEmpiricalFeasible$ for which the sandwich relation~\eqref{EqnSandwich} holds. In order to obtain a more concrete guarantee, we need to impose assumptions on the way in which the dataset was generated, and to determine concrete choices of $(\ensuremath{\rho}, \ensuremath{\lambda})$ that suffice to ensure that the associated sandwich relation~\eqref{EqnSandwich} holds with high probability. These tasks are the focus of this section.
\subsubsection{A model for data generation}
\label{SecDataGen}
Let us begin by describing a fairly general model for data generation.
Any sample takes the form $\sarsizNp{} \defeq \sarsi{}$, where the five components are defined as follows: \begin{carlist} \item the pair $(\state, \action)$ indexes the current state and action. \item the random variable $\reward$ is a noisy observation of the mean reward. \item the random state $\successorstate$ is the next-state sample, drawn according to the transition $\Pro\psa$. \item the variable $o$ is an optional identifier. \end{carlist} \noindent As one example of the use of an identifier variable, if samples might be generated by one of two possible policies---say $\policyicy_1$ and $\policyicy_2$---the identifier can take values in the set $\{1, 2 \}$ to indicate which policy was used for a particular sample. \\ Overall, we observe a dataset $\Dataset = \{\sarsizNp{i} \}_{i=1}^n$ of $n$ such quintuples. In the simplest possible setting, each triple $(\state,\action,\identifier)$ is drawn i.i.d. from some fixed distribution $\mu$, and the noisy reward $\reward_i$ is an unbiased estimate of the mean reward function $\Reward(\state_i, \action_i)$. In this case, our dataset consists of $n$ i.i.d. quintuples. More generally, we would like to accommodate richer sampling models in which the sample $z_i = (\state_i, \action_i, o_i, \reward_i, \successorstate_i)$ at a given time $i$ is allowed to depend on past samples.
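To make this data-generation model concrete, the following Python sketch (purely illustrative; the tabular MDP, the two behavior policies, and all constants are hypothetical choices of ours, not part of the analysis) draws $n$ quintuples $(s_i, a_i, o_i, r_i, s'_i)$, with the identifier $o_i$ recording which of two behavior policies produced each sample and itself depending on the history, as the adapted model introduced next permits.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n = 5, 3, 1000

# Random tabular MDP: P[s, a] is a distribution over next states,
# and R[s, a] is the mean reward, here bounded in [-1/2, 1/2].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(-0.5, 0.5, size=(n_states, n_actions))

# Two behavior policies pi_1, pi_2 (rows: states, columns: action probs).
policies = [rng.dirichlet(np.ones(n_actions), size=n_states) for _ in range(2)]

dataset = []
s = rng.integers(n_states)
for i in range(n):
    # The identifier o_i may depend on the history (here, on the sign of
    # the previous reward), which an adapted dataset allows.
    o = 0 if (not dataset or dataset[-1][3] > 0) else 1
    a = rng.choice(n_actions, p=policies[o][s])
    # Noisy reward: conditionally unbiased for R[s, a], with |r| <= 1.
    r = R[s, a] + rng.uniform(-0.5, 0.5)
    s_next = rng.choice(n_states, p=P[s, a])
    dataset.append((s, a, o, r, s_next))
    s = s_next  # trajectory data: the next triple depends on the past
```

Replacing the last line with a draw from a fixed restart distribution would recover the i.i.d. special case.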
In order to specify such dependence in a precise way, define the nested sequence of sigma-fields \begin{align} \mathcal F_1 = \emptyset, \quad \mbox{and} \quad \mathcal F_i \defeq \sigma \Big(\{\sarsizNp{j}\}_{j=1}^{i-1} \Big) \qquad \mbox{for $i = 2, \ldots, n$.} \end{align} In terms of this filtration, we make the following definition: \begin{assumption}[Adapted dataset] \label{asm:Dataset} An adapted dataset is a collection $\Dataset = \{ \sarsizNp{i} \}_{i=1}^n$ such that for each $i = 1, \ldots, n$: \begin{carlist} \item There is a conditional distribution $\mu_i$ such that $(\state_i, \action_i, o_i) \sim \mu_i (\cdot \mid \mathcal F_i)$. \item Conditioned on $(\state_i,\action_i,o_i )$, we observe a noisy reward $\reward_i = \Reward(\state_i,\action_i) + \eta_i$ with $\E[\eta_i \mid \mathcal F_{i} ] = 0$, and $|\reward_i| \leq 1$. \item Conditioned on $(\state_i,\action_i,o_i )$, the next state $\successorstate_i$ is generated according to $\Pro(\state_i,\action_i)$. \end{carlist} \end{assumption} Under this assumption, we can define the (possibly) random reference measure \begin{align} \mu(\state, \action, o) & \defeq \frac{1}{n} \sum_{i=1}^n \mu_i \big(\state, \action, o \mid \mathcal F_i \big). \end{align} In words, it corresponds to the distribution induced by first drawing a time index $i \in \{1, \ldots, n \}$ uniformly at random, and then sampling a triple $(\state,\action,\identifier)$ from the conditional distribution $\mu_i \big(\cdot \mid \mathcal F_i \big)$. \subsubsection{A general guarantee} \label{SecGeneral} Recall that there are three function classes that underlie our method: the test function class $\TestFunctionClass{}$, the policy class $\PolicyClass$, and the $Q$-function class $\Qclass{}$. In this section, we state a general guarantee (\cref{thm:NewPolicyEvaluation}) that involves the metric entropies of these sets.
In Section~\ref{SecCorollaries}, we provide corollaries of this guarantee for specific function classes. In more detail, we equip the test function class and the $Q$-function class with the usual sup-norm \begin{align*} \|f - \tilde{f}\|_\infty \defeq \sup_{(\state,\action,\identifier)} |f(\state,\action,\identifier) - \tilde{f} (\state,\action,\identifier)|, \quad \mbox{and} \quad \|Q - \tilde{Q}\|_\infty \defeq \sup_{\psa} |Q \psa - \tilde{Q} \psa|, \end{align*} and the policy class with the sup-TV norm \begin{align*} \|\pi - \pitilde\|_{\infty, 1} & \defeq \sup_{\state} \|\pi(\cdot \mid \state) - \pitilde(\cdot \mid \state) \|_1 = \sup_{\state} \sum_{\action} |\pi(\action \mid \state) - \pitilde(\action \mid \state)|. \end{align*} For a given $\epsilon > 0$, we let $\CoveringNumber{\TestFunctionClass{}}{\epsilon}$, $\CoveringNumber{\Qclass{}}{\epsilon}$, and $\CoveringNumber{\PolicyClass}{\epsilon}$ denote the $\epsilon$-covering numbers of each of these function classes in the given norms. Given these covering numbers, a tolerance parameter $\delta \in (0,1)$ and the shorthand $\ensuremath{\phi}(t) = \max \{t, \sqrt{t} \}$, define the radius function \begin{subequations} \label{EqnTheoremChoice} \begin{align} \label{EqnDefnR} \ensuremath{\rho}(\epsilon, \delta) & \defeq n \Big\{ \int_{\epsilon^2}^\epsilon \ensuremath{\phi} \big( \frac{\log N_u(\TestFunctionClass{})}{n} \big) du + \frac{\log N_\epsilon(\Qclass{})}{n} + \frac{ \log N_\epsilon(\Pi)}{n} + \frac{\log(n/\delta)}{n} \Big\}. \end{align} In our theorem, we implement the estimator using a radius $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$, where $\epsilon > 0$ is any parameter that satisfies the bound \begin{align} \label{EqnRadChoice} \epsilon^2 & \stackrel{(i)}{\leq} \bar{c} \: \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}, \quad \mbox{and} \quad \ensuremath{\lambda} \stackrel{(ii)}{=} 4 \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}.
\end{align} \end{subequations} Here $\bar{c} > 0$ is a suitably chosen but universal constant (whose value is determined in the proof), and we adopt the shorthand $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$ in our statement below. \renewcommand{\descriptionlabel}[1]{ \hspace{\labelsep}\normalfont\underline{#1} } \begin{theorem}[High-probability guarantees] \label{thm:NewPolicyEvaluation} Consider the estimates implemented using a triple $\InputFunctionalSpace$ that is weakly Bellman realizable (\cref{asm:WeakRealizability}), an adapted dataset (\Cref{asm:Dataset}), and the choices~\eqref{EqnTheoremChoice} for $(\epsilon, \ensuremath{\rho}, \ensuremath{\lambda})$. Then with probability at least $1 - \delta$: \begin{description} \item[Policy evaluation:] For any $\pi \in \PolicyClass$, the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ specify a confidence interval satisfying the coverage~\eqref{EqnCoverage} and width bounds~\eqref{EqnWidthBound}. \item[Policy optimization:] Any max-min policy~\eqref{eqn:MaxMinEmpirical} $\widetilde \pi$ satisfies the oracle inequality~\eqref{EqnOracle}. \end{description} \end{theorem} \noindent See \cref{SecProofNewPolicyEvaluation} for the proof of the claim. \\ \paragraph{Choices of $(\ensuremath{\rho}, \epsilon, \ensuremath{\lambda})$:} Let us provide a few comments about the choices of $(\ensuremath{\rho}, \epsilon, \ensuremath{\lambda})$ from equations~\eqref{EqnDefnR} and~\eqref{EqnRadChoice}. The quality of our bounds depends on the size of the constraint set $\PopulationFeasibleSet{\policyicy}$, which is controlled by the constraint level $\sqrt{\frac{\ensuremath{\rho}}{n}}$. Consequently, our results are tightest when $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$ is as small as possible.
Note that $\ensuremath{\rho}$ is a decreasing function of $\epsilon$, so that in order to minimize it, we would like to choose $\epsilon$ as large as possible subject to the constraint~\eqref{EqnRadChoice}(i). Ignoring the entropy integral term in equation~\eqref{EqnDefnR} for the moment---see below for some comments on it---these considerations lead to \begin{align} n \epsilon^2 \asymp \log N_\epsilon(\TestFunctionClass{}) + \log N_\epsilon(\Qclass{}) + \log N_\epsilon(\Pi). \end{align} This type of relation for the choice of $\epsilon$ is well-known in non-parametric statistics (e.g., see Chapters 13--15 in the book~\cite{wainwright2019high} and references therein). Moreover, setting $\ensuremath{\lambda} \asymp \epsilon^2$ as in equation~\eqref{EqnRadChoice}(ii) is often the correct scale of regularization. \paragraph{Key technical steps in proof:} It is worthwhile making a few comments about the structure of the proof so as to clarify the connections to \Cref{prop:Deterministic} along with the weak formulation that underlies our methods. Recall that \Cref{prop:Deterministic} requires the empirical set $\EmpiricalFeasibleSet{\policyicy}$ and the population set $\PopulationFeasibleSet{\policyicy}$ to satisfy the sandwich relation~\eqref{EqnSandwich}. In order to prove that this condition holds with high probability, we need to establish uniform control over the family of random variables \begin{align} \label{EqnKeyFamily} \frac{ \big| \inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu \big|}{\TestNormaRegularizerEmp{}}, \qquad \mbox{as indexed by the triple $(f, Q, \policyicy)$.} \end{align} Note that the differences in the numerator of these variables correspond to moving from the empirical constraints on $Q$-functions that are enforced using the TD errors, to the population constraints that involve the Bellman error function.
Uniform control of the family~\eqref{EqnKeyFamily}, along with uniform control of the differences $\|f\|_n - \|f\|_\mu$ over $f$, allows us to relate the empirical and population sets, since the associated constraints are obtained by shifting from the empirical inner products $\inprod{\cdot}{\cdot}_n$ to the reference inner products $\inprod{\cdot}{\cdot}_\mu$. A simple discretization argument allows us to control the differences uniformly in $(Q, \policyicy)$, as reflected by the metric entropies appearing in our definition~\eqref{EqnTheoremChoice}. Deriving uniform bounds over test functions $f$---due to the self-normalizing nature of the constraints---requires a more delicate argument. More precisely, in order to obtain optimal results for non-parametric problems (see Corollary~\ref{cor:alpha} to follow), we need to localize the empirical process at a scale $\epsilon$, and derive bounds on the localized increments. This portion of the argument leads to the entropy integral---which is localized to the interval $[\epsilon^2, \epsilon]$---in our definition~\eqref{EqnDefnR} of the radius function. \paragraph{Intuition from the on-policy setting:} In order to gain intuition for the statistical meaning of the guarantees in~\cref{thm:NewPolicyEvaluation}, it is worthwhile understanding the implications in a rather special case---namely, the simpler on-policy setting, where the discounted occupation measure induced by the target policy $\policyicy$ coincides with the dataset distribution $\mu$. Let us consider the case in which the identity function $\1$ belongs to the test class $\TestFunctionClass{\policyicy}$.
Under these conditions, we can write \begin{align*} \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolComplexGen{\policyicy}| & \stackrel{(i)}{=} \max_{Q \in \PopulationFeasibleSet{\policyicy}}| \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}| \; \stackrel{(ii)}{\leq} \sqrt{1 + \ensuremath{\lambda}} \; \sqrt{\frac{\ensuremath{\rho}}{n}}, \end{align*} where equality (i) follows from the on-policy assumption, and step (ii) follows from the definition of the set $\PopulationFeasibleSet{\policyicy}$, along with the condition that $\1 \in \TestFunctionClass{\policyicy}$. Consequently, in the on-policy setting, the width bound~\eqref{EqnWidthBound} ensures that \begin{align} \label{EqnOnPolicy} \abs{\VminEmp{\policyicy} - \VmaxEmp{\policyicy}} \leq 2 \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{\frac{ \ensuremath{\rho}}{n}}. \end{align} In this simple case, we see that the confidence interval scales as $\sqrt{\ensuremath{\rho}/n}$, where the quantity $\ensuremath{\rho}$ is related to the metric entropy via equation~\eqref{EqnRadChoice}. In the more general off-policy setting, the bound involves this term, along with additional terms that reflect the cost of off-policy data. We discuss these issues in more detail in \cref{sec:Applications}. Before doing so, however, it is useful to derive some specific corollaries that show the form of $\ensuremath{\rho}$ under particular assumptions on the underlying function classes, which we now do. \subsubsection{Some corollaries} \label{SecCorollaries} Theorem~\ref{thm:NewPolicyEvaluation} applies generally to triples of function classes $\InputFunctionalSpace$, and the statistical error $\sqrt{\frac{\ensuremath{\rho}(\epsilon, \delta)}{n}}$ depends on the metric entropies of these function classes via the definition~\eqref{EqnDefnR} of $\ensuremath{\rho}(\epsilon, \delta)$, and the choices~\eqref{EqnRadChoice}.
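Before specializing to particular classes, the fixed-point character of the choice of $\epsilon$ can be illustrated numerically. The sketch below is our own illustration, not part of the analysis: it assumes a hypothetical parametric scaling $\log N_\epsilon \approx d \log(1/\epsilon)$ and an arbitrary constant $c = 3$, and solves the balance relation $n \epsilon^2 = c \, d \log(1/\epsilon)$ by bisection, mimicking the relation $n \epsilon^2 \asymp \log N_\epsilon$ discussed above.

```python
import math

def critical_epsilon(n, d, c=3.0):
    """Solve n * eps^2 = c * d * log(1/eps) for eps in (0, 1) by bisection.

    Mimics the balance n * eps^2 ~ log N_eps for a hypothetical parametric
    class with metric entropy log N_eps ~ d * log(1/eps).
    """
    def gap(eps):
        return n * eps ** 2 - c * d * math.log(1.0 / eps)

    # gap is strictly increasing on (0, 1), negative near 0 and positive
    # near 1 whenever n >> d, so bisection finds the unique root.
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

eps = critical_epsilon(n=10_000, d=10)
# At the balance point, eps^2 is of order (d/n) log(n/d), the familiar
# parametric rate up to the assumed constant c.
```

For $n = 10{,}000$ and $d = 10$, the resulting $\epsilon^2$ is within a small constant factor of $(d/n) \log(n/d)$, previewing the form of the corollary below.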
As shown in this section, if we make particular assumptions about the metric entropies, then we can derive more concrete guarantees. \paragraph{Parametric and finite VC classes:} One form of metric entropy, typical for a relatively simple function class $\ensuremath{\mathcal{G}}$ (such as those with finite VC dimension), scales as \begin{align} \label{EqnPolyMetric} \log N_\epsilon(\ensuremath{\mathcal{G}}) & \asymp d \; \log \big(\frac{1}{\epsilon} \big), \end{align} for some dimensionality parameter $d$. For instance, bounds of this type hold for linear function classes with $d$ parameters, and for finite VC classes (with $d$ proportional to the VC dimension); see Chapter 5 of the book~\cite{wainwright2019high} for more details. \begin{corollary} \label{cor:poly} Suppose each class of the triple $\InputFunctionalSpace$ has metric entropy that is at most polynomial~\eqref{EqnPolyMetric} of order $d$. Then for a sample size $n \geq 2 d$, the claims of Theorem~\ref{thm:NewPolicyEvaluation} hold with $\epsilon^2 = d/n$ and \begin{align} \label{EqnPolyRchoice} \tilde{\ensuremath{\rho}}\big( \sqrt{\frac{d}{n}}, \delta \big) & \defeq c \; \Big \{ d \; \log \big( \frac{n}{d} \big) + \log \big(\frac{n}{\delta} \big) \Big \}, \end{align} where $c$ is a universal constant. \end{corollary} \begin{proof} Our strategy is to upper bound the radius $\ensuremath{\rho}$ from equation~\eqref{EqnDefnR}, and then show that this upper bound $\tilde{\ensuremath{\rho}}$ satisfies the conditions~\eqref{EqnRadChoice} for the specified choice of $\epsilon^2$. We first control the term $\log N_\epsilon(\TestFunctionClass{})$. We have \begin{align*} \frac{1}{\sqrt{n}} \int_{\epsilon^2}^\epsilon \sqrt{\log N_u(\TestFunctionClass{})} du & \leq \sqrt{\frac{d}{n}} \; \int_{0}^{\epsilon} \sqrt{\log(1/u)} du \; = \; \epsilon \sqrt{\frac{d}{n}} \; \int_0^1 \sqrt{\log(1/(\epsilon t))} dt \; \leq \; c \epsilon \log(1/\epsilon) \sqrt{\frac{d}{n}}.
\end{align*} Similarly, we have \begin{align*} \frac{1}{n} \int_{\epsilon^2}^\epsilon \log N_u(\TestFunctionClass{}) du & \leq \epsilon \frac{d}{n} \Big \{ \int_{\epsilon}^{1} \log(1/t) dt + \log(1/\epsilon) \Big \} \; \leq \; c \, \epsilon \log(1/\epsilon) \frac{d}{n}. \end{align*} Finally, for terms not involving entropy integrals, we have \begin{align*} \max \Big \{ \frac{\log N_\epsilon(\Qclass{})}{n}, \frac{\log N_\epsilon(\Pi)}{n} \Big\} & \leq c \frac{d}{n} \log(1/\epsilon). \end{align*} Setting $\epsilon^2 = d/n$, we see that the required conditions~\eqref{EqnRadChoice} hold with the specified choice~\eqref{EqnPolyRchoice} of $\tilde{\ensuremath{\rho}}$. \end{proof} \paragraph{Richer function classes:} In the previous section, the metric entropy scaled logarithmically in the inverse precision $1/\epsilon$. For other (richer) function classes, the metric entropy exhibits a polynomial scaling in the inverse precision, with an exponent $\alpha > 0$ that controls the complexity. More precisely, we consider classes of the form \begin{align} \label{EqnAlphaMetric} \log N_\epsilon(\ensuremath{\mathcal{G}}) & \asymp \Big(\frac{1}{\epsilon} \Big)^\alpha. \end{align} For example, the class of Lipschitz functions in dimension $d$ has this type of metric entropy with $\alpha = d$. More generally, for Sobolev spaces of functions that have $s$ derivatives (and the $s^{th}$-derivative is Lipschitz), we encounter metric entropies of this type with $\alpha = d/s$. See Chapter 5 of the book~\cite{wainwright2019high} for further background. \begin{corollary} \label{cor:alpha} Suppose that each function class $\InputFunctionalSpace$ has metric entropy with at most \mbox{$\alpha$-scaling~\eqref{EqnAlphaMetric}} for some $\alpha \in (0,2)$. 
Then the claims of Theorem~\ref{thm:NewPolicyEvaluation} hold with $\epsilon^2 = (1/n)^{\frac{2}{2 + \alpha}}$, and \begin{align} \label{EqnAlphaRchoice} \tilde{\ensuremath{\rho}}\big((1/n)^{\frac{1}{2 + \alpha}}, \delta \big) & = c \; \Big \{ n^{\frac{\alpha}{2 + \alpha}} + \log(n/\delta) \Big \}, \end{align} where $c$ is a universal constant. \end{corollary} \noindent We note that for standard regression problems over classes with $\alpha$-metric entropy, the rate $(1/n)^{\frac{2}{2 + \alpha}}$ is well-known to be minimax optimal (e.g., see Chapter 15 in the book~\cite{wainwright2019high}, as well as references therein). \begin{proof} We start by controlling the terms involving entropy integrals. In particular, we have \begin{align*} \frac{1}{\sqrt{n}} \int_{\epsilon^2}^\epsilon \sqrt{\log N_u(\TestFunctionClass{})} du & \leq \frac{c}{\sqrt{n}} u^{1 - \frac{\alpha}{2}} \Big |_{0}^\epsilon \; = \; \frac{c}{\sqrt{n}} \epsilon^{1 - \frac{\alpha}{2}}. \end{align*} Requiring that this term is of order $\epsilon^2$ amounts to enforcing that $\epsilon^{1 + \frac{\alpha}{2}} \asymp (1/\sqrt{n})$, or equivalently that $\epsilon^2 \asymp (1/n)^{\frac{2}{2 + \alpha}}$. If $\alpha \in (0,1]$, then the second entropy integral converges and is of lower order. Otherwise, if $\alpha \in (1,2)$, then we have \begin{align*} \frac{1}{n} \int_{\epsilon^2}^\epsilon \log N_u(\TestFunctionClass{}) du & \leq \frac{c}{n} \int_{\epsilon^2}^\epsilon (1/u)^{\alpha} du \leq \frac{c}{n} (\epsilon^2)^{1 - \alpha}. \end{align*} Hence the requirement that this term is bounded by $\epsilon^2$ is equivalent to $\epsilon^{2 \alpha} \succsim (1/n)$, or $\epsilon^2 \succsim (1/n)^{1/\alpha}$. When $\alpha \in (1,2)$, we have $\frac{1}{\alpha} > \frac{2}{2 + \alpha}$, so that this condition is milder than our first condition.
Finally, we have $ \max \big \{ \frac{\log N_\epsilon(\Qclass{})}{n}, \frac{\log N_\epsilon(\Pi)}{n} \big\} \leq \frac{c}{n} \big(1/\epsilon)^{\alpha}$, and requiring that this term scales as $\epsilon^2$ amounts to requiring that $\epsilon^{2 +\alpha} \asymp (1/n)$, or equivalently $\epsilon^2 \asymp (1/n)^{\frac{2}{2 + \alpha}}$, as before. \end{proof} \section{Main Proofs} \label{sec:AnalysisProofs} This section is devoted to the proofs of our guarantees for general function classes---namely, \cref{prop:Deterministic} that holds in a deterministic manner, and \cref{thm:NewPolicyEvaluation} that gives high probability bounds under a particular sampling model. \subsection{Proof of \cref{prop:Deterministic}} \label{SecProofPropDeterministic} Our proof makes use of an elementary simulation lemma, which we state here: \begin{lemma}[Simulation lemma] \label{lem:Simulation} For any policy $\policyicy$ and function $Q$, we have \begin{align} \Estart{Q-\Qpi{\policyicy}}{,\policyicy} & = \frac{\PolComplexGen{\policyicy}}{1-\discount} \end{align} \end{lemma} \noindent See \cref{SecProofLemSimulation} for the proof of this claim. \subsubsection{Proof of policy evaluation claims} First of all, we have the elementary bounds \begin{align*} \abs{\VminEmp{\policyicy} - \Vpi{\policyicy}} & = \abs{ \min_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S, \policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \leq \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} }, \quad \mbox{and} \\ \abs{\VmaxEmp{\policyicy} - \Vpi{\policyicy}} & = \abs{ \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \leq \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S, \policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} }. 
\end{align*} Consequently, in order to prove the bound~\eqref{EqnWidthBound} it suffices to upper bound the common right-hand side of the two displays above. Since $\EmpiricalFeasibleSet{\policyicy} \subseteq \PopulationFeasibleSet{\policyicy}$, we have the upper bound \begin{align*} \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } & \leq \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \\ & = \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{\ensuremath{\mathbb{E}}ecti{[Q(S ,\policyicy) - \Qpi{\policyicy}(S ,\policyicy)]} {S \sim \nu_{\text{start}}}} \\ & \overset{\text{(i)}}{=} \frac{1}{1-\discount} \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{\PolComplexGen{\policyicy}}, \end{align*} where step (i) follows from \cref{lem:Simulation}. Combined with the earlier displays, this completes the proof of the bound~\eqref{EqnWidthBound}. We now show the inclusion $[ \VminEmp{\policyicy}, \VmaxEmp{\policyicy}] \ni \Vpi{\policyicy}$ when weak realizability holds. By definition of weak realizability, there exists some $\QpiWeak{\policyicy} \in \PopulationFeasibleSetInfty{\policyicy}$.
In conjunction with our sandwich assumption, we are guaranteed that $\QpiWeak{\policyicy} \in \PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}$, and consequently \begin{align*} \VminEmp{\policyicy} & = \min_{Q\in\EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \leq \min_{Q\in\PopulationFeasibleSetInfty{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \leq \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} = \Vpi{\policyicy}, \quad \mbox{and} \\ \VmaxEmp{\policyicy} & = \max_{Q\in\EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \geq \max_{Q\in\PopulationFeasibleSetInfty{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \geq \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} = \Vpi{\policyicy}. \end{align*} \subsubsection{Proof of policy optimization claims} We now prove the oracle inequality~\eqref{EqnOracle} on the value $\Vpi{\widetilde \pi}$ of a policy $\widetilde \pi$ that optimizes the max-min criterion. Fix an arbitrary comparator policy $\policycomp$. 
Starting with the inclusion $[\VminEmp{\widetilde \pi}, \VmaxEmp{\widetilde \pi}] \ni \Vpi{\widetilde \pi}$, we have \begin{align*} \Vpi{\widetilde \pi} \stackrel{(i)}{\geq} \VminEmp{\widetilde \pi} \stackrel{(ii)}{\geq} \VminEmp{\policycomp} \; = \; \Vpi{\policycomp} - \Big(\Vpi{\policycomp} - \VminEmp{\policycomp} \Big) \stackrel{(iii)}{\geq} \Vpi{\policycomp} - \frac{1}{1-\discount} \max_{Q \in \PopulationFeasibleSet{\policycomp}} \abs{\PolComplexGen{\policycomp}}, \end{align*} where step (i) follows from the stated inclusion at the start of the argument; step (ii) follows since $\widetilde \pi$ solves the max-min program; and step (iii) follows from the bound $|\Vpi{\policycomp} - \VminEmp{\policycomp}| \leq \frac{1}{1-\discount} \max_{Q\in\PopulationFeasibleSet{\policycomp}} \abs{\PolComplexGen{\policycomp}}$, as proved in the preceding section. This lower bound holds uniformly for all comparators $\policycomp$, from which the stated claim follows. \subsection{Proof of \cref{lem:Simulation}} \label{SecProofLemSimulation} For each $\timestep = 1, 2, \ldots$, let $\ensuremath{\mathbb{E}}ecti{}{\timestep}$ be the expectation over the state-action pair at timestep $\timestep$ upon starting from $\nu_{\text{start}}$, so that we have $\Estart{Q - \Qpi{\policyicy} }{,\policyicy} = \ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy}]}{0}$ by definition.
We claim that \begin{align} \label{EqnInduction} \ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy}]}{0} & = \sum_{\tau=1}^{\timestep} \discount^{\tau-1} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\tau-1} + \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy} ]}{\timestep} \qquad \mbox{for all $\timestep = 1, 2, \ldots$.} \end{align} For the base case $\timestep = 1$, we have \begin{align} \label{EqnBase} \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy}]}{0} = \ensuremath{\mathbb{E}}ecti{[Q - \BellmanEvaluation{\policyicy}Q]}{0} + \ensuremath{\mathbb{E}}ecti{ [\BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{0} & = \ensuremath{\mathbb{E}}ecti{[ Q - \BellmanEvaluation{\policyicy}Q ]}{0} + \discount \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{1}, \end{align} where we have used the definition of the Bellman evaluation operator to assert that $\ensuremath{\mathbb{E}}ecti{ [\BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{0} = \discount \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{1}$. Since $Q - \BellmanEvaluation{\policyicy}Q = \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}$, the equality~\eqref{EqnBase} is equivalent to the claim~\eqref{EqnInduction} with $\timestep = 1$. Turning to the induction step, we now assume that the claim~\eqref{EqnInduction} holds for some $\timestep \geq 1$, and show that it holds at step $\timestep + 1$. 
By a similar argument, we can write \begin{align*} \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy}]}{\timestep} = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[Q - \BellmanEvaluation{\policyicy}Q + \BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{\timestep} & = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[ Q - \BellmanEvaluation{\policyicy}Q ]}{\timestep} + \discount^{\timestep+1}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{\timestep+1} \\ & = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\timestep} + \discount^{\timestep+1}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{\timestep+1}. \end{align*} By the induction hypothesis, equality~\eqref{EqnInduction} holds for $\timestep$, and substituting the above equality shows that it also holds at time $\timestep + 1$. Since the equivalence~\eqref{EqnInduction} holds for all $\timestep$, we can take the limit as $\timestep \rightarrow \infty$, and doing so yields the claim. \subsection{Proof of \cref{thm:NewPolicyEvaluation}} \label{SecProofNewPolicyEvaluation} The proof relies on establishing that the sandwich relation~\eqref{EqnSandwich} holds with high probability, and then invoking \cref{prop:Deterministic} to conclude. In the statement of the theorem, we require choosing $\epsilon > 0$ to satisfy the upper bound $\epsilon^2 \precsim \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}$, and then provide an upper bound in terms of $\sqrt{\ensuremath{\rho}(\epsilon,\delta)/n}$. It is equivalent to instead choose $\epsilon$ to satisfy the lower bound $\epsilon^2 \succsim \frac{\ensuremath{\rho}(\epsilon,\delta)}{n}$, and then provide upper bounds proportional to $\epsilon$. For the purposes of the proof, the latter formulation turns out to be more convenient, and we pursue it here.
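As a brief aside, \cref{lem:Simulation} lends itself to a direct numerical check. The sketch below is our own illustration: we read the right-hand side of the lemma as the Bellman error of $Q$ averaged under the normalized discounted state-action occupancy measure induced by $\policyicy$ from the start distribution, and verify the identity in a small random tabular MDP (all constants are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 4, 3, 0.9

P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a]: next-state law
R = rng.uniform(-1, 1, size=(nS, nA))           # mean rewards
pi = rng.dirichlet(np.ones(nA), size=nS)        # policy pi[s]: action law
nu = rng.dirichlet(np.ones(nS))                 # start distribution

# State-action transition matrix under pi:
# M[(s, a), (s', a')] = P[s, a, s'] * pi[s', a'].
M = np.einsum('sat,tb->satb', P, pi).reshape(nS * nA, nS * nA)
r = R.reshape(-1)

q_pi = np.linalg.solve(np.eye(nS * nA) - gamma * M, r)  # true Q^pi
Q = rng.uniform(-1, 1, size=nS * nA)                    # an arbitrary Q
bellman_err = Q - (r + gamma * M @ Q)                   # Q - T^pi(Q)

mu0 = (nu[:, None] * pi).reshape(-1)                    # law of (S_0, A_0)
# Normalized discounted occupancy: d = (1 - gamma) * sum_t gamma^t * law_t.
d_occ = (1 - gamma) * np.linalg.solve(np.eye(nS * nA) - gamma * M.T, mu0)

lhs = mu0 @ (Q - q_pi)                   # E_{nu, pi}[Q - Q^pi]
rhs = d_occ @ bellman_err / (1 - gamma)  # occupancy-weighted Bellman error
```

The two sides agree to machine precision, mirroring the telescoping argument used in the proof above.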
\\ To streamline notation, let us introduce the shorthand $\inprod{f}{\Diff{\policyicy}(Q)} \defeq \inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu$. For each pair $(Q, \policyicy)$, we then define the random variable \begin{align*} \ZvarShort \defeq \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\Diff{\policyicy}(Q)}{}\big|}{\TestNormaRegularizerEmp{}}. \end{align*} Central to our proof of the theorem is a uniform bound on this random variable, one that holds for all pairs $(Q, \policyicy)$. In particular, our strategy is to exhibit some $\epsilon > 0$ for which, upon setting $\ensuremath{\lambda} = 4 \epsilon^2$, we have the guarantees \begin{subequations} \begin{align} \label{EqnSandwichZvar} \frac{1}{4} \leq \frac{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}{ \sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \leq 2 \qquad & \mbox{uniformly for all $f \in \TestFunctionClass{}$, and} \\ \label{EqnUniformZvar} \ZvarShort \leq \epsilon \quad & \mbox{uniformly for all $(Q, \policyicy)$,} \end{align} \end{subequations} both with probability at least $1 - \delta$. In particular, consistent with the theorem statement, we show that this claim holds if we choose $\epsilon > 0$ to satisfy the inequality \begin{align} \label{EqnOriginalChoice} \epsilon^2 & \geq \bar{c} \frac{\ensuremath{\rho}(\epsilon, \delta)}{n} \; \end{align} where $\bar{c} > 0$ is a sufficiently large (but universal) constant. Supposing that the bounds~\eqref{EqnSandwichZvar} and~\eqref{EqnUniformZvar} hold, let us now establish the set inclusions claimed in the theorem. 
\paragraph{Inclusion $\PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}(\epsilon)$:} Define the random variable $\ensuremath{M}_n(Q, \policyicy) \defeq \sup \limits_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ |\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}^\policyicy(Q)}{\mu}|}{\TestNormaRegularizerEmp{}}$, and observe that $Q \in \PopulationFeasibleSetInfty{\policyicy}$ implies that $\ensuremath{M}_n(Q, \policyicy) = 0$. With this definition, we have \begin{align*} \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} & \stackrel{(i)}{\leq} \ensuremath{M}_n(Q, \policyicy) + \ZvarShort \; \stackrel{(ii)}{\leq} \epsilon, \end{align*} where step (i) follows from the triangle inequality; and step (ii) follows since $\ensuremath{M}_n(Q, \policyicy) = 0$, and $\ZvarShort \leq \epsilon$ from the bound~\eqref{EqnUniformZvar}. Consequently, any $Q \in \PopulationFeasibleSetInfty{\policyicy}$ also belongs to $\EmpiricalFeasibleSet{\policyicy}(\epsilon)$, as claimed.
\paragraph{Inclusion $\EmpiricalFeasibleSet{\policyicy}(\epsilon) \subseteq \PopulationFeasibleSet{\policyicy}(4 \epsilon)$:} By the definition of $\PopulationFeasibleSet{\policyicy}(4 \epsilon)$, we need to show that \begin{align*} \ensuremath{\bar{\ensuremath{M}}}(Q, \policyicy) \defeq \sup \limits_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}^\policyicy(Q)}{\mu}\big|}{\TestNormaRegularizerPop{}} \leq 4 \epsilon \qquad \mbox{for any $Q \in \EmpiricalFeasibleSet{\policyicy}(\epsilon)$.} \end{align*} Now we have \begin{align*} \ensuremath{\bar{\ensuremath{M}}}(Q, \policyicy) \stackrel{(i)}{\leq} 2 \ensuremath{M}_n(Q, \policyicy) \stackrel{(ii)}{\leq} 2 \left \{ \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} + \ZvarShort \right \} \; \stackrel{(iii)}{\leq} 2 \big \{ \epsilon + \epsilon \big \} \; = \; 4 \epsilon, \end{align*} where step (i) follows from the sandwich relation~\eqref{EqnSandwichZvar}; step (ii) follows from the triangle inequality and the definition of $\ZvarShort$; and step (iii) follows since $\ZvarShort \leq \epsilon$ from the bound~\eqref{EqnUniformZvar}, and \begin{align*} \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} & \leq \epsilon, \qquad \mbox{using the inclusion $Q \in \EmpiricalFeasibleSet{\policyicy}(\epsilon)$.} \end{align*} Consequently, the remainder of our proof is devoted to establishing the claims~\eqref{EqnSandwichZvar} and~\eqref{EqnUniformZvar}. In doing so, we make repeated use of some Bernstein bounds, stated in terms of the shorthand $\ensuremath{\Psi_\numobs}(\delta) = \frac{\log(n/\delta)}{n}$.
\begin{lemma} \label{LemBernsteinBound} There is a universal constant $c$ such that each of the following statements holds with probability at least $1 - \delta$. For any $f$, we have \begin{subequations} \begin{align} \label{EqnBernsteinFsquare} \Big| \|f\|_n^2 - \|f\|_\mu^2 \Big| & \leq c \; \Big \{ \|f\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \ensuremath{\Psi_\numobs}(\delta) \Big \}, \end{align} and for any $(Q, \policyicy)$ and any function $f$, we have \begin{align} \label{EqnBernsteinBound} \big|\inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu \big| & \leq c \; \Big \{ \|f\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \|f\|_\infty \ensuremath{\Psi_\numobs}(\delta) \Big \}. \end{align} \end{subequations} \end{lemma} \noindent These bounds follow by identifying a martingale difference sequence, and applying a form of Bernstein's inequality tailored to the martingale setting. See Section~\ref{SecProofLemBernsteinBound} for the details. \subsection{Proof of the sandwich relation~\eqref{EqnSandwichZvar}} We claim that (modulo the choice of constants) it suffices to show that \begin{align} \label{EqnCleanSandwich} \Big| \|f\|_n - \|f\|_\mu \Big| & \leq \epsilon \qquad \mbox{uniformly for all $f \in \TestFunctionClass{}$.} \end{align} Indeed, when this bound holds, we have \begin{align*} \|f\|_n + 2 \epsilon \leq \|f\|_\mu + 3 \epsilon \leq \frac{3}{2} \{ \|f\|_\mu + 2\epsilon \}, \quad \mbox{and} \quad \|f\|_n + 2 \epsilon \geq \|f\|_\mu + \epsilon \geq \frac{1}{2} \big \{ \|f\|_\mu + 2 \epsilon \big \}, \end{align*} so that $\frac{\|f\|_n + 2 \epsilon}{\|f \|_\mu + 2 \epsilon} \in \big[ \frac{1}{2}, \frac{3}{2} \big]$. To relate this statement to the claimed sandwich, observe the inclusion $\frac{\|f\| + 2 \epsilon}{\sqrt{\|f\|^2 + 4 \epsilon^2}} \in [1, \sqrt{2}]$, where $\|f\|$ can be either $\|f\|_n$ or $\|f\|_\mu$.
Combining this fact with our previous bound, we see that $\frac{\sqrt{\|f\|_n^2 + 4 \epsilon^2}}{\sqrt{\|f\|_\mu^2 + 4 \epsilon^2}} \in \Big[ \frac{1}{\sqrt{2}} \frac{1}{2}, \frac{3 \sqrt{2}}{2} \Big] \subset \big[\frac{1}{4}, 3 \big]$, as claimed. \\ The remainder of our analysis is focused on proving the bound~\eqref{EqnCleanSandwich}. Defining the random variable $Y_n(\ensuremath{f}) = \big| \|f\|_n - \|f\|_\mu \big|$, we need to establish a high probability bound on $\sup_{f \in \TestFunctionClass{}} Y_n(\ensuremath{f})$. Let $\{f^1, \ldots, f^N \}$ be an $\epsilon$-cover of $\TestFunctionClass{}$ in the sup-norm. For any $f \in \TestFunctionClass{}$, we can find some $f^j$ such that $\|f - f^j\|_\infty \leq \epsilon$, whence \begin{align*} Y_n(f) \leq Y_n(f^j) + \big | Y_n(f^j) - Y_n(f) \big| & \stackrel{(i)}{\leq} Y_n(f^j) + \big| \|f^j\|_n - \|f\|_n \big| + \big| \|f^j\|_\mu - \|f\|_\mu \big| \\ & \stackrel{(ii)}{\leq} Y_n(f^j) + \|f^j - f\|_n + \|f^j - f\|_\mu \\ & \stackrel{(iii)}{\leq} Y_n(f^j) + 2 \epsilon, \end{align*} where steps (i) and (ii) follow from the triangle inequality; and step (iii) follows from the inequality $\max \{ \|f^j - f\|_n, \|f^j - f\|_\mu \} \leq \|f^j - f\|_\infty \leq \epsilon$. Thus, we have reduced the problem to bounding a finite maximum. Note that if $\max \{ \|f^j\|_n, \|f^j\|_\mu \} \leq \epsilon$, then we have $Y_n(f^j) \leq 2 \epsilon$ by the triangle inequality. Otherwise, we may assume that $\|f^j\|_n + \|f^j\|_\mu \geq \epsilon$.
With probability at least $1 - \delta$, we have \begin{align*} \Big| \|f^j\|_n - \|f^j\|_\mu \Big| = \frac{ \Big| \|f^j\|_n^2 - \|f^j\|_\mu^2 \Big|}{\|f^j\|_n + \|f^j\|_\mu} & \stackrel{(i)}{\leq} \frac{ c \big \{ \|f^j\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \ensuremath{\Psi_\numobs}(\delta) \big \}}{\|f^j\|_\mu + \|f^j\|_n} \\ & \stackrel{(ii)}{\leq} c \Big \{ \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \frac{\ensuremath{\Psi_\numobs}(\delta)}{\epsilon} \Big \}, \end{align*} where step (i) follows from the Bernstein bound~\eqref{EqnBernsteinFsquare} from Lemma~\ref{LemBernsteinBound}, and step (ii) uses the fact that $\|f^j\|_n + \|f^j\|_\mu \geq \epsilon$. Taking a union bound over all $N$ elements in the cover and replacing $\delta$ with $\delta/N$, we have \begin{align*} \max_{j \in [N]} Y_n(f^j) & \leq c \Big \{ \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \frac{\ensuremath{\Psi_\numobs}(\delta/N)}{\epsilon} \Big \} \end{align*} with probability at least $1 - \delta$. Recalling that $N = N_\epsilon(\TestFunctionClass{})$, our choice~\eqref{EqnOriginalChoice} of $\epsilon$ ensures that $\sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} \leq c \; \epsilon$ for some universal constant $c$. Putting together the pieces (and increasing the constant $\bar{c}$ in the choice~\eqref{EqnOriginalChoice} of $\epsilon$ as needed) yields the claim. \subsection{Proof of the uniform upper bound~\eqref{EqnUniformZvar}} \label{SecUniProof} We need to establish an upper bound on $\ZvarShort$ that holds uniformly for all $(Q, \policyicy)$. Our first step is to prove a high probability bound for a fixed pair. We then apply a standard discretization argument to make it uniform in the pair. Note that we can write $\ZvarShort = \sup_{f \in \TestFunctionClass{}} \frac{V_n(f)}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}$, where we have defined $V_n(f) \defeq |\inprod{f}{\Diff{\policyicy}(Q)}|$.
Our first lemma provides a uniform bound on the latter random variables: \begin{lemma} \label{LemVbound} Suppose that $\epsilon^2 \geq \ensuremath{\Psi_\numobs} \big(\delta/N_\epsilon(\TestFunctionClass{}) \big)$. Then we have \begin{align} \label{EqnVbound} V_n(f) & \leq c \big \{ \|f\|_\mu \epsilon + \epsilon^2 \big \} \qquad \mbox{for all $f \in \TestFunctionClass{}$} \end{align} with probability at least $1 - \delta$. \end{lemma} \noindent See \cref{SecProofLemVbound} for the proof of this claim. \\ We claim that the bound~\eqref{EqnVbound} implies that, for any fixed pair $(Q, \policyicy)$, we have \begin{align*} Z_n(Q, \policyicy) \leq c' \epsilon \qquad \mbox{with probability at least $1 - \delta$.} \end{align*} Indeed, when Lemma~\ref{LemVbound} holds, for any $f \in \TestFunctionClass{}$, we can write \begin{align*} \frac{V_n(f)}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} = \frac{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \; \frac{V_n(f)}{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \stackrel{(i)}{\leq} \; 3 \; \frac{c \big \{ \|f\|_\mu \epsilon + \epsilon^2 \big \}}{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \; \stackrel{(ii)}{\leq} \; c' \epsilon, \end{align*} where step (i) uses the sandwich relation~\eqref{EqnSandwichZvar}, along with the bound~\eqref{EqnVbound}; and step (ii) follows given the choice $\ensuremath{\lambda} = 4 \epsilon^2$. We have thus proved that for any fixed $(Q, \policyicy)$ and $\epsilon^2 \geq \ensuremath{\Psi_\numobs} \big(\delta/N_\epsilon(\TestFunctionClass{}) \big)$, we have \begin{align} \label{EqnFixZvarBound} \ZvarShort & \leq c' \epsilon \qquad \mbox{with probability at least $1 - \delta$.} \end{align} Our next step is to upgrade this bound to one that is uniform over all pairs $(Q, \policyicy)$. We do so via a discretization argument: let $\{Q^j\}_{j=1}^J$ and $\{\pi^k\}_{k=1}^K$ be $\epsilon$-coverings of $\Qclass{}$ and $\PolicyClass$, respectively.
\begin{lemma} \label{LemDiscretization} We have the upper bound \begin{align} \sup_{Q, \policyicy} \ZvarShort & \leq \max_{(j,k) \in [J] \times [K]} \Zvar{Q^j}{\policyicy^k} + 24 \epsilon. \end{align} \end{lemma} \noindent See Section~\ref{SecProofLemDiscretization} for the proof of this claim. If we replace $\delta$ with $\delta/(J K)$, then we are guaranteed that the bound~\eqref{EqnFixZvarBound} holds uniformly over the family $\{ Q^j \}_{j=1}^J \times \{ \policyicy^k \}_{k=1}^K$. Recalling that $J = N_\epsilon(\Qclass{})$ and $K = N_\epsilon(\Pi)$, we conclude that for any $\epsilon$ satisfying the inequality~\eqref{EqnOriginalChoice}, we have $\sup_{Q, \policyicy} \ZvarShort \leq \ensuremath{\tilde{c}} \epsilon$ with probability at least $1 - \delta$. (Note that by suitably scaling up $\epsilon$ via the choice of constant $\bar{c}$ in the bound~\eqref{EqnOriginalChoice}, we can arrange for $\ensuremath{\tilde{c}} = 1$, as in the stated claim.) \subsection{Proofs of supporting lemmas} In this section, we collect the proofs of~\cref{LemVbound,LemDiscretization}, which were stated and used in~\Cref{SecUniProof}. \subsubsection{Proof of~\cref{LemVbound}} \label{SecProofLemVbound} We first localize the problem to the class $\ensuremath{\mathcal{F}}(\epsilon) = \{f \in \TestFunctionClass{} \mid \|f\|_\mu \leq \epsilon \}$. In particular, if there exists some $\tilde{f} \in \TestFunctionClass{}$ that violates~\eqref{EqnVbound}, then the rescaled function $f = \epsilon \tilde{f}/\|\tilde{f}\|_\mu$ belongs to $\ensuremath{\mathcal{F}}(\epsilon)$, and satisfies $V_n(f) \geq c \epsilon^2$. Consequently, it suffices to show that $V_n(f) \leq c \epsilon^2$ for all $f \in \ensuremath{\mathcal{F}}(\epsilon)$. Choose an $\epsilon$-cover of $\TestFunctionClass{}$ in the sup-norm with $N = N_\epsilon(\TestFunctionClass{})$ elements. Using this cover, for any $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we can find some $f^j$ such that $\|f - f^j\|_\infty \leq \epsilon$.
Thus, for any $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we can write \begin{align} \label{EqnOriginalInequality} V_n(f) \leq V_n(f^j) + V_n(f - f^j) \; \; \leq \underbrace{V_n(f^j)}_{T_1} + \underbrace{\sup_{g \in \ensuremath{\mathcal{G}}(\epsilon)} V_n(g)}_{T_2}, \end{align} where $\ensuremath{\mathcal{G}}(\epsilon) \defeq \{ f_1 - f_2 \mid f_1, f_2 \in \TestFunctionClass{}, \|f_1 - f_2 \|_\infty \leq \epsilon \}$. We bound each of these two terms in turn. In particular, we show that each of $T_1$ and $T_2$ is upper bounded by $c \epsilon^2$ with high probability. \paragraph{Bounding $T_1$:} From the Bernstein bound~\eqref{EqnBernsteinBound}, we have \begin{align*} V_n(f^k) & \leq c \big \{ \|f^k\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \|f^k\|_\infty \ensuremath{\Psi_\numobs}(\delta/N) \big \} \qquad \mbox{for all $k \in [N]$} \end{align*} with probability at least $1 - \delta$. Now for the particular $f^j$ chosen to approximate $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we have \begin{align*} \|f^j\|_\mu & \leq \|f^j - f\|_\mu + \|f\|_\mu \leq 2 \epsilon, \end{align*} where the inequality follows since $\|f^j - f\|_\mu \leq \|f^j - f\|_\infty \leq \epsilon$, and $\|f\|_\mu \leq \epsilon$. Consequently, we conclude that \begin{align*} T_1 & \leq c \Big \{ 2 \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \ensuremath{\Psi_\numobs}(\delta/N) \Big \} \; \leq \; c' \epsilon^2 \qquad \mbox{with probability at least $1 - \delta$,} \end{align*} where the final inequality follows from our choice of $\epsilon$. \paragraph{Bounding $T_2$:} Define $\ensuremath{\mathcal{G}} \defeq \{f_1 - f_2 \mid f_1, f_2 \in \TestFunctionClass{} \}$. We need to bound a supremum of the process $\{ V_n(g), g \in \ensuremath{\mathcal{G}} \}$ over the subset $\ensuremath{\mathcal{G}}(\epsilon)$.
From the Bernstein bound~\eqref{EqnBernsteinBound}, the increments $V_n(g_1) - V_n(g_2)$ of this process are sub-Gaussian with parameter $\|g_1 - g_2\|_\mu \leq \|g_1 - g_2\|_\infty$, and sub-exponential with parameter $\|g_1 - g_2\|_\infty$. Therefore, we can apply a chaining argument that uses the metric entropy $\log N_t(\ensuremath{\mathcal{G}})$ in the supremum norm. Moreover, we can terminate the chaining at $2 \epsilon$, because we are taking the supremum over the subset $\ensuremath{\mathcal{G}}(\epsilon)$, and it has sup-norm diameter at most $2 \epsilon$. In addition, the lower limit of the chaining integral can be taken to be $2 \epsilon^2$, since our goal is to prove an upper bound of this order. Then, by using high probability bounds for the suprema of empirical processes (e.g., Theorem 5.36 in the book~\cite{wainwright2019high}), we have \begin{align*} T_2 \leq c_1 \; \int_{2 \epsilon^2}^{2 \epsilon} \ensuremath{\phi} \big( \frac{\log N_t(\ensuremath{\mathcal{G}})}{n} \big) dt + c_2 \big \{ \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \epsilon \ensuremath{\Psi_\numobs}(\delta) \big\} + 2 \epsilon^2 \end{align*} with probability at least $1 - \delta$. (Here the reader should recall our shorthand $\ensuremath{\phi}(s) = \max \{s, \sqrt{s} \}$.) Since $\ensuremath{\mathcal{G}}$ consists of differences from $\TestFunctionClass{}$, we have the upper bound $\log N_t(\ensuremath{\mathcal{G}}) \leq 2 \log N_{t/2}(\TestFunctionClass{})$, and hence (after making the change of variable $u = t/2$ in the integral) \begin{align*} T_2 \leq c'_1 \int_{\epsilon^2}^{ \epsilon} \ensuremath{\phi} \big( \frac{\log N_u(\TestFunctionClass{})}{n} \big) du + c_2 \big \{ \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \epsilon \ensuremath{\Psi_\numobs}(\delta) \big\} \; \leq \; \ensuremath{\tilde{c}} \epsilon^2, \end{align*} where the last inequality follows from our choice of $\epsilon$.
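For completeness, the entropy bound $\log N_t(\ensuremath{\mathcal{G}}) \leq 2 \log N_{t/2}(\TestFunctionClass{})$ used above follows from a standard covering argument:

```latex
% If {f^1, ..., f^M} is a (t/2)-cover of the test class in sup-norm, then the
% M^2 differences {f^j - f^k} form a t-cover of the difference class G, since
\begin{align*}
\|(f_1 - f_2) - (f^j - f^k)\|_\infty
  \;\leq\; \|f_1 - f^j\|_\infty + \|f_2 - f^k\|_\infty
  \;\leq\; \tfrac{t}{2} + \tfrac{t}{2} \;=\; t,
\end{align*}
```

so that $N_t(\ensuremath{\mathcal{G}}) \leq \big( N_{t/2}(\TestFunctionClass{}) \big)^2$, and hence $\log N_t(\ensuremath{\mathcal{G}}) \leq 2 \log N_{t/2}(\TestFunctionClass{})$.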
\subsubsection{Proof of~\cref{LemDiscretization}} \label{SecProofLemDiscretization} By our choice of the $\epsilon$-covers, for any $(Q, \policyicy)$, there is a pair $(Q^j, \policyicy^k)$ such that \begin{align*} \|Q^j - Q\|_\infty \leq \epsilon, \quad \mbox{and} \quad \|\policyicy^k - \policyicy\|_{\infty,1} = \sup_{\state} \|\policyicy^k(\cdot \mid \state) - \policyicy(\cdot \mid \state) \|_1 \leq \epsilon. \end{align*} Using this pair, an application of the triangle inequality yields \begin{align*} \big| \Zvar{Q}{\policyicy} - \Zvar{Q^j}{\policyicy^k} \big| & \leq \underbrace{\big| \Zvar{Q}{\policyicy} - \Zvar{Q}{\policyicy^k} \big|}_{T_1} + \underbrace{\big| \Zvar{Q}{\policyicy^k} - \Zvar{Q^j}{\policyicy^k} \big|}_{T_2}. \end{align*} We bound each of these terms in turn, in particular proving that $T_1 + T_2 \leq 24 \epsilon$. Putting together the pieces yields the bound stated in the lemma. \paragraph{Bounding $T_2$:} From the definition of $\ensuremath{Z_\numobs}$, we have \begin{align*} T_2 = \big| \Zvar{Q}{\policyicy^k} - \Zvar{Q^j}{\policyicy^k} \big| & \leq \sup_{f \in \TestFunctionClass{}} \frac{\big| \smlinprod{f}{\Diff{\policyicy^k}(Q - Q^j)}|}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}. \end{align*} Now another application of the triangle inequality yields \begin{align*} |\smlinprod{f}{\Diff{\policyicy^k}(Q - Q^j)}| & \leq |\smlinprod{f}{\TDError{(Q- Q^j)}{\policyicy^k}{}}_n| + |\smlinprod{f}{\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)}_\mu| \\ & \stackrel{(i)}{\leq} \|f\|_n \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_n + \|f\|_\mu \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\mu \\ & \stackrel{(ii)}{\leq} \max \{ \|f\|_n, \|f\|_\mu \} \; \Big \{ \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_\infty + \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\infty \Big \} \end{align*} where step (i) follows from the Cauchy--Schwarz inequality, and step (ii) uses the bounds $\|g\|_n \leq \|g\|_\infty$ and $\|g\|_\mu \leq \|g\|_\infty$.
Now in terms of the shorthand \mbox{$\Delta \defeq Q - Q^j$,} we have \begin{subequations} \begin{align} \label{EqnCoffeeBell} \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\infty & = \sup_{\psa} \Big| \Delta \psa - \discount \E_{\successorstate\sim\Pro\psa} \big[ \Delta(\successorstate,\policyicy^k) \big] \Big| \leq 2 \|\Delta\|_\infty \leq 2 \epsilon. \end{align} An entirely analogous argument yields \begin{align} \label{EqnCoffeeTD} \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_\infty \leq 2 \epsilon. \end{align} \end{subequations} Conditioned on the sandwich relation~\eqref{EqnSandwichZvar}, we have $\sup_{f \in \TestFunctionClass{}} \frac{\max \{ \|f\|_n, \|f\|_\mu \}}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \leq 4$. Combining this bound with inequalities~\eqref{EqnCoffeeBell} and~\eqref{EqnCoffeeTD}, we have shown that $T_2 \leq 4 \big \{2 \epsilon + 2 \epsilon \big\} = 16 \epsilon$. \paragraph{Bounding $T_1$:} In this case, a similar argument yields \begin{align*} |\smlinprod{f}{(\Diff{\policyicy} - \Diff{\policyicy^k})(Q)}| & \leq \max \{ \|f\|_n, \|f\|_\mu \} \; \Big \{ \|(\delta^\policyicy - \delta^{\policyicy^k})(Q)\|_n + \|(\ensuremath{\mathcal{B}}^\policyicy - \ensuremath{\mathcal{B}}^{\policyicy^k})(Q)\|_\mu \Big\}. \end{align*} Now we have \begin{align*} \|(\delta^\policyicy - \delta^{\policyicy^k})(Q)\|_n & \leq \max_{i = 1, \ldots, n} \Big| \sum_{\action'} \big(\policyicy(\action' \mid \state_i) - \policyicy^k(\action' \mid \state_i) \big) Q(\state^+_i, \action') \Big| \\ & \leq \max_{\state} \sum_{\action'} |\policyicy(\action' \mid \state) - \policyicy^k(\action' \mid \state)| \; \|Q\|_\infty \\ & \leq \epsilon. \end{align*} A similar argument yields that $\|(\ensuremath{\mathcal{B}}^\policyicy - \ensuremath{\mathcal{B}}^{\policyicy^k})(Q)\|_\mu \leq \epsilon$, and arguing as before, we conclude that $T_1 \leq 4 \{\epsilon + \epsilon \} = 8 \epsilon$.
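For completeness, the factor of $4$ used in bounding both $T_1$ and $T_2$ can be made explicit. Conditioned on the sandwich relation~\eqref{EqnSandwichZvar}, and with the choice $\ensuremath{\lambda} = 4 \epsilon^2$, we have

```latex
\begin{align*}
\frac{\|f\|_n}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \;\leq\; 1,
\qquad \mbox{and} \qquad
\frac{\|f\|_\mu}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}
  \;\leq\; \frac{\sqrt{\|f\|_\mu^2 + 4 \epsilon^2}}{\sqrt{\|f\|_n^2 + 4 \epsilon^2}}
  \;\leq\; 4,
\end{align*}
```

where the final step uses the lower endpoint $\frac{1}{4}$ of the sandwich interval. Combining the two bounds yields $\sup_{f \in \TestFunctionClass{}} \frac{\max \{ \|f\|_n, \|f\|_\mu \}}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \leq 4$.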
\subsubsection{Proof of Lemma~\ref{LemBernsteinBound}} \label{SecProofLemBernsteinBound} Our proof of this claim makes use of the following known Bernstein bound for martingale differences (cf. Theorem 1 in the paper~\cite{beygelzimer2011contextual}). Recall the shorthand notation $\ensuremath{\Psi_\numobs}(\delta) = \frac{\log(n/\delta)}{n}$. \begin{lemma}[Bernstein's Inequality for Martingales] \label{lem:Bernstein} Let $\{X_t\}_{t \geq 1}$ be a martingale difference sequence with respect to the filtration $\{ \mathcal F_t \}_{t \geq 1}$. Suppose that $|X_t| \leq 1$ almost surely, and let $\E_t$ denote expectation conditional on $\mathcal F_t$. Then for all $\delta \in (0,1)$, we have \begin{align} \Big| \frac{1}{n} \sum_{t=1}^n X_t \Big| & \leq 2 \Big[ \Big(\frac{1}{n} \sum_{t=1}^n \E_t X^2_t \Big) \ensuremath{\Psi_\numobs}(2 \delta) \Big]^{1/2} + 2 \ensuremath{\Psi_\numobs}(2 \delta) \end{align} with probability at least $1 - \delta$. \end{lemma} \noindent With this result in place, we divide our proof into two parts, corresponding to the two claims~\eqref{EqnBernsteinBound} and~\eqref{EqnBernsteinFsquare} stated in~\cref{LemBernsteinBound}. \paragraph{Proof of the bound~\eqref{EqnBernsteinBound}:} Recall that at step $i$, the triple $(\state,\action,\identifier)$ is drawn according to a conditional distribution $\mu_i(\cdot \mid \mathcal F_i)$. Similarly, we let $d_i$ denote the distribution of $\sarsi{}$ conditioned on the filtration $\mathcal F_i$. Note that $\mu_i$ is obtained from $d_i$ by marginalizing out the pair $(\reward, \successorstate)$. Moreover, by the tower property of expectation, the Bellman error is equivalent to the average TD error. 
Using these facts, we have the equivalence \begin{align*} \innerprodweighted{\TestFunction{}} {\TDError{Q}{\policyicy}{}} {d_{i}} & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunctionDefCompact{}{}[\TDErrorDefCompact{Q}{\policyicy}{}]\big\}} {d_{i}} \\ & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunction{}(\state,\action,\identifier) \ensuremath{\mathbb{E}}ecti{[\TDErrorDefCompact{Q}{\policyicy}{}]} {\substack{\reward \sim R\psa, \successorstate\sim\Pro\psa}}\big\}} {(\state,\action,\identifier) \sim \mu_{i}} \\ & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunction{}(\state,\action,\identifier) [\ensuremath{\mathcal{B}}orDefCompact{Q}{\policyicy}{}]\big\}} {(\state,\action,\identifier) \sim \mu_{i}} \\ & = \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu_{i}}. \end{align*} As a consequence, we can write $\inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu = \frac{1}{n} \sum_{i=1}^n \ensuremath{W}_i$, where \begin{align*} \ensuremath{W}_i & \defeq \TestFunctionDefCompact{}{i} [\TDErrorDefCompact{Q}{\policyicy}{i}] - \ensuremath{\mathbb{E}}ecti{\big\{\TestFunctionDefCompact{}{}[\TDErrorDefCompact{Q}{\policyicy}{}]\big\}}{d_{i}} \end{align*} defines a martingale difference sequence (MDS). Thus, we can prove the claim by applying a Bernstein martingale inequality. Since $\|r\|_\infty \leq 1$ and $\|Q\|_\infty \leq 1$ by assumption, we have $\|\ensuremath{W}_i\|_\infty \leq 3 \|f\|_\infty$, and \begin{align*} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{d_i} [\ensuremath{W}_i^2] & \leq 9 \; \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_{i}}[f^2(\state_{i},\action_{i}, o_{i})] \; = \; 9 \|f\|_\mu^2. \end{align*} Consequently, the claimed bound~\eqref{EqnBernsteinBound} follows by applying the Bernstein bound stated in \cref{lem:Bernstein}.
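Though not part of the formal argument, the martingale Bernstein bound of \cref{lem:Bernstein} is easy to sanity-check numerically. The sketch below simulates a bounded martingale difference sequence (with an illustrative predictable scale process, chosen arbitrarily) and checks that the stated deviation bound holds on essentially all trials:

```python
import math
import random

# Numerical sanity check (not part of the proof) of the martingale Bernstein bound:
#   |(1/n) sum_t X_t| <= 2 * sqrt(Vbar * Psi_n(2*delta)) + 2 * Psi_n(2*delta),
# where Vbar = (1/n) sum_t E_t[X_t^2] and Psi_n(d) = log(n/d) / n.
random.seed(0)
n, delta, trials = 2000, 0.05, 200
psi = math.log(n / (2 * delta)) / n  # Psi_n(2 * delta)

violations = 0
for _ in range(trials):
    s, sq_sum, total = 0.5, 0.0, 0.0  # s is the predictable scale of X_t
    for _t in range(n):
        eps = 1.0 if random.random() < 0.5 else -1.0
        x = s * eps                    # X_t = s_t * eps_t: |X_t| <= 1, E_t[X_t] = 0
        total += x
        sq_sum += s * s                # E_t[X_t^2] = s_t^2 since eps_t = +/- 1
        s = 0.25 + 0.5 * max(x, 0.0)   # predictable update, keeps s in [0.25, 0.5]
    vbar = sq_sum / n
    bound = 2.0 * math.sqrt(vbar * psi) + 2.0 * psi
    if abs(total / n) > bound:
        violations += 1

# Per trial, the bound should fail with probability at most delta = 0.05.
print(f"violations: {violations} / {trials}")
```

The deviation bound is far from tight for this particular process, so the observed violation rate is well below $\delta$.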
\paragraph{Proof of the bound~\eqref{EqnBernsteinFsquare}:} In this case, we have the additive decomposition \begin{align*} \|\ensuremath{f}\|_n^2 - \|\ensuremath{f}\|_\mu^2 & = \frac{1}{n} \sum_{i=1}^n \Big \{ \underbrace{\ensuremath{f}^2(\state_{i},\action_{i},o_{i}) - \ensuremath{\mathbb{E}}_{\mu_{i}}[f^2(\state, \action, o)]}_{\ensuremath{\martone'}_i} \Big\}, \end{align*} where $\{\ensuremath{\martone'}_i\}_{i=1}^n$ again defines a martingale difference sequence. Note that $\|\ensuremath{\martone'}_i\|_\infty \leq 2 \|\ensuremath{f}\|_\infty^2 \leq 2$, and \begin{align*} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} [(\ensuremath{\martone'}_i)^2] & \stackrel{(i)}{\leq} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} \big[ \ensuremath{f}^4(S, A, \ensuremath{O}) \big] \; \leq \; \|\ensuremath{f}\|_\infty^2 \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} \big[\ensuremath{f}^2(S, A, \ensuremath{O}) \big] \; \stackrel{(ii)}{\leq} \; \|\ensuremath{f}\|_\mu^2, \end{align*} where step (i) uses the fact that the variance of $\ensuremath{f}^2$ is at most its second moment, that is, the fourth moment of $\ensuremath{f}$, and step (ii) uses the bound $\|\ensuremath{f}\|_\infty \leq 1$. Consequently, the claimed bound~\eqref{EqnBernsteinFsquare} follows by applying the Bernstein bound stated in \cref{lem:Bernstein}. \section{Proofs for \texorpdfstring{\cref{sec:Applications}}{} and \cref{sec:appConc}} In this section, we collect the proofs of results stated without proof in \cref{sec:Applications} and \cref{sec:appConc}. \subsection{Proof of \cref{prop:LikeRatio}} \label{sec:LikeRatio} \begin{proof} Since $\TestFunction{}^* \in \TestFunctionClass{\policyicy}$, we are guaranteed that the corresponding constraint must hold.
It reads as \begin{align*} |\ensuremath{\mathbb{E}}ecti{\frac{1}{\scaling{\policyicy}} \frac{\DistributionOfPolicy{\policyicy}}{\mu} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} |^2 = \frac{1}{\scaling{\policyicy}^2} | \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} |^2 & \overset{(iii)}{\leq} \big( \frac{1}{\scaling{\policyicy}^2} \munorm{\frac{\DistributionOfPolicy{\policyicy}}{\mu} }^2 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}, \end{align*} where step (iii) follows from the definition of the population constraint. Re-arranging yields the upper bound \begin{align*} \frac{|\ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}}{\mu} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} |^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} & \leq \frac{\big(\munorm{\frac{\DistributionOfPolicy{\policyicy}}{\mu} }^2 + \scalingsq{\policyicy}\ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} \; = \; \frac{ \ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big] }{\policyicy} + \scalingsq{\policyicy} \ensuremath{\lambda}} {1 + \ensuremath{\lambda}}, \end{align*} where the final step uses the fact that \begin{align*} \norm{\frac{\DistributionOfPolicy{\policyicy}}{\mu}}{\mu}^2 = \ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}^2\PSA}{\mu^2\PSA} }{\mu} = \ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA} }{\policyicy}. \end{align*} Thus, we have established the bound (i) in our claim~\eqref{EqnLikeRatioBound}. The upper bound (ii) follows immediately since $\ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa} }{\policyicy} \leq \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa} \leq \scaling{\policyicy}$.
\end{proof} \subsection{Proof of \texorpdfstring{\cref{lem:PredictionError}}{}} \label{sec:PredictionError} Some simple algebra yields \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{} = [Q - \BellmanEvaluation{\policyicy}Q] - [\QpiWeak{\policyicy} - \BellmanEvaluation{\policyicy}\QpiWeak{\policyicy}] = (\IdentityOperator - \gamma\TransitionOperator{\policyicy}) (Q - \QpiWeak{\policyicy}) = (\IdentityOperator - \gamma\TransitionOperator{\policyicy}) QErr. \end{align*} Taking expectations under $\policyicy$ and recalling that $\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = 0$ for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$ yields \begin{align*} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} = \innerprodweighted{\TestFunction{}} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}. \end{align*} Notice that for any $Q \in \Qclass{\policyicy}$ there exists a test function $QErr = Q - \QpiWeak{\policyicy} \in \QclassErr{\policyicy}$, and the associated population constraint reads \begin{align*} \frac{ \big| \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu} \big| } { \sqrt{ \norm{QErr}{\mu}^2 + \TestFunctionReg } } & \leq \sqrt{\frac{\ensuremath{\rho}}{n}}. 
\end{align*} Consequently, the {off-policy cost coefficient} can be upper bounded as \begin{align*} K^\policy & \leq \max_{QErr \in \QclassErrCentered{\policyicy}} \Big \{ \frac{n}{\ensuremath{\rho}} \; \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } {1+ \TestFunctionReg} \Big \} \; \leq \; \max_{QErr \in \QclassErrCentered{\policyicy}} \Big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \: \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu}^2 } \Big \}, \end{align*} as claimed in the bound~\eqref{EqnPredErrorBound}. \subsection{Proof of \texorpdfstring{\cref{lem:PredictionErrorBellmanClosure}}{}} \label{sec:PredictionErrorBellmanClosure} If weak Bellman closure holds, then we can write \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q = Q - \QpiProj{\policyicy}{Q} \in \QclassErr{\policyicy}. \end{align*} For any $Q \in \Qclass{\policyicy}$, the function $QErrNC = Q - \QpiProj{\policyicy}{Q}$ belongs to $\QclassErr{\policyicy}$, and the associated population constraint reads $\frac{ |\innerprodweighted{QErrNC} {QErrNC} {\mu} |} { \sqrt{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg }} \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$.
Consequently, the {off-policy cost coefficient} is upper bounded as \begin{align*} K^\policy & \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{n}{\ensuremath{\rho}} \; \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } {1+ \TestFunctionReg} \Big \} \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg } { 1 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \} \; \leq \; \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \}, \end{align*} where the final inequality follows from the fact that $\norm{QErrNC}{\mu} \leq 1$. \subsection{Proof of \texorpdfstring{\cref{lem:BellmanTestFunctions}}{}} \label{sec:BellmanTestFunctions} We split our proof into the two separate claims. \paragraph{Proof of the bound~\eqref{EqnBellBound}:} When the test function class includes $\TestFunctionClassBubnov{\policyicy}$, then any feasible $Q$ must satisfy the population constraints \begin{align*} \frac{\innerprodweighted{ \ensuremath{\mathcal{B}}or{Q'}{\policyicy}{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}} {\sqrt{\norm{\ensuremath{\mathcal{B}}or{Q'}{\policyicy}{}}{\mu}^2 + \TestFunctionReg}} \leq \sqrt{\frac{\ensuremath{\rho}}{n}}, \qquad \mbox{for all $Q' \in \Qpi{\policyicy}$.} \end{align*} Setting $Q' = Q$ yields $\frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} { \sqrt{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 + \TestFunctionReg } } \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$. If $\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 \leq \TestFunctionReg$, then the claim holds directly, given our choice $\TestFunctionReg = c \frac{\ensuremath{\rho}}{n}$ for some constant $c$.
Otherwise, the constraint can be weakened to $\frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} { \sqrt{ 2\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 } } \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$, which yields the bound~\eqref{EqnBellBound}. \paragraph{Proof of the bound~\eqref{EqnBellBoundConc}:} We now prove the sequence of inequalities stated in equation~\eqref{EqnBellBoundConc}. Inequality (i) follows directly from the definition of $K^\policy$ and the bound~\eqref{EqnBellBound}. Turning to inequality (ii), an application of Jensen's inequality yields \begin{align*} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 = [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2 \leq \ensuremath{\mathbb{E}}ecti{[\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}]^2}{\policyicy} = \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2. \end{align*} Finally, inequality (iii) follows by observing that \begin{align*} \sup_{Q \in \Qclass{\policyicy}} \frac{\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} = \sup_{Q \in \Qclass{\policyicy}} \frac{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\policyicy}}{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\mu}} = \sup_{Q \in \Qclass{\policyicy}} \frac{\ensuremath{\mathbb{E}}ecti{\Big[ \frac{\DistributionOfPolicy{\policyicy}\psa }{\mu \psa} [(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2 \Big]}{\mu} }{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\mu}} \leq \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa }{\mu \psa}. \end{align*} \section{Proofs for the Linear Setting} We now prove the results stated in~\cref{sec:Linear}.
Throughout this section, the reader should recall that $Q$ takes the linear form $Q \psa = \inprod{\CriticPar{}}{\phi \psa}$, so that the bulk of our arguments operate directly on the weight vector $\CriticPar{} \in \R^\dim$. Given the linear structure, the population and empirical covariance matrices of the feature vectors play a central role. We make use of the following known result (cf. Lemma 1 in the paper~\cite{zhang2021optimal}) that relates these objects: \begin{lemma}[Covariance Concentration] \label{lem:CovarianceConcentration} There are universal constants $(c_1, \ldots, c_4)$ such that for any $\delta \in (0, 1)$, we have \begin{align} c_1 \SigmaExplicit \preceq \frac{1}{n}\widehat \SigmaExplicit + \frac{c_2}{n} \log \frac{n \dim}{\FailureProbability}\Identity \preceq c_3 \SigmaExplicit + \frac{c_4}{n} \log \frac{n \dim}{\FailureProbability}\Identity \end{align} with probability at least $1 - \FailureProbability$. \end{lemma} \subsection{Proof of \texorpdfstring{\cref{prop:LinearConcentrability}}{}} \label{sec:LinearConcentrability} Under weak realizability, we have \begin{align} \innerprodweighted {\TestFunction{j}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}} {\mu} = 0 \qquad \mbox{for all $j = 1, \ldots, \dim$.} \end{align} Thus, at $\psa$ the Bellman error difference reads \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}\psa - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}\psa & = [Q - \BellmanEvaluation{\policyicy}Q]\psa - [\QpiWeak{\policyicy} - \BellmanEvaluation{\policyicy} \QpiWeak{\policyicy} ]\psa \\ & = [Q - \QpiWeak{\policyicy} ]\psa - \discount \ensuremath{\mathbb{E}}ecti{[Q - \QpiWeak{\policyicy}](\successorstate,\policyicy)}{\successorstate \sim \Pro\psa } \\ \numberthis{\label{eqn:LinearBellmanError}} & = \inprod{\CriticPar{} - \CriticParBest{\policyicy}}{ \phi\psa - \discount \phiBootstrap{\policyicy}\psa}. \end{align*} To proceed, we need the following auxiliary result:
\begin{lemma}[Linear Parameter Constraints] \label{lem:RelaxedLinearConstraints} There exists a universal constant $c_1 > 0$ such that, with probability at least $1-\FailureProbability$, any $Q \in \PopulationFeasibleSet{\policyicy}$ satisfies $ \norm{\CriticPar{} - \CriticParBest{\policyicy}}{\CovarianceWithBootstrapReg{\policyicy}}^2 \leq c_1 \frac{\dim \ensuremath{\rho}}{n}$. \end{lemma} \noindent See \cref{sec:RelaxedLinearConstraints} for the proof. \newcommand{\IntermediateSmall}[4]{Using this lemma, we can bound the OPC coefficient as follows: \begin{align*} K^\policy \overset{(i)}{\leq} \frac{n}{\ensuremath{\rho}} \; \max_{Q \in \PopulationFeasibleSet{\policyicy}} \innerprodweighted{\1} { \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} #4} {\policyicy}^2 & \overset{(ii)}{\leq} \frac{n}{\ensuremath{\rho}} \; [\ensuremath{\mathbb{E}}ecti{(#1)^\top}{\policyicy} (#2)]^2 \\ & \overset{(iii)}{\leq} \frac{n}{\ensuremath{\rho}} \: \norm{\ensuremath{\mathbb{E}}ecti{ #1 }{\policyicy}}{(#3)^{-1}}^2 \norm{#2}{#3}^2 \\ & \leq c_1 \dim \norm{\ensuremath{\mathbb{E}}ecti{ #1}{\policyicy}}{(#3)^{-1}}^2. \end{align*} Here step $(i)$ follows from the definition of the off-policy cost coefficient, step $(ii)$ leverages the linear structure, and step $(iii)$ follows from the Cauchy--Schwarz inequality.
} \IntermediateSmall {\phi - \discount \phiBootstrap{\policyicy}} {\CriticPar{} - \CriticParBest{\policyicy}} {\CovarianceWithBootstrapReg{\policyicy}} {-\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}} \subsection{Proof of \texorpdfstring{\cref{lem:RelaxedLinearConstraints}}{}} \label{sec:RelaxedLinearConstraints} \intermediate{(\phi - \discount\phiBootstrap{\policyicy})^\top} {(\SigmaReg - \discount \CovarianceBootstrap{\policyicy})} {(\CriticPar{} - \CriticParBest{\policyicy})} {\CovarianceWithBootstrapReg{\policyicy}} {(\Sigma - \discount \CovarianceBootstrap{\policyicy})} {\cref{eqn:LinearBellmanError}} \subsection{Proof of \cref{prop:LinearConcentrabilityBellmanClosure}} \label{sec:LinearConcentrabilityBellmanClosure} Under weak Bellman closure, we have \begin{align} \numberthis{\label{eqn:LinearBellmanErrorWithClosure}} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q = \phi^\top(\CriticPar{} - \CriticParProjection{\policyicy}). \end{align} With a slight abuse of notation, let $\QpiProj{\policyicy}{\CriticPar{}}$ denote the weight vector that defines the action-value function $\QpiProj{\policyicy}{Q}$. We introduce the following auxiliary lemma: \begin{lemma}[Linear Parameter Constraints with Bellman Closure] \label{lem:RelaxedLinearConstraintsBellmanClosure} With probability at least $1-\FailureProbability$, if $Q \in \PopulationFeasibleSet{\policyicy}$ then $ \norm{\CriticPar{} - \CriticParProjection{\policyicy}}{\SigmaReg }^2 \leq c_1\frac{\dim \ensuremath{\rho}}{n}$. \end{lemma} See~\cref{sec:RelaxedLinearConstraintsBellmanClosure} for the proof. 
\IntermediateSmall {\phi} {\CriticPar{} - \CriticParProjection{\policyicy}} {\SigmaReg} {} \subsection{Proof of \texorpdfstring{\cref{lem:RelaxedLinearConstraintsBellmanClosure}}{}} \label{sec:RelaxedLinearConstraintsBellmanClosure} \intermediate{\phi^\top} {\SigmaReg} {(\CriticPar{} - \CriticParProjection{\policyicy})} {\SigmaReg} {\Sigma} {\cref{eqn:LinearBellmanErrorWithClosure}} \section{Proof of \texorpdfstring{\cref{thm:LinearApproximation}}{}} \label{sec:LinearApproximation} In this section, we prove the guarantee on our actor-critic procedure stated in \cref{thm:LinearApproximation}. \hidecom{ The proof consists of four main steps. First, we show that the linear critic function $\QminEmp{\policyicy}$ can be interpreted as the \emph{exact} value function on an MDP with a perturbed reward function. Second, we show that the global update rule in \cref{eqn:LinearActorUpdate} is equivalent to an instantiation of Mirror descent where the gradient is the $Q$ of the adversarial MDP. Third, we analyze the progress of Mirror descent in finding a good solution on the sequence of adversarial MDPs identified by the critic. Finally, we put everything together to derive a performance bound. } \subsection{Adversarial MDPs} \label{sec:AdversarialMDP} We now introduce the sequence of adversarial MDPs $\{\mathcal{M}adv{t}\}_{t=1}^T$ used in the analysis. Each MDP $\mathcal{M}adv{t}$ is defined by the same state-action space and transition law as the original MDP $\mathcal{M}$, but with the reward function $\Reward$ perturbed by $RAdv{t}$---that is, \begin{align} \label{eqn:AdversarialMDP} \mathcal{M}adv{t} \defeq \langle \StateSpace,ASpace, R + RAdv{t}, \Pro, \discount \rangle. 
\end{align} For an arbitrary policy $\policyicy$, we denote with $\QpiAdv{t}{\policyicy}$ and with $\ApiAdv{t}{\policyicy}$ the action value function and the advantage function on $\mathcal{M}adv{t}$; the value of $\policyicy$ from the starting distribution $\nu_{\text{start}}$ is denoted by $\VpiAdv{t}{\policyicy}$. We immediately have the following expression for the value function, which follows because the dynamics of $\mathcal{M}adv{t}$ and $\mathcal{M}$ are identical and the reward function of $\mathcal{M}adv{t}$ equals that of $\mathcal{M}$ plus $RAdv{t}$ \begin{align} \label{eqn:VonAdv} \VpiAdv{t}{\policyicy} \defeq \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{ \Big[ R + RAdv{t} \Big]}{\policyicy}. \end{align} Consider the action value function $\QminEmp{\ActorPolicy{t}}$ returned by the critic, and let the reward perturbation $RAdv{t} = \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}$ be the Bellman error of the critic value function $\QminEmp{\ActorPolicy{t}}$. The special property of $\mathcal{M}adv{t}$ is that the action value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$ equals the critic lower estimate $\QminEmp{\ActorPolicy{t}}$. \begin{lemma}[Adversarial MDP Equivalence] \label{lem:QfncOnAdversarialMDP} Given the perturbed MDP $\mathcal{M}adv{t}$ from equation~\eqref{eqn:AdversarialMDP} with $RAdv{t} \defeq \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}$, we have the equivalence \begin{align*} \QpiAdv{t}{\ActorPolicy{t}} = \QminEmp{\ActorPolicy{t}}. \end{align*} \end{lemma} \begin{proof} We need to check that $\QminEmp{\ActorPolicy{t}}$ solves the Bellman evaluation equations for the adversarial MDP, ensuring that $\QminEmp{\ActorPolicy{t}}$ is the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$. Let $\BellmanEvaluation{\ActorPolicy{t}}_t$ be the Bellman evaluation operator on $\mathcal{M}adv{t}$ for policy $\ActorPolicy{t}$. 
We have \begin{align*} \QminEmp{\ActorPolicy{t}} - \BellmanEvaluation{\ActorPolicy{t}}_t(\QminEmp{\ActorPolicy{t}}) = \QminEmp{\ActorPolicy{t}} - \BellmanEvaluation{\ActorPolicy{t}}(\QminEmp{\ActorPolicy{t}}) - RAdv{t} & = \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{} - \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{} = 0. \end{align*} Thus, the function $\QminEmp{\ActorPolicy{t}}$ is the action value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$, and it is by definition denoted by $\QpiAdv{t}{\ActorPolicy{t}}$. \end{proof} This lemma shows that the action-value function $\QminEmp{\ActorPolicy{t}}$ computed by the critic is equivalent to the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$. Thus, we can interpret the critic as computing a model-based pessimistic estimate of $\ActorPolicy{t}$; this view is useful in the rest of the analysis. \subsection{Equivalence of Updates} The second step is to establish the equivalence between the update rule~\eqref{eqn:LinearActorUpdate}, restated here as the update~\eqref{eqn:GlobalRule}, and the exponentiated gradient update rule~\eqref{eqn:LocalRule}. \begin{lemma}[Equivalence of Updates] \label{lem:UpdateEquivalence} For linear $Q$-functions of the form $\QpiAdv{t}{} \psa = \inprod{\CriticPar{t}}{\phi \psa}$, the parameter update \begin{subequations} \begin{align} \label{eqn:GlobalRule} \ActorPolicy{t+1} \pas & \propto \exp(\phi\psa^\top (\ActorPar{t} + \eta \CriticPar{t})), \qquad \intertext{is equivalent to the policy update} \label{eqn:LocalRule} \ActorPolicy{t+1}\pas & \propto \ActorPolicy{t}\pas \exp(\eta\QpiAdv{t}{} \psa), \qquad \ActorPolicy{1}\pas = \frac{1}{\card{ASpace_{\state}}}. \end{align} \end{subequations} \end{lemma} \begin{proof} We prove this claim via induction on $t$. The base case ($t = 1$) holds by a direct calculation. Now let us show that the two update rules update $\ActorPolicy{t}$ in the same way. 
As an inductive step, assume that both rules maintain the same policy $\ActorPolicy{t} \propto \exp(\phi\psa^\top\ActorPar{t})$ at iteration $t$; we will show the policies are still the same at iteration $t+1$. At any $\psa$, we have \begin{align*} \ActorPolicy{t+1}(\action \mid \state) \propto \exp(\phi\psa^\top (\ActorPar{t} + \eta\CriticPar{t})) & \propto \exp(\phi\psa^\top \ActorPar{t}) \exp(\eta\phi\psa^\top \CriticPar{t}) \\ & \propto \ActorPolicy{t}(\action \mid \state) \exp(\eta\QpiAdv{t}{} \psa). \end{align*} \end{proof} Recall that $\ActorPar{t}$ is the parameter associated to $\ActorPolicy{t}$ and that $\CriticPar{t}$ is the parameter associated to $\QminEmp{\ActorPolicy{t}}$. Using \cref{lem:UpdateEquivalence} together with \cref{lem:QfncOnAdversarialMDP} we obtain that the actor policy $\ActorPolicy{t}$ satisfies through its parameter $\ActorPar{t}$ the mirror descent update rule~\eqref{eqn:LocalRule} with $\QpiAdv{t}{} = \QminEmp{\ActorPolicy{t}} = \QpiAdv{t}{\ActorPolicy{t}}$ and $\ActorPolicy{1}(\action \mid \state) = 1/\abs{ASpace_\state}, \; \forall \psa$. In words, the actor is using Mirror descent to find the best policy on the sequence of adversarial MDPs $\{\mathcal{M}adv{t} \}$ implicitly identified by the critic. \subsection{Mirror Descent on Adversarial MDPs} Our third step is to analyze the behavior of mirror descent on the MDP sequence $\{\mathcal{M}adv{t}\}_{t=1}^T$, and then translate such guarantees back to the original MDP $\mathcal{M}$. The following result provides a bound on the average of the value functions $\{\Vpi{\ActorPolicy{t}} \}_{t=1}^T$ induced by the actor's policy sequence. 
This bound involves a form of optimization error\footnote{Technically, this error should depend on $\card{ASpace_\state}$, if we were to allow the action spaces to have varying cardinality, but we elide this distinction here.} given by \begin{align*} \MirrorRegret{T} & = 2 \, \sqrt{ \frac{2 \log |ASpace|}{T}}, \end{align*} as is standard in mirror descent schemes. It also involves the \emph{perturbed rewards} given by $RAdv{t} \defeq \ensuremath{\mathcal{B}}or{\QpiAdv{t}{\ActorPolicy{t}}} {\ActorPolicy{t}}{}$. \begin{lemma}[Mirror Descent on Adversarial MDPs] \label{prop:MirrorDescentAdversarialRewards} For any positive integer $T$, applying the update rule~\eqref{eqn:LocalRule} with $\QpiAdv{t}{} = \QpiAdv{t}{\ActorPolicy{t}}$ for $T$ rounds yields a sequence such that \begin{align} \label{eqn:MirrorDescentAdversarialRewards} \frac{1}{T} \sumiter \Big[ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \Big] \leq \frac{1}{1-\discount} \left \{ \MirrorRegret{T} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \}, \end{align} valid for any comparator policy $\widetilde \policy$. \end{lemma} \noindent See \cref{sec:MirrorDescentAdversarialRewards} for the proof. \\ To be clear, the comparator policy $\widetilde \policy$ need not belong to the soft-max policy class. Apart from the optimization error term, our bound~\eqref{eqn:MirrorDescentAdversarialRewards} involves the behavior of the perturbed rewards $RAdv{t}$ along the comparator $\widetilde \policy$ and $\ActorPolicy{t}$, respectively. These correction terms arise because the actor performs the policy update using the action-value function $\QpiAdv{t}{\ActorPolicy{t}}$ on the perturbed MDPs instead of the real underlying MDP. 
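Though not part of the formal development, the constant in $\MirrorRegret{T}$ can be sanity-checked numerically. The following toy sketch (all variable names hypothetical) runs the exponentiated-gradient update from~\eqref{eqn:LocalRule} in its abstract form on an arbitrary sequence of bounded payoffs, using the step size $\eta = \sqrt{\log K / (2T)}$ from \cref{prop:MirrorDescent}, and verifies the claimed regret bound:

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 8, 500                              # number of actions, rounds
eta = np.sqrt(np.log(K) / (2 * T))         # step size from the proposition
f = rng.uniform(-1.0, 1.0, size=(T, K))    # arbitrary payoffs, |f_t| <= 1

nu = np.full(K, 1.0 / K)                   # uniform initialization
payoff = 0.0
for t in range(T):
    payoff += nu @ f[t]                    # collect f_t(nu_t)
    nu = nu * np.exp(eta * f[t])           # exponentiated-gradient update
    nu /= nu.sum()

best_fixed = f.sum(axis=0).max()           # best fixed comparator in hindsight
regret = (best_fixed - payoff) / T
assert regret <= 2.0 * np.sqrt(2.0 * np.log(K) / T)
```

The bound is deterministic, so the assertion holds for any payoff sequence, not just the random one drawn here.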
\subsection{Pessimism: Bound on \texorpdfstring{$\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}}$}{}} The fourth step of the proof is to leverage the pessimistic estimates returned by the critic to simplify equation~\eqref{eqn:MirrorDescentAdversarialRewards}. Using \cref{lem:Simulation} and the definition of the adversarial reward $RAdv{t}$, we can write \begin{align*} \VminEmp{\ActorPolicy{t}} - \Vpi{\ActorPolicy{t}} = \frac{1}{1-\discount} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\ActorPolicy{t}} = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\ActorPolicy{t}} & = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}}. \end{align*} Since weak realizability holds, \cref{thm:NewPolicyEvaluation} guarantees that $\VminEmp{\policyicy} \leq \Vpi{\policyicy}$ uniformly for all $\policyicy \in \PolicyClass$ with probability at least $1-\FailureProbability$. Coupled with the prior display, we find that \begin{align} \label{eqn:PessimisticAdversarialReward} \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \leq 0. \end{align} Using the above display, the result in \cref{eqn:MirrorDescentAdversarialRewards} can be further upper bounded and simplified. \subsection{Concentrability: Bound on \texorpdfstring{$\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}$}{}} The term $\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}$ can be interpreted as an approximate concentrability factor for the algorithm that we are investigating. 
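The adversarial-MDP construction and the value-difference identity used above can be verified on a small tabular example. The following sketch (hypothetical names; dense-matrix policy evaluation in place of the paper's estimators) checks both \cref{lem:QfncOnAdversarialMDP} and the simulation-lemma identity relating the value gap to the expected adversarial reward:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 3, 0.9

P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)  # transitions
R = rng.random((nS, nA))                                         # rewards
pi = rng.random((nS, nA)); pi /= pi.sum(axis=1, keepdims=True)   # policy pi_t
nu0 = np.full(nS, 1.0 / nS)                                      # start dist.

# State-action transition matrix P^pi[(s,a),(s',a')] = P(s'|s,a) pi(a'|s').
Ppi = np.einsum('sat,tb->satb', P, pi).reshape(nS * nA, nS * nA)

def q_eval(r):
    """Exact policy evaluation: solve (I - gamma P^pi) q = r."""
    q = np.linalg.solve(np.eye(nS * nA) - gamma * Ppi, r.reshape(-1))
    return q.reshape(nS, nA)

def bellman_error(Q):
    V = (pi * Q).sum(axis=1)             # V(s') = E_{a' ~ pi} Q(s', a')
    return Q - (R + gamma * P @ V)

Qhat = rng.random((nS, nA))              # an arbitrary critic estimate
R_adv = bellman_error(Qhat)              # adversarial reward perturbation

# Adversarial MDP equivalence: Q^pi on the perturbed MDP equals Qhat.
assert np.allclose(q_eval(R + R_adv), Qhat)

# Value-difference identity: V_adv - V = E_{d^pi}[R_adv] / (1 - gamma),
# with d^pi the normalized discounted occupancy measure from nu0.
init = (nu0[:, None] * pi).reshape(-1)   # initial state-action distribution
d = (1 - gamma) * np.linalg.solve(np.eye(nS * nA) - gamma * Ppi.T, init)
V = lambda Q: nu0 @ (pi * Q).sum(axis=1)
lhs = V(q_eval(R + R_adv)) - V(q_eval(R))
assert np.isclose(lhs, d @ R_adv.reshape(-1) / (1 - gamma))
```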
\paragraph{Bound under only weak realizability:} \cref{lem:RelaxedLinearConstraints} gives, with probability at least $1-\FailureProbability$, that any surviving $Q$ in $\PopulationFeasibleSet{\ActorPolicy{t}}$ must satisfy $ \norm{ \CriticPar{} - \CriticParBest{\ActorPolicy{t}}}{\CovarianceWithBootstrapReg{\ActorPolicy{t}}}^2 \lesssim \frac{\dim \ensuremath{\rho}}{n} $, where $\CriticParBest{\ActorPolicy{t}}$ is the parameter associated to the weak solution $\QpiWeak{\ActorPolicy{t}}$. This bound must also apply to the parameter $ \CriticPar{t} \in \EmpiricalFeasibleSet{\ActorPolicy{t}}$ identified by the critic.\footnote{We abuse the notation and write $\CriticPar{} \in \EmpiricalFeasibleSet{\policyicy}$ in place of $Q \in \EmpiricalFeasibleSet{\policyicy}$.} We are now ready to bound the remaining adversarial reward along the distribution of the comparator $\widetilde \policy$. \begin{align*} \abs{\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}} & = \abs{\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\widetilde \policy}} \\ & \overset{\text{(i)}}{=} \abs{\ensuremath{\mathbb{E}}ecti{ (\phi - \discount \phiBootstrap{\ActorPolicy{t}})^\top (\CriticPar{t} - \CriticParBest{\ActorPolicy{t}}) }{\widetilde \policy}} \\ & \leq \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\ActorPolicy{t}}]}{\widetilde \policy}}{(\CovarianceWithBootstrapReg{\ActorPolicy{t}})^{-1}} \norm{\CriticPar{t} - \CriticParBest{\ActorPolicy{t}}}{\CovarianceWithBootstrapReg{\ActorPolicy{t}}} \\ & \leq c \; \sqrt{\frac{\dim \ensuremath{\rho}}{n}} \; \sup_{\policyicy \in \PolicyClass} \left \{ \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\widetilde \policy}}{(\CovarianceWithBootstrapReg{\policyicy})^{-1}} \right \}. 
\numberthis{\label{eqn:ApproximateLinearConcentrability}} \end{align*} Step (i) follows from the expression~\eqref{eqn:LinearBellmanError} for the weak Bellman error, along with the definition of the weak solution $\QpiWeak{\ActorPolicy{t}}$. \paragraph{Bound under weak Bellman closure:} When Bellman closure holds we proceed analogously. The bound in \cref{lem:RelaxedLinearConstraintsBellmanClosure} ensures with probability at least $1-\FailureProbability$ that $ \norm{\CriticPar{} - \CriticParProjection{\ActorPolicy{t}}}{\SigmaReg}^2 \leq c \; \frac{\dim \ensuremath{\rho}}{n} $ for all $\CriticPar{} \in \PopulationFeasibleSet{\ActorPolicy{t}}$; as before, this relation must apply to the parameter chosen by the critic $\CriticPar{t} \in \EmpiricalFeasibleSet{\ActorPolicy{t}}$. The bound on the adversarial reward along the distribution of the comparator $\widetilde \policy$ now reads \begin{align*} \abs{\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}} \: = \: \abs{\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\widetilde \policy}} & \overset{\text{(i)}}{=} \abs{\ensuremath{\mathbb{E}}ecti{ \phi^\top (\CriticPar{t} - \CriticParProjectionFull{\ActorPolicy{t}}{\CriticPar{t}}) }{\widetilde \policy}} \\ & \leq \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}} \norm{\CriticPar{t} - \CriticParProjectionFull{\ActorPolicy{t}}{\CriticPar{t}}}{\SigmaReg} \\ & \leq c \; \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}} \sqrt{\frac{\dim \ensuremath{\rho}}{n}}. \numberthis{\label{eqn:ApproximateLinearConcentrabilityBellmanClosure}} \end{align*} Here step (i) follows from the expression~\eqref{eqn:LinearBellmanErrorWithClosure} for the Bellman error under weak closure. 
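Both chains of inequalities above rest on the weighted Cauchy--Schwarz step $\lvert x^\top w \rvert \le \|x\|_{A^{-1}} \, \|w\|_{A}$ for a positive definite matrix $A$, applied with $x$ the comparator-averaged feature direction and $w$ the parameter error. A minimal numerical check of this step (hypothetical names):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)        # a positive definite weight matrix
x = rng.standard_normal(d)     # stands in for the averaged feature direction
w = rng.standard_normal(d)     # stands in for the parameter error

lhs = abs(x @ w)
# |x^T w| = |(A^{-1/2} x)^T (A^{1/2} w)| <= ||x||_{A^{-1}} ||w||_A
rhs = np.sqrt(x @ np.linalg.solve(A, x)) * np.sqrt(w @ A @ w)
assert lhs <= rhs + 1e-9
```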
\subsection{Proof of \cref{prop:MirrorDescentAdversarialRewards}} \label{sec:MirrorDescentAdversarialRewards} We now prove our guarantee for a mirror descent procedure on the sequence of adversarial MDPs. Our analysis makes use of a standard result on online mirror descent for linear functions (e.g., see Section 5.4.2 of Hazan~\cite{hazan2021introduction}), which we state here for reference. Given a finite cardinality set $\xSpace$, a function $f: \xSpace \rightarrow \R$, and a distribution $\ensuremath{\nu}$ over $\xSpace$, we define $f(\ensuremath{\nu}) \defeq \sum_{x \in \xSpace} \ensuremath{\nu}(x) f(x)$. The following result gives a guarantee that holds uniformly for any sequence of functions $\{f_t\}_{t=1}^T$, thereby allowing for the possibility of adversarial behavior. \begin{proposition}[Adversarial Guarantees for Mirror Descent] \label{prop:MirrorDescent} Suppose that we initialize with the uniform distribution $\ensuremath{\nu}_{1}(\xvar) = \frac{1}{\abs{\xSpace}}$ for all $\xvar \in \xSpace$, and then perform $T$ rounds of the update \begin{align} \label{eqn:ExponentiatedGradient} \ensuremath{\nu}_{t+1}(\xvar ) \propto \ensuremath{\nu}_{t}(\xvar) \exp(\eta \fnc_{t}(\xvar)), \quad \mbox{for all $\xvar \in \xSpace$,} \end{align} using $\eta = \sqrt{\frac{\log \abs{\xSpace}}{2T}}$. If $\norm{\fnc_{t}}{\infty} \leq 1$ for all $t \in [T]$ then we have the bound \begin{align} \label{EqnMirrorBoundStatewise} \frac{1}{T} \sum_{t=1}^T \Big[ \fnc_{t}(\widetilde{\ensuremath{\nu}}) - \fnc_{t}(\ensuremath{\nu}_{t})\Big] \leq \MirrorRegret{T} \defeq 2\sqrt{\frac{2\log \abs{\xSpace}}{T}}. \end{align} where $\widetilde{\ensuremath{\nu}}$ is any comparator distribution over $\xSpace$. \end{proposition} We now use this result to prove our claim. So as to streamline the presentation, it is convenient to introduce the advantage function corresponding to $\ActorPolicy{t}$. 
It is a function of the state-action pair $\psa$ given by \begin{align*} \ApiAdv{t}{\ActorPolicy{t}}\psa \defeq \QpiAdv{t}{\ActorPolicy{t}}\psa - \ensuremath{\mathbb{E}}ecti{\QpiAdv{t}{\ActorPolicy{t}}(\state,\successoraction)}{\successoraction \sim \ActorPolicy{t}(\cdot \mid \state)}. \end{align*} In the sequel, we omit dependence on $\psa$ when referring to this function, consistent with the rest of the paper. From our earlier observation~\eqref{eqn:VonAdv}, recall that the reward function of the perturbed MDP $\mathcal{M}adv{t}$ corresponds to that of $\mathcal{M}$ plus the perturbation $RAdv{t}$. Combining this fact with a standard simulation lemma (e.g., \cite{kakade2003sample}) applied to $\mathcal{M}adv{t}$, we find that \begin{subequations} \begin{align} \label{EqnInitialBound} \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} & = \VpiAdv{t}{\widetilde \policy} - \VpiAdv{t}{\ActorPolicy{t}} + \frac{1}{1-\discount} \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \; = \; \frac{1}{1-\discount} \Big[ \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\widetilde \policy} - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big]. \end{align} Now for any given state $\state$, we introduce the linear objective function \begin{align*} \fnc_t(\ensuremath{\nu}) & \defeq \ensuremath{\mathbb{E}}_{\action \sim \ensuremath{\nu}} \QpiAdv{t}{\ActorPolicy{t}}(\state, \action) \; = \; \sum_{\action \in ASpace} \ensuremath{\nu}(\action) \QpiAdv{t}{\ActorPolicy{t}}(\state, \action), \end{align*} where $\ensuremath{\nu}$ is a distribution over the action space. 
With this choice, we have the equivalence \begin{align*} \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\action \sim \widetilde \policy}\psa & = \fnc_t(\widetilde \policy(\cdot \mid \state)) - \fnc_t\big(\ActorPolicy{t}(\cdot \mid \state) \big), \end{align*} where the reader should recall that we have fixed an arbitrary state $\state$. Consequently, applying the bound~\eqref{EqnMirrorBoundStatewise} with $\xSpace = ASpace$ and these choices of linear functions, we conclude that \begin{align} \label{EqnMyMirror} \frac{1}{T} \sumiter \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\action \sim \widetilde \policy}\psa \leq \MirrorRegret{T}. \end{align} \end{subequations} This bound holds for any state, and also for any average over the states. We now combine the pieces to conclude. By computing the average of the bound~\eqref{EqnInitialBound} over all $T$ iterations, we find that \begin{align*} \frac{1}{T} \sumiter \Big[ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \Big] & \leq \frac{1}{1-\discount} \left \{ \frac{1}{T} \sumiter \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\widetilde \policy} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \} \\ & \leq \frac{1}{1-\discount} \left \{ \MirrorRegret{T} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \}, \end{align*} where the final inequality follows from the bound~\eqref{EqnMirrorBoundStatewise}, applied for each $\state$. We have thus established the claim. \section{Concentrability Coefficients and Test Spaces} \label{sec:Applications} In this section, we develop some connections to concentrability coefficients that have been used in past work, and discuss various choices of the test class. 
Like the predictor class $\Qclass{\policyicy}$, the test class $\TestFunctionClass{\policyicy}$ encodes domain knowledge, and thus its choice is delicate. Different from the predictor class, the test class does not require a ``realizability'' condition. As a general principle, the test functions should be chosen as orthogonal as possible with respect to the Bellman residual, so as to enable rapid progress towards the solution; at the same time, they should be sufficiently ``aligned'' with the dataset, meaning that $\norm{\TestFunction{}}{\mu}$ or its empirical counterpart $\norm{\TestFunction{}}{n}$ should be large. Given a test class, each additional test function posits a new constraint that helps identify the correct predictor; at the same time, it increases the metric entropy (the parameter $\ensuremath{\rho}$), which makes each individual constraint looser. In summary, there are trade-offs to be made in the selection of the test class $\TestFunctionClass{}$, much as for $\Qclass{}$. In order to assess the statistical cost that we pay for off-policy data, it is natural to define the \emph{off-policy cost coefficient} (OPC) as \begin{align} \label{EqnConcSimple} K^\policy(\PopulationFeasibleSet{\policyicy}, \ensuremath{\rho}, \ensuremath{\lambda}) & \defeq \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{|\PolComplexGen{\policyicy}|^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} = \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{} }{\policyicy} ^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}}. \end{align} With this notation, our off-policy width bound~\eqref{EqnWidthBound} can be re-expressed as \begin{subequations} \begin{align} \label{eqn:ConcreteCI} \abs{\VminEmp{\policyicy} - \VmaxEmp{\policyicy}} \leq 2 \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{K^\policy \frac{ \ensuremath{\rho}}{n}}, \end{align} while the oracle inequality~\eqref{EqnOracle} for 
policy optimization can be re-expressed in the form \begin{align} \label{EqnConcSimpleBound} \Vpi{\widetilde \pi} \geq \max_{\policyicy \in \PolicyClass} \Big\{ \Vpi{\policyicy} - \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{K^\policy \frac{ \ensuremath{\rho}}{n}} \Big \}. \end{align} \end{subequations} Since $\ensuremath{\lambda} \sim \ensuremath{\rho}/n$, the factor $\sqrt{1 + \ensuremath{\lambda}}$ can be bounded by a constant in the typical case $n \geq \ensuremath{\rho}$. We now offer concrete examples of the OPC coefficient, while deferring further examples to \cref{sec:appConc}. \subsection{Likelihood ratios} Our broader goal is to obtain small Bellman error along the distribution induced by $\policyicy$. Assume that one constructs a test function class $\TestFunctionClass{\policyicy}$ of possible likelihood ratios. \begin{proposition}[Likelihood ratio bounds] \label{prop:LikeRatio} Assume that for some constant $\scaling{\policyicy}$, the test function defined as $\TestFunction{}^*\psa = \frac{1}{\scaling{\policyicy}} \frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa}$ belongs to $\TestFunctionClass{\policyicy}$ and satisfies $\norm{\TestFunction{}^*}{\infty} \leq 1$. Then the {OPC } coefficient satisfies \begin{align} \label{EqnLikeRatioBound} \ConcentrabilityGeneric{\policyicy} \stackrel{(i)}{\leq} \frac{ \ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big] }{\policyicy} + \scalingsq{\policyicy} \ensuremath{\lambda}} {1 + \ensuremath{\lambda}} \; \stackrel{(ii)}{\leq} \frac{\scaling{\policyicy} \big(1 + \scaling{\policyicy} \ensuremath{\lambda} \big)}{1 + \ensuremath{\lambda}}. \end{align} \end{proposition} Here $\scaling{\policyicy}$ is a scaling parameter that ensures $\norm{\TestFunction{}^*}{\infty} \leq 1$. Concretely, one can take $\scaling{\policyicy} = \sup_{(\state,\action)} \frac{\DistributionOfPolicy{\policyicy}(\state,\action)}{\mu(\state,\action)}$. The proof is in \cref{sec:LikeRatio}. 
Since $\ensuremath{\lambda} = \ensuremath{\lambda}_n \rightarrow 0$ as $n$ increases, the {OPC } coefficient is bounded by a multiple of the expected ratio $\ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big]}{\policyicy}{}$. Up to an additive offset of one, this expectation equals the $\chi^2$-divergence between the policy-induced occupation measure $\DistributionOfPolicy{\policyicy}$ and the data-generating distribution $\mu$, namely $\chi^2(\DistributionOfPolicy{\policyicy} \,\|\, \mu) + 1$. The concentrability coefficient can be plugged back into \cref{eqn:ConcreteCI,EqnConcSimpleBound} to obtain a concrete policy optimization bound. In this case, we recover a result similar to \cite{xie2020Q}, but with a much milder concentrability coefficient that involves only the chosen comparator policy. \subsection{The error test space} \label{sec:ErrorTestSpace} We now turn to a choice of the test space that extends the LSTD algorithm to non-linear spaces. A simplification to the linear setting is presented later in \cref{sec:Linear}. As is well known, the LSTD algorithm \cite{bradtke1996linear} can be seen as minimizing the Bellman error projected onto the linear prediction space $\Qclass{}$. Define the transition operator $ (\TransitionOperator{\policyicy}Q)\psa = \ensuremath{\mathbb{E}}ecti{Q(\successorstate,\policyicy ) } {\successorstate \sim \Pro\psa} $, and the prediction error $QErr = Q - \QpiWeak{\policyicy}$, where $\QpiWeak{\policyicy}$ is a $Q$-function from the definition of weak realizability. The Bellman error can be re-written as $ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{} = (\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr $. 
When realizability holds, in the linear setting and at the population level, the LSTD solution seeks to satisfy the projected Bellman equations \begin{align} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} = 0, \quad \text{for all $\TestFunction{} \in \QclassErrCentered{\policyicy}$}. \end{align} In the linear case, $\QclassErrCentered{\policyicy} $ is the class of linear functions $\Qclass{\policyicy}$ used as predictors; when $\Qclass{\policyicy}$ is non-linear, we can extend the LSTD method by using the (nonlinear) error test space $\TestFunctionClass{\policyicy} = \QclassErrCentered{\policyicy} = \{Q - \QpiWeak{\policyicy} \}$. Since $\QclassErrCentered{\policyicy}$ is unknown (as it depends on the weak solution $\QpiWeak{\policyicy}$), we choose instead the larger class \begin{align*} \QclassErr{\policyicy} = \{ Q - Q' \mid Q,Q' \in \Qclass{\policyicy} \}, \end{align*} which contains $\QclassErrCentered{\policyicy}$. The resulting approach can be seen as performing a projection of the Bellman operator $\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}$ into the error space $\QclassErrCentered{\policyicy}$, much like LSTD does in the linear setting. However, different from LSTD, our procedure returns confidence intervals as opposed to a point estimator. This choice of the test space is related to the Bubnov-Galerkin method~\cite{repin2017one} for linear spaces; it selects the test space $\TestFunctionClass{\policyicy}$ to be identical to the trial space $\QclassErrCentered{\policyicy}$ that contains all possible solution errors. 
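In the linear case, the projected Bellman equations above reduce to the familiar LSTD linear system. The following population-level sketch (hypothetical names; a random tabular MDP with random features) forms the LSTD solution and checks that its Bellman error is orthogonal, under $\mu$, to every linear test direction:

```python
import numpy as np

rng = np.random.default_rng(6)
nS, nA, d, gamma = 6, 2, 3, 0.9
P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)  # transitions
R = rng.random((nS, nA))                                         # rewards
pi = rng.random((nS, nA)); pi /= pi.sum(axis=1, keepdims=True)   # target policy
mu = rng.random(nS * nA); mu /= mu.sum()                         # data dist.
phi = rng.standard_normal((nS, nA, d))                           # feature map

Phi = phi.reshape(nS * nA, d)
# Successor features: E_{s' ~ P(.|s,a), a' ~ pi(.|s')} phi(s', a').
phi_next = np.einsum('sat,tb,tbj->saj', P, pi, phi).reshape(nS * nA, d)

# Population LSTD: impose <f, B(Q_w)>_mu = 0 for every linear test function,
# i.e. Phi^T diag(mu) (Phi - gamma phi_next) w = Phi^T diag(mu) r.
A = Phi.T @ (mu[:, None] * (Phi - gamma * phi_next))
b = Phi.T @ (mu * R.reshape(-1))
w = np.linalg.solve(A, b)

# The Bellman error of Q_w is orthogonal under mu to every test direction.
Berr = Phi @ w - (R.reshape(-1) + gamma * phi_next @ w)
assert np.allclose(Phi.T @ (mu * Berr), 0)
```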
\begin{lemma}[OPC coefficient from prediction error] \label{lem:PredictionError} For any test function class $\TestFunctionClass{\policyicy} \supseteq \QclassErr{\policyicy}$, we have \begin{align} \label{EqnPredErrorBound} K^\policy & \leq \max_{Q \in \Qclass{\policyicy}} \big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2 } { \innerprodweighted{QErr} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu}^2 } \big \} = \max_{QErr \in \QclassErrCentered{\policyicy}} \big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \discount\TransitionOperator{\policyicy})QErr} {\mu}^2 } \big \}. \end{align} \end{lemma} The above coefficient measures the ratio between the Bellman error along the distribution of the target policy $\policyicy$ and that projected onto the error space $\QclassErrCentered{\policyicy}$ defined by $\Qclass{\policyicy}$. It is a concentrability coefficient that \emph{always} applies, as the choice of the test space does not require domain knowledge. See~\cref{sec:PredictionError} for the proof, and \cref{sec:appBubnov} for further comments and insights, as well as a simplification in the special case of Bellman closure. \subsection{The Bellman test space} \label{sec:DomainKnowledge} In the prior section, we controlled the projected Bellman error. Another longstanding approach in reinforcement learning is to control the Bellman error itself, for example by minimizing the squared Bellman residual. In general, this cannot be done if only an offline dataset is available, due to the well-known \emph{double sampling} issue. However, in some cases we can use a helper class to try to capture the Bellman error. 
Such a class needs to be a superset of the class of \emph{Bellman test functions} given by \begin{align} \label{EqnBellmanTest} \TestFunctionClassBubnov{\policyicy} & \defeq \{ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} \mid Q \in \Qclass{\policyicy} \}. \end{align} Any test class that contains the above allows us to control the Bellman residual, as we show next. \begin{lemma}[Bellman Test Functions] \label{lem:BellmanTestFunctions} For any test function class $ \TestFunctionClass{\policyicy}$ that contains $\TestFunctionClassBubnov{\policyicy}$, we have \begin{subequations} \begin{align} \label{EqnBellBound} \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \leq c_1 \sqrt{\frac{\ensuremath{\rho}}{n}} \qquad \mbox{for any $Q \in \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{\policyicy})$.} \end{align} Moreover, the {off-policy cost coefficient} is upper bounded as \begin{align} \label{EqnBellBoundConc} K^\policy & \stackrel{(i)}{\leq} c_1 \sup_{Q \in \Qclass{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2} \stackrel{(ii)}{\leq} c_1 \sup_{Q \in \Qclass{\policyicy}} \frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2} \stackrel{(iii)}{\leq} c_1 \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa} {\mu\psa}. \end{align} \end{subequations} \end{lemma} \noindent See~\cref{sec:BellmanTestFunctions} for the proof of this claim. Consequently, whenever the test class includes the Bellman test functions, the {off-policy cost coefficient} is at most the ratio between the squared Bellman residuals along the target distribution and the data-generating distribution. 
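Steps (ii) and (iii) of the bound~\eqref{EqnBellBoundConc} are elementary (Jensen's inequality and an importance-ratio bound, respectively). A minimal numerical check with hypothetical names, treating the Bellman residual as an arbitrary function on a finite state-action space:

```python
import numpy as np

rng = np.random.default_rng(5)
m = 50                                       # number of state-action pairs
mu = rng.random(m); mu /= mu.sum()           # data-generating distribution
dpi = rng.random(m); dpi /= dpi.sum()        # target occupancy d^pi
B = rng.standard_normal(m)                   # Bellman residual values

mean_sq = lambda p: p @ B**2                 # ||B||_p^2 = E_p[B^2]
ratio_bound = (dpi / mu).max()               # sup_{(s,a)} d^pi / mu

assert (dpi @ B)**2 <= mean_sq(dpi)          # step (ii): Jensen's inequality
assert mean_sq(dpi) / mean_sq(mu) <= ratio_bound + 1e-12   # step (iii)
```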
If Bellman closure holds, then the prediction error space $\QclassErr{\policyicy}$ introduced in \cref{sec:ErrorTestSpace} contains the Bellman test functions: for $Q \in \Qclass{\policyicy}$, we can write $ \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q \in \QclassErr{\policyicy} $. This fact allows us to recover a result in the recent paper~\cite{xie2021bellman} in the special case of Bellman closure, although the approach presented here is more general. \subsection{Combining test spaces} \label{sec:MultipleRobustness} Often, it is natural to construct a test space that is a union of several simpler classes. A simple but valuable observation is that the resulting procedure inherits the best of the OPC coefficients. Suppose that we are given a collection $\{ \TestFunctionClass{\policyicy}_m \}_{m=1}^M$ of $M$ different test function classes, and define the union $\TestFunctionClass{\policyicy} = \bigcup_{m=1}^M \TestFunctionClass{\policyicy}_m$. For each $m = 1, \ldots, M$, let $K^\policy_m$ be the OPC coefficient defined by the function class $\TestFunctionClass{\policyicy}_m$ and radius $\ensuremath{\rho}$, and let $K^\policy(\TestFunctionClass{})$ be the OPC coefficient associated with the full class. Then we have the following guarantee: \begin{lemma}[Multiple test classes] \label{prop:MultipleRobustness} $ \label{EqnMinBound} K^\policy(\TestFunctionClass{}) \leq \min_{m = 1, \ldots, M} K^\policy_m. $ \end{lemma} \noindent This guarantee is a straightforward consequence of our construction of the feasibility sets: in particular, we have $\PopulationFeasibleSet{\policyicy}(\TestFunctionClass{}) = \cap_{m =1}^M \PopulationFeasibleSet{\policyicy}(\TestFunctionClass{}_m)$, and consequently, by the variational definition of the {off-policy cost coefficient} $K^\policy(\TestFunctionClass{})$ as optimization over $\PopulationFeasibleSet{\policyicy}(\TestFunctionClass{})$, the bound~\eqref{EqnMinBound} follows. 
In words, when multiple test spaces are combined, our algorithms inherit the best (smallest) OPC coefficient over all individual test spaces. While this behavior is attractive, one must note that there is a statistical cost to using a union of test spaces: the choice of $\ensuremath{\rho}$ scales as a function of $\TestFunctionClass{}$ via its metric entropy. This increase in $\ensuremath{\rho}$ must be balanced with the benefits of using multiple test spaces.\footnote{For space reasons, we defer to \cref{sec:IS2BC} an application in which we construct a test function space as a union of subclasses, and thereby obtain a method that automatically leverages Bellman closure when it holds, falls back to importance sampling if closure fails, and falls back to a worst-case bound in general.} \section{Linear Setting} \label{sec:Linear} In this section, we turn to a detailed analysis of our estimators using function classes that are linear in a feature map. Let $\phi: \StateSpace \times ASpace \rightarrow \R^\dim$ be a given feature map, and consider linear expansions $g_{\CriticPar{}} \psa \defeq \inprod{\CriticPar{}}{\phi \psa} \; = \; \sum_{j=1}^d \CriticPar{j} \phi_j \psa$. The class of \emph{linear functions} takes the form \begin{align} \label{EqnLinClass} \ensuremath{\mathcal{L}} & \defeq \{ \psa \mapsto g_{\CriticPar{}} \psa \mid \CriticPar{} \in \R^{\dim}, \; \norm{\CriticPar{}}{2} \leq 1 \}. \end{align} Throughout our analysis, we assume that $\norm{\phi(\state, \action)}{2} \leq 1$ for all state-action pairs. Following the approach in \cref{sec:ErrorTestSpace}, which is based on the LSTD method, we should choose the test function class $\TestFunctionClass{\policyicy} = \LinSpace$, since in the linear case the prediction error is itself linear. In order to obtain a computationally efficient implementation, we need to use a test class that is a ``simpler'' subset of $\LinSpace$.
In particular, for linear functions, it is not hard to show that the estimates $\VminEmp{\policyicy}$ and $\VmaxEmp{\policyicy}$ from equation~\eqref{eqn:ConfidenceIntervalEmpirical} can be computed by solving a quadratic program, with two linear constraints for each test function. (See~\cref{sec:LinearConfidenceIntervals} for the details.) Consequently, the computational complexity scales linearly with the number of test functions. Thus, if we restrict ourselves to a finite test class contained within $\LinSpace$, we will obtain a computationally efficient approach. \subsection{A computationally friendly test class and OPC coefficients} Define the empirical covariance matrix $\widehat \Sigma = \frac{1}{n} \sum_{i=1}^{n} \phiEmpirical{i} \phiEmpirical{i}^T$ \mbox{where $\phiEmpirical{i} \defeq \phi(\state_i,\action_i)$.} Let $\{\EigenVectorEmpirical{j} \}_{j=1}^\dim$ be the eigenvectors of the empirical covariance matrix $\widehat \Sigma$, and suppose that they are normalized to have unit $\ell_2$-norm. We use these normalized eigenvectors to define the finite test class \begin{align} \label{eqn:LinTestFunction} \TestFunctionClassRandom{\policyicy} \defeq \{ f_j, j = 1, \ldots, \dim \} \quad \mbox{where $f_j \psa \defeq \inprod{\EigenVectorEmpirical{j}}{\phi \psa}$.} \end{align} A few observations are in order: \begin{carlist} \item This test class has only $d$ functions, so that our QP implementation has $2 \dim$ constraints, and can be solved in polynomial time. (Again, see \cref{sec:LinearConfidenceIntervals} for details.) \item Since $\TestFunctionClassRandom{\policyicy}$ is a subset of $\LinSpace$, the choice of radius $\ensuremath{\rho} = c( \frac{\dim}{n} + \log (1/\delta))$ is valid for some constant $c$.
\end{carlist} \newcommand{\ConcSimple}{K^\policy} \paragraph{Concentrability} When weak Bellman closure does not hold, our analysis needs to take into account how errors propagate via the dynamics. In particular, we define the \emph{next-state feature extractor} $\phiBootstrap{\policyicy} \psa \defeq \ensuremath{\mathbb{E}}ecti{\phi(\successorstate,\policyicy)}{\successorstate \sim \Pro\psa}$, along with the population covariance matrix $\Sigma \defeq \E_\mu \big[\phi \psa \phi^\top \psa \big]$, and its $\ensuremath{\lambda}$-regularized version $\SigmaReg \defeq \Sigma + \TestFunctionReg \Identity$. We also define the matrices \begin{align*} \CovarianceBootstrap{\policyicy} \defeq \ensuremath{\mathbb{E}}ecti{[\phi(\phiBootstrap{\policyicy})^\top]}{ \mu}, \quad \CovarianceWithBootstrapReg{\policyicy} \defeq (\SigmaReg^{\frac{1}{2}} - \discount \SigmaReg^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy} )^\top (\SigmaReg^{\frac{1}{2}} - \discount \SigmaReg^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy}). \end{align*} The matrix $\CovarianceBootstrap{\policyicy}$ is the cross-covariance between successive states, whereas the matrix $\CovarianceWithBootstrapReg{\policyicy}$ is a suitably renormalized and symmetrized version of the matrix $\Sigma^{\frac{1}{2}} - \discount \Sigma^{-\frac{1}{2}} \CovarianceBootstrap{\policyicy}$, which arises naturally from the policy evaluation equation.
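To make these constructions concrete, here is a minimal NumPy sketch that builds the empirical covariance matrix $\widehat \Sigma$, the eigenvector test class~\eqref{eqn:LinTestFunction}, and (empirical surrogates of) the regularized and bootstrapped matrices. All data are synthetic: the feature rows and the next-state features are random placeholders standing in for $\phi(\state_i, \action_i)$ and $\phiBootstrap{\policyicy}$, so this is an illustration of the linear algebra, not an implementation of the full estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma, lam = 500, 4, 0.9, 1e-2

# Synthetic feature rows phi(s_i, a_i), scaled so that ||phi||_2 <= 1.
Phi = rng.normal(size=(n, d))
Phi /= np.maximum(1.0, np.linalg.norm(Phi, axis=1, keepdims=True))
# Synthetic next-state features, a stand-in for E[phi(S', pi)] per sample.
PhiNext = rng.normal(size=(n, d))
PhiNext /= np.maximum(1.0, np.linalg.norm(PhiNext, axis=1, keepdims=True))

# Empirical covariance and its unit-norm eigenvectors v_1, ..., v_d.
Sigma_hat = Phi.T @ Phi / n
_, V = np.linalg.eigh(Sigma_hat)          # columns are orthonormal eigenvectors
# The d test functions f_j(s, a) = <v_j, phi(s, a)>, evaluated on the data.
test_values = Phi @ V                     # column j holds f_j at the n samples

# Regularized covariance, its symmetric square root, and the bootstrapped matrix.
Sigma_lam = Sigma_hat + lam * np.eye(d)
w, U = np.linalg.eigh(Sigma_lam)
S_half = U @ np.diag(np.sqrt(w)) @ U.T            # Sigma_lam^{1/2}
S_mhalf = U @ np.diag(1.0 / np.sqrt(w)) @ U.T     # Sigma_lam^{-1/2}
Sigma_cross = Phi.T @ PhiNext / n                 # cross-covariance term
A = S_half - gamma * S_mhalf @ Sigma_cross
M = A.T @ A                                       # symmetric PSD by construction
```

The matrix `M` is the empirical analogue of $\CovarianceWithBootstrapReg{\policyicy}$; being of the form $A^\top A$, it is automatically symmetric and positive semidefinite, which is what makes the symmetrized version convenient to invert in the OPC bounds below.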
We refer to quantities that contain evaluations at the next-state (e.g., $\phiBootstrap{\policyicy}$) as bootstrapping terms, and now bound the OPC coefficient in the presence of such terms: \begin{proposition}[OPC bounds with bootstrapping] \label{prop:LinearConcentrability} Under weak realizability, we have \begin{align} \label{EqnOPCBootstrap} \ConcSimple(\TestFunctionClassRandom{\policy}) & \leq c \; \dim \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\policyicy}} {(\CovarianceWithBootstrapReg{\policyicy})^{-1}}^2 \qquad \mbox{with probability at least $1-\FailureProbability$.} \end{align} \end{proposition} \noindent See~\cref{sec:LinearConcentrability} for the proof. The bound~\eqref{EqnOPCBootstrap} takes a familiar form, as it involves the same matrices used to define the LSTD solution. This is expected: our approach here is essentially equivalent to the LSTD method, except that LSTD yields only a point estimate, whereas we obtain confidence intervals. Both are derived from the same principle, namely the Bellman equations projected along the prediction error space. The bound quantifies how the feature extractor $\phi$, together with the bootstrapping term $\phiBootstrap{\policyicy}$, averaged along the target policy $\policyicy$, interacts with the covariance matrix with bootstrapping $\CovarianceWithBootstrapReg{\policyicy}$. It is an approximation to the OPC coefficient bound derived in~\cref{lem:PredictionError}. The bootstrapping terms capture the temporal difference correlations that can arise in reinforcement learning when strong assumptions like Bellman closure do not hold. As a consequence, such an OPC coefficient being small is a \emph{sufficient} condition for reliable off-policy prediction.
This bound on the OPC coefficient always applies, and it reduces to the simpler one~\eqref{EqnOPCClosure} when weak Bellman closure holds, with no need to inform the algorithm of the simplified setting; see \cref{sec:LinearConcentrabilityBellmanClosure} for the proof. \begin{proposition}[OPC bounds under weak Bellman Closure] \label{prop:LinearConcentrabilityBellmanClosure} Under weak Bellman closure, we have \begin{align} \label{EqnOPCClosure} \ConcSimple(\TestFunctionClassRandom{\policy}) & \leq c \; \dim \norm{\ensuremath{\mathbb{E}}ecti{ \phi}{\policyicy}}{\SigmaReg^{-1}}^2 \qquad \mbox{with probability at least $1 - \delta$.} \end{align} \end{proposition} \subsection{Actor-critic scheme for policy optimization} \label{sec:LinearApproximateOptimization} Having described a practical procedure to compute $\VminEmp{\policyicy}$, we now turn to the computation of the max-min estimator for policy optimization. We define the \emph{soft-max policy class} \begin{align} \label{eqn:SoftMax} \PolicyClass_{\text{lin}} \defeq \Big\{ \psa \mapsto \frac{e^{\innerprod{\phi\psa}{\ActorPar{}}}}{\sum_{\successoraction \in ASpace} e^{\innerprod{\phi(\state,\successoraction)}{\ActorPar{}}}} \mid \norm{\ActorPar{}}{2} \leq \nIter, \; \ActorPar{} \in \R^{\dim} \Big\}. \end{align} In order to compute the max-min solution~\eqref{eqn:MaxMinEmpirical} over this policy class, we implement an actor-critic method, in which the actor performs a variant of mirror descent.\footnote{Strictly speaking, it is mirror ascent, but we use the conventional terminology.} \begin{carlist} \item At each iteration $t = 1, \ldots, \nIter$, the policy $\ActorPolicy{t} \in \PolicyClass_{\text{lin}}$ can be identified with a parameter $\ActorPar{t} \in \R^\dim$. The sequence is initialized with $\ActorPar{1} = 0$.
\item Using the finite test function class~\eqref{eqn:LinTestFunction} based on normalized eigenvectors, the pessimistic value estimate $\VminEmp{\ActorPolicy{t}}$ is computed by solving a quadratic program, as previously described. This computation returns the weight vector $\CriticPar{t}$ of the associated optimal action-value function. \item Using the action-value vector $\CriticPar{t}$, we update the actor's parameter as \begin{align} \label{eqn:LinearActorUpdate} \ActorPar{t+1} = \ActorPar{t} + \eta \CriticPar{t} \qquad \mbox{where $\eta = \sqrt{\frac{\log\abs{ASpace}}{2T}}$ is a stepsize parameter. } \end{align} \end{carlist} We now state a guarantee on the behavior of this procedure, based on two OPC coefficients: \begin{align} \label{EqnNewConc} \ConcentrabilityGenericSub{\widetilde \policy}{1} = \dim \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}}^2, \quad \mbox{and} \quad \ConcentrabilityGenericSub{\widetilde \policy}{2} = \dim \; \sup_{\policyicy \in \PolicyClass} \Big \{ \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\widetilde \policy}} {(\CovarianceWithBootstrapReg{\policyicy})^{-1}}^2 \Big \}. \end{align} Moreover, in making the following assertion, we assume that every weak solution $\QpiWeak{\policyicy}$ can be evaluated against the distribution of a comparator policy $\widetilde \policy \in \PolicyClass$, i.e., $\innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\widetilde \policy} = 0$ for all $\policyicy \in \PolicyClass $. (This assumption is still weaker than strong realizability).
\begin{theorem}[Approximate Guarantees for Linear Soft-Max Optimization] \label{thm:LinearApproximation} Under the above conditions, running the procedure for $T$ rounds returns a policy sequence $\{\ActorPolicy{t}\}_{t=1}^T$ such that, for any comparator policy $\widetilde \policy\in \PolicyClass$, \begin{align} \label{EqnMirrorBound} \frac{1}{T} \sumiter \big \{ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \big \} & \leq \frac{c_1}{1-\discount} \biggr \{ \underbrace{\sqrt{\frac{\log \abs{ASpace}}{T}} \vphantom{\sqrt{\ConcentrabilityGenericSub{\widetilde \policy}{\cdot}\frac{\dim \log(nT) + \log \frac{n}{\FailureProbability} }{n}}} }_{\text{Optimization error}} + \underbrace{\sqrt{\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} \frac{\dim \log(nT) + \log \big(\frac{n }{\FailureProbability}\big) }{n}}}_{\text{Statistical error}} \biggr \}, \end{align} with probability at least $1 - \FailureProbability$. This bound always holds with \mbox{$\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} = \ConcentrabilityGenericSub{\widetilde \policy}{2}$,} and moreover, it holds with \mbox{$\ConcentrabilityGenericSub{\widetilde \policy}{\cdot} = \ConcentrabilityGenericSub{\widetilde \policy}{1}$} when weak Bellman closure is in force. \end{theorem} \noindent See~\cref{sec:LinearApproximation} for the proof. Whenever Bellman closure holds, the result automatically inherits the more favorable concentrability coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{1}$, as originally derived in \cref{prop:LinearConcentrabilityBellmanClosure}. The resulting bound is only $\sqrt{\dim}$ worse than the lower bound recently established in the paper~\cite{zanette2021provable}. However, the method proposed here is robust, in that it provides guarantees even when Bellman closure does not hold. In this case, we have a guarantee in terms of the OPC coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{2}$.
Note that it is a uniform version of the one derived previously in~\cref{prop:LinearConcentrability}, in that there is an additional supremum over the policy class. This supremum arises due to the use of a gradient-based method, which implicitly searches over policies in bootstrapping terms; see \cref{sec:LinearDiscussion} for a more detailed discussion of this issue. \section*{Acknowledgment} AZ was partially supported by NSF-FODSI grant 2023505. In addition, this work was partially supported by NSF-DMS grant 2015454, NSF-IIS grant 1909365, as well as Office of Naval Research grant DOD-ONR-N00014-18-1-2640 to MJW. The authors are grateful to Nan Jiang and Alekh Agarwal for pointing out further connections with the existing literature, as well as to the reviewers for pointing out clarity issues. \section{Main Proofs} \label{sec:AnalysisProofs} This section is devoted to the proofs of our guarantees for general function classes---namely, \cref{prop:Deterministic} that holds in a deterministic manner, and \cref{thm:NewPolicyEvaluation} that gives high-probability bounds under a particular sampling model. \subsection{Proof of \cref{prop:Deterministic}} \label{SecProofPropDeterministic} Our proof makes use of an elementary simulation lemma, which we state here: \begin{lemma}[Simulation lemma] \label{lem:Simulation} For any policy $\policyicy$ and function $Q$, we have \begin{align} \Estart{Q-\Qpi{\policyicy}}{,\policyicy} & = \frac{\PolComplexGen{\policyicy}}{1-\discount}. \end{align} \end{lemma} \noindent See \cref{SecProofLemSimulation} for the proof of this claim.
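The simulation lemma can be verified numerically on a small synthetic MDP. The sketch below (all quantities are randomly generated stand-ins) computes $\Qpi{\policyicy}$ exactly by solving the linear Bellman system, picks an arbitrary $Q$, and checks that the value gap at the start distribution equals the discounted sum of expected Bellman residuals along the trajectory, which is the form of the right-hand side established in the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 5, 3, 0.9

# Random MDP: P[s, a] is a distribution over next states; r is the reward.
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))
pi = rng.random((S, A))
pi /= pi.sum(axis=1, keepdims=True)          # stochastic policy pi(a | s)
nu = rng.random(S)
nu /= nu.sum()                               # start distribution

# State-action transition matrix under pi: entry [(s, a), (s', a')].
P_sa = (P[:, :, :, None] * pi[None, None, :, :]).reshape(S * A, S * A)
Q_pi = np.linalg.solve(np.eye(S * A) - gamma * P_sa, r.reshape(-1))

Q = rng.random(S * A)                        # an arbitrary action-value function
bellman_res = Q - (r.reshape(-1) + gamma * P_sa @ Q)   # B(Q) = Q - T^pi(Q)

d0 = (nu[:, None] * pi).reshape(-1)          # distribution of (S_0, A_0)
lhs = d0 @ (Q - Q_pi)                        # E_{start, pi}[Q - Q^pi]

# Discounted sum of expected Bellman residuals along the trajectory.
rhs, disc, dist = 0.0, 1.0, d0.copy()
for _ in range(500):                         # gamma^500 is negligible
    rhs += disc * (dist @ bellman_res)
    dist = dist @ P_sa
    disc *= gamma

assert abs(lhs - rhs) < 1e-8
```

The two sides agree to numerical precision, illustrating why the value gap of any candidate $Q$ is controlled by its Bellman residuals weighted by the (normalized) discounted occupancy measure.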
\subsubsection{Proof of policy evaluation claims} First of all, we have the elementary bounds \begin{align*} \abs{\VminEmp{\policyicy} - \Vpi{\policyicy}} & = \abs{ \min_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S, \policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \leq \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} }, \quad \mbox{and} \\ \abs{\VmaxEmp{\policyicy} - \Vpi{\policyicy}} & = \abs{ \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \leq \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S, \policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} }. \end{align*} Consequently, in order to prove the bound~\eqref{EqnWidthBound}, it suffices to upper bound the right-hand side common to the two displays above. Since $\EmpiricalFeasibleSet{\policyicy} \subseteq \PopulationFeasibleSet{\policyicy}$, we have the upper bound \begin{align*} \max_{Q \in \EmpiricalFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } & \leq \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} - \Vpi{\policyicy} } \\ & = \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{\ensuremath{\mathbb{E}}ecti{[Q(S ,\policyicy) - \Qpi{\policyicy}(S ,\policyicy)]} {S \sim \nu_{\text{start}}}} \\ & \overset{\text{(i)}}{=} \frac{1}{1-\discount} \max_{Q\in\PopulationFeasibleSet{\policyicy}} \abs{\PolComplexGen{\policyicy}} \end{align*} where step (i) follows from \cref{lem:Simulation}. Combined with the earlier displays, this completes the proof of the bound~\eqref{EqnWidthBound}.
We now show the inclusion $[ \VminEmp{\policyicy}, \VmaxEmp{\policyicy}] \ni \Vpi{\policyicy}$ when weak realizability holds. By definition of weak realizability, there exists some $\QpiWeak{\policyicy} \in \PopulationFeasibleSetInfty{\policyicy}$. In conjunction with our sandwich assumption, we are guaranteed that $\QpiWeak{\policyicy} \in \PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}$, and consequently \begin{align*} \VminEmp{\policyicy} & = \min_{Q\in\EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \leq \min_{Q\in\PopulationFeasibleSetInfty{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \leq \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} = \Vpi{\policyicy}, \quad \mbox{and} \\ \VmaxEmp{\policyicy} & = \max_{Q\in\EmpiricalFeasibleSet{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \geq \max_{Q\in\PopulationFeasibleSetInfty{\policyicy}} \ensuremath{\mathbb{E}}ecti{Q(S ,\policyicy)}{S \sim \nu_{\text{start}}} \geq \ensuremath{\mathbb{E}}ecti{\QpiWeak{\policyicy}(S ,\policyicy)}{S \sim \nu_{\text{start}}} = \Vpi{\policyicy}. \end{align*} \subsubsection{Proof of policy optimization claims} We now prove the oracle inequality~\eqref{EqnOracle} on the value $\Vpi{\widetilde \pi}$ of a policy $\widetilde \pi$ that optimizes the max-min criterion. Fix an arbitrary comparator policy $\policycomp$. 
Starting with the inclusion $[\VminEmp{\widetilde \pi}, \VmaxEmp{\widetilde \pi}] \ni \Vpi{\widetilde \pi}$, we have \begin{align*} \Vpi{\widetilde \pi} \stackrel{(i)}{\geq} \VminEmp{\widetilde \pi} \stackrel{(ii)}{\geq} \VminEmp{\policycomp} \; = \; \Vpi{\policycomp} - \Big(\Vpi{\policycomp} - \VminEmp{\policycomp} \Big) \stackrel{(iii)}{\geq} \Vpi{\policycomp} - \frac{1}{1-\discount} \max_{Q \in \PopulationFeasibleSet{\policycomp}} \abs{\PolComplexGen{\policycomp}}, \end{align*} where step (i) follows from the stated inclusion at the start of the argument; step (ii) follows since $\widetilde \pi$ solves the max-min program; and step (iii) follows from the bound $|\Vpi{\policycomp} - \VminEmp{\policycomp}| \leq \frac{1}{1-\discount} \max_{Q\in\PopulationFeasibleSet{\policycomp}} \abs{\PolComplexGen{\policycomp}}$, as proved in the preceding section. This lower bound holds uniformly for all comparators $\policycomp$, from which the stated claim follows. \subsection{Proof of \cref{lem:Simulation}} \label{SecProofLemSimulation} For each $\timestep = 0, 1, 2, \ldots$, let $\ensuremath{\mathbb{E}}ecti{}{\timestep}$ be the expectation over the state-action pair at timestep $\timestep$ upon starting from $\nu_{\text{start}}$, so that we have $\Estart{Q - \Qpi{\policyicy} }{,\policyicy} = \ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy}]}{0}$ by definition.
We claim that \begin{align} \label{EqnInduction} \ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy}]}{0} & = \sum_{\tau=1}^{\timestep} \discount^{\tau-1} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\tau-1} + \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[Q - \Qpi{\policyicy} ]}{\timestep} \qquad \mbox{for all $\timestep = 1, 2, \ldots$.} \end{align} For the base case $\timestep = 1$, we have \begin{align} \label{EqnBase} \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy}]}{0} = \ensuremath{\mathbb{E}}ecti{[Q - \BellmanEvaluation{\policyicy}Q]}{0} + \ensuremath{\mathbb{E}}ecti{ [\BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{0} & = \ensuremath{\mathbb{E}}ecti{[ Q - \BellmanEvaluation{\policyicy}Q ]}{0} + \discount \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{1}, \end{align} where we have used the definition of the Bellman evaluation operator to assert that $\ensuremath{\mathbb{E}}ecti{ [\BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{0} = \discount \ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{1}$. Since $Q - \BellmanEvaluation{\policyicy}Q = \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}$, the equality~\eqref{EqnBase} is equivalent to the claim~\eqref{EqnInduction} with $\timestep = 1$. Turning to the induction step, we now assume that the claim~\eqref{EqnInduction} holds for some $\timestep \geq 1$, and show that it holds at step $\timestep + 1$. 
By a similar argument, we can write \begin{align*} \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy}]}{\timestep} = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[Q - \BellmanEvaluation{\policyicy}Q + \BellmanEvaluation{\policyicy}Q - \BellmanEvaluation{\policyicy} \Qpi{\policyicy} ]}{\timestep} & = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{[ Q - \BellmanEvaluation{\policyicy}Q ]}{\timestep} + \discount^{\timestep+1}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{\timestep+1} \\ & = \discount^{\timestep}\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\timestep} + \discount^{\timestep+1}\ensuremath{\mathbb{E}}ecti{[ Q - \Qpi{\policyicy} ]}{\timestep+1}. \end{align*} By the induction hypothesis, equality~\eqref{EqnInduction} holds for $\timestep$, and substituting the above equality shows that it also holds at time $\timestep + 1$. Since the equality~\eqref{EqnInduction} holds for all $\timestep$, we can take the limit as $\timestep \rightarrow \infty$; since $Q - \Qpi{\policyicy}$ is uniformly bounded and $\discount < 1$, the final term vanishes in this limit, which yields the claim. \subsection{Proof of \cref{thm:NewPolicyEvaluation}} \label{SecProofNewPolicyEvaluation} The proof relies on showing that the sandwich condition in \cref{EqnSandwich} holds with high probability, and then invoking \cref{prop:Deterministic} to conclude. In the statement of the theorem, we require choosing $\epsilon > 0$ to satisfy the upper bound $\epsilon^2 \precsim \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}$, and then provide an upper bound in terms of $\sqrt{\ensuremath{\rho}(\epsilon,\delta)/n}$. It is equivalent to instead choose $\epsilon$ to satisfy the lower bound $\epsilon^2 \succsim \frac{\ensuremath{\rho}(\epsilon,\delta)}{n}$, and then provide upper bounds proportional to $\epsilon$. For the purposes of the proof, the latter formulation turns out to be more convenient, and we pursue it here.
\\ To streamline notation, let us introduce the shorthand $\inprod{f}{\Diff{\policyicy}(Q)} \defeq \inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu$. For each pair $(Q, \policyicy)$, we then define the random variable \begin{align*} \ZvarShort \defeq \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\Diff{\policyicy}(Q)}{}\big|}{\TestNormaRegularizerEmp{}}. \end{align*} Central to our proof of the theorem is a uniform bound on this random variable, one that holds for all pairs $(Q, \policyicy)$. In particular, our strategy is to exhibit some $\epsilon > 0$ for which, upon setting $\ensuremath{\lambda} = 4 \epsilon^2$, we have the guarantees \begin{subequations} \begin{align} \label{EqnSandwichZvar} \frac{1}{4} \leq \frac{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}{ \sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \leq 2 \qquad & \mbox{uniformly for all $f \in \TestFunctionClass{}$, and} \\ \label{EqnUniformZvar} \ZvarShort \leq \epsilon \quad & \mbox{uniformly for all $(Q, \policyicy)$,} \end{align} \end{subequations} both with probability at least $1 - \delta$. In particular, consistent with the theorem statement, we show that this claim holds if we choose $\epsilon > 0$ to satisfy the inequality \begin{align} \label{EqnOriginalChoice} \epsilon^2 & \geq \bar{c} \frac{\ensuremath{\rho}(\epsilon, \delta)}{n} \; \end{align} where $\bar{c} > 0$ is a sufficiently large (but universal) constant. Supposing that the bounds~\eqref{EqnSandwichZvar} and~\eqref{EqnUniformZvar} hold, let us now establish the set inclusions claimed in the theorem. 
\paragraph{Inclusion $\PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}(\epsilon)$:} Define the random variable $\ensuremath{M}_n(Q, \policyicy) \defeq \sup \limits_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ |\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}^\policyicy(Q)}{\mu}|}{\TestNormaRegularizerEmp{}}$, and observe that $Q \in \PopulationFeasibleSetInfty{\policyicy}$ implies that $\ensuremath{M}_n(Q, \policyicy) = 0$. With this definition, we have \begin{align*} \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} & \stackrel{(i)}{\leq} \ensuremath{M}_n(Q, \policyicy) + \ZvarShort \; \stackrel{(ii)}{\leq} \epsilon \end{align*} where step (i) follows from the triangle inequality; and step (ii) follows since $\ensuremath{M}_n(Q, \policyicy) = 0$, and $\ZvarShort \leq \epsilon$ from the bound~\eqref{EqnUniformZvar}. Consequently, any $Q \in \PopulationFeasibleSetInfty{\policyicy}$ satisfies the constraint defining $\EmpiricalFeasibleSet{\policyicy}(\epsilon)$, which establishes the claimed inclusion.
\paragraph{Inclusion $\EmpiricalFeasibleSet{\policyicy}(\epsilon) \subseteq \PopulationFeasibleSet{\policyicy}(4 \epsilon)$:} By the definition of $\PopulationFeasibleSet{\policyicy}(4 \epsilon)$, we need to show that \begin{align*} \ensuremath{\bar{\ensuremath{M}}}(Q, \policyicy) \defeq \sup \limits_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}^\policyicy(Q)}{\mu}\big|}{\TestNormaRegularizerPop{}} \leq 4 \epsilon \qquad \mbox{for any $Q \in \EmpiricalFeasibleSet{\policyicy}(\epsilon)$.} \end{align*} Now we have \begin{align*} \ensuremath{\bar{\ensuremath{M}}}(Q, \policyicy) \stackrel{(i)}{\leq} 2 \ensuremath{M}_n(Q, \policyicy) \stackrel{(ii)}{\leq} 2 \left \{ \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} + \ZvarShort \right \} \; \stackrel{(iii)}{\leq} 2 \big \{ \epsilon + \epsilon \big \} \; = \; 4 \epsilon, \end{align*} where step (i) follows from the sandwich relation~\eqref{EqnSandwichZvar}; step (ii) follows from the triangle inequality and the definition of $\ZvarShort$; and step (iii) follows since $\ZvarShort \leq \epsilon$ from the bound~\eqref{EqnUniformZvar}, and \begin{align*} \sup_{\TestFunction{} \in \TestFunctionClass{\policyicy} } \frac{ \big| \innerprodweighted{\TestFunction{}}{\delta^{\policyicy}(Q)}{n}\big|}{\TestNormaRegularizerEmp{}} & \leq \epsilon, \qquad \mbox{using the inclusion $Q \in \EmpiricalFeasibleSet{\policyicy}(\epsilon)$.} \end{align*} Consequently, the remainder of our proof is devoted to establishing the claims~\eqref{EqnSandwichZvar} and~\eqref{EqnUniformZvar}. In doing so, we make repeated use of some Bernstein bounds, stated in terms of the shorthand $\ensuremath{\Psi_\numobs}(\delta) = \frac{\log(n/\delta)}{n}$.
\begin{lemma} \label{LemBernsteinBound} There is a universal constant $c$ such that each of the following statements holds with probability at least $1 - \delta$. For any $f$, we have \begin{subequations} \begin{align} \label{EqnBernsteinFsquare} \Big| \|f\|_n^2 - \|f\|_\mu^2 \Big| & \leq c \; \Big \{ \|f\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \ensuremath{\Psi_\numobs}(\delta) \Big \}, \end{align} and for any $(Q, \policyicy)$ and any function $f$, we have \begin{align} \label{EqnBernsteinBound} \big|\inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu \big| & \leq c \; \Big \{ \|f\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \|f\|_\infty \ensuremath{\Psi_\numobs}(\delta) \Big \}. \end{align} \end{subequations} \end{lemma} \noindent These bounds follow by identifying a martingale difference sequence, and applying a form of Bernstein's inequality tailored to the martingale setting. See Section~\ref{SecProofLemBernsteinBound} for the details. \subsection{Proof of the sandwich relation~\eqref{EqnSandwichZvar}} We claim that (modulo the choice of constants) it suffices to show that \begin{align} \label{EqnCleanSandwich} \Big| \|f\|_n - \|f\|_\mu \Big| & \leq \epsilon \qquad \mbox{uniformly for all $f \in \TestFunctionClass{}$.} \end{align} Indeed, when this bound holds, we have \begin{align*} \|f\|_n + 2 \epsilon \leq \|f\|_\mu + 3 \epsilon \leq \frac{3}{2} \{ \|f\|_\mu + 2\epsilon \}, \quad \mbox{and} \quad \|f\|_n + 2 \epsilon \geq \|f\|_\mu + \epsilon \geq \frac{1}{2} \big \{ \|f\|_\mu + 2 \epsilon \big \}, \end{align*} so that $\frac{\|f\|_\mu + 2 \epsilon}{\|f \|_n + 2 \epsilon} \in \big[ \frac{1}{2}, \frac{3}{2} \big]$. To relate this statement to the claimed sandwich, observe the inclusion $\frac{\|f\| + 2 \epsilon}{\sqrt{\|f\|^2 + 4 \epsilon^2}} \in [1, \sqrt{2}]$, where $\|f\|$ can be either $\|f\|_n$ or $\|f\|_\mu$.
Combining this fact with our previous bound, we see that $\frac{\sqrt{\|f\|_n^2 + 4 \epsilon^2}}{\sqrt{\|f\|_\mu^2 + 4 \epsilon^2}} \in \Big[ \frac{1}{\sqrt{2}} \frac{1}{2}, \frac{3 \sqrt{2}}{2} \Big] \subset \big[\frac{1}{4}, 3 \big]$, as claimed. \\ The remainder of our analysis is focused on proving the bound~\eqref{EqnCleanSandwich}. Defining the random variable $Y_n(\ensuremath{f}) = \big| \|f\|_n - \|f\|_\mu \big|$, we need to establish a high probability bound on $\sup_{f \in \TestFunctionClass{}} Y_n(\ensuremath{f})$. Let $\{f^1, \ldots, f^N \}$ be an $\epsilon$-cover of $\TestFunctionClass{}$ in the sup-norm. For any $f \in \TestFunctionClass{}$, we can find some $f^j$ such that $\|f - f^j\|_\infty \leq \epsilon$, whence \begin{align*} Y_n(f) \leq Y_n(f^j) + \big | Y_n(f^j) - Y_n(f) \big| & \stackrel{(i)}{\leq} Y_n(f^j) + \big| \|f^j\|_n - \|f\|_n \big| + \big| \|f^j\|_\mu - \|f\|_\mu \big| \\ & \stackrel{(ii)}{\leq} Y_n(f^j) + \|f^j - f\|_n + \|f^j - f\|_\mu \\ & \stackrel{(iii)}{\leq} Y_n(f^j) + 2 \epsilon, \end{align*} where steps (i) and (ii) follow from the triangle inequality; and step (iii) follows from the inequality $\max \{ \|f^j - f\|_n, \|f^j - f\|_\mu \} \leq \|f^j - f\|_\infty \leq \epsilon$. Thus, we have reduced the problem to bounding a finite maximum. Note that if $\max \{ \|f^j\|_n, \|f^j\|_\mu \} \leq \epsilon$, then we have $Y_n(f^j) \leq 2 \epsilon$ by the triangle inequality. Otherwise, we may assume that $\|f^j\|_n + \|f^j\|_\mu \geq \epsilon$.
With probability at least $1 - \delta$, we have \begin{align*} \Big| \|f^j\|_n - \|f^j\|_\mu \Big| = \frac{ \Big| \|f^j\|_n^2 - \|f^j\|_\mu^2 \Big|}{\|f^j\|_n + \|f^j\|_\mu} & \stackrel{(i)}{\leq} \frac{ c \big \{ \|f^j\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \ensuremath{\Psi_\numobs}(\delta) \big \}}{\|f^j\|_\mu + \|f^j\|_n} \\ & \stackrel{(ii)}{\leq} c \Big \{ \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \frac{\ensuremath{\Psi_\numobs}(\delta)}{\epsilon} \Big \}, \end{align*} where step (i) follows from the Bernstein bound~\eqref{EqnBernsteinFsquare} from Lemma~\ref{LemBernsteinBound}, and step (ii) uses the fact that $\|f^j\|_n + \|f^j\|_\mu \geq \epsilon$. Taking a union bound over all $N$ elements in the cover and replacing $\delta$ with $\delta/N$, we have \begin{align*} \max_{j \in [N]} Y_n(f^j) & \leq c \Big \{ \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \frac{\ensuremath{\Psi_\numobs}(\delta/N)}{\epsilon} \Big \} \end{align*} with probability at least $1 - \delta$. Recalling that $N = N_\epsilon(\TestFunctionClass{})$, our choice~\eqref{EqnOriginalChoice} of $\epsilon$ ensures that $\sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} \leq c \; \epsilon$ for some universal constant $c$. Putting together the pieces (and increasing the constant $\bar{c}$ in the choice~\eqref{EqnOriginalChoice} of $\epsilon$ as needed) yields the claim. \subsection{Proof of the uniform upper bound~\eqref{EqnUniformZvar}} \label{SecUniProof} We need to establish an upper bound on $\ZvarShort$ that holds uniformly for all $(Q, \policyicy)$. Our first step is to prove a high probability bound for a fixed pair. We then apply a standard discretization argument to make it uniform in the pair. Note that we can write $\ZvarShort = \sup_{f \in \TestFunctionClass{}} \frac{V_n(f)}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}$, where we have defined $V_n(f) \defeq |\inprod{f}{\Diff{\policyicy}(Q)}|$.
Our first lemma provides a uniform bound on the latter random variables: \begin{lemma} \label{LemVbound} Suppose that $\epsilon^2 \geq \ensuremath{\Psi_\numobs} \big(\delta/N_\epsilon(\TestFunctionClass{}) \big)$. Then we have \begin{align} \label{EqnVbound} V_n(f) & \leq c \big \{ \|f\|_\mu \epsilon + \epsilon^2 \big \} \qquad \mbox{for all $f \in \TestFunctionClass{}$} \end{align} with probability at least $1 - \delta$. \end{lemma} \noindent See \cref{SecProofLemVbound} for the proof of this claim. \\ We claim that the bound~\eqref{EqnVbound} implies that, for any fixed pair $(Q, \policyicy)$, we have \begin{align*} Z_n(Q, \policyicy) \leq c' \epsilon \qquad \mbox{with probability at least $1 - \delta$.} \end{align*} Indeed, when Lemma~\ref{LemVbound} holds, for any $f \in \TestFunctionClass{}$, we can write \begin{align*} \frac{V_n(f)}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} = \frac{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \; \frac{V_n(f)}{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \stackrel{(i)}{\leq} \; 3 \; \frac{c \big \{ \|f\|_\mu \epsilon + \epsilon^2 \big \}}{\sqrt{\|f\|_\mu^2 + \ensuremath{\lambda}}} \; \stackrel{(ii)}{\leq} \; c' \epsilon, \end{align*} where step (i) uses the sandwich relation~\eqref{EqnSandwichZvar}, along with the bound~\eqref{EqnVbound}; and step (ii) follows given the choice $\ensuremath{\lambda} = 4 \epsilon^2$. We have thus proved that for any fixed $(Q, \policyicy)$ and $\epsilon^2 \geq \ensuremath{\Psi_\numobs} \big(\delta/N_\epsilon(\TestFunctionClass{}) \big)$, we have \begin{align} \label{EqnFixZvarBound} \ZvarShort & \leq c' \epsilon \qquad \mbox{with probability at least $1 - \delta$.} \end{align} Our next step is to upgrade this bound to one that is uniform over all pairs $(Q, \policyicy)$. We do so via a discretization argument: let $\{Q^j\}_{j=1}^J$ and $\{\pi^k\}_{k=1}^K$ be $\epsilon$-coverings of $\Qclass{}$ and $\PolicyClass$, respectively.
\begin{lemma} \label{LemDiscretization} We have the upper bound \begin{align} \sup_{Q, \policyicy} \ZvarShort & \leq \max_{(j,k) \in [J] \times [K]} \Zvar{Q^j}{\policyicy^k} + 4 \epsilon. \end{align} \end{lemma} \noindent See Section~\ref{SecProofLemDiscretization} for the proof of this claim. If we replace $\delta$ with $\delta/(J K)$, then we are guaranteed that the bound~\eqref{EqnFixZvarBound} holds uniformly over the family $\{ Q^j \}_{j=1}^J \times \{ \policyicy^k \}_{k=1}^K$. Recalling that $J = N_\epsilon(\Qclass{})$ and $K = N_\epsilon(\Pi)$, we conclude that for any $\epsilon$ satisfying the inequality~\eqref{EqnOriginalChoice}, we have $\sup_{Q, \policyicy} \ZvarShort \leq \ensuremath{\tilde{c}} \epsilon$ with probability at least $1 - \delta$. (Note that by suitably scaling up $\epsilon$ via the choice of constant $\bar{c}$ in the bound~\eqref{EqnOriginalChoice}, we can arrange for $\ensuremath{\tilde{c}} = 1$, as in the stated claim.) \subsection{Proofs of supporting lemmas} In this section, we collect the proofs of~\cref{LemVbound,LemDiscretization}, which were stated and used in~\Cref{SecUniProof}. \subsubsection{Proof of~\cref{LemVbound}} \label{SecProofLemVbound} We first localize the problem to the class $\ensuremath{\mathcal{F}}(\epsilon) = \{f \in \TestFunctionClass{} \mid \|f\|_\mu \leq \epsilon \}$. In particular, if there exists some $\tilde{f} \in \TestFunctionClass{}$ that violates~\eqref{EqnVbound}, then the rescaled function $f = \epsilon \tilde{f}/\|\tilde{f}\|_\mu$ belongs to $\ensuremath{\mathcal{F}}(\epsilon)$, and satisfies $V_n(f) \geq c \epsilon^2$. Consequently, it suffices to show that $V_n(f) \leq c \epsilon^2$ for all $f \in \ensuremath{\mathcal{F}}(\epsilon)$. Choose an $\epsilon$-cover of $\TestFunctionClass{}$ in the sup-norm with $N = N_\epsilon(\TestFunctionClass{})$ elements. Using this cover, for any $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we can find some $f^j$ such that $\|f - f^j\|_\infty \leq \epsilon$.
Thus, for any $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we can write \begin{align} \label{EqnOriginalInequality} V_n(f) \leq V_n(f^j) + V_n(f - f^j) \; \; \leq \underbrace{V_n(f^j)}_{T_1} + \underbrace{\sup_{g \in \ensuremath{\mathcal{G}}(\epsilon)} V_n(g)}_{T_2}, \end{align} where $\ensuremath{\mathcal{G}}(\epsilon) \defeq \{ f_1 - f_2 \mid f_1, f_2 \in \TestFunctionClass{}, \|f_1 - f_2 \|_\infty \leq \epsilon \}$. We bound each of these two terms in turn. In particular, we show that each of $T_1$ and $T_2$ is upper bounded by $c \epsilon^2$ with high probability. \paragraph{Bounding $T_1$:} From the Bernstein bound~\eqref{EqnBernsteinBound}, we have \begin{align*} V_n(f^k) & \leq c \big \{ \|f^k\|_\mu \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \|f^k\|_\infty \ensuremath{\Psi_\numobs}(\delta/N) \big \} \qquad \mbox{for all $k \in [N]$} \end{align*} with probability at least $1 - \delta$. Now for the particular $f^j$ chosen to approximate $f \in \ensuremath{\mathcal{F}}(\epsilon)$, we have \begin{align*} \|f^j\|_\mu & \leq \|f^j - f\|_\mu + \|f\|_\mu \leq 2 \epsilon, \end{align*} where the inequality follows since $\|f^j - f\|_\mu \leq \|f^j - f\|_\infty \leq \epsilon$, and $\|f\|_\mu \leq \epsilon$. Consequently, we conclude that \begin{align*} T_1 & \leq c \Big \{ 2 \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta/N)} + \ensuremath{\Psi_\numobs}(\delta/N) \Big \} \; \leq \; c' \epsilon^2 \qquad \mbox{with probability at least $1 - \delta$,} \end{align*} where the final inequality follows from our choice of $\epsilon$. \paragraph{Bounding $T_2$:} Define $\ensuremath{\mathcal{G}} \defeq \{f_1 - f_2 \mid f_1, f_2 \in \TestFunctionClass{} \}$. We need to bound a supremum of the process $\{ V_n(g), g \in \ensuremath{\mathcal{G}} \}$ over the subset $\ensuremath{\mathcal{G}}(\epsilon)$.
From the Bernstein bound~\eqref{EqnBernsteinBound}, the increments $V_n(g_1) - V_n(g_2)$ of this process are sub-Gaussian with parameter $\|g_1 - g_2\|_\mu \leq \|g_1 - g_2\|_\infty$, and sub-exponential with parameter $\|g_1 - g_2\|_\infty$. Therefore, we can apply a chaining argument that uses the metric entropy $\log N_t(\ensuremath{\mathcal{G}})$ in the supremum norm. We can terminate the chaining at $2 \epsilon$, because we are taking the supremum over the subset $\ensuremath{\mathcal{G}}(\epsilon)$, which has sup-norm diameter at most $2 \epsilon$. Similarly, the lower limit of the chaining integral can be taken to be $2 \epsilon^2$, since our goal is to prove an upper bound of this order. Then, by using high-probability bounds for the suprema of empirical processes (e.g., Theorem 5.36 in the book~\cite{wainwright2019high}), we have \begin{align*} T_2 \leq c_1 \; \int_{2 \epsilon^2}^{2 \epsilon} \ensuremath{\phi} \big( \frac{\log N_t(\ensuremath{\mathcal{G}})}{n} \big) dt + c_2 \big \{ \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \epsilon \ensuremath{\Psi_\numobs}(\delta) \big\} + 2 \epsilon^2 \end{align*} with probability at least $1 - \delta$. (Here the reader should recall our shorthand $\ensuremath{\phi}(s) = \max \{s, \sqrt{s} \}$.) Since $\ensuremath{\mathcal{G}}$ consists of differences from $\TestFunctionClass{}$, we have the upper bound $\log N_t(\ensuremath{\mathcal{G}}) \leq 2 \log N_{t/2}(\TestFunctionClass{})$, and hence (after making the change of variable $u = t/2$ in the integral) \begin{align*} T_2 \leq c'_1 \int_{\epsilon^2}^{ \epsilon} \ensuremath{\phi} \big( \frac{\log N_u(\TestFunctionClass{})}{n} \big) du + c_2 \big \{ \epsilon \sqrt{\ensuremath{\Psi_\numobs}(\delta)} + \epsilon \ensuremath{\Psi_\numobs}(\delta) \big\} \; \leq \; \ensuremath{\tilde{c}} \epsilon^2, \end{align*} where the last inequality follows from our choice of $\epsilon$.
\subsubsection{Proof of~\cref{LemDiscretization}} \label{SecProofLemDiscretization} By our choice of the $\epsilon$-covers, for any $(Q, \policyicy)$, there is a pair $(Q^j, \policyicy^k)$ such that \begin{align*} \|Q^j - Q\|_\infty \leq \epsilon, \quad \mbox{and} \quad \|\policyicy^k - \policyicy\|_{\infty,1} = \sup_{\state} \|\policyicy^k(\cdot \mid \state) - \policyicy(\cdot \mid \state) \|_1 \leq \epsilon. \end{align*} Using this pair, an application of the triangle inequality yields \begin{align*} \big| \Zvar{Q}{\policyicy} - \Zvar{Q^j}{\policyicy^k} \big| & \leq \underbrace{\big| \Zvar{Q}{\policyicy} - \Zvar{Q}{\policyicy^k} \big|}_{T_1} + \underbrace{\big| \Zvar{Q}{\policyicy^k} - \Zvar{Q^j}{\policyicy^k} \big|}_{T_2}. \end{align*} We bound each of these terms in turn, in particular proving that $T_1 + T_2 \leq 24 \epsilon$. Putting together the pieces yields the bound stated in the lemma. \paragraph{Bounding $T_2$:} From the definition of $\ensuremath{Z_\numobs}$, we have \begin{align*} T_2 = \big| \Zvar{Q}{\policyicy^k} - \Zvar{Q^j}{\policyicy^k} \big| & \leq \sup_{f \in \TestFunctionClass{}} \frac{\big| \smlinprod{f}{\Diff{\policyicy^k}(Q - Q^j)}|}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}}. \end{align*} Now another application of the triangle inequality yields \begin{align*} |\smlinprod{f}{\Diff{\policyicy^k}(Q - Q^j)}| & \leq |\smlinprod{f}{\TDError{(Q- Q^j)}{\policyicy^k}{}}_n| + |\smlinprod{f}{\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)}_\mu| \\ & \leq \|f\|_n \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_n + \|f\|_\mu \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\mu \\ & \leq \max \{ \|f\|_n, \|f\|_\mu \} \; \Big \{ \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_\infty + \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\infty \Big \}, \end{align*} where the second inequality follows from the Cauchy--Schwarz inequality.
Now in terms of the shorthand \mbox{$\Delta \defeq Q - Q^j$,} we have \begin{subequations} \begin{align} \label{EqnCoffeeBell} \|\ensuremath{\mathcal{B}}^{\policyicy^k}(Q- Q^j)\|_\infty & = \sup_{\psa} \Big| \Delta \psa - \discount \E_{\successorstate\sim\Pro\psa} \big[ \Delta(\successorstate,\policyicy) \big] \Big| \leq 2 \|\Delta\|_\infty \leq 2 \epsilon. \end{align} An entirely analogous argument yields \begin{align} \label{EqnCoffeeTD} \|\TDError{(Q- Q^j)}{\policyicy^k}{}\|_\infty \leq 2 \epsilon. \end{align} \end{subequations} On the event that the sandwich relation~\eqref{EqnSandwichZvar} holds, we have $\sup_{f \in \TestFunctionClass{}} \frac{\max \{ \|f\|_n, \|f\|_\mu \}}{\sqrt{\|f\|_n^2 + \ensuremath{\lambda}}} \leq 4$. Combining this bound with inequalities~\eqref{EqnCoffeeBell} and~\eqref{EqnCoffeeTD}, we have shown that $T_2 \leq 4 \big \{2 \epsilon + 2 \epsilon \big \} = 16 \epsilon$. \paragraph{Bounding $T_1$:} In this case, a similar argument yields \begin{align*} |\smlinprod{f}{(\Diff{\policyicy} - \Diff{\policyicy^k})(Q)}| & \leq \max \{ \|f\|_n, \|f\|_\mu \} \; \Big \{ \|(\delta^\policyicy - \delta^{\policyicy^k})(Q)\|_n + \|(\ensuremath{\mathcal{B}}^\policyicy - \ensuremath{\mathcal{B}}^{\policyicy^k})(Q)\|_\mu \Big \}. \end{align*} Now we have \begin{align*} \|(\delta^\policyicy - \delta^{\policyicy^k})(Q)\|_n & \leq \max_{i = 1, \ldots, n} \Big| \sum_{\action'} \big(\policyicy(\action' \mid \state_i) - \policyicy^k(\action' \mid \state_i) \big) Q(\state^+_i, \action') \Big| \\ & \leq \max_{\state} \sum_{\action'} |\policyicy(\action' \mid \state) - \policyicy^k(\action' \mid \state)| \; \|Q\|_\infty \\ & \leq \epsilon. \end{align*} A similar argument yields that $\|(\ensuremath{\mathcal{B}}^\policyicy - \ensuremath{\mathcal{B}}^{\policyicy^k})(Q)\|_\mu \leq \epsilon$, and arguing as before, we conclude that $T_1 \leq 4 \{\epsilon + \epsilon \} = 8 \epsilon$.
\subsubsection{Proof of Lemma~\ref{LemBernsteinBound}} \label{SecProofLemBernsteinBound} Our proof of this claim makes use of the following known Bernstein bound for martingale differences (cf. Theorem 1 in the paper~\cite{beygelzimer2011contextual}). Recall the shorthand notation $\ensuremath{\Psi_\numobs}(\delta) = \frac{\log(n/\delta)}{n}$. \begin{lemma}[Bernstein's Inequality for Martingales] \label{lem:Bernstein} Let $\{X_t\}_{t \geq 1}$ be a martingale difference sequence with respect to the filtration $\{ \mathcal F_t \}_{t \geq 1}$. Suppose that $|X_t| \leq 1$ almost surely, and let $\E_t$ denote expectation conditional on $\mathcal F_t$. Then for all $\delta \in (0,1)$, we have \begin{align} \Big| \frac{1}{n} \sum_{t=1}^n X_t \Big| & \leq 2 \Big[ \Big(\frac{1}{n} \sum_{t=1}^n \E_t X^2_t \Big) \ensuremath{\Psi_\numobs}(2 \delta) \Big]^{1/2} + 2 \ensuremath{\Psi_\numobs}(2 \delta) \end{align} with probability at least $1 - \delta$. \end{lemma} \noindent With this result in place, we divide our proof into two parts, corresponding to the two claims~\eqref{EqnBernsteinBound} and~\eqref{EqnBernsteinFsquare} stated in~\cref{LemBernsteinBound}. \paragraph{Proof of the bound~\eqref{EqnBernsteinBound}:} Recall that at step $i$, the triple $(\state,\action,\identifier)$ is drawn according to a conditional distribution $\mu_i(\cdot \mid \mathcal F_i)$. Similarly, we let $d_i$ denote the distribution of $\sarsi{}$ conditioned on the filtration $\mathcal F_i$. Note that $\mu_i$ is obtained from $d_i$ by marginalizing out the pair $(\reward, \successorstate)$. Moreover, by the tower property of expectation, the Bellman error coincides with the conditional expectation of the TD error.
Using these facts, we have the chain of equalities \begin{align*} \innerprodweighted{\TestFunction{}} {\TDError{Q}{\policyicy}{}} {d_{i}} & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunctionDefCompact{}{}[\TDErrorDefCompact{Q}{\policyicy}{}]\big\}} {d_{i}} \\ & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunction{}(\state,\action,\identifier) \, \ensuremath{\mathbb{E}}ecti{[\TDErrorDefCompact{Q}{\policyicy}{}]} {\substack{\reward \sim R\psa, \successorstate\sim\Pro\psa}} \big\}} {(\state,\action,\identifier) \sim \mu_{i}} \\ & = \ensuremath{\mathbb{E}}ecti{\big\{\TestFunction{}(\state,\action,\identifier) [\ensuremath{\mathcal{B}}orDefCompact{Q}{\policyicy}{}]\big\}} {(\state,\action,\identifier) \sim \mu_{i}} \\ & = \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu_{i}}. \end{align*} As a consequence, we can write $\inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu = \frac{1}{n} \sum_{i=1}^n \ensuremath{W}_i$, where \begin{align*} \ensuremath{W}_i & \defeq \TestFunctionDefCompact{}{i} [\TDErrorDefCompact{Q}{\policyicy}{i}] - \ensuremath{\mathbb{E}}ecti{\big\{\TestFunctionDefCompact{}{}[\TDErrorDefCompact{Q}{\policyicy}{}]\big\}}{d_{i}} \end{align*} defines a martingale difference sequence (MDS). Thus, we can prove the claim by applying a Bernstein martingale inequality. Since $\|r\|_\infty \leq 1$ and $\|Q\|_\infty \leq 1$ by assumption, we have $\|\ensuremath{W}_i\|_\infty \leq 3 \|f\|_\infty$, and \begin{align*} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{d_i} [\ensuremath{W}_i^2] & \leq 9 \; \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_{i}}[f^2(\state_{i},\action_{i}, o_{i})] \; = \; 9 \|f\|_\mu^2. \end{align*} Consequently, the claimed bound~\eqref{EqnBernsteinBound} follows by applying the Bernstein bound stated in \cref{lem:Bernstein}.
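As a numerical sanity check of the tower-property identity above (not part of the proof), the following Python sketch builds a small random MDP and verifies that the empirical inner product $\langle f, \delta^\pi(Q)\rangle_n$ concentrates around the population quantity $\langle f, \mathcal{B}^\pi(Q)\rangle_\mu$. The random MDP, the i.i.d. sampling scheme (in place of the adaptive conditional distributions $\mu_i$), and the deterministic rewards are all illustrative assumptions of our own.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))        # P[s, a] = distribution of s'
R = rng.uniform(0, 1, size=(S, A))                # deterministic rewards r(s, a)
Q = rng.uniform(0, 1, size=(S, A))
pi = rng.dirichlet(np.ones(A), size=S)            # target policy pi(a | s)
f = rng.normal(size=(S, A))                       # a fixed test function
mu = rng.dirichlet(np.ones(S * A)).reshape(S, A)  # data distribution over (s, a)

Qpi = (pi * Q).sum(axis=1)                        # Q(s', pi) = E_{a' ~ pi}[Q(s', a')]
bellman = Q - R - gamma * P @ Qpi                 # Bellman error B(Q)(s, a)

# Monte Carlo: the empirical <f, delta(Q)>_n should concentrate around <f, B(Q)>_mu
n = 500_000
idx = rng.choice(S * A, size=n, p=mu.ravel())
s, a = np.unravel_index(idx, (S, A))
cum = np.cumsum(P[s, a], axis=1)                  # sample s' ~ P(. | s, a)
sp = (rng.random(n)[:, None] > cum).sum(axis=1)
sp = np.minimum(sp, S - 1)                        # guard against rounding in cumsum
td = Q[s, a] - R[s, a] - gamma * Qpi[sp]          # TD errors delta(Q)
lhs = (f[s, a] * td).mean()                       # empirical inner product
rhs = (mu * f * bellman).sum()                    # population inner product
assert abs(lhs - rhs) < 2e-2
```

The two inner products agree up to Monte Carlo error of order $1/\sqrt{n}$, exactly as the martingale decomposition predicts.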
\paragraph{Proof of the bound~\eqref{EqnBernsteinFsquare}:} In this case, we have the additive decomposition \begin{align*} \|f\|_n^2 - \|f\|_\mu^2 & = \frac{1}{n} \sum_{i=1}^n \Big \{ \underbrace{f^2(\state_{i},\action_{i},o_{i}) - \ensuremath{\mathbb{E}}_{\mu_{i}}[f^2(\state, \action, o)]}_{\ensuremath{\martone'}_i} \Big\}, \end{align*} where $\{\ensuremath{\martone'}_i\}_{i=1}^n$ again defines a martingale difference sequence. Note that $\|\ensuremath{\martone'}_i\|_\infty \leq 2 \|f\|_\infty^2 \leq 2$, and \begin{align*} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} [(\ensuremath{\martone'}_i)^2] & \stackrel{(i)}{\leq} \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} \big[ f^4(S, A, \ensuremath{O}) \big] \; \leq \; \|f\|_\infty^2 \frac{1}{n} \sum_{i=1}^n \ensuremath{\mathbb{E}}_{\mu_i} \big[f^2(S, A, \ensuremath{O}) \big] \; \stackrel{(ii)}{\leq} \; \|f\|_\mu^2, \end{align*} where step (i) uses the fact that the variance of $f^2$ is at most the fourth moment of $f$, and step (ii) uses the bound $\|f\|_\infty \leq 1$. Consequently, the claimed bound~\eqref{EqnBernsteinFsquare} follows by applying the Bernstein bound stated in \cref{lem:Bernstein}. \tableofcontents \section{Additional Discussion and Results} \input{3p1p1-BRO} \subsection{Comparison with Weight Learning Methods} \label{sec:WeightLearning} The work closest to ours is \cite{jiang2020minimax}. They also use an auxiliary weight function class, which is comparable to our test class. However, the test class is used in different ways; we compare the two approaches in this section at the population level.\footnote{ The empirical estimator in \cite{jiang2020minimax} does not take into account the `alignment' of each weight function with respect to the dataset, which we do through self-normalization and regularization in the construction of the empirical estimator.
This precludes obtaining the same type of strong finite-time guarantees that we are able to derive here.} Let us assume that weak realizability holds and that $\TestFunctionClass{}$ is symmetric, i.e., if $\TestFunction{} \in \TestFunctionClass{}$ then $-\TestFunction{} \in \TestFunctionClass{}$ as well. At the population level, our program seeks to solve \begin{align} \label{eqn:PopProgComparison} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} \quad \text{s.t. } \quad \sup_{\TestFunction{} \in \TestFunctionClass{}} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0, \end{align} which is equivalent, for any $w \in \TestFunctionClass{}$, to \begin{align*} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} \quad \text{s.t. } \quad \sup_{\TestFunction{} \in \TestFunctionClass{}} \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = 0. \end{align*} Removing the constraints leads to the upper bound \begin{align*} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} Since this is a valid upper bound for any $w \in \TestFunctionClass{}$, minimizing over $w$ must still yield an upper bound, which reads \begin{align*} \inf_{w \in \TestFunctionClass{}}\sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \frac{1}{1-\discount} \innerprodweighted{w}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} This is the population program for ``weight learning'', as described in \cite{jiang2020minimax}.
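The chain of relaxations above can be checked numerically on a toy finite problem. In the following Python sketch, each predictor is represented (an illustrative construction of our own, not the paper's formalism) by a scalar value $v_Q$ and a Bellman-error profile $b_Q$ in a feature space, with a symmetric test class; the check confirms that the ``weight learning'' relaxation is always an upper bound on the constrained program.

```python
import numpy as np

rng = np.random.default_rng(3)
nQ, nF, M = 40, 8, 5          # sizes of predictor class, test class, feature dim
v = rng.normal(size=nQ)       # v_Q = E_{s ~ nu_start}[Q(s, pi)] for each Q
b = rng.normal(size=(nQ, M))  # "Bellman error profile" of each Q (placeholder)
b[:5] = 0.0                   # a few Q's satisfy all weak Bellman constraints
F = rng.normal(size=(nF, M))  # test/weight functions, in the same feature space
F = np.vstack([F, -F])        # symmetrize the test class
gamma = 0.9

# constrained program: maximize v_Q subject to <f, B(Q)>_mu = 0 for all f
feasible = np.all(F @ b.T <= 1e-12, axis=0)   # with symmetric F this forces = 0
constrained = v[feasible].max()

# weight-learning bound: inf over w in the test class of the unconstrained relaxation
relaxed = np.min([(v - (F[k] @ b.T) / (1 - gamma)).max() for k in range(2 * nF)])
assert constrained <= relaxed + 1e-9
```

For every feasible $Q$ the penalty term vanishes, so each unconstrained maximum dominates the constrained optimum, and hence so does the infimum over $w$; this mirrors the argument in the text.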
It follows that Bellman residual orthogonalization always produces tighter confidence intervals than ``weight learning'' at the population level. Another interesting comparison is with ``value learning'', also described in \cite{jiang2020minimax}. In this case, assuming symmetric $\TestFunctionClass{}$, we can equivalently express the population program \eqref{eqn:PopProgComparison} using a Lagrange multiplier as follows \begin{align} \label{eqn:LagProgComparison} \sup_{Q \in \Qclass{\policyicy}} & \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \sup_{\lambda \geq 0 ,\TestFunction{} \in \TestFunctionClass{}} \lambda\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align} Rearranging, we obtain \begin{align*} \sup_{Q \in \Qclass{\policyicy}} \inf_{\lambda \geq 0 ,\TestFunction{} \in \TestFunctionClass{}}& \ensuremath{\mathbb{E}}ecti{Q(\state,\policyicy)}{\state \sim \nu_{\text{start}}} - \lambda\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}. \end{align*} The ``value learning'' program proposed in \cite{jiang2020minimax} has a similar formulation to ours but differs in two key aspects. The first---and most important---is that \cite{jiang2020minimax} ignores the Lagrange multiplier; this means that ``value learning'' is no longer associated with a constrained program. While the Lagrange multiplier could be ``incorporated'' into the test class $\TestFunctionClass{}$, doing so would cause the entropy of $\TestFunctionClass{}$ to be unbounded. Another point of difference is that ``value learning'' uses this expression with $\lambda = 1$ to derive the confidence interval \emph{lower bound}, while we use it to construct the confidence interval \emph{upper bound}.
While this may seem like a contradiction, we notice that the expression is derived using different assumptions: we assume weak realizability of $Q$, while \cite{jiang2020minimax} assumes realizability of the density ratios between $\mu$ and the discounted occupancy measure of $\policyicy$. \subsection{Additional Literature} \label{sec:Literature} Here we summarize some additional literature. The efficiency of off-policy tabular RL has been investigated in the papers~\cite{yin2020near,yin2020asymptotically,yin2021towards}. For empirical studies on offline RL, see the papers~\cite{laroche2019safe,jaques2019way,wu2019behavior,agarwal2020optimistic,wang2020critic,siegel2020keep,nair2020accelerating,yang2021pessimistic,kumar2021should,buckman2020importance,kumar2019stabilizing,kidambi2020morel,yu2020mopo}. Some of the classical RL algorithms are presented in the papers~\cite{munos2003error,munos2005error,antos2007fitted,antos2008learning,farahmand2010error,farahmand2016regularized}. For a more modern analysis, see~\cite{chen2019information}. These works generally make additional assumptions on top of realizability. Alternatively, one can use importance sampling \cite{precup2000eligibility,thomas2016data,jiang2016doubly,farajtabar2018more}. A more recent idea is to model the relevant distributions themselves~\cite{liu2018breaking,nachum2019algaedice,xie2019towards,zhang2020gendice,zhang2020gradientdice,yang2020off,kallus2019efficiently}. Offline policy optimization with pessimism has been studied in the papers~\cite{liu2020provably,rashidinejad2021bridging,jin2021pessimism,xie2021bellman,zanette2021provable,yinnear,uehara2021pessimistic}. There exists a fairly extensive literature on lower bounds with linear representations, including the two papers~\cite{zanette2020exponential,wang2020statistical} that concurrently derived the first exponential lower bounds for the offline setting, while \cite{foster2021offline} proves that realizability and coverage alone are insufficient.
In the context of off-policy optimization, several works have investigated methods that assume only realizability of the optimal policy~\cite{xie2020batch,xie2020Q}. Related work includes the papers~\cite{duan2020minimax,duan2021risk,jiang2020minimax,uehara2020minimax,tang2019doubly,nachum2020reinforcement,voloshin2021minimax,hao2021bootstrapping,zhang2022efficient,uehara2021finite,chen2022well,lee2021model}. Among concurrent works, we note \cite{zhan2022offline}. \subsection{Definition of Weak Bellman Closure} \label{sec:appWeakClosure} \begin{definition}[Weak Bellman Closure] The Bellman operator $\BellmanEvaluation{\policyicy}$ is \emph{weakly closed} with respect to the triple $\big( \Qclass{\policyicy}, \TestFunctionClass{\policyicy}, \mu \big)$ if for any $Q \in \Qclass{\policyicy}$, there exists a predictor $\QpiProj{\policyicy}{Q} \in \Qclass{\policyicy}$ such that \begin{align} \innerprodweighted{\TestFunction{}}{\QpiProj{\policyicy}{Q}}{\mu} = \innerprodweighted{\TestFunction{}} {\BellmanEvaluation{\policyicy}(Q)} {\mu} \qquad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$.} \end{align} \end{definition} \subsection{Additional results on the concentrability coefficients} \label{sec:appConc} \subsubsection{Testing with the identity function} Suppose that the identity function $\1$ belongs to the test class. Including it amounts to requiring that the Bellman error is controlled in an average sense over all the data. When this choice is made, we can derive some generic upper bounds on $K^\policy$, which we state and prove here: \begin{lemma} If $\1 \in \TestFunctionClass{\policyicy}$, then we have the upper bounds \begin{align} \label{eqn:InEq} K^\policy & \stackrel{(i)}{\leq} \frac{\max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolComplexGen{\policyicy}|^2}{ \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolMuComplex|^2} \; \stackrel{(ii)}{\leq} K^\policy_* \defeq \max_{Q \in \PopulationFeasibleSet{\policyicy}} \frac{|\PolComplexGen{\policyicy}|^2}{|\PolMuComplex|^2}.
\end{align} \end{lemma} \begin{proof} Since $\1 \in \TestFunctionClass{}$, the definition of $\PopulationFeasibleSet{\policyicy}$ implies that \begin{align*} \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolMuComplex|^2 & \leq \big( \munorm{\1}^2 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n} = \big( 1 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}. \end{align*} The upper bound (i) then follows from the definition of $K^\policy$. The upper bound (ii) follows since the ratio of the maxima is at most the maximum of the ratios. \end{proof} Note that large values of $K^\policy_*$ (defined in \cref{eqn:InEq}) can arise when there exist $Q$-functions in the set $\PopulationFeasibleSet{\policyicy}$ that have low average Bellman error under the data-generating distribution $\mu$, but relatively large values under $\policyicy$. Of course, the likelihood of such unfavorable choices of $Q$ is reduced when we use a larger test function class, which then reduces the size of $\PopulationFeasibleSet{\policyicy}$. However, we pay a price in choosing a larger test function class, since the choice~\eqref{EqnRadChoice} of the radius $\ensuremath{\rho}$ needed for \cref{thm:NewPolicyEvaluation} depends on its complexity. \subsubsection{Mixture distributions} Now suppose that the dataset consists of a collection of trajectories collected by different protocols. More precisely, for each $j = 1, \ldots, \nConstraints$, let $\Dpi{j}$ be a particular protocol for generating a trajectory. Suppose that we generate data by first sampling a random index $J \in [\nConstraints]$ according to a probability distribution $\{\Frequencies{j} \}_{j=1}^{\nConstraints}$, and conditioned on $J = j$, we sample $(\state,\action,\identifier)$ according to $\Dpi{j}$. The resulting data follows a mixture distribution, where we set $o = j$ to tag the protocol used to generate the data.
To be clear, for each sample $i = 1, \ldots, n$, we sample $J$ as described, and then draw a single sample $(\state,\action,\identifier) \sim \Dpi{j}$. Following the intuition given in the previous section, it is natural to include test functions that code for the protocol---that is, the binary-indicator functions \begin{align} f_j (\state,\action,\identifier) & = \begin{cases} 1 & \mbox{if $o=j$} \\ 0 & \mbox{otherwise.} \end{cases} \end{align} These test functions, when included in the weak formulation, enforce the Bellman evaluation equations for the policy $\policyicy \in \PolicyClass$ under consideration along the distribution induced by each data-generating policy $\Dpi{j}$. \begin{lemma}[Mixture Policy Concentrability] \label{lem:MixturePolicyConcentrability} Suppose that $\mu$ is an $\nConstraints$-component mixture, and that the indicator functions $\{f_j \}_{j=1}^\nConstraints$ are included in the test class. Then we have the upper bounds \begin{align} \label{EqnMixturePolicyUpper} K^\policy & \stackrel{(i)}{\leq} \frac{1 + \nConstraints \ensuremath{\lambda}}{1 + \ensuremath{\lambda}} \; \frac{\max \limits_{Q\in \PopulationFeasibleSet{\policyicy}} [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2} {\max \limits_{Q \in \PopulationFeasibleSet{\policyicy}} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2} \; \stackrel{(ii)}{\leq} \; \frac{1 + \nConstraints \ensuremath{\lambda}}{1 + \ensuremath{\lambda}} \; \max_{Q\in \PopulationFeasibleSet{\policyicy}} \left \{ \frac{ [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2} { \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2} \right \}.
\end{align} \end{lemma} \begin{proof} From the definition of $K^\policy$, it suffices to show that \begin{align*} \max \limits_{Q \in \PopulationFeasibleSet{\policyicy}} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2 \leq \frac{\ensuremath{\rho}}{n} \; \big(1 + \nConstraints \ensuremath{\lambda} \big). \end{align*} A direct calculation yields $\innerprodweighted{\TestFunction{j}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = \ensuremath{\mathbb{E}}ecti{\Indicator \{o = j \} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} = \Frequencies{j} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}$. Moreover, since each $\TestFunction{j}$ belongs to the test class by assumption, we have the upper bound $\Big|\Frequencies{j} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}} \Big| \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \; \sqrt{ \munorm{\TestFunction{j}}^2 + \ensuremath{\lambda}}$. Squaring each term and summing over the constraints yields \begin{align*} \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}}]^2 \leq \frac{\ensuremath{\rho}}{n} \sum_{j=1}^\nConstraints \big( \munorm{\TestFunction{j}}^2 + \ensuremath{\lambda} \big) = \frac{\ensuremath{\rho}}{n} \big(1 + \nConstraints \ensuremath{\lambda} \big), \end{align*} where the final equality follows since $\sum_{j=1}^{\nConstraints} \munorm{\TestFunction{j}}^2 = 1$. \end{proof} As shown by the upper bound, the off-policy coefficient $K^\policy$ provides a measure of how the squared-averaged Bellman errors along the policies $\{ \Dpi{j} \}_{j=1}^\nConstraints$, weighted by their probabilities $\{ \Frequencies{j} \}_{j=1}^{\nConstraints}$, transfer to the evaluation policy $\policyicy$.
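The key identity in the proof above, namely $\langle f_j, \mathcal{B}^\pi(Q)\rangle_\mu = \alpha_j \, \mathbb{E}_{d_j}[\mathcal{B}^\pi(Q)]$ for the indicator test functions, can be checked numerically. In the following Python sketch, the discrete protocol distributions and the Bellman-error values are arbitrary placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3                                    # number of data-generating protocols
alpha = np.array([0.5, 0.3, 0.2])        # mixture weights over protocols
nsa = 6                                  # discrete (s, a) space, for illustration
d = rng.dirichlet(np.ones(nsa), size=m)  # d_j: law of (s, a) given protocol o = j
B = rng.normal(size=(m, nsa))            # Bellman errors B(Q)(s, a, o), arbitrary

mu = alpha[:, None] * d                  # joint law of (s, a, o)
lhs = (mu * B).sum(axis=1)               # <f_j, B(Q)>_mu with f_j = 1{o = j}
rhs = alpha * (d * B).sum(axis=1)        # alpha_j * E_{d_j}[B(Q)]
assert np.allclose(lhs, rhs)

# the indicators partition the sample space, so their mu-norms sum to one
assert np.isclose(mu.sum(), 1.0)
```

The second assertion is the fact $\sum_j \|f_j\|_\mu^2 = \sum_j \alpha_j = 1$ used in the final equality of the proof.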
Since the regularization parameter $\ensuremath{\lambda}$ decays as a function of the sample size---e.g., as $1/n$ in \cref{thm:NewPolicyEvaluation}---the factor $(1 + \nConstraints \TestFunctionReg)/(1 + \TestFunctionReg)$ approaches one as $n$ increases (for a fixed number $\nConstraints$ of mixture components). \subsubsection{Bellman Rank for off-policy evaluation} In this section, we show how more refined bounds can be obtained when---in addition to a mixture condition---additional structure is imposed on the problem. In particular, we consider a notion similar to that of Bellman rank~\cite{jiang17contextual}, but suitably adapted\footnote{ The original definition essentially takes $\widetilde {\PolicyClass}$ as the set of all greedy policies with respect to $\widetilde {\Qclass{}}$. Since a dataset need not originate from greedy policies, the definition of Bellman rank is adapted in a natural way.} to the off-policy setting. Given a policy class $\widetilde {\PolicyClass}$ and a predictor class $\widetilde {\Qclass{}}$, we say that the pair has Bellman rank $\dim$ if there exist two maps $\BRleft{} : \widetilde {\PolicyClass} \rightarrow \R^\dim$ and $\BRright{}: \widetilde {\Qclass{}} \rightarrow \R^\dim$ such that \begin{align} \label{eqn:BellmanRank} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \smlinprod{\BRleft{\policyicy}}{\BRright{Q}}_{\R^d}, \qquad \text{for all } \; \policyicy \in \widetilde {\PolicyClass} \; \text{and} \; Q \in \widetilde {\Qclass{}}. \end{align} In words, the average Bellman error of any predictor $Q$ along any given policy $\policyicy$ can be expressed as the Euclidean inner product between two $\dim$-dimensional vectors, one for the policy and one for the predictor. As in the previous section, we assume that the data is generated by a mixture of $\nConstraints$ different distributions (or equivalently policies) $\{\Dpi{j} \}_{j=1}^\nConstraints$.
In the off-policy setting, we require that the policy class $\widetilde {\PolicyClass}$ contains all of these policies as well as the target policy---viz. $\{\Dpi{j} \} \cup \{ \policyicy \} \subseteq \widetilde {\PolicyClass}$. Moreover, the predictor class $\widetilde {\Qclass{}}$ should contain the predictor class for the target policy, i.e., $\Qclass{\policyicy} \subseteq \widetilde {\Qclass{}}$. We also assume weak realizability for this discussion. Our result depends on a positive semidefinite matrix determined by the mixture weights $\{\Frequencies{j} \}_{j=1}^\nConstraints$ along with the embeddings $\{\BRleft{\Dpi{j}}\}_{j=1}^\nConstraints$ of the associated policies that generated the data. In particular, we define \begin{align*} \BRCovariance{} = \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 \BRleft{\Dpi{j}} \BRleft{\Dpi{j}}^\top. \end{align*} Assuming that this matrix is positive definite,\footnote{If not, one can prove a result for a suitably regularized version.} we define the norm $\|u\|_{\BRCovariance{}^{-1}} = \sqrt{u^\top (\BRCovariance{})^{-1} u}$. With this notation, we have the following bound. \begin{lemma}[Concentrability with Bellman Rank] \label{lem:ConcentrabilityBellmanRank} For a mixture data-generation process and under the Bellman rank condition~\eqref{eqn:BellmanRank}, we have the upper bound \begin{align} K^\policy & \leq \; \frac{1 + \nConstraints \TestFunctionReg}{1 + \TestFunctionReg} \; \norm{\BRleft{\policyicy}}{\BRCovariance{}^{-1}}^2. \end{align} \end{lemma} \begin{proof} Our proof exploits the upper bound (ii) from the claim~\eqref{EqnMixturePolicyUpper} in~\cref{lem:MixturePolicyConcentrability}. We first rewrite the ratio appearing in this upper bound.
Weak realizability coupled with the Bellman rank condition~\eqref{eqn:BellmanRank} implies that there exists some $\QpiWeak{\policyicy}$ such that \begin{align*} 0 & = \innerprodweighted{\TestFunction{j}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\mu} = \Frequencies{j} \ensuremath{\mathbb{E}}ecti {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\Dpi{j}} = \Frequencies{j} \inprod{\BRleft{\Dpi{j}}}{\BRright{\QpiWeak{\policyicy}}}, \qquad \mbox{for all $j = 1, \ldots, \nConstraints$, and} \\ 0 & = \innerprodweighted{\1} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \ensuremath{\mathbb{E}}ecti {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = \inprod{\BRleft{\policyicy}}{\BRright{\QpiWeak{\policyicy}}}. \end{align*} Therefore, we have the identities $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\Dpi{j}} = \inprod{\BRleft{\Dpi{j}}}{ (\BRright{Q} - \BRright{\QpiWeak{\policyicy}})}$ for all $j = 1, \ldots, \nConstraints$, as well as $\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} = \inprod{\BRleft{\policyicy}}{(\BRright{Q} - \BRright{\QpiWeak{\policyicy}})}$.
Introducing the shorthand $\BRDelta{Q} = \BRright{Q} - \BRright{\QpiWeak{\policyicy}}$, we can bound the ratio as follows \begin{align*} \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ (\inprod{\BRleft{\policyicy}}{\BRDelta{Q}})^2 }{ \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 (\inprod{\BRleft{\Dpi{j}}}{ \BRDelta{Q}})^2 } \Big \}& = \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ (\inprod{\BRleft{\policyicy}}{ \BRDelta{Q}})^2 }{\BRDelta{Q}^\top \Big( \sum_{\iConstraint =1}^{\nConstraints} \Frequencies{j}^2 \BRleft{\Dpi{j}} \BRleft{\Dpi{j}}^\top \Big) \BRDelta{Q} } \Big \} \\ & = \sup_{Q \in \PopulationFeasibleSet{\policyicy}} \Big \{ \frac{ ( \smlinprod{\BRleft{\policyicy}}{ \BRCovariance{}^{-\frac{1}{2}} \BRDeltaCoord{Q}})^2 }{ \|\BRDeltaCoord{Q}\|_2^2} \Big \} \qquad \mbox{where $\BRDeltaCoord{Q} = \BRCovariance{}^{\frac{1}{2}}\BRDelta{Q}$}\\ & \leq \norm{\BRleft{\policyicy}}{\BRCovariance{}^{-1}}^2, \end{align*} where the final step follows from the Cauchy--Schwarz inequality. \end{proof} Thus, when performing off-policy evaluation with a mixture distribution under the Bellman rank condition, the coefficient $K^\policy$ is bounded by the alignment between the target policy $\policyicy$ and the data-generating distribution $\mu$, as measured in the embedded space guaranteed by the Bellman rank condition. The structure of this upper bound is similar to a result that we derive in the sequel for linear approximation under Bellman closure (see~\cref{prop:LinearConcentrabilityBellmanClosure}). \subsection{Further comments on the prediction error test space} \label{sec:appBubnov} A few comments on the bound in \cref{lem:PredictionError}: as in our previous results, the pre-factor $\frac{\norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg }$ serves as a normalization factor.
Disregarding this leading term, the second ratio measures how the prediction error $QErr = Q - \QpiWeak{\policyicy} $ along $\mu$ transfers to $\policyicy$, as measured via the operator $\IdentityOperator - \discount\TransitionOperator{\policyicy}$. This interaction is complex, since it includes the \emph{bootstrapping term} $-\discount\TransitionOperator{\policyicy}$. (Notably, such a term is not present for standard prediction or bandit problems, in which case $\discount = 0$.) This term reflects the dynamics intrinsic to reinforcement learning, and plays a key role in proving ``hard'' lower bounds for offline RL (e.g., see the work~\cite{zanette2020exponential}). Observe that the bound in \cref{lem:PredictionError} requires only weak realizability, and thus it always applies. This fact is significant in light of a recent lower bound~\cite{foster2021offline}, showing that without Bellman closure, off-policy learning is challenging even under strong concentrability assumptions (such as bounds on density ratios). \cref{lem:PredictionError} gives a sufficient condition without Bellman closure, but with a different measure that accounts for bootstrapping. \\ \noindent If, in fact, (weak) Bellman closure holds, then~\cref{lem:PredictionError} takes the following simplified form: \begin{lemma}[OPC coefficient under Bellman closure] \label{lem:PredictionErrorBellmanClosure} If $\QclassErr{\policyicy} \subseteq \TestFunctionClass{\policyicy}$ and weak Bellman closure holds, then \begin{align*} K^\policy \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg } { 1 + \TestFunctionReg } \, \cdot \, \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \} \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm {QErrNC} {\policyicy}^2 } { \norm{QErrNC} {\mu}^2 } \Big \}. \end{align*} \end{lemma} \noindent See \cref{sec:PredictionErrorBellmanClosure} for the proof.
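When the discrepancies $QErrNC$ range over a linear function class, the final ratio $\max_{QErrNC} \norm{QErrNC}{\policyicy}^2 / \norm{QErrNC}{\mu}^2$ reduces to the largest generalized eigenvalue of the pair of feature second-moment matrices. The following numerical sketch is our own illustration (the dimension and covariances are arbitrary), not part of the formal development:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# Arbitrary positive-definite second-moment matrices of a feature map,
# under the target policy's distribution (pi) and the data distribution (mu).
A = rng.normal(size=(d, d)); sigma_pi = A @ A.T + 0.1 * np.eye(d)
B = rng.normal(size=(d, d)); sigma_mu = B @ B.T + 0.1 * np.eye(d)

# max_w (w' sigma_pi w) / (w' sigma_mu w): whiten by mu and take the top
# eigenvalue of the whitened matrix (a generalized eigenvalue problem).
L = np.linalg.cholesky(sigma_mu)
L_inv = np.linalg.inv(L)
vals, vecs = np.linalg.eigh(L_inv @ sigma_pi @ L_inv.T)
shift_coeff = vals[-1]
w_star = L_inv.T @ vecs[:, -1]   # maximizing direction, original coordinates

# The maximizing direction attains the coefficient ...
assert np.isclose((w_star @ sigma_pi @ w_star) / (w_star @ sigma_mu @ w_star),
                  shift_coeff)
# ... and any other direction does not exceed it.
w = rng.normal(size=d)
assert (w @ sigma_pi @ w) / (w @ sigma_mu @ w) <= shift_coeff + 1e-9
```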
\\ In such a case, the concentrability coefficient measures the increase in the discrepancy $Q - Q'$ of the feasible predictors when moving from the dataset distribution $\mu$ to the distribution of the target policy $\policyicy$. In \cref{sec:DomainKnowledge}, we give another bound under weak Bellman closure, and thereby recover a recent result due to Xie et al.~\cite{xie2021bellman}. Finally, in~\cref{sec:Linear}, we provide some applications of this concentrability factor to the linear setting. \subsection{From Importance Sampling to Bellman Closure} \label{sec:IS2BC} Let us illustrate an application of \cref{prop:MultipleRobustness} with an example involving just two test spaces. Suppose that we suspect that Bellman closure holds, but rather than committing to such an assumption, we wish to fall back to an importance sampling estimator if Bellman closure does not hold. In order to streamline the presentation of the idea, let us introduce the following setup. Let $\policyicyBeh{}$ be a behavioral policy that generates the dataset, i.e., such that each state-action $\psa$ in the dataset is sampled from its discounted state distribution $\DistributionOfPolicy{\policyicyBeh{}}$. Next, let the identifier $o$ contain the trajectory from $\nu_{\text{start}}$ up to the state-action pair $\psa$ recorded in the dataset. That is, each tuple $\sarsi{}$ in the dataset $\Dataset$ is such that $\psa \sim \DistributionOfPolicy{\policyicyBeh{}}$ and $o$ contains the trajectory up to $\psa$. We now define the test spaces. The first one is denoted by $\TestFunctionISClass{\policyicy}$ and leverages importance sampling.
It contains a single test function defined as the importance sampling estimator \begin{align} \label{eqn:ImportanceSampling} \TestFunctionISClass{\policyicy} = \{ \TestFunction{\policyicy}\}, \qquad \text{where} \; \TestFunction{\policyicy}(\state,\action,\identifier) = \frac{1}{\scaling{\policyicy}} \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) }. \end{align} The above product is over the random trajectory contained in the identifier $o$. The normalization factor $\scaling{\policyicy} \in \R$ is connected to the maximum range of the importance sampling estimator, and ensures that $\sup_{(\state,\action,\identifier)} \TestFunction{\policyicy}(\state,\action,\identifier) \leq 1$. The second test space is the prediction error test space $\QclassErr{\policyicy}$ defined in \cref{sec:ErrorTestSpace}. With this choice, let us define three concentrability coefficients: $\ConcentrabilityGenericSub{\policyicy}{1}$ arises from importance sampling; $\ConcentrabilityGenericSub{\policyicy}{2}$ from the prediction error test space when Bellman closure holds; and $\ConcentrabilityGenericSub{\policyicy}{3}$ from the prediction error test space when just weak realizability holds.
They are defined as \begin{align*} \ConcentrabilityGenericSub{\policyicy}{1} \leq \sqrt{ \scaling{\policyicy} \frac{(1 + \TestFunctionReg\scaling{\policyicy})}{1+\TestFunctionReg} } \qquad \ConcentrabilityGenericSub{\policyicy}{2} \leq \max_{QErr \in \QclassErrCentered{\policyicy}} \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu}^2 } \times \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg }, \qquad \ConcentrabilityGenericSub{\policyicy}{3} \leq c_1 \frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 }{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} ^2}. \end{align*} \begin{lemma}[From Importance Sampling to Bellman Closure] \label{lem:IS} The choice $\TestFunctionClass{\policyicy} = \TestFunctionISClass{\policyicy} \cup \QclassErr{\policyicy} \; \text{for all } \policyicy\in\PolicyClass$ ensures that with probability at least $1-\FailureProbability$, the oracle inequality~\eqref{EqnOracle} holds with $\ConcentrabilityGeneric{\policyicy} \leq \min\{ \ConcentrabilityGenericSub{\policyicy}{1}, \ConcentrabilityGenericSub{\policyicy}{2}, \ConcentrabilityGenericSub{\policyicy}{3} \} $ if weak Bellman closure holds and $\ConcentrabilityGeneric{\policyicy} \leq \min\{ \ConcentrabilityGenericSub{\policyicy}{1}, \ConcentrabilityGenericSub{\policyicy}{2} \} $ otherwise. \end{lemma} \begin{proof} Let us calculate the \emph{off-policy cost coefficient} associated with $\TestFunctionISClass{\policyicy}$.
The unbiasedness of the importance sampling estimator gives us the following population constraint (here $\mu = \DistributionOfPolicy{\policyicyBeh{}}$) \begin{align*} \abs{ \innerprodweighted{\TestFunction{\policyicy}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} } = \abs{ \ensuremath{\mathbb{E}}ecti{\TestFunction{\policyicy}\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\mu} } = \frac{1}{\scaling{\policyicy}} \abs{ \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} } = \frac{1}{\scaling{\policyicy}} \abs{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} } \leq \frac{\ConfidenceInterval{\FailureProbability}}{\sqrt{n}} \sqrt{\norm{\TestFunction{\policyicy}}{2}^2 + \TestFunctionReg}. \end{align*} The norm of the test function reads (notice that $\mu$ generates $(\state,\action,\identifier)$ here) \begin{align*} \norm{\TestFunction{\policyicy}}{\mu}^2 = \ensuremath{\mathbb{E}}ecti{\TestFunction{\policyicy}^2}{\mu} = \frac{1}{\scaling{\policyicy}^2} \ensuremath{\mathbb{E}}ecti{ \Bigg[ \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) } \Bigg]^2 }{\mu} = \frac{1}{\scaling{\policyicy}^2} \ensuremath{\mathbb{E}}ecti{ \Bigg[ \prod_{(\state_\hstep,\action_\hstep) \in o} \frac{ \policyicy(\action_{\hstep} \mid \state_{\hstep}) }{ \policyicyBeh{}(\action_{\hstep} \mid \state_{\hstep}) } \Bigg] }{\policyicy} \leq \frac{1}{\scaling{\policyicy}}. \end{align*} Together with the prior display, we obtain \begin{align*} \frac{\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2} {\scaling{\policyicy}^2(\norm{\TestFunction{\policyicy}}{2}^2 + \TestFunctionReg)} \leq \frac{\ensuremath{\rho}}{n}.
\end{align*} The resulting concentrability coefficient is therefore \begin{align*} \ConcentrabilityGeneric{\policyicy} \leq \max_{Q\in \PopulationFeasibleSet{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {1 + \TestFunctionReg} \times \frac{n}{\ensuremath{\rho}} \leq \max_{Q\in \PopulationFeasibleSet{\policyicy}} \frac{ \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {1 + \TestFunctionReg} \times \frac{\scaling{\policyicy}^2(\norm{\TestFunction{\policyicy}}{2}^2 + \TestFunctionReg)} {\innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy}^2} \leq \scaling{\policyicy} \frac{(1 + \TestFunctionReg\scaling{\policyicy})}{1+\TestFunctionReg}. \end{align*} Chaining the above result with \cref{lem:BellmanTestFunctions,lem:PredictionError}, using \cref{prop:MultipleRobustness} and plugging back into \cref{thm:NewPolicyEvaluation} yields the claim. \end{proof} \subsection{Implementation for Off-Policy Predictions} \label{sec:LinearConfidenceIntervals} In this section, we describe a computationally efficient way in which to compute the upper/lower estimates~\eqref{eqn:ConfidenceIntervalEmpirical}. Given a finite set of $n_{\TestFunctionClass{}}$ test functions, it involves solving a quadratic program with $2 n_{\TestFunctionClass{}} + 1$ constraints. Let us first work out a concise description of the constraints defining membership in $\EmpiricalFeasibleSet{\policyicy}$. Introduce the shorthand $\nLinEffSq{\TestFunction{}} \defeq \norm{\TestFunction{}}{n}^2 + \TestFunctionReg$.
We then define the empirical average feature vector $\phiEmpiricalExpecti{\TestFunction{}}$, the empirical average reward $\LinEmpiricalReward{\TestFunction{}}$, and the average next-state feature vector $\phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}$ as \begin{align*} \quad \phiEmpiricalExpecti{\TestFunction{}} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\sarsi{} \in \Dataset} \TestFunction{}\psa \phi\psa, \qquad \LinEmpiricalReward{\TestFunction{}} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\sarsi{} \in \Dataset} \TestFunction{}\psa\reward, \\ \qquad \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy} = \frac{1}{\nLinEff{\TestFunction{}}} \sum_{\sarsi{} \in \Dataset} \TestFunction{}\psa \phi(\successorstate,\policyicy). \end{align*} In terms of this notation, each empirical constraint defining $\EmpiricalFeasibleSet{\policyicy}$ can be written in the more compact form \begin{align*} \frac{\abs{\innerprodweighted{\TestFunction{}} {\TDError{Q}{\policyicy}{}}{n}} } {\nLinEff{\TestFunction{}}} & = \Big | \smlinprod{\phiEmpiricalExpecti{\TestFunction{}} - \discount \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}}{\CriticPar{}} - \LinEmpiricalReward{\TestFunction{}} \Big| \leq \sqrt{\frac{\ensuremath{\rho}}{n}}. \end{align*} Then the set of empirical constraints can be written as a set of constraints linear in the critic parameter $\CriticPar{}$ coupled with the assumed regularity bound on $\CriticPar{}$ \begin{align} \label{eqn:LinearConstraints} \EmpiricalFeasibleSet{\policyicy} = \Big\{ \CriticPar{} \in \R^d \mid \norm{\CriticPar{}}{2} \leq 1, \quad \mbox{and} \quad - \sqrt{\frac{\ensuremath{\rho}}{n}} \leq \smlinprod{\phiEmpiricalExpecti{\TestFunction{}} - \discount \phiEmpiricalExpectiBootstrap{\TestFunction{}}{\policyicy}}{\CriticPar{}} - \LinEmpiricalReward{\TestFunction{}} \leq \sqrt{\frac{\ensuremath{\rho}}{n}} \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$} \Big\}.
\end{align} Thus, the estimates $\VminEmp{\policy}$ (respectively $\VmaxEmp{\policy}$) can be computed by minimizing (respectively maximizing) the linear objective function $w \mapsto \inprod{[\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathbb{E}}ecti{\phi(\state,\action)} {\action \sim \policyicy}}{\state \sim \nu_{\text{start}}}]}{\CriticPar{}}$ subject to the $2 n_{\TestFunctionClass{}} + 1$ constraints in equation~\eqref{eqn:LinearConstraints}. Therefore, the estimates can be computed in polynomial time for any test function class whose cardinality grows polynomially in the problem parameters. \subsection{Discussion of Linear Approximate Optimization} \label{sec:LinearDiscussion} Here we discuss the presence of the supremum over policies in the coefficient $\ConcentrabilityGenericSub{\widetilde \policy}{1}$ from equation~\eqref{EqnNewConc}. In particular, it arises because our actor-critic method iteratively approximates the maximum in the max-min estimate~\eqref{eqn:MaxMinEmpirical} using a gradient-based scheme. The ability of a gradient-based method to make progress is related to the estimation accuracy of the gradient, namely the $Q$-function estimate for the actor's current policy $\ActorPolicy{t}$; more specifically, the gradient is the $Q$-function parameter $\CriticPar{t}$. In the general case, the estimation error of the gradient $\CriticPar{t}$ depends on the policy under consideration through the matrix $\CovarianceWithBootstrapReg{\ActorPolicy{t}}$, whereas in the special case of Bellman closure it is policy-independent (depending only on $\Sigma$). Since the actor's policies are random, this leads to the introduction of a $\sup_{\policyicy\in\PolicyClass}$ in the general bound. Notice that the method still competes with the best comparator $\widetilde \policy$ by measuring the errors along the distribution of the comparator (through the operator $\ensuremath{\mathbb{E}}ecti{}{\widetilde \policy}$).
To be clear, the $\sup_{\policyicy\in\PolicyClass}$ may not arise with approximate solution methods that do not rely only on the gradient to make progress (such as second-order methods); we leave this for future research. Reassuringly, when Bellman closure holds, the approximate solution method recovers the standard guarantees established in the paper~\cite{zanette2021provable}. \section{General Guarantees} \label{sec:GeneralGuarantees} \subsection{A deterministic guarantee} We begin our analysis by stating a deterministic set of sufficient conditions for our estimators to satisfy the guarantees~\eqref{EqnPolEval} and~\eqref{EqnOracle}. This formulation is useful, because it reveals the structural conditions that underlie the success of our estimators, and in particular the connection to weak realizability. In Section~\ref{SecHighProb}, we exploit this deterministic result to show that, under a fairly general sampling model, our estimators enjoy these guarantees with high probability. In the previous section, we introduced the population level set $\PopulationFeasibleSet{\policyicy}$ that arises in the statement of our guarantees. Also central to our analysis is the infinite data limit of this set. More specifically, for any fixed $(\ensuremath{\rho}, \ensuremath{\lambda})$, if we take the limit $n \rightarrow \infty$, then $\PopulationFeasibleSet{\policyicy}$ reduces to the set of all solutions to the weak formulation~\eqref{eqn:WeakFormulation}---that is \begin{align} \PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) = \{ Q \in \Qclass{\policyicy} \mid \innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\pi}{}} {\mu} = 0 \quad \mbox{for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$} \}. \end{align} As before, we omit the dependence on the test function class $\TestFunctionClass{\policyicy}$ when it is clear from context.
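As a concrete sanity check on the weak formulation (our own toy construction, not part of the formal development; the two-state MDP, policy, distribution, and test function below are arbitrary), the exact value function $Q^{\policyicy}$ satisfies every population constraint $\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{Q}{\pi}{}}{\mu} = 0$, and hence lies in the limiting set:

```python
import numpy as np

rng = np.random.default_rng(2)
n_s, n_a, gamma = 2, 2, 0.9

# An arbitrary two-state, two-action MDP and a uniform target policy.
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # P[s, a, s']
r = rng.uniform(size=(n_s, n_a))                   # mean rewards
pi = np.full((n_s, n_a), 0.5)                      # pi(a | s)

# State-action transition matrix under pi, and the exact Q^pi from solving
# (I - gamma * P_pi) q = r over the n_s * n_a state-action pairs.
P_pi = (P[..., :, None] * pi[None, None, :, :]).reshape(n_s * n_a, n_s * n_a)
q_pi = np.linalg.solve(np.eye(n_s * n_a) - gamma * P_pi, r.reshape(-1))

def bellman_error(q):
    """Pointwise Bellman error B(Q)(s,a) = r + gamma * E[Q(s', pi)] - Q(s,a)."""
    return r.reshape(-1) + gamma * P_pi @ q - q

mu = rng.dirichlet(np.ones(n_s * n_a))             # a dataset distribution
f = rng.normal(size=n_s * n_a)                     # an arbitrary test function

# Q^pi satisfies the weak constraint <f, B(Q)>_mu = 0 for every f ...
assert np.isclose(np.sum(mu * f * bellman_error(q_pi)), 0.0)
# ... while a shifted predictor (Bellman error gamma - 1) generically does not.
assert not np.isclose(np.sum(mu * f * bellman_error(q_pi + 1.0)), 0.0)
```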
By construction, we have the inclusion $\PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) \subseteq \SuperPopulationFeasible$ for any non-negative pair $(\ensuremath{\rho}, \ensuremath{\lambda})$. Our first set of guarantees holds when the random set $\EmpiricalFeasibleSet{\policyicy}$ satisfies the \emph{sandwich relation} \begin{align} \label{EqnSandwich} \PopulationFeasibleSetInfty{\policyicy}(\TestFunctionClass{\policyicy}) \subseteq \EmpiricalFeasibleSet{\policyicy}(\ensuremath{\rho}, \ensuremath{\lambda}; \TestFunctionClass{\policyicy}) \subseteq \PopulationFeasibleSet{\policyicy}(4 \ensuremath{\rho}, \ensuremath{\lambda}; \TestFunctionClass{\policyicy}). \end{align} To provide intuition as to why this sandwich condition is natural, observe that it has two important implications: \begin{enumerate} \item[(a)] Recalling the definition of weak realizability~\eqref{EqnWeakRealizable}, the weak solution $\QpiWeak{\policyicy}$ belongs to the empirical constraint set $\EmpiricalFeasibleSet{\policyicy}$ for any choice of test function space. This important property follows because $\QpiWeak{\policyicy}$ must satisfy the constraints~\eqref{eqn:WeakFormulation}, and thus it belongs to $\PopulationFeasibleSetInfty{\policyicy} \subseteq \EmpiricalFeasibleSet{\policyicy}$. \item[(b)] All solutions in $\EmpiricalFeasibleSet{\policyicy}$ also belong to $\PopulationFeasibleSet{\policyicy}$, which means they approximately satisfy the weak Bellman equations in a way quantified by $\PopulationFeasibleSet{\policyicy}$. \end{enumerate} By leveraging these facts in the appropriate way, we can establish the following guarantee: \\ \begin{proposition} \label{prop:Deterministic} The following two statements hold.
\begin{enumerate} \item[(a)] \underline{Policy evaluation:} If the set $\EmpiricalFeasibleSet{\policyicy}$ satisfies the sandwich relation~\eqref{EqnSandwich}, then the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ satisfy the width bound~\eqref{EqnWidthBound}. If, in addition, weak Bellman realizability for $\policyicy$ is assumed, then the coverage~\eqref{EqnCoverage} condition holds. \item[(b)] \underline{Policy optimization:} If the sandwich relation~\eqref{EqnSandwich} and weak Bellman realizability hold for all $\policyicy \in \PolicyClass$, then any max-min~\eqref{eqn:MaxMinEmpirical} optimal policy $\widetilde \pi$ satisfies the oracle inequality~\eqref{EqnOracle}. \end{enumerate} \end{proposition} \noindent See Section~\ref{SecProofPropDeterministic} for the proof of this claim. \\ In summary, \Cref{prop:Deterministic} ensures that, when weak realizability is in force, the sandwich relation~\eqref{EqnSandwich} is a sufficient condition for both the policy evaluation~\eqref{EqnPolEval} and optimization~\eqref{EqnOracle} guarantees to hold. Accordingly, the next phase of our analysis focuses on deriving sufficient conditions for the sandwich relation to hold with high probability. \subsection{Some high-probability guarantees} \label{SecHighProb} As stated, \Cref{prop:Deterministic} is a ``meta-result'', in that it applies to any choice of set $\EmpiricalFeasibleSet{\policyicy} \equiv \SuperEmpiricalFeasible$ for which the sandwich relation~\eqref{EqnSandwich} holds. In order to obtain a more concrete guarantee, we need to impose assumptions on the way in which the dataset was generated, and to identify concrete choices of $(\ensuremath{\rho}, \ensuremath{\lambda})$ that suffice to ensure that the associated sandwich relation~\eqref{EqnSandwich} holds with high probability. These tasks are the focus of this section. \subsubsection{A model for data generation} \label{SecDataGen} Let us begin by describing a fairly general model for data-generation.
Any sample takes the form $\sarsizNp{} \defeq \sarsi{}$, where the five components are defined as follows: \begin{carlist} \item the pair $(\state, \action)$ indexes the current state and action. \item the random variable $\reward$ is a noisy observation of the mean reward. \item the random state $\successorstate$ is the next-state sample, drawn according to the transition $\Pro\psa$. \item the variable $o$ is an optional identifier. \end{carlist} \noindent As one example of the use of an identifier variable, if samples might be generated by one of two possible policies---say $\policyicy_1$ and $\policyicy_2$---the identifier can take values in the set $\{1, 2 \}$ to indicate which policy was used for a particular sample. \\ Overall, we observe a dataset $\Dataset = \{\sarsizNp{i} \}_{i=1}^n$ of $n$ such quintuples. In the simplest possible setting, each triple $(\state,\action,\identifier)$ is drawn i.i.d. from some fixed distribution $\mu$, and the noisy reward $\reward_i$ is an unbiased estimate of the mean reward function $\Reward(\state_i, \action_i)$. In this case, our dataset consists of $n$ i.i.d. quintuples. More generally, we would like to accommodate richer sampling models in which the sample $z_i = (\state_i, \action_i, o_i, \reward_i, \successorstate_i)$ at a given time $i$ is allowed to depend on past samples.
In order to specify such dependence in a precise way, define the nested sequence of sigma-fields \begin{align} \mathcal F_1 = \emptyset, \quad \mbox{and} \quad \mathcal F_i \defeq \sigma \Big(\{\sarsizNp{j}\}_{j=1}^{i-1} \Big) \qquad \mbox{for $i = 2, \ldots, n$.} \end{align} In terms of this filtration, we make the following definition: \begin{assumption}[Adapted dataset] \label{asm:Dataset} An adapted dataset is a collection $\Dataset = \{ \sarsizNp{i} \}_{i=1}^n$ such that for each $i = 1, \ldots, n$: \begin{carlist} \item There is a conditional distribution $\mu_i$ such that $(\state_i, \action_i, o_i) \sim \mu_i (\cdot \mid \mathcal F_i)$. \item Conditioned on $(\state_i,\action_i,o_i )$, we observe a noisy reward $\reward_i = \reward(\state_i,\action_i) + \eta_i$ with $\E[\eta_i \mid \mathcal F_{i} ] = 0$, and $|\reward_i| \leq 1$. \item Conditioned on $(\state_i,\action_i,o_i )$, the next state $\successorstate_i$ is generated according to $\Pro(\state_i,\action_i)$. \end{carlist} \end{assumption} Under this assumption, we can define the (possibly) random reference measure \begin{align} \mu(\state, \action, o) & \defeq \frac{1}{n} \sum_{i=1}^n \mu_i \big(\state, \action, o \mid \mathcal F_i \big). \end{align} In words, it corresponds to the distribution induced by first drawing a time index $i \in \{1, \ldots, n \}$ uniformly at random, and then sampling a triple $(\state,\action,\identifier)$ from the conditional distribution $\mu_i \big(\cdot \mid \mathcal F_i \big)$. \subsubsection{A general guarantee} \label{SecGeneral} Recall that there are three function classes that underlie our method: the test function class $\TestFunctionClass{}$, the policy class $\PolicyClass$, and the $Q$-function class $\Qclass{}$. In this section, we state a general guarantee (\cref{thm:NewPolicyEvaluation}) that involves the metric entropies of these sets.
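To fix intuition for such metric entropies, consider a simple finite-dimensional example (our own illustration, not tied to the function classes above): a regular grid with spacing $2\epsilon$ is an $\epsilon$-cover of $[0,1]^d$ in the sup-norm, so the log-covering number exhibits the $d \log(1/\epsilon)$ scaling that reappears in the parametric corollary of Section~\ref{SecCorollaries}:

```python
import numpy as np

def grid_cover_size(d, eps):
    """Size of a regular grid that eps-covers [0,1]^d in the sup-norm."""
    # Centers at eps, 3*eps, 5*eps, ... cover [0,1] with spacing 2*eps.
    return int(np.ceil(1.0 / (2.0 * eps))) ** d

# log N_eps grows as d * log(1/eps), up to constant factors.
for d in (1, 2, 3):
    for eps in (0.1, 0.01):
        log_n = np.log(grid_cover_size(d, eps))
        assert 0.5 * d * np.log(1 / eps) <= log_n <= 2 * d * np.log(1 / eps)
```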
In Section~\ref{SecCorollaries}, we provide corollaries of this guarantee for specific function classes. In more detail, we equip the test function class and the $Q$-function class with the usual sup-norm \begin{align*} \|f - \tilde{f}\|_\infty \defeq \sup_{(\state,\action,\identifier)} |f(\state,\action,\identifier) - \tilde{f} (\state,\action,\identifier)|, \quad \mbox{and} \quad \|Q - \tilde{Q}\|_\infty \defeq \sup_{\psa} |Q \psa - \tilde{Q} \psa|, \end{align*} and the policy class with the sup-TV norm \begin{align*} \|\pi - \pitilde\|_{\infty, 1} & \defeq \sup_{\state} \|\pi(\cdot \mid \state) - \pitilde(\cdot \mid \state) \|_1 = \sup_{\state} \sum_{\action} |\pi(\action \mid \state) - \pitilde(\action \mid \state)|. \end{align*} For a given $\epsilon > 0$, we let $\CoveringNumber{\TestFunctionClass{}}{\epsilon}$, $\CoveringNumber{\Qclass{}}{\epsilon}$, and $\CoveringNumber{\PolicyClass}{\epsilon}$ denote the $\epsilon$-covering numbers of each of these function classes in the given norms. Given these covering numbers, a tolerance parameter $\delta \in (0,1)$ and the shorthand $\ensuremath{\phi}(t) = \max \{t, \sqrt{t} \}$, define the radius function \begin{subequations} \label{EqnTheoremChoice} \begin{align} \label{EqnDefnR} \ensuremath{\rho}(\epsilon, \delta) & \defeq n \Big\{ \int_{\epsilon^2}^\epsilon \ensuremath{\phi} \big( \frac{\log N_u(\TestFunctionClass{})}{n} \big) du + \frac{\log N_\epsilon(\Qclass{})}{n} + \frac{ \log N_\epsilon(\Pi)}{n} + \frac{\log(n/\delta)}{n} \Big\}. \end{align} In our theorem, we implement the estimator using a radius $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$, where $\epsilon > 0$ is any parameter that satisfies the bound \begin{align} \label{EqnRadChoice} \epsilon^2 & \stackrel{(i)}{\leq} \bar{c} \: \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}, \quad \mbox{and} \quad \ensuremath{\lambda} \stackrel{(ii)}{=} 4 \frac{\ensuremath{\rho}(\epsilon, \delta)}{n}.
\end{align} \end{subequations} Here $\bar{c} > 0$ is a suitably chosen but universal constant (whose value is determined in the proof), and we adopt the shorthand $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$ in our statement below. \renewcommand{\descriptionlabel}[1]{ \hspace{\labelsep}\normalfont\underline{#1} } \begin{theorem}[High-probability guarantees] \label{thm:NewPolicyEvaluation} Consider the estimates implemented using a triple $\InputFunctionalSpace$ that is weakly Bellman realizable (\cref{asm:WeakRealizability}); an adapted dataset (\Cref{asm:Dataset}); and with the choices~\eqref{EqnTheoremChoice} for $(\epsilon, \ensuremath{\rho}, \ensuremath{\lambda})$. Then with probability at least $1 - \delta$: \begin{description} \item[Policy evaluation:] For any $\pi \in \PolicyClass$, the estimates $(\VminEmp{\policyicy}, \VmaxEmp{\policyicy})$ specify a confidence interval satisfying the coverage~\eqref{EqnCoverage} and width bounds~\eqref{EqnWidthBound}. \item[Policy optimization:] Any max-min policy~\eqref{eqn:MaxMinEmpirical} $\widetilde \pi$ satisfies the oracle inequality~\eqref{EqnOracle}. \end{description} \end{theorem} \noindent See \cref{SecProofNewPolicyEvaluation} for the proof of the claim. \\ \paragraph{Choices of $(\ensuremath{\rho}, \epsilon, \ensuremath{\lambda})$:} Let us provide a few comments about the choices of $(\ensuremath{\rho}, \epsilon, \ensuremath{\lambda})$ from equations~\eqref{EqnDefnR} and~\eqref{EqnRadChoice}. The quality of our bounds depends on the size of the constraint set $\PopulationFeasibleSet{\policyicy}$, which is controlled by the constraint level $\sqrt{\frac{\ensuremath{\rho}}{n}}$. Consequently, our results are tightest when $\ensuremath{\rho} = \ensuremath{\rho}(\epsilon, \delta)$ is as small as possible.
Note that $\ensuremath{\rho}$ is a decreasing function of $\epsilon$, so that in order to minimize it, we would like to choose $\epsilon$ as large as possible subject to the constraint~\eqref{EqnRadChoice}(i). Ignoring the entropy integral term in the definition~\eqref{EqnDefnR} for the moment---see below for some comments on it---these considerations lead to \begin{align} n \epsilon^2 \asymp \log N_\epsilon(\TestFunctionClass{}) + \log N_\epsilon(\Qclass{}) + \log N_\epsilon(\Pi). \end{align} This type of relation for the choice of $\epsilon$ is well known in non-parametric statistics (e.g., see Chapters 13--15 in the book~\cite{wainwright2019high} and references therein). Moreover, setting $\ensuremath{\lambda} \asymp \epsilon^2$ as in equation~\eqref{EqnRadChoice}(ii) is often the correct scale of regularization. \paragraph{Key technical steps in proof:} It is worthwhile making a few comments about the structure of the proof so as to clarify the connections to \Cref{prop:Deterministic} along with the weak formulation that underlies our methods. Recall that \Cref{prop:Deterministic} requires the empirical $\EmpiricalFeasibleSet{\policyicy}$ and population sets $\PopulationFeasibleSet{\policyicy}$ to satisfy the sandwich relation~\eqref{EqnSandwich}. In order to prove that this condition holds with high probability, we need to establish uniform control over the family of random variables \begin{align} \label{EqnKeyFamily} \frac{ \big| \inprod{f}{\delta^\policyicy(Q)}_n - \inprod{f}{\ensuremath{\mathcal{B}}^\policyicy(Q)}_\mu \big|}{\TestNormaRegularizerEmp{}}, \qquad \mbox{as indexed by the triple $(f, Q, \policyicy)$.} \end{align} Note that the differences in the numerator of these variables correspond to moving from the empirical constraints on $Q$-functions that are enforced using the TD errors, to the population constraints that involve the Bellman error function.
Uniform control of the family~\eqref{EqnKeyFamily}, along with control of the differences $\|f\|_n - \|f\|_\mu$ uniformly over $f$, allows us to relate the empirical and population sets, since the associated constraints are obtained by shifting from the empirical inner products $\inprod{\cdot}{\cdot}_n$ to the reference inner products $\inprod{\cdot}{\cdot}_\mu$. A simple discretization argument allows us to control the differences uniformly in $(Q, \policyicy)$, as reflected by the metric entropies appearing in our definition~\eqref{EqnTheoremChoice}. Deriving uniform bounds over test functions $f$---due to the self-normalizing nature of the constraints---requires a more delicate argument. More precisely, in order to obtain optimal results for non-parametric problems (see Corollary~\ref{cor:alpha} to follow), we need to localize the empirical process at a scale $\epsilon$, and derive bounds on the localized increments. This portion of the argument leads to the entropy integral---which is localized to the interval $[\epsilon^2, \epsilon]$---in our definition~\eqref{EqnDefnR} of the radius function. \paragraph{Intuition from the on-policy setting:} In order to gain intuition for the statistical meaning of the guarantees in~\cref{thm:NewPolicyEvaluation}, it is worthwhile understanding the implications in a rather special case---namely, the simpler on-policy setting, where the discounted occupation measure induced by the target policy $\policyicy$ coincides with the dataset distribution $\mu$. Let us consider the case in which the identity function $\1$ belongs to the test class $\TestFunctionClass{\policyicy}$.
Under these conditions, we can write \begin{align*} \max_{Q \in \PopulationFeasibleSet{\policyicy}} |\PolComplexGen{\policyicy}| & \stackrel{(i)}{=} \max_{Q \in \PopulationFeasibleSet{\policyicy}}| \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}| \; \stackrel{(ii)}{\leq} \sqrt{1 + \ensuremath{\lambda}} \; \sqrt{\frac{\ensuremath{\rho}}{n}}, \end{align*} where equality (i) follows from the on-policy assumption, and step (ii) follows from the definition of the set $\PopulationFeasibleSet{\policyicy}$, along with the condition that $\1 \in \TestFunctionClass{\policyicy}$. Consequently, in the on-policy setting, the width bound~\eqref{EqnWidthBound} ensures that \begin{align} \label{EqnOnPolicy} \abs{\VminEmp{\policyicy} - \VmaxEmp{\policyicy}} \leq 2 \frac{\sqrt{1 + \ensuremath{\lambda}}}{1-\discount} \sqrt{\frac{ \ensuremath{\rho}}{n}}. \end{align} In this simple case, we see that the confidence interval scales as $\sqrt{\ensuremath{\rho}/n}$, where the quantity $\ensuremath{\rho}$ is related to the metric entropy via equation~\eqref{EqnRadChoice}. In the more general off-policy setting, the bound involves this term, along with additional terms that reflect the cost of off-policy data. We discuss these issues in more detail in \cref{sec:Applications}. Before doing so, however, it is useful to derive some specific corollaries that show the form of $\ensuremath{\rho}$ under particular assumptions on the underlying function classes, which we now do. \subsubsection{Some corollaries} \label{SecCorollaries} Theorem~\ref{thm:NewPolicyEvaluation} applies generally to triples of function classes $\InputFunctionalSpace$, and the statistical error $\sqrt{\frac{\ensuremath{\rho}(\epsilon, \delta)}{n}}$ depends on the metric entropies of these function classes via the definition~\eqref{EqnDefnR} of $\ensuremath{\rho}(\epsilon, \delta)$, and the choices~\eqref{EqnRadChoice}.
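As a purely numerical illustration, the width formula in the bound~\eqref{EqnOnPolicy} can be evaluated directly. In the sketch below, the values of $(\ensuremath{\rho}, n, \discount, \ensuremath{\lambda})$ are hypothetical placeholders; the point is only the $\sqrt{\ensuremath{\rho}/n}$ decay in the sample size.

```python
import math

def ci_width(rho, n, gamma, lam):
    # Width of the on-policy interval: 2 * sqrt(1 + lam) / (1 - gamma) * sqrt(rho / n).
    return 2.0 * math.sqrt(1.0 + lam) / (1.0 - gamma) * math.sqrt(rho / n)

# Hypothetical values: rho stands in for the metric-entropy-based complexity.
w = ci_width(rho=50.0, n=10_000, gamma=0.9, lam=0.05)
w4 = ci_width(rho=50.0, n=40_000, gamma=0.9, lam=0.05)
# Quadrupling n halves the width, reflecting the sqrt(rho / n) rate.
```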
As shown in this section, if we make particular assumptions about the metric entropies, then we can derive more concrete guarantees. \paragraph{Parametric and finite VC classes:} One form of metric entropy, typical of relatively simple function classes $\ensuremath{\mathcal{G}}$ (such as those with finite VC dimension), scales as \begin{align} \label{EqnPolyMetric} \log N_\epsilon(\ensuremath{\mathcal{G}}) & \asymp d \; \log \big(\frac{1}{\epsilon} \big), \end{align} for some dimensionality parameter $d$. For instance, bounds of this type hold for linear function classes with $d$ parameters, and for finite VC classes (with $d$ proportional to the VC dimension); see Chapter 5 of the book~\cite{wainwright2019high} for more details. \begin{corollary} \label{cor:poly} Suppose each class of the triple $\InputFunctionalSpace$ has metric entropy that is at most polynomial~\eqref{EqnPolyMetric} of order $d$. Then for a sample size $n \geq 2 d$, the claims of Theorem~\ref{thm:NewPolicyEvaluation} hold with $\epsilon^2 = d/n$ and \begin{align} \label{EqnPolyRchoice} \tilde{\ensuremath{\rho}}\big( \sqrt{\frac{d}{n}}, \delta \big) & \defeq c \; \Big \{ d \; \log \big( \frac{n}{d} \big) + \log \big(\frac{n}{\delta} \big) \Big \}, \end{align} where $c$ is a universal constant. \end{corollary} \begin{proof} Our strategy is to upper bound the radius $\ensuremath{\rho}$ from equation~\eqref{EqnDefnR}, and then show that this upper bound $\tilde{\ensuremath{\rho}}$ satisfies the conditions~\eqref{EqnRadChoice} for the specified choice of $\epsilon^2$. We first control the term $\log N_\epsilon(\TestFunctionClass{})$. We have \begin{align*} \frac{1}{\sqrt{n}} \int_{\epsilon^2}^\epsilon \sqrt{\log N_u(\TestFunctionClass{})} du & \leq \sqrt{\frac{d}{n}} \; \int_{0}^{\epsilon} \sqrt{\log(1/u)} du \; = \; \epsilon \sqrt{\frac{d}{n}} \; \int_0^1 \sqrt{\log(1/(\epsilon t))} dt \; \leq \; c \epsilon \log(1/\epsilon) \sqrt{\frac{d}{n}}.
\end{align*} Similarly, we have \begin{align*} \frac{1}{n} \int_{\epsilon^2}^\epsilon \log N_u(\TestFunctionClass{}) du & \leq \epsilon \frac{d}{n} \Big \{ \int_{\epsilon}^{1} \log(1/t) dt + \log(1/\epsilon) \Big \} \; \leq \; c \, \epsilon \log(1/\epsilon) \frac{d}{n}. \end{align*} Finally, for terms not involving entropy integrals, we have \begin{align*} \max \Big \{ \frac{\log N_\epsilon(\Qclass{})}{n}, \frac{\log N_\epsilon(\Pi)}{n} \Big\} & \leq c \frac{d}{n} \log(1/\epsilon). \end{align*} Setting $\epsilon^2 = d/n$, we see that the required conditions~\eqref{EqnRadChoice} hold with the specified choice~\eqref{EqnPolyRchoice} of $\tilde{\ensuremath{\rho}}$. \end{proof} \paragraph{Richer function classes:} In the preceding paragraph, the metric entropy scaled logarithmically in the inverse precision $1/\epsilon$. For other (richer) function classes, the metric entropy exhibits a polynomial scaling in the inverse precision, with an exponent $\alpha > 0$ that controls the complexity. More precisely, we consider classes of the form \begin{align} \label{EqnAlphaMetric} \log N_\epsilon(\ensuremath{\mathcal{G}}) & \asymp \Big(\frac{1}{\epsilon} \Big)^\alpha. \end{align} For example, the class of Lipschitz functions in dimension $d$ has this type of metric entropy with $\alpha = d$. More generally, for Sobolev spaces of functions that have $s$ derivatives (with the $s$-th derivative being Lipschitz), we encounter metric entropies of this type with $\alpha = d/s$. See Chapter 5 of the book~\cite{wainwright2019high} for further background. \begin{corollary} \label{cor:alpha} Suppose that each function class $\InputFunctionalSpace$ has metric entropy with at most \mbox{$\alpha$-scaling~\eqref{EqnAlphaMetric}} for some $\alpha \in (0,2)$.
Then the claims of Theorem~\ref{thm:NewPolicyEvaluation} hold with $\epsilon^2 = (1/n)^{\frac{2}{2 + \alpha}}$, and \begin{align} \label{EqnAlphaRchoice} \tilde{\ensuremath{\rho}}\big((1/n)^{\frac{1}{2 + \alpha}}, \delta \big) & = c \; \Big \{ n^{\frac{\alpha}{2 + \alpha}} + \log(n/\delta) \Big \}, \end{align} where $c$ is a universal constant. \end{corollary} \noindent We note that for standard regression problems over classes with $\alpha$-metric entropy, the rate $(1/n)^{\frac{2}{2 + \alpha}}$ is well-known to be minimax optimal (e.g., see Chapter 15 in the book~\cite{wainwright2019high}, as well as references therein). \begin{proof} We start by controlling the terms involving entropy integrals. In particular, we have \begin{align*} \frac{1}{\sqrt{n}} \int_{\epsilon^2}^\epsilon \sqrt{\log N_u(\TestFunctionClass{})} du & \leq \frac{c}{\sqrt{n}} u^{1 - \frac{\alpha}{2}} \Big |_{0}^\epsilon \; = \; \frac{c}{\sqrt{n}} \epsilon^{1 - \frac{\alpha}{2}}. \end{align*} Requiring that this term be of order $\epsilon^2$ amounts to enforcing that $\epsilon^{1 + \frac{\alpha}{2}} \asymp (1/\sqrt{n})$, or equivalently that $\epsilon^2 \asymp (1/n)^{\frac{2}{2 + \alpha}}$. If $\alpha \in (0,1]$, then the second entropy integral converges and is of lower order. Otherwise, if $\alpha \in (1,2)$, then we have \begin{align*} \frac{1}{n} \int_{\epsilon^2}^\epsilon \log N_u(\TestFunctionClass{}) du & \leq \frac{c}{n} \int_{\epsilon^2}^\epsilon (1/u)^{\alpha} du \leq \frac{c}{n} (\epsilon^2)^{1 - \alpha}. \end{align*} Hence the requirement that this term be bounded by $\epsilon^2$ is equivalent to $\epsilon^{2 \alpha} \succsim (1/n)$, or $\epsilon^2 \succsim (1/n)^{1/\alpha}$. When $\alpha \in (1,2)$, we have $\frac{1}{\alpha} > \frac{2}{2 + \alpha}$, so that this condition is milder than our first condition.
Finally, we have $ \max \big \{ \frac{\log N_\epsilon(\Qclass{})}{n}, \frac{\log N_\epsilon(\Pi)}{n} \big\} \leq \frac{c}{n} \big(1/\epsilon \big)^{\alpha}$, and requiring that this term scales as $\epsilon^2$ amounts to requiring that $\epsilon^{2 +\alpha} \asymp (1/n)$, or equivalently $\epsilon^2 \asymp (1/n)^{\frac{2}{2 + \alpha}}$, as before. \end{proof} \section{Proofs for \texorpdfstring{\cref{sec:Applications}}{} and \cref{sec:appConc}} In this section, we collect the proofs of results stated without proof in Section~\ref{sec:Applications} and \cref{sec:appConc}. \subsection{Proof of \cref{prop:LikeRatio}} \label{sec:LikeRatio} \begin{proof} Since $\TestFunction{}^* \in \TestFunctionClass{\policyicy}$, we are guaranteed that the corresponding constraint must hold. It reads \begin{align*} |\ensuremath{\mathbb{E}}ecti{\frac{1}{\scaling{\policyicy}} \frac{\DistributionOfPolicy{\policyicy}}{\mu} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} |^2 = \frac{1}{\scaling{\policyicy}^2} | \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy} |^2 & \overset{(iii)}{\leq} \big( \frac{1}{\scaling{\policyicy}^2} \munorm{\frac{\DistributionOfPolicy{\policyicy}}{\mu} }^2 + \ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}, \end{align*} where step (iii) follows from the definition of the population constraint.
Re-arranging yields the upper bound \begin{align*} \frac{|\ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}}{\mu} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu} |^2}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} & \leq \frac{\big(\munorm{\frac{\DistributionOfPolicy{\policyicy}}{\mu} }^2 + \scalingsq{\policyicy}\ensuremath{\lambda} \big) \frac{\ensuremath{\rho}}{n}}{(1 + \ensuremath{\lambda}) \frac{\ensuremath{\rho}}{n}} \; = \; \frac{ \ensuremath{\mathbb{E}}ecti{\Big[\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA}\Big] }{\policyicy} + \scalingsq{\policyicy} \ensuremath{\lambda}} {1 + \ensuremath{\lambda}}, \end{align*} where the final step uses the fact that \begin{align*} \norm{\frac{\DistributionOfPolicy{\policyicy}}{\mu}}{\mu}^2 = \ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}^2\PSA}{\mu^2\PSA} }{\mu} = \ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}\PSA}{\mu\PSA} }{\policyicy}. \end{align*} Thus, we have established the bound (i) in our claim~\eqref{EqnLikeRatioBound}. The upper bound (ii) follows immediately, since $\ensuremath{\mathbb{E}}ecti{\frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa} }{\policyicy} \leq \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa}{\mu\psa} \leq \scaling{\policyicy}$. \end{proof} \subsection{Proof of \texorpdfstring{\cref{lem:PredictionError}}{}} \label{sec:PredictionError} Some simple algebra yields \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{} = [Q - \BellmanEvaluation{\policyicy}Q] - [\QpiWeak{\policyicy} - \BellmanEvaluation{\policyicy}\QpiWeak{\policyicy}] = (\IdentityOperator - \gamma\TransitionOperator{\policyicy}) (Q - \QpiWeak{\policyicy}) = (\IdentityOperator - \gamma\TransitionOperator{\policyicy}) QErr.
\end{align*} Taking expectations under $\policyicy$ and recalling that $\innerprodweighted{\TestFunction{}}{\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}}{\policyicy} = 0$ for all $\TestFunction{} \in \TestFunctionClass{\policyicy}$ yields \begin{align*} \innerprodweighted{\TestFunction{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}} {\policyicy} = \innerprodweighted{\TestFunction{}} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}. \end{align*} Notice that for any $Q \in \Qclass{\policyicy}$, the function $QErr = Q - \QpiWeak{\policyicy}$ belongs to $\QclassErr{\policyicy}$, and the associated population constraint reads \begin{align*} \frac{ \big| \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu} \big| } { \sqrt{ \norm{QErr}{\mu}^2 + \TestFunctionReg } } & \leq \sqrt{\frac{\ensuremath{\rho}}{n}}. \end{align*} Consequently, the {off-policy cost coefficient} can be upper bounded as \begin{align*} K^\policy & \leq \max_{QErr \in \QclassErrCentered{\policyicy}} \Big \{ \frac{n}{\ensuremath{\rho}} \; \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } {1+ \TestFunctionReg} \Big \} \; \leq \; \max_{QErr \in \QclassErrCentered{\policyicy}} \Big \{ \frac{ \norm{QErr}{\mu}^2 + \TestFunctionReg } { \norm{\1}{\policyicy}^2 + \TestFunctionReg } \: \frac{ \innerprodweighted{\1} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\policyicy}^2 } { \innerprodweighted{QErr} {(\IdentityOperator - \gamma\TransitionOperator{\policyicy})QErr} {\mu}^2 } \Big \}, \end{align*} as claimed in the bound~\eqref{EqnPredErrorBound}.
\subsection{Proof of \texorpdfstring{\cref{lem:PredictionErrorBellmanClosure}}{}} \label{sec:PredictionErrorBellmanClosure} If weak Bellman closure holds, then we can write \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q = Q - \QpiProj{\policyicy}{Q} \in \QclassErr{\policyicy}. \end{align*} For any $Q \in \Qclass{\policyicy}$, the function $QErrNC = Q - \QpiProj{\policyicy}{Q}$ belongs to $\QclassErr{\policyicy}$, and the associated population constraint reads $\frac{ |\innerprodweighted{QErrNC} {QErrNC} {\mu} |} { \sqrt{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg }} \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$. Consequently, the {off-policy cost coefficient} is upper bounded as \begin{align*} K^\policy & \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{n}{\ensuremath{\rho}} \; \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } {1+ \TestFunctionReg} \Big \} \leq \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \norm{QErrNC}{\mu}^2 + \TestFunctionReg } { 1 + \TestFunctionReg } \; \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \} \; \leq \; \max_{QErrNC \in \QclassErr{\policyicy}} \Big \{ \frac{ \innerprodweighted{\1} {QErrNC} {\policyicy}^2 } { \innerprodweighted{QErrNC} {QErrNC} {\mu}^2 } \Big \}, \end{align*} where the final inequality follows from the fact that $\norm{QErrNC}{\mu} \leq 1$. \subsection{Proof of \texorpdfstring{\cref{lem:BellmanTestFunctions}}{}} \label{sec:BellmanTestFunctions} We split our proof into two separate claims.
\paragraph{Proof of the bound~\eqref{EqnBellBound}:} When the test function class includes $\TestFunctionClassBubnov{\policyicy}$, any feasible $Q$ must satisfy the population constraints \begin{align*} \frac{\innerprodweighted{ \ensuremath{\mathcal{B}}or{Q'}{\policyicy}{}} {\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}} {\sqrt{\norm{\ensuremath{\mathcal{B}}or{Q'}{\policyicy}{}}{\mu}^2 + \TestFunctionReg}} \leq \sqrt{\frac{\ensuremath{\rho}}{n}}, \qquad \mbox{for all $Q' \in \Qpi{\policyicy}$.} \end{align*} Setting $Q' = Q$ yields $\frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} { \sqrt{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 + \TestFunctionReg } } \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$. If $\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 \leq \TestFunctionReg$, then the claim holds, given our choice $\TestFunctionReg = c \frac{\ensuremath{\rho}}{n}$ for some constant $c$. Otherwise, the constraint can be weakened to $\frac{ \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} { \sqrt{ 2\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2 } } \leq \sqrt{\frac{\ensuremath{\rho}}{n}}$, which yields the bound~\eqref{EqnBellBound}. \paragraph{Proof of the bound~\eqref{EqnBellBoundConc}:} We now prove the sequence of inequalities stated in equation~\eqref{EqnBellBoundConc}. Inequality (i) follows directly from the definition of $K^\policy$ and the bound~\eqref{EqnBellBound}. Turning to inequality (ii), an application of Jensen's inequality yields \begin{align*} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2 = [\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}]^2 \leq \ensuremath{\mathbb{E}}ecti{[\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}]^2}{\policyicy} = \norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2.
\end{align*} Finally, inequality (iii) follows by observing that \begin{align*} \sup_{Q \in \Qclass{\policyicy}} \frac{\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\policyicy}^2} {\norm{\ensuremath{\mathcal{B}}or{Q}{\policyicy}{}}{\mu}^2} = \sup_{Q \in \Qclass{\policyicy}} \frac{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\policyicy}}{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\mu}} = \sup_{Q \in \Qclass{\policyicy}} \frac{\ensuremath{\mathbb{E}}ecti{ \frac{\DistributionOfPolicy{\policyicy}\psa }{\mu \psa} [(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2 }{\mu} }{\ensuremath{\mathbb{E}}ecti{[(\ensuremath{\mathcal{B}}or{Q}{\policyicy}{})\psa]^2}{\mu}} \leq \sup_{\psa} \frac{\DistributionOfPolicy{\policyicy}\psa }{\mu \psa}. \end{align*} \section{Proofs for the Linear Setting} We now prove the results stated in~\cref{sec:Linear}. Throughout this section, the reader should recall that $Q$ takes the linear form $Q \psa = \inprod{\CriticPar{}}{\phi \psa}$, so that the bulk of our arguments operate directly on the weight vector $\CriticPar{} \in \R^\dim$. Given the linear structure, the population and empirical covariance matrices of the feature vectors play a central role. We make use of the following known result (cf. Lemma 1 in the paper~\cite{zhang2021optimal}) that relates these objects: \begin{lemma}[Covariance Concentration] \label{lem:CovarianceConcentration} There are universal constants $(c_1, c_2, c_3, c_4)$ such that for any $\delta \in (0, 1)$, we have \begin{align} c_1 \SigmaExplicit \preceq \frac{1}{n}\widehat \SigmaExplicit + \frac{c_2}{n} \log \frac{n \dim}{\FailureProbability}\Identity \preceq c_3 \SigmaExplicit + \frac{c_4}{n} \log \frac{n \dim}{\FailureProbability}\Identity, \end{align} with probability at least $1 - \FailureProbability$.
\end{lemma} \subsection{Proof of \texorpdfstring{\cref{prop:LinearConcentrability}}{}} \label{sec:LinearConcentrability} Under weak realizability, we have \begin{align} \innerprodweighted {\TestFunction{j}} {\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}} {\mu} = 0 \qquad \mbox{for all $j = 1, \ldots, \dim$.} \end{align} Thus, at $\psa$ the Bellman error difference reads \begin{align*} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{}\psa - \ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}\psa & = [Q - \BellmanEvaluation{\policyicy}Q]\psa - [\QpiWeak{\policyicy} - \BellmanEvaluation{\policyicy} \QpiWeak{\policyicy} ]\psa \\ & = [Q - \QpiWeak{\policyicy} ]\psa - \discount \ensuremath{\mathbb{E}}ecti{[Q - \QpiWeak{\policyicy}](\successorstate,\policyicy)}{\successorstate \sim \Pro\psa } \\ \numberthis{\label{eqn:LinearBellmanError}} & = \inprod{\CriticPar{} - \CriticParBest{\policyicy}}{ \phi\psa - \discount \phiBootstrap{\policyicy}\psa}. \end{align*} To proceed we need the following auxiliary result: \begin{lemma}[Linear Parameter Constraints] \label{lem:RelaxedLinearConstraints} There exists a universal constant $c_1 > 0$ such that, with probability at least $1-\FailureProbability$, if $Q \in \PopulationFeasibleSet{\policyicy}$ then $ \norm{\CriticPar{} - \CriticParBest{\policyicy}}{\CovarianceWithBootstrapReg{\policyicy}}^2 \leq c_1 \frac{\dim \ensuremath{\rho}}{n}$. \end{lemma} \noindent See \cref{sec:RelaxedLinearConstraints} for the proof.
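The linear identity~\eqref{eqn:LinearBellmanError} can be checked numerically on a randomly generated tabular MDP with linear $Q$-functions. The sketch below (all dimensions, distributions, and seeds are arbitrary choices for illustration) compares the pointwise Bellman-error difference with its inner-product form against the bootstrapped features.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d, gamma = 5, 3, 4, 0.9

phi = rng.normal(size=(S, A, d))                               # feature map phi(s, a)
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)   # transition kernel
pi = rng.random((S, A)); pi /= pi.sum(axis=1, keepdims=True)   # target policy
r = rng.normal(size=(S, A))                                    # reward function
theta, theta_star = rng.normal(size=d), rng.normal(size=d)

def bellman_error(th):
    """Pointwise Bellman error Q - T^pi Q for the linear Q(s,a) = phi(s,a)^T th."""
    Q = phi @ th                               # (S, A)
    next_v = P @ (pi * Q).sum(axis=1)          # E_{s' ~ P(.|s,a)}[Q(s', pi)]
    return Q - (r + gamma * next_v)

# Bootstrapped features: phi_bar(s,a) = E_{s'~P} E_{a'~pi} phi(s', a').
phi_bar = np.einsum('sap,pbd,pb->sad', P, phi, pi)
lhs = bellman_error(theta) - bellman_error(theta_star)
rhs = (phi - gamma * phi_bar) @ (theta - theta_star)
```

Since the Bellman operator is affine, the difference of Bellman errors is linear in the parameter difference, and `lhs` matches `rhs` at every state-action pair.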
\newcommand{\IntermediateSmall}[4]{Using this lemma, we can bound the OPC coefficient as follows \begin{align*} K^\policy \overset{(i)}{\leq} \frac{n}{\ensuremath{\rho}} \; \max_{Q \in \PopulationFeasibleSet{\policyicy}} \innerprodweighted{\1} { \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} #4} {\policyicy}^2 & \overset{(ii)}{\leq} \frac{n}{\ensuremath{\rho}} \; [\ensuremath{\mathbb{E}}ecti{(#1)^\top}{\policyicy} (#2)]^2 \\ & \overset{(iii)}{\leq} \frac{n}{\ensuremath{\rho}} \: \norm{\ensuremath{\mathbb{E}}ecti{ #1 }{\policyicy}}{(#3)^{-1}}^2 \norm{#2}{#3}^2 \\ & \leq c_1 \dim \norm{\ensuremath{\mathbb{E}}ecti{ #1}{\policyicy}}{(#3)^{-1}}^2. \end{align*} Here step $(i)$ follows from the definition of the off-policy cost coefficient, step $(ii)$ leverages the linear structure, and step $(iii)$ follows from the Cauchy--Schwarz inequality. } \IntermediateSmall {\phi - \discount \phiBootstrap{\policyicy}} {\CriticPar{} - \CriticParBest{\policyicy}} {\CovarianceWithBootstrapReg{\policyicy}} {-\ensuremath{\mathcal{B}}or{\QpiWeak{\policyicy}}{\policyicy}{}} \subsection{Proof of \texorpdfstring{\cref{lem:RelaxedLinearConstraints}}{}} \label{sec:RelaxedLinearConstraints} \intermediate{(\phi - \discount\phiBootstrap{\policyicy})^\top} {(\SigmaReg - \discount \CovarianceBootstrap{\policyicy})} {(\CriticPar{} - \CriticParBest{\policyicy})} {\CovarianceWithBootstrapReg{\policyicy}} {(\Sigma - \discount \CovarianceBootstrap{\policyicy})} {\cref{eqn:LinearBellmanError}} \subsection{Proof of \cref{prop:LinearConcentrabilityBellmanClosure}} \label{sec:LinearConcentrabilityBellmanClosure} Under weak Bellman closure, we have \begin{align} \numberthis{\label{eqn:LinearBellmanErrorWithClosure}} \ensuremath{\mathcal{B}}or{Q}{\policyicy}{} = Q - \BellmanEvaluation{\policyicy}Q = \phi^\top(\CriticPar{} - \CriticParProjection{\policyicy}). \end{align} With a slight abuse of notation, let $\QpiProj{\policyicy}{\CriticPar{}}$ denote the weight vector that defines the action-value function $\QpiProj{\policyicy}{Q}$.
We introduce the following auxiliary lemma: \begin{lemma}[Linear Parameter Constraints with Bellman Closure] \label{lem:RelaxedLinearConstraintsBellmanClosure} With probability at least $1-\FailureProbability$, if $Q \in \PopulationFeasibleSet{\policyicy}$ then $ \norm{\CriticPar{} - \CriticParProjection{\policyicy}}{\SigmaReg }^2 \leq c_1\frac{\dim \ensuremath{\rho}}{n}$. \end{lemma} See~\cref{sec:RelaxedLinearConstraintsBellmanClosure} for the proof. \IntermediateSmall {\phi} {\CriticPar{} - \CriticParProjection{\policyicy}} {\SigmaReg} {} \subsection{Proof of \texorpdfstring{\cref{lem:RelaxedLinearConstraintsBellmanClosure}}{}} \label{sec:RelaxedLinearConstraintsBellmanClosure} \intermediate{\phi^\top} {\SigmaReg} {(\CriticPar{} - \CriticParProjection{\policyicy})} {\SigmaReg} {\Sigma} {\cref{eqn:LinearBellmanErrorWithClosure}} \section{Proof of \texorpdfstring{\cref{thm:LinearApproximation}}{}} \label{sec:LinearApproximation} In this section, we prove the guarantee on our actor-critic procedure stated in \cref{thm:LinearApproximation}. \hidecom{ The proof consists of four main steps. First, we show that the critic's linear function $\QminEmp{\policyicy}$ can be interpreted as the \emph{exact} value function on an MDP with a perturbed reward function. Second, we show that the global update rule in \cref{eqn:LinearActorUpdate} is equivalent to an instantiation of mirror descent where the gradient is the $Q$-function of the adversarial MDP. Third, we analyze the progress of mirror descent in finding a good solution on the sequence of adversarial MDPs identified by the critic. Finally, we put everything together to derive a performance bound. } \subsection{Adversarial MDPs} \label{sec:AdversarialMDP} We now introduce the sequence of adversarial MDPs $\{\mathcal{M}adv{t}\}_{t=1}^T$ used in the analysis.
Each MDP $\mathcal{M}adv{t}$ is defined by the same state-action space and transition law as the original MDP $\mathcal{M}$, but with the reward function $\Reward$ perturbed by $RAdv{t}$---that is, \begin{align} \label{eqn:AdversarialMDP} \mathcal{M}adv{t} \defeq \langle \StateSpace, ASpace, R + RAdv{t}, \Pro, \discount \rangle. \end{align} For an arbitrary policy $\policyicy$, we denote by $\QpiAdv{t}{\policyicy}$ and $\ApiAdv{t}{\policyicy}$ the action-value function and the advantage function on $\mathcal{M}adv{t}$; the value of $\policyicy$ from the starting distribution $\nu_{\text{start}}$ is denoted by $\VpiAdv{t}{\policyicy}$. We immediately have the following expression for the value function, which follows because the dynamics of $\mathcal{M}adv{t}$ and $\mathcal{M}$ are identical and the reward function of $\mathcal{M}adv{t}$ equals that of $\mathcal{M}$ plus $RAdv{t}$: \begin{align} \label{eqn:VonAdv} \VpiAdv{t}{\policyicy} \defeq \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{ \Big[ R + RAdv{t} \Big]}{\policyicy}. \end{align} Consider the action-value function $\QminEmp{\ActorPolicy{t}}$ returned by the critic, and let the reward perturbation $RAdv{t} = \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}$ be the Bellman error of the critic value function $\QminEmp{\ActorPolicy{t}}$. The special property of $\mathcal{M}adv{t}$ is that the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$ equals the critic lower estimate $\QminEmp{\ActorPolicy{t}}$. \begin{lemma}[Adversarial MDP Equivalence] \label{lem:QfncOnAdversarialMDP} Given the perturbed MDP $\mathcal{M}adv{t}$ from equation~\eqref{eqn:AdversarialMDP} with $RAdv{t} \defeq \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}$, we have the equivalence \begin{align*} \QpiAdv{t}{\ActorPolicy{t}} = \QminEmp{\ActorPolicy{t}}.
\end{align*} \end{lemma} \begin{proof} We need to check that $\QminEmp{\ActorPolicy{t}}$ solves the Bellman evaluation equations for the adversarial MDP, ensuring that $\QminEmp{\ActorPolicy{t}}$ is the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$. Let $\BellmanEvaluation{\ActorPolicy{t}}_t$ be the Bellman evaluation operator on $\mathcal{M}adv{t}$ for policy $\ActorPolicy{t}$. We have \begin{align*} \QminEmp{\ActorPolicy{t}} - \BellmanEvaluation{\ActorPolicy{t}}_t(\QminEmp{\ActorPolicy{t}}) = \QminEmp{\ActorPolicy{t}} - \BellmanEvaluation{\ActorPolicy{t}}(\QminEmp{\ActorPolicy{t}}) - RAdv{t} & = \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{} - \ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{} = 0. \end{align*} Thus, the function $\QminEmp{\ActorPolicy{t}}$ is the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$, which is by definition denoted by $\QpiAdv{t}{\ActorPolicy{t}}$. \end{proof} This lemma shows that the action-value function $\QminEmp{\ActorPolicy{t}}$ computed by the critic is equivalent to the action-value function of $\ActorPolicy{t}$ on $\mathcal{M}adv{t}$. Thus, we can interpret the critic as performing a model-based pessimistic estimate of $\ActorPolicy{t}$; this view is useful in the rest of the analysis. \subsection{Equivalence of Updates} The second step is to establish the equivalence between the update rule~\eqref{eqn:LinearActorUpdate} (equivalently, the update~\eqref{eqn:GlobalRule}) and the exponentiated gradient update rule~\eqref{eqn:LocalRule}.
\begin{lemma}[Equivalence of Updates] \label{lem:UpdateEquivalence} For linear $Q$-functions of the form $\QpiAdv{t}{} \psa = \inprod{\CriticPar{t}}{\phi \psa}$, the parameter update \begin{subequations} \begin{align} \label{eqn:GlobalRule} \ActorPolicy{t+1} \pas & \propto \exp(\phi\psa^\top (\ActorPar{t} + \eta \CriticPar{t})), \qquad \intertext{is equivalent to the policy update} \label{eqn:LocalRule} \ActorPolicy{t+1}\pas & \propto \ActorPolicy{t}\pas \exp(\eta\QpiAdv{t}{} \psa), \qquad \ActorPolicy{1}\pas = \frac{1}{\card{ASpace_{\state}}}. \end{align} \end{subequations} \end{lemma} \begin{proof} We prove this claim via induction on $t$. The base case ($t = 1$) holds by a direct calculation. As the inductive step, assume that both rules maintain the same policy $\ActorPolicy{t} \propto \exp(\phi\psa^\top\ActorPar{t})$ at iteration $t$; we will show that the policies remain the same at iteration $t+1$. At any $\psa$, we have \begin{align*} \ActorPolicy{t+1}(\action \mid \state) \propto \exp(\phi\psa^\top (\ActorPar{t} + \eta\CriticPar{t})) & \propto \exp(\phi\psa^\top \ActorPar{t}) \exp(\eta\phi\psa^\top \CriticPar{t}) \\ & \propto \ActorPolicy{t}(\action \mid \state) \exp(\eta\QpiAdv{t}{} \psa). \end{align*} \end{proof} Recall that $\ActorPar{t}$ is the parameter associated with $\ActorPolicy{t}$, and that $\CriticPar{t}$ is the parameter associated with $\QminEmp{\ActorPolicy{t}}$. Using \cref{lem:UpdateEquivalence} together with \cref{lem:QfncOnAdversarialMDP}, we obtain that the actor policy $\ActorPolicy{t}$ satisfies, via its parameter $\ActorPar{t}$, the mirror descent update rule~\eqref{eqn:LocalRule} with $\QpiAdv{t}{} = \QminEmp{\ActorPolicy{t}} = \QpiAdv{t}{\ActorPolicy{t}}$ and $\ActorPolicy{1}(\action \mid \state) = 1/\abs{ASpace_\state}, \; \forall \psa$.
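The algebra in this proof can also be verified numerically: with random features and parameters (dimensions below are hypothetical), the global softmax update~\eqref{eqn:GlobalRule} and the multiplicative update~\eqref{eqn:LocalRule} produce identical policies.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, d, eta = 4, 3, 5, 0.1

phi = rng.normal(size=(S, A, d))   # feature map phi(s, a)
w_t = rng.normal(size=d)           # actor parameter at round t
theta_t = rng.normal(size=d)       # critic parameter at round t

def softmax_policy(w):
    """Row-wise softmax of the logits phi(s, a)^T w."""
    logits = phi @ w                                    # (S, A)
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

# Global rule: softmax of phi^T (w_t + eta * theta_t).
pi_global = softmax_policy(w_t + eta * theta_t)

# Local rule: multiply the current policy by exp(eta * Q) and renormalize.
Q = phi @ theta_t
unnorm = softmax_policy(w_t) * np.exp(eta * Q)
pi_local = unnorm / unnorm.sum(axis=1, keepdims=True)
```

The agreement is exact because $\mathrm{softmax}(a + b) \propto \mathrm{softmax}(a) \cdot \exp(b)$ holds row by row.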
In words, the actor is using mirror descent to find the best policy on the sequence of adversarial MDPs $\{\mathcal{M}adv{t} \}$ implicitly identified by the critic. \subsection{Mirror Descent on Adversarial MDPs} Our third step is to analyze the behavior of mirror descent on the MDP sequence $\{\mathcal{M}adv{t}\}_{t=1}^T$, and then translate these guarantees back to the original MDP $\mathcal{M}$. The following result provides a bound on the average of the value functions $\{\Vpi{\ActorPolicy{t}} \}_{t=1}^T$ induced by the actor's policy sequence. This bound involves a form of optimization error\footnote{Technically, this error should depend on $\card{ASpace_\state}$, if we were to allow the action spaces to have varying cardinality, but we elide this distinction here.} given by \begin{align*} \MirrorRegret{T} & = 2 \, \sqrt{ \frac{2 \log |ASpace|}{T}}, \end{align*} as is standard in mirror descent schemes. It also involves the \emph{perturbed rewards} given by $RAdv{t} \defeq \ensuremath{\mathcal{B}}or{\QpiAdv{t}{\ActorPolicy{t}}} {\ActorPolicy{t}}{}$. \begin{lemma}[Mirror Descent on Adversarial MDPs] \label{prop:MirrorDescentAdversarialRewards} For any positive integer $T$, applying the update rule~\eqref{eqn:LocalRule} with $\QpiAdv{t}{} = \QpiAdv{t}{\ActorPolicy{t}}$ for $T$ rounds yields a sequence such that \begin{align} \label{eqn:MirrorDescentAdversarialRewards} \frac{1}{T} \sumiter \Big[ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \Big] \leq \frac{1}{1-\discount} \left \{ \MirrorRegret{T} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \}, \end{align} valid for any comparator policy $\widetilde \policy$. \end{lemma} \noindent See \cref{sec:MirrorDescentAdversarialRewards} for the proof. \\ To be clear, the comparator policy $\widetilde \policy$ need not belong to the soft-max policy class.
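As a quick arithmetic check on the optimization error term, one can invert the formula for $\MirrorRegret{T}$ to find the number of rounds needed to drive it below a given tolerance; the tolerance and action count below are illustrative.

```python
import math

def mirror_regret(T, num_actions):
    # Optimization error: 2 * sqrt(2 * log|A| / T).
    return 2.0 * math.sqrt(2.0 * math.log(num_actions) / T)

def rounds_needed(tol, num_actions):
    # Smallest T with mirror_regret(T) <= tol, from inverting the formula:
    # 2 * sqrt(2 log|A| / T) <= tol  <=>  T >= 8 log|A| / tol^2.
    return math.ceil(8.0 * math.log(num_actions) / tol ** 2)
```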
Apart from the optimization error term, our bound~\eqref{eqn:MirrorDescentAdversarialRewards} involves the behavior of the perturbed rewards $RAdv{t}$ along the comparator $\widetilde \policy$ and $\ActorPolicy{t}$, respectively. These correction terms arise because the actor performs the policy update using the action-value function $\QpiAdv{t}{\ActorPolicy{t}}$ on the perturbed MDPs instead of the real underlying MDP. \subsection{Pessimism: Bound on \texorpdfstring{$\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}}$}{}} The fourth step of the proof is to leverage the pessimistic estimates returned by the critic to simplify equation~\eqref{eqn:MirrorDescentAdversarialRewards}. Using \cref{lem:Simulation} and the definition of the adversarial reward $RAdv{t}$, we can write \begin{align*} \VminEmp{\ActorPolicy{t}} - \Vpi{\ActorPolicy{t}} = \frac{1}{1-\discount} \innerprodweighted{\1}{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\ActorPolicy{t}} = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\ActorPolicy{t}} & = \frac{1}{1-\discount} \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}}. \end{align*} Since weak realizability holds, \cref{thm:NewPolicyEvaluation} guarantees that $\VminEmp{\policyicy} \leq \Vpi{\policyicy}$ uniformly for all $\policyicy \in \PolicyClass$ with probability at least $1-\FailureProbability$. Coupled with the prior display, we find that \begin{align} \label{eqn:PessimisticAdversarialReward} \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \leq 0. \end{align} Using the above display, the bound in \cref{eqn:MirrorDescentAdversarialRewards} can be further simplified.
\subsection{Concentrability: Bound on \texorpdfstring{$\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}$}{}} The term $\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}$ can be interpreted as an approximate concentrability factor for the approximate algorithm under investigation. \paragraph{Bound under only weak realizability:} \cref{lem:RelaxedLinearConstraints} guarantees, with probability at least $1-\FailureProbability$, that any surviving $Q$ in $\PopulationFeasibleSet{\ActorPolicy{t}}$ must satisfy $ \norm{ \CriticPar{} - \CriticParBest{\ActorPolicy{t}}}{\CovarianceWithBootstrapReg{\ActorPolicy{t}}}^2 \lesssim \frac{\dim \ensuremath{\rho}}{n}, $ where $\CriticParBest{\ActorPolicy{t}}$ is the parameter associated with the weak solution $\QpiWeak{\ActorPolicy{t}}$. This bound must also apply to the parameter $ \CriticPar{t} \in \EmpiricalFeasibleSet{\ActorPolicy{t}}$ identified by the critic.\footnote{We abuse notation and write $\CriticPar{} \in \EmpiricalFeasibleSet{\policyicy}$ in place of $Q \in \EmpiricalFeasibleSet{\policyicy}$.} We are now ready to bound the remaining adversarial reward along the distribution of the comparator $\widetilde \policy$.
\begin{align*} \abs{\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}} & = \abs{\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\widetilde \policy}} \\ & \overset{\text{(i)}}{=} \abs{\ensuremath{\mathbb{E}}ecti{ (\phi - \discount \phiBootstrap{\ActorPolicy{t}})^\top (\CriticPar{t} - \CriticParBest{\ActorPolicy{t}}) }{\widetilde \policy}} \\ & \leq \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\ActorPolicy{t}}]}{\widetilde \policy}}{(\CovarianceWithBootstrapReg{\ActorPolicy{t}})^{-1}} \norm{\CriticPar{t} - \CriticParBest{\ActorPolicy{t}}}{\CovarianceWithBootstrapReg{\ActorPolicy{t}}} \\ & \leq c \; \sqrt{\frac{\dim \ensuremath{\rho}}{n}} \; \sup_{\policyicy \in \PolicyClass} \left \{ \norm{\ensuremath{\mathbb{E}}ecti{[\phi - \discount \phiBootstrap{\policyicy}]}{\widetilde \policy}}{(\CovarianceWithBootstrapReg{\policyicy})^{-1}} \right \}. \numberthis{\label{eqn:ApproximateLinearConcentrability}} \end{align*} Step (i) follows from the expression~\eqref{eqn:LinearBellmanError} for the weak Bellman error, along with the definition of the weak solution $\QpiWeak{\ActorPolicy{t}}$. \paragraph{Bound under weak Bellman closure:} When weak Bellman closure holds, we proceed analogously. The bound in \cref{lem:RelaxedLinearConstraintsBellmanClosure} ensures, with probability at least $1-\FailureProbability$, that $ \norm{\CriticPar{} - \CriticParProjection{\ActorPolicy{t}}}{\SigmaReg}^2 \leq c \; \frac{\dim \ensuremath{\rho}}{n} $ for all $\CriticPar{} \in \PopulationFeasibleSet{\ActorPolicy{t}}$; as before, this relation must apply to the parameter $\CriticPar{t} \in \EmpiricalFeasibleSet{\ActorPolicy{t}}$ chosen by the critic.
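The Cauchy--Schwarz steps in this subsection rely on the weighted inequality $\langle u, v \rangle \leq \norm{u}{\Sigma^{-1}} \norm{v}{\Sigma}$, valid for any positive-definite matrix $\Sigma$. A small numerical check (the dimension and all data below are randomly generated toys, purely for illustration):

```python
# Numeric check of the weighted Cauchy-Schwarz inequality
#   <u, v> <= ||u||_{Sigma^{-1}} * ||v||_{Sigma}
# for a positive-definite Sigma (regularized, like the Sigma_reg in the text).
import numpy as np

rng = np.random.default_rng(1)
d = 6
M = rng.normal(size=(d, d))
sigma = M @ M.T + 0.1 * np.eye(d)  # positive definite by construction
u = rng.normal(size=d)
v = rng.normal(size=d)

lhs = float(u @ v)
rhs = float(np.sqrt(u @ np.linalg.inv(sigma) @ u) * np.sqrt(v @ sigma @ v))
assert lhs <= rhs + 1e-9
```

The inequality follows from the unweighted Cauchy--Schwarz inequality applied to $\Sigma^{-1/2} u$ and $\Sigma^{1/2} v$.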
The bound on the adversarial reward along the distribution of the comparator $\widetilde \policy$ now reads \begin{align*} \abs{\ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy}} \: = \: \abs{\ensuremath{\mathbb{E}}ecti{\ensuremath{\mathcal{B}}or{\QminEmp{\ActorPolicy{t}}}{\ActorPolicy{t}}{}}{\widetilde \policy}} & \overset{\text{(i)}}{=} \abs{\ensuremath{\mathbb{E}}ecti{ \phi^\top (\CriticPar{t} - \CriticParProjectionFull{\ActorPolicy{t}}{\CriticPar{t}}) }{\widetilde \policy}} \\ & \leq \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}} \norm{\CriticPar{t} - \CriticParProjectionFull{\ActorPolicy{t}}{\CriticPar{t}}}{\SigmaReg} \\ & \leq c \; \norm{\ensuremath{\mathbb{E}}ecti{\phi}{\widetilde \policy}}{\SigmaReg^{-1}} \sqrt{\frac{\dim \ensuremath{\rho}}{n}}. \numberthis{\label{eqn:ApproximateLinearConcentrabilityBellmanClosure}} \end{align*} Here step (i) follows from the expression~\eqref{eqn:LinearBellmanErrorWithClosure} for the Bellman error under weak closure. \subsection{Proof of \cref{prop:MirrorDescentAdversarialRewards}} \label{sec:MirrorDescentAdversarialRewards} We now prove our guarantee for a mirror descent procedure on the sequence of adversarial MDPs. Our analysis makes use of a standard result on online mirror descent for linear functions (e.g., see Section 5.4.2 of Hazan~\cite{hazan2021introduction}), which we state here for reference. Given a finite cardinality set $\xSpace$, a function $f: \xSpace \rightarrow \R$, and a distribution $\ensuremath{\nu}$ over $\xSpace$, we define $f(\ensuremath{\nu}) \defeq \sum_{x \in \xSpace} \ensuremath{\nu}(x) f(x)$. The following result gives a guarantee that holds uniformly for any sequence of functions $\{f_t\}_{t=1}^T$, thereby allowing for the possibility of adversarial behavior. 
\begin{proposition}[Adversarial Guarantees for Mirror Descent] \label{prop:MirrorDescent} Suppose that we initialize with the uniform distribution $\ensuremath{\nu}_{1}(\xvar) = \frac{1}{\abs{\xSpace}}$ for all $\xvar \in \xSpace$, and then perform $T$ rounds of the update \begin{align} \label{eqn:ExponentiatedGradient} \ensuremath{\nu}_{t+1}(\xvar) \propto \ensuremath{\nu}_{t}(\xvar) \exp(\eta \fnc_{t}(\xvar)), \quad \mbox{for all $\xvar \in \xSpace$,} \end{align} using $\eta = \sqrt{\frac{\log \abs{\xSpace}}{2T}}$. If $\norm{\fnc_{t}}{\infty} \leq 1$ for all $t \in [T]$, then we have the bound \begin{align} \label{EqnMirrorBoundStatewise} \frac{1}{T} \sum_{t=1}^T \Big[ \fnc_{t}(\widetilde{\ensuremath{\nu}}) - \fnc_{t}(\ensuremath{\nu}_{t})\Big] \leq \MirrorRegret{T} \defeq 2\sqrt{\frac{2\log \abs{\xSpace}}{T}}, \end{align} where $\widetilde{\ensuremath{\nu}}$ is any comparator distribution over $\xSpace$. \end{proposition} We now use this result to prove our claim. So as to streamline the presentation, it is convenient to introduce the advantage function corresponding to $\ActorPolicy{t}$. It is a function of the state-action pair $\psa$ given by \begin{align*} \ApiAdv{t}{\ActorPolicy{t}}\psa \defeq \QpiAdv{t}{\ActorPolicy{t}}\psa - \ensuremath{\mathbb{E}}ecti{\QpiAdv{t}{\ActorPolicy{t}}(\state,\successoraction)}{\successoraction \sim \ActorPolicy{t}(\cdot \mid \state)}. \end{align*} In the sequel, we omit dependence on $\psa$ when referring to this function, consistent with the rest of the paper. From our earlier observation~\eqref{eqn:VonAdv}, recall that the reward function of the perturbed MDP $\mathcal{M}adv{t}$ corresponds to that of $\mathcal{M}$ plus the perturbation $RAdv{t}$.
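The exponentiated-gradient update~\eqref{eqn:ExponentiatedGradient} is straightforward to check empirically. The following self-contained sketch (the action count, horizon, and random reward sequence are illustrative choices, not taken from the analysis) runs the update with the stated step size and verifies that the realized average regret against the best fixed action stays below $\MirrorRegret{T}$.

```python
import math
import random

def exponentiated_gradient(fs, num_actions):
    """Run nu_{t+1}(x) proportional to nu_t(x) * exp(eta * f_t(x)), uniform start.

    Returns the list of distributions nu_1, ..., nu_T actually played.
    """
    T = len(fs)
    eta = math.sqrt(math.log(num_actions) / (2 * T))  # step size from the proposition
    nu = [1.0 / num_actions] * num_actions
    played = []
    for f in fs:
        played.append(nu[:])  # nu_t is played against f_t
        w = [p * math.exp(eta * fx) for p, fx in zip(nu, f)]
        z = sum(w)
        nu = [x / z for x in w]  # normalize to a probability distribution
    return played

random.seed(0)
A, T = 5, 2000
fs = [[random.uniform(-1.0, 1.0) for _ in range(A)] for _ in range(T)]  # ||f_t||_inf <= 1
played = exponentiated_gradient(fs, A)

# Average regret against the best fixed action, versus 2 * sqrt(2 log A / T).
best = max(range(A), key=lambda a: sum(f[a] for f in fs))
avg_regret = sum(
    f[best] - sum(p * fx for p, fx in zip(nu, f)) for f, nu in zip(fs, played)
) / T
bound = 2.0 * math.sqrt(2.0 * math.log(A) / T)
assert avg_regret <= bound
```

Taking the comparator $\widetilde{\ensuremath{\nu}}$ to be the point mass on the best fixed action is the usual instantiation of the bound.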
Combining this fact with a standard simulation lemma (e.g., \cite{kakade2003sample}) applied to $\mathcal{M}adv{t}$, we find that \begin{subequations} \begin{align} \label{EqnInitialBound} \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} & = \VpiAdv{t}{\widetilde \policy} - \VpiAdv{t}{\ActorPolicy{t}} + \frac{1}{1-\discount} \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \; = \; \frac{1}{1-\discount} \Big[ \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\widetilde \policy} - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big]. \end{align} Now for any given state $\state$, we introduce the linear objective function \begin{align*} \fnc_t(\ensuremath{\nu}) & \defeq \ensuremath{\mathbb{E}}_{\action \sim \ensuremath{\nu}} \QpiAdv{t}{\ActorPolicy{t}}(\state, \action) \; = \; \sum_{\action \in ASpace} \ensuremath{\nu}(\action) \QpiAdv{t}{\ActorPolicy{t}}(\state, \action), \end{align*} where $\ensuremath{\nu}$ is a distribution over the action space. With this choice, we have the equivalence \begin{align*} \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\action \sim \widetilde \policy}\psa & = \fnc_t(\widetilde \policy(\cdot \mid \state)) - \fnc_t\big(\ActorPolicy{t}(\cdot \mid \state) \big), \end{align*} where the reader should recall that we have fixed an arbitrary state $\state$. Consequently, applying the bound~\eqref{EqnMirrorBoundStatewise} with $\xSpace = ASpace$ and these choices of linear functions, we conclude that \begin{align} \label{EqnMyMirror} \frac{1}{T} \sumiter \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\action \sim \widetilde \policy}\psa \leq \MirrorRegret{T}. \end{align} \end{subequations} This bound holds for any state, and also for any average over the states. We now combine the pieces to conclude. 
By computing the average of the bound~\eqref{EqnInitialBound} over all $T$ iterations, we find that \begin{align*} \frac{1}{T} \sumiter \Big[ \Vpi{\widetilde \policy} - \Vpi{\ActorPolicy{t}} \Big] & \leq \frac{1}{1-\discount} \left \{ \frac{1}{T} \sumiter \ensuremath{\mathbb{E}}ecti{\ApiAdv{t}{\ActorPolicy{t}}}{\widetilde \policy} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \} \\ & \leq \frac{1}{1-\discount} \left \{ \MirrorRegret{T} + \frac{1}{T} \sumiter \Big[ - \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\widetilde \policy} + \ensuremath{\mathbb{E}}ecti{RAdv{t}}{\ActorPolicy{t}} \Big] \right \}, \end{align*} where the final inequality follows from the bound~\eqref{EqnMirrorBoundStatewise}, applied for each $\state$. We have thus established the claim. \end{document}
The Commonwealth Ports Authority, in collaboration with the Marianas Visitors Authority and Imperial Pacific International, on Monday unveiled seven Automated Passport Control machines that will help reduce the long lines at the Francisco C. Ada/Saipan International Airport. Four of the machines were purchased by MVA while IPI purchased three. Each cost $51,500. In an interview, Acting CPA board chairman Thomas Villagomez said the machines will help reduce the long lines at the airport and speed up the processing of travelers with U.S. passports or a U.S. business visa. He said these newly arrived travelers don't have to fall in line at an immigration booth but can go directly to the machines for processing and clearance. "We appreciate very much the assistance of our partners, MVA and IPI, for purchasing these important machines as they will expedite the processing of newly arrived visitors. Although there will still be lines, we will work together with our U.S. Customs and Border Protection to help further reduce the lines and make it easier and more convenient for our visitors." MVA Managing Director Chris Concepcion said the machines can be used by U.S. citizens, permanent residents, and those with Electronic System for Travel Authorization or ESTA, which will benefit visitors from South Korea and Japan. "There will still be long wait times, but with these new machines in place we expect those times to gradually go down," Concepcion added. He said they will be informing their industry partners overseas so they can encourage visitors to apply for ESTA before they visit the Marianas. But the machines cannot replace the manpower needed by CBP at the airport, he added. "We urge our partners at CBP and the U.S. Department of Homeland Security to look into hiring more personnel for the CNMI. And we urge our partners at CPA to continue with their plan to implement a master plan for the Saipan airport which addresses the congestion found throughout the facility."
Present at the unveiling ceremony on Monday were CPA Executive Director Chris Tenorio, MVA Managing Director Chris Concepcion, and IPI Executive Director Margaret Cui Lijie.
Economy strong under the Modi government: growth rate projected at 7.3 percent | Perform India
Under the leadership of Prime Minister Narendra Modi, the country's economy is growing steadily stronger. The economic growth rate for the financial year 2019-20 is projected at 7.3 percent. India Ratings and Research has said that despite the forecast of a below-normal monsoon in 2019, the growth rate is likely to remain at 7.3 percent. Following the monsoon forecasts, India Ratings and Research has projected agricultural gross value added growth at 2.5 percent, against the 2.7 percent recorded in 2018-19. Foreign exchange reserves stand at a record level of 414.89 billion US dollars.
By the will of God he reached that forest and emerged by the very tree of which Zahra Khatun had said, "Tell him, so that he may have word of me."
As news of Karunanidhi's serious condition spread, a crowd of supporters gathered outside Kaveri Hospital (Photo: IANS). Chennai: There is no improvement in the condition of DMK president M. Karunanidhi. His condition remained critical on Tuesday as well. His supporters have gathered outside Kaveri Hospital in Chennai, and the crowd keeps growing. In view of the crowds, tight security arrangements have been made. Besides the hospital, a large number of security personnel have also been deployed outside Karunanidhi's residence. Meanwhile, Karunanidhi's son M.K. Stalin met Tamil Nadu Chief Minister K. Palaniswami. The meeting took place at the chief minister's residence, and several DMK leaders were present. Karunanidhi (94) has been admitted to Kaveri Hospital in Chennai since July 28. He was hospitalized due to a urinary tract infection and a drop in heart rate. Senior leaders from across the country have visited the hospital to check on his health, and West Bengal Chief Minister Mamata Banerjee has also left for Chennai to inquire about him. DMK president Karunanidhi has served as chief minister of Tamil Nadu five times, and his stature in the state's politics is immense. His supporters are offering prayers for his health, although the DMK and the local administration have appealed to them not to gather outside the hospital. Karunanidhi supporters can be seen weeping outside the hospital (Photo: PTI). Even so, thousands of people have assembled there, and the police had to use mild force several times to maintain order. The Tamil Nadu DGP has put the police administration on high alert. One supporter said he wants to hear good news about his leader's health, which is why he has come. The medical bulletin issued by Kaveri Hospital on Monday evening said that Karunanidhi's condition had deteriorated. The hospital said in a statement, "There has been a decline in Karunanidhi's health. Maintaining the function of his vital organs remains a challenge because of his age."
The hospital said in a statement, "Karunanidhi's condition has been continuously deteriorating over the past few hours. Despite maximum medical support, his vital organs are failing. His condition is extremely critical and unstable." The DMK chief was first admitted to Kaveri Hospital on July 18. On July 26 the hospital said Karunanidhi was suffering from a urinary tract infection and was under treatment. The DMK leader's condition worsened on July 28 with a drop in blood pressure, and he was admitted to the hospital, where he has remained since.
Superstar actress and singer who speaks out against vulgarity receives threats; files FIR. - Mithila Dainik
Supaul: [Pranav Chaudhary] Astha Mishra, a daughter of Mithila who has acted in many short films in Maithili cinema and today rules hearts as a singer, has been deeply troubled for the past few days. Astha Mishra said that a few days ago she went live on her Facebook page, where she spoke out against the vulgarity spreading in the Maithili music world; since that day she has been harassed by unknown persons. She said she has been receiving repeated calls from unknown numbers threatening her for going live and speaking out against vulgarity, along with objectionable remarks. She was even told that if she raised her voice against vulgarity, an obscene video of her would be fabricated and circulated, and that no authorities would save her. In light of all this, Astha Mishra had submitted an application at the police station two days ago, but the station took no cognizance of it. She then petitioned the ACP of Supaul for her protection, and an FIR has been registered at the Supaul police station. The station in-charge said the culprit will be caught very soon; for now, all the phone numbers have been handed over to the police, who are working on the case.
SHELL = /bin/sh # V=0 quiet, V=1 verbose. other values don't work. V = 0 Q1 = $(V:1=) Q = $(Q1:0=@) ECHO1 = $(V:1=@:) ECHO = $(ECHO1:0=@echo) #### Start of system configuration section. #### srcdir = . topdir = /home/ubuntu/.rvm/rubies/ruby-2.0.0-p594/include/ruby-2.0.0 hdrdir = $(topdir) arch_hdrdir = /home/ubuntu/.rvm/rubies/ruby-2.0.0-p594/include/ruby-2.0.0/x86_64-linux PATH_SEPARATOR = : VPATH = $(srcdir):$(arch_hdrdir)/ruby:$(hdrdir)/ruby prefix = /home/ubuntu/.rvm/rubies/ruby-2.0.0-p594 rubysitearchprefix = $(rubylibprefix)/$(sitearch) rubyarchprefix = $(rubylibprefix)/$(arch) rubylibprefix = $(libdir)/$(RUBY_BASE_NAME) exec_prefix = $(prefix) vendorarchhdrdir = $(vendorhdrdir)/$(sitearch) sitearchhdrdir = $(sitehdrdir)/$(sitearch) rubyarchhdrdir = $(rubyhdrdir)/$(arch) vendorhdrdir = $(rubyhdrdir)/vendor_ruby sitehdrdir = $(rubyhdrdir)/site_ruby rubyhdrdir = $(includedir)/$(RUBY_VERSION_NAME) vendorarchdir = $(vendorlibdir)/$(sitearch) vendorlibdir = $(vendordir)/$(ruby_version) vendordir = $(rubylibprefix)/vendor_ruby sitearchdir = ./.gem.20150609-18012-1vtxado sitelibdir = ./.gem.20150609-18012-1vtxado sitedir = $(rubylibprefix)/site_ruby rubyarchdir = $(rubylibdir)/$(arch) rubylibdir = $(rubylibprefix)/$(ruby_version) sitearchincludedir = $(includedir)/$(sitearch) archincludedir = $(includedir)/$(arch) sitearchlibdir = $(libdir)/$(sitearch) archlibdir = $(libdir)/$(arch) ridir = $(datarootdir)/$(RI_BASE_NAME) mandir = $(datarootdir)/man localedir = $(datarootdir)/locale libdir = $(exec_prefix)/lib psdir = $(docdir) pdfdir = $(docdir) dvidir = $(docdir) htmldir = $(docdir) infodir = $(datarootdir)/info docdir = $(datarootdir)/doc/$(PACKAGE) oldincludedir = /usr/include includedir = $(prefix)/include localstatedir = $(prefix)/var sharedstatedir = $(prefix)/com sysconfdir = $(prefix)/etc datadir = $(datarootdir) datarootdir = $(prefix)/share libexecdir = $(exec_prefix)/libexec sbindir = $(exec_prefix)/sbin bindir = $(exec_prefix)/bin archdir = 
$(rubyarchdir) CC = gcc CXX = g++ LIBRUBY = $(LIBRUBY_SO) LIBRUBY_A = lib$(RUBY_SO_NAME)-static.a LIBRUBYARG_SHARED = -Wl,-R -Wl,$(libdir) -L$(libdir) -l$(RUBY_SO_NAME) LIBRUBYARG_STATIC = -Wl,-R -Wl,$(libdir) -L$(libdir) -l$(RUBY_SO_NAME)-static empty = OUTFLAG = -o $(empty) COUTFLAG = -o $(empty) RUBY_EXTCONF_H = cflags = $(optflags) $(debugflags) $(warnflags) optflags = -O3 -fno-fast-math debugflags = -ggdb3 warnflags = -Wall -Wextra -Wno-unused-parameter -Wno-parentheses -Wno-long-long -Wno-missing-field-initializers -Wunused-variable -Wpointer-arith -Wwrite-strings -Wdeclaration-after-statement -Wimplicit-function-declaration CCDLFLAGS = -fPIC CFLAGS = $(CCDLFLAGS) $(cflags) -fPIC -O0 -Wall $(ARCH_FLAG) INCFLAGS = -I. -I$(arch_hdrdir) -I$(hdrdir)/ruby/backward -I$(hdrdir) -I$(srcdir) DEFS = CPPFLAGS = $(DEFS) $(cppflags) CXXFLAGS = $(CCDLFLAGS) $(cxxflags) $(ARCH_FLAG) ldflags = -L. -fstack-protector -rdynamic -Wl,-export-dynamic dldflags = ARCH_FLAG = DLDFLAGS = $(ldflags) $(dldflags) $(ARCH_FLAG) LDSHARED = $(CC) -shared LDSHAREDXX = $(CXX) -shared AR = ar EXEEXT = RUBY_INSTALL_NAME = ruby RUBY_SO_NAME = ruby RUBYW_INSTALL_NAME = RUBY_VERSION_NAME = $(RUBY_BASE_NAME)-$(ruby_version) RUBYW_BASE_NAME = rubyw RUBY_BASE_NAME = ruby arch = x86_64-linux sitearch = $(arch) ruby_version = 2.0.0 ruby = $(bindir)/ruby RUBY = $(ruby) ruby_headers = $(hdrdir)/ruby.h $(hdrdir)/ruby/defines.h $(arch_hdrdir)/ruby/config.h RM = rm -f RM_RF = $(RUBY) -run -e rm -- -rf RMDIRS = rmdir --ignore-fail-on-non-empty -p MAKEDIRS = /bin/mkdir -p INSTALL = /usr/bin/install -c INSTALL_PROG = $(INSTALL) -m 0755 INSTALL_DATA = $(INSTALL) -m 644 COPY = cp TOUCH = exit > #### End of system configuration section. #### preload = libpath = . $(libdir) LIBPATH = -L. 
-L$(libdir) -Wl,-R$(libdir) DEFFILE = CLEANFILES = mkmf.log DISTCLEANFILES = DISTCLEANDIRS = extout = extout_prefix = target_prefix = LOCAL_LIBS = LIBS = $(LIBRUBYARG_SHARED) -lc -lpthread -lrt -ldl -lcrypt -lm -lc ORIG_SRCS = gherkin_lexer_sr_latn.c SRCS = $(ORIG_SRCS) OBJS = gherkin_lexer_sr_latn.o HDRS = TARGET = gherkin_lexer_sr_latn TARGET_NAME = gherkin_lexer_sr_latn TARGET_ENTRY = Init_$(TARGET_NAME) DLLIB = $(TARGET).so EXTSTATIC = STATIC_LIB = BINDIR = $(DESTDIR)$(bindir) RUBYCOMMONDIR = $(DESTDIR)$(sitedir)$(target_prefix) RUBYLIBDIR = $(DESTDIR)$(sitelibdir)$(target_prefix) RUBYARCHDIR = $(DESTDIR)$(sitearchdir)$(target_prefix) HDRDIR = $(DESTDIR)$(rubyhdrdir)/ruby$(target_prefix) ARCHHDRDIR = $(DESTDIR)$(rubyhdrdir)/$(arch)/ruby$(target_prefix) TARGET_SO = $(DLLIB) CLEANLIBS = $(TARGET).so CLEANOBJS = *.o *.bak all: $(DLLIB) static: $(STATIC_LIB) .PHONY: all install static install-so install-rb .PHONY: clean clean-so clean-static clean-rb clean-static:: clean-rb-default:: clean-rb:: clean-so:: clean: clean-so clean-static clean-rb-default clean-rb -$(Q)$(RM) $(CLEANLIBS) $(CLEANOBJS) $(CLEANFILES) .*.time distclean-rb-default:: distclean-rb:: distclean-so:: distclean-static:: distclean: clean distclean-so distclean-static distclean-rb-default distclean-rb -$(Q)$(RM) Makefile $(RUBY_EXTCONF_H) conftest.* mkmf.log -$(Q)$(RM) core ruby$(EXEEXT) *~ $(DISTCLEANFILES) -$(Q)$(RMDIRS) $(DISTCLEANDIRS) 2> /dev/null || true realclean: distclean install: install-so install-rb install-so: $(DLLIB) ./.RUBYARCHDIR.time $(INSTALL_PROG) $(DLLIB) $(RUBYARCHDIR) clean-static:: -$(Q)$(RM) $(STATIC_LIB) install-rb: pre-install-rb install-rb-default install-rb-default: pre-install-rb-default pre-install-rb: Makefile pre-install-rb-default: Makefile pre-install-rb-default: $(ECHO) installing default gherkin_lexer_sr_latn libraries ./.RUBYARCHDIR.time: $(Q) $(MAKEDIRS) $(RUBYARCHDIR) $(Q) $(TOUCH) $@ site-install: site-install-so site-install-rb site-install-so: install-so 
site-install-rb: install-rb .SUFFIXES: .c .m .cc .mm .cxx .cpp .C .o .cc.o: $(ECHO) compiling $(<) $(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $< .mm.o: $(ECHO) compiling $(<) $(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $< .cxx.o: $(ECHO) compiling $(<) $(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $< .cpp.o: $(ECHO) compiling $(<) $(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $< .C.o: $(ECHO) compiling $(<) $(Q) $(CXX) $(INCFLAGS) $(CPPFLAGS) $(CXXFLAGS) $(COUTFLAG)$@ -c $< .c.o: $(ECHO) compiling $(<) $(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -c $< .m.o: $(ECHO) compiling $(<) $(Q) $(CC) $(INCFLAGS) $(CPPFLAGS) $(CFLAGS) $(COUTFLAG)$@ -c $< $(DLLIB): $(OBJS) Makefile $(ECHO) linking shared-object $(DLLIB) -$(Q)$(RM) $(@) $(Q) $(LDSHARED) -o $@ $(OBJS) $(LIBPATH) $(DLDFLAGS) $(LOCAL_LIBS) $(LIBS) $(OBJS): $(HDRS) $(ruby_headers)
/*- ******************************************************************************* * Copyright (c) 2011, 2014 Diamond Light Source Ltd. * All rights reserved. This program and the accompanying materials * are made available under the terms of the Eclipse Public License v1.0 * which accompanies this distribution, and is available at * http://www.eclipse.org/legal/epl-v10.html * * Contributors: * Peter Chang - initial API and implementation and/or initial documentation *******************************************************************************/ // This is generated from DoubleDataset.java by fromdouble.py package org.eclipse.dataset.internal.dense; import java.util.ArrayList; import java.util.Arrays; import java.util.List; import java.util.Set; import java.util.TreeSet; import org.apache.commons.math.complex.Complex; import org.eclipse.dataset.IDataset; import org.eclipse.dataset.IDatasetIterator; import org.eclipse.dataset.PositionIterator; import org.eclipse.dataset.dense.BooleanIterator; import org.eclipse.dataset.dense.BroadcastIterator; import org.eclipse.dataset.dense.Dataset; import org.eclipse.dataset.dense.DatasetFactory; import org.eclipse.dataset.dense.DatasetUtils; import org.eclipse.dataset.dense.DTypeUtils; import org.eclipse.dataset.dense.IndexIterator; import org.eclipse.dataset.dense.IntegerIterator; import org.eclipse.dataset.dense.IntegersIterator; import org.eclipse.dataset.dense.SliceIterator; /** * Extend dataset for byte values // PRIM_TYPE */ public class ByteDataset extends AbstractDataset { // pin UID to base class private static final long serialVersionUID = Dataset.serialVersionUID; protected byte[] data; // subclass alias // PRIM_TYPE @Override protected void setData() { data = (byte[]) odata; // PRIM_TYPE } protected static byte[] createArray(final int size) { // PRIM_TYPE byte[] array = null; // PRIM_TYPE try { array = new byte[size]; // PRIM_TYPE } catch (OutOfMemoryError e) { logger.error("The size of the dataset ({}) that is 
being created is too large " + "and there is not enough memory to hold it.", size); throw new OutOfMemoryError("The dimensions given are too large, and there is " + "not enough memory available in the Java Virtual Machine"); } return array; } @Override public int getDType() { return INT8; // DATA_TYPE } public ByteDataset() { } /** * Create a zero-filled dataset of given shape * @param shape */ public ByteDataset(final int... shape) { if (shape.length == 1) { size = shape[0]; if (size < 0) { throw new IllegalArgumentException("Negative component in shape is not allowed"); } } else { size = DatasetUtils.calculateSize(shape); } this.shape = shape.clone(); odata = data = createArray(size); } /** * Create a dataset using given data * @param data * @param shape * (can be null to create 1D dataset) */ public ByteDataset(final byte[] data, int... shape) { // PRIM_TYPE if (data == null) { throw new IllegalArgumentException("Data must not be null"); } if (shape == null || shape.length == 0) { shape = new int[] { data.length }; } size = DatasetUtils.calculateSize(shape); if (size != data.length) { throw new IllegalArgumentException(String.format("Shape %s is not compatible with size of data array, %d", Arrays.toString(shape), data.length)); } this.shape = shape.clone(); odata = this.data = data; } /** * Copy a dataset * @param dataset */ public ByteDataset(final ByteDataset dataset) { copyToView(dataset, this, true, true); if (dataset.stride == null) { odata = data = dataset.data.clone(); } else { offset = 0; stride = null; base = null; odata = data = createArray(size); IndexIterator iter = dataset.getIterator(); for (int i = 0; iter.hasNext(); i++) { data[i] = dataset.data[iter.index]; } } } /** * Cast a dataset to this class type * @param dataset */ public ByteDataset(final Dataset dataset) { copyToView(dataset, this, true, false); offset = 0; stride = null; base = null; odata = data = createArray(size); IndexIterator iter = dataset.getIterator(); for (int i = 0; 
iter.hasNext(); i++) { data[i] = (byte) dataset.getElementLongAbs(iter.index); // GET_ELEMENT_WITH_CAST } } @Override public boolean equals(Object obj) { if (!super.equals(obj)) { return false; } if (getRank() == 0) // already true for zero-rank dataset return true; ByteDataset other = (ByteDataset) obj; IndexIterator iter = getIterator(); IndexIterator oiter = other.getIterator(); while (iter.hasNext() && oiter.hasNext()) { if (data[iter.index] != other.data[oiter.index]) // OBJECT_UNEQUAL return false; } return true; } @Override public int hashCode() { return super.hashCode(); } @Override public ByteDataset clone() { return new ByteDataset(this); } /** * Create a dataset from an object which could be a Java list, array (of arrays...) or Number. Ragged * sequences or arrays are padded with zeros. * * @param obj * @return dataset with contents given by input */ public static ByteDataset createFromObject(final Object obj) { ByteDataset result = new ByteDataset(); result.shape = DatasetUtils.getShapeFromObject(obj); result.size = DatasetUtils.calculateSize(result.shape); result.odata = result.data = createArray(result.size); int[] pos = new int[result.shape.length]; result.fillData(obj, 0, pos); return result; } /** * * @param stop * @return a new 1D dataset, filled with values determined by parameters */ public static ByteDataset createRange(final double stop) { return createRange(0, stop, 1); } /** * * @param start * @param stop * @param step * @return a new 1D dataset, filled with values determined by parameters */ public static ByteDataset createRange(final double start, final double stop, final double step) { int size = calcSteps(start, stop, step); ByteDataset result = new ByteDataset(size); for (int i = 0; i < size; i++) { result.data[i] = (byte) (start + i * step); // PRIM_TYPE // ADD_CAST } return result; } /** * @param shape * @return a dataset filled with ones */ public static ByteDataset ones(final int... 
shape) { return new ByteDataset(shape).fill(1); } @Override public ByteDataset fill(final Object obj) { byte dv = (byte) DTypeUtils.toLong(obj); // PRIM_TYPE // FROM_OBJECT IndexIterator iter = getIterator(); while (iter.hasNext()) { data[iter.index] = dv; } setDirty(); return this; } /** * This is a typed version of {@link #getBuffer()} * @return data buffer as linear array */ public byte[] getData() { // PRIM_TYPE return data; } @Override protected int getBufferLength() { if (data == null) return 0; return data.length; } @Override public ByteDataset getView() { ByteDataset view = new ByteDataset(); copyToView(this, view, true, true); view.setData(); return view; } /** * Get a value from an absolute index of the internal array. This is an internal method with no checks so can be * dangerous. Use with care or ideally with an iterator. * * @param index * absolute index * @return value */ public byte getAbs(final int index) { // PRIM_TYPE return data[index]; } @Override public boolean getElementBooleanAbs(final int index) { return data[index] != 0; // BOOLEAN_FALSE } @Override public double getElementDoubleAbs(final int index) { return data[index]; // BOOLEAN_ZERO } @Override public long getElementLongAbs(final int index) { return data[index]; // BOOLEAN_ZERO // OMIT_CAST_INT } @Override public Object getObjectAbs(final int index) { return data[index]; } @Override public String getStringAbs(final int index) { return stringFormat == null ? String.format("%d", data[index]) : // FORMAT_STRING stringFormat.format(data[index]); } /** * Set a value at absolute index in the internal array. This is an internal method with no checks so can be * dangerous. Use with care or ideally with an iterator. 
* * @param index * absolute index * @param val * new value */ public void setAbs(final int index, final byte val) { // PRIM_TYPE data[index] = val; setDirty(); } @Override public void setItemDirect(final int dindex, final int sindex, final Object src) { byte[] dsrc = (byte[]) src; // PRIM_TYPE data[dindex] = dsrc[sindex]; } @Override public void setObjectAbs(final int index, final Object obj) { if (index < 0 || index > data.length) { throw new IndexOutOfBoundsException("Index given is outside dataset"); } setAbs(index, (byte) DTypeUtils.toLong(obj)); // FROM_OBJECT } /** * @param i * @return item in given position */ public byte get(final int i) { // PRIM_TYPE return data[get1DIndex(i)]; } /** * @param i * @param j * @return item in given position */ public byte get(final int i, final int j) { // PRIM_TYPE return data[get1DIndex(i, j)]; } /** * @param pos * @return item in given position */ public byte get(final int... pos) { // PRIM_TYPE return data[get1DIndex(pos)]; } @Override public Object getObject(final int i) { return Byte.valueOf(get(i)); // CLASS_TYPE } @Override public Object getObject(final int i, final int j) { return Byte.valueOf(get(i, j)); // CLASS_TYPE } @Override public Object getObject(final int... pos) { return Byte.valueOf(get(pos)); // CLASS_TYPE } @Override public String getString(final int i) { return getStringAbs(get1DIndex(i)); } @Override public String getString(final int i, final int j) { return getStringAbs(get1DIndex(i, j)); } @Override public String getString(final int... pos) { return getStringAbs(get1DIndex(pos)); } @Override public double getDouble(final int i) { return get(i); // BOOLEAN_ZERO } @Override public double getDouble(final int i, final int j) { return get(i, j); // BOOLEAN_ZERO } @Override public double getDouble(final int... 
pos) { return get(pos); // BOOLEAN_ZERO } @Override public float getFloat(final int i) { return get(i); // BOOLEAN_ZERO // OMIT_REAL_CAST } @Override public float getFloat(final int i, final int j) { return get(i, j); // BOOLEAN_ZERO // OMIT_REAL_CAST } @Override public float getFloat(final int... pos) { return get(pos); // BOOLEAN_ZERO // OMIT_REAL_CAST } @Override public long getLong(final int i) { return get(i); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public long getLong(final int i, final int j) { return get(i, j); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public long getLong(final int... pos) { return get(pos); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public int getInt(final int i) { return get(i); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public int getInt(final int i, final int j) { return get(i, j); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public int getInt(final int... pos) { return get(pos); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public short getShort(final int i) { return get(i); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public short getShort(final int i, final int j) { return get(i, j); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public short getShort(final int... pos) { return get(pos); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public byte getByte(final int i) { return get(i); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public byte getByte(final int i, final int j) { return get(i, j); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public byte getByte(final int... pos) { return get(pos); // BOOLEAN_ZERO // OMIT_UPCAST } @Override public boolean getBoolean(final int i) { return get(i) != 0; // BOOLEAN_FALSE } @Override public boolean getBoolean(final int i, final int j) { return get(i, j) != 0; // BOOLEAN_FALSE } @Override public boolean getBoolean(final int... pos) { return get(pos) != 0; // BOOLEAN_FALSE } /** * Sets the value at a particular point to the passed value. 
The dataset must be 1D * * @param value * @param i */ public void setItem(final byte value, final int i) { // PRIM_TYPE setAbs(get1DIndex(i), value); } /** * Sets the value at a particular point to the passed value. The dataset must be 2D * * @param value * @param i * @param j */ public void setItem(final byte value, final int i, final int j) { // PRIM_TYPE setAbs(get1DIndex(i, j), value); } /** * Sets the value at a particular point to the passed value * * @param value * @param pos */ public void setItem(final byte value, final int... pos) { // PRIM_TYPE setAbs(get1DIndex(pos), value); } @Override public void set(final Object obj, final int i) { setItem((byte) DTypeUtils.toLong(obj), i); // FROM_OBJECT } @Override public void set(final Object obj, final int i, final int j) { setItem((byte) DTypeUtils.toLong(obj), i, j); // FROM_OBJECT } @Override public void set(final Object obj, int... pos) { if (pos == null || (pos.length == 0 && shape.length > 0)) { pos = new int[shape.length]; } setItem((byte) DTypeUtils.toLong(obj), pos); // FROM_OBJECT } @Override public void resize(int... 
newShape) { final IndexIterator iter = getIterator(); final int nsize = DatasetUtils.calculateSize(newShape); final byte[] ndata = createArray(nsize); // PRIM_TYPE for (int i = 0; iter.hasNext() && i < nsize; i++) { ndata[i] = data[iter.index]; } odata = data = ndata; size = nsize; shape = newShape; stride = null; offset = 0; base = null; } @Override public ByteDataset sort(Integer axis) { if (axis == null) { Arrays.sort(data); } else { axis = checkAxis(axis); ByteDataset ads = new ByteDataset(shape[axis]); PositionIterator pi = getPositionIterator(axis); int[] pos = pi.getPos(); boolean[] hit = pi.getOmit(); while (pi.hasNext()) { copyItemsFromAxes(pos, hit, ads); Arrays.sort(ads.data); setItemsOnAxes(pos, hit, ads.data); } } setDirty(); return this; // throw new UnsupportedOperationException("Cannot sort dataset"); // BOOLEAN_USE } @Override public ByteDataset getUniqueItems() { Set<Byte> set = new TreeSet<Byte>(); // CLASS_TYPE IndexIterator it = getIterator(); while (it.hasNext()) { set.add(data[it.index]); } ByteDataset u = new ByteDataset(set.size()); // CLASS_TYPE int i = 0; byte[] udata = u.getData(); // PRIM_TYPE for (Byte v : set) { // CLASS_TYPE udata[i++] = v; } return u; } @Override public ByteDataset getSlice(final SliceIterator siter) { ByteDataset result = new ByteDataset(siter.getShape()); byte[] rdata = result.data; // PRIM_TYPE for (int i = 0; siter.hasNext(); i++) rdata[i] = data[siter.index]; result.setName(name + BLOCK_OPEN + siter.toString() + BLOCK_CLOSE); return result; } @Override public void fillDataset(Dataset result, IndexIterator iter) { IndexIterator riter = result.getIterator(); byte[] rdata = ((ByteDataset) result).data; // PRIM_TYPE while (riter.hasNext() && iter.hasNext()) rdata[riter.index] = data[iter.index]; } @Override public ByteDataset setByBoolean(final Object obj, Dataset selection) { if (obj instanceof Dataset) { final Dataset ds = (Dataset) obj; final int length = ((Number) selection.sum()).intValue(); if (length != 
ds.getSize()) { throw new IllegalArgumentException( "Number of true items in selection does not match number of items in dataset"); } final IndexIterator oiter = ds.getIterator(); final BooleanIterator biter = getBooleanIterator(selection); while (biter.hasNext() && oiter.hasNext()) { data[biter.index] = (byte) ds.getElementLongAbs(oiter.index); // GET_ELEMENT_WITH_CAST } } else { final byte dv = (byte) DTypeUtils.toLong(obj); // PRIM_TYPE // FROM_OBJECT final BooleanIterator biter = getBooleanIterator(selection); while (biter.hasNext()) { data[biter.index] = dv; } } setDirty(); return this; } @Override public ByteDataset setBy1DIndex(final Object obj, final Dataset index) { if (obj instanceof Dataset) { final Dataset ds = (Dataset) obj; if (index.getSize() != ds.getSize()) { throw new IllegalArgumentException( "Number of items in index dataset does not match number of items in dataset"); } final IndexIterator oiter = ds.getIterator(); final IntegerIterator iter = new IntegerIterator(index, size); while (iter.hasNext() && oiter.hasNext()) { data[iter.index] = (byte) ds.getElementLongAbs(oiter.index); // GET_ELEMENT_WITH_CAST } } else { final byte dv = (byte) DTypeUtils.toLong(obj); // PRIM_TYPE // FROM_OBJECT IntegerIterator iter = new IntegerIterator(index, size); while (iter.hasNext()) { data[iter.index] = dv; } } setDirty(); return this; } @Override public ByteDataset setByIndexes(final Object obj, final Object... 
indexes) { final IntegersIterator iter = new IntegersIterator(shape, indexes); final int[] pos = iter.getPos(); if (obj instanceof Dataset) { final Dataset ds = (Dataset) obj; if (DatasetUtils.calculateSize(iter.getShape()) != ds.getSize()) { throw new IllegalArgumentException( "Number of items in index datasets does not match number of items in dataset"); } final IndexIterator oiter = ds.getIterator(); while (iter.hasNext() && oiter.hasNext()) { setItem((byte) ds.getElementLongAbs(oiter.index), pos); // GET_ELEMENT_WITH_CAST } } else { final byte dv = (byte) DTypeUtils.toLong(obj); // PRIM_TYPE // FROM_OBJECT while (iter.hasNext()) { setItem(dv, pos); } } setDirty(); return this; } @Override ByteDataset setSlicedView(Dataset view, Dataset d) { final BroadcastIterator it = BroadcastIterator.createIterator(view, d); while (it.hasNext()) { data[it.aIndex] = (byte) d.getElementLongAbs(it.bIndex); // GET_ELEMENT_WITH_CAST } return this; } @Override public ByteDataset setSlice(final Object obj, final IndexIterator siter) { if (obj instanceof IDataset) { final IDataset ds = (IDataset) obj; final int[] oshape = ds.getShape(); if (!DatasetUtils.areShapesCompatible(siter.getShape(), oshape)) { throw new IllegalArgumentException(String.format( "Input dataset is not compatible with slice: %s cf %s", Arrays.toString(oshape), Arrays.toString(siter.getShape()))); } if (ds instanceof Dataset) { final Dataset ads = (Dataset) ds; final IndexIterator oiter = ads.getIterator(); while (siter.hasNext() && oiter.hasNext()) data[siter.index] = (byte) ads.getElementLongAbs(oiter.index); // GET_ELEMENT_WITH_CAST } else { final IDatasetIterator oiter = new PositionIterator(oshape); final int[] pos = oiter.getPos(); while (siter.hasNext() && oiter.hasNext()) data[siter.index] = ds.getByte(pos); // PRIM_TYPE } } else { try { byte v = (byte) DTypeUtils.toLong(obj); // PRIM_TYPE // FROM_OBJECT while (siter.hasNext()) data[siter.index] = v; } catch (IllegalArgumentException e) { throw new 
IllegalArgumentException("Object for setting slice is not a dataset or number"); } } setDirty(); return this; } @Override public void copyItemsFromAxes(final int[] pos, final boolean[] axes, final Dataset dest) { byte[] ddata = (byte[]) dest.getBuffer(); // PRIM_TYPE SliceIterator siter = getSliceIteratorFromAxes(pos, axes); int[] sshape = DatasetUtils.squeezeShape(siter.getShape(), false); IndexIterator diter = dest.getSliceIterator(null, sshape, null); if (ddata.length < DatasetUtils.calculateSize(sshape)) { throw new IllegalArgumentException("destination array is not large enough"); } while (siter.hasNext() && diter.hasNext()) ddata[diter.index] = data[siter.index]; } @Override public void setItemsOnAxes(final int[] pos, final boolean[] axes, final Object src) { byte[] sdata = (byte[]) src; // PRIM_TYPE SliceIterator siter = getSliceIteratorFromAxes(pos, axes); if (sdata.length < DatasetUtils.calculateSize(siter.getShape())) { throw new IllegalArgumentException("destination array is not large enough"); } for (int i = 0; siter.hasNext(); i++) { data[siter.index] = sdata[i]; } setDirty(); } @Override protected Number fromDoubleToNumber(double x) { byte r = (byte) (long) x; // ADD_CAST // PRIM_TYPE_LONG return Byte.valueOf(r); // CLASS_TYPE // return Integer.valueOf((int) (long) x); // BOOLEAN_USE // return null; // OBJECT_USE } private List<int[]> findPositions(final byte value) { // PRIM_TYPE IndexIterator iter = getIterator(true); List<int[]> posns = new ArrayList<int[]>(); int[] pos = iter.getPos(); { while (iter.hasNext()) { if (data[iter.index] == value) { posns.add(pos.clone()); } } } return posns; } @SuppressWarnings({ "unchecked" }) @Override public int[] maxPos(boolean ignoreInvalids) { if (storedValues == null || storedValues.isEmpty()) { calculateMaxMin(ignoreInvalids, ignoreInvalids); } String n = storeName(ignoreInvalids, ignoreInvalids, STORE_MAX_POS); Object o = storedValues.get(n); List<int[]> max = null; if (o == null) { max = 
findPositions(max(ignoreInvalids).byteValue()); // PRIM_TYPE // max = findPositions(max(false).intValue() != 0); // BOOLEAN_USE // max = findPositions(null); // OBJECT_USE storedValues.put(n, max); } else if (o instanceof List<?>) { max = (List<int[]>) o; } else { throw new InternalError("Inconsistent internal state of stored values for statistics calculation"); } return max.get(0); // first maximum } @SuppressWarnings({ "unchecked" }) @Override public int[] minPos(boolean ignoreInvalids) { if (storedValues == null || storedValues.isEmpty()) { calculateMaxMin(ignoreInvalids, ignoreInvalids); } String n = storeName(ignoreInvalids, ignoreInvalids, STORE_MIN_POS); Object o = storedValues.get(n); List<int[]> min = null; if (o == null) { min = findPositions(min(ignoreInvalids).byteValue()); // PRIM_TYPE // min = findPositions(min(false).intValue() != 0); // BOOLEAN_USE // min = findPositions(null); // OBJECT_USE storedValues.put(n, min); } else if (o instanceof List<?>) { min = (List<int[]>) o; } else { throw new InternalError("Inconsistent internal state of stored values for statistics calculation"); } return min.get(0); // first minimum } @Override public boolean containsNans() { return false; } @Override public boolean containsInfs() { return false; } @Override public boolean containsInvalidNumbers() { return false; } @Override public ByteDataset iadd(final Object b) { Dataset bds = b instanceof Dataset ? 
(Dataset) b : DatasetFactory.createFromObject(b); boolean useLong = bds.getElementClass().equals(Long.class); if (bds.getSize() == 1) { final IndexIterator it = getIterator(); if (useLong) { final long lb = bds.getElementLongAbs(0); while (it.hasNext()) { data[it.index] += lb; } } else { final double db = bds.getElementDoubleAbs(0); while (it.hasNext()) { data[it.index] += db; } } } else { final BroadcastIterator it = BroadcastIterator.createIterator(this, bds); it.setOutputDouble(!useLong); if (useLong) { while (it.hasNext()) { data[it.aIndex] += it.bLong; } } else { while (it.hasNext()) { data[it.aIndex] += it.bDouble; } } } setDirty(); return this; } @Override public ByteDataset isubtract(final Object b) { Dataset bds = b instanceof Dataset ? (Dataset) b : DatasetFactory.createFromObject(b); boolean useLong = bds.getElementClass().equals(Long.class); if (bds.getSize() == 1) { final IndexIterator it = getIterator(); if (useLong) { final long lb = bds.getElementLongAbs(0); while (it.hasNext()) { data[it.index] -= lb; } } else { final double db = bds.getElementDoubleAbs(0); while (it.hasNext()) { data[it.index] -= db; } } } else { final BroadcastIterator it = BroadcastIterator.createIterator(this, bds); if (useLong) { it.setOutputDouble(false); while (it.hasNext()) { data[it.aIndex] -= it.bLong; } } else { it.setOutputDouble(true); while (it.hasNext()) { data[it.aIndex] -= it.bDouble; } } } setDirty(); return this; } @Override public ByteDataset imultiply(final Object b) { Dataset bds = b instanceof Dataset ? 
(Dataset) b : DatasetFactory.createFromObject(b); boolean useLong = bds.getElementClass().equals(Long.class); if (bds.getSize() == 1) { final IndexIterator it = getIterator(); if (useLong) { final long lb = bds.getElementLongAbs(0); while (it.hasNext()) { data[it.index] *= lb; } } else { final double db = bds.getElementDoubleAbs(0); while (it.hasNext()) { data[it.index] *= db; } } } else { final BroadcastIterator it = BroadcastIterator.createIterator(this, bds); it.setOutputDouble(!useLong); if (useLong) { while (it.hasNext()) { data[it.aIndex] *= it.bLong; } } else { while (it.hasNext()) { data[it.aIndex] *= it.bDouble; } } } setDirty(); return this; } @Override public ByteDataset idivide(final Object b) { Dataset bds = b instanceof Dataset ? (Dataset) b : DatasetFactory.createFromObject(b); boolean useLong = bds.getElementClass().equals(Long.class); if (bds.getSize() == 1) { if (useLong) { final long lb = bds.getElementLongAbs(0); if (lb == 0) { // INT_USE fill(0); // INT_USE } else { // INT_USE final IndexIterator it = getIterator(); while (it.hasNext()) { data[it.index] /= lb; } } // INT_USE } else { final double db = bds.getElementDoubleAbs(0); if (db == 0) { // INT_USE fill(0); // INT_USE } else { // INT_USE final IndexIterator it = getIterator(); while (it.hasNext()) { data[it.index] /= db; } } // INT_USE } } else { final BroadcastIterator it = BroadcastIterator.createIterator(this, bds); it.setOutputDouble(!useLong); if (useLong) { while (it.hasNext()) { if (it.bLong == 0) { // INT_USE data[it.aIndex] = 0; // INT_USE } else { // INT_USE data[it.aIndex] /= it.bLong; } // INT_USE } } else { while (it.hasNext()) { if (it.bDouble == 0) { // INT_USE data[it.aIndex] = 0; // INT_USE } else { // INT_USE data[it.aIndex] /= it.bDouble; } // INT_USE } } } setDirty(); return this; } @Override public ByteDataset ifloor() { return this; } @Override public ByteDataset iremainder(final Object b) { Dataset bds = b instanceof Dataset ? 
(Dataset) b : DatasetFactory.createFromObject(b); boolean useLong = bds.getElementClass().equals(Long.class); if (bds.getSize() == 1) { if (useLong) { final long lb = bds.getElementLongAbs(0); if (lb == 0) { // INT_USE fill(0); // INT_USE } else { // INT_USE final IndexIterator it = getIterator(); while (it.hasNext()) { data[it.index] %= lb; } } // INT_USE } else { final long lb = bds.getElementLongAbs(0); if (lb == 0) { // INT_USE fill(0); // INT_USE } else { // INT_USE final IndexIterator it = getIterator(); while (it.hasNext()) { data[it.index] %= lb; } } // INT_USE } } else { final BroadcastIterator it = BroadcastIterator.createIterator(this, bds); it.setOutputDouble(!useLong); if (useLong) { while (it.hasNext()) { try { data[it.aIndex] %= it.bLong; // INT_EXCEPTION } catch (ArithmeticException e) { data[it.aIndex] = 0; } } } else { while (it.hasNext()) { try { data[it.aIndex] %= it.bDouble; // INT_EXCEPTION } catch (ArithmeticException e) { data[it.aIndex] = 0; } } } } setDirty(); return this; } @Override public ByteDataset ipower(final Object b) { Dataset bds = b instanceof Dataset ? 
(Dataset) b : DatasetFactory.createFromObject(b); if (bds.getSize() == 1) { final double vr = bds.getElementDoubleAbs(0); final IndexIterator it = getIterator(); if (bds.isComplex()) { final double vi = bds.getElementDoubleAbs(1); if (vi == 0) { while (it.hasNext()) { final double v = Math.pow(data[it.index], vr); if (Double.isInfinite(v) || Double.isNaN(v)) { // INT_USE data[it.index] = 0; // INT_USE } else { // INT_USE data[it.index] = (byte) (long) v; // PRIM_TYPE_LONG // ADD_CAST } // INT_USE } } else { final Complex zv = new Complex(vr, vi); while (it.hasNext()) { Complex zd = new Complex(data[it.index], 0); final double v = zd.pow(zv).getReal(); if (Double.isInfinite(v) || Double.isNaN(v)) { // INT_USE data[it.index] = 0; // INT_USE } else { // INT_USE data[it.index] = (byte) (long) v; // PRIM_TYPE_LONG // ADD_CAST } // INT_USE } } } else {// NAN_OMIT while (it.hasNext()) { final double v = Math.pow(data[it.index], vr); if (Double.isInfinite(v) || Double.isNaN(v)) { // INT_USE data[it.index] = 0; // INT_USE } else { // INT_USE data[it.index] = (byte) (long) v; // PRIM_TYPE_LONG // ADD_CAST } // INT_USE } } } else { final BroadcastIterator it = BroadcastIterator.createIterator(this, bds); it.setOutputDouble(true); if (bds.isComplex()) { while (it.hasNext()) { final Complex zv = new Complex(it.bDouble, bds.getElementDoubleAbs(it.bIndex + 1)); final double v = new Complex(it.aDouble, 0).pow(zv).getReal(); if (Double.isInfinite(v) || Double.isNaN(v)) { // INT_USE data[it.aIndex] = 0; // INT_USE } else { // INT_USE data[it.aIndex] = (byte) (long) v; // PRIM_TYPE_LONG // ADD_CAST } // INT_USE } } else {// NAN_OMIT while (it.hasNext()) { final double v = Math.pow(it.aDouble, it.bDouble); if (Double.isInfinite(v) || Double.isNaN(v)) { // INT_USE data[it.aIndex] = 0; // INT_USE } else { // INT_USE data[it.aIndex] = (byte) (long) v; // PRIM_TYPE_LONG // ADD_CAST } // INT_USE } } } setDirty(); return this; } @Override public double residual(final Object b, final Dataset 
w, boolean ignoreNaNs) { Dataset bds = b instanceof Dataset ? (Dataset) b : DatasetFactory.createFromObject(b); final BroadcastIterator it = BroadcastIterator.createIterator(this, bds); it.setOutputDouble(true); double sum = 0; double comp = 0; { if (w == null) { while (it.hasNext()) { final double diff = it.aDouble - it.bDouble; final double err = diff * diff - comp; final double temp = sum + err; comp = (temp - sum) - err; sum = temp; } } else { IndexIterator itw = w.getIterator(); while (it.hasNext() && itw.hasNext()) { final double diff = it.aDouble - it.bDouble; final double err = diff * diff * w.getElementDoubleAbs(itw.index) - comp; final double temp = sum + err; comp = (temp - sum) - err; sum = temp; } } } return sum; } }
code
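The ByteDataset fragment above stores every element in a flat `byte[]` and maps multi-dimensional positions to a single absolute index (`get1DIndex`, `getAbs`, `setAbs`, `fill`). A minimal self-contained sketch of that storage pattern, assuming a plain row-major layout — the class name `FlatByteArray` and its methods are illustrative stand-ins, not part of the library:

```java
// Sketch of the flat-array storage pattern used by ByteDataset:
// an n-D dataset backed by a 1-D byte[] with row-major index arithmetic.
public class FlatByteArray {
    private final byte[] data;
    private final int[] shape;

    public FlatByteArray(int... shape) {
        this.shape = shape.clone();
        int size = 1;
        for (int s : shape) {
            size *= s;
        }
        this.data = new byte[size];
    }

    // Row-major mapping of an n-D position to an absolute 1-D index,
    // playing the role of get1DIndex in the dataset class above.
    private int index(int... pos) {
        int idx = 0;
        for (int i = 0; i < shape.length; i++) {
            idx = idx * shape[i] + pos[i];
        }
        return idx;
    }

    public byte get(int... pos) {
        return data[index(pos)];
    }

    public void set(byte value, int... pos) {
        data[index(pos)] = value;
    }

    // Analogue of fill(Object): assign one value to every element.
    public FlatByteArray fill(byte value) {
        java.util.Arrays.fill(data, value);
        return this;
    }

    public static void main(String[] args) {
        FlatByteArray a = new FlatByteArray(2, 3).fill((byte) 1);
        a.set((byte) 7, 1, 2); // row 1, column 2 -> flat index 1*3 + 2 = 5
        System.out.println(a.get(1, 2)); // prints 7
        System.out.println(a.get(0, 0)); // prints 1
    }
}
```

The single flat buffer is what lets the library's iterators (`IndexIterator`, `SliceIterator`, `BroadcastIterator`) walk any view of the data by manipulating one integer index instead of nested loops.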
Advantages of roll crusher, sansaivt.com. About Zenith: Shanghai Zenith Mining and Construction Machinery Co., Ltd. is a hi-tech engineering group.
english
Wherever we look, population is the driver of the most toxic issues on the political agenda. But the population bomb is being defused. Half the world's women are having two children or fewer. Within a generation, the world's population will be falling. And we will all be getting very old. So should we welcome the return to centre stage of the tribal elders? Or is humanity facing a fate worse than environmental apocalypse? Brilliant, heretical and accessible to all, Fred Pearce takes on the matter that is fundamental to who we are and how we live, confronting our demographic demons.
english
I would love to own one of these Mini Stackables from Best Craft Organizers. To see the details, scroll down till you see Kit C in Maple...only I'd choose black drawers instead, and add the inserts to the 2" drawers. I would keep my assorted wood mounted stamp collection in the 1" drawers. I already own four of the above PortaInk units from the same company and I loooooove them. I had great service from them and they are actually Canadian, based in BC...cool, eh?! My mission over the next month is to sort through all the outgrown clothes/toys/gear from the boys and sell it all...proceeds will go to my mini stackable!!
english
#pragma once

#include <opencv2/opencv.hpp>
#include "math.h"
#include <vector>
#include "Image_basic_function.h"
#include "Find_Four_Line.h"
#include "Find_Four_Points.h"

using namespace cv;

class Projection_Image {
public:
    float scale;
    Mat Dst_Projection_Image;

public:
    Projection_Image();
    void Interface_Information(Mat& input);

private:
    void Exchange_with_Projection_Mat(Mat& input, vector<CvPoint>& Points);
};
code
This barn on our family farm caught fire, burnt down and was rebuilt in 1911. In the arched door the year ‘1886’ is etched. A stone house across the road was built in 1887.
english
const path = require('path');
const test = require('tap').test;
const makeTestStorage = require('../fixtures/make-test-storage');
const extract = require('../fixtures/extract');
const VirtualMachine = require('../../src/index');

const projectUri = path.resolve(__dirname, '../fixtures/stack-click.sb2');
const project = extract(projectUri);

/**
 * stack-click.sb2 contains a sprite at (0, 0) with a single stack
 *     when timer > 100000000
 *         move 100 steps
 * The intention is to make sure that the stack can be activated by a stack click
 * even when the hat predicate is false.
 */
test('stack click activates the stack', t => {
    const vm = new VirtualMachine();
    vm.attachStorage(makeTestStorage());

    // Evaluate playground data and exit
    vm.on('playgroundData', () => {
        // The sprite should have moved 100 to the right
        t.equal(vm.editingTarget.x, 100);
        t.end();
        process.nextTick(process.exit);
    });

    // Start VM, load project, and run
    t.doesNotThrow(() => {
        vm.start();
        vm.clear();
        vm.setCompatibilityMode(false);
        vm.setTurboMode(false);
        vm.loadProject(project).then(() => {
            const blockContainer = vm.runtime.targets[1].blocks;
            const allBlocks = blockContainer._blocks;

            // Confirm the editing target is initially at 0
            t.equal(vm.editingTarget.x, 0);

            // Find hat for greater than and click it
            for (const blockId in allBlocks) {
                if (allBlocks[blockId].opcode === 'event_whengreaterthan') {
                    blockContainer.blocklyListen({
                        blockId: blockId,
                        element: 'stackclick'
                    }, vm.runtime);
                }
            }

            // After two seconds, get playground data and stop
            setTimeout(() => {
                vm.getPlaygroundData();
                vm.stopAll();
            }, 2000);
        });
    });
});
code
I’m fascinated by what makes an Olympic champion, and I’ve been thinking about that since the Olympic games started a few days ago. 3 things never fail them. If we do these same 3 things, they won’t fail us, either. Would you rather watch a video of this blog post? If so, here it is for you. Or, you can read the text, below the video. I keep thinking these athletes have spent years preparing for their shot at greatness in the Olympics, and it’s such a narrow opportunity. Once in a lifetime for many of them, right? 3 things got them where they are. 3 things you can depend on forever to help you in your own business – so that you get your own shot at success. Clarity about their goal. These athletes are absolutely clear about what they want to accomplish. If they are skiers, they ski. They don’t wander over and start wondering if they should train for ice skating or curling. They are 100% clear that they want to compete as a skier. Are you as clear about your own goal for your business? Focus. Each athlete has sacrificed other things in life in order to focus on their goal. Going to a normal public high school has been set aside. Maybe dating has been set aside. Living at home with mom and dad has been set aside. In order to truly be in the game each athlete has had to narrow their life down to this focus of being in the Olympics. They are obsessive about it, right? If we want big success as a business owner, we also have to be that focused. We have to set aside other things for a while so that we can get to where we want to be. Practicing until mastery is achieved. If an athlete gets to the Olympics it means she has practiced her sport thousands of times, until she is a complete master at what she does. If she’s a skier, she’s fallen hundreds of times. If she’s a skater, she’s skated her routine thousands of times until she can do it no matter what arena she is in. 
I think that as business owners we often give up on practicing what we have to do before we become masters at it. If we market through Facebook ads, we try a few and quit. If we market through blogging, we get started but quit because there isn’t quick success. We can’t do that. We have to practice that part of our business until we achieve mastery. I have a few resources for you that will help with these 3 things. My Maximize Your Message course helps you establish brand clarity and know your goal. Check it out and get notified for the March class. My 12 week coaching program helps you focus and figure out what to set aside so that you will meet that clear goal. My Facebook Ads Tips Workshop (this Friday, February 16th) will help you learn best practices for creating Facebook ads so that you don’t waste time or money. By the way, once you attend the workshop you can sit in on it again at no cost. You can be great at what you do, just like the amazing Olympic athletes. Clarity, focus, and practice will get you there. Be the Olympian of your business. It’s completely possible for you.
english
تَوٲریٖخ دان چھُ تَس سخصَس یِوان وَننہٕ یُس تَوٲریٖخ زانَن یا لیکھن آسہِ
kashmiri
A tiff between two judges in Delaware continued Thursday when the chief justice of the state’s supreme court threw some thinly veiled criticism at the head of the state’s business court. During a panel discussion at an M&A conference, Chief Justice Myron T. Steele criticized judges who make comments outside of written legal opinions and include personal ruminations that aren’t necessarily a legal issue in the matter. “Every time they open their mouth they make the law,” Steele said Thursday, discussing how he would love to get that idea into the heads of judges. Steele said it was inappropriate to force lawyers to read transcripts in order to determine the law. But for the lawyers in the audience at the Tulane University Corporate Law Institute’s gathering, the likely target of his comments was clear: Delaware Chancellor Leo Strine Jr. Last fall the supreme court issued a rebuke of Strine in a formal opinion that had the legal world abuzz. Strine, who declined to comment but will speak at the conference Friday, is famous for colorful opinions full of long analogies and pop-culture references.
english
Set within a small complex of only 8, this spacious apartment exudes the highly sought after 'on top' Buderim village lifestyle. This light and airy unit has the perfect lifestyle balance from bush to beach. The generous, elevated balcony soaks up the morning sun and delivers mesmerising easterly views to the ocean, whilst the adjoining bush setting provides privacy and serenity. The apartment consists of two generous sized bedrooms that share a full size bathroom with separate toilet and loads of storage, including a large private storage room located across from a single secure garage directly under the apartment. Situated in a prime location, you're only 2 minutes' walk into the village, including a myriad of shops, cafés, restaurants, grocery stores as well as a local surgery with adjoining pharmacy, schools and public transport. Buderim is only minutes' drive to local beaches, the new Maroochydore CBD and Sunshine Plaza, and 15 minutes to Maroochydore Airport and the newly opened Sunshine Coast University Hospital. The current owner has reluctantly decided to move to greener pastures after 13 years. Owner occupiers won't want to miss the opportunity to call this piece of paradise home, or investors can allow the current tenant to extend their lease (expires February). Either way, an inspection is a must.
english
<?php

namespace Fisharebest\Localization\Locale;

use Fisharebest\Localization\Script\ScriptLatn;
use Fisharebest\Localization\Territory\TerritoryTn;

/**
 * Class LocaleArTn
 *
 * @author    Greg Roach <[email protected]>
 * @copyright (c) 2020 Greg Roach
 * @license   GPLv3+
 */
class LocaleArTn extends LocaleAr
{
    public function numberSymbols()
    {
        return array(
            self::GROUP    => self::DOT,
            self::DECIMAL  => self::COMMA,
            self::NEGATIVE => self::LTR_MARK . '-',
        );
    }

    protected function numerals()
    {
        $latin = new ScriptLatn();

        return $latin->numerals();
    }

    protected function percentFormat()
    {
        return self::PLACEHOLDER . self::LTR_MARK . self::PERCENT . self::LTR_MARK;
    }

    public function territory()
    {
        return new TerritoryTn();
    }
}
code
The star, who boarded The Walking Dead early on during its fourth season in 2013, remembers her last day on set as being “really nice” despite Tara’s looming death at the hands of the Whisperers, who erected a gruesome border marking Whisperer territory with the decapitated and subsequently reanimated heads of victims abducted from the fair. Masterson’s Tara was the fourth-longest surviving character still in play on The Walking Dead, behind only Norman Reedus’ Daryl and Melissa McBride’s Carol, who each started with the show in its first season, and Danai Gurira’s Michonne, who joined in Season Three. The Walking Dead is next hit with a ferocious blizzard when it premieres its Season Nine finale, “The Storm,” Sunday, March 31 at 9/8c on AMC.
english
Fear Might Be Destroying You. Learn How to Recognize and Transcend It. This idea that all of our problems are ultimately linked to fear can be both a comforting and a daunting realization. A comfort, because it simplifies all our problems down to one: if we eliminate the fear in our lives, then we also eliminate in one fell swoop our jealousy, anger, insecurities, sadness, etc. But it can also be a daunting realization. Once we start taking stock of all our anxieties, insecurities, and emotional roadblocks, we may realize that we have a more complex and entrenched relationship with fear than we were prepared to admit. Why should we care, though, if fear underlies our decisions, or if it perpetuates itself throughout our lives? For the same reason that being unaware of our inaccurate perceptions and predictions causes suffering, being unaware of the subtle anxieties that pile up in our life as a result of fear can guide us down a path that we never would have chosen for ourselves. Fear can be incredibly destructive, or it can guide us to high achievement. Unless we gain awareness of its presence and either transcend it or use it as a tool for our noble goals, it will forever be our master. The world we live in is incredibly complex and at times inexplicable. A million random things fly at us every day, and so much of what goes on around us is being shaped by external forces completely outside of our control. Even our own DNA, considered by some to be a deterministic rule of life, is in reality ruled by uncertainty and randomness; genetically identical cells in controlled environments will exhibit significant differences in gene expression. One researcher showed this when he inserted two identical DNA sequences into the genome of an E. coli sample. One sequence encoded a protein that would make the bacteria glow red, and the other green.
Had the genes been expressed equally as expected, the bacteria would have glowed a consistent yellow color, but that's not what happened. The only real law of nature, from the micro to the macro, is change and uncertainty; nothing stays the same, and even the highly specific language that encodes our very existence is open to interpretation by the arbitrary bumbling about of proteins in our cells. This reality is what creates the space for fear to enter. Because the world is a place of uncertainty and change, we feel the need to hone our ability to predict it. Amidst all this incredible uncertainty, our brains have adapted to find ways of bringing clarity to the noise by way of pattern recognition; patterns that we assemble and categorize in an effort to make sense of the world around us. Being able to look at patterns of past experience and make predictions of the future is a tool that has served us well for accomplishing certain tasks, but when we use these patterns to try and define reality we have gone too far. When we try to categorize the world into neat little rules, and fit it in a box, we find that we have also inadvertently put ourselves in a box; the patterns we identify become the thread that weaves our entire perceived identity. The problem is that this identity that we weave for ourselves is a false construct; it is not based on reality. Thus the rules, categories, and patterns that we use to construct and define the world around us are all incredibly fragile, which in turn creates a fragile identity that we create for ourselves. This is the ultimate source of fear: we fear anything that shakes the foundation of the identity we have built for ourselves. Once we create rules about how the world should be, we have simultaneously defined in some way who we are. The moment we allow ourselves to expect the world to be one way or another is the moment we begin to fear outcomes outside our expectations, as these would disrupt our identity.
But let's see if this is true. Let's look at one of the most destructive and corrosive fears that affects relationships in our time: jealousy. Most people have experienced jealousy and recognize it as a normal (and even healthy) component of any romantic relationship. But what is jealousy? Let's say you are at a bar and you see your partner flirting with another person. In your mind you start imagining your partner becoming increasingly attracted to this other person. Maybe they decide that they really enjoy flirting with other people and that being in a committed relationship with you is holding them back from meeting other, more exciting, people. Maybe you start to picture them becoming less and less interested in you to the point where they are sneaking off to have affairs with other people. As these things start playing out in your mind you become increasingly anxious and insecure. When your partner tries to talk to you later in the evening you mask your insecurity with anger. You tell your partner that they have had too much to drink and it's time to go home. You try to regain control of a reality that is not living up to your expectations. We become angry and afraid when someone harms us (or we perceive that they have harmed us), but if we look at this example more closely, we see that your partner has not actually done anything to you. Even a scenario in which your partner cheats on you with another person does not describe a situation in which they have harmed you. Yet we have a tendency to argue the opposite. Why? It certainly feels like we have been harmed when our partner cheats on us. The reason is simple: we have built our identity around the relationship, and a perceived threat to the relationship is a perceived threat to our identity. If our partner is flirting with other people, maybe it means we are not as attractive as we thought; we aren't the lover we thought we were.
If we have allowed Disney movies to influence the way we have come to expect the world to be, and we see relationships as a world of fantasy in which we are "the one" for our partner, then cheating is a direct refutation of that world-view. We have to come to grips with the reality that we are not "the one" after all. But these are all false anxieties that never needed to exist in the first place. When you build your identity around your relationship, telling yourself "I am a great lover, and that's why she loves me" might make you feel confident for a short while, but now you have to spend your time trying to defend and protect that identity, which means trying to control your partner and living in fear. Most people can relate to the experience of feeling jealous - of experiencing insecurity at a perceived threat to relationship and personal identity - but we build our identity around more than just our relationships. We build identities around our jobs, families, friends, status, wealth, health, skills, knowledge, etc. And when we do, we open ourselves up to the fear that comes with trying to protect and defend those identities. We build our identity around being a homeowner, so now we must defend our ability to pay our mortgage or suffer a reality that contradicts that. We build our identity around the world-view that college graduates are destined for great careers, and now we must defend that by seeking out a job that lives up to that idea, rather than a job or career that truly inspires us. We might build our identity around being a father, and a father of men who fit the description of how we think men should be. With that identity we may respond with fear and anger at the realization that our son is gay. The more we try to define the world, and create an identity for ourselves that conforms to those rules, the more fear we breed. Fear that leads us to make decisions that are not truly good for us. 
As a result, subtle anxieties, stress, insecurity, and suffering take the place of love, freedom, and joy. But the cycle can be broken. Mindfulness is the practice of reclaiming our identity from the ego - the false sense of self - and returning it to its true state: pure and present awareness. It allows us to untangle the web of arbitrary conditions we attach to identity; the self is no longer defined by the things we do or acquire, by external developments, or even by our memories, knowledge, and skills. Where before we wrapped our environment, achievements, possessions, etc. around our identity, now we see these things as merely passing through as we experience them. Identity becomes awareness. When we achieve this perspective, fear has nothing to hold on to. Mindfulness helps us identify the source of the rules we have set up for how the world should be, and all of the arbitrary conditions we have attached to our sense of self. We might find that we never made the conscious choice to view the world a certain way - it being a result of transference - and that we do not actually hold that view. It helps us identify our fears, so that we can start acting to overcome them. Mindfulness will help us recognize present experience for what it really is, illuminating the situations, relationships, and environments that are slowly destroying us. Our distractions often spill over from the subtle but growing anxiety that comes from not being honest with ourselves, and from knowing that honesty will necessitate a change in behavior and direction - a reality we don't want to face. Through the practice of mindfulness, we can identify these distractions and illuminate the source of the anxiety causing them. How do we practice mindfulness? Meditation. Mindfulness meditation can be done for as little as ten minutes a day, and it involves the practice of focusing on the breath.
Sitting in a relaxed position, your eyes closed, focus on your breath as it enters and leaves your body. As thoughts, worries, feelings and ideas enter your mind, the goal is to recognize them, allow them to pass through you, and continue to focus on the breath. Do not complicate the process by thinking that the goal of meditation is to clear your mind. The goal is to exercise the habit of returning to present awareness. Mindfulness meditation is to awareness as the rep is to weight lifting; it's a practice, an exercise. Many people get frustrated with meditation because their mind gets distracted, and that feels like failure. But being distracted is part of the opportunity presented by meditation: to recognize when you are distracted, so you can practice the return to awareness. If you make this practice part of your daily habits, you will experience greater awareness, focus, and clarity as you go about your day. With your mind tuned more accurately to the frequency of awareness, you might find that events do not upset you as much, and you will be quicker to recognize when you are being controlled by the ego. Another practice is changing the language by which we define the world around us from evaluative to descriptive. When you walk outside in the pouring rain and complain "this rain is awful!" you are not practicing mindfulness. What makes a thing bad or good? The rain is neither bad nor good, it simply is. You have made it up in your mind that it is a bad thing, and now you must feel bad walking through it. "But now my suit is wet!" Yes, it is, and that is neither bad nor good either. "But now I will look bad when I walk into my meeting." Why are you afraid? Have you decided that the world only makes sense when businessmen show up to meetings in dry suits, and that your peers might lose respect for you? Are they going to lose respect for you or the false identity you have projected?
The practice of mindfulness, of transcending the ego to become pure awareness, helps us pull the rug of expectation out from beneath the feet of fear. If we are simply aware, we will see that so many of the rules we have set for ourselves and the world around us are just silly, and we will illuminate our underlying fears so that we can be honest with ourselves about who we really are and want to be. After graduating college with a degree in finance, I began working to pursue my ambition of making a lot of money and being a big success. I wanted to prove something to others and myself. As confident as I seemed to be at first, the more I worked, the more insecure and unhappy I became. The further along the career path I went, the more it felt like I was falling behind, and I did not understand why. My insecurities plagued me, and my relationships suffered. I found it difficult to eat. At one point I found it difficult to pick myself up from lying face-down on the floor. But from the outside, it looked like I was doing really well, so what was happening? During this time I had nagging questions in the back of my head that I simply refused to ask myself, let alone address. The question "is this what I want?" kept popping up, and I would force it out, pretend not to notice it. I knew that if I addressed questions like that, it would force me to reevaluate a view that I had held for a long time, a view about who I was. And that was a scary thought. I thought that if I simply worked harder, I would achieve something that would somehow make the questions go away. If I could just become "successful," I wouldn't have to ask myself if this was what I really wanted to do. Eventually, in much the same way the body breaks down if exerted in unnatural ways for too long, the mind breaks down when pushed against its will. I realized that I had to start asking myself hard questions and answering them honestly because I was destroying myself. Q: Why am I doing what I'm doing?
Q: Is this really what I want to do? Q: Would you be happy doing this if you made a certain amount of money, and 'proved' yourself? Q: How would you feel if you were driven into poverty by a passion that fueled you every day? Gaining this clarity, I can now reexamine and re-calibrate the formula by which I make decisions. Now when I look at opportunities, rather than trying to figure out how they will advance my false identity, I can ask: is this a HELL YEAH! or no decision? What is the outcome of honest self-reflection like this applied broadly to all areas of our lives? When you have uncovered a deep fear that has been driving you, and you can walk away from it, it feels like waking up to the love of your life and finding out that it's you. You feel relieved, energized, invincible. Nothing and no one can harm you; there is nothing to expose. Where before your identity was fragile, because it was a false projection, now it is untouchable because it is only you. And you are not insecure when you embrace the real you. When you make the decision to be brutally honest with yourself, you start seeking out the answers to questions that may seem difficult at first. You might look at your relationship and ask "what is the real reason I am in a relationship with this person? Is it because I truly love them, or am I afraid of being alone?" Asking these types of questions is frightening. There is pain associated with answering them. Our false identity feels safe, and tearing it down means launching into the unknown of the existential variety. But crossing that threshold is worth it, because there is personal freedom on the other side. A freedom from insecurity, anxiety, stress, and fear. In the 1960s Oliver Sacks worked in a headache clinic specializing in migraines. The subject fascinated him and he worked under the direction of a prominent researcher who was also the head of the American Neurological Association.
At first an encouraging mentor, the man began to resent Sacks's progress. One summer Sacks spent an inspirational vacation writing a manuscript for a book on his experience and his findings in the migraine clinic. When the head of the clinic found out about the book he all but shut down Sacks, not only threatening to fire him but, as the head of the Association, threatening to prevent Sacks from getting any other job in neurology. And this is what we discover when we push through our fears. There is a threshold, the thing we fear, that may be extremely painful to cross. This event could be telling someone you don't love them anymore, or getting on stage to perform, or jumping out of a plane. In our minds we usually imagine the event to be far worse than it is, and we suffer more in imagination than in reality. What we find is that once the event is over, the pain, as suddenly as it came, is gone. We now have the peace and clarity that comes from knowing we are on the right track, that we are pursuing the truth of our spirit. The benefits of crossing this threshold can come immediately. For Sacks, it allowed his creative energy to flourish. The story highlights both a victory over fear and a casualty of one. Sacks, being honest with himself, knew that he had to write the book, and in the face of a real threat to his career, he moved in the direction of his fear to overcome it and experience the joy, energy, and elation of creative freedom. On the other hand, the head of the clinic was a tragic victim of fear. The insecurity he experienced from a rising star caused him to lose the potential for a valuable ally and collaborator, and he was exposed when the publication of Sacks's book brought to light the fact that he had plagiarized Sacks's work years before under his own name. He created an identity for himself that placed him at the head of a field he felt he owned.
Every talent under him that showed promise was therefore a threat to his identity, and he fought to control the world around him. The result is that fear consumed him. It is amazing what we can accomplish when we allow ourselves to be free, when we remove the fear that is blocking our creativity. To do this, we must face the fear and move directly towards it. Our fears, when we acknowledge them and understand them, can become our greatest allies in transforming our lives. It is possible that without the threat of being fired, Sacks would not have been so motivated. It is almost certain that the threat raised his sense of urgency: now he was on his own, and he had to complete what he set out to do, for there was no safety net, nothing to fall back on except his own knowledge, skills, and motivations. Ultimately, the goal is to live without fear, but there are points along that path where we can use fear to our advantage. Fear can be our greatest tool for eliminating fear. Self-honesty and mindfulness allow us to identify the right fears, and then re-calibrate our decision-making process so that we face them and eventually transcend them to live a truly fearless life. When fear is eliminated from our lives, we will live more fully than we ever thought possible. Without fear, there is only love, and when we choose to live for love - "love of life, love of what is good, love of family, love of pride, love for our infinite souls on the quest for impeccability" - we will live as warriors. Michael Singer in The Untethered Soul. Robert Greene and 50 Cent in The 50th Law. "Stochastic Gene Expression in a Single Cell" as described by Jonah Lehrer in Proust Was a Neuroscientist. Don Miguel Ruiz in The Mastery of Love. Brad Blanton in Radical Honesty: How to Transform Your Life by Telling the Truth. Marcus Aurelius in his Meditations - Hays translation. Marcus Aurelius was the Emperor of Rome from 161 to 180 AD.
The Meditations is his personal journal, kept while campaigning on the war front away from home. The journal, never intended to be published or to survive his death, consists of notes he made to himself during the conflict of the war, and it has since solidified him as one of the great Stoic philosophers. It is incredible to read how similar the struggles of an ancient emperor - the most powerful man in the world at the time - are to the struggles we experience every day (in one passage he criticizes himself for not getting out of bed). Adopt his advice on how to remain steadfast in the face of conflict, and your life will forever be changed. Oliver Sacks in his autobiography On the Move. Not Fade Away tells the story of Peter Barton in the final months before his death. A retired media executive who lived an extraordinary life, he is faced with his own mortality after a terminal cancer diagnosis. In his remaining months, we get a rare and privileged look through his perspective as he achieves peace and solace. Jamie Foxx in an incredible interview with Tim Ferriss.