Dataset columns (per-column type and value statistics):

| Column | Type | Values |
| --- | --- | --- |
| instance_id | string | lengths 13 to 37 |
| text | string | lengths 3.08k to 667k |
| repo | string | 35 distinct values |
| base_commit | string | length 40 |
| problem_statement | string | lengths 10 to 256k |
| hints_text | string | lengths 0 to 908k |
| created_at | string | length 20 |
| patch | string | lengths 18 to 101M |
| test_patch | string | 1 distinct value |
| version | string | 1 distinct value |
| FAIL_TO_PASS | string | 1 distinct value |
| PASS_TO_PASS | string | 1 distinct value |
| environment_setup_commit | string | 1 distinct value |
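The rows below are printed one field at a time in the column order above. As a rough sketch of how such rows might be consumed, assuming they have been exported to a JSON Lines file (the file name `rows.jsonl` is a placeholder, not part of this dump; only the column names come from the schema):

```python
import json

# Placeholder file name; assumes one JSON object per line, keyed by the
# column names from the schema above.
with open("rows.jsonl", encoding="utf-8") as fh:
    for line in fh:
        row = json.loads(line)
        # "text" bundles the issue statement plus a partial code base;
        # "patch" holds the reference diff that resolves the issue.
        print(row["instance_id"], row["repo"], row["base_commit"][:12])
        print(f'  text: {len(row["text"])} chars, patch: {len(row["patch"])} chars')
```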
pandas-dev__pandas-30995
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: Timestamp(Timestamp(Ambiguous time)) modifies .value with dateutil tz Pretty obscure bug, but this seems fishy: ``` In [7]: pd.__version__ Out[7]: '0.24.0.dev0+1300.ge0a68076a.dirty' # Ambiguous time In [8]: t = pd.Timestamp(1382835600000000000, tz='dateutil/Europe/London') # Repr is consistent In [11]: t Out[11]: Timestamp('2013-10-27 01:00:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London') In [12]: pd.Timestamp(t) Out[12]: Timestamp('2013-10-27 01:00:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London') # .value changes In [13]: t.value Out[13]: 1382835600000000000 In [14]: pd.Timestamp(t).value Out[14]: 1382832000000000000 ``` pytz timezones behave consistently though ``` In [15]: t = pd.Timestamp(1382835600000000000, tz='Europe/London') In [16]: t Out[16]: Timestamp('2013-10-27 01:00:00+0000', tz='Europe/London') In [17]: pd.Timestamp(t) Out[17]: Timestamp('2013-10-27 01:00:00+0000', tz='Europe/London') In [18]: t.value Out[18]: 1382835600000000000 In [19]: pd.Timestamp(t).value Out[19]: 1382835600000000000 ``` The fact that the repr between dateutil timezones and pytz timezones don't match can be possible be seen in a change in dateutil somewhere around 2.6? But the main issue that is `.value` changes. https://github.com/pandas-dev/pandas/blob/216986d4691297d5cfec33b5c62be7890b9a54d7/pandas/tests/indexes/datetimes/test_timezones.py#L564-L571 </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /> 30 </a> 31 </td> 32 </tr> 33 <tr> 34 <td>License</td> 35 <td> 36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 38 </a> 39 </td> 40 </tr> 41 <tr> 42 <td>Build Status</td> 43 <td> 44 <a href="https://travis-ci.org/pandas-dev/pandas"> 45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 46 </a> 47 </td> 48 </tr> 49 <tr> 50 <td></td> 51 <td> 52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master"> 53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" /> 54 </a> 55 </td> 56 </tr> 57 <tr> 58 <td>Coverage</td> 59  <td> 60 <a href="https://codecov.io/gh/pandas-dev/pandas"> 61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 62 </a> 63 </td> 64 </tr> 65 <tr> 66 <td>Downloads</td> 67 <td> 68 <a href="https://pandas.pydata.org"> 69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 70 </a> 71 </td> 72 </tr> 73 <tr> 74 <td>Gitter</td> 75 <td> 76 <a href="https://gitter.im/pydata/pandas"> 77 <img 
src="https://badges.gitter.im/Join%20Chat.svg" /> 78 </a> 79 </td> 80 </tr> 81 </table> 82 83 84 85 ## What is it? 86 87 **pandas** is a Python package providing fast, flexible, and expressive data 88 structures designed to make working with "relational" or "labeled" data both 89 easy and intuitive. It aims to be the fundamental high-level building block for 90 doing practical, **real world** data analysis in Python. Additionally, it has 91 the broader goal of becoming **the most powerful and flexible open source data 92 analysis / manipulation tool available in any language**. It is already well on 93 its way towards this goal. 94 95 ## Main Features 96 Here are just a few of the things that pandas does well: 97 98 - Easy handling of [**missing data**][missing-data] (represented as 99 `NaN`) in floating point as well as non-floating point data 100 - Size mutability: columns can be [**inserted and 101 deleted**][insertion-deletion] from DataFrame and higher dimensional 102 objects 103 - Automatic and explicit [**data alignment**][alignment]: objects can 104 be explicitly aligned to a set of labels, or the user can simply 105 ignore the labels and let `Series`, `DataFrame`, etc. automatically 106 align the data for you in computations 107 - Powerful, flexible [**group by**][groupby] functionality to perform 108 split-apply-combine operations on data sets, for both aggregating 109 and transforming data 110 - Make it [**easy to convert**][conversion] ragged, 111 differently-indexed data in other Python and NumPy data structures 112 into DataFrame objects 113 - Intelligent label-based [**slicing**][slicing], [**fancy 114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 115 large data sets 116 - Intuitive [**merging**][merging] and [**joining**][joining] data 117 sets 118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 119 data sets 120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 121 labels per tick) 122 - Robust IO tools for loading data from [**flat files**][flat-files] 123 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 125 - [**Time series**][timeseries]-specific functionality: date range 126 generation and frequency conversion, moving window statistics, 127 date shifting and lagging. 
128 129 130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 148 149 ## Where to get it 150 The source code is currently hosted on GitHub at: 151 https://github.com/pandas-dev/pandas 152 153 Binary installers for the latest released version are available at the [Python 154 package index](https://pypi.org/project/pandas) and on conda. 155 156 ```sh 157 # conda 158 conda install pandas 159 ``` 160 161 ```sh 162 # or PyPI 163 pip install pandas 164 ``` 165 166 ## Dependencies 167 - [NumPy](https://www.numpy.org) 168 - [python-dateutil](https://labix.org/python-dateutil) 169 - [pytz](https://pythonhosted.org/pytz) 170 171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies. 172 173 ## Installation from sources 174 To install pandas from source you need Cython in addition to the normal 175 dependencies above. Cython can be installed from pypi: 176 177 ```sh 178 pip install cython 179 ``` 180 181 In the `pandas` directory (same one where you found this file after 182 cloning the git repo), execute: 183 184 ```sh 185 python setup.py install 186 ``` 187 188 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 189 190 191 ```sh 192 python -m pip install -e . --no-build-isolation --no-use-pep517 193 ``` 194 195 If you have `make`, you can also use `make develop` to run the same command. 196 197 or alternatively 198 199 ```sh 200 python setup.py develop 201 ``` 202 203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
204 205 ## License 206 [BSD 3](LICENSE) 207 208 ## Documentation 209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 210 211 ## Background 212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 213 has been under active development since then. 214 215 ## Getting Help 216 217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 219 220 ## Discussion and Development 221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 222 223 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 224 225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 226 227 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/docs/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub. 228 229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 230 231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 232 233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 234 235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 236 237 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. 
More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md) 238 [end of README.md] [start of asv_bench/benchmarks/tslibs/timestamp.py] 1 import datetime 2 3 import dateutil 4 import pytz 5 6 from pandas import Timestamp 7 8 9 class TimestampConstruction: 10 def time_parse_iso8601_no_tz(self): 11 Timestamp("2017-08-25 08:16:14") 12 13 def time_parse_iso8601_tz(self): 14 Timestamp("2017-08-25 08:16:14-0500") 15 16 def time_parse_dateutil(self): 17 Timestamp("2017/08/25 08:16:14 AM") 18 19 def time_parse_today(self): 20 Timestamp("today") 21 22 def time_parse_now(self): 23 Timestamp("now") 24 25 def time_fromordinal(self): 26 Timestamp.fromordinal(730120) 27 28 def time_fromtimestamp(self): 29 Timestamp.fromtimestamp(1515448538) 30 31 32 class TimestampProperties: 33 _tzs = [None, pytz.timezone("Europe/Amsterdam"), pytz.UTC, dateutil.tz.tzutc()] 34 _freqs = [None, "B"] 35 params = [_tzs, _freqs] 36 param_names = ["tz", "freq"] 37 38 def setup(self, tz, freq): 39 self.ts = Timestamp("2017-08-25 08:16:14", tzinfo=tz, freq=freq) 40 41 def time_tz(self, tz, freq): 42 self.ts.tz 43 44 def time_dayofweek(self, tz, freq): 45 self.ts.dayofweek 46 47 def time_weekday_name(self, tz, freq): 48 self.ts.day_name 49 50 def time_dayofyear(self, tz, freq): 51 self.ts.dayofyear 52 53 def time_week(self, tz, freq): 54 self.ts.week 55 56 def time_quarter(self, tz, freq): 57 self.ts.quarter 58 59 def time_days_in_month(self, tz, freq): 60 self.ts.days_in_month 61 62 def time_freqstr(self, tz, freq): 63 self.ts.freqstr 64 65 def time_is_month_start(self, tz, freq): 66 self.ts.is_month_start 67 68 def time_is_month_end(self, tz, freq): 69 self.ts.is_month_end 70 71 def time_is_quarter_start(self, tz, freq): 72 self.ts.is_quarter_start 73 74 def time_is_quarter_end(self, tz, freq): 75 self.ts.is_quarter_end 76 77 def time_is_year_start(self, tz, freq): 78 self.ts.is_year_start 79 80 def time_is_year_end(self, tz, freq): 81 self.ts.is_year_end 82 83 def time_is_leap_year(self, tz, freq): 84 self.ts.is_leap_year 85 86 def time_microsecond(self, tz, freq): 87 self.ts.microsecond 88 89 def time_month_name(self, tz, freq): 90 self.ts.month_name() 91 92 93 class TimestampOps: 94 params = [None, "US/Eastern", pytz.UTC, dateutil.tz.tzutc()] 95 param_names = ["tz"] 96 97 def setup(self, tz): 98 self.ts = Timestamp("2017-08-25 08:16:14", tz=tz) 99 100 def time_replace_tz(self, tz): 101 self.ts.replace(tzinfo=pytz.timezone("US/Eastern")) 102 103 def time_replace_None(self, tz): 104 self.ts.replace(tzinfo=None) 105 106 def time_to_pydatetime(self, tz): 107 self.ts.to_pydatetime() 108 109 def time_normalize(self, tz): 110 self.ts.normalize() 111 112 def time_tz_convert(self, tz): 113 if self.ts.tz is not None: 114 self.ts.tz_convert(tz) 115 116 def time_tz_localize(self, tz): 117 if self.ts.tz is None: 118 self.ts.tz_localize(tz) 119 120 def time_to_julian_date(self, tz): 121 self.ts.to_julian_date() 122 123 def time_floor(self, tz): 124 self.ts.floor("5T") 125 126 def time_ceil(self, tz): 127 self.ts.ceil("5T") 128 129 130 class TimestampAcrossDst: 131 def setup(self): 132 dt = datetime.datetime(2016, 3, 27, 1) 133 self.tzinfo = pytz.timezone("CET").localize(dt, is_dst=False).tzinfo 134 self.ts2 = Timestamp(dt) 135 136 def time_replace_across_dst(self): 137 self.ts2.replace(tzinfo=self.tzinfo) 138 [end of asv_bench/benchmarks/tslibs/timestamp.py] [start of pandas/conftest.py] 1 from collections import abc 2 from datetime import date, time, 
timedelta, timezone 3 from decimal import Decimal 4 import operator 5 import os 6 7 from dateutil.tz import tzlocal, tzutc 8 import hypothesis 9 from hypothesis import strategies as st 10 import numpy as np 11 import pytest 12 from pytz import FixedOffset, utc 13 14 import pandas.util._test_decorators as td 15 16 import pandas as pd 17 from pandas import DataFrame 18 import pandas._testing as tm 19 from pandas.core import ops 20 21 hypothesis.settings.register_profile( 22 "ci", 23 # Hypothesis timing checks are tuned for scalars by default, so we bump 24 # them from 200ms to 500ms per test case as the global default. If this 25 # is too short for a specific test, (a) try to make it faster, and (b) 26 # if it really is slow add `@settings(deadline=...)` with a working value, 27 # or `deadline=None` to entirely disable timeouts for that test. 28 deadline=500, 29 suppress_health_check=(hypothesis.HealthCheck.too_slow,), 30 ) 31 hypothesis.settings.load_profile("ci") 32 33 34 def pytest_addoption(parser): 35 parser.addoption("--skip-slow", action="store_true", help="skip slow tests") 36 parser.addoption("--skip-network", action="store_true", help="skip network tests") 37 parser.addoption("--skip-db", action="store_true", help="skip db tests") 38 parser.addoption( 39 "--run-high-memory", action="store_true", help="run high memory tests" 40 ) 41 parser.addoption("--only-slow", action="store_true", help="run only slow tests") 42 parser.addoption( 43 "--strict-data-files", 44 action="store_true", 45 help="Fail if a test is skipped for missing data file.", 46 ) 47 48 49 def pytest_runtest_setup(item): 50 if "slow" in item.keywords and item.config.getoption("--skip-slow"): 51 pytest.skip("skipping due to --skip-slow") 52 53 if "slow" not in item.keywords and item.config.getoption("--only-slow"): 54 pytest.skip("skipping due to --only-slow") 55 56 if "network" in item.keywords and item.config.getoption("--skip-network"): 57 pytest.skip("skipping due to --skip-network") 58 59 if "db" in item.keywords and item.config.getoption("--skip-db"): 60 pytest.skip("skipping due to --skip-db") 61 62 if "high_memory" in item.keywords and not item.config.getoption( 63 "--run-high-memory" 64 ): 65 pytest.skip("skipping high memory test since --run-high-memory was not set") 66 67 68 @pytest.fixture(autouse=True) 69 def configure_tests(): 70 """ 71 Configure settings for all tests and test modules. 72 """ 73 pd.set_option("chained_assignment", "raise") 74 75 76 @pytest.fixture(autouse=True) 77 def add_imports(doctest_namespace): 78 """ 79 Make `np` and `pd` names available for doctests. 80 """ 81 doctest_namespace["np"] = np 82 doctest_namespace["pd"] = pd 83 84 85 @pytest.fixture(params=["bsr", "coo", "csc", "csr", "dia", "dok", "lil"]) 86 def spmatrix(request): 87 """ 88 Yields scipy sparse matrix classes. 89 """ 90 from scipy import sparse 91 92 return getattr(sparse, request.param + "_matrix") 93 94 95 @pytest.fixture(params=[0, 1, "index", "columns"], ids=lambda x: f"axis {repr(x)}") 96 def axis(request): 97 """ 98 Fixture for returning the axis numbers of a DataFrame. 99 """ 100 return request.param 101 102 103 axis_frame = axis 104 105 106 @pytest.fixture(params=[0, "index"], ids=lambda x: f"axis {repr(x)}") 107 def axis_series(request): 108 """ 109 Fixture for returning the axis numbers of a Series. 110 """ 111 return request.param 112 113 114 @pytest.fixture 115 def ip(): 116 """ 117 Get an instance of IPython.InteractiveShell. 118 119 Will raise a skip if IPython is not installed. 
120 """ 121 122 pytest.importorskip("IPython", minversion="6.0.0") 123 from IPython.core.interactiveshell import InteractiveShell 124 125 return InteractiveShell() 126 127 128 @pytest.fixture(params=[True, False, None]) 129 def observed(request): 130 """ 131 Pass in the observed keyword to groupby for [True, False] 132 This indicates whether categoricals should return values for 133 values which are not in the grouper [False / None], or only values which 134 appear in the grouper [True]. [None] is supported for future compatibility 135 if we decide to change the default (and would need to warn if this 136 parameter is not passed). 137 """ 138 return request.param 139 140 141 @pytest.fixture(params=[True, False, None]) 142 def ordered_fixture(request): 143 """ 144 Boolean 'ordered' parameter for Categorical. 145 """ 146 return request.param 147 148 149 _all_arithmetic_operators = [ 150 "__add__", 151 "__radd__", 152 "__sub__", 153 "__rsub__", 154 "__mul__", 155 "__rmul__", 156 "__floordiv__", 157 "__rfloordiv__", 158 "__truediv__", 159 "__rtruediv__", 160 "__pow__", 161 "__rpow__", 162 "__mod__", 163 "__rmod__", 164 ] 165 166 167 @pytest.fixture(params=_all_arithmetic_operators) 168 def all_arithmetic_operators(request): 169 """ 170 Fixture for dunder names for common arithmetic operations. 171 """ 172 return request.param 173 174 175 @pytest.fixture( 176 params=[ 177 operator.add, 178 ops.radd, 179 operator.sub, 180 ops.rsub, 181 operator.mul, 182 ops.rmul, 183 operator.truediv, 184 ops.rtruediv, 185 operator.floordiv, 186 ops.rfloordiv, 187 operator.mod, 188 ops.rmod, 189 operator.pow, 190 ops.rpow, 191 ] 192 ) 193 def all_arithmetic_functions(request): 194 """ 195 Fixture for operator and roperator arithmetic functions. 196 197 Notes 198 ----- 199 This includes divmod and rdivmod, whereas all_arithmetic_operators 200 does not. 201 """ 202 return request.param 203 204 205 _all_numeric_reductions = [ 206 "sum", 207 "max", 208 "min", 209 "mean", 210 "prod", 211 "std", 212 "var", 213 "median", 214 "kurt", 215 "skew", 216 ] 217 218 219 @pytest.fixture(params=_all_numeric_reductions) 220 def all_numeric_reductions(request): 221 """ 222 Fixture for numeric reduction names. 223 """ 224 return request.param 225 226 227 _all_boolean_reductions = ["all", "any"] 228 229 230 @pytest.fixture(params=_all_boolean_reductions) 231 def all_boolean_reductions(request): 232 """ 233 Fixture for boolean reduction names. 234 """ 235 return request.param 236 237 238 _cython_table = pd.core.base.SelectionMixin._cython_table.items() 239 240 241 @pytest.fixture(params=list(_cython_table)) 242 def cython_table_items(request): 243 """ 244 Yields a tuple of a function and its corresponding name. Correspond to 245 the list of aggregator "Cython functions" used on selected table items. 246 """ 247 return request.param 248 249 250 def _get_cython_table_params(ndframe, func_names_and_expected): 251 """ 252 Combine frame, functions from SelectionMixin._cython_table 253 keys and expected result. 254 255 Parameters 256 ---------- 257 ndframe : DataFrame or Series 258 func_names_and_expected : Sequence of two items 259 The first item is a name of a NDFrame method ('sum', 'prod') etc. 260 The second item is the expected return value. 
261 262 Returns 263 ------- 264 list 265 List of three items (DataFrame, function, expected result) 266 """ 267 results = [] 268 for func_name, expected in func_names_and_expected: 269 results.append((ndframe, func_name, expected)) 270 results += [ 271 (ndframe, func, expected) 272 for func, name in _cython_table 273 if name == func_name 274 ] 275 return results 276 277 278 @pytest.fixture(params=["__eq__", "__ne__", "__le__", "__lt__", "__ge__", "__gt__"]) 279 def all_compare_operators(request): 280 """ 281 Fixture for dunder names for common compare operations 282 283 * >= 284 * > 285 * == 286 * != 287 * < 288 * <= 289 """ 290 return request.param 291 292 293 @pytest.fixture(params=["__le__", "__lt__", "__ge__", "__gt__"]) 294 def compare_operators_no_eq_ne(request): 295 """ 296 Fixture for dunder names for compare operations except == and != 297 298 * >= 299 * > 300 * < 301 * <= 302 """ 303 return request.param 304 305 306 @pytest.fixture( 307 params=["__and__", "__rand__", "__or__", "__ror__", "__xor__", "__rxor__"] 308 ) 309 def all_logical_operators(request): 310 """ 311 Fixture for dunder names for common logical operations 312 313 * | 314 * & 315 * ^ 316 """ 317 return request.param 318 319 320 @pytest.fixture(params=[None, "gzip", "bz2", "zip", "xz"]) 321 def compression(request): 322 """ 323 Fixture for trying common compression types in compression tests. 324 """ 325 return request.param 326 327 328 @pytest.fixture(params=["gzip", "bz2", "zip", "xz"]) 329 def compression_only(request): 330 """ 331 Fixture for trying common compression types in compression tests excluding 332 uncompressed case. 333 """ 334 return request.param 335 336 337 @pytest.fixture(params=[True, False]) 338 def writable(request): 339 """ 340 Fixture that an array is writable. 341 """ 342 return request.param 343 344 345 @pytest.fixture(scope="module") 346 def datetime_tz_utc(): 347 """ 348 Yields the UTC timezone object from the datetime module. 349 """ 350 return timezone.utc 351 352 353 @pytest.fixture(params=["utc", "dateutil/UTC", utc, tzutc(), timezone.utc]) 354 def utc_fixture(request): 355 """ 356 Fixture to provide variants of UTC timezone strings and tzinfo objects. 357 """ 358 return request.param 359 360 361 @pytest.fixture(params=["inner", "outer", "left", "right"]) 362 def join_type(request): 363 """ 364 Fixture for trying all types of join operations. 365 """ 366 return request.param 367 368 369 @pytest.fixture 370 def strict_data_files(pytestconfig): 371 """ 372 Returns the configuration for the test setting `--strict-data-files`. 373 """ 374 return pytestconfig.getoption("--strict-data-files") 375 376 377 @pytest.fixture 378 def datapath(strict_data_files): 379 """ 380 Get the path to a data file. 381 382 Parameters 383 ---------- 384 path : str 385 Path to the file, relative to ``pandas/tests/`` 386 387 Returns 388 ------- 389 path including ``pandas/tests``. 390 391 Raises 392 ------ 393 ValueError 394 If the path doesn't exist and the --strict-data-files option is set. 395 """ 396 BASE_PATH = os.path.join(os.path.dirname(__file__), "tests") 397 398 def deco(*args): 399 path = os.path.join(BASE_PATH, *args) 400 if not os.path.exists(path): 401 if strict_data_files: 402 raise ValueError( 403 f"Could not find file {path} and --strict-data-files is set." 404 ) 405 else: 406 pytest.skip(f"Could not find {path}.") 407 return path 408 409 return deco 410 411 412 @pytest.fixture 413 def iris(datapath): 414 """ 415 The iris dataset as a DataFrame. 
416 """ 417 return pd.read_csv(datapath("data", "iris.csv")) 418 419 420 @pytest.fixture(params=["nlargest", "nsmallest"]) 421 def nselect_method(request): 422 """ 423 Fixture for trying all nselect methods. 424 """ 425 return request.param 426 427 428 @pytest.fixture(params=["left", "right", "both", "neither"]) 429 def closed(request): 430 """ 431 Fixture for trying all interval closed parameters. 432 """ 433 return request.param 434 435 436 @pytest.fixture(params=["left", "right", "both", "neither"]) 437 def other_closed(request): 438 """ 439 Secondary closed fixture to allow parametrizing over all pairs of closed. 440 """ 441 return request.param 442 443 444 @pytest.fixture(params=[None, np.nan, pd.NaT, float("nan"), np.float("NaN")]) 445 def nulls_fixture(request): 446 """ 447 Fixture for each null type in pandas. 448 """ 449 return request.param 450 451 452 nulls_fixture2 = nulls_fixture # Generate cartesian product of nulls_fixture 453 454 455 @pytest.fixture(params=[None, np.nan, pd.NaT]) 456 def unique_nulls_fixture(request): 457 """ 458 Fixture for each null type in pandas, each null type exactly once. 459 """ 460 return request.param 461 462 463 # Generate cartesian product of unique_nulls_fixture: 464 unique_nulls_fixture2 = unique_nulls_fixture 465 466 467 TIMEZONES = [ 468 None, 469 "UTC", 470 "US/Eastern", 471 "Asia/Tokyo", 472 "dateutil/US/Pacific", 473 "dateutil/Asia/Singapore", 474 tzutc(), 475 tzlocal(), 476 FixedOffset(300), 477 FixedOffset(0), 478 FixedOffset(-300), 479 timezone.utc, 480 timezone(timedelta(hours=1)), 481 timezone(timedelta(hours=-1), name="foo"), 482 ] 483 TIMEZONE_IDS = [repr(i) for i in TIMEZONES] 484 485 486 @td.parametrize_fixture_doc(str(TIMEZONE_IDS)) 487 @pytest.fixture(params=TIMEZONES, ids=TIMEZONE_IDS) 488 def tz_naive_fixture(request): 489 """ 490 Fixture for trying timezones including default (None): {0} 491 """ 492 return request.param 493 494 495 @td.parametrize_fixture_doc(str(TIMEZONE_IDS[1:])) 496 @pytest.fixture(params=TIMEZONES[1:], ids=TIMEZONE_IDS[1:]) 497 def tz_aware_fixture(request): 498 """ 499 Fixture for trying explicit timezones: {0} 500 """ 501 return request.param 502 503 504 # Generate cartesian product of tz_aware_fixture: 505 tz_aware_fixture2 = tz_aware_fixture 506 507 508 # ---------------------------------------------------------------- 509 # Dtypes 510 # ---------------------------------------------------------------- 511 512 UNSIGNED_INT_DTYPES = ["uint8", "uint16", "uint32", "uint64"] 513 UNSIGNED_EA_INT_DTYPES = ["UInt8", "UInt16", "UInt32", "UInt64"] 514 SIGNED_INT_DTYPES = [int, "int8", "int16", "int32", "int64"] 515 SIGNED_EA_INT_DTYPES = ["Int8", "Int16", "Int32", "Int64"] 516 ALL_INT_DTYPES = UNSIGNED_INT_DTYPES + SIGNED_INT_DTYPES 517 ALL_EA_INT_DTYPES = UNSIGNED_EA_INT_DTYPES + SIGNED_EA_INT_DTYPES 518 519 FLOAT_DTYPES = [float, "float32", "float64"] 520 COMPLEX_DTYPES = [complex, "complex64", "complex128"] 521 STRING_DTYPES = [str, "str", "U"] 522 523 DATETIME64_DTYPES = ["datetime64[ns]", "M8[ns]"] 524 TIMEDELTA64_DTYPES = ["timedelta64[ns]", "m8[ns]"] 525 526 BOOL_DTYPES = [bool, "bool"] 527 BYTES_DTYPES = [bytes, "bytes"] 528 OBJECT_DTYPES = [object, "object"] 529 530 ALL_REAL_DTYPES = FLOAT_DTYPES + ALL_INT_DTYPES 531 ALL_NUMPY_DTYPES = ( 532 ALL_REAL_DTYPES 533 + COMPLEX_DTYPES 534 + STRING_DTYPES 535 + DATETIME64_DTYPES 536 + TIMEDELTA64_DTYPES 537 + BOOL_DTYPES 538 + OBJECT_DTYPES 539 + BYTES_DTYPES 540 ) 541 542 543 @pytest.fixture(params=STRING_DTYPES) 544 def string_dtype(request): 545 """ 546 
Parametrized fixture for string dtypes. 547 548 * str 549 * 'str' 550 * 'U' 551 """ 552 return request.param 553 554 555 @pytest.fixture(params=BYTES_DTYPES) 556 def bytes_dtype(request): 557 """ 558 Parametrized fixture for bytes dtypes. 559 560 * bytes 561 * 'bytes' 562 """ 563 return request.param 564 565 566 @pytest.fixture(params=OBJECT_DTYPES) 567 def object_dtype(request): 568 """ 569 Parametrized fixture for object dtypes. 570 571 * object 572 * 'object' 573 """ 574 return request.param 575 576 577 @pytest.fixture(params=DATETIME64_DTYPES) 578 def datetime64_dtype(request): 579 """ 580 Parametrized fixture for datetime64 dtypes. 581 582 * 'datetime64[ns]' 583 * 'M8[ns]' 584 """ 585 return request.param 586 587 588 @pytest.fixture(params=TIMEDELTA64_DTYPES) 589 def timedelta64_dtype(request): 590 """ 591 Parametrized fixture for timedelta64 dtypes. 592 593 * 'timedelta64[ns]' 594 * 'm8[ns]' 595 """ 596 return request.param 597 598 599 @pytest.fixture(params=FLOAT_DTYPES) 600 def float_dtype(request): 601 """ 602 Parameterized fixture for float dtypes. 603 604 * float 605 * 'float32' 606 * 'float64' 607 """ 608 return request.param 609 610 611 @pytest.fixture(params=COMPLEX_DTYPES) 612 def complex_dtype(request): 613 """ 614 Parameterized fixture for complex dtypes. 615 616 * complex 617 * 'complex64' 618 * 'complex128' 619 """ 620 return request.param 621 622 623 @pytest.fixture(params=SIGNED_INT_DTYPES) 624 def sint_dtype(request): 625 """ 626 Parameterized fixture for signed integer dtypes. 627 628 * int 629 * 'int8' 630 * 'int16' 631 * 'int32' 632 * 'int64' 633 """ 634 return request.param 635 636 637 @pytest.fixture(params=UNSIGNED_INT_DTYPES) 638 def uint_dtype(request): 639 """ 640 Parameterized fixture for unsigned integer dtypes. 641 642 * 'uint8' 643 * 'uint16' 644 * 'uint32' 645 * 'uint64' 646 """ 647 return request.param 648 649 650 @pytest.fixture(params=ALL_INT_DTYPES) 651 def any_int_dtype(request): 652 """ 653 Parameterized fixture for any integer dtype. 654 655 * int 656 * 'int8' 657 * 'uint8' 658 * 'int16' 659 * 'uint16' 660 * 'int32' 661 * 'uint32' 662 * 'int64' 663 * 'uint64' 664 """ 665 return request.param 666 667 668 @pytest.fixture(params=ALL_EA_INT_DTYPES) 669 def any_nullable_int_dtype(request): 670 """ 671 Parameterized fixture for any nullable integer dtype. 672 673 * 'UInt8' 674 * 'Int8' 675 * 'UInt16' 676 * 'Int16' 677 * 'UInt32' 678 * 'Int32' 679 * 'UInt64' 680 * 'Int64' 681 """ 682 683 return request.param 684 685 686 @pytest.fixture(params=ALL_REAL_DTYPES) 687 def any_real_dtype(request): 688 """ 689 Parameterized fixture for any (purely) real numeric dtype. 690 691 * int 692 * 'int8' 693 * 'uint8' 694 * 'int16' 695 * 'uint16' 696 * 'int32' 697 * 'uint32' 698 * 'int64' 699 * 'uint64' 700 * float 701 * 'float32' 702 * 'float64' 703 """ 704 return request.param 705 706 707 @pytest.fixture(params=ALL_NUMPY_DTYPES) 708 def any_numpy_dtype(request): 709 """ 710 Parameterized fixture for all numpy dtypes. 
711 712 * bool 713 * 'bool' 714 * int 715 * 'int8' 716 * 'uint8' 717 * 'int16' 718 * 'uint16' 719 * 'int32' 720 * 'uint32' 721 * 'int64' 722 * 'uint64' 723 * float 724 * 'float32' 725 * 'float64' 726 * complex 727 * 'complex64' 728 * 'complex128' 729 * str 730 * 'str' 731 * 'U' 732 * bytes 733 * 'bytes' 734 * 'datetime64[ns]' 735 * 'M8[ns]' 736 * 'timedelta64[ns]' 737 * 'm8[ns]' 738 * object 739 * 'object' 740 """ 741 return request.param 742 743 744 # categoricals are handled separately 745 _any_skipna_inferred_dtype = [ 746 ("string", ["a", np.nan, "c"]), 747 ("bytes", [b"a", np.nan, b"c"]), 748 ("empty", [np.nan, np.nan, np.nan]), 749 ("empty", []), 750 ("mixed-integer", ["a", np.nan, 2]), 751 ("mixed", ["a", np.nan, 2.0]), 752 ("floating", [1.0, np.nan, 2.0]), 753 ("integer", [1, np.nan, 2]), 754 ("mixed-integer-float", [1, np.nan, 2.0]), 755 ("decimal", [Decimal(1), np.nan, Decimal(2)]), 756 ("boolean", [True, np.nan, False]), 757 ("datetime64", [np.datetime64("2013-01-01"), np.nan, np.datetime64("2018-01-01")]), 758 ("datetime", [pd.Timestamp("20130101"), np.nan, pd.Timestamp("20180101")]), 759 ("date", [date(2013, 1, 1), np.nan, date(2018, 1, 1)]), 760 # The following two dtypes are commented out due to GH 23554 761 # ('complex', [1 + 1j, np.nan, 2 + 2j]), 762 # ('timedelta64', [np.timedelta64(1, 'D'), 763 # np.nan, np.timedelta64(2, 'D')]), 764 ("timedelta", [timedelta(1), np.nan, timedelta(2)]), 765 ("time", [time(1), np.nan, time(2)]), 766 ("period", [pd.Period(2013), pd.NaT, pd.Period(2018)]), 767 ("interval", [pd.Interval(0, 1), np.nan, pd.Interval(0, 2)]), 768 ] 769 ids, _ = zip(*_any_skipna_inferred_dtype) # use inferred type as fixture-id 770 771 772 @pytest.fixture(params=_any_skipna_inferred_dtype, ids=ids) 773 def any_skipna_inferred_dtype(request): 774 """ 775 Fixture for all inferred dtypes from _libs.lib.infer_dtype 776 777 The covered (inferred) types are: 778 * 'string' 779 * 'empty' 780 * 'bytes' 781 * 'mixed' 782 * 'mixed-integer' 783 * 'mixed-integer-float' 784 * 'floating' 785 * 'integer' 786 * 'decimal' 787 * 'boolean' 788 * 'datetime64' 789 * 'datetime' 790 * 'date' 791 * 'timedelta' 792 * 'time' 793 * 'period' 794 * 'interval' 795 796 Returns 797 ------- 798 inferred_dtype : str 799 The string for the inferred dtype from _libs.lib.infer_dtype 800 values : np.ndarray 801 An array of object dtype that will be inferred to have 802 `inferred_dtype` 803 804 Examples 805 -------- 806 >>> import pandas._libs.lib as lib 807 >>> 808 >>> def test_something(any_skipna_inferred_dtype): 809 ... inferred_dtype, values = any_skipna_inferred_dtype 810 ... # will pass 811 ... assert lib.infer_dtype(values, skipna=True) == inferred_dtype 812 """ 813 inferred_dtype, values = request.param 814 values = np.array(values, dtype=object) # object dtype to avoid casting 815 816 # correctness of inference tested in tests/dtypes/test_inference.py 817 return inferred_dtype, values 818 819 820 @pytest.fixture( 821 params=[ 822 getattr(pd.offsets, o) 823 for o in pd.offsets.__all__ 824 if issubclass(getattr(pd.offsets, o), pd.offsets.Tick) 825 ] 826 ) 827 def tick_classes(request): 828 """ 829 Fixture for Tick based datetime offsets available for a time series. 
830 """ 831 return request.param 832 833 834 # ---------------------------------------------------------------- 835 # Global setup for tests using Hypothesis 836 837 838 # Registering these strategies makes them globally available via st.from_type, 839 # which is use for offsets in tests/tseries/offsets/test_offsets_properties.py 840 for name in "MonthBegin MonthEnd BMonthBegin BMonthEnd".split(): 841 cls = getattr(pd.tseries.offsets, name) 842 st.register_type_strategy( 843 cls, st.builds(cls, n=st.integers(-99, 99), normalize=st.booleans()) 844 ) 845 846 for name in "YearBegin YearEnd BYearBegin BYearEnd".split(): 847 cls = getattr(pd.tseries.offsets, name) 848 st.register_type_strategy( 849 cls, 850 st.builds( 851 cls, 852 n=st.integers(-5, 5), 853 normalize=st.booleans(), 854 month=st.integers(min_value=1, max_value=12), 855 ), 856 ) 857 858 for name in "QuarterBegin QuarterEnd BQuarterBegin BQuarterEnd".split(): 859 cls = getattr(pd.tseries.offsets, name) 860 st.register_type_strategy( 861 cls, 862 st.builds( 863 cls, 864 n=st.integers(-24, 24), 865 normalize=st.booleans(), 866 startingMonth=st.integers(min_value=1, max_value=12), 867 ), 868 ) 869 870 871 @pytest.fixture 872 def float_frame(): 873 """ 874 Fixture for DataFrame of floats with index of unique strings 875 876 Columns are ['A', 'B', 'C', 'D']. 877 878 A B C D 879 P7GACiRnxd -0.465578 -0.361863 0.886172 -0.053465 880 qZKh6afn8n -0.466693 -0.373773 0.266873 1.673901 881 tkp0r6Qble 0.148691 -0.059051 0.174817 1.598433 882 wP70WOCtv8 0.133045 -0.581994 -0.992240 0.261651 883 M2AeYQMnCz -1.207959 -0.185775 0.588206 0.563938 884 QEPzyGDYDo -0.381843 -0.758281 0.502575 -0.565053 885 r78Jwns6dn -0.653707 0.883127 0.682199 0.206159 886 ... ... ... ... ... 887 IHEGx9NO0T -0.277360 0.113021 -1.018314 0.196316 888 lPMj8K27FA -1.313667 -0.604776 -1.305618 -0.863999 889 qa66YMWQa5 1.110525 0.475310 -0.747865 0.032121 890 yOa0ATsmcE -0.431457 0.067094 0.096567 -0.264962 891 65znX3uRNG 1.528446 0.160416 -0.109635 -0.032987 892 eCOBvKqf3e 0.235281 1.622222 0.781255 0.392871 893 xSucinXxuV -1.263557 0.252799 -0.552247 0.400426 894 895 [30 rows x 4 columns] 896 """ 897 return DataFrame(tm.getSeriesData()) 898 899 900 @pytest.fixture(params=[pd.Index, pd.Series], ids=["index", "series"]) 901 def index_or_series(request): 902 """ 903 Fixture to parametrize over Index and Series, made necessary by a mypy 904 bug, giving an error: 905 906 List item 0 has incompatible type "Type[Series]"; expected "Type[PandasObject]" 907 908 See GH#29725 909 """ 910 return request.param 911 912 913 @pytest.fixture 914 def dict_subclass(): 915 """ 916 Fixture for a dictionary subclass. 917 """ 918 919 class TestSubDict(dict): 920 def __init__(self, *args, **kwargs): 921 dict.__init__(self, *args, **kwargs) 922 923 return TestSubDict 924 925 926 @pytest.fixture 927 def non_mapping_dict_subclass(): 928 """ 929 Fixture for a non-mapping dictionary subclass. 930 """ 931 932 class TestNonDictMapping(abc.Mapping): 933 def __init__(self, underlying_dict): 934 self._data = underlying_dict 935 936 def __getitem__(self, key): 937 return self._data.__getitem__(key) 938 939 def __iter__(self): 940 return self._data.__iter__() 941 942 def __len__(self): 943 return self._data.__len__() 944 945 return TestNonDictMapping 946 [end of pandas/conftest.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
87188775b42c67791fc85df99aa02ad7a731c19d
BUG: Timestamp(Timestamp(Ambiguous time)) modifies .value with dateutil tz Pretty obscure bug, but this seems fishy: ``` In [7]: pd.__version__ Out[7]: '0.24.0.dev0+1300.ge0a68076a.dirty' # Ambiguous time In [8]: t = pd.Timestamp(1382835600000000000, tz='dateutil/Europe/London') # Repr is consistent In [11]: t Out[11]: Timestamp('2013-10-27 01:00:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London') In [12]: pd.Timestamp(t) Out[12]: Timestamp('2013-10-27 01:00:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London') # .value changes In [13]: t.value Out[13]: 1382835600000000000 In [14]: pd.Timestamp(t).value Out[14]: 1382832000000000000 ``` pytz timezones behave consistently though ``` In [15]: t = pd.Timestamp(1382835600000000000, tz='Europe/London') In [16]: t Out[16]: Timestamp('2013-10-27 01:00:00+0000', tz='Europe/London') In [17]: pd.Timestamp(t) Out[17]: Timestamp('2013-10-27 01:00:00+0000', tz='Europe/London') In [18]: t.value Out[18]: 1382835600000000000 In [19]: pd.Timestamp(t).value Out[19]: 1382835600000000000 ``` The fact that the repr between dateutil timezones and pytz timezones don't match can be possible be seen in a change in dateutil somewhere around 2.6? But the main issue that is `.value` changes. https://github.com/pandas-dev/pandas/blob/216986d4691297d5cfec33b5c62be7890b9a54d7/pandas/tests/indexes/datetimes/test_timezones.py#L564-L571
Similar behavior occurs with Nonexistent times that occur right at the cusp of the transition point: ``` In [11]: pd.__version__ Out[11]: '0.25.0.dev0+818.g06c2a7163.dirty' In [12]: pd.Timestamp(1552211999999999999, tz='UTC').tz_convert('dateutil/US/Pacific') Out[12]: Timestamp('2019-03-10 01:59:59.999999999-0700', tz='dateutil//usr/share/zoneinfo/US/Pacific') In [13]: pd.Timestamp(pd.Timestamp(1552211999999999999, tz='UTC').tz_convert('dateutil/US/Pacific')) Out[13]: Timestamp('2019-03-10 01:59:59.999999999-0800', tz='dateutil//usr/share/zoneinfo/US/Pacific') ``` My hypothesis is that there's a lower level assumption in `npy_datetimestruct_to_datetime` that is causing this to fail. `convert_datetime_to_tsobject` applies an 8 hour shift from local to UTC (where `npy_datetimestruct_to_datetime` is called) first then a 7 hour shift from UTC to local. @mroeschke Thanks for linking this. The problem is that if `TimeStamp.value` doesn't change (for example if we introduce a shortcut that just returns the object when TimeStamp constructor is called on TimeStamp), this breaks some functionality (`date_range`, to be specific). I'll play with the code a bit and see if I can fix the behavior. Some notes. When we make a Timestamp out of an `int`, `ts.value` is always set to that value, because `convert_to_tsobject` simply scales the value according to the unit and never shifts it. When we later call the Timestamp constructor, it casts the Timestamp to datetime and then updates ts.value based on the results of that cast. So, after the first call, ts.value and object can have different DST settings, and the difference is resolved with a repeated cast. At first glance, it appears that `Timestamp.tz_localize` assumes non-DST time when localizing, but when we call the constructor on a Timestamp object, the implementation assumes the ambiguous time as DST and changes `Timestamp.value`. This seems to happen because of the cast to datetime, later operations and cast back to Timestamp. I will research to confirm. More notes. When we call a Timestamp constructor on an `int` with a `dateutil` timezone, it calls `convert_to_ts_object` in `conversion`, which calls `localize_tso`, which calls `get_dst_info` in `timezones`, which calls `get_utc_trans_times_from_dateutil_tz`, which accesses `zip(tz._trans_list, tz._trans_idx)` to determine DST shifts. So, when we call the constructor, it assumes `DST=True`. When we call it the second time, we go to `convert_datetime_to_tsobject`, which calls `pydatetime_to_dt64` in `np_datetime.pyx`, which calls `PyDateTime_GET_YEAR` to get all the date attributes, and then calls `npy_datetimestruct_to_datetime`, which does not care about the timezone anymore (it assumes `DST=False`). This means that when we get the epoch time this way, we ignore whether the time is DST or not. Then we try to fix the result with `get_utcoffset`: ``` offset = get_utcoffset(obj.tzinfo, ts) obj.value -= int(offset.total_seconds() * 1e9) ``` This shifts time an hour back, because `get_utcoffset` is DST-aware. I will attempt to fix this by taking into account the DST offset in `convert_datetime_to_tsobject`.
2020-01-14T07:04:46Z
<patch> diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst --- a/doc/source/whatsnew/v1.1.0.rst +++ b/doc/source/whatsnew/v1.1.0.rst @@ -59,6 +59,7 @@ Categorical Datetimelike ^^^^^^^^^^^^ +- Bug in :class:`Timestamp` where constructing :class:`Timestamp` from ambiguous epoch time and calling constructor again changed :meth:`Timestamp.value` property (:issue:`24329`) - - diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx --- a/pandas/_libs/tslibs/conversion.pyx +++ b/pandas/_libs/tslibs/conversion.pyx @@ -29,7 +29,7 @@ from pandas._libs.tslibs.util cimport ( from pandas._libs.tslibs.timedeltas cimport cast_from_unit from pandas._libs.tslibs.timezones cimport ( is_utc, is_tzlocal, is_fixed_offset, get_utcoffset, get_dst_info, - get_timezone, maybe_get_tz, tz_compare) + get_timezone, maybe_get_tz, tz_compare, treat_tz_as_dateutil) from pandas._libs.tslibs.timezones import UTC from pandas._libs.tslibs.parsing import parse_datetime_string @@ -362,6 +362,14 @@ cdef _TSObject convert_datetime_to_tsobject(datetime ts, object tz, obj.tzinfo = tz else: obj.value = pydatetime_to_dt64(ts, &obj.dts) + # GH 24329 When datetime is ambiguous, + # pydatetime_to_dt64 doesn't take DST into account + # but with dateutil timezone, get_utcoffset does + # so we need to correct for it + if treat_tz_as_dateutil(ts.tzinfo): + if ts.tzinfo.is_ambiguous(ts): + dst_offset = ts.tzinfo.dst(ts) + obj.value += int(dst_offset.total_seconds() * 1e9) obj.tzinfo = ts.tzinfo if obj.tzinfo is not None and not is_utc(obj.tzinfo): </patch>
[]
[]
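The patch in this row keys its correction on dateutil's ambiguity API (`is_ambiguous` and `dst` on the tzinfo object). As a minimal sketch of that behavior, assuming dateutil >= 2.6 and an available Europe/London zone file (this is illustrative only and not part of the dataset row):

```python
from datetime import datetime
from dateutil import tz

london = tz.gettz("Europe/London")

# 2013-10-27 01:00 wall time occurs twice in London (BST -> GMT transition);
# this is the ambiguous instant from the issue above.
wall = datetime(2013, 10, 27, 1, 0, tzinfo=london)

print(london.is_ambiguous(wall))  # True: two UTC instants share this wall time
print(london.dst(wall))           # 1:00:00 -- fold=0 picks the earlier, BST occurrence

# The patched convert_datetime_to_tsobject adds this DST offset back (in
# nanoseconds) when the datetime is ambiguous, so re-wrapping a Timestamp
# no longer shifts .value by an hour:
shift_ns = int(london.dst(wall).total_seconds() * 1e9)
print(shift_ns)                   # 3600000000000
```

This mirrors the patched branch, which applies the shift only when `treat_tz_as_dateutil` is true, since pytz zones already resolve the ambiguity consistently in the reported scenario.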
conda__conda-11666
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> `conda` shell function injects `$CONDA_PREFIX` into `$PATH` causing incorrect behavior in `conda run` ## Description ### What happened? The `$PATH`/`%PATH%` set by `conda run` includes directories from the base environment; `conda activate`, on the other hand, removes base environment components from the PATH. This can be problematic, as `conda run` and `conda activate` can have different executables (and on Windows, DLLs) available to the user. A basic replicating case: ``` (base) $ conda create -n py39 python=3.9 (base) $ conda run -n py39 python -c 'import os; print(os.environ["PATH"]);' ${CONDA_ROOT}/envs/py39/bin:${CONDA_ROOT}/bin:${CONDA_ROOT}/condabin:${HOME}/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin (base) $ conda activate py39 (py39) $ python -c 'import os; print(os.environ["PATH"]);' ${CONDA_ROOT}/envs/py39/bin:${CONDA_ROOT}/condabin:${HOME}/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin ``` (Note that `${CONDA_ROOT}/bin` gets removed from `$PATH` in the activated environment, but not when using `conda run`.) ### Conda Details <details> <summary><code>conda info</code></summary> ```shell active environment : base active env location : ${CONDA_ROOT} shell level : 1 user config file : ${HOME}/.condarc populated config files : ${HOME}/.condarc conda version : 4.11.0 conda-build version : 3.21.7 python version : 3.8.12.final.0 virtual packages : __osx=10.16=0 __unix=0=0 __archspec=1=x86_64 base environment : ${CONDA_ROOT} (writable) conda av data dir : ${CONDA_ROOT}/etc/conda conda av metadata url : None channel URLs : https://repo.anaconda.com/pkgs/main/osx-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/osx-64 https://repo.anaconda.com/pkgs/r/noarch package cache : ${CONDA_ROOT}/pkgs ${HOME}/.conda/pkgs envs directories : ${CONDA_ROOT}/envs ${HOME}/.conda/envs platform : osx-64 user-agent : conda/4.11.0 requests/2.27.1 CPython/3.8.12 Darwin/20.6.0 OSX/10.16 UID:GID : 502:20 netrc file : None offline mode : False ``` </details> <details> <summary><code>conda config</code></summary> ```shell ==> ${HOME}/.condarc <== restore_free_channel: False conda_build: error_overdepending: True error_overlinking: True ``` </details> <details> <summary><code>conda list</code></summary> ``` # packages in environment at /Users/clee/Applications/miniconda3: # # Name Version Build Channel anaconda-client 1.9.0 py38hecd8cb5_0 defaults attrs 21.4.0 pyhd3eb1b0_0 defaults beautifulsoup4 4.10.0 pyh06a4308_0 defaults brotlipy 0.7.0 py38h9ed2024_1003 defaults bzip2 1.0.8 h1de35cc_0 defaults ca-certificates 2021.10.26 hecd8cb5_2 defaults certifi 2021.10.8 py38hecd8cb5_2 defaults cffi 1.15.0 py38hc55c11b_1 defaults chardet 4.0.0 py38hecd8cb5_1003 defaults charset-normalizer 2.0.4 pyhd3eb1b0_0 defaults clyent 1.2.2 py38_1 defaults conda 4.11.0 py38hecd8cb5_0 defaults conda-build 3.21.7 py38hecd8cb5_0 defaults conda-content-trust 0.1.1 pyhd3eb1b0_0 defaults conda-package-handling 1.7.3 py38h9ed2024_1 defaults conda-token 0.3.0 pyhd3eb1b0_0 defaults coreutils 8.32 haf1e3a3_0 defaults cryptography 36.0.0 py38hf6deb26_0 defaults filelock 3.4.2 pyhd3eb1b0_0 defaults glob2 0.7 pyhd3eb1b0_0 defaults icu 58.2 h0a44026_3 defaults idna 3.3 pyhd3eb1b0_0 defaults importlib-metadata 4.8.2 py38hecd8cb5_0 defaults importlib_metadata 4.8.2 hd3eb1b0_0 defaults ipython_genutils 0.2.0 pyhd3eb1b0_1 defaults jinja2 2.11.3 pyhd3eb1b0_0 defaults jq 1.6 h9ed2024_1000 defaults jsonschema 
3.2.0 pyhd3eb1b0_2 defaults jupyter_core 4.9.1 py38hecd8cb5_0 defaults libarchive 3.4.2 ha0e9c3a_0 defaults libcxx 12.0.0 h2f01273_0 defaults libffi 3.3 hb1e8313_2 defaults libiconv 1.16 h1de35cc_0 defaults liblief 0.10.1 h0a44026_0 defaults libxml2 2.9.12 hcdb78fc_0 defaults lz4-c 1.9.3 h23ab428_1 defaults markupsafe 2.0.1 py38h9ed2024_0 defaults nbformat 5.1.3 pyhd3eb1b0_0 defaults ncurses 6.3 hca72f7f_2 defaults oniguruma 6.9.7.1 h9ed2024_0 defaults openssl 1.1.1m hca72f7f_0 defaults packaging 21.3 pyhd3eb1b0_0 defaults pip 21.2.4 py38hecd8cb5_0 defaults pkginfo 1.8.2 pyhd3eb1b0_0 defaults psutil 5.8.0 py38h9ed2024_1 defaults py-lief 0.10.1 py38haf313ee_0 defaults pycosat 0.6.3 py38h1de35cc_1 defaults pycparser 2.21 pyhd3eb1b0_0 defaults pyopenssl 21.0.0 pyhd3eb1b0_1 defaults pyparsing 3.0.4 pyhd3eb1b0_0 defaults pyrsistent 0.18.0 py38hca72f7f_0 defaults pysocks 1.7.1 py38_1 defaults python 3.8.12 h88f2d9e_0 defaults python-dateutil 2.8.2 pyhd3eb1b0_0 defaults python-libarchive-c 2.9 pyhd3eb1b0_1 defaults python.app 3 py38hca72f7f_0 defaults pytz 2021.3 pyhd3eb1b0_0 defaults pyyaml 6.0 py38hca72f7f_1 defaults readline 8.1.2 hca72f7f_1 defaults requests 2.27.1 pyhd3eb1b0_0 defaults ripgrep 12.1.1 0 defaults ruamel.yaml 0.16.12 py38haf1e3a3_1 defaults ruamel.yaml.clib 0.2.6 py38hca72f7f_0 defaults ruamel_yaml 0.15.100 py38h9ed2024_0 defaults setuptools 58.0.4 py38hecd8cb5_0 defaults six 1.16.0 pyhd3eb1b0_0 defaults soupsieve 2.3.1 pyhd3eb1b0_0 defaults sqlite 3.37.0 h707629a_0 defaults tk 8.6.11 h7bc2e8c_0 defaults tqdm 4.62.3 pyhd3eb1b0_1 defaults traitlets 5.1.1 pyhd3eb1b0_0 defaults urllib3 1.26.7 pyhd3eb1b0_0 defaults wget 1.20.1 h051b688_0 defaults wheel 0.37.1 pyhd3eb1b0_0 defaults xz 5.2.5 h1de35cc_0 defaults yaml 0.2.5 haf1e3a3_0 defaults zipp 3.7.0 pyhd3eb1b0_0 defaults zlib 1.2.11 h4dc903c_4 defaults zstd 1.5.0 hcb37349_1 defaults ``` </details> ### Resolution It would appear that the `__add_sys_prefix_to_path` shell function added in conda 4.6.12 is the culprit here. ## Duplicate Issues - https://github.com/conda/conda/issues/11305 - https://github.com/conda/conda/issues/9587 - https://github.com/conda/conda/issues/8450 - https://github.com/conda/conda/issues/10786 - https://github.com/conda/conda/issues/9571 </issue> <code> [start of README.md] 1 [conda-logo]: https://s3.amazonaws.com/conda-dev/conda_logo.svg 2 [ci-tests-badge]: https://github.com/conda/conda/actions/workflows/ci.yml/badge.svg 3 [ci-images-badge]: https://github.com/conda/conda/actions/workflows/ci-images.yml/badge.svg 4 [codecov-badge]: https://img.shields.io/codecov/c/github/conda/conda/main.svg?label=coverage 5 [release-badge]: https://img.shields.io/github/release/conda/conda.svg 6 [gitpod]: https://gitpod.io/button/open-in-gitpod.svg 7 8 [![Conda Logo][conda-logo]](https://github.com/conda/conda) 9 10 11 [![CI Tests (GitHub Actions)][ci-tests-badge]](https://github.com/conda/conda/actions/workflows/ci.yml) 12 [![CI Images (GitHub Actions)][ci-images-badge]](https://github.com/conda/conda/actions/workflows/ci-images.yml) 13 [![Codecov Status][codecov-badge]](https://codecov.io/gh/conda/conda/branch/main) 14 [![latest release version][release-badge]](https://github.com/conda/conda/releases) 15 16 Conda is a cross-platform, language-agnostic binary package manager. It is the 17 package manager used by [Anaconda](https://www.anaconda.com/distribution/) installations, but it may be 18 used for other systems as well. 
Conda makes environments first-class 19 citizens, making it easy to create independent environments even for C 20 libraries. Conda is written entirely in Python, and is BSD licensed open 21 source. 22 23 Conda is enhanced by organizations, tools, and repositories created and managed by 24 the amazing members of the conda community. Some of them can be found 25 [here](https://github.com/conda/conda/wiki/Conda-Community). 26 27 28 ## Installation 29 30 Conda is a part of the [Anaconda Distribution](https://repo.anaconda.com). 31 Use [Miniconda](https://docs.conda.io/en/latest/miniconda.html) to bootstrap a minimal installation 32 that only includes conda and its dependencies. 33 34 35 ## Getting Started 36 37 If you install the Anaconda Distribution, you will already have hundreds of packages 38 installed. You can see what packages are installed by running 39 40 ```bash 41 $ conda list 42 ``` 43 44 to see all the packages that are available, use 45 46 ```bash 47 $ conda search 48 ``` 49 50 and to install a package, use 51 52 ```bash 53 $ conda install <package-name> 54 ``` 55 56 The real power of conda comes from its ability to manage environments. 57 In conda, an environment can be thought of as a completely separate installation. 58 Conda installs packages into environments efficiently using [hard links](https://en.wikipedia.org/wiki/Hard_link) by default when it is possible, so 59 environments are space efficient, and take seconds to create. 60 61 The default environment, which `conda` itself is installed into is called 62 `base`. To create another environment, use the `conda create` 63 command. For instance, to create an environment with the IPython notebook and 64 NumPy 1.6, which is older than the version that comes with Anaconda by 65 default, you would run: 66 67 ```bash 68 $ conda create -n numpy16 ipython-notebook numpy=1.6 69 ``` 70 71 This creates an environment called `numpy16` with the latest version of 72 the IPython notebook, NumPy 1.6, and their dependencies. 73 74 We can now activate this environment, use 75 76 ```bash 77 $ conda activate numpy16 78 ``` 79 80 This puts the bin directory of the `numpy16` environment in the front of the 81 `PATH`, and sets it as the default environment for all subsequent conda commands. 82 83 To go back to the base environment, use 84 85 ```bash 86 $ conda deactivate 87 ``` 88 89 ## Building Your Own Packages 90 91 You can easily build your own packages for conda, and upload them 92 to [anaconda.org](https://anaconda.org), a free service for hosting 93 packages for conda, as well as other package managers. 94 To build a package, create a recipe. Package building documentation is available 95 [here](https://docs.conda.io/projects/conda-build/en/latest/). 96 See [AnacondaRecipes](https://github.com/AnacondaRecipes) for the recipes that make up the Anaconda Distribution and `defaults` channel. 97 [Conda-forge](https://conda-forge.org/feedstocks/) and [Bioconda](https://github.com/bioconda/bioconda-recipes) are community-driven conda-based distributions. 98 99 To upload to anaconda.org, create an account. Then, install the 100 anaconda-client and login 101 102 ```bash 103 $ conda install anaconda-client 104 $ anaconda login 105 ``` 106 107 Then, after you build your recipe 108 109 ```bash 110 $ conda build <recipe-dir> 111 ``` 112 113 you will be prompted to upload to anaconda.org. 
114 115 To add your anaconda.org channel, or other's channels, to conda so 116 that `conda install` will find and install their packages, run 117 118 ```bash 119 $ conda config --add channels https://conda.anaconda.org/username 120 ``` 121 122 (replacing `username` with the username of the person whose channel you want 123 to add). 124 125 ## Getting Help 126 127 - [Documentation](https://docs.conda.io/projects/conda/en/latest) 128 - [Twitter](https://twitter.com/condaproject) 129 - [Slack](https://conda.slack.com) 130 - [Bug Reports/Feature Requests](https://github.com/conda/conda/issues) 131 - [Installer/Package Issues](https://github.com/ContinuumIO/anaconda-issues/issues) 132 133 ## Contributing 134 135 [![open in gitpod for one-click development][gitpod]](https://gitpod.io/#https://github.com/conda/conda) 136 137 Contributions to conda are welcome. See the [contributing](CONTRIBUTING.md) documentation 138 for instructions on setting up a development environment. 139 [end of README.md] [start of conda/_vendor/appdirs.py] 1 #!/usr/bin/env python 2 # Copyright (c) 2005-2010 ActiveState Software Inc. 3 4 """Utilities for determining application-specific dirs. 5 6 See <http://github.com/ActiveState/appdirs> for details and usage. 7 """ 8 # Dev Notes: 9 # - MSDN on where to store app data files: 10 # http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120 11 # - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html 12 # - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html 13 14 __version_info__ = (1, 2, 0) 15 __version__ = '.'.join(map(str, __version_info__)) 16 17 18 import sys 19 import os 20 21 PY3 = sys.version_info[0] == 3 22 23 if PY3: 24 unicode = str 25 26 class AppDirsError(Exception): 27 pass 28 29 30 31 def user_data_dir(appname, appauthor=None, version=None, roaming=False): 32 r"""Return full path to the user-specific data dir for this application. 33 34 "appname" is the name of application. 35 "appauthor" (only required and used on Windows) is the name of the 36 appauthor or distributing body for this application. Typically 37 it is the owning company name. 38 "version" is an optional version path element to append to the 39 path. You might want to use this if you want multiple versions 40 of your app to be able to run independently. If used, this 41 would typically be "<major>.<minor>". 42 "roaming" (boolean, default False) can be set True to use the Windows 43 roaming appdata directory. That means that for users on a Windows 44 network setup for roaming profiles, this user data will be 45 sync'd on login. See 46 <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx> 47 for a discussion of issues. 48 49 Typical user data directories are: 50 Mac OS X: ~/Library/Application Support/<AppName> 51 Unix: ~/.config/<appname> # or in $XDG_CONFIG_HOME if defined 52 Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName> 53 Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName> 54 Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName> 55 Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName> 56 57 For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME. We don't 58 use $XDG_DATA_HOME as that data dir is mostly used at the time of 59 installation, instead of the application adding data during runtime. 
60 Also, in practice, Linux apps tend to store their data in 61 "~/.config/<appname>" instead of "~/.local/share/<appname>". 62 """ 63 if sys.platform.startswith("win"): 64 if appauthor is None: 65 raise AppDirsError("must specify 'appauthor' on Windows") 66 const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA" 67 path = os.path.join(_get_win_folder(const), appauthor, appname) 68 elif sys.platform == 'darwin': 69 path = os.path.join( 70 os.path.expanduser('~/Library/Application Support/'), 71 appname) 72 else: 73 path = os.path.join( 74 os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config")), 75 appname.lower()) 76 if version: 77 path = os.path.join(path, version) 78 return path 79 80 81 def site_data_dir(appname, appauthor=None, version=None): 82 """Return full path to the user-shared data dir for this application. 83 84 "appname" is the name of application. 85 "appauthor" (only required and used on Windows) is the name of the 86 appauthor or distributing body for this application. Typically 87 it is the owning company name. 88 "version" is an optional version path element to append to the 89 path. You might want to use this if you want multiple versions 90 of your app to be able to run independently. If used, this 91 would typically be "<major>.<minor>". 92 93 Typical user data directories are: 94 Mac OS X: /Library/Application Support/<AppName> 95 Unix: /etc/xdg/<appname> 96 Win XP: C:\Documents and Settings\All Users\Application Data\<AppAuthor>\<AppName> 97 Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) 98 Win 7: C:\ProgramData\<AppAuthor>\<AppName> # Hidden, but writeable on Win 7. 99 100 For Unix, this is using the $XDG_CONFIG_DIRS[0] default. 101 102 WARNING: Do not use this on Windows. See the Vista-Fail note above for why. 103 """ 104 if sys.platform.startswith("win"): 105 if appauthor is None: 106 raise AppDirsError("must specify 'appauthor' on Windows") 107 path = os.path.join(_get_win_folder("CSIDL_COMMON_APPDATA"), 108 appauthor, appname) 109 elif sys.platform == 'darwin': 110 path = os.path.join( 111 os.path.expanduser('/Library/Application Support'), 112 appname) 113 else: 114 # XDG default for $XDG_CONFIG_DIRS[0]. Perhaps should actually 115 # *use* that envvar, if defined. 116 path = "/etc/xdg/"+appname.lower() 117 if version: 118 path = os.path.join(path, version) 119 return path 120 121 122 def user_cache_dir(appname, appauthor=None, version=None, opinion=True): 123 r"""Return full path to the user-specific cache dir for this application. 124 125 "appname" is the name of application. 126 "appauthor" (only required and used on Windows) is the name of the 127 appauthor or distributing body for this application. Typically 128 it is the owning company name. 129 "version" is an optional version path element to append to the 130 path. You might want to use this if you want multiple versions 131 of your app to be able to run independently. If used, this 132 would typically be "<major>.<minor>". 133 "opinion" (boolean) can be False to disable the appending of 134 "Cache" to the base app data dir for Windows. See 135 discussion below. 
136 137 Typical user cache directories are: 138 Mac OS X: ~/Library/Caches/<AppName> 139 Unix: ~/.cache/<appname> (XDG default) 140 Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Cache 141 Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Cache 142 143 On Windows the only suggestion in the MSDN docs is that local settings go in 144 the `CSIDL_LOCAL_APPDATA` directory. This is identical to the non-roaming 145 app data dir (the default returned by `user_data_dir` above). Apps typically 146 put cache data somewhere *under* the given dir here. Some examples: 147 ...\Mozilla\Firefox\Profiles\<ProfileName>\Cache 148 ...\Acme\SuperApp\Cache\1.0 149 OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value. 150 This can be disabled with the `opinion=False` option. 151 """ 152 if sys.platform.startswith("win"): 153 if appauthor is None: 154 raise AppDirsError("must specify 'appauthor' on Windows") 155 path = os.path.join(_get_win_folder("CSIDL_LOCAL_APPDATA"), 156 appauthor, appname) 157 if opinion: 158 path = os.path.join(path, "Cache") 159 elif sys.platform == 'darwin': 160 path = os.path.join( 161 os.path.expanduser('~/Library/Caches'), 162 appname) 163 else: 164 path = os.path.join( 165 os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache')), 166 appname.lower()) 167 if version: 168 path = os.path.join(path, version) 169 return path 170 171 def user_log_dir(appname, appauthor=None, version=None, opinion=True): 172 r"""Return full path to the user-specific log dir for this application. 173 174 "appname" is the name of application. 175 "appauthor" (only required and used on Windows) is the name of the 176 appauthor or distributing body for this application. Typically 177 it is the owning company name. 178 "version" is an optional version path element to append to the 179 path. You might want to use this if you want multiple versions 180 of your app to be able to run independently. If used, this 181 would typically be "<major>.<minor>". 182 "opinion" (boolean) can be False to disable the appending of 183 "Logs" to the base app data dir for Windows, and "log" to the 184 base cache dir for Unix. See discussion below. 185 186 Typical user cache directories are: 187 Mac OS X: ~/Library/Logs/<AppName> 188 Unix: ~/.cache/<appname>/log # or under $XDG_CACHE_HOME if defined 189 Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Logs 190 Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Logs 191 192 On Windows the only suggestion in the MSDN docs is that local settings 193 go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in 194 examples of what some windows apps use for a logs dir.) 195 196 OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA` 197 value for Windows and appends "log" to the user cache dir for Unix. 198 This can be disabled with the `opinion=False` option. 
199 """ 200 if sys.platform == "darwin": 201 path = os.path.join( 202 os.path.expanduser('~/Library/Logs'), 203 appname) 204 elif sys.platform == "win32": 205 path = user_data_dir(appname, appauthor, version); version=False 206 if opinion: 207 path = os.path.join(path, "Logs") 208 else: 209 path = user_cache_dir(appname, appauthor, version); version=False 210 if opinion: 211 path = os.path.join(path, "log") 212 if version: 213 path = os.path.join(path, version) 214 return path 215 216 217 class AppDirs(object): 218 """Convenience wrapper for getting application dirs.""" 219 def __init__(self, appname, appauthor, version=None, roaming=False): 220 self.appname = appname 221 self.appauthor = appauthor 222 self.version = version 223 self.roaming = roaming 224 @property 225 def user_data_dir(self): 226 return user_data_dir(self.appname, self.appauthor, 227 version=self.version, roaming=self.roaming) 228 @property 229 def site_data_dir(self): 230 return site_data_dir(self.appname, self.appauthor, 231 version=self.version) 232 @property 233 def user_cache_dir(self): 234 return user_cache_dir(self.appname, self.appauthor, 235 version=self.version) 236 @property 237 def user_log_dir(self): 238 return user_log_dir(self.appname, self.appauthor, 239 version=self.version) 240 241 242 243 244 #---- internal support stuff 245 246 def _get_win_folder_from_registry(csidl_name): 247 """This is a fallback technique at best. I'm not sure if using the 248 registry for this guarantees us the correct answer for all CSIDL_* 249 names. 250 """ 251 import _winreg 252 253 shell_folder_name = { 254 "CSIDL_APPDATA": "AppData", 255 "CSIDL_COMMON_APPDATA": "Common AppData", 256 "CSIDL_LOCAL_APPDATA": "Local AppData", 257 }[csidl_name] 258 259 key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, 260 r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders") 261 dir, type = _winreg.QueryValueEx(key, shell_folder_name) 262 return dir 263 264 def _get_win_folder_with_pywin32(csidl_name): 265 from win32com.shell import shellcon, shell 266 dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0) 267 # Try to make this a unicode path because SHGetFolderPath does 268 # not return unicode strings when there is unicode data in the 269 # path. 270 try: 271 dir = unicode(dir) 272 273 # Downgrade to short path name if have highbit chars. See 274 # <http://bugs.activestate.com/show_bug.cgi?id=85099>. 275 has_high_char = False 276 for c in dir: 277 if ord(c) > 255: 278 has_high_char = True 279 break 280 if has_high_char: 281 try: 282 import win32api 283 dir = win32api.GetShortPathName(dir) 284 except ImportError: 285 pass 286 except UnicodeError: 287 pass 288 return dir 289 290 def _get_win_folder_with_ctypes(csidl_name): 291 import ctypes 292 293 csidl_const = { 294 "CSIDL_APPDATA": 26, 295 "CSIDL_COMMON_APPDATA": 35, 296 "CSIDL_LOCAL_APPDATA": 28, 297 }[csidl_name] 298 299 buf = ctypes.create_unicode_buffer(1024) 300 ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf) 301 302 # Downgrade to short path name if have highbit chars. See 303 # <http://bugs.activestate.com/show_bug.cgi?id=85099>. 
304 has_high_char = False 305 for c in buf: 306 if ord(c) > 255: 307 has_high_char = True 308 break 309 if has_high_char: 310 buf2 = ctypes.create_unicode_buffer(1024) 311 if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024): 312 buf = buf2 313 314 return buf.value 315 316 if sys.platform == "win32": 317 try: 318 import win32com.shell 319 _get_win_folder = _get_win_folder_with_pywin32 320 except ImportError: 321 try: 322 import ctypes 323 _get_win_folder = _get_win_folder_with_ctypes 324 except ImportError: 325 _get_win_folder = _get_win_folder_from_registry 326 327 328 329 #---- self test code 330 331 if __name__ == "__main__": 332 appname = "MyApp" 333 appauthor = "MyCompany" 334 335 props = ("user_data_dir", "site_data_dir", "user_cache_dir", 336 "user_log_dir") 337 338 print("-- app dirs (without optional 'version')") 339 dirs = AppDirs(appname, appauthor, version="1.0") 340 for prop in props: 341 print("%s: %s" % (prop, getattr(dirs, prop))) 342 343 print("\n-- app dirs (with optional 'version')") 344 dirs = AppDirs(appname, appauthor) 345 for prop in props: 346 print("%s: %s" % (prop, getattr(dirs, prop))) 347 [end of conda/_vendor/appdirs.py] [start of conda/base/constants.py] 1 # -*- coding: utf-8 -*- 2 # Copyright (C) 2012 Anaconda, Inc 3 # SPDX-License-Identifier: BSD-3-Clause 4 """ 5 This file should hold most string literals and magic numbers used throughout the code base. 6 The exception is if a literal is specifically meant to be private to and isolated within a module. 7 Think of this as a "more static" source of configuration information. 8 9 Another important source of "static" configuration is conda/models/enums.py. 10 """ 11 from __future__ import absolute_import, division, print_function, unicode_literals 12 13 from enum import Enum, EnumMeta 14 from os.path import join 15 import struct 16 17 from ..common.compat import on_win, six_with_metaclass 18 19 PREFIX_PLACEHOLDER = ('/opt/anaconda1anaconda2' 20 # this is intentionally split into parts, such that running 21 # this program on itself will leave it unchanged 22 'anaconda3') 23 24 machine_bits = 8 * struct.calcsize("P") 25 26 APP_NAME = 'conda' 27 28 if on_win: 29 SEARCH_PATH = ( 30 'C:/ProgramData/conda/.condarc', 31 'C:/ProgramData/conda/condarc', 32 'C:/ProgramData/conda/condarc.d', 33 ) 34 else: 35 SEARCH_PATH = ( 36 '/etc/conda/.condarc', 37 '/etc/conda/condarc', 38 '/etc/conda/condarc.d/', 39 '/var/lib/conda/.condarc', 40 '/var/lib/conda/condarc', 41 '/var/lib/conda/condarc.d/', 42 ) 43 44 SEARCH_PATH += ( 45 '$CONDA_ROOT/.condarc', 46 '$CONDA_ROOT/condarc', 47 '$CONDA_ROOT/condarc.d/', 48 '$XDG_CONFIG_HOME/conda/.condarc', 49 '$XDG_CONFIG_HOME/conda/condarc', 50 '$XDG_CONFIG_HOME/conda/condarc.d/', 51 '~/.config/conda/.condarc', 52 '~/.config/conda/condarc', 53 '~/.config/conda/condarc.d/', 54 '~/.conda/.condarc', 55 '~/.conda/condarc', 56 '~/.conda/condarc.d/', 57 '~/.condarc', 58 '$CONDA_PREFIX/.condarc', 59 '$CONDA_PREFIX/condarc', 60 '$CONDA_PREFIX/condarc.d/', 61 '$CONDARC', 62 ) 63 64 DEFAULT_CHANNEL_ALIAS = 'https://conda.anaconda.org' 65 CONDA_HOMEPAGE_URL = 'https://conda.io' 66 ERROR_UPLOAD_URL = 'https://conda.io/conda-post/unexpected-error' 67 DEFAULTS_CHANNEL_NAME = 'defaults' 68 69 KNOWN_SUBDIRS = PLATFORM_DIRECTORIES = ( 70 "noarch", 71 "linux-32", 72 "linux-64", 73 "linux-aarch64", 74 "linux-armv6l", 75 "linux-armv7l", 76 "linux-ppc64", 77 "linux-ppc64le", 78 "linux-s390x", 79 "osx-64", 80 "osx-arm64", 81 "win-32", 82 "win-64", 83 "zos-z", 84 ) 85 86 RECOGNIZED_URL_SCHEMES = 
('http', 'https', 'ftp', 's3', 'file') 87 88 89 DEFAULT_CHANNELS_UNIX = ( 90 'https://repo.anaconda.com/pkgs/main', 91 'https://repo.anaconda.com/pkgs/r', 92 ) 93 94 DEFAULT_CHANNELS_WIN = ( 95 'https://repo.anaconda.com/pkgs/main', 96 'https://repo.anaconda.com/pkgs/r', 97 'https://repo.anaconda.com/pkgs/msys2', 98 ) 99 100 DEFAULT_CUSTOM_CHANNELS = { 101 'pkgs/pro': 'https://repo.anaconda.com', 102 } 103 104 DEFAULT_CHANNELS = DEFAULT_CHANNELS_WIN if on_win else DEFAULT_CHANNELS_UNIX 105 106 ROOT_ENV_NAME = 'base' 107 108 ROOT_NO_RM = ( 109 'python', 110 'pycosat', 111 'ruamel_yaml', 112 'conda', 113 'openssl', 114 'requests', 115 ) 116 117 DEFAULT_AGGRESSIVE_UPDATE_PACKAGES = ( 118 'ca-certificates', 119 'certifi', 120 'openssl', 121 ) 122 123 if on_win: 124 COMPATIBLE_SHELLS = ( 125 'bash', 126 'cmd.exe', 127 'fish', 128 'tcsh', 129 'xonsh', 130 'zsh', 131 'powershell', 132 ) 133 else: 134 COMPATIBLE_SHELLS = ( 135 'bash', 136 'fish', 137 'tcsh', 138 'xonsh', 139 'zsh', 140 'powershell', 141 ) 142 143 144 # Maximum priority, reserved for packages we really want to remove 145 MAX_CHANNEL_PRIORITY = 10000 146 147 CONDA_PACKAGE_EXTENSION_V1 = ".tar.bz2" 148 CONDA_PACKAGE_EXTENSION_V2 = ".conda" 149 CONDA_PACKAGE_EXTENSIONS = ( 150 CONDA_PACKAGE_EXTENSION_V2, 151 CONDA_PACKAGE_EXTENSION_V1, 152 ) 153 CONDA_TARBALL_EXTENSION = CONDA_PACKAGE_EXTENSION_V1 # legacy support for conda-build; remove this line # NOQA 154 CONDA_TEMP_EXTENSION = '.c~' 155 CONDA_TEMP_EXTENSIONS = (CONDA_TEMP_EXTENSION, ".trash") 156 CONDA_LOGS_DIR = ".logs" 157 158 UNKNOWN_CHANNEL = "<unknown>" 159 REPODATA_FN = "repodata.json" 160 161 #: Default name of the notices file on the server we look for 162 NOTICES_FN = "notices.json" 163 164 #: Name of cache file where read notice IDs are stored 165 NOTICES_CACHE_FN = "notices.cache" 166 167 #: Determines the subdir for notices cache 168 NOTICES_CACHE_SUBDIR = "notices" 169 170 DRY_RUN_PREFIX = "Dry run action:" 171 PREFIX_NAME_DISALLOWED_CHARS = {"/", " ", ":", "#"} 172 173 # TODO: Determine whether conda.base is the right place for this data; it 174 # should be a constant, but another module may be more appropriate. 175 # 176 # You could argue that the signatures being here is not necessary; indeed, we 177 # are not necessarily going to be able to check them *properly* (based on some 178 # prior expectations) as the user, since this is the beginning of trust 179 # bootstrapping, the first/backup version of the root of trust metadata. 180 # Still, the signatures here are useful for diagnostic purposes, and, more 181 # important, to allow self-consistency checks: that helps us avoid breaking the 182 # chain of trust if someone accidentally lists the wrong keys down the line. (: 183 # The discrepancy can be detected when loading the root data, and we can 184 # decline to cache incorrect trust metadata that would make further root 185 # updates impossible. 
186 # 187 INITIAL_TRUST_ROOT = { 188 "signatures": { 189 "6d4d5888398ad77465e9fd53996309187723e16509144aa6733015c960378e7a": { 190 "other_headers": "04001608001d162104d2ca1d4bf5d77e7c312534284dd9c45328b685ec0502605dbb03", # noqa: E501 191 "signature": "b71c9b3aa60e77258c402e574397127bcb4bc15ef3055ada8539b0d1e355bf1415a135fb7cecc9244f839a929f6b1f82844a5b3df8d6225ec9a50b181692490f" # noqa: E501 192 }, 193 "508debb915ede0b16dc0cff63f250bde73c5923317b44719fcfc25cc95560c44": { 194 "other_headers": "04001608001d162104e6dffee4638f24cfa60a08ba03afe1314a3a38fc050260621281", # noqa: E501 195 "signature": "29d53d4e7dbea0a3efb07266d22e57cf4df7abe004453981c631245716e1b737c7a6b4ab95f42592af70be67abf56e97020e1aa1f52b49ef39b37481f05d5701" # noqa: E501 196 } 197 }, 198 "signed": { 199 "delegations": { 200 "key_mgr": { 201 "pubkeys": [ 202 "f24c813d23a9b26be665eee5c54680c35321061b337f862385ed6d783b0bedb0" 203 ], 204 "threshold": 1 205 }, 206 "root": { 207 "pubkeys": [ 208 "668a3217d72d4064edb16648435dc4a3e09a172ecee45dcab1464dcd2f402ec6", 209 "508debb915ede0b16dc0cff63f250bde73c5923317b44719fcfc25cc95560c44", 210 "6d4d5888398ad77465e9fd53996309187723e16509144aa6733015c960378e7a", 211 "e0c88b4c0721bd451b7e720dfb0d0bb6b3853f0cbcf5570edd73367d0841be51" 212 ], 213 "threshold": 2 214 } 215 }, 216 "expiration": "2022-10-31T18:00:00Z", 217 "metadata_spec_version": "0.6.0", 218 "timestamp": "2021-03-26T00:00:00Z", 219 "type": "root", 220 "version": 1 221 } 222 } 223 224 225 class SafetyChecks(Enum): 226 disabled = 'disabled' 227 warn = 'warn' 228 enabled = 'enabled' 229 230 def __str__(self): 231 return self.value 232 233 234 class PathConflict(Enum): 235 clobber = 'clobber' 236 warn = 'warn' 237 prevent = 'prevent' 238 239 def __str__(self): 240 return self.value 241 242 243 class DepsModifier(Enum): 244 """Flags to enable alternate handling of dependencies.""" 245 NOT_SET = 'not_set' # default 246 NO_DEPS = 'no_deps' 247 ONLY_DEPS = 'only_deps' 248 249 def __str__(self): 250 return self.value 251 252 253 class UpdateModifier(Enum): 254 SPECS_SATISFIED_SKIP_SOLVE = 'specs_satisfied_skip_solve' 255 FREEZE_INSTALLED = 'freeze_installed' # freeze is a better name for --no-update-deps 256 UPDATE_DEPS = 'update_deps' 257 UPDATE_SPECS = 'update_specs' # default 258 UPDATE_ALL = 'update_all' 259 # TODO: add REINSTALL_ALL, see https://github.com/conda/conda/issues/6247 and https://github.com/conda/conda/issues/3149 # NOQA 260 261 def __str__(self): 262 return self.value 263 264 265 class ChannelPriorityMeta(EnumMeta): 266 267 def __call__(cls, value, *args, **kwargs): 268 try: 269 return super(ChannelPriorityMeta, cls).__call__(value, *args, **kwargs) 270 except ValueError: 271 if isinstance(value, str): 272 from ..auxlib.type_coercion import typify 273 value = typify(value) 274 if value is True: 275 value = 'flexible' 276 elif value is False: 277 value = cls.DISABLED 278 return super(ChannelPriorityMeta, cls).__call__(value, *args, **kwargs) 279 280 281 class ValueEnum(Enum): 282 """Subclass of enum that returns the value of the enum as its str representation""" 283 284 def __str__(self): 285 return f"{self.value}" 286 287 288 class ChannelPriority(six_with_metaclass(ChannelPriorityMeta, ValueEnum)): 289 __name__ = "ChannelPriority" 290 291 STRICT = 'strict' 292 # STRICT_OR_FLEXIBLE = 'strict_or_flexible' # TODO: consider implementing if needed 293 FLEXIBLE = 'flexible' 294 DISABLED = 'disabled' 295 296 297 class SatSolverChoice(ValueEnum): 298 PYCOSAT = 'pycosat' 299 PYCRYPTOSAT = 'pycryptosat' 300 PYSAT = 'pysat' 301 
302 303 class ExperimentalSolverChoice(ValueEnum): 304 CLASSIC = 'classic' 305 LIBMAMBA = 'libmamba' 306 LIBMAMBA_DRAFT = 'libmamba-draft' 307 308 309 class NoticeLevel(ValueEnum): 310 CRITICAL = "critical" 311 WARNING = "warning" 312 INFO = "info" 313 314 315 # Magic files for permissions determination 316 PACKAGE_CACHE_MAGIC_FILE = 'urls.txt' 317 PREFIX_MAGIC_FILE = join('conda-meta', 'history') 318 319 PREFIX_STATE_FILE = join('conda-meta', 'state') 320 PACKAGE_ENV_VARS_DIR = join('etc', 'conda', 'env_vars.d') 321 CONDA_ENV_VARS_UNSET_VAR = "***unset***" 322 323 324 # TODO: should be frozendict(), but I don't want to import frozendict from auxlib here. 325 NAMESPACES_MAP = { # base package name, namespace 326 "python": "python", 327 "r": "r", 328 "r-base": "r", 329 "mro-base": "r", 330 "erlang": "erlang", 331 "java": "java", 332 "openjdk": "java", 333 "julia": "julia", 334 "latex": "latex", 335 "lua": "lua", 336 "nodejs": "js", 337 "perl": "perl", 338 "php": "php", 339 "ruby": "ruby", 340 "m2-base": "m2", 341 "msys2-conda-epoch": "m2w64", 342 } 343 344 NAMESPACE_PACKAGE_NAMES = frozenset(NAMESPACES_MAP) 345 NAMESPACES = frozenset(NAMESPACES_MAP.values()) 346 347 # Namespace arbiters of uniqueness 348 # global: some repository established by Anaconda, Inc. and conda-forge 349 # python: https://pypi.org/simple 350 # r: https://cran.r-project.org/web/packages/available_packages_by_name.html 351 # erlang: https://hex.pm/packages 352 # java: https://repo1.maven.org/maven2/ 353 # julia: https://pkg.julialang.org/ 354 # latex: https://ctan.org/pkg 355 # lua: https://luarocks.org/m/root 356 # js: https://docs.npmjs.com/misc/registry 357 # pascal: ??? 358 # perl: https://www.cpan.org/modules/01modules.index.html 359 # php: https://packagist.org/ 360 # ruby: https://rubygems.org/gems 361 # clojure: https://clojars.org/ 362 363 364 # Not all python namespace packages are registered on PyPI. If a package 365 # contains files in site-packages, it probably belongs in the python namespace. 366 [end of conda/base/constants.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conda/conda
bec287d82e18699fad29fa58cf2425e1e0e20143
`conda` shell function injects `$CONDA_PREFIX` into `$PATH` causing incorrect behavior in `conda run` ## Description ### What happened? The `$PATH`/`%PATH%` set by `conda run` includes directories from the base environment; `conda activate`, on the other hand, removes base environment components from the PATH. This can be problematic, as `conda run` and `conda activate` can have different executables (and on Windows, DLLs) available to the user. A basic replicating case: ``` (base) $ conda create -n py39 python=3.9 (base) $ conda run -n py39 python -c 'import os; print(os.environ["PATH"]);' ${CONDA_ROOT}/envs/py39/bin:${CONDA_ROOT}/bin:${CONDA_ROOT}/condabin:${HOME}/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin (base) $ conda activate py39 (py39) $ python -c 'import os; print(os.environ["PATH"]);' ${CONDA_ROOT}/envs/py39/bin:${CONDA_ROOT}/condabin:${HOME}/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin ``` (Note that `${CONDA_ROOT}/bin` gets removed from `$PATH` in the activated environment, but not when using `conda run`.) ### Conda Details <details> <summary><code>conda info</code></summary> ```shell active environment : base active env location : ${CONDA_ROOT} shell level : 1 user config file : ${HOME}/.condarc populated config files : ${HOME}/.condarc conda version : 4.11.0 conda-build version : 3.21.7 python version : 3.8.12.final.0 virtual packages : __osx=10.16=0 __unix=0=0 __archspec=1=x86_64 base environment : ${CONDA_ROOT} (writable) conda av data dir : ${CONDA_ROOT}/etc/conda conda av metadata url : None channel URLs : https://repo.anaconda.com/pkgs/main/osx-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/osx-64 https://repo.anaconda.com/pkgs/r/noarch package cache : ${CONDA_ROOT}/pkgs ${HOME}/.conda/pkgs envs directories : ${CONDA_ROOT}/envs ${HOME}/.conda/envs platform : osx-64 user-agent : conda/4.11.0 requests/2.27.1 CPython/3.8.12 Darwin/20.6.0 OSX/10.16 UID:GID : 502:20 netrc file : None offline mode : False ``` </details> <details> <summary><code>conda config</code></summary> ```shell ==> ${HOME}/.condarc <== restore_free_channel: False conda_build: error_overdepending: True error_overlinking: True ``` </details> <details> <summary><code>conda list</code></summary> ``` # packages in environment at /Users/clee/Applications/miniconda3: # # Name Version Build Channel anaconda-client 1.9.0 py38hecd8cb5_0 defaults attrs 21.4.0 pyhd3eb1b0_0 defaults beautifulsoup4 4.10.0 pyh06a4308_0 defaults brotlipy 0.7.0 py38h9ed2024_1003 defaults bzip2 1.0.8 h1de35cc_0 defaults ca-certificates 2021.10.26 hecd8cb5_2 defaults certifi 2021.10.8 py38hecd8cb5_2 defaults cffi 1.15.0 py38hc55c11b_1 defaults chardet 4.0.0 py38hecd8cb5_1003 defaults charset-normalizer 2.0.4 pyhd3eb1b0_0 defaults clyent 1.2.2 py38_1 defaults conda 4.11.0 py38hecd8cb5_0 defaults conda-build 3.21.7 py38hecd8cb5_0 defaults conda-content-trust 0.1.1 pyhd3eb1b0_0 defaults conda-package-handling 1.7.3 py38h9ed2024_1 defaults conda-token 0.3.0 pyhd3eb1b0_0 defaults coreutils 8.32 haf1e3a3_0 defaults cryptography 36.0.0 py38hf6deb26_0 defaults filelock 3.4.2 pyhd3eb1b0_0 defaults glob2 0.7 pyhd3eb1b0_0 defaults icu 58.2 h0a44026_3 defaults idna 3.3 pyhd3eb1b0_0 defaults importlib-metadata 4.8.2 py38hecd8cb5_0 defaults importlib_metadata 4.8.2 hd3eb1b0_0 defaults ipython_genutils 0.2.0 pyhd3eb1b0_1 defaults jinja2 2.11.3 pyhd3eb1b0_0 defaults jq 1.6 h9ed2024_1000 defaults jsonschema 3.2.0 pyhd3eb1b0_2 defaults jupyter_core 4.9.1 py38hecd8cb5_0 defaults libarchive 3.4.2 ha0e9c3a_0 defaults 
libcxx 12.0.0 h2f01273_0 defaults libffi 3.3 hb1e8313_2 defaults libiconv 1.16 h1de35cc_0 defaults liblief 0.10.1 h0a44026_0 defaults libxml2 2.9.12 hcdb78fc_0 defaults lz4-c 1.9.3 h23ab428_1 defaults markupsafe 2.0.1 py38h9ed2024_0 defaults nbformat 5.1.3 pyhd3eb1b0_0 defaults ncurses 6.3 hca72f7f_2 defaults oniguruma 6.9.7.1 h9ed2024_0 defaults openssl 1.1.1m hca72f7f_0 defaults packaging 21.3 pyhd3eb1b0_0 defaults pip 21.2.4 py38hecd8cb5_0 defaults pkginfo 1.8.2 pyhd3eb1b0_0 defaults psutil 5.8.0 py38h9ed2024_1 defaults py-lief 0.10.1 py38haf313ee_0 defaults pycosat 0.6.3 py38h1de35cc_1 defaults pycparser 2.21 pyhd3eb1b0_0 defaults pyopenssl 21.0.0 pyhd3eb1b0_1 defaults pyparsing 3.0.4 pyhd3eb1b0_0 defaults pyrsistent 0.18.0 py38hca72f7f_0 defaults pysocks 1.7.1 py38_1 defaults python 3.8.12 h88f2d9e_0 defaults python-dateutil 2.8.2 pyhd3eb1b0_0 defaults python-libarchive-c 2.9 pyhd3eb1b0_1 defaults python.app 3 py38hca72f7f_0 defaults pytz 2021.3 pyhd3eb1b0_0 defaults pyyaml 6.0 py38hca72f7f_1 defaults readline 8.1.2 hca72f7f_1 defaults requests 2.27.1 pyhd3eb1b0_0 defaults ripgrep 12.1.1 0 defaults ruamel.yaml 0.16.12 py38haf1e3a3_1 defaults ruamel.yaml.clib 0.2.6 py38hca72f7f_0 defaults ruamel_yaml 0.15.100 py38h9ed2024_0 defaults setuptools 58.0.4 py38hecd8cb5_0 defaults six 1.16.0 pyhd3eb1b0_0 defaults soupsieve 2.3.1 pyhd3eb1b0_0 defaults sqlite 3.37.0 h707629a_0 defaults tk 8.6.11 h7bc2e8c_0 defaults tqdm 4.62.3 pyhd3eb1b0_1 defaults traitlets 5.1.1 pyhd3eb1b0_0 defaults urllib3 1.26.7 pyhd3eb1b0_0 defaults wget 1.20.1 h051b688_0 defaults wheel 0.37.1 pyhd3eb1b0_0 defaults xz 5.2.5 h1de35cc_0 defaults yaml 0.2.5 haf1e3a3_0 defaults zipp 3.7.0 pyhd3eb1b0_0 defaults zlib 1.2.11 h4dc903c_4 defaults zstd 1.5.0 hcb37349_1 defaults ``` </details> ### Resolution It would appear that the `__add_sys_prefix_to_path` shell function added in conda 4.6.12 is the culprit here. ## Duplicate Issues - https://github.com/conda/conda/issues/11305 - https://github.com/conda/conda/issues/9587 - https://github.com/conda/conda/issues/8450 - https://github.com/conda/conda/issues/10786 - https://github.com/conda/conda/issues/9571
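Editor's note: the following is a minimal sketch, not part of the original report, illustrating how the PATH discrepancy described above can be inspected programmatically. It assumes `conda` is on PATH and that an environment named `py39` exists, as in the reproducing case.

```python
# Sketch (not from the original issue): compare the PATH injected by `conda run`
# against the PATH of the current shell. The env name "py39" mirrors the example
# above and is an assumption.
import os
import subprocess

run_path = subprocess.check_output(
    ["conda", "run", "-n", "py39", "python", "-c",
     "import os; print(os.environ['PATH'])"],
    text=True,
).strip().split(os.pathsep)

shell_path = os.environ["PATH"].split(os.pathsep)

# Entries that `conda run` adds but the current shell does not carry,
# e.g. the base environment's bin directory.
print([p for p in run_path if p not in shell_path])
```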
@chenghlee do you think this is related to https://github.com/conda/conda/issues/11072? @jezdez That's my suspicion, but I can't exactly prove it. Related to issue https://github.com/conda/conda/issues/11231 We've reviewed the work-around in #11257 and decided to remove this from the upcoming 4.12.0 release since we don't have the confidence that it would be an appropriate fix. We'll need to dig more into why #8528 added the `__add_sys_prefix_to_path` function to the shell scripts (and related Windows versions). @kenodegard's [comment](https://github.com/conda/conda/pull/11257#issuecomment-1050531320) also sheds some light on the subtleties of the problem. Potentially related to https://github.com/conda/conda/issues/11305 As mentioned in yesterday's community call we're bumping this up in severity since it seems to affect a lot of users of Visual Studio Code.
2022-07-28T03:02:26Z
<patch> diff --git a/conda/activate.py b/conda/activate.py --- a/conda/activate.py +++ b/conda/activate.py @@ -289,6 +289,7 @@ def build_stack(self, env_name_or_prefix): return self._build_activate_stack(env_name_or_prefix, True) def _build_activate_stack(self, env_name_or_prefix, stack): + # get environment prefix if re.search(r'\\|/', env_name_or_prefix): prefix = expand(env_name_or_prefix) if not isdir(join(prefix, 'conda-meta')): @@ -299,79 +300,74 @@ def _build_activate_stack(self, env_name_or_prefix, stack): else: prefix = locate_prefix_by_name(env_name_or_prefix) - # query environment + # get prior shlvl and prefix old_conda_shlvl = int(self.environ.get('CONDA_SHLVL', '').strip() or 0) - new_conda_shlvl = old_conda_shlvl + 1 old_conda_prefix = self.environ.get('CONDA_PREFIX') + # if the prior active prefix is this prefix we are actually doing a reactivate if old_conda_prefix == prefix and old_conda_shlvl > 0: return self.build_reactivate() activate_scripts = self._get_activate_scripts(prefix) + conda_shlvl = old_conda_shlvl + 1 conda_default_env = self._default_env(prefix) conda_prompt_modifier = self._prompt_modifier(prefix, conda_default_env) - conda_environment_env_vars = self._get_environment_env_vars(prefix) - unset_env_vars = [k for k, v in conda_environment_env_vars.items() - if v == CONDA_ENV_VARS_UNSET_VAR] - [conda_environment_env_vars.pop(_) for _ in unset_env_vars] - - clobbering_env_vars = [k for k in conda_environment_env_vars.keys() - if k in os.environ.keys()] - - for cvar in clobbering_env_vars: - save_var = "__CONDA_SHLVL_%s_%s" % (old_conda_shlvl, cvar) - conda_environment_env_vars[save_var] = os.environ.get(cvar) + env_vars = { + name: value + for name, value in self._get_environment_env_vars(prefix).items() + if value != CONDA_ENV_VARS_UNSET_VAR + } - if clobbering_env_vars: + # get clobbered environment variables + clobber_vars = set(env_vars.keys()).intersection(os.environ.keys()) + if clobber_vars: print("WARNING: overwriting environment variables set in the machine", file=sys.stderr) - print("overwriting variable %s" % ' '.join(clobbering_env_vars), file=sys.stderr) + print(f"overwriting variable {clobber_vars}", file=sys.stderr) + for name in clobber_vars: + env_vars[f"__CONDA_SHLVL_{old_conda_shlvl}_{name}"] = os.environ.get(name) - unset_vars = [] if old_conda_shlvl == 0: export_vars, unset_vars = self.get_export_unset_vars( path=self.pathsep_join(self._add_prefix_to_path(prefix)), conda_prefix=prefix, - conda_shlvl=new_conda_shlvl, + conda_shlvl=conda_shlvl, + conda_default_env=conda_default_env, + conda_prompt_modifier=conda_prompt_modifier, + **env_vars, + ) + deactivate_scripts = () + elif stack: + export_vars, unset_vars = self.get_export_unset_vars( + path=self.pathsep_join(self._add_prefix_to_path(prefix)), + conda_prefix=prefix, + conda_shlvl=conda_shlvl, conda_default_env=conda_default_env, conda_prompt_modifier=conda_prompt_modifier, - **conda_environment_env_vars, + **env_vars, + **{ + f"CONDA_PREFIX_{old_conda_shlvl}": old_conda_prefix, + f"CONDA_STACKED_{conda_shlvl}": "true", + }, ) deactivate_scripts = () else: - if self.environ.get('CONDA_PREFIX_%s' % (old_conda_shlvl - 1)) == prefix: - # in this case, user is attempting to activate the previous environment, - # i.e. 
step back down - return self.build_deactivate() - if stack: - deactivate_scripts = () - export_vars, unset_vars = self.get_export_unset_vars( - path=self.pathsep_join(self._add_prefix_to_path(prefix)), - conda_prefix=prefix, - conda_shlvl=new_conda_shlvl, - conda_default_env=conda_default_env, - conda_prompt_modifier=conda_prompt_modifier, - **conda_environment_env_vars, - ) - export_vars['CONDA_PREFIX_%d' % old_conda_shlvl] = old_conda_prefix - export_vars['CONDA_STACKED_%d' % new_conda_shlvl] = 'true' - else: - deactivate_scripts = self._get_deactivate_scripts(old_conda_prefix) - export_vars, unset_vars = self.get_export_unset_vars( - path=self.pathsep_join(self._replace_prefix_in_path(old_conda_prefix, prefix)), - conda_prefix=prefix, - conda_shlvl=new_conda_shlvl, - conda_default_env=conda_default_env, - conda_prompt_modifier=conda_prompt_modifier, - **conda_environment_env_vars, - ) - export_vars['CONDA_PREFIX_%d' % old_conda_shlvl] = old_conda_prefix + export_vars, unset_vars = self.get_export_unset_vars( + path=self.pathsep_join(self._replace_prefix_in_path(old_conda_prefix, prefix)), + conda_prefix=prefix, + conda_shlvl=conda_shlvl, + conda_default_env=conda_default_env, + conda_prompt_modifier=conda_prompt_modifier, + **env_vars, + **{ + f"CONDA_PREFIX_{old_conda_shlvl}": old_conda_prefix, + }, + ) + deactivate_scripts = self._get_deactivate_scripts(old_conda_prefix) set_vars = {} if context.changeps1: self._update_prompt(set_vars, conda_prompt_modifier) - self._build_activate_shell_custom(export_vars) - return { 'unset_vars': unset_vars, 'set_vars': set_vars, @@ -522,21 +518,6 @@ def _get_starting_path_list(self): clean_paths[sys.platform] if sys.platform in clean_paths else '/usr/bin') path_split = path.split(os.pathsep) - # We used to prepend sys.prefix\Library\bin to PATH on startup but not anymore. - # Instead, in conda 4.6 we add the full suite of entries. This is performed in - # condabin\conda.bat and condabin\ _conda_activate.bat. However, we - # need to ignore the stuff we add there, and only consider actual PATH entries. - prefix_dirs = tuple(self._get_path_dirs(sys.prefix)) - start_index = 0 - while (start_index < len(prefix_dirs) and - start_index < len(path_split) and - paths_equal(path_split[start_index], prefix_dirs[start_index])): - start_index += 1 - path_split = path_split[start_index:] - library_bin_dir = self.path_conversion( - self.sep.join((sys.prefix, 'Library', 'bin'))) - if paths_equal(path_split[0], library_bin_dir): - path_split = path_split[1:] return path_split def _get_path_dirs(self, prefix, extra_library_bin=False): @@ -617,11 +598,6 @@ def index_of_path(paths, test_path): return tuple(path_list) - def _build_activate_shell_custom(self, export_vars): - # A method that can be overridden by shell-specific implementations. - # The signature of this method may change in the future. - pass - def _update_prompt(self, set_vars, conda_prompt_modifier): pass </patch>
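Editor's note: a small illustrative sketch (not the full conda implementation) of the clobbered-variable bookkeeping that the diff above introduces in `_build_activate_stack`: environment-defined variables that collide with variables already present in the process environment are saved under `__CONDA_SHLVL_<old_shlvl>_<name>` so they can be restored on deactivation.

```python
# Mirrors the naming scheme used in the patch; illustrative only.
import os

def record_clobbered(env_vars, old_conda_shlvl):
    # Variables set by the target environment that already exist in os.environ.
    clobber_vars = set(env_vars) & set(os.environ)
    for name in clobber_vars:
        env_vars[f"__CONDA_SHLVL_{old_conda_shlvl}_{name}"] = os.environ[name]
    return env_vars

# Example: if FOO is already set in the shell and the target env also sets FOO,
# the prior value is preserved as __CONDA_SHLVL_1_FOO.
os.environ.setdefault("FOO", "from-shell")
print(record_clobbered({"FOO": "from-env"}, old_conda_shlvl=1))
```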
[]
[]
docker__compose-4333
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add option to resolve tags to hashes for config command **Description** Docker 1.13 adds the option to deploy stacks using Docker Compose version 3 files in addition to Distributed Application Bundle (DAB) files. When `docker-compose bundle` is used to produce a DAB, it fetches image digests and replaces image tags with digests in the compose project. This produces a DAB whose images are pinned to a specific digest, rather than a tag that can be changed to different digests over time. `docker-compose config` can be used to “compile” or combine multiple compose files together, resolve `.env` variables, and handle all of Docker Compose's options to produce a single compose file suitable for use with `docker stack deploy --compose-file`. It would be useful to add an option to the `config` command that replaces image tags with digests—like the `bundle` command—to produce a compose file destined for stack deployments that acts like DAB files with respect to image pinning. **Steps to reproduce the issue:** 1. Given Docker Compose file(s) forming a project, call `docker-compose config`. **Describe the results you received:** A combined/resolved Docker Compose configuration is outputted. Any image tags remain as tags. **Describe the results you expected:** An option on `docker-compose config` that causes Docker Compose to output image digests instead of tags. **Additional information you deem important (e.g. issue happens only occasionally):** The brief description of the `config` command says that is is to “Validate and view the compose file.” But, it is the only way to produce a Docker Compose file that uses features not supported by `docker stack deploy --compose-file`. I believe this feature request points to a broader need for `docker-compose config` to be treated more like `docker-compose bundle`, and be used for generating a single/compiled/resolved Compose file for a Docker Compose project. This includes adopting bundle's options (`--push-images` and `--output`). </issue> <code> [start of README.md] 1 Docker Compose 2 ============== 3 ![Docker Compose](logo.png?raw=true "Docker Compose Logo") 4 5 Compose is a tool for defining and running multi-container Docker applications. 6 With Compose, you use a Compose file to configure your application's services. 7 Then, using a single command, you create and start all the services 8 from your configuration. To learn more about all the features of Compose 9 see [the list of features](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#features). 10 11 Compose is great for development, testing, and staging environments, as well as 12 CI workflows. You can learn more about each case in 13 [Common Use Cases](https://github.com/docker/docker.github.io/blob/master/compose/overview.md#common-use-cases). 14 15 Using Compose is basically a three-step process. 16 17 1. Define your app's environment with a `Dockerfile` so it can be 18 reproduced anywhere. 19 2. Define the services that make up your app in `docker-compose.yml` so 20 they can be run together in an isolated environment: 21 3. Lastly, run `docker-compose up` and Compose will start and run your entire app. 22 23 A `docker-compose.yml` looks like this: 24 25 version: '2' 26 27 services: 28 web: 29 build: . 
30 ports: 31 - "5000:5000" 32 volumes: 33 - .:/code 34 redis: 35 image: redis 36 37 For more information about the Compose file, see the 38 [Compose file reference](https://github.com/docker/docker.github.io/blob/master/compose/compose-file/compose-versioning.md) 39 40 Compose has commands for managing the whole lifecycle of your application: 41 42 * Start, stop and rebuild services 43 * View the status of running services 44 * Stream the log output of running services 45 * Run a one-off command on a service 46 47 Installation and documentation 48 ------------------------------ 49 50 - Full documentation is available on [Docker's website](https://docs.docker.com/compose/). 51 - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose) 52 - Code repository for Compose is on [Github](https://github.com/docker/compose) 53 - If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new) 54 55 Contributing 56 ------------ 57 58 [![Build Status](https://jenkins.dockerproject.org/buildStatus/icon?job=docker/compose/master)](https://jenkins.dockerproject.org/job/docker/job/compose/job/master/) 59 60 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md). 61 62 Releasing 63 --------- 64 65 Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md). 66 [end of README.md] [start of compose/cli/command.py] 1 from __future__ import absolute_import 2 from __future__ import unicode_literals 3 4 import logging 5 import os 6 import re 7 import ssl 8 9 import six 10 11 from . import errors 12 from . import verbose_proxy 13 from .. 
import config 14 from ..config.environment import Environment 15 from ..const import API_VERSIONS 16 from ..project import Project 17 from .docker_client import docker_client 18 from .docker_client import tls_config_from_options 19 from .utils import get_version_info 20 21 log = logging.getLogger(__name__) 22 23 24 def project_from_options(project_dir, options): 25 environment = Environment.from_env_file(project_dir) 26 host = options.get('--host') 27 if host is not None: 28 host = host.lstrip('=') 29 return get_project( 30 project_dir, 31 get_config_path_from_options(project_dir, options, environment), 32 project_name=options.get('--project-name'), 33 verbose=options.get('--verbose'), 34 host=host, 35 tls_config=tls_config_from_options(options), 36 environment=environment, 37 override_dir=options.get('--project-directory'), 38 ) 39 40 41 def get_config_from_options(base_dir, options): 42 environment = Environment.from_env_file(base_dir) 43 config_path = get_config_path_from_options( 44 base_dir, options, environment 45 ) 46 return config.load( 47 config.find(base_dir, config_path, environment) 48 ) 49 50 51 def get_config_path_from_options(base_dir, options, environment): 52 file_option = options.get('--file') 53 if file_option: 54 return file_option 55 56 config_files = environment.get('COMPOSE_FILE') 57 if config_files: 58 pathsep = environment.get('COMPOSE_PATH_SEPARATOR', os.pathsep) 59 return config_files.split(pathsep) 60 return None 61 62 63 def get_tls_version(environment): 64 compose_tls_version = environment.get('COMPOSE_TLS_VERSION', None) 65 if not compose_tls_version: 66 return None 67 68 tls_attr_name = "PROTOCOL_{}".format(compose_tls_version) 69 if not hasattr(ssl, tls_attr_name): 70 log.warn( 71 'The "{}" protocol is unavailable. You may need to update your ' 72 'version of Python or OpenSSL. Falling back to TLSv1 (default).' 
73 .format(compose_tls_version) 74 ) 75 return None 76 77 return getattr(ssl, tls_attr_name) 78 79 80 def get_client(environment, verbose=False, version=None, tls_config=None, host=None, 81 tls_version=None): 82 83 client = docker_client( 84 version=version, tls_config=tls_config, host=host, 85 environment=environment, tls_version=get_tls_version(environment) 86 ) 87 if verbose: 88 version_info = six.iteritems(client.version()) 89 log.info(get_version_info('full')) 90 log.info("Docker base_url: %s", client.base_url) 91 log.info("Docker version: %s", 92 ", ".join("%s=%s" % item for item in version_info)) 93 return verbose_proxy.VerboseProxy('docker', client) 94 return client 95 96 97 def get_project(project_dir, config_path=None, project_name=None, verbose=False, 98 host=None, tls_config=None, environment=None, override_dir=None): 99 if not environment: 100 environment = Environment.from_env_file(project_dir) 101 config_details = config.find(project_dir, config_path, environment, override_dir) 102 project_name = get_project_name( 103 config_details.working_dir, project_name, environment 104 ) 105 config_data = config.load(config_details) 106 107 api_version = environment.get( 108 'COMPOSE_API_VERSION', 109 API_VERSIONS[config_data.version]) 110 111 client = get_client( 112 verbose=verbose, version=api_version, tls_config=tls_config, 113 host=host, environment=environment 114 ) 115 116 with errors.handle_connection_errors(client): 117 return Project.from_config(project_name, config_data, client) 118 119 120 def get_project_name(working_dir, project_name=None, environment=None): 121 def normalize_name(name): 122 return re.sub(r'[^a-z0-9]', '', name.lower()) 123 124 if not environment: 125 environment = Environment.from_env_file(working_dir) 126 project_name = project_name or environment.get('COMPOSE_PROJECT_NAME') 127 if project_name: 128 return normalize_name(project_name) 129 130 project = os.path.basename(os.path.abspath(working_dir)) 131 if project: 132 return normalize_name(project) 133 134 return 'default' 135 [end of compose/cli/command.py] [start of compose/cli/main.py] 1 from __future__ import absolute_import 2 from __future__ import print_function 3 from __future__ import unicode_literals 4 5 import contextlib 6 import functools 7 import json 8 import logging 9 import pipes 10 import re 11 import subprocess 12 import sys 13 from distutils.spawn import find_executable 14 from inspect import getdoc 15 from operator import attrgetter 16 17 from . import errors 18 from . import signals 19 from .. 
import __version__ 20 from ..bundle import get_image_digests 21 from ..bundle import MissingDigests 22 from ..bundle import serialize_bundle 23 from ..config import ConfigurationError 24 from ..config import parse_environment 25 from ..config import resolve_build_args 26 from ..config.environment import Environment 27 from ..config.serialize import serialize_config 28 from ..config.types import VolumeSpec 29 from ..const import IS_WINDOWS_PLATFORM 30 from ..errors import StreamParseError 31 from ..progress_stream import StreamOutputError 32 from ..project import NoSuchService 33 from ..project import OneOffFilter 34 from ..project import ProjectError 35 from ..service import BuildAction 36 from ..service import BuildError 37 from ..service import ConvergenceStrategy 38 from ..service import ImageType 39 from ..service import NeedsBuildError 40 from ..service import OperationFailedError 41 from .command import get_config_from_options 42 from .command import project_from_options 43 from .docopt_command import DocoptDispatcher 44 from .docopt_command import get_handler 45 from .docopt_command import NoSuchCommand 46 from .errors import UserError 47 from .formatter import ConsoleWarningFormatter 48 from .formatter import Formatter 49 from .log_printer import build_log_presenters 50 from .log_printer import LogPrinter 51 from .utils import get_version_info 52 from .utils import human_readable_file_size 53 from .utils import yesno 54 55 56 if not IS_WINDOWS_PLATFORM: 57 from dockerpty.pty import PseudoTerminal, RunOperation, ExecOperation 58 59 log = logging.getLogger(__name__) 60 console_handler = logging.StreamHandler(sys.stderr) 61 62 63 def main(): 64 signals.ignore_sigpipe() 65 try: 66 command = dispatch() 67 command() 68 except (KeyboardInterrupt, signals.ShutdownException): 69 log.error("Aborting.") 70 sys.exit(1) 71 except (UserError, NoSuchService, ConfigurationError, 72 ProjectError, OperationFailedError) as e: 73 log.error(e.msg) 74 sys.exit(1) 75 except BuildError as e: 76 log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason)) 77 sys.exit(1) 78 except StreamOutputError as e: 79 log.error(e) 80 sys.exit(1) 81 except NeedsBuildError as e: 82 log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name) 83 sys.exit(1) 84 except NoSuchCommand as e: 85 commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand))) 86 log.error("No such command: %s\n\n%s", e.command, commands) 87 sys.exit(1) 88 except (errors.ConnectionError, StreamParseError): 89 sys.exit(1) 90 91 92 def dispatch(): 93 setup_logging() 94 dispatcher = DocoptDispatcher( 95 TopLevelCommand, 96 {'options_first': True, 'version': get_version_info('compose')}) 97 98 options, handler, command_options = dispatcher.parse(sys.argv[1:]) 99 setup_console_handler(console_handler, options.get('--verbose')) 100 return functools.partial(perform_command, options, handler, command_options) 101 102 103 def perform_command(options, handler, command_options): 104 if options['COMMAND'] in ('help', 'version'): 105 # Skip looking up the compose file. 
106 handler(command_options) 107 return 108 109 if options['COMMAND'] in ('config', 'bundle'): 110 command = TopLevelCommand(None) 111 handler(command, options, command_options) 112 return 113 114 project = project_from_options('.', options) 115 command = TopLevelCommand(project) 116 with errors.handle_connection_errors(project.client): 117 handler(command, command_options) 118 119 120 def setup_logging(): 121 root_logger = logging.getLogger() 122 root_logger.addHandler(console_handler) 123 root_logger.setLevel(logging.DEBUG) 124 125 # Disable requests logging 126 logging.getLogger("requests").propagate = False 127 128 129 def setup_console_handler(handler, verbose): 130 if handler.stream.isatty(): 131 format_class = ConsoleWarningFormatter 132 else: 133 format_class = logging.Formatter 134 135 if verbose: 136 handler.setFormatter(format_class('%(name)s.%(funcName)s: %(message)s')) 137 handler.setLevel(logging.DEBUG) 138 else: 139 handler.setFormatter(format_class()) 140 handler.setLevel(logging.INFO) 141 142 143 # stolen from docopt master 144 def parse_doc_section(name, source): 145 pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)', 146 re.IGNORECASE | re.MULTILINE) 147 return [s.strip() for s in pattern.findall(source)] 148 149 150 class TopLevelCommand(object): 151 """Define and run multi-container applications with Docker. 152 153 Usage: 154 docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...] 155 docker-compose -h|--help 156 157 Options: 158 -f, --file FILE Specify an alternate compose file (default: docker-compose.yml) 159 -p, --project-name NAME Specify an alternate project name (default: directory name) 160 --verbose Show more output 161 -v, --version Print version and exit 162 -H, --host HOST Daemon socket to connect to 163 164 --tls Use TLS; implied by --tlsverify 165 --tlscacert CA_PATH Trust certs signed only by this CA 166 --tlscert CLIENT_CERT_PATH Path to TLS certificate file 167 --tlskey TLS_KEY_PATH Path to TLS key file 168 --tlsverify Use TLS and verify the remote 169 --skip-hostname-check Don't check the daemon's hostname against the name specified 170 in the client certificate (for example if your docker host 171 is an IP address) 172 --project-directory PATH Specify an alternate working directory 173 (default: the path of the compose file) 174 175 Commands: 176 build Build or rebuild services 177 bundle Generate a Docker bundle from the Compose file 178 config Validate and view the compose file 179 create Create services 180 down Stop and remove containers, networks, images, and volumes 181 events Receive real time events from containers 182 exec Execute a command in a running container 183 help Get help on a command 184 images List images 185 kill Kill containers 186 logs View output from containers 187 pause Pause services 188 port Print the public port for a port binding 189 ps List containers 190 pull Pull service images 191 push Push service images 192 restart Restart services 193 rm Remove stopped containers 194 run Run a one-off command 195 scale Set number of containers for a service 196 start Start services 197 stop Stop services 198 top Display the running processes 199 unpause Unpause services 200 up Create and start containers 201 version Show the Docker-Compose version information 202 """ 203 204 def __init__(self, project, project_dir='.'): 205 self.project = project 206 self.project_dir = '.' 207 208 def build(self, options): 209 """ 210 Build or rebuild services. 
211 212 Services are built once and then tagged as `project_service`, 213 e.g. `composetest_db`. If you change a service's `Dockerfile` or the 214 contents of its build directory, you can run `docker-compose build` to rebuild it. 215 216 Usage: build [options] [--build-arg key=val...] [SERVICE...] 217 218 Options: 219 --force-rm Always remove intermediate containers. 220 --no-cache Do not use cache when building the image. 221 --pull Always attempt to pull a newer version of the image. 222 --build-arg key=val Set build-time variables for one service. 223 """ 224 service_names = options['SERVICE'] 225 build_args = options.get('--build-arg', None) 226 if build_args: 227 environment = Environment.from_env_file(self.project_dir) 228 build_args = resolve_build_args(build_args, environment) 229 230 if not service_names and build_args: 231 raise UserError("Need service name for --build-arg option") 232 233 self.project.build( 234 service_names=service_names, 235 no_cache=bool(options.get('--no-cache', False)), 236 pull=bool(options.get('--pull', False)), 237 force_rm=bool(options.get('--force-rm', False)), 238 build_args=build_args) 239 240 def bundle(self, config_options, options): 241 """ 242 Generate a Distributed Application Bundle (DAB) from the Compose file. 243 244 Images must have digests stored, which requires interaction with a 245 Docker registry. If digests aren't stored for all images, you can fetch 246 them with `docker-compose pull` or `docker-compose push`. To push images 247 automatically when bundling, pass `--push-images`. Only services with 248 a `build` option specified will have their images pushed. 249 250 Usage: bundle [options] 251 252 Options: 253 --push-images Automatically push images for any services 254 which have a `build` option specified. 255 256 -o, --output PATH Path to write the bundle file to. 257 Defaults to "<project name>.dab". 258 """ 259 self.project = project_from_options('.', config_options) 260 compose_config = get_config_from_options(self.project_dir, config_options) 261 262 output = options["--output"] 263 if not output: 264 output = "{}.dab".format(self.project.name) 265 266 with errors.handle_connection_errors(self.project.client): 267 try: 268 image_digests = get_image_digests( 269 self.project, 270 allow_push=options['--push-images'], 271 ) 272 except MissingDigests as e: 273 def list_images(images): 274 return "\n".join(" {}".format(name) for name in sorted(images)) 275 276 paras = ["Some images are missing digests."] 277 278 if e.needs_push: 279 command_hint = ( 280 "Use `docker-compose push {}` to push them. " 281 "You can do this automatically with `docker-compose bundle --push-images`." 282 .format(" ".join(sorted(e.needs_push))) 283 ) 284 paras += [ 285 "The following images can be pushed:", 286 list_images(e.needs_push), 287 command_hint, 288 ] 289 290 if e.needs_pull: 291 command_hint = ( 292 "Use `docker-compose pull {}` to pull them. " 293 .format(" ".join(sorted(e.needs_pull))) 294 ) 295 296 paras += [ 297 "The following images need to be pulled:", 298 list_images(e.needs_pull), 299 command_hint, 300 ] 301 302 raise UserError("\n\n".join(paras)) 303 304 with open(output, 'w') as f: 305 f.write(serialize_bundle(compose_config, image_digests)) 306 307 log.info("Wrote bundle to {}".format(output)) 308 309 def config(self, config_options, options): 310 """ 311 Validate and view the compose file. 312 313 Usage: config [options] 314 315 Options: 316 -q, --quiet Only validate the configuration, don't print 317 anything. 
318 --services Print the service names, one per line. 319 --volumes Print the volume names, one per line. 320 321 """ 322 compose_config = get_config_from_options(self.project_dir, config_options) 323 324 if options['--quiet']: 325 return 326 327 if options['--services']: 328 print('\n'.join(service['name'] for service in compose_config.services)) 329 return 330 331 if options['--volumes']: 332 print('\n'.join(volume for volume in compose_config.volumes)) 333 return 334 335 print(serialize_config(compose_config)) 336 337 def create(self, options): 338 """ 339 Creates containers for a service. 340 341 Usage: create [options] [SERVICE...] 342 343 Options: 344 --force-recreate Recreate containers even if their configuration and 345 image haven't changed. Incompatible with --no-recreate. 346 --no-recreate If containers already exist, don't recreate them. 347 Incompatible with --force-recreate. 348 --no-build Don't build an image, even if it's missing. 349 --build Build images before creating containers. 350 """ 351 service_names = options['SERVICE'] 352 353 self.project.create( 354 service_names=service_names, 355 strategy=convergence_strategy_from_opts(options), 356 do_build=build_action_from_opts(options), 357 ) 358 359 def down(self, options): 360 """ 361 Stops containers and removes containers, networks, volumes, and images 362 created by `up`. 363 364 By default, the only things removed are: 365 366 - Containers for services defined in the Compose file 367 - Networks defined in the `networks` section of the Compose file 368 - The default network, if one is used 369 370 Networks and volumes defined as `external` are never removed. 371 372 Usage: down [options] 373 374 Options: 375 --rmi type Remove images. Type must be one of: 376 'all': Remove all images used by any service. 377 'local': Remove only images that don't have a custom tag 378 set by the `image` field. 379 -v, --volumes Remove named volumes declared in the `volumes` section 380 of the Compose file and anonymous volumes 381 attached to containers. 382 --remove-orphans Remove containers for services not defined in the 383 Compose file 384 """ 385 image_type = image_type_from_opt('--rmi', options['--rmi']) 386 self.project.down(image_type, options['--volumes'], options['--remove-orphans']) 387 388 def events(self, options): 389 """ 390 Receive real time events from containers. 391 392 Usage: events [options] [SERVICE...] 393 394 Options: 395 --json Output events as a stream of json objects 396 """ 397 def format_event(event): 398 attributes = ["%s=%s" % item for item in event['attributes'].items()] 399 return ("{time} {type} {action} {id} ({attrs})").format( 400 attrs=", ".join(sorted(attributes)), 401 **event) 402 403 def json_format_event(event): 404 event['time'] = event['time'].isoformat() 405 event.pop('container') 406 return json.dumps(event) 407 408 for event in self.project.events(): 409 formatter = json_format_event if options['--json'] else format_event 410 print(formatter(event)) 411 sys.stdout.flush() 412 413 def exec_command(self, options): 414 """ 415 Execute a command in a running container 416 417 Usage: exec [options] SERVICE COMMAND [ARGS...] 418 419 Options: 420 -d Detached mode: Run command in the background. 421 --privileged Give extended privileges to the process. 422 --user USER Run the command as this user. 423 -T Disable pseudo-tty allocation. By default `docker-compose exec` 424 allocates a TTY. 
425 --index=index index of the container if there are multiple 426 instances of a service [default: 1] 427 """ 428 index = int(options.get('--index')) 429 service = self.project.get_service(options['SERVICE']) 430 detach = options['-d'] 431 432 try: 433 container = service.get_container(number=index) 434 except ValueError as e: 435 raise UserError(str(e)) 436 command = [options['COMMAND']] + options['ARGS'] 437 tty = not options["-T"] 438 439 if IS_WINDOWS_PLATFORM and not detach: 440 args = ["exec"] 441 442 if options["-d"]: 443 args += ["--detach"] 444 else: 445 args += ["--interactive"] 446 447 if not options["-T"]: 448 args += ["--tty"] 449 450 if options["--privileged"]: 451 args += ["--privileged"] 452 453 if options["--user"]: 454 args += ["--user", options["--user"]] 455 456 args += [container.id] 457 args += command 458 459 sys.exit(call_docker(args)) 460 461 create_exec_options = { 462 "privileged": options["--privileged"], 463 "user": options["--user"], 464 "tty": tty, 465 "stdin": tty, 466 } 467 468 exec_id = container.create_exec(command, **create_exec_options) 469 470 if detach: 471 container.start_exec(exec_id, tty=tty) 472 return 473 474 signals.set_signal_handler_to_shutdown() 475 try: 476 operation = ExecOperation( 477 self.project.client, 478 exec_id, 479 interactive=tty, 480 ) 481 pty = PseudoTerminal(self.project.client, operation) 482 pty.start() 483 except signals.ShutdownException: 484 log.info("received shutdown exception: closing") 485 exit_code = self.project.client.exec_inspect(exec_id).get("ExitCode") 486 sys.exit(exit_code) 487 488 @classmethod 489 def help(cls, options): 490 """ 491 Get help on a command. 492 493 Usage: help [COMMAND] 494 """ 495 if options['COMMAND']: 496 subject = get_handler(cls, options['COMMAND']) 497 else: 498 subject = cls 499 500 print(getdoc(subject)) 501 502 def images(self, options): 503 """ 504 List images used by the created containers. 505 Usage: images [options] [SERVICE...] 506 507 Options: 508 -q Only display IDs 509 """ 510 containers = sorted( 511 self.project.containers(service_names=options['SERVICE'], stopped=True) + 512 self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only), 513 key=attrgetter('name')) 514 515 if options['-q']: 516 for image in set(c.image for c in containers): 517 print(image.split(':')[1]) 518 else: 519 headers = [ 520 'Container', 521 'Repository', 522 'Tag', 523 'Image Id', 524 'Size' 525 ] 526 rows = [] 527 for container in containers: 528 image_config = container.image_config 529 repo_tags = image_config['RepoTags'][0].split(':') 530 image_id = image_config['Id'].split(':')[1][:12] 531 size = human_readable_file_size(image_config['Size']) 532 rows.append([ 533 container.name, 534 repo_tags[0], 535 repo_tags[1], 536 image_id, 537 size 538 ]) 539 print(Formatter().table(headers, rows)) 540 541 def kill(self, options): 542 """ 543 Force stop service containers. 544 545 Usage: kill [options] [SERVICE...] 546 547 Options: 548 -s SIGNAL SIGNAL to send to the container. 549 Default signal is SIGKILL. 550 """ 551 signal = options.get('-s', 'SIGKILL') 552 553 self.project.kill(service_names=options['SERVICE'], signal=signal) 554 555 def logs(self, options): 556 """ 557 View output from containers. 558 559 Usage: logs [options] [SERVICE...] 560 561 Options: 562 --no-color Produce monochrome output. 563 -f, --follow Follow log output. 564 -t, --timestamps Show timestamps. 565 --tail="all" Number of lines to show from the end of the logs 566 for each container. 
567 """ 568 containers = self.project.containers(service_names=options['SERVICE'], stopped=True) 569 570 tail = options['--tail'] 571 if tail is not None: 572 if tail.isdigit(): 573 tail = int(tail) 574 elif tail != 'all': 575 raise UserError("tail flag must be all or a number") 576 log_args = { 577 'follow': options['--follow'], 578 'tail': tail, 579 'timestamps': options['--timestamps'] 580 } 581 print("Attaching to", list_containers(containers)) 582 log_printer_from_project( 583 self.project, 584 containers, 585 options['--no-color'], 586 log_args, 587 event_stream=self.project.events(service_names=options['SERVICE'])).run() 588 589 def pause(self, options): 590 """ 591 Pause services. 592 593 Usage: pause [SERVICE...] 594 """ 595 containers = self.project.pause(service_names=options['SERVICE']) 596 exit_if(not containers, 'No containers to pause', 1) 597 598 def port(self, options): 599 """ 600 Print the public port for a port binding. 601 602 Usage: port [options] SERVICE PRIVATE_PORT 603 604 Options: 605 --protocol=proto tcp or udp [default: tcp] 606 --index=index index of the container if there are multiple 607 instances of a service [default: 1] 608 """ 609 index = int(options.get('--index')) 610 service = self.project.get_service(options['SERVICE']) 611 try: 612 container = service.get_container(number=index) 613 except ValueError as e: 614 raise UserError(str(e)) 615 print(container.get_local_port( 616 options['PRIVATE_PORT'], 617 protocol=options.get('--protocol') or 'tcp') or '') 618 619 def ps(self, options): 620 """ 621 List containers. 622 623 Usage: ps [options] [SERVICE...] 624 625 Options: 626 -q Only display IDs 627 """ 628 containers = sorted( 629 self.project.containers(service_names=options['SERVICE'], stopped=True) + 630 self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only), 631 key=attrgetter('name')) 632 633 if options['-q']: 634 for container in containers: 635 print(container.id) 636 else: 637 headers = [ 638 'Name', 639 'Command', 640 'State', 641 'Ports', 642 ] 643 rows = [] 644 for container in containers: 645 command = container.human_readable_command 646 if len(command) > 30: 647 command = '%s ...' % command[:26] 648 rows.append([ 649 container.name, 650 command, 651 container.human_readable_state, 652 container.human_readable_ports, 653 ]) 654 print(Formatter().table(headers, rows)) 655 656 def pull(self, options): 657 """ 658 Pulls images for services. 659 660 Usage: pull [options] [SERVICE...] 661 662 Options: 663 --ignore-pull-failures Pull what it can and ignores images with pull failures. 664 --parallel Pull multiple images in parallel. 665 """ 666 self.project.pull( 667 service_names=options['SERVICE'], 668 ignore_pull_failures=options.get('--ignore-pull-failures'), 669 parallel_pull=options.get('--parallel') 670 ) 671 672 def push(self, options): 673 """ 674 Pushes images for services. 675 676 Usage: push [options] [SERVICE...] 677 678 Options: 679 --ignore-push-failures Push what it can and ignores images with push failures. 680 """ 681 self.project.push( 682 service_names=options['SERVICE'], 683 ignore_push_failures=options.get('--ignore-push-failures') 684 ) 685 686 def rm(self, options): 687 """ 688 Removes stopped service containers. 689 690 By default, anonymous volumes attached to containers will not be removed. You 691 can override this with `-v`. To list all volumes, use `docker volume ls`. 692 693 Any data which is not in a volume will be lost. 694 695 Usage: rm [options] [SERVICE...] 
696 697 Options: 698 -f, --force Don't ask to confirm removal 699 -s, --stop Stop the containers, if required, before removing 700 -v Remove any anonymous volumes attached to containers 701 -a, --all Deprecated - no effect. 702 """ 703 if options.get('--all'): 704 log.warn( 705 '--all flag is obsolete. This is now the default behavior ' 706 'of `docker-compose rm`' 707 ) 708 one_off = OneOffFilter.include 709 710 if options.get('--stop'): 711 running_containers = self.project.containers( 712 service_names=options['SERVICE'], stopped=False, one_off=one_off 713 ) 714 self.project.stop( 715 service_names=running_containers, 716 one_off=one_off 717 ) 718 719 all_containers = self.project.containers( 720 service_names=options['SERVICE'], stopped=True, one_off=one_off 721 ) 722 stopped_containers = [c for c in all_containers if not c.is_running] 723 724 if len(stopped_containers) > 0: 725 print("Going to remove", list_containers(stopped_containers)) 726 if options.get('--force') \ 727 or yesno("Are you sure? [yN] ", default=False): 728 self.project.remove_stopped( 729 service_names=options['SERVICE'], 730 v=options.get('-v', False), 731 one_off=one_off 732 ) 733 else: 734 print("No stopped containers") 735 736 def run(self, options): 737 """ 738 Run a one-off command on a service. 739 740 For example: 741 742 $ docker-compose run web python manage.py shell 743 744 By default, linked services will be started, unless they are already 745 running. If you do not want to start linked services, use 746 `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`. 747 748 Usage: run [options] [-v VOLUME...] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...] 749 750 Options: 751 -d Detached mode: Run container in the background, print 752 new container name. 753 --name NAME Assign a name to the container 754 --entrypoint CMD Override the entrypoint of the image. 755 -e KEY=VAL Set an environment variable (can be used multiple times) 756 -u, --user="" Run as specified username or uid 757 --no-deps Don't start linked services. 758 --rm Remove container after run. Ignored in detached mode. 759 -p, --publish=[] Publish a container's port(s) to the host 760 --service-ports Run command with the service's ports enabled and mapped 761 to the host. 762 -v, --volume=[] Bind mount a volume (default []) 763 -T Disable pseudo-tty allocation. By default `docker-compose run` 764 allocates a TTY. 765 -w, --workdir="" Working directory inside the container 766 """ 767 service = self.project.get_service(options['SERVICE']) 768 detach = options['-d'] 769 770 if options['--publish'] and options['--service-ports']: 771 raise UserError( 772 'Service port mapping and manual port mapping ' 773 'can not be used together' 774 ) 775 776 if options['COMMAND'] is not None: 777 command = [options['COMMAND']] + options['ARGS'] 778 elif options['--entrypoint'] is not None: 779 command = [] 780 else: 781 command = service.options.get('command') 782 783 container_options = build_container_options(options, detach, command) 784 run_one_off_container(container_options, self.project, service, options) 785 786 def scale(self, options): 787 """ 788 Set number of containers to run for a service. 789 790 Numbers are specified in the form `service=num` as arguments. 791 For example: 792 793 $ docker-compose scale web=2 worker=3 794 795 Usage: scale [options] [SERVICE=NUM...] 796 797 Options: 798 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. 
799 (default: 10) 800 """ 801 timeout = timeout_from_opts(options) 802 803 for s in options['SERVICE=NUM']: 804 if '=' not in s: 805 raise UserError('Arguments to scale should be in the form service=num') 806 service_name, num = s.split('=', 1) 807 try: 808 num = int(num) 809 except ValueError: 810 raise UserError('Number of containers for service "%s" is not a ' 811 'number' % service_name) 812 self.project.get_service(service_name).scale(num, timeout=timeout) 813 814 def start(self, options): 815 """ 816 Start existing containers. 817 818 Usage: start [SERVICE...] 819 """ 820 containers = self.project.start(service_names=options['SERVICE']) 821 exit_if(not containers, 'No containers to start', 1) 822 823 def stop(self, options): 824 """ 825 Stop running containers without removing them. 826 827 They can be started again with `docker-compose start`. 828 829 Usage: stop [options] [SERVICE...] 830 831 Options: 832 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. 833 (default: 10) 834 """ 835 timeout = timeout_from_opts(options) 836 self.project.stop(service_names=options['SERVICE'], timeout=timeout) 837 838 def restart(self, options): 839 """ 840 Restart running containers. 841 842 Usage: restart [options] [SERVICE...] 843 844 Options: 845 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. 846 (default: 10) 847 """ 848 timeout = timeout_from_opts(options) 849 containers = self.project.restart(service_names=options['SERVICE'], timeout=timeout) 850 exit_if(not containers, 'No containers to restart', 1) 851 852 def top(self, options): 853 """ 854 Display the running processes 855 856 Usage: top [SERVICE...] 857 858 """ 859 containers = sorted( 860 self.project.containers(service_names=options['SERVICE'], stopped=False) + 861 self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only), 862 key=attrgetter('name') 863 ) 864 865 for idx, container in enumerate(containers): 866 if idx > 0: 867 print() 868 869 top_data = self.project.client.top(container.name) 870 headers = top_data.get("Titles") 871 rows = [] 872 873 for process in top_data.get("Processes", []): 874 rows.append(process) 875 876 print(container.name) 877 print(Formatter().table(headers, rows)) 878 879 def unpause(self, options): 880 """ 881 Unpause services. 882 883 Usage: unpause [SERVICE...] 884 """ 885 containers = self.project.unpause(service_names=options['SERVICE']) 886 exit_if(not containers, 'No containers to unpause', 1) 887 888 def up(self, options): 889 """ 890 Builds, (re)creates, starts, and attaches to containers for a service. 891 892 Unless they are already running, this command also starts any linked services. 893 894 The `docker-compose up` command aggregates the output of each container. When 895 the command exits, all containers are stopped. Running `docker-compose up -d` 896 starts the containers in the background and leaves them running. 897 898 If there are existing containers for a service, and the service's configuration 899 or image was changed after the container's creation, `docker-compose up` picks 900 up the changes by stopping and recreating the containers (preserving mounted 901 volumes). To prevent Compose from picking up changes, use the `--no-recreate` 902 flag. 903 904 If you want to force Compose to stop and recreate all containers, use the 905 `--force-recreate` flag. 906 907 Usage: up [options] [SERVICE...] 908 909 Options: 910 -d Detached mode: Run containers in the background, 911 print new container names. 
912 Incompatible with --abort-on-container-exit. 913 --no-color Produce monochrome output. 914 --no-deps Don't start linked services. 915 --force-recreate Recreate containers even if their configuration 916 and image haven't changed. 917 Incompatible with --no-recreate. 918 --no-recreate If containers already exist, don't recreate them. 919 Incompatible with --force-recreate. 920 --no-build Don't build an image, even if it's missing. 921 --build Build images before starting containers. 922 --abort-on-container-exit Stops all containers if any container was stopped. 923 Incompatible with -d. 924 -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown 925 when attached or when containers are already 926 running. (default: 10) 927 --remove-orphans Remove containers for services not 928 defined in the Compose file 929 --exit-code-from SERVICE Return the exit code of the selected service container. 930 Requires --abort-on-container-exit. 931 """ 932 start_deps = not options['--no-deps'] 933 exit_value_from = exitval_from_opts(options, self.project) 934 cascade_stop = options['--abort-on-container-exit'] 935 service_names = options['SERVICE'] 936 timeout = timeout_from_opts(options) 937 remove_orphans = options['--remove-orphans'] 938 detached = options.get('-d') 939 940 if detached and cascade_stop: 941 raise UserError("--abort-on-container-exit and -d cannot be combined.") 942 943 with up_shutdown_context(self.project, service_names, timeout, detached): 944 to_attach = self.project.up( 945 service_names=service_names, 946 start_deps=start_deps, 947 strategy=convergence_strategy_from_opts(options), 948 do_build=build_action_from_opts(options), 949 timeout=timeout, 950 detached=detached, 951 remove_orphans=remove_orphans) 952 953 if detached: 954 return 955 956 attached_containers = filter_containers_to_service_names(to_attach, service_names) 957 958 log_printer = log_printer_from_project( 959 self.project, 960 attached_containers, 961 options['--no-color'], 962 {'follow': True}, 963 cascade_stop, 964 event_stream=self.project.events(service_names=service_names)) 965 print("Attaching to", list_containers(log_printer.containers)) 966 cascade_starter = log_printer.run() 967 968 if cascade_stop: 969 print("Aborting on container exit...") 970 971 exit_code = 0 972 if exit_value_from: 973 candidates = filter( 974 lambda c: c.service == exit_value_from, 975 attached_containers) 976 if not candidates: 977 log.error( 978 'No containers matching the spec "{0}" ' 979 'were run.'.format(exit_value_from) 980 ) 981 exit_code = 2 982 elif len(candidates) > 1: 983 exit_values = filter( 984 lambda e: e != 0, 985 [c.inspect()['State']['ExitCode'] for c in candidates] 986 ) 987 988 exit_code = exit_values[0] 989 else: 990 exit_code = candidates[0].inspect()['State']['ExitCode'] 991 else: 992 for e in self.project.containers(service_names=options['SERVICE'], stopped=True): 993 if (not e.is_running and cascade_starter == e.name): 994 if not e.exit_code == 0: 995 exit_code = e.exit_code 996 break 997 998 self.project.stop(service_names=service_names, timeout=timeout) 999 sys.exit(exit_code) 1000 1001 @classmethod 1002 def version(cls, options): 1003 """ 1004 Show version informations 1005 1006 Usage: version [--short] 1007 1008 Options: 1009 --short Shows only Compose's version number. 
1010 """ 1011 if options['--short']: 1012 print(__version__) 1013 else: 1014 print(get_version_info('full')) 1015 1016 1017 def convergence_strategy_from_opts(options): 1018 no_recreate = options['--no-recreate'] 1019 force_recreate = options['--force-recreate'] 1020 if force_recreate and no_recreate: 1021 raise UserError("--force-recreate and --no-recreate cannot be combined.") 1022 1023 if force_recreate: 1024 return ConvergenceStrategy.always 1025 1026 if no_recreate: 1027 return ConvergenceStrategy.never 1028 1029 return ConvergenceStrategy.changed 1030 1031 1032 def timeout_from_opts(options): 1033 timeout = options.get('--timeout') 1034 return None if timeout is None else int(timeout) 1035 1036 1037 def exitval_from_opts(options, project): 1038 exit_value_from = options.get('--exit-code-from') 1039 if exit_value_from: 1040 if not options.get('--abort-on-container-exit'): 1041 log.warn('using --exit-code-from implies --abort-on-container-exit') 1042 options['--abort-on-container-exit'] = True 1043 if exit_value_from not in [s.name for s in project.get_services()]: 1044 log.error('No service named "%s" was found in your compose file.', 1045 exit_value_from) 1046 sys.exit(2) 1047 return exit_value_from 1048 1049 1050 def image_type_from_opt(flag, value): 1051 if not value: 1052 return ImageType.none 1053 try: 1054 return ImageType[value] 1055 except KeyError: 1056 raise UserError("%s flag must be one of: all, local" % flag) 1057 1058 1059 def build_action_from_opts(options): 1060 if options['--build'] and options['--no-build']: 1061 raise UserError("--build and --no-build can not be combined.") 1062 1063 if options['--build']: 1064 return BuildAction.force 1065 1066 if options['--no-build']: 1067 return BuildAction.skip 1068 1069 return BuildAction.none 1070 1071 1072 def build_container_options(options, detach, command): 1073 container_options = { 1074 'command': command, 1075 'tty': not (detach or options['-T'] or not sys.stdin.isatty()), 1076 'stdin_open': not detach, 1077 'detach': detach, 1078 } 1079 1080 if options['-e']: 1081 container_options['environment'] = Environment.from_command_line( 1082 parse_environment(options['-e']) 1083 ) 1084 1085 if options['--entrypoint']: 1086 container_options['entrypoint'] = options.get('--entrypoint') 1087 1088 if options['--rm']: 1089 container_options['restart'] = None 1090 1091 if options['--user']: 1092 container_options['user'] = options.get('--user') 1093 1094 if not options['--service-ports']: 1095 container_options['ports'] = [] 1096 1097 if options['--publish']: 1098 container_options['ports'] = options.get('--publish') 1099 1100 if options['--name']: 1101 container_options['name'] = options['--name'] 1102 1103 if options['--workdir']: 1104 container_options['working_dir'] = options['--workdir'] 1105 1106 if options['--volume']: 1107 volumes = [VolumeSpec.parse(i) for i in options['--volume']] 1108 container_options['volumes'] = volumes 1109 1110 return container_options 1111 1112 1113 def run_one_off_container(container_options, project, service, options): 1114 if not options['--no-deps']: 1115 deps = service.get_dependency_names() 1116 if deps: 1117 project.up( 1118 service_names=deps, 1119 start_deps=True, 1120 strategy=ConvergenceStrategy.never) 1121 1122 project.initialize() 1123 1124 container = service.create_container( 1125 quiet=True, 1126 one_off=True, 1127 **container_options) 1128 1129 if options['-d']: 1130 service.start_container(container) 1131 print(container.name) 1132 return 1133 1134 def 
remove_container(force=False): 1135 if options['--rm']: 1136 project.client.remove_container(container.id, force=True, v=True) 1137 1138 signals.set_signal_handler_to_shutdown() 1139 try: 1140 try: 1141 if IS_WINDOWS_PLATFORM: 1142 service.connect_container_to_networks(container) 1143 exit_code = call_docker(["start", "--attach", "--interactive", container.id]) 1144 else: 1145 operation = RunOperation( 1146 project.client, 1147 container.id, 1148 interactive=not options['-T'], 1149 logs=False, 1150 ) 1151 pty = PseudoTerminal(project.client, operation) 1152 sockets = pty.sockets() 1153 service.start_container(container) 1154 pty.start(sockets) 1155 exit_code = container.wait() 1156 except signals.ShutdownException: 1157 project.client.stop(container.id) 1158 exit_code = 1 1159 except signals.ShutdownException: 1160 project.client.kill(container.id) 1161 remove_container(force=True) 1162 sys.exit(2) 1163 1164 remove_container() 1165 sys.exit(exit_code) 1166 1167 1168 def log_printer_from_project( 1169 project, 1170 containers, 1171 monochrome, 1172 log_args, 1173 cascade_stop=False, 1174 event_stream=None, 1175 ): 1176 return LogPrinter( 1177 containers, 1178 build_log_presenters(project.service_names, monochrome), 1179 event_stream or project.events(), 1180 cascade_stop=cascade_stop, 1181 log_args=log_args) 1182 1183 1184 def filter_containers_to_service_names(containers, service_names): 1185 if not service_names: 1186 return containers 1187 1188 return [ 1189 container 1190 for container in containers if container.service in service_names 1191 ] 1192 1193 1194 @contextlib.contextmanager 1195 def up_shutdown_context(project, service_names, timeout, detached): 1196 if detached: 1197 yield 1198 return 1199 1200 signals.set_signal_handler_to_shutdown() 1201 try: 1202 try: 1203 yield 1204 except signals.ShutdownException: 1205 print("Gracefully stopping... (press Ctrl+C again to force)") 1206 project.stop(service_names=service_names, timeout=timeout) 1207 except signals.ShutdownException: 1208 project.kill(service_names=service_names) 1209 sys.exit(2) 1210 1211 1212 def list_containers(containers): 1213 return ", ".join(c.name for c in containers) 1214 1215 1216 def exit_if(condition, message, exit_code): 1217 if condition: 1218 log.error(message) 1219 raise SystemExit(exit_code) 1220 1221 1222 def call_docker(args): 1223 executable_path = find_executable('docker') 1224 if not executable_path: 1225 raise UserError(errors.docker_not_found_msg("Couldn't find `docker` binary.")) 1226 1227 args = [executable_path] + args 1228 log.debug(" ".join(map(pipes.quote, args))) 1229 1230 return subprocess.call(args) 1231 [end of compose/cli/main.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
docker/compose
73aff2b50f22d4e9fac99eba629ae3478f5bf2a9
Add option to resolve tags to hashes for config command

**Description**

Docker 1.13 adds the option to deploy stacks using Docker Compose version 3 files in addition to Distributed Application Bundle (DAB) files. When `docker-compose bundle` is used to produce a DAB, it fetches image digests and replaces image tags with digests in the compose project. This produces a DAB whose images are pinned to a specific digest, rather than a tag that can be changed to different digests over time.

`docker-compose config` can be used to “compile” or combine multiple compose files together, resolve `.env` variables, and handle all of Docker Compose's options to produce a single compose file suitable for use with `docker stack deploy --compose-file`. It would be useful to add an option to the `config` command that replaces image tags with digests, like the `bundle` command does, to produce a compose file destined for stack deployments that acts like DAB files with respect to image pinning.

**Steps to reproduce the issue:**

1. Given Docker Compose file(s) forming a project, call `docker-compose config`.

**Describe the results you received:**

A combined/resolved Docker Compose configuration is output. Any image tags remain as tags.

**Describe the results you expected:**

An option on `docker-compose config` that causes Docker Compose to output image digests instead of tags.

**Additional information you deem important (e.g. issue happens only occasionally):**

The brief description of the `config` command says that it is to “Validate and view the compose file.” But it is the only way to produce a Docker Compose file that uses features not supported by `docker stack deploy --compose-file`. I believe this feature request points to a broader need for `docker-compose config` to be treated more like `docker-compose bundle`, and be used for generating a single/compiled/resolved Compose file for a Docker Compose project. This includes adopting bundle's options (`--push-images` and `--output`).
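To make the request above concrete, the following is a minimal, purely illustrative Python sketch of what "pinning image tags to digests" means for a parsed compose mapping. The service names and digest values are invented for the example, and the tag handling is deliberately simplified; a real implementation has to look the digests up in a registry, as `docker-compose bundle` does.

```python
# Illustrative sketch: replace `repo:tag` image references with
# digest-pinned `repo@sha256:...` references in a parsed compose mapping.
compose = {
    "version": "3",
    "services": {
        "web": {"image": "example/web:latest"},
        "db": {"image": "postgres:9.6"},
    },
}

# Hypothetical repo -> digest lookup; in practice these come from a registry.
digests = {
    "example/web": "sha256:" + "0" * 64,
    "postgres": "sha256:" + "1" * 64,
}

for service in compose["services"].values():
    # Simplified: strip the trailing tag (ignores registry hosts with ports).
    repo = service["image"].rsplit(":", 1)[0]
    if repo in digests:
        service["image"] = "{}@{}".format(repo, digests[repo])

print(compose["services"]["web"]["image"])  # example/web@sha256:000...0
```

A compose file rewritten this way stays valid for `docker stack deploy --compose-file`, but every service resolves to the exact image content that was current when the file was generated.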
2017-01-14T18:39:30Z
<patch> diff --git a/compose/cli/main.py b/compose/cli/main.py --- a/compose/cli/main.py +++ b/compose/cli/main.py @@ -263,43 +263,7 @@ def bundle(self, config_options, options): if not output: output = "{}.dab".format(self.project.name) - with errors.handle_connection_errors(self.project.client): - try: - image_digests = get_image_digests( - self.project, - allow_push=options['--push-images'], - ) - except MissingDigests as e: - def list_images(images): - return "\n".join(" {}".format(name) for name in sorted(images)) - - paras = ["Some images are missing digests."] - - if e.needs_push: - command_hint = ( - "Use `docker-compose push {}` to push them. " - "You can do this automatically with `docker-compose bundle --push-images`." - .format(" ".join(sorted(e.needs_push))) - ) - paras += [ - "The following images can be pushed:", - list_images(e.needs_push), - command_hint, - ] - - if e.needs_pull: - command_hint = ( - "Use `docker-compose pull {}` to pull them. " - .format(" ".join(sorted(e.needs_pull))) - ) - - paras += [ - "The following images need to be pulled:", - list_images(e.needs_pull), - command_hint, - ] - - raise UserError("\n\n".join(paras)) + image_digests = image_digests_for_project(self.project, options['--push-images']) with open(output, 'w') as f: f.write(serialize_bundle(compose_config, image_digests)) @@ -313,13 +277,20 @@ def config(self, config_options, options): Usage: config [options] Options: - -q, --quiet Only validate the configuration, don't print - anything. - --services Print the service names, one per line. - --volumes Print the volume names, one per line. + --resolve-image-digests Pin image tags to digests. + -q, --quiet Only validate the configuration, don't print + anything. + --services Print the service names, one per line. + --volumes Print the volume names, one per line. """ + compose_config = get_config_from_options(self.project_dir, config_options) + image_digests = None + + if options['--resolve-image-digests']: + self.project = project_from_options('.', config_options) + image_digests = image_digests_for_project(self.project) if options['--quiet']: return @@ -332,7 +303,7 @@ def config(self, config_options, options): print('\n'.join(volume for volume in compose_config.volumes)) return - print(serialize_config(compose_config)) + print(serialize_config(compose_config, image_digests)) def create(self, options): """ @@ -1034,6 +1005,45 @@ def timeout_from_opts(options): return None if timeout is None else int(timeout) +def image_digests_for_project(project, allow_push=False): + with errors.handle_connection_errors(project.client): + try: + return get_image_digests( + project, + allow_push=allow_push + ) + except MissingDigests as e: + def list_images(images): + return "\n".join(" {}".format(name) for name in sorted(images)) + + paras = ["Some images are missing digests."] + + if e.needs_push: + command_hint = ( + "Use `docker-compose push {}` to push them. " + .format(" ".join(sorted(e.needs_push))) + ) + paras += [ + "The following images can be pushed:", + list_images(e.needs_push), + command_hint, + ] + + if e.needs_pull: + command_hint = ( + "Use `docker-compose pull {}` to pull them. 
" + .format(" ".join(sorted(e.needs_pull))) + ) + + paras += [ + "The following images need to be pulled:", + list_images(e.needs_pull), + command_hint, + ] + + raise UserError("\n\n".join(paras)) + + def exitval_from_opts(options, project): exit_value_from = options.get('--exit-code-from') if exit_value_from: diff --git a/compose/config/serialize.py b/compose/config/serialize.py --- a/compose/config/serialize.py +++ b/compose/config/serialize.py @@ -26,10 +26,13 @@ def serialize_dict_type(dumper, data): yaml.SafeDumper.add_representer(types.ServicePort, serialize_dict_type) -def denormalize_config(config): +def denormalize_config(config, image_digests=None): result = {'version': V2_1 if config.version == V1 else config.version} denormalized_services = [ - denormalize_service_dict(service_dict, config.version) + denormalize_service_dict( + service_dict, + config.version, + image_digests[service_dict['name']] if image_digests else None) for service_dict in config.services ] result['services'] = { @@ -51,9 +54,9 @@ def denormalize_config(config): return result -def serialize_config(config): +def serialize_config(config, image_digests=None): return yaml.safe_dump( - denormalize_config(config), + denormalize_config(config, image_digests), default_flow_style=False, indent=2, width=80) @@ -78,9 +81,12 @@ def serialize_ns_time_value(value): return '{0}{1}'.format(*result) -def denormalize_service_dict(service_dict, version): +def denormalize_service_dict(service_dict, version, image_digest=None): service_dict = service_dict.copy() + if image_digest: + service_dict['image'] = image_digest + if 'restart' in service_dict: service_dict['restart'] = types.serialize_restart_spec( service_dict['restart'] </patch>
[]
[]
numpy__numpy-11219
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> PR #10559 failed to fix einsum (optimize=True) broadcasting bug #10559 introduced the [following code](https://github.com/numpy/numpy/blob/a9e9343ecf5a6564eb09f1d7078c407db2ff7d9e/numpy/core/einsumfunc.py#L1107) to prevent dispatching `numpy.tensordot` in a case where einsum was broadcasting over a singleton dimension. ```python # Handle broadcasting vs BLAS cases if blas: # Checks have already been handled input_str, results_index = einsum_str.split('->') input_left, input_right = input_str.split(',') if 1 in tmp_operands[0] or 1 in tmp_operands[1]: left_dims = {dim: size for dim, size in zip(input_left, tmp_operands[0].shape)} right_dims = {dim: size for dim, size in zip(input_right, tmp_operands[1].shape)} # If dims do not match we are broadcasting, BLAS off if any(left_dims[ind] != right_dims[ind] for ind in idx_rm): blas = False ``` However, this checks to see if `1` occurs within the operand array itself rather than the shape of the operand. Incidentally, this likely produced a nasty performance regression. Thus the line `if 1 in tmp_operands[0] or 1 in tmp_operands[1]` should be `if 1 in tmp_operands[0].shape or 1 in tmp_operands[1].shape` This wasn't caught by the [unit test](https://github.com/numpy/numpy/blob/a9e9343ecf5a6564eb09f1d7078c407db2ff7d9e/numpy/core/tests/test_einsum.py#L486) because arrays of ones were used 🌌 This leads to the following behavior: ```python >>> x = np.array([0., 1., 0.]) # contains 1, no blas >>> y = np.array([0.0]) >>> np.einsum("i,i", x, y, optimize=True) 0. ``` ```python >>> x = np.array([0., -1., 0.]) # doesn't contain 1, yes blas >>> y = np.array([0.0]) >>> np.einsum("i,i", x, y, optimize=True) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-184-b0dcea8eedea> in <module>() 1 x = np.array([0., -1., 0.]) 2 y = np.array([0.0]) ----> 3 np.einsum("i,i", x, y, optimize=True) c:\anaconda\envs\py36\lib\site-packages\numpy\core\einsumfunc.py in einsum(*operands, **kwargs) 1132 1133 # Contract! -> 1134 new_view = tensordot(*tmp_operands, axes=(tuple(left_pos), tuple(right_pos))) 1135 1136 # Build a new view if needed c:\anaconda\envs\py36\lib\site-packages\numpy\core\numeric.py in tensordot(a, b, axes) 1281 axes_b[k] += ndb 1282 if not equal: -> 1283 raise ValueError("shape-mismatch for sum") 1284 1285 # Move the axes to sum over to the end of "a" ValueError: shape-mismatch for sum ``` </issue> <code> [start of README.md] 1 # <img alt="NumPy" src="branding/icons/numpylogo.svg" height="60"> 2 3 [![Travis](https://img.shields.io/travis/numpy/numpy/master.svg?label=Travis%20CI)](https://travis-ci.org/numpy/numpy) 4 [![AppVeyor](https://img.shields.io/appveyor/ci/charris/numpy/master.svg?label=AppVeyor)](https://ci.appveyor.com/project/charris/numpy) 5 6 NumPy is the fundamental package needed for scientific computing with Python. 
7 8 - **Website (including documentation):** http://www.numpy.org 9 - **Mailing list:** https://mail.python.org/mailman/listinfo/numpy-discussion 10 - **Source:** https://github.com/numpy/numpy 11 - **Bug reports:** https://github.com/numpy/numpy/issues 12 13 It provides: 14 15 - a powerful N-dimensional array object 16 - sophisticated (broadcasting) functions 17 - tools for integrating C/C++ and Fortran code 18 - useful linear algebra, Fourier transform, and random number capabilities 19 20 If ``nose`` is installed, tests can be run after installation with: 21 22 python -c 'import numpy; numpy.test()' 23 24 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org) 25 [end of README.md] [start of numpy/lib/shape_base.py] 1 from __future__ import division, absolute_import, print_function 2 3 import warnings 4 5 import numpy.core.numeric as _nx 6 from numpy.core.numeric import ( 7 asarray, zeros, outer, concatenate, array, asanyarray 8 ) 9 from numpy.core.fromnumeric import product, reshape, transpose 10 from numpy.core.multiarray import normalize_axis_index 11 from numpy.core import vstack, atleast_3d 12 from numpy.lib.index_tricks import ndindex 13 from numpy.matrixlib.defmatrix import matrix # this raises all the right alarm bells 14 15 16 __all__ = [ 17 'column_stack', 'row_stack', 'dstack', 'array_split', 'split', 18 'hsplit', 'vsplit', 'dsplit', 'apply_over_axes', 'expand_dims', 19 'apply_along_axis', 'kron', 'tile', 'get_array_wrap' 20 ] 21 22 23 def apply_along_axis(func1d, axis, arr, *args, **kwargs): 24 """ 25 Apply a function to 1-D slices along the given axis. 26 27 Execute `func1d(a, *args)` where `func1d` operates on 1-D arrays and `a` 28 is a 1-D slice of `arr` along `axis`. 29 30 This is equivalent to (but faster than) the following use of `ndindex` and 31 `s_`, which sets each of ``ii``, ``jj``, and ``kk`` to a tuple of indices:: 32 33 Ni, Nk = a.shape[:axis], a.shape[axis+1:] 34 for ii in ndindex(Ni): 35 for kk in ndindex(Nk): 36 f = func1d(arr[ii + s_[:,] + kk]) 37 Nj = f.shape 38 for jj in ndindex(Nj): 39 out[ii + jj + kk] = f[jj] 40 41 Equivalently, eliminating the inner loop, this can be expressed as:: 42 43 Ni, Nk = a.shape[:axis], a.shape[axis+1:] 44 for ii in ndindex(Ni): 45 for kk in ndindex(Nk): 46 out[ii + s_[...,] + kk] = func1d(arr[ii + s_[:,] + kk]) 47 48 Parameters 49 ---------- 50 func1d : function (M,) -> (Nj...) 51 This function should accept 1-D arrays. It is applied to 1-D 52 slices of `arr` along the specified axis. 53 axis : integer 54 Axis along which `arr` is sliced. 55 arr : ndarray (Ni..., M, Nk...) 56 Input array. 57 args : any 58 Additional arguments to `func1d`. 59 kwargs : any 60 Additional named arguments to `func1d`. 61 62 .. versionadded:: 1.9.0 63 64 65 Returns 66 ------- 67 out : ndarray (Ni..., Nj..., Nk...) 68 The output array. The shape of `out` is identical to the shape of 69 `arr`, except along the `axis` dimension. This axis is removed, and 70 replaced with new dimensions equal to the shape of the return value 71 of `func1d`. So if `func1d` returns a scalar `out` will have one 72 fewer dimensions than `arr`. 73 74 See Also 75 -------- 76 apply_over_axes : Apply a function repeatedly over multiple axes. 77 78 Examples 79 -------- 80 >>> def my_func(a): 81 ... \"\"\"Average first and last element of a 1-D array\"\"\" 82 ... 
return (a[0] + a[-1]) * 0.5 83 >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) 84 >>> np.apply_along_axis(my_func, 0, b) 85 array([ 4., 5., 6.]) 86 >>> np.apply_along_axis(my_func, 1, b) 87 array([ 2., 5., 8.]) 88 89 For a function that returns a 1D array, the number of dimensions in 90 `outarr` is the same as `arr`. 91 92 >>> b = np.array([[8,1,7], [4,3,9], [5,2,6]]) 93 >>> np.apply_along_axis(sorted, 1, b) 94 array([[1, 7, 8], 95 [3, 4, 9], 96 [2, 5, 6]]) 97 98 For a function that returns a higher dimensional array, those dimensions 99 are inserted in place of the `axis` dimension. 100 101 >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) 102 >>> np.apply_along_axis(np.diag, -1, b) 103 array([[[1, 0, 0], 104 [0, 2, 0], 105 [0, 0, 3]], 106 [[4, 0, 0], 107 [0, 5, 0], 108 [0, 0, 6]], 109 [[7, 0, 0], 110 [0, 8, 0], 111 [0, 0, 9]]]) 112 """ 113 # handle negative axes 114 arr = asanyarray(arr) 115 nd = arr.ndim 116 axis = normalize_axis_index(axis, nd) 117 118 # arr, with the iteration axis at the end 119 in_dims = list(range(nd)) 120 inarr_view = transpose(arr, in_dims[:axis] + in_dims[axis+1:] + [axis]) 121 122 # compute indices for the iteration axes, and append a trailing ellipsis to 123 # prevent 0d arrays decaying to scalars, which fixes gh-8642 124 inds = ndindex(inarr_view.shape[:-1]) 125 inds = (ind + (Ellipsis,) for ind in inds) 126 127 # invoke the function on the first item 128 try: 129 ind0 = next(inds) 130 except StopIteration: 131 raise ValueError('Cannot apply_along_axis when any iteration dimensions are 0') 132 res = asanyarray(func1d(inarr_view[ind0], *args, **kwargs)) 133 134 # build a buffer for storing evaluations of func1d. 135 # remove the requested axis, and add the new ones on the end. 136 # laid out so that each write is contiguous. 137 # for a tuple index inds, buff[inds] = func1d(inarr_view[inds]) 138 buff = zeros(inarr_view.shape[:-1] + res.shape, res.dtype) 139 140 # permutation of axes such that out = buff.transpose(buff_permute) 141 buff_dims = list(range(buff.ndim)) 142 buff_permute = ( 143 buff_dims[0 : axis] + 144 buff_dims[buff.ndim-res.ndim : buff.ndim] + 145 buff_dims[axis : buff.ndim-res.ndim] 146 ) 147 148 # matrices have a nasty __array_prepare__ and __array_wrap__ 149 if not isinstance(res, matrix): 150 buff = res.__array_prepare__(buff) 151 152 # save the first result, then compute and save all remaining results 153 buff[ind0] = res 154 for ind in inds: 155 buff[ind] = asanyarray(func1d(inarr_view[ind], *args, **kwargs)) 156 157 if not isinstance(res, matrix): 158 # wrap the array, to preserve subclasses 159 buff = res.__array_wrap__(buff) 160 161 # finally, rotate the inserted axes back to where they belong 162 return transpose(buff, buff_permute) 163 164 else: 165 # matrices have to be transposed first, because they collapse dimensions! 166 out_arr = transpose(buff, buff_permute) 167 return res.__array_wrap__(out_arr) 168 169 170 def apply_over_axes(func, a, axes): 171 """ 172 Apply a function repeatedly over multiple axes. 173 174 `func` is called as `res = func(a, axis)`, where `axis` is the first 175 element of `axes`. The result `res` of the function call must have 176 either the same dimensions as `a` or one less dimension. If `res` 177 has one less dimension than `a`, a dimension is inserted before 178 `axis`. The call to `func` is then repeated for each axis in `axes`, 179 with `res` as the first argument. 180 181 Parameters 182 ---------- 183 func : function 184 This function must take two arguments, `func(a, axis)`. 
185 a : array_like 186 Input array. 187 axes : array_like 188 Axes over which `func` is applied; the elements must be integers. 189 190 Returns 191 ------- 192 apply_over_axis : ndarray 193 The output array. The number of dimensions is the same as `a`, 194 but the shape can be different. This depends on whether `func` 195 changes the shape of its output with respect to its input. 196 197 See Also 198 -------- 199 apply_along_axis : 200 Apply a function to 1-D slices of an array along the given axis. 201 202 Notes 203 ------ 204 This function is equivalent to tuple axis arguments to reorderable ufuncs 205 with keepdims=True. Tuple axis arguments to ufuncs have been available since 206 version 1.7.0. 207 208 Examples 209 -------- 210 >>> a = np.arange(24).reshape(2,3,4) 211 >>> a 212 array([[[ 0, 1, 2, 3], 213 [ 4, 5, 6, 7], 214 [ 8, 9, 10, 11]], 215 [[12, 13, 14, 15], 216 [16, 17, 18, 19], 217 [20, 21, 22, 23]]]) 218 219 Sum over axes 0 and 2. The result has same number of dimensions 220 as the original array: 221 222 >>> np.apply_over_axes(np.sum, a, [0,2]) 223 array([[[ 60], 224 [ 92], 225 [124]]]) 226 227 Tuple axis arguments to ufuncs are equivalent: 228 229 >>> np.sum(a, axis=(0,2), keepdims=True) 230 array([[[ 60], 231 [ 92], 232 [124]]]) 233 234 """ 235 val = asarray(a) 236 N = a.ndim 237 if array(axes).ndim == 0: 238 axes = (axes,) 239 for axis in axes: 240 if axis < 0: 241 axis = N + axis 242 args = (val, axis) 243 res = func(*args) 244 if res.ndim == val.ndim: 245 val = res 246 else: 247 res = expand_dims(res, axis) 248 if res.ndim == val.ndim: 249 val = res 250 else: 251 raise ValueError("function is not returning " 252 "an array of the correct shape") 253 return val 254 255 def expand_dims(a, axis): 256 """ 257 Expand the shape of an array. 258 259 Insert a new axis that will appear at the `axis` position in the expanded 260 array shape. 261 262 .. note:: Previous to NumPy 1.13.0, neither ``axis < -a.ndim - 1`` nor 263 ``axis > a.ndim`` raised errors or put the new axis where documented. 264 Those axis values are now deprecated and will raise an AxisError in the 265 future. 266 267 Parameters 268 ---------- 269 a : array_like 270 Input array. 271 axis : int 272 Position in the expanded axes where the new axis is placed. 273 274 Returns 275 ------- 276 res : ndarray 277 Output array. The number of dimensions is one greater than that of 278 the input array. 279 280 See Also 281 -------- 282 squeeze : The inverse operation, removing singleton dimensions 283 reshape : Insert, remove, and combine dimensions, and resize existing ones 284 doc.indexing, atleast_1d, atleast_2d, atleast_3d 285 286 Examples 287 -------- 288 >>> x = np.array([1,2]) 289 >>> x.shape 290 (2,) 291 292 The following is equivalent to ``x[np.newaxis,:]`` or ``x[np.newaxis]``: 293 294 >>> y = np.expand_dims(x, axis=0) 295 >>> y 296 array([[1, 2]]) 297 >>> y.shape 298 (1, 2) 299 300 >>> y = np.expand_dims(x, axis=1) # Equivalent to x[:,np.newaxis] 301 >>> y 302 array([[1], 303 [2]]) 304 >>> y.shape 305 (2, 1) 306 307 Note that some examples may use ``None`` instead of ``np.newaxis``. 
These 308 are the same objects: 309 310 >>> np.newaxis is None 311 True 312 313 """ 314 a = asarray(a) 315 shape = a.shape 316 if axis > a.ndim or axis < -a.ndim - 1: 317 # 2017-05-17, 1.13.0 318 warnings.warn("Both axis > a.ndim and axis < -a.ndim - 1 are " 319 "deprecated and will raise an AxisError in the future.", 320 DeprecationWarning, stacklevel=2) 321 # When the deprecation period expires, delete this if block, 322 if axis < 0: 323 axis = axis + a.ndim + 1 324 # and uncomment the following line. 325 # axis = normalize_axis_index(axis, a.ndim + 1) 326 return a.reshape(shape[:axis] + (1,) + shape[axis:]) 327 328 row_stack = vstack 329 330 def column_stack(tup): 331 """ 332 Stack 1-D arrays as columns into a 2-D array. 333 334 Take a sequence of 1-D arrays and stack them as columns 335 to make a single 2-D array. 2-D arrays are stacked as-is, 336 just like with `hstack`. 1-D arrays are turned into 2-D columns 337 first. 338 339 Parameters 340 ---------- 341 tup : sequence of 1-D or 2-D arrays. 342 Arrays to stack. All of them must have the same first dimension. 343 344 Returns 345 ------- 346 stacked : 2-D array 347 The array formed by stacking the given arrays. 348 349 See Also 350 -------- 351 stack, hstack, vstack, concatenate 352 353 Examples 354 -------- 355 >>> a = np.array((1,2,3)) 356 >>> b = np.array((2,3,4)) 357 >>> np.column_stack((a,b)) 358 array([[1, 2], 359 [2, 3], 360 [3, 4]]) 361 362 """ 363 arrays = [] 364 for v in tup: 365 arr = array(v, copy=False, subok=True) 366 if arr.ndim < 2: 367 arr = array(arr, copy=False, subok=True, ndmin=2).T 368 arrays.append(arr) 369 return _nx.concatenate(arrays, 1) 370 371 def dstack(tup): 372 """ 373 Stack arrays in sequence depth wise (along third axis). 374 375 This is equivalent to concatenation along the third axis after 2-D arrays 376 of shape `(M,N)` have been reshaped to `(M,N,1)` and 1-D arrays of shape 377 `(N,)` have been reshaped to `(1,N,1)`. Rebuilds arrays divided by 378 `dsplit`. 379 380 This function makes most sense for arrays with up to 3 dimensions. For 381 instance, for pixel-data with a height (first axis), width (second axis), 382 and r/g/b channels (third axis). The functions `concatenate`, `stack` and 383 `block` provide more general stacking and concatenation operations. 384 385 Parameters 386 ---------- 387 tup : sequence of arrays 388 The arrays must have the same shape along all but the third axis. 389 1-D or 2-D arrays must have the same shape. 390 391 Returns 392 ------- 393 stacked : ndarray 394 The array formed by stacking the given arrays, will be at least 3-D. 395 396 See Also 397 -------- 398 stack : Join a sequence of arrays along a new axis. 399 vstack : Stack along first axis. 400 hstack : Stack along second axis. 401 concatenate : Join a sequence of arrays along an existing axis. 402 dsplit : Split array along third axis. 
403 404 Examples 405 -------- 406 >>> a = np.array((1,2,3)) 407 >>> b = np.array((2,3,4)) 408 >>> np.dstack((a,b)) 409 array([[[1, 2], 410 [2, 3], 411 [3, 4]]]) 412 413 >>> a = np.array([[1],[2],[3]]) 414 >>> b = np.array([[2],[3],[4]]) 415 >>> np.dstack((a,b)) 416 array([[[1, 2]], 417 [[2, 3]], 418 [[3, 4]]]) 419 420 """ 421 return _nx.concatenate([atleast_3d(_m) for _m in tup], 2) 422 423 def _replace_zero_by_x_arrays(sub_arys): 424 for i in range(len(sub_arys)): 425 if _nx.ndim(sub_arys[i]) == 0: 426 sub_arys[i] = _nx.empty(0, dtype=sub_arys[i].dtype) 427 elif _nx.sometrue(_nx.equal(_nx.shape(sub_arys[i]), 0)): 428 sub_arys[i] = _nx.empty(0, dtype=sub_arys[i].dtype) 429 return sub_arys 430 431 def array_split(ary, indices_or_sections, axis=0): 432 """ 433 Split an array into multiple sub-arrays. 434 435 Please refer to the ``split`` documentation. The only difference 436 between these functions is that ``array_split`` allows 437 `indices_or_sections` to be an integer that does *not* equally 438 divide the axis. For an array of length l that should be split 439 into n sections, it returns l % n sub-arrays of size l//n + 1 440 and the rest of size l//n. 441 442 See Also 443 -------- 444 split : Split array into multiple sub-arrays of equal size. 445 446 Examples 447 -------- 448 >>> x = np.arange(8.0) 449 >>> np.array_split(x, 3) 450 [array([ 0., 1., 2.]), array([ 3., 4., 5.]), array([ 6., 7.])] 451 452 >>> x = np.arange(7.0) 453 >>> np.array_split(x, 3) 454 [array([ 0., 1., 2.]), array([ 3., 4.]), array([ 5., 6.])] 455 456 """ 457 try: 458 Ntotal = ary.shape[axis] 459 except AttributeError: 460 Ntotal = len(ary) 461 try: 462 # handle scalar case. 463 Nsections = len(indices_or_sections) + 1 464 div_points = [0] + list(indices_or_sections) + [Ntotal] 465 except TypeError: 466 # indices_or_sections is a scalar, not an array. 467 Nsections = int(indices_or_sections) 468 if Nsections <= 0: 469 raise ValueError('number sections must be larger than 0.') 470 Neach_section, extras = divmod(Ntotal, Nsections) 471 section_sizes = ([0] + 472 extras * [Neach_section+1] + 473 (Nsections-extras) * [Neach_section]) 474 div_points = _nx.array(section_sizes).cumsum() 475 476 sub_arys = [] 477 sary = _nx.swapaxes(ary, axis, 0) 478 for i in range(Nsections): 479 st = div_points[i] 480 end = div_points[i + 1] 481 sub_arys.append(_nx.swapaxes(sary[st:end], axis, 0)) 482 483 return sub_arys 484 485 486 def split(ary,indices_or_sections,axis=0): 487 """ 488 Split an array into multiple sub-arrays. 489 490 Parameters 491 ---------- 492 ary : ndarray 493 Array to be divided into sub-arrays. 494 indices_or_sections : int or 1-D array 495 If `indices_or_sections` is an integer, N, the array will be divided 496 into N equal arrays along `axis`. If such a split is not possible, 497 an error is raised. 498 499 If `indices_or_sections` is a 1-D array of sorted integers, the entries 500 indicate where along `axis` the array is split. For example, 501 ``[2, 3]`` would, for ``axis=0``, result in 502 503 - ary[:2] 504 - ary[2:3] 505 - ary[3:] 506 507 If an index exceeds the dimension of the array along `axis`, 508 an empty sub-array is returned correspondingly. 509 axis : int, optional 510 The axis along which to split, default is 0. 511 512 Returns 513 ------- 514 sub-arrays : list of ndarrays 515 A list of sub-arrays. 516 517 Raises 518 ------ 519 ValueError 520 If `indices_or_sections` is given as an integer, but 521 a split does not result in equal division. 
522 523 See Also 524 -------- 525 array_split : Split an array into multiple sub-arrays of equal or 526 near-equal size. Does not raise an exception if 527 an equal division cannot be made. 528 hsplit : Split array into multiple sub-arrays horizontally (column-wise). 529 vsplit : Split array into multiple sub-arrays vertically (row wise). 530 dsplit : Split array into multiple sub-arrays along the 3rd axis (depth). 531 concatenate : Join a sequence of arrays along an existing axis. 532 stack : Join a sequence of arrays along a new axis. 533 hstack : Stack arrays in sequence horizontally (column wise). 534 vstack : Stack arrays in sequence vertically (row wise). 535 dstack : Stack arrays in sequence depth wise (along third dimension). 536 537 Examples 538 -------- 539 >>> x = np.arange(9.0) 540 >>> np.split(x, 3) 541 [array([ 0., 1., 2.]), array([ 3., 4., 5.]), array([ 6., 7., 8.])] 542 543 >>> x = np.arange(8.0) 544 >>> np.split(x, [3, 5, 6, 10]) 545 [array([ 0., 1., 2.]), 546 array([ 3., 4.]), 547 array([ 5.]), 548 array([ 6., 7.]), 549 array([], dtype=float64)] 550 551 """ 552 try: 553 len(indices_or_sections) 554 except TypeError: 555 sections = indices_or_sections 556 N = ary.shape[axis] 557 if N % sections: 558 raise ValueError( 559 'array split does not result in an equal division') 560 res = array_split(ary, indices_or_sections, axis) 561 return res 562 563 def hsplit(ary, indices_or_sections): 564 """ 565 Split an array into multiple sub-arrays horizontally (column-wise). 566 567 Please refer to the `split` documentation. `hsplit` is equivalent 568 to `split` with ``axis=1``, the array is always split along the second 569 axis regardless of the array dimension. 570 571 See Also 572 -------- 573 split : Split an array into multiple sub-arrays of equal size. 574 575 Examples 576 -------- 577 >>> x = np.arange(16.0).reshape(4, 4) 578 >>> x 579 array([[ 0., 1., 2., 3.], 580 [ 4., 5., 6., 7.], 581 [ 8., 9., 10., 11.], 582 [ 12., 13., 14., 15.]]) 583 >>> np.hsplit(x, 2) 584 [array([[ 0., 1.], 585 [ 4., 5.], 586 [ 8., 9.], 587 [ 12., 13.]]), 588 array([[ 2., 3.], 589 [ 6., 7.], 590 [ 10., 11.], 591 [ 14., 15.]])] 592 >>> np.hsplit(x, np.array([3, 6])) 593 [array([[ 0., 1., 2.], 594 [ 4., 5., 6.], 595 [ 8., 9., 10.], 596 [ 12., 13., 14.]]), 597 array([[ 3.], 598 [ 7.], 599 [ 11.], 600 [ 15.]]), 601 array([], dtype=float64)] 602 603 With a higher dimensional array the split is still along the second axis. 604 605 >>> x = np.arange(8.0).reshape(2, 2, 2) 606 >>> x 607 array([[[ 0., 1.], 608 [ 2., 3.]], 609 [[ 4., 5.], 610 [ 6., 7.]]]) 611 >>> np.hsplit(x, 2) 612 [array([[[ 0., 1.]], 613 [[ 4., 5.]]]), 614 array([[[ 2., 3.]], 615 [[ 6., 7.]]])] 616 617 """ 618 if _nx.ndim(ary) == 0: 619 raise ValueError('hsplit only works on arrays of 1 or more dimensions') 620 if ary.ndim > 1: 621 return split(ary, indices_or_sections, 1) 622 else: 623 return split(ary, indices_or_sections, 0) 624 625 def vsplit(ary, indices_or_sections): 626 """ 627 Split an array into multiple sub-arrays vertically (row-wise). 628 629 Please refer to the ``split`` documentation. ``vsplit`` is equivalent 630 to ``split`` with `axis=0` (default), the array is always split along the 631 first axis regardless of the array dimension. 632 633 See Also 634 -------- 635 split : Split an array into multiple sub-arrays of equal size. 
636 637 Examples 638 -------- 639 >>> x = np.arange(16.0).reshape(4, 4) 640 >>> x 641 array([[ 0., 1., 2., 3.], 642 [ 4., 5., 6., 7.], 643 [ 8., 9., 10., 11.], 644 [ 12., 13., 14., 15.]]) 645 >>> np.vsplit(x, 2) 646 [array([[ 0., 1., 2., 3.], 647 [ 4., 5., 6., 7.]]), 648 array([[ 8., 9., 10., 11.], 649 [ 12., 13., 14., 15.]])] 650 >>> np.vsplit(x, np.array([3, 6])) 651 [array([[ 0., 1., 2., 3.], 652 [ 4., 5., 6., 7.], 653 [ 8., 9., 10., 11.]]), 654 array([[ 12., 13., 14., 15.]]), 655 array([], dtype=float64)] 656 657 With a higher dimensional array the split is still along the first axis. 658 659 >>> x = np.arange(8.0).reshape(2, 2, 2) 660 >>> x 661 array([[[ 0., 1.], 662 [ 2., 3.]], 663 [[ 4., 5.], 664 [ 6., 7.]]]) 665 >>> np.vsplit(x, 2) 666 [array([[[ 0., 1.], 667 [ 2., 3.]]]), 668 array([[[ 4., 5.], 669 [ 6., 7.]]])] 670 671 """ 672 if _nx.ndim(ary) < 2: 673 raise ValueError('vsplit only works on arrays of 2 or more dimensions') 674 return split(ary, indices_or_sections, 0) 675 676 def dsplit(ary, indices_or_sections): 677 """ 678 Split array into multiple sub-arrays along the 3rd axis (depth). 679 680 Please refer to the `split` documentation. `dsplit` is equivalent 681 to `split` with ``axis=2``, the array is always split along the third 682 axis provided the array dimension is greater than or equal to 3. 683 684 See Also 685 -------- 686 split : Split an array into multiple sub-arrays of equal size. 687 688 Examples 689 -------- 690 >>> x = np.arange(16.0).reshape(2, 2, 4) 691 >>> x 692 array([[[ 0., 1., 2., 3.], 693 [ 4., 5., 6., 7.]], 694 [[ 8., 9., 10., 11.], 695 [ 12., 13., 14., 15.]]]) 696 >>> np.dsplit(x, 2) 697 [array([[[ 0., 1.], 698 [ 4., 5.]], 699 [[ 8., 9.], 700 [ 12., 13.]]]), 701 array([[[ 2., 3.], 702 [ 6., 7.]], 703 [[ 10., 11.], 704 [ 14., 15.]]])] 705 >>> np.dsplit(x, np.array([3, 6])) 706 [array([[[ 0., 1., 2.], 707 [ 4., 5., 6.]], 708 [[ 8., 9., 10.], 709 [ 12., 13., 14.]]]), 710 array([[[ 3.], 711 [ 7.]], 712 [[ 11.], 713 [ 15.]]]), 714 array([], dtype=float64)] 715 716 """ 717 if _nx.ndim(ary) < 3: 718 raise ValueError('dsplit only works on arrays of 3 or more dimensions') 719 return split(ary, indices_or_sections, 2) 720 721 def get_array_prepare(*args): 722 """Find the wrapper for the array with the highest priority. 723 724 In case of ties, leftmost wins. If no wrapper is found, return None 725 """ 726 wrappers = sorted((getattr(x, '__array_priority__', 0), -i, 727 x.__array_prepare__) for i, x in enumerate(args) 728 if hasattr(x, '__array_prepare__')) 729 if wrappers: 730 return wrappers[-1][-1] 731 return None 732 733 def get_array_wrap(*args): 734 """Find the wrapper for the array with the highest priority. 735 736 In case of ties, leftmost wins. If no wrapper is found, return None 737 """ 738 wrappers = sorted((getattr(x, '__array_priority__', 0), -i, 739 x.__array_wrap__) for i, x in enumerate(args) 740 if hasattr(x, '__array_wrap__')) 741 if wrappers: 742 return wrappers[-1][-1] 743 return None 744 745 def kron(a, b): 746 """ 747 Kronecker product of two arrays. 748 749 Computes the Kronecker product, a composite array made of blocks of the 750 second array scaled by the first. 751 752 Parameters 753 ---------- 754 a, b : array_like 755 756 Returns 757 ------- 758 out : ndarray 759 760 See Also 761 -------- 762 outer : The outer product 763 764 Notes 765 ----- 766 The function assumes that the number of dimensions of `a` and `b` 767 are the same, if necessary prepending the smallest with ones. 
768 If `a.shape = (r0,r1,..,rN)` and `b.shape = (s0,s1,...,sN)`, 769 the Kronecker product has shape `(r0*s0, r1*s1, ..., rN*SN)`. 770 The elements are products of elements from `a` and `b`, organized 771 explicitly by:: 772 773 kron(a,b)[k0,k1,...,kN] = a[i0,i1,...,iN] * b[j0,j1,...,jN] 774 775 where:: 776 777 kt = it * st + jt, t = 0,...,N 778 779 In the common 2-D case (N=1), the block structure can be visualized:: 780 781 [[ a[0,0]*b, a[0,1]*b, ... , a[0,-1]*b ], 782 [ ... ... ], 783 [ a[-1,0]*b, a[-1,1]*b, ... , a[-1,-1]*b ]] 784 785 786 Examples 787 -------- 788 >>> np.kron([1,10,100], [5,6,7]) 789 array([ 5, 6, 7, 50, 60, 70, 500, 600, 700]) 790 >>> np.kron([5,6,7], [1,10,100]) 791 array([ 5, 50, 500, 6, 60, 600, 7, 70, 700]) 792 793 >>> np.kron(np.eye(2), np.ones((2,2))) 794 array([[ 1., 1., 0., 0.], 795 [ 1., 1., 0., 0.], 796 [ 0., 0., 1., 1.], 797 [ 0., 0., 1., 1.]]) 798 799 >>> a = np.arange(100).reshape((2,5,2,5)) 800 >>> b = np.arange(24).reshape((2,3,4)) 801 >>> c = np.kron(a,b) 802 >>> c.shape 803 (2, 10, 6, 20) 804 >>> I = (1,3,0,2) 805 >>> J = (0,2,1) 806 >>> J1 = (0,) + J # extend to ndim=4 807 >>> S1 = (1,) + b.shape 808 >>> K = tuple(np.array(I) * np.array(S1) + np.array(J1)) 809 >>> c[K] == a[I]*b[J] 810 True 811 812 """ 813 b = asanyarray(b) 814 a = array(a, copy=False, subok=True, ndmin=b.ndim) 815 ndb, nda = b.ndim, a.ndim 816 if (nda == 0 or ndb == 0): 817 return _nx.multiply(a, b) 818 as_ = a.shape 819 bs = b.shape 820 if not a.flags.contiguous: 821 a = reshape(a, as_) 822 if not b.flags.contiguous: 823 b = reshape(b, bs) 824 nd = ndb 825 if (ndb != nda): 826 if (ndb > nda): 827 as_ = (1,)*(ndb-nda) + as_ 828 else: 829 bs = (1,)*(nda-ndb) + bs 830 nd = nda 831 result = outer(a, b).reshape(as_+bs) 832 axis = nd-1 833 for _ in range(nd): 834 result = concatenate(result, axis=axis) 835 wrapper = get_array_prepare(a, b) 836 if wrapper is not None: 837 result = wrapper(result) 838 wrapper = get_array_wrap(a, b) 839 if wrapper is not None: 840 result = wrapper(result) 841 return result 842 843 844 def tile(A, reps): 845 """ 846 Construct an array by repeating A the number of times given by reps. 847 848 If `reps` has length ``d``, the result will have dimension of 849 ``max(d, A.ndim)``. 850 851 If ``A.ndim < d``, `A` is promoted to be d-dimensional by prepending new 852 axes. So a shape (3,) array is promoted to (1, 3) for 2-D replication, 853 or shape (1, 1, 3) for 3-D replication. If this is not the desired 854 behavior, promote `A` to d-dimensions manually before calling this 855 function. 856 857 If ``A.ndim > d``, `reps` is promoted to `A`.ndim by pre-pending 1's to it. 858 Thus for an `A` of shape (2, 3, 4, 5), a `reps` of (2, 2) is treated as 859 (1, 1, 2, 2). 860 861 Note : Although tile may be used for broadcasting, it is strongly 862 recommended to use numpy's broadcasting operations and functions. 863 864 Parameters 865 ---------- 866 A : array_like 867 The input array. 868 reps : array_like 869 The number of repetitions of `A` along each axis. 870 871 Returns 872 ------- 873 c : ndarray 874 The tiled output array. 875 876 See Also 877 -------- 878 repeat : Repeat elements of an array. 
879 broadcast_to : Broadcast an array to a new shape 880 881 Examples 882 -------- 883 >>> a = np.array([0, 1, 2]) 884 >>> np.tile(a, 2) 885 array([0, 1, 2, 0, 1, 2]) 886 >>> np.tile(a, (2, 2)) 887 array([[0, 1, 2, 0, 1, 2], 888 [0, 1, 2, 0, 1, 2]]) 889 >>> np.tile(a, (2, 1, 2)) 890 array([[[0, 1, 2, 0, 1, 2]], 891 [[0, 1, 2, 0, 1, 2]]]) 892 893 >>> b = np.array([[1, 2], [3, 4]]) 894 >>> np.tile(b, 2) 895 array([[1, 2, 1, 2], 896 [3, 4, 3, 4]]) 897 >>> np.tile(b, (2, 1)) 898 array([[1, 2], 899 [3, 4], 900 [1, 2], 901 [3, 4]]) 902 903 >>> c = np.array([1,2,3,4]) 904 >>> np.tile(c,(4,1)) 905 array([[1, 2, 3, 4], 906 [1, 2, 3, 4], 907 [1, 2, 3, 4], 908 [1, 2, 3, 4]]) 909 """ 910 try: 911 tup = tuple(reps) 912 except TypeError: 913 tup = (reps,) 914 d = len(tup) 915 if all(x == 1 for x in tup) and isinstance(A, _nx.ndarray): 916 # Fixes the problem that the function does not make a copy if A is a 917 # numpy array and the repetitions are 1 in all dimensions 918 return _nx.array(A, copy=True, subok=True, ndmin=d) 919 else: 920 # Note that no copy of zero-sized arrays is made. However since they 921 # have no data there is no risk of an inadvertent overwrite. 922 c = _nx.array(A, copy=False, subok=True, ndmin=d) 923 if (d < c.ndim): 924 tup = (1,)*(c.ndim-d) + tup 925 shape_out = tuple(s*t for s, t in zip(c.shape, tup)) 926 n = c.size 927 if n > 0: 928 for dim_in, nrep in zip(c.shape, tup): 929 if nrep != 1: 930 c = c.reshape(-1, n).repeat(nrep, 0) 931 n //= dim_in 932 return c.reshape(shape_out) 933 [end of numpy/lib/shape_base.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
numpy/numpy
306d79c029511bc90493a2aa686c2ddcc012a776
PR #10559 failed to fix einsum (optimize=True) broadcasting bug #10559 introduced the [following code](https://github.com/numpy/numpy/blob/a9e9343ecf5a6564eb09f1d7078c407db2ff7d9e/numpy/core/einsumfunc.py#L1107) to prevent dispatching `numpy.tensordot` in a case where einsum was broadcasting over a singleton dimension. ```python # Handle broadcasting vs BLAS cases if blas: # Checks have already been handled input_str, results_index = einsum_str.split('->') input_left, input_right = input_str.split(',') if 1 in tmp_operands[0] or 1 in tmp_operands[1]: left_dims = {dim: size for dim, size in zip(input_left, tmp_operands[0].shape)} right_dims = {dim: size for dim, size in zip(input_right, tmp_operands[1].shape)} # If dims do not match we are broadcasting, BLAS off if any(left_dims[ind] != right_dims[ind] for ind in idx_rm): blas = False ``` However, this checks to see if `1` occurs within the operand array itself rather than the shape of the operand. Incidentally, this likely produced a nasty performance regression. Thus the line `if 1 in tmp_operands[0] or 1 in tmp_operands[1]` should be `if 1 in tmp_operands[0].shape or 1 in tmp_operands[1].shape` This wasn't caught by the [unit test](https://github.com/numpy/numpy/blob/a9e9343ecf5a6564eb09f1d7078c407db2ff7d9e/numpy/core/tests/test_einsum.py#L486) because arrays of ones were used 🌌 This leads to the following behavior: ```python >>> x = np.array([0., 1., 0.]) # contains 1, no blas >>> y = np.array([0.0]) >>> np.einsum("i,i", x, y, optimize=True) 0. ``` ```python >>> x = np.array([0., -1., 0.]) # doesn't contain 1, yes blas >>> y = np.array([0.0]) >>> np.einsum("i,i", x, y, optimize=True) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-184-b0dcea8eedea> in <module>() 1 x = np.array([0., -1., 0.]) 2 y = np.array([0.0]) ----> 3 np.einsum("i,i", x, y, optimize=True) c:\anaconda\envs\py36\lib\site-packages\numpy\core\einsumfunc.py in einsum(*operands, **kwargs) 1132 1133 # Contract! -> 1134 new_view = tensordot(*tmp_operands, axes=(tuple(left_pos), tuple(right_pos))) 1135 1136 # Build a new view if needed c:\anaconda\envs\py36\lib\site-packages\numpy\core\numeric.py in tensordot(a, b, axes) 1281 axes_b[k] += ndb 1282 if not equal: -> 1283 raise ValueError("shape-mismatch for sum") 1284 1285 # Move the axes to sum over to the end of "a" ValueError: shape-mismatch for sum ```
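To make the reported typo concrete, here is a minimal illustration (not part of the original report) of the difference between the two membership tests: `1 in arr` checks whether the *value* 1 appears among the array's elements, while `1 in arr.shape` checks for a broadcastable singleton dimension, which is what the BLAS-dispatch guard actually needs.

```python
import numpy as np

x = np.array([0., 1., 0.])  # the data happens to contain a 1.0
y = np.array([0.0])         # shape (1,): a broadcastable singleton dimension

# Buggy guard: membership test over the array's *values*.
print(1 in x, 1 in y)              # True False

# Intended guard: look at the *shape* for a singleton dimension.
print(1 in x.shape, 1 in y.shape)  # False True
```

This is why the examples above behave differently depending on whether a literal 1 happens to appear in the data.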
Ouch, that is definitely a bug; can you patch this? In addition, I noticed that the fix doesn't allow GEMMs for non-broadcasting cases, which is a legitimate use. I patched this in `opt_einsum` itself in a very different [manner](https://github.com/dgasmith/opt_einsum/commit/4dbe46eede0fbc5d4e6d4ce11b732cef9cb1d100). I should likely port this over when I get a chance.

Excuse me, why does numpy allow `np.einsum("i,i", x, y, optimize=True)` when the dimensions of x and y are different? In the [documentation](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.einsum.html), `np.einsum('i,i', a, b) is equivalent to np.inner(a,b)`. But np.inner disallows np.inner(x, y) when the dimensions of x and y differ. I thought that the same labels meant the same label dimension. For example,
```
>>> numpy.einsum('ii', numpy.ones((3, 1)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ryo/workspace/chainer-env/lib/python3.5/site-packages/numpy/core/einsumfunc.py", line 1069, in einsum
    return c_einsum(*operands, **kwargs)
ValueError: dimensions in operand 0 for collapsing index 'i' don't match (3 != 1)
```
This error message seems to indicate that the number of dimensions should be the same.

Yes, for inner products the dimensions of both arrays should be the same. It is unclear what this operation should be for different-sized arrays. Both `einsum` and `inner` bounce inputs like this. The shown error message discusses the size of individual dimensions, not the number of dimensions? Sorry, I'm not entirely sure what the second example is getting at there.

> Both einsum and inner bounce inputs like this.

But np.inner raises an error, while einsum does not?
```
>>> np.einsum("i, i", np.ones(1), np.ones(3))
3.0
>>> np.inner(np.ones(1), np.ones(3))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: shapes (1,) and (3,) not aligned: 1 (dim 0) != 3 (dim 0)
```

IIUC, einsum broadcasts inputs; try ones(2) instead of ones(1).

@dgasmith So this still needs a fix?

The original issue still needs a fix. There are two ways to approach this: 1) fix the typo as shown in the script, or 2) use a more comprehensive solution that is more flexible in what is permissible via BLAS (implemented in `opt_einsum`). I would like to do 2), but it is quite apparent that the many uses of `einsum` are quite extreme and not always documented, so I am a bit hesitant to do 2) without extensive testing.

@dgasmith I'm sorry for dropping off the map on this. I haven't had time to circle back. It sounds like the best mode of operation is to do the typo fix, since that is pretty trivial and does resolve the issue, and then eventually bring in your additional logic to enable BLAS (and hopefully add to the documentation). I can try to get around to the typo fix this evening. Regarding unit tests, I originally caught this issue via my unit tests for my [autograd library](https://github.com/rsokl/MyGrad/blob/master/tests/linalg/test_einsum.py). These test both broadcasting and non-broadcasting operations. Currently, they exclude the optimization flag (because of the present issue), but it can be added easily. Please feel free to leverage these if they would be any help to you.

@rsokl Great, thanks for fixing the current issue. If you have a moment, can you add a test that fails under the current code? It would be great to continue to cover previous issues in the tests so that we do not regress.
It is probably worth fixing the current issue and rerunning your code with the optimize flag on to see if that's clean. If not, please do make a new issue with a code snippet for that as well!
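A hedged sketch of the kind of regression test discussed above: it uses values without literal 1s (so the old value-membership guard would not have accidentally disabled BLAS dispatch) and checks that the optimized path agrees with the unoptimized one. The actual test added to numpy may differ.

```python
import numpy as np

def test_optimized_einsum_broadcasts_like_unoptimized():
    x = np.array([0., -2., 0.])   # deliberately no literal 1s in the data
    y = np.array([3.0])           # singleton dimension to broadcast over
    expected = np.einsum("i,i", x, y, optimize=False)
    result = np.einsum("i,i", x, y, optimize=True)
    assert np.allclose(result, expected), (result, expected)

test_optimized_einsum_broadcasts_like_unoptimized()
```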
2018-06-01T04:07:54Z
<patch> diff --git a/numpy/core/einsumfunc.py b/numpy/core/einsumfunc.py --- a/numpy/core/einsumfunc.py +++ b/numpy/core/einsumfunc.py @@ -1109,7 +1109,7 @@ def einsum(*operands, **kwargs): # Checks have already been handled input_str, results_index = einsum_str.split('->') input_left, input_right = input_str.split(',') - if 1 in tmp_operands[0] or 1 in tmp_operands[1]: + if 1 in tmp_operands[0].shape or 1 in tmp_operands[1].shape: left_dims = {dim: size for dim, size in zip(input_left, tmp_operands[0].shape)} right_dims = {dim: size for dim, size in </patch>
[]
[]
pantsbuild__pants-12281
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Invalid option enum values lead to parse error that does not list valid choices. For example: ```console $ ./pants dependencies --type=x src/python/pants/util/:: 09:44:50.91 [ERROR] Error computing value for --type in scope 'dependencies' (may also be from PANTS_* environment variables). Caused by: ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 582, in to_value_type return type_arg(val_str) File "/usr/lib/python3.7/enum.py", line 315, in __call__ return cls.__new__(cls, value) File "/usr/lib/python3.7/enum.py", line 569, in __new__ raise exc File "/usr/lib/python3.7/enum.py", line 553, in __new__ result = cls._missing_(value) File "/usr/lib/python3.7/enum.py", line 582, in _missing_ raise ValueError("%r is not a valid %s" % (value, cls.__name__)) ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 284, in parse_args dest, kwargs, flag_vals, parse_args_request.passthrough_args File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in _compute_value flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in <listcomp> flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 617, in to_value_type return self.to_value_type(val_str, type_arg, member_type, dest) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 585, in to_value_type f"Error applying type '{type_arg.__name__}' to option value '{val_str}', " pants.option.errors.ParseError: Error applying type 'DependencyType' to option value 'x', for option '--type' in scope 'dependencies': 'x' is not a valid DependencyType ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 582, in to_value_type return type_arg(val_str) File "/usr/lib/python3.7/enum.py", line 315, in __call__ return cls.__new__(cls, value) File "/usr/lib/python3.7/enum.py", line 569, in __new__ raise exc File "/usr/lib/python3.7/enum.py", line 553, in __new__ result = cls._missing_(value) File "/usr/lib/python3.7/enum.py", line 582, in _missing_ raise ValueError("%r is not a valid %s" % (value, cls.__name__)) ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 284, in parse_args dest, kwargs, flag_vals, parse_args_request.passthrough_args File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in _compute_value flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in <listcomp> flag_vals = [to_value_type(expand(x)) for x in 
flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 617, in to_value_type return self.to_value_type(val_str, type_arg, member_type, dest) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 585, in to_value_type f"Error applying type '{type_arg.__name__}' to option value '{val_str}', " pants.option.errors.ParseError: Error applying type 'DependencyType' to option value 'x', for option '--type' in scope 'dependencies': 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/bin/daemon_pants_runner.py", line 130, in single_daemonized_run cancellation_latch=cancellation_latch, File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/bin/local_pants_runner.py", line 143, in create options.for_scope(scope) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/util/memo.py", line 123, in memoize result = func(*args, **kwargs) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/options.py", line 422, in for_scope values = self._parser_hierarchy.get_parser_by_scope(scope).parse_args(parse_args_request) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 294, in parse_args f"Error computing value for {args_str} in {self._scope_str()} (may also be " pants.option.errors.ParseError: Error computing value for --type in scope 'dependencies' (may also be from PANTS_* environment variables). Caused by: ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 582, in to_value_type return type_arg(val_str) File "/usr/lib/python3.7/enum.py", line 315, in __call__ return cls.__new__(cls, value) File "/usr/lib/python3.7/enum.py", line 569, in __new__ raise exc File "/usr/lib/python3.7/enum.py", line 553, in __new__ result = cls._missing_(value) File "/usr/lib/python3.7/enum.py", line 582, in _missing_ raise ValueError("%r is not a valid %s" % (value, cls.__name__)) ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 284, in parse_args dest, kwargs, flag_vals, parse_args_request.passthrough_args File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in _compute_value flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in <listcomp> flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 617, in to_value_type return self.to_value_type(val_str, type_arg, member_type, dest) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 585, in to_value_type f"Error applying type '{type_arg.__name__}' to option value '{val_str}', " pants.option.errors.ParseError: Error applying type 'DependencyType' to option value 'x', for option '--type' in scope 'dependencies': 'x' is not a valid DependencyType (Use --no-process-execution-local-cleanup to inspect chroots and/or -ldebug for 
more logs. See https://www.pantsbuild.org/v2.6/docs/troubleshooting for common issues. Consider reaching out for help: https://www.pantsbuild.org/v2.6/docs/getting-help.) ``` This is as opposed to, for example, straight up argparse which gives something like this: ```console $ pex --venv bob usage: pex [-o OUTPUT.PEX] [options] [-- arg1 arg2 ...] pex builds a PEX (Python Executable) file based on the given specifications: sources, requirements, their dependencies and other options. Command-line options can be provided in one or more files by prefixing the filenames with an @ symbol. These files must contain one argument per line. pex: error: argument --venv: invalid choice: 'bob' (choose from 'prepend', 'append') ``` </issue> <code> [start of README.md] 1 # Pants Build System 2 3 Pants is a scalable build system for _monorepos_: codebases containing 4 multiple projects, often using multiple programming languages and frameworks, 5 in a single unified code repository. 6 7 Some noteworthy features include: 8 9 * Explicit dependency modeling. 10 * Fine-grained invalidation. 11 * Shared result caching. 12 * Concurrent execution. 13 * Remote execution. 14 * Unified interface for multiple tools and languages. 15 * Extensibility and customizability via a plugin API. 16 17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/). 18 19 We release to [PyPI](https://pypi.org/pypi) 20 [![version](https://img.shields.io/pypi/v/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 21 [![license](https://img.shields.io/pypi/l/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 22 23 # Requirements 24 25 To run Pants, you need: 26 27 * Linux or macOS. 28 * Python 3.7+ discoverable on your `PATH`. 29 * A C compiler, system headers and Python headers (to compile native Python modules). 30 * Internet access (so that Pants can fully bootstrap itself). 31 [end of README.md] [start of pants-plugins/internal_plugins/releases/register.py] 1 # Copyright 2020 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 from packaging.version import Version 5 6 from pants.backend.python.goals.setup_py import SetupKwargs, SetupKwargsRequest 7 from pants.engine.fs import DigestContents, GlobMatchErrorBehavior, PathGlobs 8 from pants.engine.rules import Get, collect_rules, rule 9 from pants.engine.target import Target 10 from pants.engine.unions import UnionRule 11 from pants.option.subsystem import Subsystem 12 from pants.util.frozendict import FrozenDict 13 from pants.version import PANTS_SEMVER, VERSION 14 15 16 class PantsReleases(Subsystem): 17 options_scope = "pants-releases" 18 help = "Options for Pants's release process." 19 20 @classmethod 21 def register_options(cls, register): 22 super().register_options(register) 23 register( 24 "--release-notes", 25 type=dict, 26 help="A dict from branch name to release notes rst-file location.", 27 ) 28 29 @property 30 def _release_notes(self) -> FrozenDict[str, str]: 31 return FrozenDict(self.options.release_notes) 32 33 @classmethod 34 def _branch_name(cls, version: Version) -> str: 35 """Defines a mapping between versions and branches. 36 37 All releases, including dev releases, map to a particular branch page. 
38 """ 39 suffix = version.public[len(version.base_version) :] 40 components = version.base_version.split(".") + [suffix] 41 if suffix != "" and not ( 42 suffix.startswith("rc") 43 or suffix.startswith("a") 44 or suffix.startswith("b") 45 or suffix.startswith(".dev") 46 ): 47 raise ValueError(f"Unparseable pants version number: {version}") 48 return "{}.{}.x".format(*components[:2]) 49 50 def notes_file_for_version(self, version: Version) -> str: 51 """Given the parsed Version of Pants, return its release notes file path.""" 52 branch_name = self._branch_name(version) 53 notes_file = self._release_notes.get(branch_name) 54 if notes_file is None: 55 raise ValueError( 56 f"Version {version} lives in branch {branch_name}, which is not configured in " 57 f"{self._release_notes}." 58 ) 59 return notes_file 60 61 62 class PantsSetupKwargsRequest(SetupKwargsRequest): 63 @classmethod 64 def is_applicable(cls, _: Target) -> bool: 65 # We always use our custom `setup()` kwargs generator for `python_distribution` targets in 66 # this repo. 67 return True 68 69 70 @rule 71 async def pants_setup_kwargs( 72 request: PantsSetupKwargsRequest, pants_releases: PantsReleases 73 ) -> SetupKwargs: 74 kwargs = request.explicit_kwargs.copy() 75 76 # Validate that required fields are set. 77 if not kwargs["name"].startswith("pantsbuild.pants"): 78 raise ValueError( 79 f"Invalid `name` kwarg in the `provides` field for {request.target.address}. The name " 80 f"must start with 'pantsbuild.pants', but was {kwargs['name']}." 81 ) 82 if "description" not in kwargs: 83 raise ValueError( 84 f"Missing a `description` kwarg in the `provides` field for {request.target.address}." 85 ) 86 87 # Add classifiers. We preserve any that were already set. 88 standard_classifiers = [ 89 "Intended Audience :: Developers", 90 "License :: OSI Approved :: Apache Software License", 91 "Operating System :: MacOS :: MacOS X", 92 "Operating System :: POSIX :: Linux", 93 "Programming Language :: Python", 94 "Topic :: Software Development :: Build Tools", 95 ] 96 kwargs["classifiers"] = [*standard_classifiers, *kwargs.get("classifiers", [])] 97 98 # Determine the long description by reading from ABOUT.rst and the release notes. 99 notes_file = pants_releases.notes_file_for_version(PANTS_SEMVER) 100 digest_contents = await Get( 101 DigestContents, 102 PathGlobs( 103 ["src/python/pants/ABOUT.rst", notes_file], 104 description_of_origin="Pants release files", 105 glob_match_error_behavior=GlobMatchErrorBehavior.error, 106 ), 107 ) 108 long_description = "\n".join(file_content.content.decode() for file_content in digest_contents) 109 110 # Hardcode certain kwargs and validate that they weren't already set. 
111 hardcoded_kwargs = dict( 112 version=VERSION, 113 long_description=long_description, 114 long_description_content_type="text/x-rst", 115 url="https://github.com/pantsbuild/pants", 116 project_urls={ 117 "Documentation": "https://www.pantsbuild.org/", 118 "Source": "https://github.com/pantsbuild/pants", 119 "Tracker": "https://github.com/pantsbuild/pants/issues", 120 }, 121 license="Apache License, Version 2.0", 122 zip_safe=True, 123 ) 124 conflicting_hardcoded_kwargs = set(kwargs.keys()).intersection(hardcoded_kwargs.keys()) 125 if conflicting_hardcoded_kwargs: 126 raise ValueError( 127 f"These kwargs should not be set in the `provides` field for {request.target.address} " 128 "because Pants's internal plugin will automatically set them: " 129 f"{sorted(conflicting_hardcoded_kwargs)}" 130 ) 131 kwargs.update(hardcoded_kwargs) 132 133 return SetupKwargs(kwargs, address=request.target.address) 134 135 136 def rules(): 137 return (*collect_rules(), UnionRule(SetupKwargsRequest, PantsSetupKwargsRequest)) 138 [end of pants-plugins/internal_plugins/releases/register.py] [start of src/python/pants/option/parser.py] 1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 from __future__ import annotations 5 6 import copy 7 import inspect 8 import json 9 import os 10 import re 11 import traceback 12 import typing 13 from collections import defaultdict 14 from dataclasses import dataclass 15 from enum import Enum 16 from pathlib import Path 17 from typing import Any, Callable, DefaultDict, Dict, Iterable, List, Mapping, Set, Tuple, Type 18 19 import yaml 20 21 from pants.base.build_environment import get_buildroot 22 from pants.base.deprecated import validate_deprecation_semver, warn_or_error 23 from pants.option.config import Config 24 from pants.option.custom_types import ( 25 DictValueComponent, 26 ListValueComponent, 27 UnsetBool, 28 dir_option, 29 file_option, 30 shell_str, 31 target_option, 32 ) 33 from pants.option.errors import ( 34 BooleanConversionError, 35 BooleanOptionNameWithNo, 36 FromfileError, 37 ImplicitValIsNone, 38 InvalidKwarg, 39 InvalidKwargNonGlobalScope, 40 InvalidMemberType, 41 MemberTypeNotAllowed, 42 MutuallyExclusiveOptionError, 43 NoOptionNames, 44 OptionAlreadyRegistered, 45 OptionNameDash, 46 OptionNameDoubleDash, 47 ParseError, 48 PassthroughType, 49 RecursiveSubsystemOption, 50 RegistrationError, 51 Shadowing, 52 UnknownFlagsError, 53 ) 54 from pants.option.option_util import flatten_shlexed_list, is_dict_option, is_list_option 55 from pants.option.option_value_container import OptionValueContainer, OptionValueContainerBuilder 56 from pants.option.ranked_value import Rank, RankedValue 57 from pants.option.scope import GLOBAL_SCOPE, GLOBAL_SCOPE_CONFIG_SECTION, ScopeInfo 58 from pants.util.meta import frozen_after_init 59 60 61 @dataclass(frozen=True) 62 class OptionValueHistory: 63 ranked_values: Tuple[RankedValue] 64 65 @property 66 def final_value(self) -> RankedValue: 67 return self.ranked_values[-1] 68 69 70 class Parser: 71 """An argument parser in a hierarchy. 72 73 Each node in the hierarchy is a 'scope': the root is the global scope, and the parent of 74 a node is the scope it's immediately contained in. E.g., the 'compile.java' scope is 75 a child of the 'compile' scope, which is a child of the global scope. 76 77 Options registered on a parser are also registered transitively on all the scopes it encloses. 
78 We forbid registering options that shadow other options, and registration walks up and down the 79 hierarchy to enforce that. 80 """ 81 82 @staticmethod 83 def is_bool(kwargs: Mapping[str, Any]) -> bool: 84 type_arg = kwargs.get("type") 85 if type_arg is None: 86 return False 87 if type_arg is bool: 88 return True 89 try: 90 return typing.get_type_hints(type_arg).get("return") is bool 91 except TypeError: 92 return False 93 94 @staticmethod 95 def ensure_bool(val: bool | str) -> bool: 96 if isinstance(val, bool): 97 return val 98 if isinstance(val, str): 99 s = val.lower() 100 if s == "true": 101 return True 102 if s == "false": 103 return False 104 raise BooleanConversionError(f'Got "{val}". Expected "True" or "False".') 105 raise BooleanConversionError(f"Got {val}. Expected True or False.") 106 107 @classmethod 108 def _invert(cls, s: bool | str | None) -> bool | None: 109 if s is None: 110 return None 111 b = cls.ensure_bool(s) 112 return not b 113 114 @classmethod 115 def scope_str(cls, scope: str) -> str: 116 return "global scope" if scope == GLOBAL_SCOPE else f"scope '{scope}'" 117 118 @classmethod 119 def _check_shadowing(cls, parent_scope, parent_known_args, child_scope, child_known_args): 120 for arg in parent_known_args & child_known_args: 121 raise Shadowing(child_scope, arg, outer_scope=cls.scope_str(parent_scope)) 122 123 def __init__( 124 self, 125 env: Mapping[str, str], 126 config: Config, 127 scope_info: ScopeInfo, 128 parent_parser: Parser | None, 129 ) -> None: 130 """Create a Parser instance. 131 132 :param env: a dict of environment variables. 133 :param config: data from a config file. 134 :param scope_info: the scope this parser acts for. 135 :param parent_parser: the parser for the scope immediately enclosing this one, or 136 None if this is the global scope. 137 """ 138 self._env = env 139 self._config = config 140 self._scope_info = scope_info 141 self._scope = self._scope_info.scope 142 143 # All option args registered with this parser. Used to prevent shadowing args in inner scopes. 144 self._known_args: Set[str] = set() 145 146 # List of (args, kwargs) registration pairs, exactly as captured at registration time. 147 self._option_registrations: List[Tuple[Tuple[str, ...], Dict[str, Any]]] = [] 148 149 # Map of dest -> history. 
150 self._history: Dict[str, OptionValueHistory] = {} 151 152 self._parent_parser = parent_parser 153 self._child_parsers: List["Parser"] = [] 154 155 if self._parent_parser: 156 self._parent_parser._register_child_parser(self) 157 158 @property 159 def scope_info(self) -> ScopeInfo: 160 return self._scope_info 161 162 @property 163 def scope(self) -> str: 164 return self._scope 165 166 @property 167 def known_args(self) -> Set[str]: 168 return self._known_args 169 170 def history(self, dest: str) -> OptionValueHistory | None: 171 return self._history.get(dest) 172 173 def walk(self, callback: Callable) -> None: 174 """Invoke callback on this parser and its descendants, in depth-first order.""" 175 callback(self) 176 for child in self._child_parsers: 177 child.walk(callback) 178 179 @frozen_after_init 180 @dataclass(unsafe_hash=True) 181 class ParseArgsRequest: 182 flag_value_map: Dict[str, List[Any]] 183 namespace: OptionValueContainerBuilder 184 passthrough_args: List[str] 185 allow_unknown_flags: bool 186 187 def __init__( 188 self, 189 flags_in_scope: Iterable[str], 190 namespace: OptionValueContainerBuilder, 191 passthrough_args: List[str], 192 allow_unknown_flags: bool, 193 ) -> None: 194 """ 195 :param flags_in_scope: Iterable of arg strings to parse into flag values. 196 :param namespace: The object to register the flag values on 197 """ 198 self.flag_value_map = self._create_flag_value_map(flags_in_scope) 199 self.namespace = namespace 200 self.passthrough_args = passthrough_args 201 self.allow_unknown_flags = allow_unknown_flags 202 203 @staticmethod 204 def _create_flag_value_map(flags: Iterable[str]) -> DefaultDict[str, list[str | None]]: 205 """Returns a map of flag -> list of values, based on the given flag strings. 206 207 None signals no value given (e.g., -x, --foo). The value is a list because the user may 208 specify the same flag multiple times, and that's sometimes OK (e.g., when appending to 209 list- valued options). 210 """ 211 flag_value_map: DefaultDict[str, list[str | None]] = defaultdict(list) 212 for flag in flags: 213 flag_val: str | None 214 key, has_equals_sign, flag_val = flag.partition("=") 215 if not has_equals_sign: 216 if not flag.startswith("--"): # '-xfoo' style. 217 key = flag[0:2] 218 flag_val = flag[2:] 219 if not flag_val: 220 # Either a short option with no value or a long option with no equals sign. 221 # Important so we can distinguish between no value ('--foo') and setting to an empty 222 # string ('--foo='), for options with an implicit_value. 223 flag_val = None 224 flag_value_map[key].append(flag_val) 225 return flag_value_map 226 227 def parse_args(self, parse_args_request: ParseArgsRequest) -> OptionValueContainer: 228 """Set values for this parser's options on the namespace object. 229 230 :raises: :class:`ParseError` if any flags weren't recognized. 231 """ 232 233 flag_value_map = parse_args_request.flag_value_map 234 namespace = parse_args_request.namespace 235 236 mutex_map: DefaultDict[str, List[str]] = defaultdict(list) 237 for args, kwargs in self._unnormalized_option_registrations_iter(): 238 self._validate(args, kwargs) 239 dest = self.parse_dest(*args, **kwargs) 240 241 # Compute the values provided on the command line for this option. Note that there may be 242 # multiple values, for any combination of the following reasons: 243 # - The user used the same flag multiple times. 244 # - The user specified a boolean flag (--foo) and its inverse (--no-foo). 
245 # - The option has multiple names, and the user used more than one of them. 246 # 247 # We also check if the option is deprecated, but we only do so if the option is explicitly 248 # specified as a command-line flag, so we don't spam users with deprecated option values 249 # specified in config, which isn't something they control. 250 implicit_value = kwargs.get("implicit_value") 251 if implicit_value is None and self.is_bool(kwargs): 252 implicit_value = True # Allows --foo to mean --foo=true. 253 254 flag_vals: list[int | float | bool | str] = [] 255 256 def add_flag_val(v: int | float | bool | str | None) -> None: 257 if v is None: 258 if implicit_value is None: 259 raise ParseError( 260 f"Missing value for command line flag {arg} in {self._scope_str()}" 261 ) 262 flag_vals.append(implicit_value) 263 else: 264 flag_vals.append(v) 265 266 for arg in args: 267 # If the user specified --no-foo on the cmd line, treat it as if the user specified 268 # --foo, but with the inverse value. 269 if self.is_bool(kwargs): 270 inverse_arg = self._inverse_arg(arg) 271 if inverse_arg in flag_value_map: 272 flag_value_map[arg] = [self._invert(v) for v in flag_value_map[inverse_arg]] 273 implicit_value = self._invert(implicit_value) 274 del flag_value_map[inverse_arg] 275 276 if arg in flag_value_map: 277 for v in flag_value_map[arg]: 278 add_flag_val(v) 279 del flag_value_map[arg] 280 281 # Get the value for this option, falling back to defaults as needed. 282 try: 283 value_history = self._compute_value( 284 dest, kwargs, flag_vals, parse_args_request.passthrough_args 285 ) 286 self._history[dest] = value_history 287 val = value_history.final_value 288 except ParseError as e: 289 # Reraise a new exception with context on the option being processed at the time of error. 290 # Note that other exception types can be raised here that are caught by ParseError (e.g. 291 # BooleanConversionError), hence we reference the original exception type as type(e). 292 args_str = ", ".join(args) 293 raise type(e)( 294 f"Error computing value for {args_str} in {self._scope_str()} (may also be " 295 f"from PANTS_* environment variables).\nCaused by:\n{traceback.format_exc()}" 296 ) 297 298 # If the option is explicitly given, check deprecation and mutual exclusion. 299 if val.rank > Rank.HARDCODED: 300 self._check_deprecated(dest, kwargs) 301 mutex_dest = kwargs.get("mutually_exclusive_group") 302 mutex_map_key = mutex_dest or dest 303 mutex_map[mutex_map_key].append(dest) 304 if len(mutex_map[mutex_map_key]) > 1: 305 raise MutuallyExclusiveOptionError( 306 "Can only provide one of these mutually exclusive options in " 307 f"{self._scope_str()}, but multiple given: " 308 f"{', '.join(mutex_map[mutex_map_key])}" 309 ) 310 311 setattr(namespace, dest, val) 312 313 if not parse_args_request.allow_unknown_flags and flag_value_map: 314 # There were unconsumed flags. 315 raise UnknownFlagsError(tuple(flag_value_map.keys()), self.scope) 316 return namespace.build() 317 318 def option_registrations_iter(self): 319 """Returns an iterator over the normalized registration arguments of each option in this 320 parser. 321 322 Useful for generating help and other documentation. 323 324 Each yielded item is an (args, kwargs) pair, as passed to register(), except that kwargs 325 will be normalized in the following ways: 326 - It will always have 'dest' explicitly set. 327 - It will always have 'default' explicitly set, and the value will be a RankedValue. 
328 - For recursive options, the original registrar will also have 'recursive_root' set. 329 330 Note that recursive options we inherit from a parent will also be yielded here, with 331 the correctly-scoped default value. 332 """ 333 334 def normalize_kwargs(orig_args, orig_kwargs): 335 nkwargs = copy.copy(orig_kwargs) 336 dest = self.parse_dest(*orig_args, **nkwargs) 337 nkwargs["dest"] = dest 338 if not ("default" in nkwargs and isinstance(nkwargs["default"], RankedValue)): 339 type_arg = nkwargs.get("type", str) 340 member_type = nkwargs.get("member_type", str) 341 default_val = self.to_value_type( 342 nkwargs.get("default"), type_arg, member_type, dest 343 ) 344 if isinstance(default_val, (ListValueComponent, DictValueComponent)): 345 default_val = default_val.val 346 nkwargs["default"] = RankedValue(Rank.HARDCODED, default_val) 347 return nkwargs 348 349 # First yield any recursive options we inherit from our parent. 350 if self._parent_parser: 351 for args, kwargs in self._parent_parser._recursive_option_registration_args(): 352 yield args, normalize_kwargs(args, kwargs) 353 354 # Then yield our directly-registered options. 355 # This must come after yielding inherited recursive options, so we can detect shadowing. 356 for args, kwargs in self._option_registrations: 357 normalized_kwargs = normalize_kwargs(args, kwargs) 358 if "recursive" in normalized_kwargs: 359 # If we're the original registrar, make sure we can distinguish that. 360 normalized_kwargs["recursive_root"] = True 361 yield args, normalized_kwargs 362 363 def _unnormalized_option_registrations_iter(self): 364 """Returns an iterator over the raw registration arguments of each option in this parser. 365 366 Each yielded item is an (args, kwargs) pair, exactly as passed to register(), except for 367 substituting list and dict types with list_option/dict_option. 368 369 Note that recursive options we inherit from a parent will also be yielded here. 370 """ 371 # First yield any recursive options we inherit from our parent. 372 if self._parent_parser: 373 for args, kwargs in self._parent_parser._recursive_option_registration_args(): 374 yield args, kwargs 375 # Then yield our directly-registered options. 376 for args, kwargs in self._option_registrations: 377 if "recursive" in kwargs and self._scope_info.scope != GLOBAL_SCOPE: 378 raise RecursiveSubsystemOption(self.scope, args[0]) 379 yield args, kwargs 380 381 def _recursive_option_registration_args(self): 382 """Yield args, kwargs pairs for just our recursive options. 383 384 Includes all the options we inherit recursively from our ancestors. 385 """ 386 if self._parent_parser: 387 for args, kwargs in self._parent_parser._recursive_option_registration_args(): 388 yield args, kwargs 389 for args, kwargs in self._option_registrations: 390 # Note that all subsystem options are implicitly recursive: a subscope of a subsystem 391 # scope is another (optionable-specific) instance of the same subsystem, so it needs 392 # all the same options. 
393 if self._scope_info.scope != GLOBAL_SCOPE or "recursive" in kwargs: 394 yield args, kwargs 395 396 def register(self, *args, **kwargs) -> None: 397 """Register an option.""" 398 if args: 399 dest = self.parse_dest(*args, **kwargs) 400 self._check_deprecated(dest, kwargs, print_warning=False) 401 402 if self.is_bool(kwargs): 403 default = kwargs.get("default") 404 if default is None: 405 # Unless a tri-state bool is explicitly opted into with the `UnsetBool` default value, 406 # boolean options always have an implicit boolean-typed default. We make that default 407 # explicit here. 408 kwargs["default"] = not self.ensure_bool(kwargs.get("implicit_value", True)) 409 elif default is UnsetBool: 410 kwargs["default"] = None 411 412 # Record the args. We'll do the underlying parsing on-demand. 413 self._option_registrations.append((args, kwargs)) 414 415 # Look for shadowing options up and down the hierarchy. 416 args_set = set(args) 417 for parent in self._parents_transitive(): 418 self._check_shadowing(parent.scope, parent._known_args, self.scope, args_set) 419 for child in self._children_transitive(): 420 self._check_shadowing(self.scope, args_set, child.scope, child._known_args) 421 422 # And look for direct conflicts 423 for arg in args: 424 if arg in self._known_args: 425 raise OptionAlreadyRegistered(self.scope, arg) 426 self._known_args.update(args) 427 428 def _check_deprecated(self, dest: str, kwargs, print_warning: bool = True) -> None: 429 """Checks option for deprecation and issues a warning/error if necessary.""" 430 removal_version = kwargs.get("removal_version", None) 431 if removal_version is not None: 432 warn_or_error( 433 removal_version=removal_version, 434 entity=f"option '{dest}' in {self._scope_str()}", 435 start_version=kwargs.get("deprecation_start_version", None), 436 hint=kwargs.get("removal_hint", None), 437 print_warning=print_warning, 438 ) 439 440 _allowed_registration_kwargs = { 441 "type", 442 "member_type", 443 "choices", 444 "dest", 445 "default", 446 "default_help_repr", 447 "implicit_value", 448 "metavar", 449 "help", 450 "advanced", 451 "recursive", 452 "recursive_root", 453 "fingerprint", 454 "removal_version", 455 "removal_hint", 456 "deprecation_start_version", 457 "fromfile", 458 "mutually_exclusive_group", 459 "daemon", 460 "passthrough", 461 } 462 463 _allowed_member_types = { 464 str, 465 int, 466 float, 467 dict, 468 dir_option, 469 file_option, 470 target_option, 471 shell_str, 472 } 473 474 def _validate(self, args, kwargs) -> None: 475 """Validate option registration arguments.""" 476 477 def error( 478 exception_type: Type[RegistrationError], 479 arg_name: str | None = None, 480 **msg_kwargs, 481 ) -> None: 482 if arg_name is None: 483 arg_name = args[0] if args else "<unknown>" 484 raise exception_type(self.scope, arg_name, **msg_kwargs) 485 486 if not args: 487 error(NoOptionNames) 488 # validate args. 489 for arg in args: 490 if not arg.startswith("-"): 491 error(OptionNameDash, arg_name=arg) 492 if not arg.startswith("--") and len(arg) > 2: 493 error(OptionNameDoubleDash, arg_name=arg) 494 495 # Validate kwargs. 
496 if "implicit_value" in kwargs and kwargs["implicit_value"] is None: 497 error(ImplicitValIsNone) 498 type_arg = kwargs.get("type", str) 499 if "member_type" in kwargs and type_arg != list: 500 error(MemberTypeNotAllowed, type_=type_arg.__name__) 501 member_type = kwargs.get("member_type", str) 502 is_enum = inspect.isclass(member_type) and issubclass(member_type, Enum) 503 if not is_enum and member_type not in self._allowed_member_types: 504 error(InvalidMemberType, member_type=member_type.__name__) 505 506 if ( 507 "passthrough" in kwargs 508 and kwargs["passthrough"] 509 and (type_arg != list or member_type not in (shell_str, str)) 510 ): 511 error(PassthroughType) 512 513 for kwarg in kwargs: 514 if kwarg not in self._allowed_registration_kwargs: 515 error(InvalidKwarg, kwarg=kwarg) 516 517 # Ensure `daemon=True` can't be passed on non-global scopes (except for `recursive=True`). 518 if ( 519 kwarg == "daemon" 520 and self._scope != GLOBAL_SCOPE 521 and kwargs.get("recursive") is False 522 ): 523 error(InvalidKwargNonGlobalScope, kwarg=kwarg) 524 525 removal_version = kwargs.get("removal_version") 526 if removal_version is not None: 527 validate_deprecation_semver(removal_version, "removal version") 528 529 def _parents_transitive(self): 530 ancestor = self._parent_parser 531 while ancestor: 532 yield ancestor 533 ancestor = ancestor._parent_parser 534 535 def _children_transitive(self): 536 for child in self._child_parsers: 537 yield child 538 yield from child._children_transitive() 539 540 _ENV_SANITIZER_RE = re.compile(r"[.-]") 541 542 @staticmethod 543 def parse_dest(*args, **kwargs): 544 """Return the dest for an option registration. 545 546 If an explicit `dest` is specified, returns that and otherwise derives a default from the 547 option flags where '--foo-bar' -> 'foo_bar' and '-x' -> 'x'. 548 549 The dest is used for: 550 - The name of the field containing the option value. 551 - The key in the config file. 552 - Computing the name of the env var used to set the option name. 553 """ 554 dest = kwargs.get("dest") 555 if dest: 556 return dest 557 # No explicit dest, so compute one based on the first long arg, or the short arg 558 # if that's all there is. 559 arg = next((a for a in args if a.startswith("--")), args[0]) 560 return arg.lstrip("-").replace("-", "_") 561 562 @staticmethod 563 def _convert_member_type(member_type, value): 564 if member_type == dict: 565 return DictValueComponent.create(value).val 566 try: 567 return member_type(value) 568 except ValueError as error: 569 raise ParseError(str(error)) 570 571 def to_value_type(self, val_str, type_arg, member_type, dest): 572 """Convert a string to a value of the option's type.""" 573 if val_str is None: 574 return None 575 if type_arg == bool: 576 return self.ensure_bool(val_str) 577 try: 578 if type_arg == list: 579 return ListValueComponent.create(val_str, member_type=member_type) 580 if type_arg == dict: 581 return DictValueComponent.create(val_str) 582 return type_arg(val_str) 583 except (TypeError, ValueError) as e: 584 raise ParseError( 585 f"Error applying type '{type_arg.__name__}' to option value '{val_str}', " 586 f"for option '--{dest}' in {self._scope_str()}: {e}" 587 ) 588 589 @classmethod 590 def get_env_var_names(cls, scope: str, dest: str): 591 # Get value from environment, and capture details about its derivation. 592 udest = dest.upper() 593 if scope == GLOBAL_SCOPE: 594 # For convenience, we allow three forms of env var for global scope options. 
595 # The fully-specified env var is PANTS_GLOBAL_FOO, which is uniform with PANTS_<SCOPE>_FOO 596 # for all the other scopes. However we also allow simply PANTS_FOO. And if the option name 597 # itself starts with 'pants-' then we also allow simply FOO. E.g., PANTS_WORKDIR instead of 598 # PANTS_PANTS_WORKDIR or PANTS_GLOBAL_PANTS_WORKDIR. We take the first specified value we 599 # find, in this order: PANTS_GLOBAL_FOO, PANTS_FOO, FOO. 600 env_vars = [f"PANTS_GLOBAL_{udest}", f"PANTS_{udest}"] 601 if udest.startswith("PANTS_"): 602 env_vars.append(udest) 603 else: 604 sanitized_env_var_scope = cls._ENV_SANITIZER_RE.sub("_", scope.upper()) 605 env_vars = [f"PANTS_{sanitized_env_var_scope}_{udest}"] 606 return env_vars 607 608 def _compute_value(self, dest, kwargs, flag_val_strs, passthru_arg_strs): 609 """Compute the value to use for an option. 610 611 The source of the value is chosen according to the ranking in Rank. 612 """ 613 type_arg = kwargs.get("type", str) 614 member_type = kwargs.get("member_type", str) 615 616 def to_value_type(val_str): 617 return self.to_value_type(val_str, type_arg, member_type, dest) 618 619 # Helper function to expand a fromfile=True value string, if needed. 620 # May return a string or a dict/list decoded from a json/yaml file. 621 def expand(val_or_str): 622 if ( 623 kwargs.get("fromfile", True) 624 and isinstance(val_or_str, str) 625 and val_or_str.startswith("@") 626 ): 627 if val_or_str.startswith("@@"): # Support a literal @ for fromfile values via @@. 628 return val_or_str[1:] 629 else: 630 fromfile = val_or_str[1:] 631 try: 632 with open(fromfile, "r") as fp: 633 s = fp.read().strip() 634 if fromfile.endswith(".json"): 635 return json.loads(s) 636 elif fromfile.endswith(".yml") or fromfile.endswith(".yaml"): 637 return yaml.safe_load(s) 638 else: 639 return s 640 except (IOError, ValueError, yaml.YAMLError) as e: 641 raise FromfileError( 642 f"Failed to read {dest} in {self._scope_str()} from file {fromfile}: {e!r}" 643 ) 644 else: 645 return val_or_str 646 647 # Get value from config files, and capture details about its derivation. 648 config_details = None 649 config_section = GLOBAL_SCOPE_CONFIG_SECTION if self._scope == GLOBAL_SCOPE else self._scope 650 config_default_val_or_str = expand( 651 self._config.get(Config.DEFAULT_SECTION, dest, default=None) 652 ) 653 config_val_or_str = expand(self._config.get(config_section, dest, default=None)) 654 config_source_file = self._config.get_source_for_option( 655 config_section, dest 656 ) or self._config.get_source_for_option(Config.DEFAULT_SECTION, dest) 657 if config_source_file is not None: 658 config_source_file = os.path.relpath(config_source_file) 659 config_details = f"from {config_source_file}" 660 661 # Get value from environment, and capture details about its derivation. 662 env_vars = self.get_env_var_names(self._scope, dest) 663 env_val_or_str = None 664 env_details = None 665 if self._env: 666 for env_var in env_vars: 667 if env_var in self._env: 668 env_val_or_str = expand(self._env.get(env_var)) 669 env_details = f"from env var {env_var}" 670 break 671 672 # Get value from cmd-line flags. 673 flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] 674 if kwargs.get("passthrough"): 675 # NB: Passthrough arguments are either of type `str` or `shell_str` 676 # (see self._validate): the former never need interpretation, and the latter do not 677 # need interpretation when they have been provided directly via `sys.argv` as the 678 # passthrough args have been. 
679 flag_vals.append( 680 ListValueComponent(ListValueComponent.MODIFY, [*passthru_arg_strs], []) 681 ) 682 683 if is_list_option(kwargs): 684 # Note: It's important to set flag_val to None if no flags were specified, so we can 685 # distinguish between no flags set vs. explicit setting of the value to []. 686 flag_val = ListValueComponent.merge(flag_vals) if flag_vals else None 687 elif is_dict_option(kwargs): 688 # Note: It's important to set flag_val to None if no flags were specified, so we can 689 # distinguish between no flags set vs. explicit setting of the value to {}. 690 flag_val = DictValueComponent.merge(flag_vals) if flag_vals else None 691 elif len(flag_vals) > 1: 692 raise ParseError( 693 f"Multiple cmd line flags specified for option {dest} in {self._scope_str()}" 694 ) 695 elif len(flag_vals) == 1: 696 flag_val = flag_vals[0] 697 else: 698 flag_val = None 699 flag_details = None if flag_val is None else "from command-line flag" 700 701 # Rank all available values. 702 # Note that some of these values may already be of the value type, but type conversion 703 # is idempotent, so this is OK. 704 705 values_to_rank = [ 706 (to_value_type(x), detail) 707 for (x, detail) in [ 708 (flag_val, flag_details), 709 (env_val_or_str, env_details), 710 (config_val_or_str, config_details), 711 (config_default_val_or_str, config_details), 712 (kwargs.get("default"), None), 713 (None, None), 714 ] 715 ] 716 # Note that ranked_vals will always have at least one element, and all elements will be 717 # instances of RankedValue (so none will be None, although they may wrap a None value). 718 ranked_vals = list(reversed(list(RankedValue.prioritized_iter(*values_to_rank)))) 719 720 def group(value_component_type, process_val_func) -> List[RankedValue]: 721 # We group any values that are merged together, so that the history can reflect 722 # merges vs. replacements in a useful way. E.g., if we merge [a, b] and [c], 723 # and then replace it with [d, e], the history will contain: 724 # - [d, e] (from command-line flag) 725 # - [a, b, c] (from env var, from config) 726 # And similarly for dicts. 727 grouped: List[List[RankedValue]] = [[]] 728 for ranked_val in ranked_vals: 729 if ranked_val.value and ranked_val.value.action == value_component_type.REPLACE: 730 grouped.append([]) 731 grouped[-1].append(ranked_val) 732 return [ 733 RankedValue( 734 grp[-1].rank, 735 process_val_func( 736 value_component_type.merge( 737 rv.value for rv in grp if rv.value is not None 738 ).val 739 ), 740 ", ".join(rv.details for rv in grp if rv.details), 741 ) 742 for grp in grouped 743 if grp 744 ] 745 746 if is_list_option(kwargs): 747 748 def process_list(lst): 749 lst = [self._convert_member_type(member_type, val) for val in lst] 750 if member_type == shell_str: 751 lst = flatten_shlexed_list(lst) 752 return lst 753 754 historic_ranked_vals = group(ListValueComponent, process_list) 755 elif is_dict_option(kwargs): 756 historic_ranked_vals = group(DictValueComponent, lambda x: x) 757 else: 758 historic_ranked_vals = ranked_vals 759 760 value_history = OptionValueHistory(tuple(historic_ranked_vals)) 761 762 # Helper function to check various validity constraints on final option values. 
763 def check_scalar_value(val): 764 if val is None: 765 return 766 choices = kwargs.get("choices") 767 if choices is None and "type" in kwargs: 768 if inspect.isclass(type_arg) and issubclass(type_arg, Enum): 769 choices = list(type_arg) 770 if choices is not None and val not in choices: 771 raise ParseError( 772 f"`{val}` is not an allowed value for option {dest} in {self._scope_str()}. " 773 f"Must be one of: {choices}" 774 ) 775 elif type_arg == file_option: 776 check_file_exists(val) 777 elif type_arg == dir_option: 778 check_dir_exists(val) 779 780 def check_file_exists(val) -> None: 781 error_prefix = f"File value `{val}` for option `{dest}` in `{self._scope_str()}`" 782 try: 783 path = Path(val) 784 path_with_buildroot = Path(get_buildroot(), val) 785 except TypeError: 786 raise ParseError(f"{error_prefix} cannot be parsed as a file path.") 787 if not path.is_file() and not path_with_buildroot.is_file(): 788 raise ParseError(f"{error_prefix} does not exist.") 789 790 def check_dir_exists(val) -> None: 791 error_prefix = f"Directory value `{val}` for option `{dest}` in `{self._scope_str()}`" 792 try: 793 path = Path(val) 794 path_with_buildroot = Path(get_buildroot(), val) 795 except TypeError: 796 raise ParseError(f"{error_prefix} cannot be parsed as a directory path.") 797 if not path.is_dir() and not path_with_buildroot.is_dir(): 798 raise ParseError(f"{error_prefix} does not exist.") 799 800 # Validate the final value. 801 final_val = value_history.final_value 802 if isinstance(final_val.value, list): 803 for component in final_val.value: 804 check_scalar_value(component) 805 if inspect.isclass(member_type) and issubclass(member_type, Enum): 806 if len(final_val.value) != len(set(final_val.value)): 807 raise ParseError(f"Duplicate enum values specified in list: {final_val.value}") 808 elif isinstance(final_val.value, dict): 809 for component in final_val.value.values(): 810 check_scalar_value(component) 811 else: 812 check_scalar_value(final_val.value) 813 814 return value_history 815 816 def _inverse_arg(self, arg: str) -> str | None: 817 if not arg.startswith("--"): 818 return None 819 if arg.startswith("--no-"): 820 raise BooleanOptionNameWithNo(self.scope, arg) 821 return f"--no-{arg[2:]}" 822 823 def _register_child_parser(self, child: "Parser") -> None: 824 self._child_parsers.append(child) 825 826 def _scope_str(self, scope: str | None = None) -> str: 827 return self.scope_str(scope if scope is not None else self.scope) 828 829 def __str__(self) -> str: 830 return f"Parser({self._scope})" 831 [end of src/python/pants/option/parser.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pantsbuild/pants
2a496aa3687defca9b75e52f832443b5baa1fb61
Invalid option enum values lead to parse error that does not list valid choices. For example: ```console $ ./pants dependencies --type=x src/python/pants/util/:: 09:44:50.91 [ERROR] Error computing value for --type in scope 'dependencies' (may also be from PANTS_* environment variables). Caused by: ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 582, in to_value_type return type_arg(val_str) File "/usr/lib/python3.7/enum.py", line 315, in __call__ return cls.__new__(cls, value) File "/usr/lib/python3.7/enum.py", line 569, in __new__ raise exc File "/usr/lib/python3.7/enum.py", line 553, in __new__ result = cls._missing_(value) File "/usr/lib/python3.7/enum.py", line 582, in _missing_ raise ValueError("%r is not a valid %s" % (value, cls.__name__)) ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 284, in parse_args dest, kwargs, flag_vals, parse_args_request.passthrough_args File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in _compute_value flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in <listcomp> flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 617, in to_value_type return self.to_value_type(val_str, type_arg, member_type, dest) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 585, in to_value_type f"Error applying type '{type_arg.__name__}' to option value '{val_str}', " pants.option.errors.ParseError: Error applying type 'DependencyType' to option value 'x', for option '--type' in scope 'dependencies': 'x' is not a valid DependencyType ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 582, in to_value_type return type_arg(val_str) File "/usr/lib/python3.7/enum.py", line 315, in __call__ return cls.__new__(cls, value) File "/usr/lib/python3.7/enum.py", line 569, in __new__ raise exc File "/usr/lib/python3.7/enum.py", line 553, in __new__ result = cls._missing_(value) File "/usr/lib/python3.7/enum.py", line 582, in _missing_ raise ValueError("%r is not a valid %s" % (value, cls.__name__)) ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 284, in parse_args dest, kwargs, flag_vals, parse_args_request.passthrough_args File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in _compute_value flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in <listcomp> flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 617, 
in to_value_type return self.to_value_type(val_str, type_arg, member_type, dest) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 585, in to_value_type f"Error applying type '{type_arg.__name__}' to option value '{val_str}', " pants.option.errors.ParseError: Error applying type 'DependencyType' to option value 'x', for option '--type' in scope 'dependencies': 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/bin/daemon_pants_runner.py", line 130, in single_daemonized_run cancellation_latch=cancellation_latch, File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/bin/local_pants_runner.py", line 143, in create options.for_scope(scope) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/util/memo.py", line 123, in memoize result = func(*args, **kwargs) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/options.py", line 422, in for_scope values = self._parser_hierarchy.get_parser_by_scope(scope).parse_args(parse_args_request) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 294, in parse_args f"Error computing value for {args_str} in {self._scope_str()} (may also be " pants.option.errors.ParseError: Error computing value for --type in scope 'dependencies' (may also be from PANTS_* environment variables). Caused by: ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 582, in to_value_type return type_arg(val_str) File "/usr/lib/python3.7/enum.py", line 315, in __call__ return cls.__new__(cls, value) File "/usr/lib/python3.7/enum.py", line 569, in __new__ raise exc File "/usr/lib/python3.7/enum.py", line 553, in __new__ result = cls._missing_(value) File "/usr/lib/python3.7/enum.py", line 582, in _missing_ raise ValueError("%r is not a valid %s" % (value, cls.__name__)) ValueError: 'x' is not a valid DependencyType During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 284, in parse_args dest, kwargs, flag_vals, parse_args_request.passthrough_args File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in _compute_value flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 673, in <listcomp> flag_vals = [to_value_type(expand(x)) for x in flag_val_strs] File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 617, in to_value_type return self.to_value_type(val_str, type_arg, member_type, dest) File "/home/jsirois/dev/pantsbuild/jsirois-pants/src/python/pants/option/parser.py", line 585, in to_value_type f"Error applying type '{type_arg.__name__}' to option value '{val_str}', " pants.option.errors.ParseError: Error applying type 'DependencyType' to option value 'x', for option '--type' in scope 'dependencies': 'x' is not a valid DependencyType (Use --no-process-execution-local-cleanup to inspect chroots and/or -ldebug for more logs. See https://www.pantsbuild.org/v2.6/docs/troubleshooting for common issues. 
Consider reaching out for help: https://www.pantsbuild.org/v2.6/docs/getting-help.) ``` This is as opposed to, for example, straight up argparse which gives something like this: ```console $ pex --venv bob usage: pex [-o OUTPUT.PEX] [options] [-- arg1 arg2 ...] pex builds a PEX (Python Executable) file based on the given specifications: sources, requirements, their dependencies and other options. Command-line options can be provided in one or more files by prefixing the filenames with an @ symbol. These files must contain one argument per line. pex: error: argument --venv: invalid choice: 'bob' (choose from 'prepend', 'append') ```
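For comparison, the argparse behaviour quoted above comes from its built-in `choices` handling. A minimal standalone sketch of that reference behaviour (the option name and choices mirror the pex example; nothing here is Pants code):

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
# argparse validates the value against `choices` and lists them in its error message.
parser.add_argument("--venv", choices=["prepend", "append"])

print(parser.parse_args(["--venv", "prepend"]))  # Namespace(venv='prepend')
# parser.parse_args(["--venv", "bob"]) exits with:
#   demo: error: argument --venv: invalid choice: 'bob' (choose from 'prepend', 'append')
```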
2021-07-05T04:53:31Z
<patch> diff --git a/src/python/pants/option/parser.py b/src/python/pants/option/parser.py --- a/src/python/pants/option/parser.py +++ b/src/python/pants/option/parser.py @@ -8,7 +8,6 @@ import json import os import re -import traceback import typing from collections import defaultdict from dataclasses import dataclass @@ -292,7 +291,7 @@ def add_flag_val(v: int | float | bool | str | None) -> None: args_str = ", ".join(args) raise type(e)( f"Error computing value for {args_str} in {self._scope_str()} (may also be " - f"from PANTS_* environment variables).\nCaused by:\n{traceback.format_exc()}" + f"from PANTS_* environment variables).\nCaused by:\n{e}" ) # If the option is explicitly given, check deprecation and mutual exclusion. @@ -581,9 +580,11 @@ def to_value_type(self, val_str, type_arg, member_type, dest): return DictValueComponent.create(val_str) return type_arg(val_str) except (TypeError, ValueError) as e: + if issubclass(type_arg, Enum): + choices = ", ".join(f"{choice.value}" for choice in type_arg) + raise ParseError(f"Invalid choice '{val_str}'. Choose from: {choices}") raise ParseError( - f"Error applying type '{type_arg.__name__}' to option value '{val_str}', " - f"for option '--{dest}' in {self._scope_str()}: {e}" + f"Error applying type '{type_arg.__name__}' to option value '{val_str}': {e}" ) @classmethod </patch>
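The effect of the patch's except-branch can be illustrated in isolation. This is only a sketch: `DependencyType` and its values are stand-ins for illustration, not the real Pants enum, and a plain `ValueError` stands in for the `ParseError` used in the patch:

```python
from enum import Enum


class DependencyType(Enum):  # stand-in enum for illustration only
    SOURCE = "source"
    THIRD_PARTY = "3rdparty"


def to_value_type_sketch(val_str, type_arg):
    # Mirrors the patched logic: when an Enum conversion fails, report the
    # allowed choices instead of surfacing the raw ValueError traceback.
    try:
        return type_arg(val_str)
    except (TypeError, ValueError):
        if isinstance(type_arg, type) and issubclass(type_arg, Enum):
            choices = ", ".join(f"{choice.value}" for choice in type_arg)
            raise ValueError(f"Invalid choice '{val_str}'. Choose from: {choices}")
        raise


print(to_value_type_sketch("source", DependencyType))  # DependencyType.SOURCE
# to_value_type_sketch("x", DependencyType) raises:
#   ValueError: Invalid choice 'x'. Choose from: source, 3rdparty
```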
[]
[]
huggingface__transformers-20735
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Tutorial on token classification throws casting error in Tensorflow 2.11 ### System Info - `transformers` version: 4.25.1 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35 - Python version: 3.9.0 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada, the tutorial at `https://huggingface.co/docs/transformers/tasks/token_classification` throws the following error in Tensorflow 2.11 but not in Tensorflow 2.9: `(0) UNIMPLEMENTED: Cast string to float is not supported [[{{node Cast_1}}]] (1) CANCELLED: Function was cancelled before it was started 0 successful operations. ` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The tutorial at `https://huggingface.co/docs/transformers/tasks/token_classification` for Tensorflow ### Expected behavior Training should start, but it does not. </issue> <code> [start of README.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 
15 --> 16 17 <p align="center"> 18 <br> 19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 20 <br> 21 <p> 22 <p align="center"> 23 <a href="https://circleci.com/gh/huggingface/transformers"> 24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 25 </a> 26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 28 </a> 29 <a href="https://huggingface.co/docs/transformers/index"> 30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 31 </a> 32 <a href="https://github.com/huggingface/transformers/releases"> 33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 34 </a> 35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 37 </a> 38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 39 </p> 40 41 <h4 align="center"> 42 <p> 43 <b>English</b> | 44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | 45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | 46 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> | 47 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> | 48 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> | 49 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a> 50 <p> 51 </h4> 52 53 <h3 align="center"> 54 <p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p> 55 </h3> 56 57 <h3 align="center"> 58 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 59 </h3> 60 61 🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. 62 63 These models can be applied on: 64 65 * 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages. 66 * 🖼️ Images, for tasks like image classification, object detection, and segmentation. 67 * 🗣️ Audio, for tasks like speech recognition and audio classification. 68 69 Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering. 70 71 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments. 
72 73 🤗 Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other. 74 75 ## Online demos 76 77 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models. 78 79 Here are a few examples: 80 81 In Natural Language Processing: 82 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 83 - [Name Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 84 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 85 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 86 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 87 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 88 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 89 90 In Computer Vision: 
91 - [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224) 92 - [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50) 93 - [Semantic Segmentation with SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) 94 - [Panoptic Segmentation with DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic) 95 96 In Audio: 97 - [Automatic Speech Recognition with Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) 98 - [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) 99 100 In Multimodal tasks: 101 - [Visual Question Answering with ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) 102 103 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities. 104 105 ## If you are looking for custom support from the Hugging Face team 106 107 <a target="_blank" href="https://huggingface.co/support"> 108 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 109 </a><br> 110 111 ## Quick tour 112 113 To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts: 114 115 ```python 116 >>> from transformers import pipeline 117 118 # Allocate a pipeline for sentiment-analysis 119 >>> classifier = pipeline('sentiment-analysis') 120 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 121 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 122 ``` 123 124 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%. 125 126 Many tasks have a pre-trained `pipeline` ready to go, in NLP but also in computer vision and speech. For example, we can easily extract detected objects in an image: 127 128 ``` python 129 >>> import requests 130 >>> from PIL import Image 131 >>> from transformers import pipeline 132 133 # Download an image with cute cats 134 >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" 135 >>> image_data = requests.get(url, stream=True).raw 136 >>> image = Image.open(image_data) 137 138 # Allocate a pipeline for object detection 139 >>> object_detector = pipeline('object-detection') 140 >>> object_detector(image) 141 [{'score': 0.9982201457023621, 142 'label': 'remote', 143 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, 144 {'score': 0.9960021376609802, 145 'label': 'remote', 146 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, 147 {'score': 0.9954745173454285, 148 'label': 'couch', 149 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, 150 {'score': 0.9988006353378296, 151 'label': 'cat', 152 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, 153 {'score': 0.9986783862113953, 154 'label': 'cat', 155 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] 156 ``` 157 158 Here we get a list of objects detected in the image, with a box surrounding the object and a confidence score. 
Here is the original image on the left, with the predictions displayed on the right: 159 160 <h3 align="center"> 161 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> 162 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> 163 </h3> 164 165 You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary). 166 167 In addition to `pipeline`, to download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version: 168 ```python 169 >>> from transformers import AutoTokenizer, AutoModel 170 171 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 172 >>> model = AutoModel.from_pretrained("bert-base-uncased") 173 174 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 175 >>> outputs = model(**inputs) 176 ``` 177 178 And here is the equivalent code for TensorFlow: 179 ```python 180 >>> from transformers import AutoTokenizer, TFAutoModel 181 182 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 183 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 184 185 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 186 >>> outputs = model(**inputs) 187 ``` 188 189 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or simply directly pass to your model using the ** argument unpacking operator. 190 191 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset. 192 193 ## Why should I use transformers? 194 195 1. Easy-to-use state-of-the-art models: 196 - High performance on natural language understanding & generation, computer vision, and audio tasks. 197 - Low barrier to entry for educators and practitioners. 198 - Few user-facing abstractions with just three classes to learn. 199 - A unified API for using all our pretrained models. 200 201 1. Lower compute costs, smaller carbon footprint: 202 - Researchers can share trained models instead of always retraining. 203 - Practitioners can reduce compute time and production costs. 204 - Dozens of architectures with over 60,000 pretrained models across all modalities. 205 206 1. Choose the right framework for every part of a model's lifetime: 207 - Train state-of-the-art models in 3 lines of code. 208 - Move a single model between TF2.0/PyTorch/JAX frameworks at will. 209 - Seamlessly pick the right framework for training, evaluation and production. 210 211 1. Easily customize a model or an example to your needs: 212 - We provide examples for each architecture to reproduce the results published by its original authors. 213 - Model internals are exposed as consistently as possible. 214 - Model files can be used independently of the library for quick experiments. 215 216 ## Why shouldn't I use transformers? 
217 218 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files. 219 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library (possibly, [Accelerate](https://huggingface.co/docs/accelerate)). 220 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. 221 222 ## Installation 223 224 ### With pip 225 226 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+. 227 228 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). 229 230 First, create a virtual environment with the version of Python you're going to use and activate it. 231 232 Then, you will need to install at least one of Flax, PyTorch or TensorFlow. 233 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific installation command for your platform. 234 235 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows: 236 237 ```bash 238 pip install transformers 239 ``` 240 241 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source). 242 243 ### With conda 244 245 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`. 246 247 🤗 Transformers can be installed using conda as follows: 248 249 ```shell script 250 conda install -c huggingface transformers 251 ``` 252 253 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda. 254 255 > **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062). 256 257 ## Model architectures 258 259 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations). 
260 261 Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 262 263 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each them): 264 265 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 266 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 267 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 268 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 269 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 270 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 271 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 272 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 273 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 274 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 275 1. 
**[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 276 1. **[BioGpt](https://huggingface.co/docs/transformers/main/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 277 1. **[BiT](https://huggingface.co/docs/transformers/main/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 278 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 279 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 280 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 281 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 282 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 283 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 284 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 285 1. 
**[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 286 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 287 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. 288 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 289 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 290 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 291 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 292 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 293 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 294 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 295 1. 
**[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 296 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 297 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 298 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 299 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 300 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 301 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 302 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 303 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 304 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. 
The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 305 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 306 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 307 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 308 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 309 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 310 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 311 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 312 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. 
**ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 313 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 314 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 315 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 316 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 317 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 318 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 319 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 320 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 321 1. 
**[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 322 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 323 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 324 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 325 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 326 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 327 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 328 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 329 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 330 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 331 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 332 1. 
**[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 333 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 334 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 335 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. 336 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 337 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 338 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 339 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 340 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 341 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 342 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 343 1. 
**[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 344 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 345 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 346 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 347 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 348 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 349 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 350 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 351 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 352 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 353 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) 
released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 354 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 355 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 356 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 357 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 358 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 359 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 360 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 361 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 362 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 363 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 364 1. 
**[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 365 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu. 366 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 367 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 368 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 369 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 370 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 371 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 372 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. 373 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 374 1. 
**[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 375 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 376 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 377 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 378 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 379 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 380 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 381 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 382 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 383 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 384 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 385 1. 
**[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 386 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 387 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 388 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 389 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 390 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 391 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 392 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 393 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 394 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 395 1. 
**[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 396 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). 397 1. **[TimeSformer](https://huggingface.co/docs/transformers/main/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 398 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 399 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 400 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 401 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 402 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 403 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 404 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 405 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 406 1. 
**[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 407 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 408 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 409 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/main/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 410 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 411 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 412 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 413 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 414 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 415 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 416 1. 
**[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 417 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 418 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 419 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 420 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 421 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 422 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 423 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 424 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 425 1. 
**[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
426 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
427 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
428 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
429 
430 To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
431 
432 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://github.com/huggingface/transformers/tree/main/examples).
433 434 435 ## Learn more 436 437 | Section | Description | 438 |-|-| 439 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials | 440 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers | 441 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models | 442 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API | 443 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks | 444 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community | 445 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` | 446 447 ## Citation 448 449 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library: 450 ```bibtex 451 @inproceedings{wolf-etal-2020-transformers, 452 title = "Transformers: State-of-the-Art Natural Language Processing", 453 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 454 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 455 month = oct, 456 year = "2020", 457 address = "Online", 458 publisher = "Association for Computational Linguistics", 459 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 460 pages = "38--45" 461 } 462 ``` 463 [end of README.md] [start of README_es.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 
15 -->
16 
17 <p align="center">
18     <br>
19     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/>
20     <br>
21 <p>
22 <p align="center">
23     <a href="https://circleci.com/gh/huggingface/transformers">
24         <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
25     </a>
26     <a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
27         <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
28     </a>
29     <a href="https://huggingface.co/docs/transformers/index">
30         <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
31     </a>
32     <a href="https://github.com/huggingface/transformers/releases">
33         <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
34     </a>
35     <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
36         <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
37     </a>
38     <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
39 </p>
40 
41 <h4 align="center">
42     <p>
43         <a href="https://github.com/huggingface/transformers/">English</a> |
44         <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
45         <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
46         <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
47         <b>Español</b> |
48         <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
49         <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a>
50     <p>
51 </h4>
52 
53 <h3 align="center">
54     <p>Lo último de Machine Learning para JAX, PyTorch y TensorFlow</p>
55 </h3>
56 
57 <h3 align="center">
58     <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
59 </h3>
60 
61 🤗 Transformers aporta miles de modelos preentrenados para realizar tareas en diferentes modalidades, como texto, visión y audio.
62 
63 Estos modelos pueden ser aplicados en:
64 
65 * 📝 Texto, para tareas como clasificación de texto, extracción de información, responder preguntas, resumir, traducir, generación de texto, en más de 100 idiomas.
66 * 🖼️ Imágenes, para tareas como clasificación de imágenes, detección de objetos y segmentación.
67 * 🗣️ Audio, para tareas como reconocimiento de voz y clasificación de audio.
68 
69 Los modelos de Transformer también pueden realizar tareas en **muchas modalidades combinadas**, como responder preguntas, reconocimiento óptico de caracteres, extracción de información de documentos escaneados, clasificación de video y respuesta a preguntas visuales.
70 
71 🤗 Transformers aporta APIs para descargar rápidamente y usar estos modelos preentrenados en un texto dado, afinarlos en tus propios sets de datos y compartirlos con la comunidad en nuestro [centro de modelos](https://huggingface.co/models). Al mismo tiempo, cada módulo de Python que define una arquitectura es completamente independiente y se puede modificar para permitir experimentos de investigación rápidos.
72 73 🤗 Transformers está respaldado por las tres bibliotecas de deep learning más populares — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) y [TensorFlow](https://www.tensorflow.org/) — con una perfecta integración entre ellos. Es sencillo entrenar sus modelos con uno antes de cargarlos para la inferencia con el otro. 74 75 ## Demostraciones en línea 76 77 Puedes probar la mayoría de nuestros modelos directamente en sus páginas desde el [centro de modelos](https://huggingface.co/models). También ofrecemos [alojamiento de modelos privados, control de versiones y una API de inferencia](https://huggingface.co/pricing) para modelos públicos y privados. 78 79 Aquí hay algunos ejemplos: 80 81 En procesamiento del lenguaje natural: 82 - [Terminación de palabras enmascaradas con BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 83 - [Reconocimiento del nombre de la entidad con Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 84 - [Generación de texto con GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 85 - [Inferencia del lenguaje natural con RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 86 - [Resumen con BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 87 - [Responder a preguntas con DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 88 - [Traducción con 
T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 89 90 En visión de ordenador: 91 - [Clasificación de imágenes con ViT](https://huggingface.co/google/vit-base-patch16-224) 92 - [Detección de objetos con DETR](https://huggingface.co/facebook/detr-resnet-50) 93 - [Segmentación semántica con SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) 94 - [Segmentación panóptica con DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic) 95 96 En Audio: 97 - [Reconocimiento de voz automático con Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) 98 - [Detección de palabras clave con Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) 99 100 En tareas multimodales: 101 - [Respuesta visual a preguntas con ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) 102 103 **[Escribe con Transformer](https://transformer.huggingface.co)**, construido por el equipo de Hugging Face, es la demostración oficial de las capacidades de generación de texto de este repositorio. 104 105 ## Si está buscando soporte personalizado del equipo de Hugging Face 106 107 <a target="_blank" href="https://huggingface.co/support"> 108 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 109 </a><br> 110 111 ## Tour rápido 112 113 Para usar inmediatamente un modelo en una entrada determinada (texto, imagen, audio, ...), proporcionamos la API de `pipeline`. Los pipelines agrupan un modelo previamente entrenado con el preprocesamiento que se usó durante el entrenamiento de ese modelo. Aquí se explica cómo usar rápidamente un pipeline para clasificar textos positivos frente a negativos: 114 115 ```python 116 >>> from transformers import pipeline 117 118 # Allocate a pipeline for sentiment-analysis 119 >>> classifier = pipeline('sentiment-analysis') 120 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 121 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 122 ``` 123 124 La segunda línea de código descarga y almacena en caché el modelo previamente entrenado que usa la canalización, mientras que la tercera lo evalúa en el texto dado. Aquí la respuesta es "positiva" con una confianza del 99,97%. 125 126 Muchas tareas tienen un `pipeline` preentrenado listo para funcionar, en NLP pero también en visión por ordenador y habla. 
Por ejemplo, podemos extraer fácilmente los objetos detectados en una imagen:
127 
128 ``` python
129 >>> import requests
130 >>> from PIL import Image
131 >>> from transformers import pipeline
132 
133 # Download an image with cute cats
134 >>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
135 >>> image_data = requests.get(url, stream=True).raw
136 >>> image = Image.open(image_data)
137 
138 # Allocate a pipeline for object detection
139 >>> object_detector = pipeline('object-detection')
140 >>> object_detector(image)
141 [{'score': 0.9982201457023621,
142   'label': 'remote',
143   'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
144  {'score': 0.9960021376609802,
145   'label': 'remote',
146   'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
147  {'score': 0.9954745173454285,
148   'label': 'couch',
149   'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
150  {'score': 0.9988006353378296,
151   'label': 'cat',
152   'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
153  {'score': 0.9986783862113953,
154   'label': 'cat',
155   'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
156 ```
157 
158 Aquí obtenemos una lista de objetos detectados en la imagen, con un cuadro que rodea cada objeto y una puntuación de confianza. A la izquierda está la imagen original y a la derecha se muestran las predicciones:
159 
160 <h3 align="center">
161     <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
162     <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
163 </h3>
164 
165 Puedes obtener más información sobre las tareas admitidas por la API de `pipeline` en [este tutorial](https://huggingface.co/docs/transformers/task_summary).
166 
167 Además de `pipeline`, para descargar y usar cualquiera de los modelos previamente entrenados en una tarea dada, solo se necesitan tres líneas de código. Aquí está la versión de PyTorch:
168 ```python
169 >>> from transformers import AutoTokenizer, AutoModel
170 
171 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
172 >>> model = AutoModel.from_pretrained("bert-base-uncased")
173 
174 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
175 >>> outputs = model(**inputs)
176 ```
177 
178 Y aquí está el código equivalente para TensorFlow:
179 ```python
180 >>> from transformers import AutoTokenizer, TFAutoModel
181 
182 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
183 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
184 
185 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
186 >>> outputs = model(**inputs)
187 ```
188 
189 El tokenizador es responsable de todo el preprocesamiento que espera el modelo preentrenado y se puede llamar directamente sobre una sola cadena (como en los ejemplos anteriores) o sobre una lista. Devuelve un diccionario que puedes usar en el código posterior o pasar directamente al modelo usando el operador de desempaquetado de argumentos `**`.
190 
191 El modelo en sí es un [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) normal o un [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (dependiendo de tu backend) que puedes usar de forma habitual.
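A modo de ilustración (este fragmento no forma parte del tour original; es solo un esbozo orientativo que asume PyTorch como backend y el mismo checkpoint `bert-base-uncased` de arriba, y las salidas mostradas son aproximadas), así se puede inspeccionar el diccionario que devuelve el tokenizador y desempaquetarlo con `**`:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# El tokenizador devuelve un diccionario con los tensores que espera el modelo
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> list(inputs.keys())
['input_ids', 'token_type_ids', 'attention_mask']

# El operador ** desempaqueta ese diccionario como argumentos con nombre
>>> with torch.no_grad():
...     outputs = model(**inputs)

# La salida se usa como cualquier objeto de PyTorch, por ejemplo los estados ocultos de la última capa
>>> outputs.last_hidden_state.shape
torch.Size([1, 5, 768])
```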
[Este tutorial](https://huggingface.co/docs/transformers/training) explica cómo integrar un modelo de este tipo en un ciclo de entrenamiento PyTorch o TensorFlow clásico, o cómo usar nuestra API `Trainer` para ajustarlo rápidamente en un nuevo conjunto de datos.
192 
193 ## ¿Por qué debo usar transformers?
194 
195 1. Modelos de última generación fáciles de usar:
196     - Alto rendimiento en comprensión y generación de lenguaje natural, visión artificial y tareas de audio.
197     - Baja barrera de entrada para educadores y profesionales.
198     - Pocas abstracciones de cara al usuario con solo tres clases para aprender.
199     - Una API unificada para usar todos nuestros modelos preentrenados.
200 
201 1. Menores costes de cómputo, menor huella de carbono:
202     - Los investigadores pueden compartir modelos entrenados en lugar de volver a entrenarlos siempre.
203     - Los profesionales pueden reducir el tiempo de cómputo y los costos de producción.
204     - Docenas de arquitecturas con más de 60 000 modelos preentrenados en todas las modalidades.
205 
206 1. Elija el marco adecuado para cada parte de la vida útil de un modelo:
207     - Entrene modelos de última generación en 3 líneas de código.
208     - Mueva un solo modelo entre los marcos TF2.0/PyTorch/JAX a voluntad.
209     - Elija sin problemas el marco adecuado para el entrenamiento, la evaluación y la producción.
210 
211 1. Personalice fácilmente un modelo o un ejemplo según sus necesidades:
212     - Proporcionamos ejemplos de cada arquitectura para reproducir los resultados publicados por sus autores originales.
213     - Los internos del modelo están expuestos de la forma más consistente posible.
214     - Los archivos de modelo se pueden usar independientemente de la biblioteca para experimentos rápidos.
215 
216 ## ¿Por qué no debería usar transformers?
217 
218 - Esta biblioteca no es una caja de herramientas modular de bloques de construcción para redes neuronales. El código en los archivos del modelo no se refactoriza con abstracciones adicionales a propósito, de modo que los investigadores puedan iterar rápidamente en cada uno de los modelos sin sumergirse en abstracciones/archivos adicionales.
219 - La API de entrenamiento no está pensada para funcionar con cualquier modelo, sino que está optimizada para los modelos proporcionados por la biblioteca. Para bucles genéricos de aprendizaje automático, debe usar otra biblioteca (posiblemente, [Accelerate](https://huggingface.co/docs/accelerate)).
220 - Si bien nos esforzamos por presentar tantos casos de uso como sea posible, los scripts de nuestra [carpeta de ejemplos](https://github.com/huggingface/transformers/tree/main/examples) son solo eso: ejemplos. Es posible que no funcionen de forma inmediata en su problema específico y que deba cambiar algunas líneas de código para adaptarlos a sus necesidades.
221 
222 ## Instalación
223 
224 ### Con pip
225 
226 Este repositorio está probado en Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ y TensorFlow 2.3+.
227 
228 Deberías instalar 🤗 Transformers en un [entorno virtual](https://docs.python.org/3/library/venv.html). Si no estás familiarizado con los entornos virtuales de Python, consulta la [guía de usuario](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
229 
230 Primero, crea un entorno virtual con la versión de Python que vas a usar y actívalo.
231 
232 Luego, deberás instalar al menos uno de Flax, PyTorch o TensorFlow.
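Como referencia rápida, este es un esbozo orientativo (no forma parte del README original) que asume una shell tipo Unix y PyTorch como backend; los comandos exactos para tu plataforma están en las páginas de instalación que se enlazan a continuación:

```bash
# Crear y activar un entorno virtual (se asume Python 3 y una shell tipo Unix)
python -m venv .env
source .env/bin/activate

# Instalar al menos un backend (aquí PyTorch, solo a modo de ejemplo)
pip install torch

# Instalar 🤗 Transformers
pip install transformers
```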
233 Por favor, ve a la [página de instalación de TensorFlow](https://www.tensorflow.org/install/), [página de instalación de PyTorch](https://pytorch.org/get-started/locally/#start-locally) y/o las páginas de instalación de [Flax](https://github.com/google/flax#quick-install) y [Jax](https://github.com/google/jax#installation) con respecto al comando de instalación específico para tu plataforma. 234 235 Cuando se ha instalado uno de esos backends, los 🤗 Transformers se pueden instalar usando pip de la siguiente manera: 236 237 ```bash 238 pip install transformers 239 ``` 240 241 Si deseas jugar con los ejemplos o necesitas la última versión del código y no puedes esperar a una nueva versión, tienes que [instalar la librería de la fuente](https://huggingface.co/docs/transformers/installation#installing-from-source). 242 243 ### Con conda 244 245 Desde la versión v4.0.0 de Transformers, ahora tenemos un canal conda: `huggingface`. 246 247 🤗 Transformers se puede instalar usando conda de la siguiente manera: 248 249 ```shell script 250 conda install -c huggingface transformers 251 ``` 252 253 Sigue las páginas de instalación de Flax, PyTorch o TensorFlow para ver cómo instalarlos con conda. 254 255 > **_NOTA:_** En Windows, es posible que se le pida que active el modo de desarrollador para beneficiarse del almacenamiento en caché. Si esta no es una opción para usted, háganoslo saber en [esta issue](https://github.com/huggingface/huggingface_hub/issues/1062). 256 257 ## Arquitecturas modelo 258 259 **[Todos los puntos de control del modelo](https://huggingface.co/models)** aportados por 🤗 Transformers están perfectamente integrados desde huggingface.co [Centro de modelos](https://huggingface.co) donde son subidos directamente por los [usuarios](https://huggingface.co/users) y [organizaciones](https://huggingface.co/organizations). 260 261 Número actual de puntos de control: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 262 263 🤗 Transformers actualmente proporciona las siguientes arquitecturas (ver [aquí](https://huggingface.co/docs/transformers/model_summary) para un resumen de alto nivel de cada uno de ellas.): 264 265 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 266 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 267 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 268 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 
269 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 270 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 271 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 272 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 273 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 274 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 275 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 276 1. **[BioGpt](https://huggingface.co/docs/transformers/main/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 277 1. **[BiT](https://huggingface.co/docs/transformers/main/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 278 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 279 1. 
**[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 280 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 281 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 282 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 283 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 284 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 285 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 286 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 287 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. 288 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 289 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 290 1. 
**[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 291 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 292 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 293 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 294 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 295 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 296 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 297 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 298 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 299 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 300 1. 
**[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 301 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 302 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 303 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 304 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 305 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 306 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 307 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 308 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 309 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 310 1. 
**[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 311 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 312 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 313 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 314 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 315 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 316 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 317 1. 
**[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 318 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 319 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 320 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 321 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 322 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 323 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 324 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 325 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 326 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 327 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 328 1. 
**[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 329 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 330 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 331 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 332 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 333 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 334 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 335 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. 336 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 337 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 338 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 339 1. 
**[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 340 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 341 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 342 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 343 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 344 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 345 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 346 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 347 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 348 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 349 1. 
**[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 350 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 351 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 352 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 353 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 354 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 355 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 356 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 357 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 358 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 359 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 360 1. 
**[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 361 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 362 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 363 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 364 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 365 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu. 366 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 367 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 368 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 369 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 370 1. 
**[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 371 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 372 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. 373 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 374 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 375 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 376 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 377 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 378 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 379 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 380 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 381 1. 
**[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 382 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 383 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 384 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 385 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 386 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 387 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 388 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 389 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 390 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 391 1. 
**[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 392 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 393 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 394 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 395 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 396 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). 397 1. **[TimeSformer](https://huggingface.co/docs/transformers/main/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 398 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 399 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 400 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 401 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 402 1. 
**[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 403 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 404 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 405 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 406 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 407 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 408 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 409 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/main/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 410 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 411 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 412 1. 
**[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 413 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 414 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 415 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 416 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 417 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 418 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 419 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 420 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 421 1. 
**[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
422 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
423 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
424 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
425 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
426 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
427 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
428 1. Want to contribute a new model? We have added a **detailed guide and templates** to walk you through the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contribution guidelines](./CONTRIBUTING.md) and reach out to the maintainers or open an issue to gather feedback before starting your PR.
429
430 To check whether each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks) (a minimal loading check is sketched below).
431
432 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://github.com/huggingface/transformers/tree/main/examples).
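The framework-support table linked above is the authoritative reference, but a quick local probe is simply to try loading a checkpoint with both the PyTorch and TensorFlow auto classes. The snippet below is a minimal sketch, not part of the original README: the checkpoint name `bert-base-uncased` is only an illustrative choice, and the `from_pt=True` fallback assumes a model may publish PyTorch weights without native TensorFlow ones.

```python
# Minimal sketch: check that a checkpoint is usable from both PyTorch and TensorFlow.
# "bert-base-uncased" is only an example; substitute the checkpoint you care about.
from transformers import AutoModel, AutoTokenizer, TFAutoModel

checkpoint = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)   # fast tokenizer (🤗 Tokenizers) when available
pt_model = AutoModel.from_pretrained(checkpoint)        # PyTorch implementation

try:
    tf_model = TFAutoModel.from_pretrained(checkpoint)  # native TensorFlow weights, if published
except OSError:
    # Fall back to converting the PyTorch weights on the fly.
    tf_model = TFAutoModel.from_pretrained(checkpoint, from_pt=True)

print(type(pt_model).__name__, type(tf_model).__name__)
```

If either call fails, the table above is the place to confirm which backends the architecture actually supports.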
433
434
435 ## Learn more
436
437 | Section | Description |
438 |-|-|
439 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials |
440 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
441 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
442 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
443 | [Quick tour: fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
444 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
445 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
446
447 ## Citation
448
449 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
450 ```bibtex
451 @inproceedings{wolf-etal-2020-transformers,
452     title = "Transformers: State-of-the-Art Natural Language Processing",
453     author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
454     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
455     month = oct,
456     year = "2020",
457     address = "Online",
458     publisher = "Association for Computational Linguistics",
459     url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
460     pages = "38--45"
461 }
462 ```
463
[end of README_es.md]
[start of README_hd.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8     http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 -->
16
17 <!---
18 A useful guide for English-Hindi translation of Hugging Face documentation
19 - Add space around English words and numbers when they appear between Hindi characters.
E.g., कुल मिलाकर 100 से अधिक भाषाएँ; ट्रांसफॉर्मर लाइब्रेरी का उपयोग करता है। 20 - वर्गाकार उद्धरणों का प्रयोग करें, जैसे, "उद्धरण" 21 22 Dictionary 23 24 Hugging Face: गले लगाओ चेहरा 25 token: शब्द (और मूल अंग्रेजी को कोष्ठक में चिह्नित करें) 26 tokenize: टोकननाइज़ करें (और मूल अंग्रेज़ी को चिह्नित करने के लिए कोष्ठक का उपयोग करें) 27 tokenizer: Tokenizer (मूल अंग्रेजी में कोष्ठक के साथ) 28 transformer: transformer 29 pipeline: समनुक्रम 30 API: API (अनुवाद के बिना) 31 inference: विचार 32 Trainer: प्रशिक्षक। कक्षा के नाम के रूप में प्रस्तुत किए जाने पर अनुवादित नहीं किया गया। 33 pretrained/pretrain: पूर्व प्रशिक्षण 34 finetune: फ़ाइन ट्यूनिंग 35 community: समुदाय 36 example: जब विशिष्ट गोदाम example कैटलॉग करते समय "केस केस" के रूप में अनुवादित 37 Python data structures (e.g., list, set, dict): मूल अंग्रेजी को चिह्नित करने के लिए सूचियों, सेटों, शब्दकोशों में अनुवाद करें और कोष्ठक का उपयोग करें 38 NLP/Natural Language Processing: द्वारा NLP अनुवाद के बिना प्रकट होते हैं Natural Language Processing प्रस्तुत किए जाने पर प्राकृतिक भाषा संसाधन में अनुवाद करें 39 checkpoint: जाँच बिंदु 40 --> 41 42 <p align="center"> 43 <br> 44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 45 <br> 46 <p> 47 <p align="center"> 48 <a href="https://circleci.com/gh/huggingface/transformers"> 49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 50 </a> 51 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 53 </a> 54 <a href="https://huggingface.co/docs/transformers/index"> 55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 56 </a> 57 <a href="https://github.com/huggingface/transformers/releases"> 58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 59 </a> 60 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 62 </a> 63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 64 </p> 65 66 <h4 align="center"> 67 <p> 68 <a href="https://github.com/huggingface/transformers/">English</a> | 69 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | 70 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | 71 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> | 72 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> | 73 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> | 74 <b>हिन्दी</b> | 75 <p> 76 </h4> 77 78 <h3 align="center"> 79 <p>Jax, PyTorch और TensorFlow के लिए उन्नत मशीन लर्निंग</p> 80 </h3> 81 82 <h3 align="center"> 83 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 84 </h3> 85 86 🤗 Transformers 100 से अधिक भाषाओं में पाठ वर्गीकरण, सूचना निष्कर्षण, प्रश्न उत्तर, सारांशीकरण, अनुवाद, पाठ निर्माण का समर्थन करने के लिए हजारों पूर्व-प्रशिक्षित मॉडल प्रदान करता है। इसका उद्देश्य सबसे उन्नत एनएलपी तकनीक को सभी के 
लिए सुलभ बनाना है। 87 88 🤗 Transformers त्वरित डाउनलोड और उपयोग के लिए एक एपीआई प्रदान करता है, जिससे आप किसी दिए गए पाठ पर एक पूर्व-प्रशिक्षित मॉडल ले सकते हैं, इसे अपने डेटासेट पर ठीक कर सकते हैं और इसे [मॉडल हब] (https://huggingface.co/models) के माध्यम से समुदाय के साथ साझा कर सकते हैं। ) . इसी समय, प्रत्येक परिभाषित पायथन मॉड्यूल पूरी तरह से स्वतंत्र है, जो संशोधन और तेजी से अनुसंधान प्रयोगों के लिए सुविधाजनक है। 89 90 🤗 Transformers तीन सबसे लोकप्रिय गहन शिक्षण पुस्तकालयों का समर्थन करता है: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — और इसके साथ निर्बाध रूप से एकीकृत होता है। आप अपने मॉडल को सीधे एक ढांचे के साथ प्रशिक्षित कर सकते हैं और दूसरे के साथ लोड और अनुमान लगा सकते हैं। 91 92 ## ऑनलाइन डेमो 93 94 आप सबसे सीधे मॉडल पृष्ठ पर परीक्षण कर सकते हैं [model hub](https://huggingface.co/models) मॉडल पर। हम [निजी मॉडल होस्टिंग, मॉडल संस्करण, और अनुमान एपीआई] भी प्रदान करते हैं।(https://huggingface.co/pricing)。 95 96 यहाँ कुछ उदाहरण हैं: 97 - [शब्द को भरने के लिए मास्क के रूप में BERT का प्रयोग करें](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 98 - [इलेक्ट्रा के साथ नामित इकाई पहचान](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 99 - [जीपीटी-2 के साथ टेक्स्ट जनरेशन](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 100 - [रॉबर्टा के साथ प्राकृतिक भाषा निष्कर्ष](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 101 - [बार्ट के साथ पाठ सारांश](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 102 - [डिस्टिलबर्ट के साथ 
प्रश्नोत्तर](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 103 - [अनुवाद के लिए T5 का प्रयोग करें](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 104 105 **[Write With Transformer](https://transformer.huggingface.co)**,हगिंग फेस टीम द्वारा बनाया गया, यह एक आधिकारिक पाठ पीढ़ी है demo。 106 107 ## यदि आप हगिंग फेस टीम से बीस्पोक समर्थन की तलाश कर रहे हैं 108 109 <a target="_blank" href="https://huggingface.co/support"> 110 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 111 </a><br> 112 113 ## जल्दी शुरू करें 114 115 हम त्वरित उपयोग के लिए मॉडल प्रदान करते हैं `pipeline` (पाइपलाइन) एपीआई। पाइपलाइन पूर्व-प्रशिक्षित मॉडल और संबंधित पाठ प्रीप्रोसेसिंग को एकत्रित करती है। सकारात्मक और नकारात्मक भावना को निर्धारित करने के लिए पाइपलाइनों का उपयोग करने का एक त्वरित उदाहरण यहां दिया गया है: 116 117 ```python 118 >>> from transformers import pipeline 119 120 # भावना विश्लेषण पाइपलाइन का उपयोग करना 121 >>> classifier = pipeline('sentiment-analysis') 122 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 123 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 124 ``` 125 126 कोड की दूसरी पंक्ति पाइपलाइन द्वारा उपयोग किए गए पूर्व-प्रशिक्षित मॉडल को डाउनलोड और कैश करती है, जबकि कोड की तीसरी पंक्ति दिए गए पाठ पर मूल्यांकन करती है। यहां उत्तर 99 आत्मविश्वास के स्तर के साथ "सकारात्मक" है। 127 128 कई एनएलपी कार्यों में आउट ऑफ़ द बॉक्स पाइपलाइनों का पूर्व-प्रशिक्षण होता है। उदाहरण के लिए, हम किसी दिए गए पाठ से किसी प्रश्न का उत्तर आसानी से निकाल सकते हैं: 129 130 ``` python 131 >>> from transformers import pipeline 132 133 # प्रश्नोत्तर पाइपलाइन का उपयोग करना 134 >>> question_answerer = pipeline('question-answering') 135 >>> question_answerer({ 136 ... 'question': 'What is the name of the repository ?', 137 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 138 ... 
}) 139 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 140 141 ``` 142 143 उत्तर के साथ-साथ यहाँ इस्तेमाल किया गया पूर्व-प्रशिक्षित मॉडल अपना आत्मविश्वास स्कोर तथा टोकनयुक्त पाठ में उत्तर की शुरुआत और समाप्ति की स्थिति भी देता है। आप [इस ट्यूटोरियल](https://huggingface.co/docs/transformers/task_summary) से पाइपलाइन एपीआई द्वारा समर्थित कार्यों के बारे में अधिक जान सकते हैं। 144 145 अपने कार्य के लिए कोई भी पूर्व-प्रशिक्षित मॉडल डाउनलोड करना और उसका उपयोग करना भी कोड की केवल तीन पंक्तियों जितना सरल है। यहाँ PyTorch संस्करण का एक उदाहरण दिया गया है: 146 ```python 147 >>> from transformers import AutoTokenizer, AutoModel 148 149 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 150 >>> model = AutoModel.from_pretrained("bert-base-uncased") 151 152 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 153 >>> outputs = model(**inputs) 154 ``` 155 और यहाँ समकक्ष TensorFlow कोड है: 156 ```python 157 >>> from transformers import AutoTokenizer, TFAutoModel 158 159 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 160 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 161 162 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 163 >>> outputs = model(**inputs) 164 ``` 165 166 टोकनाइज़र सभी पूर्व-प्रशिक्षित मॉडलों के लिए प्रीप्रोसेसिंग प्रदान करता है और इसे सीधे एक स्ट्रिंग (जैसे ऊपर के उदाहरण में) या किसी सूची पर बुलाया जा सकता है। यह एक डिक्शनरी (`dict`) आउटपुट करता है जिसे आप डाउनस्ट्रीम कोड में उपयोग कर सकते हैं या `**` अनपैकिंग ऑपरेटर के माध्यम से सीधे मॉडल को पास कर सकते हैं। 167 168 मॉडल स्वयं (आपके बैकएंड के आधार पर) एक नियमित [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) या [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) है, जिसे आप सामान्य तरीके से उपयोग कर सकते हैं। [यह ट्यूटोरियल](https://huggingface.co/transformers/training.html) बताता है कि ऐसे मॉडल को क्लासिक PyTorch या TensorFlow प्रशिक्षण लूप में कैसे एकीकृत किया जाए, या किसी नए डेटासेट पर उसे जल्दी फ़ाइन-ट्यून करने के लिए हमारे `Trainer` एपीआई का उपयोग कैसे किया जाए। 169 170 ## ट्रांसफार्मर का उपयोग क्यों करें? 171 172 1. उपयोग में आसानी के लिए उन्नत मॉडल: 173 - एनएलयू और एनएलजी कार्यों पर उच्च प्रदर्शन 174 - प्रवेश की कम बाधाओं के साथ, शिक्षण और अभ्यास के अनुकूल 175 - उपयोगकर्ता-केंद्रित एब्स्ट्रैक्शन: केवल तीन क्लासें जाननी होती हैं 176 - सभी मॉडलों के लिए एकीकृत एपीआई 177 178 1. कम कम्प्यूटेशनल ओवरहेड और कम कार्बन उत्सर्जन: 179 - शोधकर्ता हर बार नए सिरे से प्रशिक्षण देने के बजाय प्रशिक्षित मॉडल साझा कर सकते हैं 180 - इंजीनियर गणना समय और उत्पादन ओवरहेड को कम कर सकते हैं 181 - दर्जनों मॉडल आर्किटेक्चर, 2,000 से अधिक पूर्व-प्रशिक्षित मॉडल, 100 से अधिक भाषाओं का समर्थन 182 183 1. मॉडल जीवनचक्र के हर हिस्से को कवर करता है: 184 - कोड की केवल 3 पंक्तियों में उन्नत मॉडलों को प्रशिक्षित करें (इस सूची के ठीक बाद एक छोटा स्केच दिया गया है) 185 - मॉडल को विभिन्न डीप लर्निंग फ्रेमवर्क के बीच अपनी इच्छा से स्थानांतरित किया जा सकता है 186 - प्रशिक्षण, मूल्यांकन और उत्पादन के लिए सबसे उपयुक्त फ्रेमवर्क निर्बाध रूप से चुनें 187 188 1. अपनी आवश्यकताओं और उपयोग के मामलों के अनुसार मॉडल को आसानी से अनुकूलित करें: 189 - मूल लेखकों के परिणामों को पुन: पेश करने के लिए हम प्रत्येक मॉडल आर्किटेक्चर के उदाहरण प्रदान करते हैं 190 - मॉडल की आंतरिक संरचना यथासंभव पारदर्शी और सुसंगत रखी गई है 191 - मॉडल फ़ाइल को लाइब्रेरी से अलग भी इस्तेमाल किया जा सकता है, जो संशोधन और त्वरित प्रयोग के लिए सुविधाजनक है 192
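ऊपर सूची में बताए गए "कोड की केवल 3 पंक्तियों में प्रशिक्षण" को थोड़ा ठोस रूप देने के लिए नीचे `Trainer` एपीआई से फ़ाइन-ट्यूनिंग का एक न्यूनतम स्केच दिया गया है। ध्यान रखें कि यह केवल एक उदाहरण है: इसमें 🤗 Datasets लाइब्रेरी (`datasets`) का इंस्टॉल होना मान लिया गया है, और `imdb` डेटासेट, `bert-base-uncased` चेकपॉइंट तथा हाइपरपैरामीटर केवल उदाहरण के तौर पर चुने गए हैं; अपने कार्य के अनुसार इन्हें बदलें।

```python
# न्यूनतम स्केच: `Trainer` एपीआई से फ़ाइन-ट्यूनिंग (डेटासेट, मॉडल और हाइपरपैरामीटर केवल उदाहरण हैं)
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

dataset = load_dataset("imdb")  # उदाहरण डेटासेट; यहाँ अपना डेटासेट रखें

def tokenize(batch):
    # पाठ को मॉडल के इनपुट (input_ids, attention_mask) में बदलें
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)
small_train = tokenized["train"].shuffle(seed=42).select(range(1000))  # त्वरित प्रयोग के लिए छोटा उपसमुच्चय

args = TrainingArguments(output_dir="my_model", num_train_epochs=1, per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args, train_dataset=small_train)
trainer.train()
```

`TrainingArguments` में लर्निंग रेट, मूल्यांकन रणनीति आदि भी सेट किए जा सकते हैं; विवरण के लिए ऊपर लिंक किया गया प्रशिक्षण ट्यूटोरियल देखें।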
193 ## मुझे ट्रांसफॉर्मर का उपयोग कब नहीं करना चाहिए? 194 195 - यह लाइब्रेरी न्यूरल नेटवर्क के लिए बिल्डिंग ब्लॉक्स का मॉड्यूलर टूलबॉक्स नहीं है। मॉडल फ़ाइलों का कोड जानबूझकर सीधा-सादा रखा गया है, बिना अतिरिक्त एब्स्ट्रैक्शन के, ताकि शोधकर्ता अतिरिक्त अमूर्तताओं और फ़ाइलों के बीच कूदे बिना प्रत्येक मॉडल पर जल्दी से पुनरावृत्ति कर सकें। 196 - `Trainer` एपीआई हर मॉडल पर काम करने के लिए नहीं बनाई गई है; यह इस लाइब्रेरी द्वारा प्रदान किए गए मॉडलों के लिए अनुकूलित है। यदि आप सामान्य मशीन लर्निंग के लिए एक सामान्य-उद्देश्य प्रशिक्षण लूप खोज रहे हैं, तो किसी अन्य लाइब्रेरी का उपयोग करें। 197 - हमारे सर्वोत्तम प्रयासों के बावजूद, [उदाहरण निर्देशिका](https://github.com/huggingface/transformers/tree/main/examples) की स्क्रिप्टें केवल उदाहरण हैं। ज़रूरी नहीं कि वे आपकी विशिष्ट समस्या पर बिना बदलाव के काम करें; आपको अपनी आवश्यकता के अनुसार कोड की कुछ पंक्तियाँ बदलनी पड़ सकती हैं। 198 199 ## स्थापित करना 200 201 ### पिप का उपयोग करना 202 203 इस रिपॉजिटरी का परीक्षण Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ और TensorFlow 2.3+ के तहत किया गया है। 204 205 आपको 🤗 ट्रांसफॉर्मर को एक [वर्चुअल एनवायरनमेंट](https://docs.python.org/3/library/venv.html) में इंस्टॉल करना चाहिए। यदि आप पायथन के वर्चुअल एनवायरनमेंट से परिचित नहीं हैं, तो कृपया यह [उपयोगकर्ता गाइड](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) पढ़ें। 206 207 सबसे पहले, पायथन के उस संस्करण से एक वर्चुअल एनवायरनमेंट बनाएं जिसका आप उपयोग करना चाहते हैं, और उसे सक्रिय करें। 208 209 फिर, आपको Flax, PyTorch या TensorFlow में से किसी एक को स्थापित करना होगा। अपने प्लेटफ़ॉर्म पर इन फ़्रेमवर्क को स्थापित करने के लिए [TensorFlow स्थापना पृष्ठ](https://www.tensorflow.org/install/), [PyTorch स्थापना पृष्ठ](https://pytorch.org/get-started/locally/#start-locally) या [Flax स्थापना पृष्ठ](https://github.com/google/flax#quick-install) देखें। 210 211 जब इनमें से कोई एक बैकएंड सफलतापूर्वक स्थापित हो जाए, तो 🤗 ट्रांसफॉर्मर निम्नानुसार स्थापित किया जा सकता है: 212 213 ```bash 214 pip install transformers 215 ``` 216 217 यदि आप उदाहरणों को आज़माना चाहते हैं या आधिकारिक रिलीज़ से पहले नवीनतम इन-डेवलपमेंट कोड का उपयोग करना चाहते हैं, तो आपको [सोर्स से इंस्टॉल](https://huggingface.co/docs/transformers/installation#installing-from-source) करना होगा। 218 219 ### कोंडा का उपयोग करना 220 221 ट्रांसफॉर्मर संस्करण 4.0.0 से हमारे पास एक कोंडा चैनल है: `huggingface`। 222 223 ट्रांसफॉर्मर को कोंडा के माध्यम से निम्नानुसार स्थापित किया जा सकता है: 224 225 ```bash 226 conda install -c huggingface transformers 227 ``` 228 229 कोंडा के माध्यम से Flax, PyTorch या TensorFlow में से किसी एक को स्थापित करने के निर्देशों के लिए उनके संबंधित स्थापना पृष्ठ देखें। 230 231 ## मॉडल आर्किटेक्चर 232 🤗 ट्रांसफॉर्मर द्वारा समर्थित [**सभी मॉडल चेकपॉइंट**](https://huggingface.co/models) huggingface.co [मॉडल हब](https://huggingface.co) के साथ निर्बाध रूप से एकीकृत हैं, जहाँ इन्हें सीधे [उपयोगकर्ताओं](https://huggingface.co/users) और [संगठनों](https://huggingface.co/organizations) द्वारा अपलोड किया जाता है। 233 234 चेकपॉइंट की वर्तमान संख्या: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 235 236 🤗 ट्रांसफॉर्मर वर्तमान में निम्नलिखित आर्किटेक्चर का समर्थन करता है (प्रत्येक के उच्च-स्तरीय सारांश के लिए [यहाँ](https://huggingface.co/docs/transformers/model_summary) देखें): 237 238 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (Google Research और Toyota Technological Institute at Chicago से) साथ में पेपर [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), झेंझोंग लैन, मिंगदा चेन, सेबेस्टियन गुडमैन, केविन गिम्पेल, पीयूष शर्मा, राडू सोरिकट द्वारा। 239 1.
**[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 240 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (फेसबुक) साथ थीसिस [बार्ट: प्राकृतिक भाषा निर्माण, अनुवाद के लिए अनुक्रम-से-अनुक्रम पूर्व प्रशिक्षण , और समझ] (https://arxiv.org/pdf/1910.13461.pdf) पर निर्भर माइक लुईस, यिनहान लियू, नमन गोयल, मार्जन ग़ज़विनिनेजाद, अब्देलरहमान मोहम्मद, ओमर लेवी, वेस स्टोयानोव और ल्यूक ज़ेटलमॉयर 241 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (से École polytechnique) साथ थीसिस [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) पर निर्भर Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis रिहाई। 242 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (VinAI Research से) साथ में पेपर [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701)गुयेन लुओंग ट्रान, डुओंग मिन्ह ले और डाट क्वोक गुयेन द्वारा पोस्ट किया गया। 243 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (Microsoft से) साथ में कागज [BEiT: BERT इमेज ट्रांसफॉर्मर्स का प्री-ट्रेनिंग](https://arxiv.org/abs/2106.08254) Hangbo Bao, Li Dong, Furu Wei द्वारा। 244 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (गूगल से) साथ वाला पेपर [बीईआरटी: प्री-ट्रेनिंग ऑफ डीप बिडायरेक्शनल ट्रांसफॉर्मर्स फॉर लैंग्वेज अंडरस्टैंडिंग](https://arxiv.org/abs/1810.04805) जैकब डेवलिन, मिंग-वेई चांग, ​​केंटन ली और क्रिस्टीना टौटानोवा द्वारा प्रकाशित किया गया था। . 245 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (गूगल से) साथ देने वाला पेपर [सीक्वेंस जेनरेशन टास्क के लिए प्री-ट्रेंड चेकपॉइंट का इस्तेमाल करना](https ://arxiv.org/abs/1907.12461) साशा रोठे, शशि नारायण, अलियाक्सि सेवेरिन द्वारा। 246 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (VinAI Research से) साथ में पेपर [BERTweet: अंग्रेजी ट्वीट्स के लिए एक पूर्व-प्रशिक्षित भाषा मॉडल] (https://aclanthology.org/2020.emnlp-demos.2/) डाट क्वोक गुयेन, थान वु और अन्ह तुआन गुयेन द्वारा प्रकाशित। 247 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (गूगल रिसर्च से) साथ वाला पेपर [बिग बर्ड: ट्रांसफॉर्मर्स फॉर लॉन्गर सीक्वेंस](https://arxiv .org/abs/2007.14062) मंज़िल ज़हीर, गुरु गुरुगणेश, अविनावा दुबे, जोशुआ आइंस्ली, क्रिस अल्बर्टी, सैंटियागो ओंटानोन, फिलिप फाम, अनिरुद्ध रावुला, किफ़ान वांग, ली यांग, अमर अहमद द्वारा। 248 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (गूगल रिसर्च से) साथ में पेपर [बिग बर्ड: ट्रांसफॉर्मर्स फॉर लॉन्गर सीक्वेंस](https://arxiv.org/abs/2007.14062) मंज़िल ज़हीर, गुरु गुरुगणेश, अविनावा दुबे, जोशुआ आइंस्ली, क्रिस अल्बर्टी, सैंटियागो ओंटानन, फिलिप फाम द्वारा , अनिरुद्ध रावुला, किफ़ान वांग, ली यांग, अमर अहमद द्वारा पोस्ट किया गया। 249 1. **[BioGpt](https://huggingface.co/docs/transformers/main/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 250 1. 
**[BiT](https://huggingface.co/docs/transformers/main/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 251 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (फेसबुक से) साथ में कागज [एक ओपन-डोमेन चैटबॉट बनाने की विधि](https://arxiv.org /abs/2004.13637) स्टीफन रोलर, एमिली दीनन, नमन गोयल, दा जू, मैरी विलियमसन, यिनहान लियू, जिंग जू, मायल ओट, कर्ट शस्टर, एरिक एम। स्मिथ, वाई-लैन बॉरो, जेसन वेस्टन द्वारा। 252 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (फेसबुक से) साथ में पेपर [एक ओपन-डोमेन चैटबॉट बनाने की रेसिपी](https://arxiv .org/abs/2004.13637) स्टीफन रोलर, एमिली दीनन, नमन गोयल, दा जू, मैरी विलियमसन, यिनहान लियू, जिंग जू, मायल ओट, कर्ट शस्टर, एरिक एम स्मिथ, वाई-लैन बॉरो, जेसन वेस्टन द्वारा। 253 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigSicence Workshop](https://bigscience.huggingface.co/). 254 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (एलेक्सा से) कागज के साथ [बीईआरटी के लिए ऑप्टिमल सबआर्किटेक्चर एक्सट्रैक्शन](https://arxiv.org/abs/ 2010.10499) एड्रियन डी विंटर और डैनियल जे पेरी द्वारा। 255 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (Google अनुसंधान से) साथ में कागज [ByT5: पूर्व-प्रशिक्षित बाइट-टू-बाइट मॉडल के साथ एक टोकन-मुक्त भविष्य की ओर] (https://arxiv.org/abs/2105.13626) Linting Xue, Aditya Barua, Noah Constant, रामी अल-रफू, शरण नारंग, मिहिर काले, एडम रॉबर्ट्स, कॉलिन रैफेल द्वारा पोस्ट किया गया। 256 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (इनरिया/फेसबुक/सोरबोन से) साथ में कागज [CamemBERT: एक टेस्टी फ्रेंच लैंग्वेज मॉडल](https:// arxiv.org/abs/1911.03894) लुई मार्टिन*, बेंजामिन मुलर*, पेड्रो जेवियर ऑर्टिज़ सुआरेज़*, योआन ड्यूपॉन्ट, लॉरेंट रोमरी, एरिक विलेमोन्टे डे ला क्लर्जरी, जैमे सेडाह और बेनोइट सगोट द्वारा। 257 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (Google रिसर्च से) साथ में दिया गया पेपर [कैनाइन: प्री-ट्रेनिंग ए एफिशिएंट टोकनाइजेशन-फ्री एनकोडर फॉर लैंग्वेज रिप्रेजेंटेशन]( https://arxiv.org/abs/2103.06874) जोनाथन एच क्लार्क, डैन गैरेट, यूलिया टर्क, जॉन विएटिंग द्वारा। 258 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 259 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI से) साथ वाला पेपर [लर्निंग ट्रांसफरेबल विजुअल मॉडल फ्रॉम नेचुरल लैंग्वेज सुपरविजन](https://arxiv.org /abs/2103.00020) एलेक रैडफोर्ड, जोंग वूक किम, क्रिस हैलासी, आदित्य रमेश, गेब्रियल गोह, संध्या अग्रवाल, गिरीश शास्त्री, अमांडा एस्केल, पामेला मिश्किन, जैक क्लार्क, ग्रेचेन क्रुएगर, इल्या सुत्स्केवर द्वारा। 260 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. 261 1. 
**[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (सेल्सफोर्स से) साथ में पेपर [प्रोग्राम सिंथेसिस के लिए एक संवादात्मक प्रतिमान](https://arxiv.org/abs/2203.13474) एरिक निजकैंप, बो पैंग, हिरोआकी हयाशी, लिफू तू, हुआन वांग, यिंगबो झोउ, सिल्वियो सावरेस, कैमिंग जिओंग रिलीज। 262 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (माइक्रोसॉफ्ट रिसर्च एशिया से) कागज के साथ [फास्ट ट्रेनिंग कन्वर्जेंस के लिए सशर्त डीईटीआर](https://arxiv. org/abs/2108.06152) डेपू मेंग, ज़ियाओकांग चेन, ज़ेजिया फैन, गैंग ज़ेंग, होउकियांग ली, युहुई युआन, लेई सन, जिंगडोंग वांग द्वारा। 263 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (YituTech से) साथ में कागज [ConvBERT: स्पैन-आधारित डायनेमिक कनवल्शन के साथ BERT में सुधार](https://arxiv .org/abs/2008.02496) जिहांग जियांग, वीहाओ यू, डाकान झोउ, युनपेंग चेन, जियाशी फेंग, शुइचेंग यान द्वारा। 264 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (Facebook AI से) साथ वाला पेपर [A ConvNet for the 2020s](https://arxiv.org/abs /2201.03545) ज़ुआंग लियू, हेंज़ी माओ, चाओ-युआन वू, क्रिस्टोफ़ फीचटेनहोफ़र, ट्रेवर डेरेल, सैनिंग ज़ी द्वारा। 265 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (सिंघुआ यूनिवर्सिटी से) साथ में पेपर [सीपीएम: ए लार्ज-स्केल जेनेरेटिव चाइनीज प्री-ट्रेंड लैंग्वेज मॉडल](https : //arxiv.org/abs/2012.00413) झेंग्यान झांग, जू हान, हाओ झोउ, पेई के, युक्सियन गु, डेमिंग ये, युजिया किन, युशेंग सु, हाओझे जी, जियान गुआन, फैंचाओ क्यूई, ज़ियाओझी वांग, यानान झेंग द्वारा , गुओयांग ज़ेंग, हुआनकी काओ, शेंगकी चेन, डाइक्सुआन ली, ज़ेनबो सन, ज़ियुआन लियू, मिनली हुआंग, वेंटाओ हान, जी तांग, जुआनज़ी ली, ज़ियाओयान झू, माओसोंग सन। 266 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (सेल्सफोर्स से) साथ में पेपर [CTRL: ए कंडिशनल ट्रांसफॉर्मर लैंग्वेज मॉडल फॉर कंट्रोलेबल जेनरेशन](https://arxiv.org/abs/1909.05858) नीतीश शिरीष केसकर*, ब्रायन मैककैन*, लव आर. वार्ष्णेय, कैमिंग जिओंग और रिचर्ड द्वारा सोचर द्वारा जारी किया गया। 267 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (Microsoft से) साथ में दिया गया पेपर [CvT: इंट्रोड्यूसिंग कनवॉल्यूशन टू विजन ट्रांसफॉर्मर्स](https://arxiv.org/ एब्स/2103.15808) हैपिंग वू, बिन जिओ, नोएल कोडेला, मेंगचेन लियू, जियांग दाई, लू युआन, लेई झांग द्वारा। 268 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (फेसबुक से) साथ में कागज [Data2Vec: भाषण, दृष्टि और भाषा में स्व-पर्यवेक्षित सीखने के लिए एक सामान्य ढांचा] (https://arxiv.org/abs/2202.03555) एलेक्सी बाएव्स्की, वेई-निंग सू, कियानटोंग जू, अरुण बाबू, जियाताओ गु, माइकल औली द्वारा पोस्ट किया गया। 269 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (Microsoft से) साथ में दिया गया पेपर [DeBERta: डिकोडिंग-एन्हांस्ड BERT विद डिसेंटैंगल्ड अटेंशन](https://arxiv. org/abs/2006.03654) पेंगचेंग हे, ज़ियाओडोंग लियू, जियानफेंग गाओ, वीज़ू चेन द्वारा। 270 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (Microsoft से) साथ में दिया गया पेपर [DeBERTa: डिकोडिंग-एन्हांस्ड BERT विथ डिसेंन्गल्ड अटेंशन](https: //arxiv.org/abs/2006.03654) पेंगचेंग हे, ज़ियाओडोंग लियू, जियानफेंग गाओ, वीज़ू चेन द्वारा पोस्ट किया गया। 271 1. 
**[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (बर्कले/फेसबुक/गूगल से) पेपर के साथ [डिसीजन ट्रांसफॉर्मर: रीनफोर्समेंट लर्निंग वाया सीक्वेंस मॉडलिंग](https : //arxiv.org/abs/2106.01345) लिली चेन, केविन लू, अरविंद राजेश्वरन, किमिन ली, आदित्य ग्रोवर, माइकल लास्किन, पीटर एबील, अरविंद श्रीनिवास, इगोर मोर्डच द्वारा पोस्ट किया गया। 272 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (सेंसटाइम रिसर्च से) साथ में पेपर [डिफॉर्मेबल डीईटीआर: डिफॉर्मेबल ट्रांसफॉर्मर्स फॉर एंड-टू-एंड ऑब्जेक्ट डिटेक्शन] (https://arxiv.org/abs/2010.04159) Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, जिफेंग दाई द्वारा पोस्ट किया गया। 273 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (फेसबुक से) साथ में पेपर [ट्रेनिंग डेटा-एफिशिएंट इमेज ट्रांसफॉर्मर और डिस्टिलेशन थ्रू अटेंशन](https://arxiv .org/abs/2012.12877) ह्यूगो टौव्रोन, मैथ्यू कॉर्ड, मैथिज्स डूज़, फ़्रांसिस्को मस्सा, एलेक्ज़ेंडर सबलेरोल्स, हर्वे जेगौ द्वारा। 274 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (फेसबुक से) साथ में कागज [ट्रांसफॉर्मर्स के साथ एंड-टू-एंड ऑब्जेक्ट डिटेक्शन](https://arxiv. org/abs/2005.12872) निकोलस कैरियन, फ़्रांसिस्को मस्सा, गेब्रियल सिनेव, निकोलस उसुनियर, अलेक्जेंडर किरिलोव, सर्गेई ज़ागोरुयको द्वारा। 275 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (माइक्रोसॉफ्ट रिसर्च से) कागज के साथ [DialoGPT: बड़े पैमाने पर जनरेटिव प्री-ट्रेनिंग फॉर कन्वर्सेशनल रिस्पांस जेनरेशन](https ://arxiv.org/abs/1911.00536) यिज़े झांग, सिकी सन, मिशेल गैली, येन-चुन चेन, क्रिस ब्रोकेट, जियांग गाओ, जियानफेंग गाओ, जिंगजिंग लियू, बिल डोलन द्वारा। 276 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 277 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (हगिंगफेस से), साथ में कागज [डिस्टिलबर्ट, बीईआरटी का डिस्टिल्ड वर्जन: छोटा, तेज, सस्ता और हल्का] (https://arxiv.org/abs/1910.01108) विक्टर सनह, लिसांड्रे डेब्यू और थॉमस वुल्फ द्वारा पोस्ट किया गया। यही तरीका GPT-2 को [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERta से [DistilRoBERta](https://github.com) पर कंप्रेस करने के लिए भी लागू किया जाता है। / हगिंगफेस/ट्रांसफॉर्मर्स/ट्री/मेन/उदाहरण/डिस्टिलेशन), बहुभाषी BERT से [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) और डिस्टिलबर्ट का जर्मन संस्करण। 278 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [DiT: सेल्फ सुपरवाइज्ड प्री-ट्रेनिंग फॉर डॉक्यूमेंट इमेज ट्रांसफॉर्मर](https://arxiv.org/abs/2203.02378) जुनलॉन्ग ली, यिहेंग जू, टेंगचाओ लव, लेई कुई, चा झांग द्वारा फुरु वेई द्वारा पोस्ट किया गया। 279 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (NAVER से) साथ में कागज [OCR-मुक्त डॉक्यूमेंट अंडरस्टैंडिंग ट्रांसफॉर्मर](https://arxiv.org/abs /2111.15664) गीवूक किम, टीकग्यू होंग, मूनबिन यिम, जियोंग्योन नाम, जिनयॉन्ग पार्क, जिनयॉन्ग यिम, वोनसेओक ह्वांग, सांगडू यूं, डोंगयून हान, सेउंग्युन पार्क द्वारा। 280 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (फेसबुक से) साथ में पेपर [ओपन-डोमेन क्वेश्चन आंसरिंग के लिए डेंस पैसेज रिट्रीवल](https://arxiv. org/abs/2004.04906) व्लादिमीर करपुखिन, बरलास ओज़ुज़, सेवन मिन, पैट्रिक लुईस, लेडेल वू, सर्गेई एडुनोव, डैनकी चेन, और वेन-ताऊ यिह द्वारा। 281 1. 
**[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (इंटेल लैब्स से) साथ में कागज [विज़न ट्रांसफॉर्मर्स फॉर डेंस प्रेडिक्शन](https://arxiv.org /abs/2103.13413) रेने रैनफ्टल, एलेक्सी बोचकोवस्की, व्लादलेन कोल्टन द्वारा। 282 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (Google रिसर्च/स्टैनफोर्ड यूनिवर्सिटी से) साथ में दिया गया पेपर [इलेक्ट्रा: जेनरेटर के बजाय भेदभाव करने वाले के रूप में टेक्स्ट एन्कोडर्स का पूर्व-प्रशिक्षण] (https://arxiv.org/abs/2003.10555) केविन क्लार्क, मिन्ह-थांग लुओंग, क्वोक वी. ले, क्रिस्टोफर डी. मैनिंग द्वारा पोस्ट किया गया। 283 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (Google रिसर्च से) साथ में दिया गया पेपर [सीक्वेंस जेनरेशन टास्क के लिए प्री-ट्रेंड चेकपॉइंट का इस्तेमाल करना](https:/ /arxiv.org/abs/1907.12461) साशा रोठे, शशि नारायण, अलियाक्सि सेवेरिन द्वारा। 284 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)**(Baidu से) साथ देने वाला पेपर [ERNIE: एन्हांस्ड रिप्रेजेंटेशन थ्रू नॉलेज इंटीग्रेशन](https://arxiv.org/abs/1904.09223) यू सन, शुओहुआन वांग, युकुन ली, शिकुन फेंग, ज़ुई चेन, हान झांग, शिन तियान, डैनक्सियांग झू, हाओ तियान, हुआ वू द्वारा पोस्ट किया गया। 285 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (मेटा AI से) ट्रांसफॉर्मर प्रोटीन भाषा मॉडल हैं। **ESM-1b** पेपर के साथ जारी किया गया था [ अलेक्जेंडर राइव्स, जोशुआ मेयर, टॉम सर्कु, सिद्धार्थ गोयल, ज़ेमिंग लिन द्वारा जैविक संरचना और कार्य असुरक्षित सीखने को 250 मिलियन प्रोटीन अनुक्रमों तक स्केल करने से उभरता है] (https://www.pnas.org/content/118/15/e2016239118) जेसन लियू, डेमी गुओ, मायल ओट, सी. लॉरेंस ज़िटनिक, जेरी मा और रॉब फर्गस। **ESM-1v** को पेपर के साथ जारी किया गया था [भाषा मॉडल प्रोटीन फ़ंक्शन पर उत्परिवर्तन के प्रभावों की शून्य-शॉट भविष्यवाणी को सक्षम करते हैं] (https://doi.org/10.1101/2021.07.09.450648) जोशुआ मेयर, रोशन राव, रॉबर्ट वेरकुइल, जेसन लियू, टॉम सर्कु और अलेक्जेंडर राइव्स द्वारा। **ESM-2** को पेपर के साथ जारी किया गया था [भाषा मॉडल विकास के पैमाने पर प्रोटीन अनुक्रम सटीक संरचना भविष्यवाणी को सक्षम करते हैं](https://doi.org/10.1101/2022.07.20.500902) ज़ेमिंग लिन, हलील अकिन, रोशन राव, ब्रायन ही, झोंगकाई झू, वेंटिंग लू, ए द्वारा लान डॉस सैंटोस कोस्टा, मरियम फ़ज़ल-ज़रंडी, टॉम सर्कू, साल कैंडिडो, अलेक्जेंडर राइव्स। 286 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 287 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (CNRS से) साथ वाला पेपर [FlauBERT: Unsupervised Language Model Pre-training for फ़्रेंच](https://arxiv .org/abs/1912.05372) Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, बेंजामिन लेकोउटेक्स, अलेक्जेंड्रे अल्लाउज़ेन, बेनोइट क्रैबे, लॉरेंट बेसेसियर, डिडिएर श्वाब द्वारा। 288 1. 
**[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (FLAVA: A फाउंडेशनल लैंग्वेज एंड विजन अलाइनमेंट मॉडल) (https://arxiv) साथ वाला पेपर .org/abs/2112.04482) अमनप्रीत सिंह, रोंगहांग हू, वेदानुज गोस्वामी, गुइल्यूम कुएरॉन, वोज्शिएक गालुबा, मार्कस रोहरबैक, और डौवे कीला द्वारा। 289 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (गूगल रिसर्च से) साथ वाला पेपर [FNet: मिक्सिंग टोकन विद फूरियर ट्रांसफॉर्म्स](https://arxiv.org /abs/2105.03824) जेम्स ली-थॉर्प, जोशुआ आइंस्ली, इल्या एकस्टीन, सैंटियागो ओंटानन द्वारा। 290 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (सीएमयू/गूगल ब्रेन से) साथ में कागज [फ़नल-ट्रांसफॉर्मर: कुशल भाषा प्रसंस्करण के लिए अनुक्रमिक अतिरेक को छानना](https://arxiv.org/abs/2006.03236) जिहांग दाई, गुओकुन लाई, यिमिंग यांग, क्वोक वी. ले ​​द्वारा रिहाई। 291 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (KAIST से) साथ वाला पेपर [वर्टिकल कटडेप्थ के साथ मोनोकुलर डेप्थ एस्टीमेशन के लिए ग्लोबल-लोकल पाथ नेटवर्क्स](https:/ /arxiv.org/abs/2201.07436) डोयोन किम, वूंगह्युन गा, प्युंगवान आह, डोंगग्यू जू, सेहवान चुन, जुनमो किम द्वारा। 292 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (OpenAI से) साथ में दिया गया पेपर [जेनरेटिव प्री-ट्रेनिंग द्वारा भाषा की समझ में सुधार](https://blog .openai.com/language-unsupervised/) एलेक रैडफोर्ड, कार्तिक नरसिम्हन, टिम सालिमन्स और इल्या सुत्स्केवर द्वारा। 293 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (EleutherAI से) रिपॉजिटरी के साथ [EleutherAI/gpt-neo](https://github.com/ EleutherAI /gpt-neo) रिलीज। सिड ब्लैक, स्टेला बिडरमैन, लियो गाओ, फिल वांग और कॉनर लेही द्वारा पोस्ट किया गया। 294 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (EleutherAI से) पेपर के साथ जारी किया गया [GPT-NeoX-20B: एक ओपन-सोर्स ऑटोरेग्रेसिव लैंग्वेज मॉडल] (https://arxiv.org/abs/2204.06745) सिड ब्लैक, स्टेला बिडरमैन, एरिक हैलाहन, क्वेंटिन एंथोनी, लियो गाओ, लॉरेंस गोल्डिंग, होरेस हे, कॉनर लेही, काइल मैकडोनेल, जेसन फांग, माइकल पाइलर, यूएसवीएसएन साई प्रशांत द्वारा , शिवांशु पुरोहित, लारिया रेनॉल्ड्स, जोनाथन टो, बेन वांग, सैमुअल वेनबैक 295 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (अबेजा के जरिए) शिन्या ओटानी, ताकायोशी मकाबे, अनुज अरोड़ा, क्यो हटोरी द्वारा। 296 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (ओपनएआई से) साथ में पेपर [लैंग्वेज मॉडल्स अनसुपरवाइज्ड मल्टीटास्क लर्नर्स हैं](https://blog.openai.com/better-language-models/) एलेक रैडफोर्ड*, जेफरी वू*, रेवन चाइल्ड, डेविड लुआन, डारियो एमोडी* द्वारा * और इल्या सुत्सकेवर** ने पोस्ट किया। 297 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (EleutherAI से) साथ वाला पेपर [kingoflolz/mesh-transformer-jax](https://github. com/kingoflolz/mesh-transformer-jax/) बेन वांग और अरन कोमात्सुजाकी द्वारा। 298 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA से) साथ में कागज [GroupViT: टेक्स्ट सुपरविजन से सिमेंटिक सेगमेंटेशन इमर्जेस](https://arxiv .org/abs/2202.11094) जियारुई जू, शालिनी डी मेलो, सिफ़ी लियू, वोनमिन बायन, थॉमस ब्रेउएल, जान कौट्ज़, ज़ियाओलोंग वांग द्वारा। 299 1. 
**[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (फेसबुक से) साथ में पेपर [ह्यूबर्ट: सेल्फ सुपरवाइज्ड स्पीच रिप्रेजेंटेशन लर्निंग बाय मास्क्ड प्रेडिक्शन ऑफ हिडन यूनिट्स](https ://arxiv.org/abs/2106.07447) वेई-निंग सू, बेंजामिन बोल्टे, याओ-हंग ह्यूबर्ट त्साई, कुशाल लखोटिया, रुस्लान सालाखुतदीनोव, अब्देलरहमान मोहम्मद द्वारा। 300 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (बर्कले से) साथ में कागज [I-BERT: Integer-only BERT Quantization](https:// arxiv.org/abs/2101.01321) सेहून किम, अमीर घोलमी, ज़ेवेई याओ, माइकल डब्ल्यू महोनी, कर्ट केटज़र द्वारा। 301 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 302 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 303 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 304 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 305 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (माइक्रोसॉफ्ट रिसर्च एशिया से) साथ देने वाला पेपर [लेआउटएलएमवी3: यूनिफाइड टेक्स्ट और इमेज मास्किंग के साथ दस्तावेज़ एआई के लिए पूर्व-प्रशिक्षण](https://arxiv.org/abs/2204.08387) युपन हुआंग, टेंगचाओ लव, लेई कुई, युटोंग लू, फुरु वेई द्वारा पोस्ट किया गया। 306 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 307 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 308 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (मेटा AI से) साथ वाला पेपर [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https:/ /arxiv.org/abs/2104.01136) बेन ग्राहम, अलाएल्डिन एल-नौबी, ह्यूगो टौवरन, पियरे स्टॉक, आर्मंड जौलिन, हर्वे जेगौ, मैथिज डूज़ द्वारा। 309 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (दक्षिण चीन प्रौद्योगिकी विश्वविद्यालय से) साथ में कागज [LiLT: एक सरल लेकिन प्रभावी भाषा-स्वतंत्र लेआउट ट्रांसफार्मर संरचित दस्तावेज़ समझ के लिए](https://arxiv.org/abs/2202.13669) जियापेंग वांग, लियानवेन जिन, काई डिंग द्वारा पोस्ट किया गया। 310 1. 
**[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 311 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (Google Research से) मैंडी गुओ, जोशुआ आइंस्ली, डेविड यूथस, सैंटियागो ओंटानन, जियानमो नि, यूं-हुआन सुंग, यिनफेई यांग द्वारा पोस्ट किया गया। 312 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (स्टूडियो औसिया से) साथ में पेपर [LUKE: डीप कॉन्टेक्स्टुअलाइज्ड एंटिटी रिप्रेजेंटेशन विद एंटिटी-अवेयर सेल्फ-अटेंशन](https://arxiv.org/abs/2010.01057) Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto द्वारा। 313 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (UNC चैपल हिल से) साथ में पेपर [LXMERT: ओपन-डोमेन क्वेश्चन आंसरिंग के लिए ट्रांसफॉर्मर्स से क्रॉस-मोडलिटी एनकोडर रिप्रेजेंटेशन सीखना](https://arxiv.org/abs/1908.07490) हाओ टैन और मोहित बंसल द्वारा। 314 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 315 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (फेसबुक से) साथ देने वाला पेपर [बियॉन्ड इंग्लिश-सेंट्रिक मल्टीलिंगुअल मशीन ट्रांसलेशन](https://arxiv.org/abs/2010.11125) एंजेला फैन, श्रुति भोसले, होल्गर श्वेन्क, झी मा, अहमद अल-किश्की, सिद्धार्थ गोयल, मनदीप बैनेस, ओनूर सेलेबी, गुइल्लाम वेन्जेक, विश्रव चौधरी, नमन गोयल, टॉम बर्च, विटाली लिपचिंस्की, सर्गेई एडुनोव, एडौर्ड ग्रेव, माइकल औली, आर्मंड जौलिन द्वारा पोस्ट किया गया। 316 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Jörg Tiedemann द्वारा [OPUS](http://opus.nlpl.eu/) डेटा से प्रशिक्षित मशीनी अनुवाद मॉडल। [Marian फ्रेमवर्क](https://marian-nmt.github.io/) माइक्रोसॉफ्ट ट्रांसलेटर टीम द्वारा विकसित। 317 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (माइक्रोसॉफ्ट रिसर्च एशिया से) साथ में पेपर [मार्कअपएलएम: विजुअली-रिच डॉक्यूमेंट अंडरस्टैंडिंग के लिए टेक्स्ट और मार्कअप लैंग्वेज की प्री-ट्रेनिंग](https://arxiv.org/abs/2110.08518) जुनलॉन्ग ली, यिहेंग जू, लेई कुई, फुरु वेई द्वारा पोस्ट किया गया। 318 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (मेटा और UIUC से) पेपर के साथ जारी किया गया [प्रति-पिक्सेल वर्गीकरण वह सब नहीं है जिसकी आपको सिमेंटिक सेगमेंटेशन के लिए आवश्यकता है](https://arxiv.org/abs/2107.06278) बोवेन चेंग, अलेक्जेंडर जी. श्विंग, अलेक्जेंडर किरिलोव द्वारा पोस्ट किया गया। 319 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (फेसबुक से) साथ में पेपर [न्यूरल मशीन ट्रांसलेशन के लिए मल्टीलिंगुअल डीनोइजिंग प्री-ट्रेनिंग](https://arxiv.org/abs/2001.08210) यिनहान लियू, जियाताओ गु, नमन गोयल, जियान ली, सर्गेई एडुनोव, मार्जन ग़ज़विनिनेजाद, माइक लुईस, ल्यूक ज़ेटलमॉयर द्वारा। 320 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (फेसबुक से) साथ में पेपर [एक्स्टेंसिबल बहुभाषी प्रीट्रेनिंग और फाइनट्यूनिंग के साथ बहुभाषी अनुवाद](https://arxiv.org/abs/2008.00401) युकिंग टैंग, चाउ ट्रान, जियान ली, पेंग-जेन चेन, नमन गोयल, विश्रव चौधरी, जियाताओ गु, एंजेला फैन द्वारा पोस्ट किया गया। 321 1.
**[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (NVIDIA से) कागज के साथ [Megatron-LM: मॉडल का उपयोग करके बहु-अरब पैरामीटर भाषा मॉडल का प्रशिक्षण Parallelism](https://arxiv.org/abs/1909.08053) मोहम्मद शोएबी, मोस्टोफा पटवारी, राउल पुरी, पैट्रिक लेग्रेस्ले, जेरेड कैस्पर और ब्रायन कैटानज़ारो द्वारा। 322 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (NVIDIA से) साथ वाला पेपर [Megatron-LM: ट्रेनिंग मल्टी-बिलियन पैरामीटर लैंग्वेज मॉडल्स यूजिंग मॉडल पैरेललिज़्म] (https://arxiv.org/abs/1909.08053) मोहम्मद शोएबी, मोस्टोफा पटवारी, राउल पुरी, पैट्रिक लेग्रेस्ले, जेरेड कैस्पर और ब्रायन कैटानज़ारो द्वारा पोस्ट किया गया। 323 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (फ्रॉम Studio Ousia) साथ में पेपर [mLUKE: द पावर ऑफ एंटिटी रिप्रेजेंटेशन इन मल्टीलिंगुअल प्रीट्रेन्ड लैंग्वेज मॉडल्स](https://arxiv.org/abs/2110.08151) रयोकन री, इकुया यामाडा, और योशिमासा त्सुरोका द्वारा। 324 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (सीएमयू/गूगल ब्रेन से) साथ में कागज [मोबाइलबर्ट: संसाधन-सीमित उपकरणों के लिए एक कॉम्पैक्ट टास्क-अज्ञेय बीईआरटी] (https://arxiv.org/abs/2004.02984) Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, और Denny Zhou द्वारा पोस्ट किया गया। 325 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 326 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 327 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (Apple से) साथ में कागज [MobileViT: लाइट-वेट, जनरल-पर्पस, और मोबाइल-फ्रेंडली विजन ट्रांसफॉर्मर] (https://arxiv.org/abs/2110.02178) सचिन मेहता और मोहम्मद रस्तगरी द्वारा पोस्ट किया गया। 328 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 329 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (Google AI से) साथ वाला पेपर [mT5: एक व्यापक बहुभाषी पूर्व-प्रशिक्षित टेक्स्ट-टू-टेक्स्ट ट्रांसफॉर्मर]( https://arxiv.org/abs/2010.11934) लिंटिंग ज़ू, नोआ कॉन्सटेंट, एडम रॉबर्ट्स, मिहिर काले, रामी अल-रफू, आदित्य सिद्धांत, आदित्य बरुआ, कॉलिन रैफेल द्वारा पोस्ट किया गया। 330 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 331 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 332 1. 
**[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (हुआवेई नूह के आर्क लैब से) साथ में कागज़ [NEZHA: चीनी भाषा समझ के लिए तंत्रिका प्रासंगिक प्रतिनिधित्व](https :/ /arxiv.org/abs/1909.00204) जुन्किउ वेई, ज़ियाओज़े रेन, ज़िआओगुआंग ली, वेनयोंग हुआंग, यी लियाओ, याशेंग वांग, जियाशू लिन, शिन जियांग, जिओ चेन और कुन लियू द्वारा। 333 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (फ्रॉम मेटा) साथ में पेपर [नो लैंग्वेज लेफ्ट बिहाइंड: स्केलिंग ह्यूमन-सेंटेड मशीन ट्रांसलेशन] (https://arxiv.org/abs/2207.04672) एनएलएलबी टीम द्वारा प्रकाशित। 334 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (विस्कॉन्सिन विश्वविद्यालय - मैडिसन से) साथ में कागज [Nyströmformer: A Nyström- आधारित एल्गोरिथम आत्म-ध्यान का अनुमान लगाने के लिए ](https://arxiv.org/abs/2102.03902) युनयांग ज़िओंग, झानपेंग ज़ेंग, रुद्रसिस चक्रवर्ती, मिंगक्सिंग टैन, ग्लेन फंग, यिन ली, विकास सिंह द्वारा पोस्ट किया गया। 335 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 336 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (Google AI से) साथ में कागज [विज़न ट्रांसफॉर्मर्स के साथ सिंपल ओपन-वोकैबुलरी ऑब्जेक्ट डिटेक्शन](https:/ /arxiv.org/abs/2205.06230) मैथियास मिंडरर, एलेक्सी ग्रिट्सेंको, ऑस्टिन स्टोन, मैक्सिम न्यूमैन, डिर्क वीसेनबोर्न, एलेक्सी डोसोवित्स्की, अरविंद महेंद्रन, अनुराग अर्नब, मुस्तफा देहघानी, ज़ुओरन शेन, जिओ वांग, ज़ियाओहुआ झाई, थॉमस किफ़, और नील हॉल्सबी द्वारा पोस्ट किया गया। 337 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 338 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (Google की ओर से) साथ में दिया गया पेपर [लंबे इनपुट सारांश के लिए ट्रांसफ़ॉर्मरों को बेहतर तरीके से एक्सटेंड करना](https://arxiv .org/abs/2208.04347) जेसन फांग, याओ झाओ, पीटर जे लियू द्वारा। 339 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (दीपमाइंड से) साथ में पेपर [पर्सीवर आईओ: संरचित इनपुट और आउटपुट के लिए एक सामान्य वास्तुकला] (https://arxiv.org/abs/2107.14795) एंड्रयू जेगल, सेबेस्टियन बोरग्यूड, जीन-बैप्टिस्ट अलायराक, कार्ल डोर्श, कैटलिन इओनेस्कु, डेविड द्वारा डिंग, स्कंद कोप्पुला, डैनियल ज़ोरान, एंड्रयू ब्रॉक, इवान शेलहैमर, ओलिवियर हेनाफ, मैथ्यू एम। बोट्विनिक, एंड्रयू ज़िसरमैन, ओरिओल विनियल्स, जोआओ कैरेरा द्वारा पोस्ट किया गया। 340 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (VinAI Research से) कागज के साथ [PhoBERT: वियतनामी के लिए पूर्व-प्रशिक्षित भाषा मॉडल](https://www .aclweb.org/anthology/2020.findings-emnlp.92/) डैट क्वोक गुयेन और अन्ह तुआन गुयेन द्वारा पोस्ट किया गया। 341 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (UCLA NLP से) साथ वाला पेपर [प्रोग्राम अंडरस्टैंडिंग एंड जेनरेशन के लिए यूनिफाइड प्री-ट्रेनिंग](https://arxiv .org/abs/2103.06333) वसी उद्दीन अहमद, सैकत चक्रवर्ती, बैशाखी रे, काई-वेई चांग द्वारा। 342 1. 
**[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 343 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [ProphetNet: प्रेडिक्टिंग फ्यूचर एन-ग्राम फॉर सीक्वेंस-टू-सीक्वेंस प्री-ट्रेनिंग ](https://arxiv.org/abs/2001.04063) यू यान, वीज़ेन क्यूई, येयुन गोंग, दयाहेंग लियू, नान डुआन, जिउशेंग चेन, रुओफ़ेई झांग और मिंग झोउ द्वारा पोस्ट किया गया। 344 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (NVIDIA से) साथ वाला पेपर [डीप लर्निंग इंफ़ेक्शन के लिए इंटीजर क्वांटिज़ेशन: प्रिंसिपल्स एंड एम्पिरिकल इवैल्यूएशन](https:// arxiv.org/abs/2004.09602) हाओ वू, पैट्रिक जुड, जिआओजी झांग, मिखाइल इसेव और पॉलियस माइकेविसियस द्वारा। 345 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (फेसबुक से) साथ में कागज [रिट्रीवल-ऑगमेंटेड जेनरेशन फॉर नॉलेज-इंटेंसिव एनएलपी टास्क](https://arxiv .org/abs/2005.11401) पैट्रिक लुईस, एथन पेरेज़, अलेक्जेंड्रा पिक्टस, फैबियो पेट्रोनी, व्लादिमीर कारपुखिन, नमन गोयल, हेनरिक कुटलर, माइक लुईस, वेन-ताउ यिह, टिम रॉकटाशेल, सेबस्टियन रिडेल, डौवे कीला द्वारा। 346 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (Google अनुसंधान से) केल्विन गु, केंटन ली, ज़ोरा तुंग, पानुपोंग पसुपत और मिंग-वेई चांग द्वारा साथ में दिया गया पेपर [REALM: रिट्रीवल-ऑगमेंटेड लैंग्वेज मॉडल प्री-ट्रेनिंग](https://arxiv.org/abs/2002.08909)। 347 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 348 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (META रिसर्च से) [डिज़ाइनिंग नेटवर्क डिज़ाइन स्पेस] (https://arxiv.org/) पेपर के साथ जारी किया गया एब्स/2003.13678) इलिजा राडोसावोविक, राज प्रतीक कोसाराजू, रॉस गिर्शिक, कैमिंग ही, पिओटर डॉलर द्वारा। 349 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (गूगल रिसर्च से) साथ वाला पेपर [पूर्व-प्रशिक्षित भाषा मॉडल में एम्बेडिंग कपलिंग पर पुनर्विचार](https://arxiv .org/pdf/2010.12821.pdf) ह्युंग वोन चुंग, थिबॉल्ट फ़ेवरी, हेनरी त्साई, एम. जॉनसन, सेबेस्टियन रुडर द्वारा। 350 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (माइक्रोसॉफ्ट रिसर्च से) [डीप रेसिडुअल लर्निंग फॉर इमेज रिकग्निशन] (https://arxiv. org/abs/1512.03385) कैमिंग हे, जियांग्यु झांग, शाओकिंग रेन, जियान सन द्वारा। 351 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (फेसबुक से), साथ में कागज [मजबूत रूप से अनुकूलित BERT प्रीट्रेनिंग दृष्टिकोण](https://arxiv.org/abs /1907.11692) यिनहान लियू, मायल ओट, नमन गोयल, जिंगफेई डू, मंदार जोशी, डैनकी चेन, ओमर लेवी, माइक लुईस, ल्यूक ज़ेटलमॉयर, वेसेलिन स्टोयानोव द्वारा। 352 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 353 1. 
**[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (झुईई टेक्नोलॉजी से), साथ में पेपर [रोफॉर्मर: रोटरी पोजिशन एंबेडिंग के साथ एन्हांस्ड ट्रांसफॉर्मर] (https://arxiv.org/pdf/2104.09864v1.pdf) जियानलिन सु और यू लू और शेंगफेंग पैन और बो वेन और युनफेंग लियू द्वारा प्रकाशित। 354 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 355 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (ASAPP से) साथ देने वाला पेपर [भाषण पहचान के लिए अनसुपरवाइज्ड प्री-ट्रेनिंग में परफॉर्मेंस-एफिशिएंसी ट्रेड-ऑफ्स](https ://arxiv.org/abs/2109.06870) फेलिक्स वू, क्वांगयुन किम, जिंग पैन, क्यू हान, किलियन क्यू. वेनबर्गर, योव आर्टज़ी द्वारा। 356 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (ASAPP से) साथ में पेपर [भाषण पहचान के लिए अनसुपरवाइज्ड प्री-ट्रेनिंग में परफॉर्मेंस-एफिशिएंसी ट्रेड-ऑफ्स] (https://arxiv.org/abs/2109.06870) फेलिक्स वू, क्वांगयुन किम, जिंग पैन, क्यू हान, किलियन क्यू. वेनबर्गर, योआव आर्टज़ी द्वारा पोस्ट किया गया। 357 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (फेसबुक से), साथ में पेपर [फेयरसेक S2T: फास्ट स्पीच-टू-टेक्स्ट मॉडलिंग विद फेयरसेक](https: //arxiv.org/abs/2010.05171) चांगहान वांग, यूं तांग, जुताई मा, ऐनी वू, दिमित्रो ओखोनको, जुआन पिनो द्वारा पोस्ट किया गया。 358 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (फेसबुक से) साथ में पेपर [लार्ज-स्केल सेल्फ- एंड सेमी-सुपरवाइज्ड लर्निंग फॉर स्पीच ट्रांसलेशन](https://arxiv.org/abs/2104.06678) चांगहान वांग, ऐनी वू, जुआन पिनो, एलेक्सी बेवस्की, माइकल औली, एलेक्सिस द्वारा Conneau द्वारा पोस्ट किया गया। 359 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (तेल अवीव यूनिवर्सिटी से) साथ में पेपर [स्पैन सिलेक्शन को प्री-ट्रेनिंग करके कुछ-शॉट क्वेश्चन आंसरिंग](https:// arxiv.org/abs/2101.00438) ओरि राम, युवल कर्स्टन, जोनाथन बेरेंट, अमीर ग्लोबर्सन, ओमर लेवी द्वारा। 360 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (बर्कले से) कागज के साथ [SqueezeBERT: कुशल तंत्रिका नेटवर्क के बारे में NLP को कंप्यूटर विज़न क्या सिखा सकता है?](https: //arxiv.org/abs/2006.11316) फॉरेस्ट एन. इनडोला, अल्बर्ट ई. शॉ, रवि कृष्णा, और कर्ट डब्ल्यू. केटज़र द्वारा। 361 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (माइक्रोसॉफ्ट से) साथ में कागज [स्वाइन ट्रांसफॉर्मर: शिफ्टेड विंडोज का उपयोग कर पदानुक्रमित विजन ट्रांसफॉर्मर](https://arxiv .org/abs/2103.14030) ज़ी लियू, युटोंग लिन, यू काओ, हान हू, यिक्सुआन वेई, झेंग झांग, स्टीफन लिन, बैनिंग गुओ द्वारा। 362 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (Microsoft से) साथ वाला पेपर [Swin Transformer V2: स्केलिंग अप कैपेसिटी एंड रेजोल्यूशन](https:// ज़ी लियू, हान हू, युटोंग लिन, ज़ुलिआंग याओ, ज़ेंडा ज़ी, यिक्सुआन वेई, जिया निंग, यू काओ, झेंग झांग, ली डोंग, फुरु वेई, बैनिंग गुओ द्वारा arxiv.org/abs/2111.09883। 363 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 364 1. 
**[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (Google AI से) साथ में पेपर [एक एकीकृत टेक्स्ट-टू-टेक्स्ट ट्रांसफॉर्मर के साथ स्थानांतरण सीखने की सीमा की खोज](https://arxiv.org/abs/1910.10683) कॉलिन रैफेल, नोम शज़ीर, एडम रॉबर्ट्स, कैथरीन ली, शरण नारंग, माइकल मटेना, यांकी झोउ, वेई ली और पीटर जे. लियू द्वारा। 365 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (Google AI से) रिपॉजिटरी [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) में कॉलिन रैफेल, नोम शज़ीर, एडम रॉबर्ट्स, कैथरीन ली, शरण नारंग, माइकल मटेना, यांकी झोउ, वेई ली और पीटर जे. लियू द्वारा जारी। 366 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [पबटेबल्स-1एम: टूवर्ड्स कॉम्प्रिहेंसिव टेबल एक्सट्रैक्शन फ्रॉम अनस्ट्रक्चर्ड डॉक्यूमेंट्स](https://arxiv.org/abs/2110.00061) ब्रैंडन स्मॉक, रोहित पेसाला, रॉबिन अब्राहम द्वारा पोस्ट किया गया। 367 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (Google AI से) साथ में पेपर [TAPAS: कमज़ोर पर्यवेक्षण के माध्यम से तालिका पार्सिंग के लिए पूर्व-प्रशिक्षण](https://arxiv.org/abs/2004.02349) जोनाथन हर्ज़िग, पावेल क्रिज़िस्तोफ़ नोवाक, थॉमस मुलर, फ्रांसेस्को पिकिन्नो और जूलियन मार्टिन ईसेन्च्लोस द्वारा। 368 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [TAPEX: टेबल प्री-ट्रेनिंग थ्रू लर्निंग अ न्यूरल SQL एक्ज़ीक्यूटर](https://arxiv.org/abs/2107.07653) कियान लियू, बेई चेन, जियाकी गुओ, मोर्टेज़ा ज़ियादी, ज़ेकी लिन, वीज़ू चेन, जियान-गुआंग लू द्वारा पोस्ट किया गया। 369 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). 370 1. **[TimeSformer](https://huggingface.co/docs/transformers/main/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 371 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 372 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (Google/CMU की ओर से) साथ में पेपर [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) ज़िहांग दाई, ज़ीलिन यांग, यिमिंग यांग, जैम कार्बोनेल, क्वोक वी. ले, रुस्लान सलाखुतदीनोव द्वारा। 373 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 374 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 375 1.
**[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (माइक्रोसॉफ्ट रिसर्च से) साथ में दिया गया पेपर [UniSpeech: यूनिफाइड स्पीच रिप्रेजेंटेशन लर्निंग विद लेबलेड एंड अनलेबल्ड डेटा](https:/ /arxiv.org/abs/2101.07597) चेंगई वांग, यू वू, याओ कियान, केनिची कुमातानी, शुजी लियू, फुरु वेई, माइकल ज़ेंग, ज़ुएदोंग हुआंग द्वारा। 376 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (माइक्रोसॉफ्ट रिसर्च से) कागज के साथ [UNISPEECH-SAT: यूनिवर्सल स्पीच रिप्रेजेंटेशन लर्निंग विद स्पीकर अवेयर प्री-ट्रेनिंग ](https://arxiv.org/abs/2110.05752) सानयुआन चेन, यू वू, चेंग्यी वांग, झेंगयांग चेन, झूओ चेन, शुजी लियू, जियान वू, याओ कियान, फुरु वेई, जिन्यु ली, जियांगज़ान यू द्वारा पोस्ट किया गया। 377 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (सिंघुआ यूनिवर्सिटी और ननकाई यूनिवर्सिटी से) साथ में पेपर [विजुअल अटेंशन नेटवर्क](https://arxiv.org/ pdf/2202.09741.pdf) मेंग-हाओ गुओ, चेंग-ज़े लू, झेंग-निंग लियू, मिंग-मिंग चेंग, शि-मिन हू द्वारा। 378 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (मल्टीमीडिया कम्प्यूटिंग ग्रुप, नानजिंग यूनिवर्सिटी से) साथ में पेपर [वीडियोएमएई: मास्क्ड ऑटोएन्कोडर स्व-पर्यवेक्षित वीडियो प्री-ट्रेनिंग के लिए डेटा-कुशल सीखने वाले हैं] (https://arxiv.org/abs/2203.12602) ज़ान टोंग, यिबिंग सॉन्ग, जुए द्वारा वांग, लिमिन वांग द्वारा पोस्ट किया गया। 379 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (NAVER AI Lab/Kakao Enterprise/Kakao Brain से) साथ में कागज [ViLT: Vision-and-Language Transformer बिना कनवल्शन या रीजन सुपरविजन](https://arxiv.org/abs/2102.03334) वोनजे किम, बोक्यूंग सोन, इल्डू किम द्वारा पोस्ट किया गया। 380 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (गूगल एआई से) कागज के साथ [एक इमेज इज़ वर्थ 16x16 वर्ड्स: ट्रांसफॉर्मर्स फॉर इमेज रिकॉग्निशन एट स्केल](https://arxiv.org/abs/2010.11929) एलेक्सी डोसोवित्स्की, लुकास बेयर, अलेक्जेंडर कोलेसनिकोव, डिर्क वीसेनबोर्न, शियाओहुआ झाई, थॉमस अनटरथिनर, मुस्तफा देहघानी, मैथियास मिंडरर, जॉर्ज हेगोल्ड, सिल्वेन गेली, जैकब उस्ज़कोरेइट द्वारा हॉल्सबी द्वारा पोस्ट किया गया। 381 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (UCLA NLP से) साथ वाला पेपर [VisualBERT: A Simple and Performant Baseline for Vision and Language](https:/ /arxiv.org/pdf/1908.03557) लियुनियन हेरोल्ड ली, मार्क यात्स्कर, दा यिन, चो-जुई हसीह, काई-वेई चांग द्वारा। 382 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/main/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 383 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (मेटा एआई से) साथ में कागज [मास्कड ऑटोएन्कोडर स्केलेबल विजन लर्नर्स हैं](https://arxiv.org/ एब्स/2111.06377) कैमिंग हे, ज़िनेली चेन, सेनिंग ज़ी, यांगहो ली, पिओट्र डॉलर, रॉस गिर्शिक द्वारा। 384 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (मेटा एआई से) साथ में कागज [लेबल-कुशल सीखने के लिए मास्क्ड स्याम देश के नेटवर्क](https://arxiv. org/abs/2204.07141) महमूद असरान, मथिल्डे कैरन, ईशान मिश्रा, पियोट्र बोजानोवस्की, फ्लोरियन बोर्डेस, पास्कल विंसेंट, आर्मंड जौलिन, माइकल रब्बत, निकोलस बल्लास द्वारा। 385 1. 
**[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (फेसबुक एआई से) साथ में पेपर [wav2vec 2.0: ए फ्रेमवर्क फॉर सेल्फ-सुपरवाइज्ड लर्निंग ऑफ स्पीच रिप्रेजेंटेशन] (https://arxiv.org/abs/2006.11477) एलेक्सी बेवस्की, हेनरी झोउ, अब्देलरहमान मोहम्मद, माइकल औली द्वारा। 386 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (Facebook AI से) साथ वाला पेपर [FAIRSEQ S2T: FAIRSEQ के साथ फास्ट स्पीच-टू-टेक्स्ट मॉडलिंग ](https://arxiv.org/abs/2010.05171) चांगहान वांग, यूं तांग, जुताई मा, ऐनी वू, सरव्या पोपुरी, दिमित्रो ओखोनको, जुआन पिनो द्वारा पोस्ट किया गया। 387 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (Facebook AI से) साथ वाला पेपर [सरल और प्रभावी जीरो-शॉट क्रॉस-लिंगुअल फोनेम रिकॉग्निशन](https:/ /arxiv.org/abs/2109.11680) कियानटोंग जू, एलेक्सी बाएव्स्की, माइकल औली द्वारा। 388 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (माइक्रोसॉफ्ट रिसर्च से) पेपर के साथ जारी किया गया [WavLM: फुल स्टैक के लिए बड़े पैमाने पर स्व-पर्यवेक्षित पूर्व-प्रशिक्षण स्पीच प्रोसेसिंग] (https://arxiv.org/abs/2110.13900) सानयुआन चेन, चेंगयी वांग, झेंगयांग चेन, यू वू, शुजी लियू, ज़ुओ चेन, जिन्यु ली, नाओयुकी कांडा, ताकुया योशियोका, ज़िओंग जिओ, जियान वू, लॉन्ग झोउ, शुओ रेन, यानमिन कियान, याओ कियान, जियान वू, माइकल ज़ेंग, फुरु वेई। 389 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (OpenAI से) साथ में कागज [बड़े पैमाने पर कमजोर पर्यवेक्षण के माध्यम से मजबूत भाषण पहचान](https://cdn. openai.com/papers/whisper.pdf) एलेक रैडफोर्ड, जोंग वूक किम, ताओ जू, ग्रेग ब्रॉकमैन, क्रिस्टीन मैकलीवे, इल्या सुत्स्केवर द्वारा। 390 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (माइक्रोसॉफ्ट रिसर्च से) कागज के साथ [एक्सपैंडिंग लैंग्वेज-इमेज प्रीट्रेन्ड मॉडल फॉर जनरल वीडियो रिकग्निशन](https: //arxiv.org/abs/2208.02816) बोलिन नी, होउवेन पेंग, मिंगाओ चेन, सोंगयांग झांग, गाओफेंग मेंग, जियानलोंग फू, शिमिंग जियांग, हैबिन लिंग द्वारा। 391 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 392 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (फेसबुक से) साथ में पेपर [क्रॉस-लिंगुअल लैंग्वेज मॉडल प्रीट्रेनिंग] (https://arxiv.org/abs/1901.07291) गिलाउम लैम्पल और एलेक्सिस कोनो द्वारा। 393 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (माइक्रोसॉफ्ट रिसर्च से) साथ में कागज [ProphetNet: प्रेडिक्टिंग फ्यूचर एन-ग्राम फॉर सीक्वेंस-टू- सीक्वेंस प्री-ट्रेनिंग](https://arxiv.org/abs/2001.04063) यू यान, वीज़ेन क्यूई, येयुन गोंग, दयाहेंग लियू, नान डुआन, जिउशेंग चेन, रुओफ़ेई झांग और मिंग झोउ द्वारा। 394 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (फेसबुक एआई से), साथ में पेपर [अनसुपरवाइज्ड क्रॉस-लिंगुअल रिप्रेजेंटेशन लर्निंग एट स्केल] (https://arxiv.org/abs/1911.02116) एलेक्सिस कोन्यू*, कार्तिकेय खंडेलवाल*, नमन गोयल, विश्रव चौधरी, गिलाउम वेनज़ेक, फ्रांसिस्को गुज़मैन द्वारा , एडौर्ड ग्रेव, मायल ओट, ल्यूक ज़ेटलमॉयर और वेसेलिन स्टोयानोव द्वारा। 395 1. 
**[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
396 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
397 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
398 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
399 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
400 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper "You Only Sample (Almost) Once" by Zhanpeng Zeng, Yunyang Xiong, Satya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
401 1. Want to contribute a new model?
We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) directory. Remember to check the [contributing guidelines](./CONTRIBUTING.md) and to contact the maintainers or open a new issue to collect feedback before starting your PR.
402 
403 To check whether each model already has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer in the Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
404 
405 These implementations have been tested on several datasets (see the example scripts) and should perform comparably to the original (vanilla) implementations. You can read more about their behavior in [this section](https://huggingface.co/docs/transformers/examples) of the documentation.
406 
407 
408 ## Learn more
409 
410 | Section | Description |
411 |-|-|
412 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
413 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by Transformers |
414 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
415 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
416 | [Quick start: tweaking and use-case scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for a wide range of tasks |
417 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
418 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrating to Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
419 
420 ## Citation
421 
422 We have officially published a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) for this library; if you use the Transformers library, please cite it:
423 ```bibtex
424 @inproceedings{wolf-etal-2020-transformers,
425     title = "Transformers: State-of-the-Art Natural Language Processing",
426     author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
427     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
428     month = oct,
429     year = "2020",
430     address = "Online",
431     publisher = "Association for Computational Linguistics",
432     url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
433     pages = "38--45"
434 }
435 ```
436 
[end of README_hd.md]
[start of README_ja.md]
1 <!---
2 Copyright 2020 The HuggingFace Team. All rights reserved.
3 
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 15 --> 16 17 <!--- 18 A useful guide for English-Traditional Japanese translation of Hugging Face documentation 19 - Use square quotes, e.g.,「引用」 20 21 Dictionary 22 23 API: API(翻訳しない) 24 add: 追加 25 checkpoint: チェックポイント 26 code: コード 27 community: コミュニティ 28 confidence: 信頼度 29 dataset: データセット 30 documentation: ドキュメント 31 example: 例 32 finetune: 微調整 33 Hugging Face: Hugging Face(翻訳しない) 34 implementation: 実装 35 inference: 推論 36 library: ライブラリ 37 module: モジュール 38 NLP/Natural Language Processing: NLPと表示される場合は翻訳されず、Natural Language Processingと表示される場合は翻訳される 39 online demos: オンラインデモ 40 pipeline: pipeline(翻訳しない) 41 pretrained/pretrain: 学習済み 42 Python data structures (e.g., list, set, dict): リスト、セット、ディクショナリと訳され、括弧内は原文英語 43 repository: repository(翻訳しない) 44 summary: 概要 45 token-: token-(翻訳しない) 46 Trainer: Trainer(翻訳しない) 47 transformer: transformer(翻訳しない) 48 tutorial: チュートリアル 49 user: ユーザ 50 --> 51 52 <p align="center"> 53 <br> 54 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 55 <br> 56 <p> 57 <p align="center"> 58 <a href="https://circleci.com/gh/huggingface/transformers"> 59 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 60 </a> 61 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 62 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 63 </a> 64 <a href="https://huggingface.co/docs/transformers/index"> 65 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 66 </a> 67 <a href="https://github.com/huggingface/transformers/releases"> 68 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 69 </a> 70 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 71 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 72 </a> 73 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 74 </p> 75 76 <h4 align="center"> 77 <p> 78 <a href="https://github.com/huggingface/transformers/">English</a> | 79 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | 80 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | 81 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> | 82 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> | 83 <b>日本語</b> | 84 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a> 85 <p> 86 </h4> 87 88 <h3 align="center"> 89 <p>JAX、PyTorch、TensorFlowのための最先端機械学習</p> 90 </h3> 91 92 <h3 align="center"> 93 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 94 </h3> 95 96 
🤗Transformersは、テキスト、視覚、音声などの異なるモダリティに対してタスクを実行するために、事前に学習させた数千のモデルを提供します。 97 98 これらのモデルは次のような場合に適用できます: 99 100 * 📝 テキストは、テキストの分類、情報抽出、質問応答、要約、翻訳、テキスト生成などのタスクのために、100以上の言語に対応しています。 101 * 🖼️ 画像分類、物体検出、セグメンテーションなどのタスクのための画像。 102 * 🗣️ 音声は、音声認識や音声分類などのタスクに使用します。 103 104 トランスフォーマーモデルは、テーブル質問応答、光学文字認識、スキャン文書からの情報抽出、ビデオ分類、視覚的質問応答など、**複数のモダリティを組み合わせた**タスクも実行可能です。 105 106 🤗Transformersは、与えられたテキストに対してそれらの事前学習されたモデルを素早くダウンロードして使用し、あなた自身のデータセットでそれらを微調整し、私たちの[model hub](https://huggingface.co/models)でコミュニティと共有するためのAPIを提供します。同時に、アーキテクチャを定義する各Pythonモジュールは完全にスタンドアロンであり、迅速な研究実験を可能にするために変更することができます。 107 108 🤗Transformersは[Jax](https://jax.readthedocs.io/en/latest/)、[PyTorch](https://pytorch.org/)、[TensorFlow](https://www.tensorflow.org/)という3大ディープラーニングライブラリーに支えられ、それぞれのライブラリをシームレスに統合しています。片方でモデルを学習してから、もう片方で推論用にロードするのは簡単なことです。 109 110 ## オンラインデモ 111 112 [model hub](https://huggingface.co/models)から、ほとんどのモデルのページで直接テストすることができます。また、パブリックモデル、プライベートモデルに対して、[プライベートモデルのホスティング、バージョニング、推論API](https://huggingface.co/pricing)を提供しています。 113 114 以下はその一例です: 115 116 自然言語処理にて: 117 - [BERTによるマスクドワード補完](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 118 - [Electraによる名前実体認識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 119 - [GPT-2によるテキスト生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 120 - [RoBERTaによる自然言語推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 121 - [BARTによる要約](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 122 - 
[DistilBERTによる質問応答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 123 - [T5による翻訳](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 124 125 コンピュータビジョンにて: 126 - [ViTによる画像分類](https://huggingface.co/google/vit-base-patch16-224) 127 - [DETRによる物体検出](https://huggingface.co/facebook/detr-resnet-50) 128 - [SegFormerによるセマンティックセグメンテーション](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) 129 - [DETRによるパノプティックセグメンテーション](https://huggingface.co/facebook/detr-resnet-50-panoptic) 130 131 オーディオにて: 132 - [Wav2Vec2による自動音声認識](https://huggingface.co/facebook/wav2vec2-base-960h) 133 - [Wav2Vec2によるキーワード検索](https://huggingface.co/superb/wav2vec2-base-superb-ks) 134 135 マルチモーダルなタスクにて: 136 - [ViLTによる視覚的質問応答](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) 137 138 Hugging Faceチームによって作られた **[トランスフォーマーを使った書き込み](https://transformer.huggingface.co)** は、このリポジトリのテキスト生成機能の公式デモである。 139 140 ## Hugging Faceチームによるカスタム・サポートをご希望の場合 141 142 <a target="_blank" href="https://huggingface.co/support"> 143 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 144 </a><br> 145 146 ## クイックツアー 147 148 与えられた入力(テキスト、画像、音声、...)に対してすぐにモデルを使うために、我々は`pipeline`というAPIを提供しております。pipelineは、学習済みのモデルと、そのモデルの学習時に使用された前処理をグループ化したものです。以下は、肯定的なテキストと否定的なテキストを分類するためにpipelineを使用する方法です: 149 150 ```python 151 >>> from transformers import pipeline 152 153 # Allocate a pipeline for sentiment-analysis 154 >>> classifier = pipeline('sentiment-analysis') 155 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 156 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 157 ``` 158 159 2行目のコードでは、pipelineで使用される事前学習済みモデルをダウンロードしてキャッシュし、3行目では与えられたテキストに対してそのモデルを評価します。ここでは、答えは99.97%の信頼度で「ポジティブ」です。 160 161 自然言語処理だけでなく、コンピュータビジョンや音声処理においても、多くのタスクにはあらかじめ訓練された`pipeline`が用意されている。例えば、画像から検出された物体を簡単に抽出することができる: 162 163 ``` python 164 >>> import requests 165 >>> from PIL import Image 166 >>> from transformers import pipeline 167 168 # Download an image with cute cats 169 >>> url = 
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" 170 >>> image_data = requests.get(url, stream=True).raw 171 >>> image = Image.open(image_data) 172 173 # Allocate a pipeline for object detection 174 >>> object_detector = pipeline('object-detection') 175 >>> object_detector(image) 176 [{'score': 0.9982201457023621, 177 'label': 'remote', 178 'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}}, 179 {'score': 0.9960021376609802, 180 'label': 'remote', 181 'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}}, 182 {'score': 0.9954745173454285, 183 'label': 'couch', 184 'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}}, 185 {'score': 0.9988006353378296, 186 'label': 'cat', 187 'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}}, 188 {'score': 0.9986783862113953, 189 'label': 'cat', 190 'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}] 191 ``` 192 193 ここでは、画像から検出されたオブジェクトのリストが得られ、オブジェクトを囲むボックスと信頼度スコアが表示されます。左側が元画像、右側が予測結果を表示したものです: 194 195 <h3 align="center"> 196 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a> 197 <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a> 198 </h3> 199 200 [このチュートリアル](https://huggingface.co/docs/transformers/task_summary)では、`pipeline`APIでサポートされているタスクについて詳しく説明しています。 201 202 `pipeline`に加えて、与えられたタスクに学習済みのモデルをダウンロードして使用するために必要なのは、3行のコードだけです。以下はPyTorchのバージョンです: 203 ```python 204 >>> from transformers import AutoTokenizer, AutoModel 205 206 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 207 >>> model = AutoModel.from_pretrained("bert-base-uncased") 208 209 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 210 >>> outputs = model(**inputs) 211 ``` 212 213 And here is the equivalent code for TensorFlow: 214 ```python 215 >>> from transformers import AutoTokenizer, TFAutoModel 216 217 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 218 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 219 220 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 221 >>> outputs = model(**inputs) 222 ``` 223 224 トークナイザは学習済みモデルが期待するすべての前処理を担当し、単一の文字列 (上記の例のように) またはリストに対して直接呼び出すことができます。これは下流のコードで使用できる辞書を出力します。また、単純に ** 引数展開演算子を使用してモデルに直接渡すこともできます。 225 226 モデル自体は通常の[Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) または [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (バックエンドによって異なる)で、通常通り使用することが可能です。[このチュートリアル](https://huggingface.co/docs/transformers/training)では、このようなモデルを従来のPyTorchやTensorFlowの学習ループに統合する方法や、私たちの`Trainer`APIを使って新しいデータセットで素早く微調整を行う方法について説明します。 227 228 ## なぜtransformersを使う必要があるのでしょうか? 229 230 1. 使いやすい最新モデル: 231 - 自然言語理解・生成、コンピュータビジョン、オーディオの各タスクで高いパフォーマンスを発揮します。 232 - 教育者、実務者にとっての低い参入障壁。 233 - 学習するクラスは3つだけで、ユーザが直面する抽象化はほとんどありません。 234 - 学習済みモデルを利用するための統一されたAPI。 235 236 1. 低い計算コスト、少ないカーボンフットプリント: 237 - 研究者は、常に再トレーニングを行うのではなく、トレーニングされたモデルを共有することができます。 238 - 実務家は、計算時間や生産コストを削減することができます。 239 - すべてのモダリティにおいて、60,000以上の事前学習済みモデルを持つ数多くのアーキテクチャを提供します。 240 241 1. モデルのライフタイムのあらゆる部分で適切なフレームワークを選択可能: 242 - 3行のコードで最先端のモデルをトレーニング。 243 - TF2.0/PyTorch/JAXフレームワーク間で1つのモデルを自在に移動させる。 244 - 学習、評価、生産に適したフレームワークをシームレスに選択できます。 245 246 1. モデルやサンプルをニーズに合わせて簡単にカスタマイズ可能: 247 - 原著者が発表した結果を再現するために、各アーキテクチャの例を提供しています。 248 - モデル内部は可能な限り一貫して公開されています。 249 - モデルファイルはライブラリとは独立して利用することができ、迅速な実験が可能です。 250 251 ## なぜtransformersを使ってはいけないのでしょうか? 
252 253 - このライブラリは、ニューラルネットのためのビルディングブロックのモジュール式ツールボックスではありません。モデルファイルのコードは、研究者が追加の抽象化/ファイルに飛び込むことなく、各モデルを素早く反復できるように、意図的に追加の抽象化でリファクタリングされていません。 254 - 学習APIはどのようなモデルでも動作するわけではなく、ライブラリが提供するモデルで動作するように最適化されています。一般的な機械学習のループには、別のライブラリ(おそらく[Accelerate](https://huggingface.co/docs/accelerate))を使用する必要があります。 255 - 私たちはできるだけ多くの使用例を紹介するよう努力していますが、[examples フォルダ](https://github.com/huggingface/transformers/tree/main/examples) にあるスクリプトはあくまで例です。あなたの特定の問題に対してすぐに動作するわけではなく、あなたのニーズに合わせるために数行のコードを変更する必要があることが予想されます。 256 257 ## インストール 258 259 ### pipにて 260 261 このリポジトリは、Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+, TensorFlow 2.3+ でテストされています。 262 263 🤗Transformersは[仮想環境](https://docs.python.org/3/library/venv.html)にインストールする必要があります。Pythonの仮想環境に慣れていない場合は、[ユーザーガイド](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)を確認してください。 264 265 まず、使用するバージョンのPythonで仮想環境を作成し、アクティベートします。 266 267 その後、Flax, PyTorch, TensorFlowのうち少なくとも1つをインストールする必要があります。 268 [TensorFlowインストールページ](https://www.tensorflow.org/install/)、[PyTorchインストールページ](https://pytorch.org/get-started/locally/#start-locally)、[Flax](https://github.com/google/flax#quick-install)、[Jax](https://github.com/google/jax#installation)インストールページで、お使いのプラットフォーム別のインストールコマンドを参照してください。 269 270 これらのバックエンドのいずれかがインストールされている場合、🤗Transformersは以下のようにpipを使用してインストールすることができます: 271 272 ```bash 273 pip install transformers 274 ``` 275 276 もしサンプルを試したい、またはコードの最先端が必要で、新しいリリースを待てない場合は、[ライブラリをソースからインストール](https://huggingface.co/docs/transformers/installation#installing-from-source)する必要があります。 277 278 ### condaにて 279 280 Transformersバージョン4.0.0から、condaチャンネルを搭載しました: `huggingface`。 281 282 🤗Transformersは以下のようにcondaを使って設置することができます: 283 284 ```shell script 285 conda install -c huggingface transformers 286 ``` 287 288 Flax、PyTorch、TensorFlowをcondaでインストールする方法は、それぞれのインストールページに従ってください。 289 290 > **_注意:_** Windowsでは、キャッシュの恩恵を受けるために、デベロッパーモードを有効にするよう促されることがあります。このような場合は、[このissue](https://github.com/huggingface/huggingface_hub/issues/1062)でお知らせください。 291 292 ## モデルアーキテクチャ 293 294 🤗Transformersが提供する **[全モデルチェックポイント](https://huggingface.co/models)** は、[ユーザー](https://huggingface.co/users)や[組織](https://huggingface.co/organizations)によって直接アップロードされるhuggingface.co [model hub](https://huggingface.co)からシームレスに統合されています。 295 296 現在のチェックポイント数: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 297 298 🤗Transformersは現在、以下のアーキテクチャを提供しています(それぞれのハイレベルな要約は[こちら](https://huggingface.co/docs/transformers/model_summary)を参照してください): 299 300 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 301 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 302 1. 
**[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 303 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 304 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 305 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 306 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 307 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 308 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 309 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 310 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 311 1. **[BioGpt](https://huggingface.co/docs/transformers/main/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 312 1. **[BiT](https://huggingface.co/docs/transformers/main/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 
313 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 314 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 315 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 316 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 317 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 318 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 319 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 320 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 321 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 322 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. 323 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 324 1. 
**[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 325 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 326 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 327 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 328 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 329 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 330 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 331 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 332 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 333 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 334 1. 
**[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 335 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 336 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 337 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 338 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 339 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 340 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 341 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 342 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 343 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 344 1. 
**[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 345 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 346 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 347 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 348 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 349 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 350 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 351 1. 
**[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 352 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 353 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 354 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 355 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 356 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 357 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 358 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 359 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 360 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 361 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 362 1. 
**[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 363 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 364 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 365 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 366 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 367 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 368 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 369 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 370 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. 371 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 372 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 373 1. 
**[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 374 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 375 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 376 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 377 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 378 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 379 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 380 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 381 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 382 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 383 1. 
**[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 384 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 385 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 386 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 387 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 388 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 389 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 390 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 391 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 392 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 393 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 394 1. 
**[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 395 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 396 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 397 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 398 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 399 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 400 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu. 401 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 402 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 403 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 404 1. 
**[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 405 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 406 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 407 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. 408 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 409 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 410 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 411 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 412 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 413 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 414 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 415 1. 
**[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 416 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 417 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 418 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 419 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 420 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 421 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 422 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 423 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 424 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 425 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 
426 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 427 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 428 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 429 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 430 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 431 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). 432 1. **[TimeSformer](https://huggingface.co/docs/transformers/main/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 433 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 434 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 435 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 436 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 437 1. 
**[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 438 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 439 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 440 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 441 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 442 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 443 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 444 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/main/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 445 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 446 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 447 1. 
**[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 448 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 449 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 450 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 451 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 452 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 453 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 454 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 455 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 456 1. 
**[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 457 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 458 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 459 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 460 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 461 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 462 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. 463 1. 
新しいモデルを投稿したいですか?新しいモデルを追加するためのガイドとして、**詳細なガイドとテンプレート**が追加されました。これらはリポジトリの[`templates`](./templates)フォルダにあります。PRを始める前に、必ず[コントリビューションガイド](./CONTRIBUTING.md)を確認し、メンテナに連絡するか、フィードバックを収集するためにissueを開いてください。 464 465 各モデルがFlax、PyTorch、TensorFlowで実装されているか、🤗Tokenizersライブラリに支えられた関連トークナイザを持っているかは、[この表](https://huggingface.co/docs/transformers/index#supported-frameworks)を参照してください。 466 467 これらの実装はいくつかのデータセットでテストされており(サンプルスクリプトを参照)、オリジナルの実装の性能と一致するはずである。性能の詳細は[documentation](https://github.com/huggingface/transformers/tree/main/examples)のExamplesセクションで見ることができます。 468 469 470 ## さらに詳しく 471 472 | セクション | 概要 | 473 |-|-| 474 | [ドキュメント](https://huggingface.co/docs/transformers/) | 完全なAPIドキュメントとチュートリアル | 475 | [タスク概要](https://huggingface.co/docs/transformers/task_summary) | 🤗Transformersがサポートするタスク | 476 | [前処理チュートリアル](https://huggingface.co/docs/transformers/preprocessing) | モデル用のデータを準備するために`Tokenizer`クラスを使用 | 477 | [トレーニングと微調整](https://huggingface.co/docs/transformers/training) | PyTorch/TensorFlowの学習ループと`Trainer`APIで🤗Transformersが提供するモデルを使用 | 478 | [クイックツアー: 微調整/使用方法スクリプト](https://github.com/huggingface/transformers/tree/main/examples) | 様々なタスクでモデルの微調整を行うためのスクリプト例 | 479 | [モデルの共有とアップロード](https://huggingface.co/docs/transformers/model_sharing) | 微調整したモデルをアップロードしてコミュニティで共有する | 480 | [マイグレーション](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`または`pytorch-pretrained-bert`から🤗Transformers に移行する | 481 482 ## 引用 483 484 🤗 トランスフォーマーライブラリに引用できる[論文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)が出来ました: 485 ```bibtex 486 @inproceedings{wolf-etal-2020-transformers, 487 title = "Transformers: State-of-the-Art Natural Language Processing", 488 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 489 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 490 month = oct, 491 year = "2020", 492 address = "Online", 493 publisher = "Association for Computational Linguistics", 494 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 495 pages = "38--45" 496 } 497 ``` 498 [end of README_ja.md] [start of README_ko.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 
15 --> 16 17 <p align="center"> 18 <br> 19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 20 <br> 21 <p> 22 <p align="center"> 23 <a href="https://circleci.com/gh/huggingface/transformers"> 24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 25 </a> 26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 28 </a> 29 <a href="https://huggingface.co/docs/transformers/index"> 30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 31 </a> 32 <a href="https://github.com/huggingface/transformers/releases"> 33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 34 </a> 35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 37 </a> 38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 39 </p> 40 41 <h4 align="center"> 42 <p> 43 <a href="https://github.com/huggingface/transformers/">English</a> | 44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | 45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | 46 <b>한국어</b> | 47 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> | 48 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> | 49 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a> 50 <p> 51 </h4> 52 53 <h3 align="center"> 54 <p> Jax, Pytorch, TensorFlow를 위한 최첨단 자연어처리</p> 55 </h3> 56 57 <h3 align="center"> 58 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 59 </h3> 60 61 🤗 Transformers는 분류, 정보 추출, 질문 답변, 요약, 번역, 문장 생성 등을 100개 이상의 언어로 수행할 수 있는 수천개의 사전학습된 모델을 제공합니다. 우리의 목표는 모두가 최첨단의 NLP 기술을 쉽게 사용하는 것입니다. 62 63 🤗 Transformers는 이러한 사전학습 모델을 빠르게 다운로드해 특정 텍스트에 사용하고, 원하는 데이터로 fine-tuning해 커뮤니티나 우리의 [모델 허브](https://huggingface.co/models)에 공유할 수 있도록 API를 제공합니다. 또한, 모델 구조를 정의하는 각 파이썬 모듈은 완전히 독립적이여서 연구 실험을 위해 손쉽게 수정할 수 있습니다. 64 65 🤗 Transformers는 가장 유명한 3개의 딥러닝 라이브러리를 지원합니다. 이들은 서로 완벽히 연동됩니다 — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/). 간단하게 이 라이브러리 중 하나로 모델을 학습하고, 또 다른 라이브러리로 추론을 위해 모델을 불러올 수 있습니다. 66 67 ## 온라인 데모 68 69 대부분의 모델을 [모델 허브](https://huggingface.co/models) 페이지에서 바로 테스트해볼 수 있습니다. 공개 및 비공개 모델을 위한 [비공개 모델 호스팅, 버전 관리, 추론 API](https://huggingface.co/pricing)도 제공합니다. 
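모델 허브의 체크포인트는 브라우저 데모뿐만 아니라 로컬에서도 같은 이름으로 불러올 수 있습니다. 다음은 아래 예시 중 첫 번째("BERT로 마스킹된 단어 완성하기")를 `pipeline` API로 로컬에서 재현해보는 최소한의 스케치로, `fill-mask` 태스크와 `bert-base-uncased` 체크포인트를 사용한다고 가정합니다.

```python
>>> from transformers import pipeline

# Load a fill-mask pipeline with the same checkpoint as the demo (assumed: bert-base-uncased)
>>> unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Each prediction is a dict containing the filled-in sequence, its score and the predicted token
>>> predictions = unmasker("Paris is the [MASK] of France.")
>>> [p["token_str"] for p in predictions]
```

`model` 인자에 모델 허브의 다른 체크포인트 이름을 넣으면 같은 방식으로 해당 모델을 로컬에서 테스트해볼 수 있습니다.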
70 71 예시: 72 - [BERT로 마스킹된 단어 완성하기](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 73 - [Electra를 이용한 개체명 인식](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 74 - [GPT-2로 텍스트 생성하기](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 75 - [RoBERTa로 자연어 추론하기](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 76 - [BART를 이용한 요약](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 77 - [DistilBERT를 이용한 질문 답변](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 78 - [T5로 번역하기](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 79 80 **[Transformer와 글쓰기](https://transformer.huggingface.co)** 는 이 저장소의 텍스트 생성 능력에 관한 Hugging Face 팀의 공식 데모입니다. 81 82 ## Hugging Face 팀의 커스텀 지원을 원한다면 83 84 <a target="_blank" href="https://huggingface.co/support"> 85 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 86 </a><br> 87 88 ## 퀵 투어 89 90 원하는 텍스트에 바로 모델을 사용할 수 있도록, 우리는 `pipeline` API를 제공합니다. Pipeline은 사전학습 모델과 그 모델을 학습할 때 적용한 전처리 방식을 하나로 합칩니다. 
다음은 긍정적인 텍스트와 부정적인 텍스트를 분류하기 위해 pipeline을 사용한 간단한 예시입니다: 91 92 ```python 93 >>> from transformers import pipeline 94 95 # Allocate a pipeline for sentiment-analysis 96 >>> classifier = pipeline('sentiment-analysis') 97 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 98 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 99 ``` 100 101 코드의 두번째 줄은 pipeline이 사용하는 사전학습 모델을 다운로드하고 캐시로 저장합니다. 세번째 줄에선 그 모델이 주어진 텍스트를 평가합니다. 여기서 모델은 99.97%의 확률로 텍스트가 긍정적이라고 평가했습니다. 102 103 많은 NLP 과제들을 `pipeline`으로 바로 수행할 수 있습니다. 예를 들어, 질문과 문맥이 주어지면 손쉽게 답변을 추출할 수 있습니다: 104 105 ``` python 106 >>> from transformers import pipeline 107 108 # Allocate a pipeline for question-answering 109 >>> question_answerer = pipeline('question-answering') 110 >>> question_answerer({ 111 ... 'question': 'What is the name of the repository ?', 112 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 113 ... }) 114 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 115 116 ``` 117 118 답변뿐만 아니라, 여기에 사용된 사전학습 모델은 확신도와 토크나이즈된 문장 속 답변의 시작점, 끝점까지 반환합니다. [이 튜토리얼](https://huggingface.co/docs/transformers/task_summary)에서 `pipeline` API가 지원하는 다양한 과제를 확인할 수 있습니다. 119 120 코드 3줄로 원하는 과제에 맞게 사전학습 모델을 다운로드 받고 사용할 수 있습니다. 다음은 PyTorch 버전입니다: 121 ```python 122 >>> from transformers import AutoTokenizer, AutoModel 123 124 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 125 >>> model = AutoModel.from_pretrained("bert-base-uncased") 126 127 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 128 >>> outputs = model(**inputs) 129 ``` 130 다음은 TensorFlow 버전입니다: 131 ```python 132 >>> from transformers import AutoTokenizer, TFAutoModel 133 134 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 135 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 136 137 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 138 >>> outputs = model(**inputs) 139 ``` 140 141 토크나이저는 사전학습 모델의 모든 전처리를 책임집니다. 그리고 (위의 예시처럼) 1개의 스트링이나 리스트도 처리할 수 있습니다. 토크나이저는 딕셔너리를 반환하는데, 이는 다운스트림 코드에 사용하거나 언패킹 연산자 ** 를 이용해 모델에 바로 전달할 수도 있습니다. 142 143 모델 자체는 일반적으로 사용되는 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)나 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)입니다. [이 튜토리얼](https://huggingface.co/transformers/training.html)은 이러한 모델을 표준적인 PyTorch나 TensorFlow 학습 과정에서 사용하는 방법, 또는 새로운 데이터로 fine-tune하기 위해 `Trainer` API를 사용하는 방법을 설명해줍니다. 144 145 ## 왜 transformers를 사용해야 할까요? 146 147 1. 손쉽게 사용할 수 있는 최첨단 모델: 148 - NLU와 NLG 과제에서 뛰어난 성능을 보입니다. 149 - 교육자 실무자에게 진입 장벽이 낮습니다. 150 - 3개의 클래스만 배우면 바로 사용할 수 있습니다. 151 - 하나의 API로 모든 사전학습 모델을 사용할 수 있습니다. 152 153 1. 더 적은 계산 비용, 더 적은 탄소 발자국: 154 - 연구자들은 모델을 계속 다시 학습시키는 대신 학습된 모델을 공유할 수 있습니다. 155 - 실무자들은 학습에 필요한 시간과 비용을 절약할 수 있습니다. 156 - 수십개의 모델 구조, 2,000개 이상의 사전학습 모델, 100개 이상의 언어로 학습된 모델 등. 157 158 1. 모델의 각 생애주기에 적합한 프레임워크: 159 - 코드 3줄로 최첨단 모델을 학습하세요. 160 - 자유롭게 모델을 TF2.0나 PyTorch 프레임워크로 변환하세요. 161 - 학습, 평가, 공개 등 각 단계에 맞는 프레임워크를 원하는대로 선택하세요. 162 163 1. 필요한 대로 모델이나 예시를 커스터마이즈하세요: 164 - 우리는 저자가 공개한 결과를 재현하기 위해 각 모델 구조의 예시를 제공합니다. 165 - 모델 내부 구조는 가능한 일관적으로 공개되어 있습니다. 166 - 빠른 실험을 위해 모델 파일은 라이브러리와 독립적으로 사용될 수 있습니다. 167 168 ## 왜 transformers를 사용하지 말아야 할까요? 169 170 - 이 라이브러리는 신경망 블록을 만들기 위한 모듈이 아닙니다. 연구자들이 여러 파일을 살펴보지 않고 바로 각 모델을 사용할 수 있도록, 모델 파일 코드의 추상화 수준을 적정하게 유지했습니다. 171 - 학습 API는 모든 모델에 적용할 수 있도록 만들어지진 않았지만, 라이브러리가 제공하는 모델들에 적용할 수 있도록 최적화되었습니다. 일반적인 머신 러닝을 위해선, 다른 라이브러리를 사용하세요. 
172 - 가능한 많은 사용 예시를 보여드리고 싶어서, [예시 폴더](https://github.com/huggingface/transformers/tree/main/examples)의 스크립트를 준비했습니다. 이 스크립트들을 수정 없이 특정한 문제에 바로 적용하지 못할 수 있습니다. 필요에 맞게 일부 코드를 수정해야 할 수 있습니다. 173 174 ## 설치 175 176 ### pip로 설치하기 177 178 이 저장소는 Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+, TensorFlow 2.3+에서 테스트 되었습니다. 179 180 [가상 환경](https://docs.python.org/3/library/venv.html)에 🤗 Transformers를 설치하세요. Python 가상 환경에 익숙하지 않다면, [사용자 가이드](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)를 확인하세요. 181 182 우선, 사용할 Python 버전으로 가상 환경을 만들고 실행하세요. 183 184 그 다음, Flax, PyTorch, TensorFlow 중 적어도 하나는 설치해야 합니다. 185 플랫폼에 맞는 설치 명령어를 확인하기 위해 [TensorFlow 설치 페이지](https://www.tensorflow.org/install/), [PyTorch 설치 페이지](https://pytorch.org/get-started/locally/#start-locally), [Flax 설치 페이지](https://github.com/google/flax#quick-install)를 확인하세요. 186 187 이들 중 적어도 하나가 설치되었다면, 🤗 Transformers는 다음과 같이 pip을 이용해 설치할 수 있습니다: 188 189 ```bash 190 pip install transformers 191 ``` 192 193 예시들을 체험해보고 싶거나, 최최최첨단 코드를 원하거나, 새로운 버전이 나올 때까지 기다릴 수 없다면 [라이브러리를 소스에서 바로 설치](https://huggingface.co/docs/transformers/installation#installing-from-source)하셔야 합니다. 194 195 ### conda로 설치하기 196 197 Transformers 버전 v4.0.0부터, conda 채널이 생겼습니다: `huggingface`. 198 199 🤗 Transformers는 다음과 같이 conda로 설치할 수 있습니다: 200 201 ```shell script 202 conda install -c huggingface transformers 203 ``` 204 205 Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는 방법을 확인하세요. 206 207 ## 모델 구조 208 209 **🤗 Transformers가 제공하는 [모든 모델 체크포인트](https://huggingface.co/models)** 는 huggingface.co [모델 허브](https://huggingface.co)에 완벽히 연동되어 있습니다. [개인](https://huggingface.co/users)과 [기관](https://huggingface.co/organizations)이 모델 허브에 직접 업로드할 수 있습니다. 210 211 현재 사용 가능한 모델 체크포인트의 개수: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 212 213 🤗 Transformers는 다음 모델들을 제공합니다 (각 모델의 요약은 [여기](https://huggingface.co/docs/transformers/model_summary)서 확인하세요): 214 215 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 216 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 217 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 218 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 219 1. 
**[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 220 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 221 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 222 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 223 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 224 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 225 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 226 1. **[BioGpt](https://huggingface.co/docs/transformers/main/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 227 1. **[BiT](https://huggingface.co/docs/transformers/main/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 228 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 229 1. 
**[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 230 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 231 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 232 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 233 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 234 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 235 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 236 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 237 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. 238 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 239 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 240 1. 
**[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 241 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 242 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 243 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 244 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 245 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 246 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 247 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 248 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 249 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 250 1. 
**[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 251 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 252 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 253 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 254 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT. 255 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 256 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 257 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 258 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 259 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 260 1. 
**[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 261 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 262 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 263 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 264 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 265 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 266 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 267 1. 
**[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 268 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 269 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 270 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 271 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 272 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 273 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 274 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 275 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 276 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 277 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 278 1. 
**[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 279 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 280 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 281 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 282 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 283 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 284 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 285 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. 286 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 287 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 288 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 289 1. 
**[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 290 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 291 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 292 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 293 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 294 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 295 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 296 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 297 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 298 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 299 1. 
**[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 300 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 301 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 302 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 303 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 304 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 305 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 306 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 307 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 308 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 309 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 310 1. 
**[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 311 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 312 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 313 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 314 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 315 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu. 316 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 317 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 318 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 319 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 320 1. 
**[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 321 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 322 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. 323 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 324 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 325 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 326 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 327 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 328 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 329 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 330 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 331 1. 
**[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 332 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 333 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 334 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 335 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 336 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 337 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 338 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 339 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 340 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 341 1. 
**[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 342 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 343 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 344 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 345 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 346 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). 347 1. **[TimeSformer](https://huggingface.co/docs/transformers/main/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 348 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 349 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 350 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 351 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 352 1. 
**[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 353 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 354 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 355 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 356 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 357 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 358 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 359 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/main/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 360 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 361 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 362 1. 
**[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 363 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 364 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 365 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 366 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 367 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 368 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 369 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 370 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 371 1. 
**[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 372 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 373 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 374 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 375 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 376 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 377 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. 378 1. 새로운 모델을 올리고 싶나요? 우리가 **상세한 가이드와 템플릿** 으로 새로운 모델을 올리도록 도와드릴게요. 가이드와 템플릿은 이 저장소의 [`templates`](./templates) 폴더에서 확인하실 수 있습니다. [컨트리뷰션 가이드라인](./CONTRIBUTING.md)을 꼭 확인해주시고, PR을 올리기 전에 메인테이너에게 연락하거나 이슈를 오픈해 피드백을 받으시길 바랍니다. 379 380 각 모델이 Flax, PyTorch, TensorFlow으로 구현되었는지 또는 🤗 Tokenizers 라이브러리가 지원하는 토크나이저를 사용하는지 확인하려면, [이 표](https://huggingface.co/docs/transformers/index#supported-frameworks)를 확인하세요. 381 382 이 구현은 여러 데이터로 검증되었고 (예시 스크립트를 참고하세요) 오리지널 구현의 성능과 같아야 합니다. [도큐먼트](https://huggingface.co/docs/transformers/examples)의 Examples 섹션에서 성능에 대한 자세한 설명을 확인할 수 있습니다.
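아래는 위 표의 내용을 코드로 직접 확인해 보는 간단한 스케치입니다. 예시로 `bert-base-uncased` 체크포인트를 가정했으며, 세 프레임워크(PyTorch, TensorFlow, Flax)가 모두 설치되어 있다는 전제에서만 그대로 동작합니다.

```python
>>> from transformers import AutoTokenizer, AutoModel, TFAutoModel, FlaxAutoModel

>>> checkpoint = "bert-base-uncased"  # 예시로 가정한 체크포인트

>>> # 🤗 Tokenizers(Rust) 기반의 빠른 토크나이저를 사용하는지 확인
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> tokenizer.is_fast
True

>>> # 같은 체크포인트를 PyTorch / TensorFlow / Flax 구현으로 각각 로드
>>> pt_model = AutoModel.from_pretrained(checkpoint)
>>> tf_model = TFAutoModel.from_pretrained(checkpoint)
>>> flax_model = FlaxAutoModel.from_pretrained(checkpoint)
```

특정 프레임워크 구현이 없는 모델은 해당 Auto 클래스로 로드할 때 오류가 발생하므로, 먼저 위 표에서 지원 여부를 확인하는 것이 좋습니다.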
383 384 ## 더 알아보기 385 386 | 섹션 | 설명 | 387 |-|-| 388 | [도큐먼트](https://huggingface.co/transformers/) | 전체 API 도큐먼트와 튜토리얼 | 389 | [과제 요약](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers가 지원하는 과제들 | 390 | [전처리 튜토리얼](https://huggingface.co/docs/transformers/preprocessing) | `Tokenizer` 클래스를 이용해 모델을 위한 데이터 준비하기 | 391 | [학습과 fine-tuning](https://huggingface.co/docs/transformers/training) | 🤗 Transformers가 제공하는 모델 PyTorch/TensorFlow 학습 과정과 `Trainer` API에서 사용하기 | 392 | [퀵 투어: Fine-tuning/사용 스크립트](https://github.com/huggingface/transformers/tree/main/examples) | 다양한 과제에서 모델 fine-tuning하는 예시 스크립트 | 393 | [모델 공유 및 업로드](https://huggingface.co/docs/transformers/model_sharing) | 커뮤니티에 fine-tune된 모델을 업로드 및 공유하기 | 394 | [마이그레이션](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`나 `pytorch-pretrained-bert`에서 🤗 Transformers로 이동하기| 395 396 ## 인용 397 398 🤗 Transformers 라이브러리를 인용하고 싶다면, 이 [논문](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)을 인용해 주세요: 399 ```bibtex 400 @inproceedings{wolf-etal-2020-transformers, 401 title = "Transformers: State-of-the-Art Natural Language Processing", 402 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 403 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 404 month = oct, 405 year = "2020", 406 address = "Online", 407 publisher = "Association for Computational Linguistics", 408 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 409 pages = "38--45" 410 } 411 ``` 412 [end of README_ko.md] [start of README_zh-hans.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 15 --> 16 17 <!--- 18 A useful guide for English-Chinese translation of Hugging Face documentation 19 - Add space around English words and numbers when they appear between Chinese characters. 
E.g., 共 100 多种语言; 使用 transformers 库。 20 - Use square quotes, e.g.,「引用」 21 22 Dictionary 23 24 Hugging Face: 抱抱脸 25 token: 词符(并用括号标注原英文) 26 tokenize: 词符化(并用括号标注原英文) 27 tokenizer: 词符化器(并用括号标注原英文) 28 transformer: transformer(不翻译) 29 pipeline: 流水线 30 API: API (不翻译) 31 inference: 推理 32 Trainer: 训练器。当作为类名出现时不翻译。 33 pretrained/pretrain: 预训练 34 finetune: 微调 35 community: 社区 36 example: 当特指仓库中 example 目录时翻译为「用例」 37 Python data structures (e.g., list, set, dict): 翻译为列表,集合,词典,并用括号标注原英文 38 NLP/Natural Language Processing: 以 NLP 出现时不翻译,以 Natural Language Processing 出现时翻译为自然语言处理 39 checkpoint: 检查点 40 --> 41 42 <p align="center"> 43 <br> 44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 45 <br> 46 <p> 47 <p align="center"> 48 <a href="https://circleci.com/gh/huggingface/transformers"> 49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 50 </a> 51 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 53 </a> 54 <a href="https://huggingface.co/docs/transformers/index"> 55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 56 </a> 57 <a href="https://github.com/huggingface/transformers/releases"> 58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 59 </a> 60 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 62 </a> 63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 64 </p> 65 66 <h4 align="center"> 67 <p> 68 <a href="https://github.com/huggingface/transformers/">English</a> | 69 <b>简体中文</b> | 70 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | 71 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> | 72 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> | 73 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> | 74 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a> 75 <p> 76 </h4> 77 78 <h3 align="center"> 79 <p>为 Jax、PyTorch 和 TensorFlow 打造的先进的自然语言处理</p> 80 </h3> 81 82 <h3 align="center"> 83 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 84 </h3> 85 86 🤗 Transformers 提供了数以千计的预训练模型,支持 100 多种语言的文本分类、信息抽取、问答、摘要、翻译、文本生成。它的宗旨让最先进的 NLP 技术人人易用。 87 88 🤗 Transformers 提供了便于快速下载和使用的API,让你可以把预训练模型用在给定文本、在你的数据集上微调然后通过 [model hub](https://huggingface.co/models) 与社区共享。同时,每个定义的 Python 模块均完全独立,方便修改和快速研究实验。 89 90 🤗 Transformers 支持三个最热门的深度学习库: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — 并与之无缝整合。你可以直接使用一个框架训练你的模型然后用另一个加载和推理。 91 92 ## 在线演示 93 94 你可以直接在模型页面上测试大多数 [model hub](https://huggingface.co/models) 上的模型。 我们也提供了 [私有模型托管、模型版本管理以及推理API](https://huggingface.co/pricing)。 95 96 这里是一些例子: 97 - [用 BERT 做掩码填词](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 98 - [用 Electra 
做命名实体识别](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 99 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 100 - [用 RoBERTa 做自然语言推理](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 101 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 102 - [用 DistilBERT 做问答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 103 - [用 T5 做翻译](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 104 105 **[Write With Transformer](https://transformer.huggingface.co)**,由抱抱脸团队打造,是一个文本生成的官方 demo。 106 107 ## 如果你在寻找由抱抱脸团队提供的定制化支持服务 108 109 <a target="_blank" href="https://huggingface.co/support"> 110 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 111 </a><br> 112 113 ## 快速上手 114 115 我们为快速使用模型提供了 `pipeline` (流水线)API。流水线聚合了预训练模型和对应的文本预处理。下面是一个快速使用流水线去判断正负面情绪的例子: 116 117 ```python 118 >>> from transformers import pipeline 119 120 # 使用情绪分析流水线 121 >>> classifier = pipeline('sentiment-analysis') 122 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 123 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 124 ``` 125 126 第二行代码下载并缓存了流水线使用的预训练模型,而第三行代码则在给定的文本上进行了评估。这里的答案“正面” (positive) 具有 99 的置信度。 127 128 许多的 NLP 
任务都有开箱即用的预训练流水线。比如说,我们可以轻松的从给定文本中抽取问题答案: 129 130 ``` python 131 >>> from transformers import pipeline 132 133 # 使用问答流水线 134 >>> question_answerer = pipeline('question-answering') 135 >>> question_answerer({ 136 ... 'question': 'What is the name of the repository ?', 137 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 138 ... }) 139 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 140 141 ``` 142 143 除了给出答案,预训练模型还给出了对应的置信度分数、答案在词符化 (tokenized) 后的文本中开始和结束的位置。你可以从[这个教程](https://huggingface.co/docs/transformers/task_summary)了解更多流水线API支持的任务。 144 145 要在你的任务上下载和使用任意预训练模型也很简单,只需三行代码。这里是 PyTorch 版的示例: 146 ```python 147 >>> from transformers import AutoTokenizer, AutoModel 148 149 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 150 >>> model = AutoModel.from_pretrained("bert-base-uncased") 151 152 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 153 >>> outputs = model(**inputs) 154 ``` 155 这里是等效的 TensorFlow 代码: 156 ```python 157 >>> from transformers import AutoTokenizer, TFAutoModel 158 159 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 160 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 161 162 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 163 >>> outputs = model(**inputs) 164 ``` 165 166 词符化器 (tokenizer) 为所有的预训练模型提供了预处理,并可以直接对单个字符串进行调用(比如上面的例子)或对列表 (list) 调用。它会输出一个你可以在下游代码里使用或直接通过 `**` 解包表达式传给模型的词典 (dict)。 167 168 模型本身是一个常规的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取决于你的后端),可以常规方式使用。 [这个教程](https://huggingface.co/transformers/training.html)解释了如何将这样的模型整合到经典的 PyTorch 或 TensorFlow 训练循环中,或是如何使用我们的 `Trainer` 训练器)API 来在一个新的数据集上快速微调。 169 170 ## 为什么要用 transformers? 171 172 1. 便于使用的先进模型: 173 - NLU 和 NLG 上表现优越 174 - 对教学和实践友好且低门槛 175 - 高级抽象,只需了解三个类 176 - 对所有模型统一的API 177 178 1. 更低计算开销,更少的碳排放: 179 - 研究人员可以分享已训练的模型而非每次从头开始训练 180 - 工程师可以减少计算用时和生产环境开销 181 - 数十种模型架构、两千多个预训练模型、100多种语言支持 182 183 1. 对于模型生命周期的每一个部分都面面俱到: 184 - 训练先进的模型,只需 3 行代码 185 - 模型在不同深度学习框架间任意转移,随你心意 186 - 为训练、评估和生产选择最适合的框架,衔接无缝 187 188 1. 为你的需求轻松定制专属模型和用例: 189 - 我们为每种模型架构提供了多个用例来复现原论文结果 190 - 模型内部结构保持透明一致 191 - 模型文件可单独使用,方便魔改和快速实验 192 193 ## 什么情况下我不该用 transformers? 
194 195 - 本库并不是模块化的神经网络工具箱。模型文件中的代码特意呈若璞玉,未经额外抽象封装,以便研究人员快速迭代魔改而不致溺于抽象和文件跳转之中。 196 - `Trainer` API 并非兼容任何模型,只为本库之模型优化。若是在寻找适用于通用机器学习的训练循环实现,请另觅他库。 197 - 尽管我们已尽力而为,[examples 目录](https://github.com/huggingface/transformers/tree/main/examples)中的脚本也仅为用例而已。对于你的特定问题,它们并不一定开箱即用,可能需要改几行代码以适之。 198 199 ## 安装 200 201 ### 使用 pip 202 203 这个仓库已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下经过测试。 204 205 你可以在[虚拟环境](https://docs.python.org/3/library/venv.html)中安装 🤗 Transformers。如果你还不熟悉 Python 的虚拟环境,请阅此[用户说明](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。 206 207 首先,用你打算使用的版本的 Python 创建一个虚拟环境并激活。 208 209 然后,你需要安装 Flax、PyTorch 或 TensorFlow 其中之一。关于在你使用的平台上安装这些框架,请参阅 [TensorFlow 安装页](https://www.tensorflow.org/install/), [PyTorch 安装页](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安装页](https://github.com/google/flax#quick-install)。 210 211 当这些后端之一安装成功后, 🤗 Transformers 可依此安装: 212 213 ```bash 214 pip install transformers 215 ``` 216 217 如果你想要试试用例或者想在正式发布前使用最新的开发中代码,你得[从源代码安装](https://huggingface.co/docs/transformers/installation#installing-from-source)。 218 219 ### 使用 conda 220 221 自 Transformers 4.0.0 版始,我们有了一个 conda 频道: `huggingface`。 222 223 🤗 Transformers 可以通过 conda 依此安装: 224 225 ```shell script 226 conda install -c huggingface transformers 227 ``` 228 229 要通过 conda 安装 Flax、PyTorch 或 TensorFlow 其中之一,请参阅它们各自安装页的说明。 230 231 ## 模型架构 232 233 🤗 Transformers 支持的[**所有的模型检查点**](https://huggingface.co/models)由[用户](https://huggingface.co/users)和[组织](https://huggingface.co/organizations)上传,均与 huggingface.co [model hub](https://huggingface.co) 无缝整合。 234 235 目前的检查点数量: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 236 237 🤗 Transformers 目前支持如下的架构(模型概述请阅[这里](https://huggingface.co/docs/transformers/model_summary)): 238 239 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。 240 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (来自 MIT) 伴随论文 [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) 由 Yuan Gong, Yu-An Chung, James Glass 发布。 241 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。 242 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。 243 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。 244 1. 
**[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。 245 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。 246 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。 247 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。 248 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。 249 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。 250 1. **[BioGpt](https://huggingface.co/docs/transformers/main/model_doc/biogpt)** (来自 Microsoft Research AI4Science) 伴随论文 [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) 由 Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu 发布。 251 1. **[BiT](https://huggingface.co/docs/transformers/main/model_doc/bit)** (来自 Google AI) 伴随论文 [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) 由 Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby 发布。 252 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。 253 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。 254 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 255 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。 256 1.
**[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。 257 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。 258 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。 259 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (来自 OFA-Sys) 伴随论文 [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) 由 An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou 发布。 260 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。 261 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (来自 University of Göttingen) 伴随论文 [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) 由 Timo Lüddecke and Alexander Ecker 发布。 262 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (来自 Salesforce) 伴随论文 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 由 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 发布。 263 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (来自 Microsoft Research Asia) 伴随论文 [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) 由 Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang 发布。 264 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。 265 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (来自 Facebook AI) 伴随论文 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 由 Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 发布。 266 1. 
**[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。 267 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。 268 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (来自 Microsoft) 伴随论文 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 由 Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 发布。 269 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (来自 Facebook) 伴随论文 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 由 Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 发布。 270 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。 271 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。 272 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (来自 Berkeley/Facebook/Google) 伴随论文 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 由 Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 发布。 273 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (来自 SenseTime Research) 伴随论文 [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) 由 Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai 发布。 274 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。 275 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。 276 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。 277 1. 
**[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (来自 SHI Labs) 伴随论文 [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) 由 Ali Hassani and Humphrey Shi 发布。 278 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) 和德语版 DistilBERT。 279 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (来自 Microsoft Research) 伴随论文 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 由 Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 发布。 280 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (来自 NAVER) 伴随论文 [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) 由 Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park 发布。 281 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。 282 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (来自 Intel Labs) 伴随论文 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 由 René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 发布。 283 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。 284 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。 285 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (来自 Baidu) 伴随论文 [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu 发布。 286 1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. 
**ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 287 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 288 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。 289 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (来自 Facebook AI) 伴随论文 [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) 由 Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela 发布。 290 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。 291 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。 292 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (来自 KAIST) 伴随论文 [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) 由 Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim 发布。 293 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。 294 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布。作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy 发布。 295 1. 
**[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 296 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (来自 ABEJA) 由 Shinya Otani, Takayoshi Makabe, Anuj Arora, Kyo Hattori 发布。 297 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。 298 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 伴随仓库 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 由 Ben Wang and Aran Komatsuzaki 发布。 299 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。 300 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。 301 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。 302 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。 303 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 304 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。 305 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。 306 1. 
**[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 由 Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 发布。 307 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。 308 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。 309 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (来自 Meta AI) 伴随论文 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 由 Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 发布。 310 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (来自 South China University of Technology) 伴随论文 [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) 由 Jiapeng Wang, Lianwen Jin, Kai Ding 发布。 311 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。 312 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (来自 Google AI) released 伴随论文 [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) 由 Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang 发布。 313 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。 314 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。 315 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (来自 Facebook) 伴随论文 [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) 由 Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert 发布。 316 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。 317 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。 318 1. 
**[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (来自 Microsoft Research Asia) 伴随论文 [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) 由 Junlong Li, Yiheng Xu, Lei Cui, Furu Wei 发布。 319 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 320 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。 321 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。 322 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。 323 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。 324 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (来自 Studio Ousia) 伴随论文 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 由 Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 发布。 325 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (来自 CMU/Google Brain) 伴随论文 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 由 Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 发布。 326 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (来自 Google Inc.) 伴随论文 [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) 由 Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam 发布。 327 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (来自 Google Inc.) 伴随论文 [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) 由 Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen 发布。 328 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (来自 Apple) 伴随论文 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 由 Sachin Mehta and Mohammad Rastegari 发布。 329 1. 
**[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。 330 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。 331 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (来自 中国人民大学 AI Box) 伴随论文 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 由 Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 发布。 332 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (来自 SHI Labs) 伴随论文 [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) 由 Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi 发布。 333 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (来自华为诺亚方舟实验室) 伴随论文 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 由 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 发布。 334 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (来自 Meta) 伴随论文 [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) 由 the NLLB team 发布。 335 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (来自 the University of Wisconsin - Madison) 伴随论文 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 由 Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 发布。 336 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (来自 Meta AI) 伴随论文 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 由 Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 发布。 337 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (来自 Google AI) 伴随论文 [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) 由 Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby 发布。 338 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。 339 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (来自 Google) 伴随论文 [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) 由 Jason Phang, Yao Zhao, Peter J. Liu 发布。 340 1. 
**[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。 341 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。 342 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (来自 UCLA NLP) 伴随论文 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 由 Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 发布。 343 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (来自 Sea AI Labs) 伴随论文 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 由 Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 发布。 344 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。 345 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。 346 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (来自 Facebook) 伴随论文 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 由 Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 发布。 347 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (来自 Google Research) 伴随论文 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 由 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 发布。 348 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。 349 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 350 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。 351 1. 
**[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 352 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。 353 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (来自 WeChatAI), 伴随论文 [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) 由 HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou 发布。 354 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。 355 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。 356 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。 357 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。 358 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。 359 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。 360 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。 361 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。 362 1. 
**[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (来自 Microsoft) 伴随论文 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 由 Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 发布。 363 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (来自 Microsoft) 伴随论文 [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) 由 Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo 发布。 364 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 365 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。 366 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。 367 1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (来自 Microsoft Research) 伴随论文 [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) 由 Brandon Smock, Rohith Pesala, Robin Abraham 发布。 368 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。 369 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (来自 Microsoft Research) 伴随论文 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 由 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 发布。 370 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). 371 1. **[TimeSformer](https://huggingface.co/docs/transformers/main/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 372 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 373 1. 
**[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。 374 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。 375 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 376 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。 377 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。 378 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (来自 Tsinghua University and Nankai University) 伴随论文 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 由 Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 发布。 379 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (来自 Multimedia Computing Group, Nanjing University) 伴随论文 [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) 由 Zhan Tong, Yibing Song, Jue Wang, Limin Wang 发布。 380 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (来自 NAVER AI Lab/Kakao Enterprise/Kakao Brain) 伴随论文 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 由 Wonjae Kim, Bokyung Son, Ildoo Kim 发布。 381 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。 382 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。 383 1. 
**[ViT Hybrid](https://huggingface.co/docs/transformers/main/model_doc/vit_hybrid)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。 384 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (来自 Meta AI) 伴随论文 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 由 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 发布。 385 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (来自 Meta AI) 伴随论文 [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas 发布. 386 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。 387 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (来自 Facebook AI) 伴随论文 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 发布。 388 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (来自 Facebook AI) 伴随论文 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 由 Qiantong Xu, Alexei Baevski, Michael Auli 发布。 389 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 390 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (来自 OpenAI) 伴随论文 [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) 由 Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever 发布。 391 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (来自 Microsoft Research) 伴随论文 [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) 由 Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling 发布。 392 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 393 1. 
**[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。 394 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。 395 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。 396 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (来自 Facebook AI) 伴随论文 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 由 Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 发布。 397 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 发布。 398 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (来自 Facebook AI) 伴随论文 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 由 Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 发布。 399 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。 400 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (来自 Huazhong University of Science & Technology) 伴随论文 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 由 Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 发布。 401 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (来自 the University of Wisconsin - Madison) 伴随论文 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 由 Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 发布。 402 1. 
想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到他们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。 403 404 要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。 405 406 这些实现均已于多个数据集测试(请参看用例脚本)并应于原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。 407 408 409 ## 了解更多 410 411 | 章节 | 描述 | 412 |-|-| 413 | [文档](https://huggingface.co/transformers/) | 完整的 API 文档和教程 | 414 | [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 | 415 | [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 | 416 | [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 | 417 | [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/main/examples) | 为各种任务提供的用例脚本 | 418 | [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 和社区上传和分享你微调的模型 | 419 | [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers | 420 421 ## 引用 422 423 我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用: 424 ```bibtex 425 @inproceedings{wolf-etal-2020-transformers, 426 title = "Transformers: State-of-the-Art Natural Language Processing", 427 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 428 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 429 month = oct, 430 year = "2020", 431 address = "Online", 432 publisher = "Association for Computational Linguistics", 433 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 434 pages = "38--45" 435 } 436 ``` 437 [end of README_zh-hans.md] [start of README_zh-hant.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 15 --> 16 17 <!--- 18 A useful guide for English-Traditional Chinese translation of Hugging Face documentation 19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多種語言; 使用 transformers 函式庫。 20 - Use square quotes, e.g.,「引用」 21 - Some of terms in the file can be found at National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese. 
22 23 Dictionary 24 25 API: API (不翻譯) 26 add: 加入 27 checkpoint: 檢查點 28 code: 程式碼 29 community: 社群 30 confidence: 信賴度 31 dataset: 資料集 32 documentation: 文件 33 example: 基本翻譯為「範例」,或依語意翻為「例子」 34 finetune: 微調 35 Hugging Face: Hugging Face(不翻譯) 36 implementation: 實作 37 inference: 推論 38 library: 函式庫 39 module: 模組 40 NLP/Natural Language Processing: 以 NLP 出現時不翻譯,以 Natural Language Processing 出現時翻譯為自然語言處理 41 online demos: 線上Demo 42 pipeline: pipeline(不翻譯) 43 pretrained/pretrain: 預訓練 44 Python data structures (e.g., list, set, dict): 翻譯為串列,集合,字典,並用括號標註原英文 45 repository: repository(不翻譯) 46 summary: 概覽 47 token-: token-(不翻譯) 48 Trainer: Trainer(不翻譯) 49 transformer: transformer(不翻譯) 50 tutorial: 教學 51 user: 使用者 52 --> 53 54 <p align="center"> 55 <br> 56 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 57 <br> 58 <p> 59 <p align="center"> 60 <a href="https://circleci.com/gh/huggingface/transformers"> 61 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 62 </a> 63 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 64 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 65 </a> 66 <a href="https://huggingface.co/docs/transformers/index"> 67 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 68 </a> 69 <a href="https://github.com/huggingface/transformers/releases"> 70 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 71 </a> 72 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 73 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 74 </a> 75 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 76 </p> 77 78 <h4 align="center"> 79 <p> 80 <a href="https://github.com/huggingface/transformers/">English</a> | 81 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | 82 <b>繁體中文</b> | 83 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> | 84 <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> | 85 <a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> | 86 <a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a> 87 <p> 88 </h4> 89 90 <h3 align="center"> 91 <p>為 Jax、PyTorch 以及 TensorFlow 打造的先進自然語言處理函式庫</p> 92 </h3> 93 94 <h3 align="center"> 95 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 96 </h3> 97 98 🤗 Transformers 提供了數以千計的預訓練模型,支援 100 多種語言的文本分類、資訊擷取、問答、摘要、翻譯、文本生成。它的宗旨是讓最先進的 NLP 技術人人易用。 99 100 🤗 Transformers 提供了便於快速下載和使用的API,讓你可以將預訓練模型用在給定文本、在你的資料集上微調然後經由 [model hub](https://huggingface.co/models) 與社群共享。同時,每個定義的 Python 模組架構均完全獨立,方便修改和快速研究實驗。 101 102 🤗 Transformers 支援三個最熱門的深度學習函式庫: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 並與之完美整合。你可以直接使用其中一個框架訓練你的模型,然後用另一個載入和推論。 103 104 ## 線上Demo 105 106 你可以直接在 [model hub](https://huggingface.co/models) 上測試大多數的模型。我們也提供了 [私有模型託管、模型版本管理以及推論API](https://huggingface.co/pricing)。 107 108 這裡是一些範例: 109 - [用 BERT 
做遮蓋填詞](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 110 - [用 Electra 做專有名詞辨識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 111 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 112 - [用 RoBERTa 做自然語言推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 113 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 114 - [用 DistilBERT 做問答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 115 - [用 T5 做翻譯](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 116 117 **[Write With Transformer](https://transformer.huggingface.co)**,由 Hugging Face 團隊所打造,是一個文本生成的官方 demo。 118 119 ## 如果你在尋找由 Hugging Face 團隊所提供的客製化支援服務 120 121 <a target="_blank" href="https://huggingface.co/support"> 122 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 123 </a><br> 124 125 ## 快速上手 126 127 我們為快速使用模型提供了 `pipeline` API。 Pipeline 包含了預訓練模型和對應的文本預處理。下面是一個快速使用 pipeline 去判斷正負面情緒的例子: 128 129 ```python 130 >>> from transformers import pipeline 131 132 # 使用情緒分析 pipeline 133 >>> classifier = pipeline('sentiment-analysis') 134 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 135 [{'label': 'POSITIVE', 
'score': 0.9996980428695679}] 136 ``` 137 138 第二行程式碼下載並快取 pipeline 使用的預訓練模型,而第三行程式碼則在給定的文本上進行了評估。這裡的答案“正面” (positive) 具有 99.97% 的信賴度。 139 140 許多的 NLP 任務都有隨選即用的預訓練 `pipeline`。例如,我們可以輕鬆地從給定文本中擷取問題答案: 141 142 ``` python 143 >>> from transformers import pipeline 144 145 # 使用問答 pipeline 146 >>> question_answerer = pipeline('question-answering') 147 >>> question_answerer({ 148 ... 'question': 'What is the name of the repository ?', 149 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 150 ... }) 151 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 152 153 ``` 154 155 除了提供問題解答,預訓練模型還提供了對應的信賴度分數以及解答在 tokenized 後的文本中開始和結束的位置。你可以從[這個教學](https://huggingface.co/docs/transformers/task_summary)了解更多 `pipeline` API支援的任務。 156 157 要在你的任務中下載和使用任何預訓練模型很簡單,只需三行程式碼。這裡是 PyTorch 版的範例: 158 ```python 159 >>> from transformers import AutoTokenizer, AutoModel 160 161 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 162 >>> model = AutoModel.from_pretrained("bert-base-uncased") 163 164 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 165 >>> outputs = model(**inputs) 166 ``` 167 這裡是對應的 TensorFlow 程式碼: 168 ```python 169 >>> from transformers import AutoTokenizer, TFAutoModel 170 171 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 172 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 173 174 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 175 >>> outputs = model(**inputs) 176 ``` 177 178 Tokenizer 為所有的預訓練模型提供了預處理,並可以直接轉換單一字串(比如上面的例子)或串列 (list)。它會輸出一個的字典 (dict) 讓你可以在下游程式碼裡使用或直接藉由 `**` 運算式傳給模型。 179 180 模型本身是一個常規的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取決於你的後端),可依常規方式使用。 [這個教學](https://huggingface.co/transformers/training.html)解釋了如何將這樣的模型整合到一般的 PyTorch 或 TensorFlow 訓練迴圈中,或是如何使用我們的 `Trainer` API 在一個新的資料集上快速進行微調。 181 182 ## 為什麼要用 transformers? 183 184 1. 便於使用的先進模型: 185 - NLU 和 NLG 上性能卓越 186 - 對教學和實作友好且低門檻 187 - 高度抽象,使用者只須學習 3 個類別 188 - 對所有模型使用的制式化API 189 190 1. 更低的運算成本,更少的碳排放: 191 - 研究人員可以分享已訓練的模型而非每次從頭開始訓練 192 - 工程師可以減少計算時間以及生產成本 193 - 數十種模型架構、兩千多個預訓練模型、100多種語言支援 194 195 1. 對於模型生命週期的每一個部分都面面俱到: 196 - 訓練先進的模型,只需 3 行程式碼 197 - 模型可以在不同深度學習框架之間任意轉換 198 - 為訓練、評估和生產選擇最適合的框架,並完美銜接 199 200 1. 為你的需求輕鬆客製化專屬模型和範例: 201 - 我們為每種模型架構提供了多個範例來重現原論文結果 202 - 一致的模型內部架構 203 - 模型檔案可單獨使用,便於修改和快速實驗 204 205 ## 什麼情況下我不該用 transformers? 
206 207 - 本函式庫並不是模組化的神經網絡工具箱。模型文件中的程式碼並未做額外的抽象封裝,以便研究人員快速地翻閱及修改程式碼,而不會深陷複雜的類別包裝之中。 208 - `Trainer` API 並非相容任何模型,它只為本函式庫中的模型最佳化。對於一般的機器學習用途,請使用其他函式庫。 209 - 儘管我們已盡力而為,[examples 目錄](https://github.com/huggingface/transformers/tree/main/examples)中的腳本也僅為範例而已。對於特定問題,它們並不一定隨選即用,可能需要修改幾行程式碼以符合需求。 210 211 ## 安裝 212 213 ### 使用 pip 214 215 這個 Repository 已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下經過測試。 216 217 你可以在[虛擬環境](https://docs.python.org/3/library/venv.html)中安裝 🤗 Transformers。如果你還不熟悉 Python 的虛擬環境,請閱此[使用者指引](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。 218 219 首先,用你打算使用的版本的 Python 創建一個虛擬環境並進入。 220 221 然後,你需要安裝 Flax、PyTorch 或 TensorFlow 其中之一。對於該如何在你使用的平台上安裝這些框架,請參閱 [TensorFlow 安裝頁面](https://www.tensorflow.org/install/), [PyTorch 安裝頁面](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安裝頁面](https://github.com/google/flax#quick-install)。 222 223 當其中一個後端安裝成功後,🤗 Transformers 可依此安裝: 224 225 ```bash 226 pip install transformers 227 ``` 228 229 如果你想要試試範例或者想在正式發布前使用最新開發中的程式碼,你必須[從原始碼安裝](https://huggingface.co/docs/transformers/installation#installing-from-source)。 230 231 ### 使用 conda 232 233 自 Transformers 4.0.0 版始,我們有了一個 conda channel: `huggingface`。 234 235 🤗 Transformers 可以藉由 conda 依此安裝: 236 237 ```shell script 238 conda install -c huggingface transformers 239 ``` 240 241 要藉由 conda 安裝 Flax、PyTorch 或 TensorFlow 其中之一,請參閱它們各自安裝頁面的說明。 242 243 ## 模型架構 244 245 **🤗 Transformers 支援的[所有的模型檢查點](https://huggingface.co/models)**,由[使用者](https://huggingface.co/users)和[組織](https://huggingface.co/organizations)上傳,均與 huggingface.co [model hub](https://huggingface.co) 完美結合。 246 247 目前的檢查點數量: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 248 249 🤗 Transformers 目前支援以下的架構(模型概覽請參閱[這裡](https://huggingface.co/docs/transformers/model_summary)): 250 251 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 252 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 253 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 254 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 255 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 256 1. 
**[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 257 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 258 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 259 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 260 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 261 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 262 1. **[BioGpt](https://huggingface.co/docs/transformers/main/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 263 1. **[BiT](https://huggingface.co/docs/transformers/main/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 264 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 265 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 266 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 267 1. 
**[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 268 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 269 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 270 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 271 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 272 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 273 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. 274 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 275 1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. 276 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 277 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 278 1. 
**[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 279 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 280 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 281 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 282 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 283 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 284 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 285 1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. 286 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 287 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 288 1. 
**[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 289 1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. 290 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT. 291 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 292 1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. 293 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 294 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 295 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 296 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 297 1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu. 298 1. 
**[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives. 299 1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei 300 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 301 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 302 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 303 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 304 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 305 1. 
**[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 306 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 307 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 308 1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori. 309 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 310 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released with the paper [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 311 1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 312 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 313 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 314 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 315 1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. 316 1. 
**[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 317 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 318 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 319 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 320 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 321 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. 322 1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding. 323 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 324 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 325 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 326 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 327 1. 
**[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 328 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 329 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 330 1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. 331 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov 332 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 333 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 334 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 335 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 336 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 337 1. 
**[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 338 1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. 339 1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. 340 1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 341 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 342 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 343 1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 344 1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 345 1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 346 1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team. 347 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 348 1. 
**[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 349 1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. 350 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 351 1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu. 352 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 353 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 354 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 355 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 356 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 357 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 358 1. 
**[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. 359 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 360 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 361 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 362 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 363 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 364 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 365 1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. 366 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 367 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 368 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 369 1. 
**[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 370 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 371 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 372 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 373 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 374 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 375 1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. 376 1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. 377 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 378 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released with the paper [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 379 1. 
**[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 380 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 381 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 382 1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). 383 1. **[TimeSformer](https://huggingface.co/docs/transformers/main/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. 384 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 385 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 386 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 387 1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 388 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 389 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 390 1. 
**[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 391 1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 392 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 393 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 394 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 395 1. **[ViT Hybrid](https://huggingface.co/docs/transformers/main/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 396 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 397 1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 398 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 399 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 400 1. 
**[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 401 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 402 1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 403 1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 404 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 405 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 406 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 407 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 408 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 409 1. 
**[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 410 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 411 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 412 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 413 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. 414 1. 想要貢獻新的模型?我們這裡有一份**詳細指引和模板**來引導你加入新的模型。你可以在 [`templates`](./templates) 目錄中找到它們。記得查看[貢獻指引](./CONTRIBUTING.md)並在開始寫 PR 前聯繫維護人員或開一個新的 issue 來獲得 feedbacks。 415 416 要檢查某個模型是否已有 Flax、PyTorch 或 TensorFlow 的實作,或其是否在🤗 Tokenizers 函式庫中有對應的 tokenizer,敬請參閱[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。 417 418 這些實作均已於多個資料集測試(請參閱範例腳本)並應與原版實作表現相當。你可以在範例文件的[此節](https://huggingface.co/docs/transformers/examples)中了解實作的細節。 419 420 421 ## 了解更多 422 423 | 章節 | 描述 | 424 |-|-| 425 | [文件](https://huggingface.co/transformers/) | 完整的 API 文件和教學 | 426 | [任務概覽](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支援的任務 | 427 | [預處理教學](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 來為模型準備資料 | 428 | [訓練和微調](https://huggingface.co/docs/transformers/training) | 使用 PyTorch/TensorFlow 的內建的訓練方式或於 `Trainer` API 中使用 🤗 Transformers 提供的模型 | 429 | [快速上手:微調和範例腳本](https://github.com/huggingface/transformers/tree/main/examples) | 為各種任務提供的範例腳本 | 430 | [模型分享和上傳](https://huggingface.co/docs/transformers/model_sharing) | 上傳並與社群分享你微調的模型 | 431 | [遷移](https://huggingface.co/docs/transformers/migration) | 從 `pytorch-transformers` 或 `pytorch-pretrained-bert` 遷移到 🤗 Transformers | 432 433 ## 引用 434 435 我們已將此函式庫的[論文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式發表。如果你使用了 🤗 Transformers 函式庫,可以引用: 436 ```bibtex 437 @inproceedings{wolf-etal-2020-transformers, 438 title = "Transformers: State-of-the-Art Natural Language Processing", 439 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama 
Drame and Quentin Lhoest and Alexander M. Rush", 440 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 441 month = oct, 442 year = "2020", 443 address = "Online", 444 publisher = "Association for Computational Linguistics", 445 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 446 pages = "38--45" 447 } 448 ``` 449 [end of README_zh-hant.md] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
huggingface/transformers
c1b9a11dd4be8af32b3274be7c9774d5a917c56d
Tutorial on token classification throws casting error in Tensorflow 2.11 ### System Info - `transformers` version: 4.25.1 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35 - Python version: 3.9.0 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada, the tutorial at `https://huggingface.co/docs/transformers/tasks/token_classification` throws the following error in Tensorflow 2.11 but not in Tensorflow 2.9: `(0) UNIMPLEMENTED: Cast string to float is not supported [[{{node Cast_1}}]] (1) CANCELLED: Function was cancelled before it was started 0 successful operations. ` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The tutorial at `https://huggingface.co/docs/transformers/tasks/token_classification` for Tensorflow ### Expected behavior Training should start, but it does not.
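For context only (this paragraph and sketch are editorial additions, not part of the original report): the failing tutorial boils down to compiling a TF token-classification model with the optimizer returned by transformers' `create_optimizer` and then calling `model.fit`, which is where the cast error surfaces on TF 2.11. The checkpoint name, label count and step counts below are illustrative placeholders rather than values taken verbatim from the tutorial.

```python
from transformers import TFAutoModelForTokenClassification, create_optimizer

# Placeholder model setup; the tutorial itself fine-tunes DistilBERT on WNUT-17.
model = TFAutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=13
)

# create_optimizer returns an AdamWeightDecay instance when weight decay is enabled.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5, num_train_steps=500, num_warmup_steps=0, weight_decay_rate=0.01
)
model.compile(optimizer=optimizer)

# tf_train_set would be a tf.data.Dataset of tokenized examples; on TF 2.11 this
# fit() call is where "Cast string to float is not supported" was reported.
# model.fit(tf_train_set, epochs=1)
```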
Can you please specify where is the error happen? Which step? The error is thrown after `model.fit` Okay I'll have a look Also cc @Rocketknight1 Yeah, I should probably take this one. Investigating! Managed to reproduce it. This is actually a problem with our `AdamWeightDecay`, likely caused by the change in Keras optimizers in 2.11. Figuring out a fix now.
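To make the diagnosis above concrete, here is a minimal sketch (an editorial illustration, not code from the thread) of the Keras change in TF 2.11: the previous optimizer implementations moved under `tf.keras.optimizers.legacy`, while `tf.keras.optimizers.Adam` now points at the rewritten optimizer, so a subclass written against the old internals can select its base class conditionally.

```python
import tensorflow as tf

# TF >= 2.11 keeps the pre-2.11 implementation under tf.keras.optimizers.legacy;
# older releases may not have that namespace at all, hence the hasattr guard.
if hasattr(tf.keras.optimizers, "legacy"):
    BaseAdam = tf.keras.optimizers.legacy.Adam
else:
    BaseAdam = tf.keras.optimizers.Adam

# A subclass that relies on pre-2.11 optimizer internals (as AdamWeightDecay does)
# would inherit from BaseAdam instead of tf.keras.optimizers.Adam directly.
opt = BaseAdam(learning_rate=5e-5)
print(type(opt))
```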
2022-12-12T16:56:36Z
<patch> diff --git a/src/transformers/optimization_tf.py b/src/transformers/optimization_tf.py --- a/src/transformers/optimization_tf.py +++ b/src/transformers/optimization_tf.py @@ -21,6 +21,12 @@ import tensorflow as tf +if hasattr(tf.keras, "optimizer") and hasattr(tf.keras.optimizer, "legacy"): + Adam = tf.keras.optimizer.legacy.Adam +else: + Adam = tf.keras.optimizers.Adam + + class WarmUp(tf.keras.optimizers.schedules.LearningRateSchedule): """ Applies a warmup schedule on a given learning rate decay schedule. @@ -163,7 +169,7 @@ def create_optimizer( return optimizer, lr_schedule -class AdamWeightDecay(tf.keras.optimizers.Adam): +class AdamWeightDecay(Adam): """ Adam enables L2 weight decay and clip_by_global_norm on gradients. Just adding the square of the weights to the loss function is *not* the correct way of using L2 regularization/weight decay with Adam, since that will interact </patch>
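A small follow-up sanity check (again an editorial sketch, not part of the record): since the patch only swaps the base class that `AdamWeightDecay` inherits from, construction through the public helpers is unchanged, and inspecting the MRO shows which `Adam` implementation was picked for the installed TF version.

```python
from transformers import AdamWeightDecay, create_optimizer

# Which Adam ended up as the base class depends on the TF version detected at import time.
print([f"{cls.__module__}.{cls.__name__}" for cls in AdamWeightDecay.__mro__[:3]])

# The public entry point keeps working as before; weight decay > 0 yields AdamWeightDecay.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5, num_train_steps=1000, num_warmup_steps=100, weight_decay_rate=0.01
)
print(type(optimizer).__name__)  # expected: AdamWeightDecay
```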
[]
[]
conan-io__conan-9360
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [question] Is new cmake_layout compatible with cpp_info.components ? Hi, I've been using CMakeToolchain for a while with the CMake build helper that comes with it. But after updating today I saw that `CMake` build helper does not accept a `build_folder` parameter anymore. Looking at why it happened I arrived at https://github.com/conan-io/conan/pull/8554/files#diff-d72013a45b00a0adf06f4536d6a8c8844461e51b72911937d63e6dda9a3d440aR64 this means that the new way of providing the folder topology is via `def layout(self)` right? My first move was to employ the new `cmake_layout` utility, but I was unable to create a package if this one used `self.cpp_info.components` ``` ERROR: ConanException: say/0.1 package_info(): self.cpp_info.components cannot be used with self.cpp_info global values at the same time ``` Conanfile example: ``` from conans import ConanFile, CMake from conan.tools.layout import cmake_layout, LayoutPackager class Pkg(ConanFile): name = "say" version = "0.1" settings = "os", "compiler", "arch", "build_type" generators = "cmake" exports_sources = "src/*" def layout(self): cmake_layout(self) def build(self): cmake = CMake(self) cmake.configure() cmake.build() def package(self): LayoutPackager(self).package() def package_info(self): self.cpp_info.components["say"].libs = ["say"] ``` So, I am doing something wrong? Is this intended or just a bug/missing feature? Thanks! - [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md). </issue> <code> [start of README.rst] 1 |Logo| 2 3 Conan 4 ===== 5 6 Decentralized, open-source (MIT), C/C++ package manager. 7 8 - Homepage: https://conan.io/ 9 - Github: https://github.com/conan-io/conan 10 - Docs: https://docs.conan.io/en/latest/ 11 - Slack: https://cpplang-inviter.cppalliance.org/ (#conan channel) 12 - Twitter: https://twitter.com/conan_io 13 14 15 Conan is a package manager for C and C++ developers: 16 17 - It is fully decentralized. Users can host their packages on their servers, privately. Integrates with Artifactory and Bintray. 18 - Portable. Works across all platforms, including Linux, OSX, Windows (with native and first-class support, WSL, MinGW), 19 Solaris, FreeBSD, embedded and cross-compiling, docker, WSL 20 - Manage binaries. It can create, upload and download binaries for any configuration and platform, 21 even cross-compiling, saving lots of time in development and continuous integration. The binary compatibility can be configured 22 and customized. Manage all your artifacts in the same way on all platforms. 23 - Integrates with any build system, including any proprietary and custom one. Provides tested support for major build systems 24 (CMake, MSBuild, Makefiles, Meson, etc). 25 - Extensible: Its python based recipes, together with extensions points allows for great power and flexibility. 26 - Large and active community, especially in Github (https://github.com/conan-io/conan) and Slack (https://cpplang-inviter.cppalliance.org/ #conan channel). 27 This community also creates and maintains packages in ConanCenter and Bincrafters repositories in Bintray. 28 - Stable. Used in production by many companies, since 1.0 there is a commitment not to break package recipes and documented behavior. 
29 30 31 32 +-------------------------+-------------------------+ 33 | **develop** | **Code Climate** | 34 +=========================+=========================+ 35 | |Build Status Develop| | |Develop climate| | 36 +-------------------------+-------------------------+ 37 38 39 Setup 40 ===== 41 42 Please read https://docs.conan.io/en/latest/installation.html to know how to 43 install and start using Conan. TL;DR: 44 45 .. code-block:: 46 47 $ pip install conan 48 49 50 Install a development version 51 ----------------------------- 52 53 You can run **Conan** client and server in Windows, MacOS, and Linux. 54 55 - **Install pip following** `pip docs`_. 56 57 - **Clone Conan repository:** 58 59 .. code-block:: bash 60 61 $ git clone https://github.com/conan-io/conan.git conan-io 62 63 NOTE: repository directory name matters, some directories are known to be problematic to run tests (e.g. `conan`). `conan-io` directory name was tested and guaranteed to be working. 64 65 - **Install in editable mode** 66 67 .. code-block:: bash 68 69 $ cd conan-io && sudo pip install -e . 70 71 If you are in Windows, using ``sudo`` is not required. 72 73 - **You are ready, try to run Conan:** 74 75 .. code-block:: 76 77 $ conan --help 78 79 Consumer commands 80 install Installs the requirements specified in a conanfile (.py or .txt). 81 config Manages configuration. Edits the conan.conf or installs config files. 82 get Gets a file or list a directory of a given reference or package. 83 info Gets information about the dependency graph of a recipe. 84 search Searches package recipes and binaries in the local cache or in a remote. 85 Creator commands 86 new Creates a new package recipe template with a 'conanfile.py'. 87 create Builds a binary package for a recipe (conanfile.py) located in the current dir. 88 upload Uploads a recipe and binary packages to a remote. 89 export Copies the recipe (conanfile.py & associated files) to your local cache. 90 export-pkg Exports a recipe & creates a package with given files calling 'package'. 91 test Test a package, consuming it with a conanfile recipe with a test() method. 92 Package development commands 93 source Calls your local conanfile.py 'source()' method. 94 build Calls your local conanfile.py 'build()' method. 95 package Calls your local conanfile.py 'package()' method. 96 Misc commands 97 profile Lists profiles in the '.conan/profiles' folder, or shows profile details. 98 remote Manages the remote list and the package recipes associated with a remote. 99 user Authenticates against a remote with user/pass, caching the auth token. 100 imports Calls your local conanfile.py or conanfile.txt 'imports' method. 101 copy Copies conan recipes and packages to another user/channel. 102 remove Removes packages or binaries matching pattern from local cache or remote. 103 alias Creates and exports an 'alias recipe'. 104 download Downloads recipe and binaries to the local cache, without using settings. 105 106 Conan commands. Type "conan <command> -h" for help 107 108 Contributing to the project 109 =========================== 110 111 Feedback and contribution are always welcome in this project. 112 Please read our `contributing guide <https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md>`_. 113 Also, if you plan to contribute, please add some testing for your changes. You can read the `Conan 114 tests guidelines section <https://github.com/conan-io/conan/blob/develop/conans/test/README.md>`_ for 115 some advise on how to write tests for Conan. 
116 117 Running the tests 118 ================= 119 120 Using tox 121 --------- 122 123 .. code-block:: bash 124 125 $ python -m tox 126 127 It will install the needed requirements and launch `pytest` skipping some heavy and slow tests. 128 If you want to run the full test suite: 129 130 .. code-block:: bash 131 132 $ python -m tox -e full 133 134 Without tox 135 ----------- 136 137 **Install python requirements** 138 139 .. code-block:: bash 140 141 $ python -m pip install -r conans/requirements.txt 142 $ python -m pip install -r conans/requirements_server.txt 143 $ python -m pip install -r conans/requirements_dev.txt 144 145 If you are not Windows and you are not using a python virtual environment, you will need to run these 146 commands using `sudo`. 147 148 Before you can run the tests, you need to set a few environment variables first. 149 150 .. code-block:: bash 151 152 $ export PYTHONPATH=$PYTHONPATH:$(pwd) 153 154 On Windows it would be (while being in the Conan root directory): 155 156 .. code-block:: bash 157 158 $ set PYTHONPATH=. 159 160 Ensure that your ``cmake`` has version 2.8 or later. You can see the 161 version with the following command: 162 163 .. code-block:: bash 164 165 $ cmake --version 166 167 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your 168 operating system and your requirements. 169 170 These should work for the GCC from ``build-essential`` on Ubuntu 14.04: 171 172 .. code-block:: bash 173 174 $ export CONAN_COMPILER=gcc 175 $ export CONAN_COMPILER_VERSION=4.8 176 177 These should work for OS X: 178 179 .. code-block:: bash 180 181 $ export CONAN_COMPILER=clang 182 $ export CONAN_COMPILER_VERSION=3.5 183 184 You can run the actual tests like this: 185 186 .. code-block:: bash 187 188 $ python -m pytest . 189 190 191 There are a couple of test attributes defined, as ``slow`` that you can use 192 to filter the tests, and do not execute them: 193 194 .. code-block:: bash 195 196 $ python -m pytest . -m "not slow" 197 198 A few minutes later it should print ``OK``: 199 200 .. code-block:: bash 201 202 ............................................................................................ 203 ---------------------------------------------------------------------- 204 Ran 146 tests in 50.993s 205 206 OK 207 208 To run specific tests, you can specify the test name too, something like: 209 210 .. code-block:: bash 211 212 $ python -m pytest conans/test/unittests/client/cmd/export_test.py::ExportTest::test_export_warning -s 213 214 The ``-s`` argument can be useful to see some output that otherwise is captured by pytest. 215 216 Also, you can run tests against an instance of Artifactory. Those tests should add the attribute 217 ``artifactory_ready``. 218 219 .. code-block:: bash 220 221 $ python -m pytest . -m artifactory_ready 222 223 Some environment variables have to be defined to run them. For example, for an 224 Artifactory instance that is running on the localhost with default user and password configured, the 225 variables could take the values: 226 227 .. code-block:: bash 228 229 $ export CONAN_TEST_WITH_ARTIFACTORY=1 230 $ export ARTIFACTORY_DEFAULT_URL=http://localhost:8081/artifactory 231 $ export ARTIFACTORY_DEFAULT_USER=admin 232 $ export ARTIFACTORY_DEFAULT_PASSWORD=password 233 234 ``ARTIFACTORY_DEFAULT_URL`` is the base url for the Artifactory repo, not one for a specific 235 repository. 
235 repository.
Running the tests with a real Artifactory instance will create repos on the fly so please 236 use a separate server for testing purposes. 237 238 License 239 ------- 240 241 `MIT LICENSE <./LICENSE.md>`__ 242 243 .. |Build Status Develop| image:: https://ci.conan.io/buildStatus/icon?job=ConanTestSuite/develop 244 :target: https://ci.conan.io/job/ConanTestSuite/job/develop/ 245 246 .. |Develop climate| image:: https://api.codeclimate.com/v1/badges/081b53e570d5220b34e4/maintainability.svg 247 :target: https://codeclimate.com/github/conan-io/conan/maintainability 248 249 .. |Logo| image:: https://conan.io/img/jfrog_conan_logo.png 250 251 252 .. _`pip docs`: https://pip.pypa.io/en/stable/installation/ 253 254 [end of README.rst] [start of conans/assets/templates/new_v2_cmake.py] 1 conanfile_sources_v2 = """from conans import ConanFile 2 from conan.tools.cmake import CMakeToolchain, CMake 3 4 5 class {package_name}Conan(ConanFile): 6 name = "{name}" 7 version = "{version}" 8 9 # Optional metadata 10 license = "<Put the package license here>" 11 author = "<Put your name here> <And your email here>" 12 url = "<Package recipe repository url here, for issues about the package>" 13 description = "<Description of {package_name} here>" 14 topics = ("<Put some tag here>", "<here>", "<and here>") 15 16 # Binary configuration 17 settings = "os", "compiler", "build_type", "arch" 18 options = {{"shared": [True, False], "fPIC": [True, False]}} 19 default_options = {{"shared": False, "fPIC": True}} 20 21 # Sources are located in the same place as this recipe, copy them to the recipe 22 exports_sources = "src/*" 23 24 def config_options(self): 25 if self.settings.os == "Windows": 26 del self.options.fPIC 27 28 def layout(self): 29 self.folders.source = "src" 30 self.folders.build = "build/{{}}".format(self.settings.build_type) 31 self.folders.generators = "build/generators" 32 33 def generate(self): 34 tc = CMakeToolchain(self) 35 tc.generate() 36 37 def build(self): 38 cmake = CMake(self) 39 cmake.configure() 40 cmake.build() 41 42 def package(self): 43 self.copy("*.h", dst="include") 44 self.copy("*.lib", dst="lib", keep_path=False) 45 self.copy("*.dll", dst="bin", keep_path=False) 46 self.copy("*.dylib*", dst="lib", keep_path=False) 47 self.copy("*.so", dst="lib", keep_path=False) 48 self.copy("*.a", dst="lib", keep_path=False) 49 50 def package_info(self): 51 self.cpp_info.libs = ["{name}"] 52 """ 53 54 55 test_conanfile_v2 = """import os 56 57 from conans import ConanFile, tools 58 from conan.tools.cmake import CMake 59 60 61 class {package_name}TestConan(ConanFile): 62 settings = "os", "compiler", "build_type", "arch" 63 generators = "CMakeDeps", "CMakeToolchain", "VirtualBuildEnv", "VirtualRunEnv" 64 apply_env = False 65 66 def build(self): 67 cmake = CMake(self) 68 cmake.configure() 69 cmake.build() 70 71 def test(self): 72 if not tools.cross_building(self): 73 self.run(os.path.sep.join([".", "bin", "example"]), env="conanrunenv") 74 """ 75 76 77 test_cmake_v2 = """cmake_minimum_required(VERSION 3.15) 78 project(PackageTest CXX) 79 80 # TODO: Remove this when layouts are available 81 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${{CMAKE_CURRENT_BINARY_DIR}}/bin) 82 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELEASE ${{CMAKE_RUNTIME_OUTPUT_DIRECTORY}}) 83 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_RELWITHDEBINFO ${{CMAKE_RUNTIME_OUTPUT_DIRECTORY}}) 84 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_MINSIZEREL ${{CMAKE_RUNTIME_OUTPUT_DIRECTORY}}) 85 set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG ${{CMAKE_RUNTIME_OUTPUT_DIRECTORY}}) 86 87 
find_package({name} CONFIG REQUIRED) 88 89 add_executable(example example.cpp) 90 target_link_libraries(example {name}::{name}) 91 """ 92 93 94 cmake_v2 = """cmake_minimum_required(VERSION 3.15) 95 project({name} CXX) 96 97 add_library({name} {name}.cpp) 98 """ 99 100 101 source_h = """#pragma once 102 103 #ifdef WIN32 104 #define {name}_EXPORT __declspec(dllexport) 105 #else 106 #define {name}_EXPORT 107 #endif 108 109 {name}_EXPORT void {name}(); 110 """ 111 112 113 source_cpp = """#include <iostream> 114 #include "{name}.h" 115 116 void {name}(){{ 117 #ifdef NDEBUG 118 std::cout << "{name}/{version}: Hello World Release!" <<std::endl; 119 #else 120 std::cout << "{name}/{version}: Hello World Debug!" <<std::endl; 121 #endif 122 }} 123 """ 124 125 126 test_main = """#include "{name}.h" 127 128 int main() {{ 129 {name}(); 130 }} 131 """ 132 133 134 def get_files(name, version, package_name="Pkg"): 135 files = {"conanfile.py": conanfile_sources_v2.format(name=name, version=version, 136 package_name=package_name), 137 "src/{}.cpp".format(name): source_cpp.format(name=name, version=version), 138 "src/{}.h".format(name): source_h.format(name=name, version=version), 139 "src/CMakeLists.txt": cmake_v2.format(name=name, version=version), 140 "test_package/conanfile.py": test_conanfile_v2.format(name=name, 141 version=version, 142 package_name=package_name), 143 "test_package/example.cpp": test_main.format(name=name), 144 "test_package/CMakeLists.txt": test_cmake_v2.format(name=name)} 145 return files 146 [end of conans/assets/templates/new_v2_cmake.py] [start of conans/client/cmd/new.py] 1 import os 2 import re 3 4 from jinja2 import Template 5 6 from conans import __version__ as client_version 7 from conans.client.cmd.new_ci import ci_get_files 8 from conans.errors import ConanException 9 from conans.model.ref import ConanFileReference, get_reference_fields 10 from conans.util.files import load 11 12 13 conanfile = """from conans import ConanFile, CMake, tools 14 15 16 class {package_name}Conan(ConanFile): 17 name = "{name}" 18 version = "{version}" 19 license = "<Put the package license here>" 20 author = "<Put your name here> <And your email here>" 21 url = "<Package recipe repository url here, for issues about the package>" 22 description = "<Description of {package_name} here>" 23 topics = ("<Put some tag here>", "<here>", "<and here>") 24 settings = "os", "compiler", "build_type", "arch" 25 options = {{"shared": [True, False], "fPIC": [True, False]}} 26 default_options = {{"shared": False, "fPIC": True}} 27 generators = "cmake" 28 29 def config_options(self): 30 if self.settings.os == "Windows": 31 del self.options.fPIC 32 33 def source(self): 34 self.run("git clone https://github.com/conan-io/hello.git") 35 # This small hack might be useful to guarantee proper /MT /MD linkage 36 # in MSVC if the packaged project doesn't have variables to set it 37 # properly 38 tools.replace_in_file("hello/CMakeLists.txt", "PROJECT(HelloWorld)", 39 '''PROJECT(HelloWorld) 40 include(${{CMAKE_BINARY_DIR}}/conanbuildinfo.cmake) 41 conan_basic_setup()''') 42 43 def build(self): 44 cmake = CMake(self) 45 cmake.configure(source_folder="hello") 46 cmake.build() 47 48 # Explicit way: 49 # self.run('cmake %s/hello %s' 50 # % (self.source_folder, cmake.command_line)) 51 # self.run("cmake --build . 
%s" % cmake.build_config) 52 53 def package(self): 54 self.copy("*.h", dst="include", src="hello") 55 self.copy("*hello.lib", dst="lib", keep_path=False) 56 self.copy("*.dll", dst="bin", keep_path=False) 57 self.copy("*.so", dst="lib", keep_path=False) 58 self.copy("*.dylib", dst="lib", keep_path=False) 59 self.copy("*.a", dst="lib", keep_path=False) 60 61 def package_info(self): 62 self.cpp_info.libs = ["hello"] 63 64 """ 65 66 conanfile_bare = """from conans import ConanFile, tools 67 68 69 class {package_name}Conan(ConanFile): 70 name = "{name}" 71 version = "{version}" 72 settings = "os", "compiler", "build_type", "arch" 73 description = "<Description of {package_name} here>" 74 url = "None" 75 license = "None" 76 author = "None" 77 topics = None 78 79 def package(self): 80 self.copy("*") 81 82 def package_info(self): 83 self.cpp_info.libs = tools.collect_libs(self) 84 """ 85 86 conanfile_sources = """from conans import ConanFile, CMake 87 88 89 class {package_name}Conan(ConanFile): 90 name = "{name}" 91 version = "{version}" 92 license = "<Put the package license here>" 93 author = "<Put your name here> <And your email here>" 94 url = "<Package recipe repository url here, for issues about the package>" 95 description = "<Description of {package_name} here>" 96 topics = ("<Put some tag here>", "<here>", "<and here>") 97 settings = "os", "compiler", "build_type", "arch" 98 options = {{"shared": [True, False], "fPIC": [True, False]}} 99 default_options = {{"shared": False, "fPIC": True}} 100 generators = "cmake" 101 exports_sources = "src/*" 102 {configure} 103 def config_options(self): 104 if self.settings.os == "Windows": 105 del self.options.fPIC 106 107 def build(self): 108 cmake = CMake(self) 109 cmake.configure(source_folder="src") 110 cmake.build() 111 112 # Explicit way: 113 # self.run('cmake %s/hello %s' 114 # % (self.source_folder, cmake.command_line)) 115 # self.run("cmake --build . %s" % cmake.build_config) 116 117 def package(self): 118 self.copy("*.h", dst="include", src="src") 119 self.copy("*.lib", dst="lib", keep_path=False) 120 self.copy("*.dll", dst="bin", keep_path=False) 121 self.copy("*.dylib*", dst="lib", keep_path=False) 122 self.copy("*.so", dst="lib", keep_path=False) 123 self.copy("*.a", dst="lib", keep_path=False) 124 125 def package_info(self): 126 self.cpp_info.libs = ["{name}"] 127 """ 128 129 conanfile_header = """import os 130 131 from conans import ConanFile, tools 132 133 134 class {package_name}Conan(ConanFile): 135 name = "{name}" 136 version = "{version}" 137 license = "<Put the package license here>" 138 author = "<Put your name here> <And your email here>" 139 url = "<Package recipe repository url here, for issues about the package>" 140 description = "<Description of {package_name} here>" 141 topics = ("<Put some tag here>", "<here>", "<and here>") 142 no_copy_source = True 143 # No settings/options are necessary, this is header only 144 145 def source(self): 146 '''retrieval of the source code here. 
Remember you can also put the code 147 in the folder and use exports instead of retrieving it with this 148 source() method 149 ''' 150 # self.run("git clone ...") or 151 # tools.download("url", "file.zip") 152 # tools.unzip("file.zip" ) 153 154 def package(self): 155 self.copy("*.h", "include") 156 157 def package_id(self): 158 self.info.header_only() 159 """ 160 161 162 test_conanfile = """import os 163 164 from conans import ConanFile, CMake, tools 165 166 167 class {package_name}TestConan(ConanFile): 168 settings = "os", "compiler", "build_type", "arch" 169 generators = "cmake" 170 171 def build(self): 172 cmake = CMake(self) 173 # Current dir is "test_package/build/<build_id>" and CMakeLists.txt is 174 # in "test_package" 175 cmake.configure() 176 cmake.build() 177 178 def imports(self): 179 self.copy("*.dll", dst="bin", src="bin") 180 self.copy("*.dylib*", dst="bin", src="lib") 181 self.copy('*.so*', dst='bin', src='lib') 182 183 def test(self): 184 if not tools.cross_building(self): 185 os.chdir("bin") 186 self.run(".%sexample" % os.sep) 187 """ 188 189 test_cmake = """cmake_minimum_required(VERSION 3.1) 190 project(PackageTest CXX) 191 192 include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake) 193 conan_basic_setup() 194 195 add_executable(example example.cpp) 196 target_link_libraries(example ${CONAN_LIBS}) 197 198 # CTest is a testing tool that can be used to test your project. 199 # enable_testing() 200 # add_test(NAME example 201 # WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/bin 202 # COMMAND example) 203 """ 204 205 test_cmake_pure_c = """cmake_minimum_required(VERSION 3.1) 206 project(PackageTest C) 207 208 include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake) 209 conan_basic_setup() 210 211 add_executable(example example.c) 212 target_link_libraries(example ${CONAN_LIBS}) 213 214 # CTest is a testing tool that can be used to test your project. 215 # enable_testing() 216 # add_test(NAME example 217 # WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/bin 218 # COMMAND example) 219 """ 220 221 test_main = """#include "{name}.h" 222 223 int main() {{ 224 {name}(); 225 }} 226 """ 227 228 hello_c = """ #include <stdio.h> 229 #include "{name}.h" 230 231 void {name}() {{ 232 int class = 0; //This will be an error in C++ 233 #ifdef NDEBUG 234 printf("{name}/{version}-(pure C): Hello World Release!\\n"); 235 #else 236 printf("{name}/{version}-(pure C): Hello World Debug!\\n"); 237 #endif 238 }} 239 """ 240 241 hello_h = """#pragma once 242 243 #ifdef WIN32 244 #define {name}_EXPORT __declspec(dllexport) 245 #else 246 #define {name}_EXPORT 247 #endif 248 249 {name}_EXPORT void {name}(); 250 """ 251 252 hello_cpp = """#include <iostream> 253 #include "{name}.h" 254 255 void {name}(){{ 256 #ifdef NDEBUG 257 std::cout << "{name}/{version}: Hello World Release!" <<std::endl; 258 #else 259 std::cout << "{name}/{version}: Hello World Debug!" 
<<std::endl; 260 #endif 261 }} 262 """ 263 264 cmake_pure_c = """cmake_minimum_required(VERSION 3.1) 265 project({name} C) 266 267 include(${{CMAKE_BINARY_DIR}}/conanbuildinfo.cmake) 268 conan_basic_setup() 269 270 add_library({name} {name}.c) 271 """ 272 273 cmake = """cmake_minimum_required(VERSION 3.1) 274 project({name} CXX) 275 276 include(${{CMAKE_BINARY_DIR}}/conanbuildinfo.cmake) 277 conan_basic_setup() 278 279 add_library({name} {name}.cpp) 280 """ 281 282 gitignore_template = """ 283 *.pyc 284 test_package/build 285 286 """ 287 288 289 def _render_template(text, name, version, package_name, defines): 290 context = {'name': name, 291 'version': version, 292 'package_name': package_name, 293 'conan_version': client_version} 294 context.update(defines) 295 t = Template(text, keep_trailing_newline=True) 296 return t.render(**context) 297 298 299 def _get_files_from_template_dir(template_dir, name, version, package_name, defines): 300 files = [] 301 for d, _, fs in os.walk(template_dir): 302 for f in fs: 303 rel_d = os.path.relpath(d, template_dir) 304 rel_f = os.path.join(rel_d, f) 305 files.append(rel_f) 306 307 out_files = dict() 308 for f in files: 309 f_path = os.path.join(template_dir, f) 310 rendered_path = _render_template(f, name=name, version=version, package_name=package_name, 311 defines=defines) 312 rendered_file = _render_template(load(f_path), name=name, version=version, 313 package_name=package_name, defines=defines) 314 out_files[rendered_path] = rendered_file 315 316 return out_files 317 318 319 def cmd_new(ref, header=False, pure_c=False, test=False, exports_sources=False, bare=False, 320 visual_versions=None, linux_gcc_versions=None, linux_clang_versions=None, 321 osx_clang_versions=None, shared=None, upload_url=None, gitignore=None, 322 gitlab_gcc_versions=None, gitlab_clang_versions=None, 323 circleci_gcc_versions=None, circleci_clang_versions=None, circleci_osx_versions=None, 324 template=None, cache=None, defines=None): 325 try: 326 name, version, user, channel, revision = get_reference_fields(ref, user_channel_input=False) 327 # convert "package_name" -> "PackageName" 328 package_name = re.sub(r"(?:^|[\W_])(\w)", lambda x: x.group(1).upper(), name) 329 except ValueError: 330 raise ConanException("Bad parameter, please use full package name," 331 "e.g.: MyLib/1.2.3@user/testing") 332 333 # Validate it is a valid reference 334 ConanFileReference(name, version, user, channel) 335 336 if header and exports_sources: 337 raise ConanException("'header' and 'sources' are incompatible options") 338 if pure_c and header: 339 raise ConanException("'pure_c' is incompatible with 'header'") 340 if pure_c and not exports_sources: 341 raise ConanException("'pure_c' requires the use of --source") 342 if bare and (header or exports_sources): 343 raise ConanException("'bare' is incompatible with 'header' and 'sources'") 344 if template and (header or exports_sources or bare or pure_c): 345 raise ConanException("'template' is incompatible with 'header', " 346 "'sources', 'pure-c' and 'bare'") 347 348 defines = defines or dict() 349 350 if header: 351 files = {"conanfile.py": conanfile_header.format(name=name, version=version, 352 package_name=package_name)} 353 elif exports_sources: 354 if not pure_c: 355 files = {"conanfile.py": conanfile_sources.format(name=name, version=version, 356 package_name=package_name, 357 configure=""), 358 "src/{}.cpp".format(name): hello_cpp.format(name=name, version=version), 359 "src/{}.h".format(name): hello_h.format(name=name, version=version), 
360 "src/CMakeLists.txt": cmake.format(name=name, version=version)} 361 else: 362 config = ("\n def configure(self):\n" 363 " del self.settings.compiler.libcxx\n" 364 " del self.settings.compiler.cppstd\n") 365 files = {"conanfile.py": conanfile_sources.format(name=name, version=version, 366 package_name=package_name, 367 configure=config), 368 "src/{}.c".format(name): hello_c.format(name=name, version=version), 369 "src/{}.h".format(name): hello_h.format(name=name, version=version), 370 "src/CMakeLists.txt": cmake_pure_c.format(name=name, version=version)} 371 elif bare: 372 files = {"conanfile.py": conanfile_bare.format(name=name, version=version, 373 package_name=package_name)} 374 elif template: 375 is_file_template = os.path.basename(template).endswith('.py') 376 if is_file_template: 377 if not os.path.isabs(template): 378 # FIXME: Conan 2.0. The old path should be removed 379 old_path = os.path.join(cache.cache_folder, "templates", template) 380 new_path = os.path.join(cache.cache_folder, "templates", "command/new", template) 381 template = new_path if os.path.isfile(new_path) else old_path 382 if not os.path.isfile(template): 383 raise ConanException("Template doesn't exist: %s" % template) 384 replaced = _render_template(load(template), 385 name=name, 386 version=version, 387 package_name=package_name, 388 defines=defines) 389 files = {"conanfile.py": replaced} 390 elif template == "v2_cmake": 391 from conans.assets.templates.new_v2_cmake import get_files 392 files = get_files(name, version, package_name) 393 else: 394 if not os.path.isabs(template): 395 template = os.path.join(cache.cache_folder, "templates", "command/new", template) 396 if not os.path.isdir(template): 397 raise ConanException("Template doesn't exist: {}".format(template)) 398 template = os.path.normpath(template) 399 files = _get_files_from_template_dir(template_dir=template, 400 name=name, 401 version=version, 402 package_name=package_name, 403 defines=defines) 404 else: 405 files = {"conanfile.py": conanfile.format(name=name, version=version, 406 package_name=package_name)} 407 408 if test: 409 files["test_package/conanfile.py"] = test_conanfile.format(name=name, version=version, 410 user=user, channel=channel, 411 package_name=package_name) 412 if pure_c: 413 files["test_package/example.c"] = test_main.format(name=name) 414 files["test_package/CMakeLists.txt"] = test_cmake_pure_c 415 else: 416 include_name = name if exports_sources else "hello" 417 files["test_package/example.cpp"] = test_main.format(name=include_name) 418 files["test_package/CMakeLists.txt"] = test_cmake 419 420 if gitignore: 421 files[".gitignore"] = gitignore_template 422 423 files.update(ci_get_files(name, version, user, channel, visual_versions, 424 linux_gcc_versions, linux_clang_versions, 425 osx_clang_versions, shared, upload_url, 426 gitlab_gcc_versions, gitlab_clang_versions, 427 circleci_gcc_versions, circleci_clang_versions, 428 circleci_osx_versions)) 429 return files 430 [end of conans/client/cmd/new.py] [start of conans/model/conan_file.py] 1 import os 2 import platform 3 from contextlib import contextmanager 4 5 import six 6 from six import string_types 7 8 from conan.tools.env import Environment 9 from conan.tools.env.environment import environment_wrap_command 10 from conans.client import tools 11 from conans.client.output import ScopedOutput 12 from conans.client.tools.env import environment_append, no_op, pythonpath 13 from conans.client.tools.oss import OSInfo 14 from conans.errors import ConanException, 
ConanInvalidConfiguration 15 from conans.model.build_info import DepsCppInfo 16 from conans.model.conf import Conf 17 from conans.model.dependencies import ConanFileDependencies 18 from conans.model.env_info import DepsEnvInfo 19 from conans.model.layout import Folders, Patterns, Infos 20 from conans.model.new_build_info import from_old_cppinfo 21 from conans.model.options import Options, OptionsValues, PackageOptions 22 from conans.model.requires import Requirements 23 from conans.model.user_info import DepsUserInfo 24 from conans.paths import RUN_LOG_NAME 25 from conans.util.conan_v2_mode import conan_v2_error 26 27 28 def create_options(conanfile): 29 try: 30 package_options = PackageOptions(getattr(conanfile, "options", None)) 31 options = Options(package_options) 32 33 default_options = getattr(conanfile, "default_options", None) 34 if default_options: 35 if isinstance(default_options, dict): 36 default_values = OptionsValues(default_options) 37 elif isinstance(default_options, (list, tuple)): 38 conan_v2_error("Declare 'default_options' as a dictionary") 39 default_values = OptionsValues(default_options) 40 elif isinstance(default_options, six.string_types): 41 conan_v2_error("Declare 'default_options' as a dictionary") 42 default_values = OptionsValues.loads(default_options) 43 else: 44 raise ConanException("Please define your default_options as list, " 45 "multiline string or dictionary") 46 options.values = default_values 47 return options 48 except Exception as e: 49 raise ConanException("Error while initializing options. %s" % str(e)) 50 51 52 def create_requirements(conanfile): 53 try: 54 # Actual requirements of this package 55 if not hasattr(conanfile, "requires"): 56 return Requirements() 57 else: 58 if not conanfile.requires: 59 return Requirements() 60 if isinstance(conanfile.requires, (tuple, list)): 61 return Requirements(*conanfile.requires) 62 else: 63 return Requirements(conanfile.requires, ) 64 except Exception as e: 65 raise ConanException("Error while initializing requirements. %s" % str(e)) 66 67 68 def create_settings(conanfile, settings): 69 try: 70 defined_settings = getattr(conanfile, "settings", None) 71 if isinstance(defined_settings, str): 72 defined_settings = [defined_settings] 73 current = defined_settings or {} 74 settings.constraint(current) 75 return settings 76 except Exception as e: 77 raise ConanInvalidConfiguration("The recipe %s is constraining settings. 
%s" % ( 78 conanfile.display_name, str(e))) 79 80 81 @contextmanager 82 def _env_and_python(conanfile): 83 with environment_append(conanfile.env): 84 # FIXME Conan 2.0, Remove old ways of reusing python code 85 with pythonpath(conanfile): 86 yield 87 88 89 def get_env_context_manager(conanfile, without_python=False): 90 if not conanfile.apply_env: 91 return no_op() 92 if without_python: 93 return environment_append(conanfile.env) 94 return _env_and_python(conanfile) 95 96 97 class ConanFile(object): 98 """ The base class for all package recipes 99 """ 100 101 name = None 102 version = None # Any str, can be "1.1" or whatever 103 url = None # The URL where this File is located, as github, to collaborate in package 104 # The license of the PACKAGE, just a shortcut, does not replace or 105 # change the actual license of the source code 106 license = None 107 author = None # Main maintainer/responsible for the package, any format 108 description = None 109 topics = None 110 homepage = None 111 build_policy = None 112 short_paths = False 113 apply_env = True # Apply environment variables from requires deps_env_info and profiles 114 exports = None 115 exports_sources = None 116 generators = ["txt"] 117 revision_mode = "hash" 118 119 # Vars to control the build steps (build(), package()) 120 should_configure = True 121 should_build = True 122 should_install = True 123 should_test = True 124 in_local_cache = True 125 develop = False 126 127 # Defaulting the reference fields 128 default_channel = None 129 default_user = None 130 131 # Settings and Options 132 settings = None 133 options = None 134 default_options = None 135 136 provides = None 137 deprecated = None 138 139 # Folders 140 folders = None 141 patterns = None 142 143 # Run in windows bash 144 win_bash = None 145 146 def __init__(self, output, runner, display_name="", user=None, channel=None): 147 # an output stream (writeln, info, warn error) 148 self.output = ScopedOutput(display_name, output) 149 self.display_name = display_name 150 # something that can run commands, as os.sytem 151 self._conan_runner = runner 152 self._conan_user = user 153 self._conan_channel = channel 154 155 self.compatible_packages = [] 156 self._conan_using_build_profile = False 157 self._conan_requester = None 158 159 self.buildenv_info = Environment(self) 160 self.runenv_info = Environment(self) 161 # At the moment only for build_requires, others will be ignored 162 self.conf_info = Conf() 163 self._conan_buildenv = None # The profile buildenv, will be assigned initialize() 164 self._conan_node = None # access to container Node object, to access info, context, deps... 
165 self._conan_new_cpp_info = None # Will be calculated lazy in the getter 166 self._conan_dependencies = None 167 168 self.environment_scripts = [] # Accumulate the env scripts generated in order 169 170 # layout() method related variables: 171 self.folders = Folders() 172 self.patterns = Patterns() 173 self.cpp = Infos() 174 175 self.patterns.source.include = ["*.h", "*.hpp", "*.hxx"] 176 self.patterns.source.lib = [] 177 self.patterns.source.bin = [] 178 179 self.patterns.build.include = ["*.h", "*.hpp", "*.hxx"] 180 self.patterns.build.lib = ["*.so", "*.so.*", "*.a", "*.lib", "*.dylib"] 181 self.patterns.build.bin = ["*.exe", "*.dll"] 182 183 self.cpp.package.includedirs = ["include"] 184 self.cpp.package.libdirs = ["lib"] 185 self.cpp.package.bindirs = ["bin"] 186 self.cpp.package.resdirs = ["res"] 187 self.cpp.package.builddirs = [""] 188 self.cpp.package.frameworkdirs = ["Frameworks"] 189 190 @property 191 def context(self): 192 return self._conan_node.context 193 194 @property 195 def dependencies(self): 196 # Caching it, this object is requested many times 197 if self._conan_dependencies is None: 198 self._conan_dependencies = ConanFileDependencies.from_node(self._conan_node) 199 return self._conan_dependencies 200 201 @property 202 def ref(self): 203 return self._conan_node.ref 204 205 @property 206 def pref(self): 207 return self._conan_node.pref 208 209 @property 210 def buildenv(self): 211 # Lazy computation of the package buildenv based on the profileone 212 if not isinstance(self._conan_buildenv, Environment): 213 # TODO: missing user/channel 214 ref_str = "{}/{}".format(self.name, self.version) 215 self._conan_buildenv = self._conan_buildenv.get_env(self, ref_str) 216 return self._conan_buildenv 217 218 def initialize(self, settings, env, buildenv=None): 219 self._conan_buildenv = buildenv 220 if isinstance(self.generators, str): 221 self.generators = [self.generators] 222 # User defined options 223 self.options = create_options(self) 224 self.requires = create_requirements(self) 225 self.settings = create_settings(self, settings) 226 227 conan_v2_error("Setting 'cppstd' is deprecated in favor of 'compiler.cppstd'," 228 " please update your recipe.", 'cppstd' in self.settings.fields) 229 230 # needed variables to pack the project 231 self.cpp_info = None # Will be initialized at processing time 232 self._conan_dep_cpp_info = None # Will be initialized at processing time 233 self.deps_cpp_info = DepsCppInfo() 234 235 # environment variables declared in the package_info 236 self.env_info = None # Will be initialized at processing time 237 self.deps_env_info = DepsEnvInfo() 238 239 # user declared variables 240 self.user_info = None 241 # Keys are the package names (only 'host' if different contexts) 242 self.deps_user_info = DepsUserInfo() 243 244 # user specified env variables 245 self._conan_env_values = env.copy() # user specified -e 246 247 if self.description is not None and not isinstance(self.description, six.string_types): 248 raise ConanException("Recipe 'description' must be a string.") 249 250 if not hasattr(self, "virtualenv"): # Allow the user to override it with True or False 251 self.virtualenv = True 252 253 @property 254 def new_cpp_info(self): 255 if not self._conan_new_cpp_info: 256 self._conan_new_cpp_info = from_old_cppinfo(self.cpp_info) 257 return self._conan_new_cpp_info 258 259 @property 260 def source_folder(self): 261 return self.folders.source_folder 262 263 @source_folder.setter 264 def source_folder(self, folder): 265 
self.folders.set_base_source(folder) 266 267 @property 268 def build_folder(self): 269 return self.folders.build_folder 270 271 @build_folder.setter 272 def build_folder(self, folder): 273 self.folders.set_base_build(folder) 274 275 @property 276 def package_folder(self): 277 return self.folders.package_folder 278 279 @package_folder.setter 280 def package_folder(self, folder): 281 self.folders.set_base_package(folder) 282 283 @property 284 def install_folder(self): 285 # FIXME: Remove in 2.0, no self.install_folder 286 return self.folders.base_install 287 288 @install_folder.setter 289 def install_folder(self, folder): 290 # FIXME: Remove in 2.0, no self.install_folder 291 self.folders.set_base_install(folder) 292 293 @property 294 def generators_folder(self): 295 # FIXME: Remove in 2.0, no self.install_folder 296 return self.folders.generators_folder if self.folders.generators else self.install_folder 297 298 @property 299 def imports_folder(self): 300 return self.folders.imports_folder 301 302 @imports_folder.setter 303 def imports_folder(self, folder): 304 self.folders.set_base_imports(folder) 305 306 @property 307 def env(self): 308 """Apply the self.deps_env_info into a copy of self._conan_env_values (will prioritize the 309 self._conan_env_values, user specified from profiles or -e first, then inherited)""" 310 # Cannot be lazy cached, because it's called in configure node, and we still don't have 311 # the deps_env_info objects available 312 tmp_env_values = self._conan_env_values.copy() 313 tmp_env_values.update(self.deps_env_info) 314 ret, multiple = tmp_env_values.env_dicts(self.name, self.version, self._conan_user, 315 self._conan_channel) 316 ret.update(multiple) 317 return ret 318 319 @property 320 def channel(self): 321 if not self._conan_channel: 322 _env_channel = os.getenv("CONAN_CHANNEL") 323 conan_v2_error("Environment variable 'CONAN_CHANNEL' is deprecated", _env_channel) 324 self._conan_channel = _env_channel or self.default_channel 325 if not self._conan_channel: 326 raise ConanException("channel not defined, but self.channel is used in conanfile") 327 return self._conan_channel 328 329 @property 330 def user(self): 331 if not self._conan_user: 332 _env_username = os.getenv("CONAN_USERNAME") 333 conan_v2_error("Environment variable 'CONAN_USERNAME' is deprecated", _env_username) 334 self._conan_user = _env_username or self.default_user 335 if not self._conan_user: 336 raise ConanException("user not defined, but self.user is used in conanfile") 337 return self._conan_user 338 339 def collect_libs(self, folder=None): 340 conan_v2_error("'self.collect_libs' is deprecated, use 'tools.collect_libs(self)' instead") 341 return tools.collect_libs(self, folder=folder) 342 343 @property 344 def build_policy_missing(self): 345 return self.build_policy == "missing" 346 347 @property 348 def build_policy_always(self): 349 return self.build_policy == "always" 350 351 def source(self): 352 pass 353 354 def system_requirements(self): 355 """ this method can be overwritten to implement logic for system package 356 managers, as apt-get 357 358 You can define self.global_system_requirements = True, if you want the installation 359 to be for all packages (not depending on settings/options/requirements) 360 """ 361 362 def config_options(self): 363 """ modify options, probably conditioned to some settings. This call is executed 364 before config_settings. E.g. 
365 if self.settings.os == "Windows": 366 del self.options.shared # shared/static not supported in win 367 """ 368 369 def configure(self): 370 """ modify settings, probably conditioned to some options. This call is executed 371 after config_options. E.g. 372 if self.options.header_only: 373 self.settings.clear() 374 This is also the place for conditional requirements 375 """ 376 377 def build(self): 378 """ build your project calling the desired build tools as done in the command line. 379 E.g. self.run("cmake --build .") Or use the provided build helpers. E.g. cmake.build() 380 """ 381 self.output.warn("This conanfile has no build step") 382 383 def package(self): 384 """ package the needed files from source and build folders. 385 E.g. self.copy("*.h", src="src/includes", dst="includes") 386 """ 387 self.output.warn("This conanfile has no package step") 388 389 def package_info(self): 390 """ define cpp_build_info, flags, etc 391 """ 392 393 def run(self, command, output=True, cwd=None, win_bash=False, subsystem=None, msys_mingw=True, 394 ignore_errors=False, run_environment=False, with_login=True, env=None): 395 # NOTE: "self.win_bash" is the new parameter "win_bash" for Conan 2.0 396 397 def _run(cmd, _env): 398 # FIXME: run in windows bash is not using output 399 if platform.system() == "Windows": 400 if win_bash: 401 return tools.run_in_windows_bash(self, bashcmd=cmd, cwd=cwd, subsystem=subsystem, 402 msys_mingw=msys_mingw, with_login=with_login) 403 elif self.win_bash: # New, Conan 2.0 404 from conan.tools.microsoft.subsystems import run_in_windows_bash 405 return run_in_windows_bash(self, command=cmd, cwd=cwd, env=_env) 406 _env = _env or "conanenv" 407 wrapped_cmd = environment_wrap_command(self, _env, cmd, cwd=self.generators_folder) 408 return self._conan_runner(wrapped_cmd, output, os.path.abspath(RUN_LOG_NAME), cwd) 409 410 if run_environment: 411 # When using_build_profile the required environment is already applied through 412 # 'conanfile.env' in the contextmanager 'get_env_context_manager' 413 with tools.run_environment(self) if not self._conan_using_build_profile else no_op(): 414 if OSInfo().is_macos and isinstance(command, string_types): 415 # Security policy on macOS clears this variable when executing /bin/sh. To 416 # keep its value, set it again inside the shell when running the command. 417 command = 'DYLD_LIBRARY_PATH="%s" DYLD_FRAMEWORK_PATH="%s" %s' % \ 418 (os.environ.get('DYLD_LIBRARY_PATH', ''), 419 os.environ.get("DYLD_FRAMEWORK_PATH", ''), 420 command) 421 retcode = _run(command, env) 422 else: 423 retcode = _run(command, env) 424 425 if not ignore_errors and retcode != 0: 426 raise ConanException("Error %d while executing %s" % (retcode, command)) 427 428 return retcode 429 430 def package_id(self): 431 """ modify the binary info, typically to narrow values 432 e.g.: self.info.settings.compiler = "Any" => All compilers will generate same ID 433 """ 434 435 def test(self): 436 """ test the generated executable. 437 E.g. self.run("./example") 438 """ 439 raise ConanException("You need to create a method 'test' in your test/conanfile.py") 440 441 def __repr__(self): 442 return self.display_name 443 [end of conans/model/conan_file.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conan-io/conan
916cb10b99ed92d319cd0719462708ee0501ecd4
[question] Is the new cmake_layout compatible with cpp_info.components?

Hi, I've been using CMakeToolchain for a while with the CMake build helper that comes with it. But after updating today I saw that the `CMake` build helper does not accept a `build_folder` parameter anymore. Looking into why, I arrived at https://github.com/conan-io/conan/pull/8554/files#diff-d72013a45b00a0adf06f4536d6a8c8844461e51b72911937d63e6dda9a3d440aR64, which means that the new way of providing the folder topology is via `def layout(self)`, right?

My first move was to employ the new `cmake_layout` utility, but I was unable to create a package if it used `self.cpp_info.components`:

```
ERROR: ConanException: say/0.1 package_info(): self.cpp_info.components cannot be used with self.cpp_info global values at the same time
```

Conanfile example:

```
from conans import ConanFile, CMake
from conan.tools.layout import cmake_layout, LayoutPackager


class Pkg(ConanFile):
    name = "say"
    version = "0.1"
    settings = "os", "compiler", "arch", "build_type"
    generators = "cmake"
    exports_sources = "src/*"

    def layout(self):
        cmake_layout(self)

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

    def package(self):
        LayoutPackager(self).package()

    def package_info(self):
        self.cpp_info.components["say"].libs = ["say"]
```

So, am I doing something wrong? Is this intended, or just a bug/missing feature? Thanks!

- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
Hi @Hopobcn I have managed to reproduce this; it does indeed seem to be a bug. Thanks for the complete ``conanfile.py`` to reproduce; we'll check it asap.

Apparently, doing this:

```python
settings = "os", "arch", "compiler", "build_type"

def layout(self):
    self.cpp.package.components["say"].includedirs = ["include"]
```

fixes the issue. However, I think we intended to keep supporting the other ``package_info()`` definition as well, where ``self.cpp_info`` would be equivalent to ``self.cpp.package``. We probably need the input of @lasote here.

BTW, the test to reproduce is:

```python
def test_components_error():
    # https://github.com/conan-io/conan/issues/9331
    client = TestClient()
    conan_hello = textwrap.dedent("""
        import os
        from conans import ConanFile
        from conan.tools.files import save

        class Pkg(ConanFile):
            settings = "os", "arch", "compiler", "build_type"

            def layout(self):
                pass

            def package_info(self):
                self.cpp_info.components["say"].includedirs = ["include"]
        """)
    client.save({"conanfile.py": conan_hello})
    client.run("create . hello/1.0@")
```

It is fixed by adding ``self.cpp.package.components["say"].includedirs = ["include"]`` to ``layout()`` (then the ``package_info()`` can be removed too, or it can stay; the test passes either way).

Thanks for the fast reply! Yep, I can confirm that using the `self.cpp.package.components` notation works. Also, a note to other readers: the `.names` property does not seem to be accepted with `self.cpp.package.components`. Entries like `self.cpp_info.components["say"].names["cmake_find_package"] = "Say"` must be translated to `self.cpp.package.components["say"].set_property("cmake_find_package", "Say")`.

@memsharded I'm not sure that the generator (`cmake_find_package`) is able to generate code for `self.cpp.package.components`. Using a test_package like this:

```
cmake_minimum_required(VERSION 3.8)
project(test_package)

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup()

find_package(say REQUIRED)

add_executable(test_package test_package.cpp)
target_link_libraries(test_package PRIVATE say::Say)
```

```
from conans import ConanFile, CMake, tools
import os


class TestConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    generators = "cmake", "cmake_find_package"

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

    def test(self):
        if not tools.cross_building(self.settings):
            bin_path = os.path.join("bin", "test_package")
            self.run(bin_path, run_environment=True)
```

The resulting `Findsay.cmake` doesn't contain any reference to `components["say"]`.

Hi @Hopobcn yes, please check the warning notes in https://docs.conan.io/en/latest/reference/conanfile/methods.html#layout. It is extremely risky to try to introduce the ``layout()`` functionality changes in the old generators. Only the new ones will fully support this functionality. (Note that internally we are replacing all the logic of the generators to work from ``self.dependencies`` and the new ``cpp_info`` structure definitions.)

@lasote This error also happens here: https://github.com/conan-io/conan-center-index/pull/6578. Probably the workaround suggested by @memsharded would work here, but that would require a full refactor of the `package_info` method (not ideal).

On my way!
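For readers following the workaround above, here is how the pieces fit together in one recipe. This is a minimal sketch only: it combines the reporter's conanfile with the `self.cpp.package.components` line suggested in the discussion, and it assumes the same development snapshot of Conan (the `conan.tools.layout` imports shown in the issue), so treat it as illustrative rather than a verified recipe.

```python
# Hypothetical consolidated recipe: the reporter's conanfile from the issue
# above, with the maintainer's suggested workaround applied. Assumes the dev
# Conan snapshot referenced in this report.
from conans import ConanFile, CMake
from conan.tools.layout import cmake_layout, LayoutPackager


class Pkg(ConanFile):
    name = "say"
    version = "0.1"
    settings = "os", "compiler", "arch", "build_type"
    generators = "cmake"
    exports_sources = "src/*"

    def layout(self):
        cmake_layout(self)
        # Workaround discussed above: describe the component in the new
        # cpp.package model so the legacy components-vs-global check is not hit.
        self.cpp.package.components["say"].includedirs = ["include"]

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

    def package(self):
        LayoutPackager(self).package()

    def package_info(self):
        self.cpp_info.components["say"].libs = ["say"]
```

The only change from the original recipe is the extra line in `layout()`; `package_info()` is kept as-is, which the discussion above indicates works either way.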
2021-08-02T06:12:55Z
<patch> diff --git a/conans/client/installer.py b/conans/client/installer.py --- a/conans/client/installer.py +++ b/conans/client/installer.py @@ -699,9 +699,14 @@ def _call_package_info(self, conanfile, package_folder, ref, is_editable): if conanfile._conan_dep_cpp_info is None: try: - if not is_editable: + if not is_editable and not hasattr(conanfile, "layout"): # FIXME: The default for the cppinfo from build are not the same # so this check fails when editable + # FIXME: Remove when new cppinfo model. If using the layout method + # the cppinfo object is filled from self.cpp.package new + # model and we cannot check if the defaults have been modified + # because it doesn't exist in the new model where the defaults + # for the components are always empty conanfile.cpp_info._raise_incorrect_components_definition( conanfile.name, conanfile.requires) except ConanException as e: </patch>
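The patch above gates the legacy components-vs-global consistency check on whether the recipe defines a `layout()` method. The following is a tiny, self-contained sketch of that gating pattern using hypothetical stand-in classes (not Conan's real `ConanFile` or installer), just to make the control flow explicit.

```python
# Simplified stand-in illustrating the gating logic added by the patch above.
# These classes are placeholders; the point is only that defining a layout()
# method causes the legacy components-vs-global check to be skipped.


class LegacyRecipe:
    def package_info(self):
        pass


class LayoutRecipe(LegacyRecipe):
    def layout(self):
        pass


def should_run_components_check(recipe, is_editable=False):
    # Mirrors the patched condition:
    # `if not is_editable and not hasattr(conanfile, "layout")`
    return not is_editable and not hasattr(recipe, "layout")


assert should_run_components_check(LegacyRecipe()) is True
assert should_run_components_check(LayoutRecipe()) is False
assert should_run_components_check(LegacyRecipe(), is_editable=True) is False
```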
[]
[]
pandas-dev__pandas-19973
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: scalar assignment of a tz-aware is object dtype [3] should be a ``datetime64[ns, UTC]`` ``` In [1]: df = pd.DataFrame({'A': [0, 1]}) In [3]: df['now'] = pd.Timestamp('20130101', tz='UTC') In [4]: df Out[4]: A now 0 0 2013-01-01 00:00:00+00:00 1 1 2013-01-01 00:00:00+00:00 In [5]: df.dtypes Out[5]: A int64 now object dtype: object In [6]: df['now2'] = pd.DatetimeIndex([pd.Timestamp('20130101', tz='UTC')]).repeat(len(df)) In [7]: df.dtypes Out[7]: A int64 now object now2 datetime64[ns, UTC] dtype: object ``` </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 30 </a> 31 </tr> 32 <tr> 33 <td>License</td> 34 <td> 35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 37 </a> 38 </td> 39 </tr> 40 <tr> 41 <td>Build Status</td> 42 <td> 43 <a href="https://travis-ci.org/pandas-dev/pandas"> 44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 45 </a> 46 </td> 47 </tr> 48 <tr> 49 <td></td> 50 <td> 51 <a href="https://circleci.com/gh/pandas-dev/pandas"> 52 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" /> 53 </a> 54 </td> 55 </tr> 56 <tr> 57 <td></td> 58 <td> 59 <a href="https://ci.appveyor.com/project/pandas-dev/pandas"> 60 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" /> 61 </a> 62 </td> 63 </tr> 64 <tr> 65 <td>Coverage</td> 66  <td> 67 <a href="https://codecov.io/gh/pandas-dev/pandas"> 68 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 69 </a> 70 </td> 71 </tr> 72 <tr> 73 <td>Downloads</td> 74 <td> 75 <a href="https://pandas.pydata.org"> 76 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 77 </a> 78 </td> 79 </tr> 80 <tr> 81 <td>Gitter</td> 82 <td> 83 <a href="https://gitter.im/pydata/pandas"> 84 <img src="https://badges.gitter.im/Join%20Chat.svg" 85 </a> 86 </td> 87 </tr> 88 </table> 89 90 91 92 ## What is it 93 94 **pandas** is a Python package providing fast, flexible, and expressive data 95 structures designed to make working with "relational" or "labeled" data both 96 easy and intuitive. It aims to be the fundamental high-level building block for 97 doing practical, **real world** data analysis in Python. Additionally, it has 98 the broader goal of becoming **the most powerful and flexible open source data 99 analysis / manipulation tool available in any language**. 
It is already well on 100 its way toward this goal. 101 102 ## Main Features 103 Here are just a few of the things that pandas does well: 104 105 - Easy handling of [**missing data**][missing-data] (represented as 106 `NaN`) in floating point as well as non-floating point data 107 - Size mutability: columns can be [**inserted and 108 deleted**][insertion-deletion] from DataFrame and higher dimensional 109 objects 110 - Automatic and explicit [**data alignment**][alignment]: objects can 111 be explicitly aligned to a set of labels, or the user can simply 112 ignore the labels and let `Series`, `DataFrame`, etc. automatically 113 align the data for you in computations 114 - Powerful, flexible [**group by**][groupby] functionality to perform 115 split-apply-combine operations on data sets, for both aggregating 116 and transforming data 117 - Make it [**easy to convert**][conversion] ragged, 118 differently-indexed data in other Python and NumPy data structures 119 into DataFrame objects 120 - Intelligent label-based [**slicing**][slicing], [**fancy 121 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 122 large data sets 123 - Intuitive [**merging**][merging] and [**joining**][joining] data 124 sets 125 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 126 data sets 127 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 128 labels per tick) 129 - Robust IO tools for loading data from [**flat files**][flat-files] 130 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 131 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 132 - [**Time series**][timeseries]-specific functionality: date range 133 generation and frequency conversion, moving window statistics, 134 moving window linear regressions, date shifting and lagging, etc. 
135 136 137 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 138 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 139 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 140 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 141 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 142 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 143 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 144 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 145 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 146 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 147 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 148 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 149 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 150 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 151 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 152 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 153 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 154 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 155 156 ## Where to get it 157 The source code is currently hosted on GitHub at: 158 https://github.com/pandas-dev/pandas 159 160 Binary installers for the latest released version are available at the [Python 161 package index](https://pypi.org/project/pandas) and on conda. 162 163 ```sh 164 # conda 165 conda install pandas 166 ``` 167 168 ```sh 169 # or PyPI 170 pip install pandas 171 ``` 172 173 ## Dependencies 174 - [NumPy](https://www.numpy.org): 1.9.0 or higher 175 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 176 - [pytz](https://pythonhosted.org/pytz): 2011k or higher 177 178 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 179 for recommended and optional dependencies. 180 181 ## Installation from sources 182 To install pandas from source you need Cython in addition to the normal 183 dependencies above. Cython can be installed from pypi: 184 185 ```sh 186 pip install cython 187 ``` 188 189 In the `pandas` directory (same one where you found this file after 190 cloning the git repo), execute: 191 192 ```sh 193 python setup.py install 194 ``` 195 196 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 197 198 ```sh 199 python setup.py develop 200 ``` 201 202 Alternatively, you can use `pip` if you want all the dependencies pulled 203 in automatically (the `-e` option is for installing it in [development 204 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 205 206 ```sh 207 pip install -e . 208 ``` 209 210 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
211 212 ## License 213 [BSD 3](LICENSE) 214 215 ## Documentation 216 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 217 218 ## Background 219 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 220 has been under active development since then. 221 222 ## Getting Help 223 224 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 225 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 226 227 ## Discussion and Development 228 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 229 230 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 231 232 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 233 234 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)** 235 236 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 237 238 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 239 240 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 241 242 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 
243 [end of README.md] [start of asv_bench/benchmarks/timeseries.py] 1 import warnings 2 from datetime import timedelta 3 4 import numpy as np 5 from pandas import to_datetime, date_range, Series, DataFrame, period_range 6 from pandas.tseries.frequencies import infer_freq 7 try: 8 from pandas.plotting._converter import DatetimeConverter 9 except ImportError: 10 from pandas.tseries.converter import DatetimeConverter 11 12 from .pandas_vb_common import setup # noqa 13 14 15 class DatetimeIndex(object): 16 17 goal_time = 0.2 18 params = ['dst', 'repeated', 'tz_aware', 'tz_naive'] 19 param_names = ['index_type'] 20 21 def setup(self, index_type): 22 N = 100000 23 dtidxes = {'dst': date_range(start='10/29/2000 1:00:00', 24 end='10/29/2000 1:59:59', freq='S'), 25 'repeated': date_range(start='2000', 26 periods=N / 10, 27 freq='s').repeat(10), 28 'tz_aware': date_range(start='2000', 29 periods=N, 30 freq='s', 31 tz='US/Eastern'), 32 'tz_naive': date_range(start='2000', 33 periods=N, 34 freq='s')} 35 self.index = dtidxes[index_type] 36 37 def time_add_timedelta(self, index_type): 38 self.index + timedelta(minutes=2) 39 40 def time_normalize(self, index_type): 41 self.index.normalize() 42 43 def time_unique(self, index_type): 44 self.index.unique() 45 46 def time_to_time(self, index_type): 47 self.index.time 48 49 def time_get(self, index_type): 50 self.index[0] 51 52 def time_timeseries_is_month_start(self, index_type): 53 self.index.is_month_start 54 55 def time_to_date(self, index_type): 56 self.index.date 57 58 def time_to_pydatetime(self, index_type): 59 self.index.to_pydatetime() 60 61 62 class TzLocalize(object): 63 64 goal_time = 0.2 65 66 def setup(self): 67 dst_rng = date_range(start='10/29/2000 1:00:00', 68 end='10/29/2000 1:59:59', freq='S') 69 self.index = date_range(start='10/29/2000', 70 end='10/29/2000 00:59:59', freq='S') 71 self.index = self.index.append(dst_rng) 72 self.index = self.index.append(dst_rng) 73 self.index = self.index.append(date_range(start='10/29/2000 2:00:00', 74 end='10/29/2000 3:00:00', 75 freq='S')) 76 77 def time_infer_dst(self): 78 self.index.tz_localize('US/Eastern', ambiguous='infer') 79 80 81 class ResetIndex(object): 82 83 goal_time = 0.2 84 params = [None, 'US/Eastern'] 85 param_names = 'tz' 86 87 def setup(self, tz): 88 idx = date_range(start='1/1/2000', periods=1000, freq='H', tz=tz) 89 self.df = DataFrame(np.random.randn(1000, 2), index=idx) 90 91 def time_reest_datetimeindex(self, tz): 92 self.df.reset_index() 93 94 95 class Factorize(object): 96 97 goal_time = 0.2 98 params = [None, 'Asia/Tokyo'] 99 param_names = 'tz' 100 101 def setup(self, tz): 102 N = 100000 103 self.dti = date_range('2011-01-01', freq='H', periods=N, tz=tz) 104 self.dti = self.dti.repeat(5) 105 106 def time_factorize(self, tz): 107 self.dti.factorize() 108 109 110 class InferFreq(object): 111 112 goal_time = 0.2 113 params = [None, 'D', 'B'] 114 param_names = ['freq'] 115 116 def setup(self, freq): 117 if freq is None: 118 self.idx = date_range(start='1/1/1700', freq='D', periods=10000) 119 self.idx.freq = None 120 else: 121 self.idx = date_range(start='1/1/1700', freq=freq, periods=10000) 122 123 def time_infer_freq(self, freq): 124 infer_freq(self.idx) 125 126 127 class TimeDatetimeConverter(object): 128 129 goal_time = 0.2 130 131 def setup(self): 132 N = 100000 133 self.rng = date_range(start='1/1/2000', periods=N, freq='T') 134 135 def time_convert(self): 136 DatetimeConverter.convert(self.rng, None, None) 137 138 139 class Iteration(object): 140 141 goal_time = 0.2 142 
params = [date_range, period_range] 143 param_names = ['time_index'] 144 145 def setup(self, time_index): 146 N = 10**6 147 self.idx = time_index(start='20140101', freq='T', periods=N) 148 self.exit = 10000 149 150 def time_iter(self, time_index): 151 for _ in self.idx: 152 pass 153 154 def time_iter_preexit(self, time_index): 155 for i, _ in enumerate(self.idx): 156 if i > self.exit: 157 break 158 159 160 class ResampleDataFrame(object): 161 162 goal_time = 0.2 163 params = ['max', 'mean', 'min'] 164 param_names = ['method'] 165 166 def setup(self, method): 167 rng = date_range(start='20130101', periods=100000, freq='50L') 168 df = DataFrame(np.random.randn(100000, 2), index=rng) 169 self.resample = getattr(df.resample('1s'), method) 170 171 def time_method(self, method): 172 self.resample() 173 174 175 class ResampleSeries(object): 176 177 goal_time = 0.2 178 params = (['period', 'datetime'], ['5min', '1D'], ['mean', 'ohlc']) 179 param_names = ['index', 'freq', 'method'] 180 181 def setup(self, index, freq, method): 182 indexes = {'period': period_range(start='1/1/2000', 183 end='1/1/2001', 184 freq='T'), 185 'datetime': date_range(start='1/1/2000', 186 end='1/1/2001', 187 freq='T')} 188 idx = indexes[index] 189 ts = Series(np.random.randn(len(idx)), index=idx) 190 self.resample = getattr(ts.resample(freq), method) 191 192 def time_resample(self, index, freq, method): 193 self.resample() 194 195 196 class ResampleDatetetime64(object): 197 # GH 7754 198 goal_time = 0.2 199 200 def setup(self): 201 rng3 = date_range(start='2000-01-01 00:00:00', 202 end='2000-01-01 10:00:00', freq='555000U') 203 self.dt_ts = Series(5, rng3, dtype='datetime64[ns]') 204 205 def time_resample(self): 206 self.dt_ts.resample('1S').last() 207 208 209 class AsOf(object): 210 211 goal_time = 0.2 212 params = ['DataFrame', 'Series'] 213 param_names = ['constructor'] 214 215 def setup(self, constructor): 216 N = 10000 217 M = 10 218 rng = date_range(start='1/1/1990', periods=N, freq='53s') 219 data = {'DataFrame': DataFrame(np.random.randn(N, M)), 220 'Series': Series(np.random.randn(N))} 221 self.ts = data[constructor] 222 self.ts.index = rng 223 self.ts2 = self.ts.copy() 224 self.ts2.iloc[250:5000] = np.nan 225 self.ts3 = self.ts.copy() 226 self.ts3.iloc[-5000:] = np.nan 227 self.dates = date_range(start='1/1/1990', periods=N * 10, freq='5s') 228 self.date = self.dates[0] 229 self.date_last = self.dates[-1] 230 self.date_early = self.date - timedelta(10) 231 232 # test speed of pre-computing NAs. 233 def time_asof(self, constructor): 234 self.ts.asof(self.dates) 235 236 # should be roughly the same as above. 237 def time_asof_nan(self, constructor): 238 self.ts2.asof(self.dates) 239 240 # test speed of the code path for a scalar index 241 # without *while* loop 242 def time_asof_single(self, constructor): 243 self.ts.asof(self.date) 244 245 # test speed of the code path for a scalar index 246 # before the start. should be the same as above. 247 def time_asof_single_early(self, constructor): 248 self.ts.asof(self.date_early) 249 250 # test the speed of the code path for a scalar index 251 # with a long *while* loop. should still be much 252 # faster than pre-computing all the NAs. 
253 def time_asof_nan_single(self, constructor): 254 self.ts3.asof(self.date_last) 255 256 257 class SortIndex(object): 258 259 goal_time = 0.2 260 params = [True, False] 261 param_names = ['monotonic'] 262 263 def setup(self, monotonic): 264 N = 10**5 265 idx = date_range(start='1/1/2000', periods=N, freq='s') 266 self.s = Series(np.random.randn(N), index=idx) 267 if not monotonic: 268 self.s = self.s.sample(frac=1) 269 270 def time_sort_index(self, monotonic): 271 self.s.sort_index() 272 273 def time_get_slice(self, monotonic): 274 self.s[:10000] 275 276 277 class IrregularOps(object): 278 279 goal_time = 0.2 280 281 def setup(self): 282 N = 10**5 283 idx = date_range(start='1/1/2000', periods=N, freq='s') 284 s = Series(np.random.randn(N), index=idx) 285 self.left = s.sample(frac=1) 286 self.right = s.sample(frac=1) 287 288 def time_add(self): 289 self.left + self.right 290 291 292 class Lookup(object): 293 294 goal_time = 0.2 295 296 def setup(self): 297 N = 1500000 298 rng = date_range(start='1/1/2000', periods=N, freq='S') 299 self.ts = Series(1, index=rng) 300 self.lookup_val = rng[N // 2] 301 302 def time_lookup_and_cleanup(self): 303 self.ts[self.lookup_val] 304 self.ts.index._cleanup() 305 306 307 class ToDatetimeYYYYMMDD(object): 308 309 goal_time = 0.2 310 311 def setup(self): 312 rng = date_range(start='1/1/2000', periods=10000, freq='D') 313 self.stringsD = Series(rng.strftime('%Y%m%d')) 314 315 def time_format_YYYYMMDD(self): 316 to_datetime(self.stringsD, format='%Y%m%d') 317 318 319 class ToDatetimeISO8601(object): 320 321 goal_time = 0.2 322 323 def setup(self): 324 rng = date_range(start='1/1/2000', periods=20000, freq='H') 325 self.strings = rng.strftime('%Y-%m-%d %H:%M:%S').tolist() 326 self.strings_nosep = rng.strftime('%Y%m%d %H:%M:%S').tolist() 327 self.strings_tz_space = [x.strftime('%Y-%m-%d %H:%M:%S') + ' -0800' 328 for x in rng] 329 330 def time_iso8601(self): 331 to_datetime(self.strings) 332 333 def time_iso8601_nosep(self): 334 to_datetime(self.strings_nosep) 335 336 def time_iso8601_format(self): 337 to_datetime(self.strings, format='%Y-%m-%d %H:%M:%S') 338 339 def time_iso8601_format_no_sep(self): 340 to_datetime(self.strings_nosep, format='%Y%m%d %H:%M:%S') 341 342 def time_iso8601_tz_spaceformat(self): 343 to_datetime(self.strings_tz_space) 344 345 346 class ToDatetimeNONISO8601(object): 347 348 goal_time = 0.2 349 350 def setup(self): 351 N = 10000 352 half = int(N / 2) 353 ts_string_1 = 'March 1, 2018 12:00:00+0400' 354 ts_string_2 = 'March 1, 2018 12:00:00+0500' 355 self.same_offset = [ts_string_1] * N 356 self.diff_offset = [ts_string_1] * half + [ts_string_2] * half 357 358 def time_same_offset(self): 359 to_datetime(self.same_offset) 360 361 def time_different_offset(self): 362 to_datetime(self.diff_offset) 363 364 365 class ToDatetimeFormat(object): 366 367 goal_time = 0.2 368 369 def setup(self): 370 self.s = Series(['19MAY11', '19MAY11:00:00:00'] * 100000) 371 self.s2 = self.s.str.replace(':\\S+$', '') 372 373 def time_exact(self): 374 to_datetime(self.s2, format='%d%b%y') 375 376 def time_no_exact(self): 377 to_datetime(self.s, format='%d%b%y', exact=False) 378 379 380 class ToDatetimeCache(object): 381 382 goal_time = 0.2 383 params = [True, False] 384 param_names = ['cache'] 385 386 def setup(self, cache): 387 N = 10000 388 self.unique_numeric_seconds = list(range(N)) 389 self.dup_numeric_seconds = [1000] * N 390 self.dup_string_dates = ['2000-02-11'] * N 391 self.dup_string_with_tz = ['2000-02-11 15:00:00-0800'] * N 392 393 def 
time_unique_seconds_and_unit(self, cache): 394 to_datetime(self.unique_numeric_seconds, unit='s', cache=cache) 395 396 def time_dup_seconds_and_unit(self, cache): 397 to_datetime(self.dup_numeric_seconds, unit='s', cache=cache) 398 399 def time_dup_string_dates(self, cache): 400 to_datetime(self.dup_string_dates, cache=cache) 401 402 def time_dup_string_dates_and_format(self, cache): 403 to_datetime(self.dup_string_dates, format='%Y-%m-%d', cache=cache) 404 405 def time_dup_string_tzoffset_dates(self, cache): 406 to_datetime(self.dup_string_with_tz, cache=cache) 407 408 409 class DatetimeAccessor(object): 410 411 def setup(self): 412 N = 100000 413 self.series = Series(date_range(start='1/1/2000', periods=N, freq='T')) 414 415 def time_dt_accessor(self): 416 self.series.dt 417 418 def time_dt_accessor_normalize(self): 419 self.series.dt.normalize() 420 [end of asv_bench/benchmarks/timeseries.py] [start of pandas/core/indexes/accessors.py] 1 """ 2 datetimelike delegation 3 """ 4 5 import numpy as np 6 7 from pandas.core.dtypes.generic import ABCSeries 8 from pandas.core.dtypes.common import ( 9 is_period_arraylike, 10 is_datetime_arraylike, is_integer_dtype, 11 is_datetime64_dtype, is_datetime64tz_dtype, 12 is_timedelta64_dtype, is_categorical_dtype, 13 is_list_like) 14 15 from pandas.core.accessor import PandasDelegate 16 from pandas.core.base import NoNewAttributesMixin, PandasObject 17 from pandas.core.indexes.datetimes import DatetimeIndex 18 from pandas.core.indexes.period import PeriodIndex 19 from pandas.core.indexes.timedeltas import TimedeltaIndex 20 from pandas.core.algorithms import take_1d 21 22 23 class Properties(PandasDelegate, PandasObject, NoNewAttributesMixin): 24 25 def __init__(self, data, orig): 26 if not isinstance(data, ABCSeries): 27 raise TypeError("cannot convert an object of type {0} to a " 28 "datetimelike index".format(type(data))) 29 30 self.values = data 31 self.orig = orig 32 self.name = getattr(data, 'name', None) 33 self.index = getattr(data, 'index', None) 34 self._freeze() 35 36 def _get_values(self): 37 data = self.values 38 if is_datetime64_dtype(data.dtype): 39 return DatetimeIndex(data, copy=False, name=self.name) 40 41 elif is_datetime64tz_dtype(data.dtype): 42 return DatetimeIndex(data, copy=False, name=self.name) 43 44 elif is_timedelta64_dtype(data.dtype): 45 return TimedeltaIndex(data, copy=False, name=self.name) 46 47 else: 48 if is_period_arraylike(data): 49 return PeriodIndex(data, copy=False, name=self.name) 50 if is_datetime_arraylike(data): 51 return DatetimeIndex(data, copy=False, name=self.name) 52 53 raise TypeError("cannot convert an object of type {0} to a " 54 "datetimelike index".format(type(data))) 55 56 def _delegate_property_get(self, name): 57 from pandas import Series 58 values = self._get_values() 59 60 result = getattr(values, name) 61 62 # maybe need to upcast (ints) 63 if isinstance(result, np.ndarray): 64 if is_integer_dtype(result): 65 result = result.astype('int64') 66 elif not is_list_like(result): 67 return result 68 69 result = np.asarray(result) 70 71 # blow up if we operate on categories 72 if self.orig is not None: 73 result = take_1d(result, self.orig.cat.codes) 74 index = self.orig.index 75 else: 76 index = self.index 77 78 # return the result as a Series, which is by definition a copy 79 result = Series(result, index=index, name=self.name) 80 81 # setting this object will show a SettingWithCopyWarning/Error 82 result._is_copy = ("modifications to a property of a datetimelike " 83 "object are not supported and are 
discarded. " 84 "Change values on the original.") 85 86 return result 87 88 def _delegate_property_set(self, name, value, *args, **kwargs): 89 raise ValueError("modifications to a property of a datetimelike " 90 "object are not supported. Change values on the " 91 "original.") 92 93 def _delegate_method(self, name, *args, **kwargs): 94 from pandas import Series 95 values = self._get_values() 96 97 method = getattr(values, name) 98 result = method(*args, **kwargs) 99 100 if not is_list_like(result): 101 return result 102 103 result = Series(result, index=self.index, name=self.name) 104 105 # setting this object will show a SettingWithCopyWarning/Error 106 result._is_copy = ("modifications to a method of a datetimelike " 107 "object are not supported and are discarded. " 108 "Change values on the original.") 109 110 return result 111 112 113 class DatetimeProperties(Properties): 114 """ 115 Accessor object for datetimelike properties of the Series values. 116 117 Examples 118 -------- 119 >>> s.dt.hour 120 >>> s.dt.second 121 >>> s.dt.quarter 122 123 Returns a Series indexed like the original Series. 124 Raises TypeError if the Series does not contain datetimelike values. 125 """ 126 127 def to_pydatetime(self): 128 """ 129 Return the data as an array of native Python datetime objects 130 131 Timezone information is retained if present. 132 133 .. warning:: 134 135 Python's datetime uses microsecond resolution, which is lower than 136 pandas (nanosecond). The values are truncated. 137 138 Returns 139 ------- 140 numpy.ndarray 141 object dtype array containing native Python datetime objects. 142 143 See Also 144 -------- 145 datetime.datetime : Standard library value for a datetime. 146 147 Examples 148 -------- 149 >>> s = pd.Series(pd.date_range('20180310', periods=2)) 150 >>> s 151 0 2018-03-10 152 1 2018-03-11 153 dtype: datetime64[ns] 154 155 >>> s.dt.to_pydatetime() 156 array([datetime.datetime(2018, 3, 10, 0, 0), 157 datetime.datetime(2018, 3, 11, 0, 0)], dtype=object) 158 159 pandas' nanosecond precision is truncated to microseconds. 160 161 >>> s = pd.Series(pd.date_range('20180310', periods=2, freq='ns')) 162 >>> s 163 0 2018-03-10 00:00:00.000000000 164 1 2018-03-10 00:00:00.000000001 165 dtype: datetime64[ns] 166 167 >>> s.dt.to_pydatetime() 168 array([datetime.datetime(2018, 3, 10, 0, 0), 169 datetime.datetime(2018, 3, 10, 0, 0)], dtype=object) 170 """ 171 return self._get_values().to_pydatetime() 172 173 @property 174 def freq(self): 175 return self._get_values().inferred_freq 176 177 178 DatetimeProperties._add_delegate_accessors( 179 delegate=DatetimeIndex, 180 accessors=DatetimeIndex._datetimelike_ops, 181 typ='property') 182 DatetimeProperties._add_delegate_accessors( 183 delegate=DatetimeIndex, 184 accessors=DatetimeIndex._datetimelike_methods, 185 typ='method') 186 187 188 class TimedeltaProperties(Properties): 189 """ 190 Accessor object for datetimelike properties of the Series values. 191 192 Examples 193 -------- 194 >>> s.dt.hours 195 >>> s.dt.seconds 196 197 Returns a Series indexed like the original Series. 198 Raises TypeError if the Series does not contain datetimelike values. 199 """ 200 201 def to_pytimedelta(self): 202 """ 203 Return an array of native `datetime.timedelta` objects. 204 205 Python's standard `datetime` library uses a different representation 206 timedelta's. This method converts a Series of pandas Timedeltas 207 to `datetime.timedelta` format with the same length as the original 208 Series. 
209 210 Returns 211 ------- 212 a : numpy.ndarray 213 1D array containing data with `datetime.timedelta` type. 214 215 Examples 216 -------- 217 >>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='d')) 218 >>> s 219 0 0 days 220 1 1 days 221 2 2 days 222 3 3 days 223 4 4 days 224 dtype: timedelta64[ns] 225 226 >>> s.dt.to_pytimedelta() 227 array([datetime.timedelta(0), datetime.timedelta(1), 228 datetime.timedelta(2), datetime.timedelta(3), 229 datetime.timedelta(4)], dtype=object) 230 231 See Also 232 -------- 233 datetime.timedelta 234 """ 235 return self._get_values().to_pytimedelta() 236 237 @property 238 def components(self): 239 """ 240 Return a Dataframe of the components of the Timedeltas. 241 242 Returns 243 ------- 244 DataFrame 245 246 Examples 247 -------- 248 >>> s = pd.Series(pd.to_timedelta(np.arange(5), unit='s')) 249 >>> s 250 0 00:00:00 251 1 00:00:01 252 2 00:00:02 253 3 00:00:03 254 4 00:00:04 255 dtype: timedelta64[ns] 256 >>> s.dt.components 257 days hours minutes seconds milliseconds microseconds nanoseconds 258 0 0 0 0 0 0 0 0 259 1 0 0 0 1 0 0 0 260 2 0 0 0 2 0 0 0 261 3 0 0 0 3 0 0 0 262 4 0 0 0 4 0 0 0 263 """ # noqa: E501 264 return self._get_values().components.set_index(self.index) 265 266 @property 267 def freq(self): 268 return self._get_values().inferred_freq 269 270 271 TimedeltaProperties._add_delegate_accessors( 272 delegate=TimedeltaIndex, 273 accessors=TimedeltaIndex._datetimelike_ops, 274 typ='property') 275 TimedeltaProperties._add_delegate_accessors( 276 delegate=TimedeltaIndex, 277 accessors=TimedeltaIndex._datetimelike_methods, 278 typ='method') 279 280 281 class PeriodProperties(Properties): 282 """ 283 Accessor object for datetimelike properties of the Series values. 284 285 Examples 286 -------- 287 >>> s.dt.hour 288 >>> s.dt.second 289 >>> s.dt.quarter 290 291 Returns a Series indexed like the original Series. 292 Raises TypeError if the Series does not contain datetimelike values. 293 """ 294 295 296 PeriodProperties._add_delegate_accessors( 297 delegate=PeriodIndex, 298 accessors=PeriodIndex._datetimelike_ops, 299 typ='property') 300 PeriodProperties._add_delegate_accessors( 301 delegate=PeriodIndex, 302 accessors=PeriodIndex._datetimelike_methods, 303 typ='method') 304 305 306 class CombinedDatetimelikeProperties(DatetimeProperties, TimedeltaProperties): 307 308 def __new__(cls, data): 309 # CombinedDatetimelikeProperties isn't really instantiated. Instead 310 # we need to choose which parent (datetime or timedelta) is 311 # appropriate. Since we're checking the dtypes anyway, we'll just 312 # do all the validation here. 
313 from pandas import Series 314 315 if not isinstance(data, Series): 316 raise TypeError("cannot convert an object of type {0} to a " 317 "datetimelike index".format(type(data))) 318 319 orig = data if is_categorical_dtype(data) else None 320 if orig is not None: 321 data = Series(orig.values.categories, 322 name=orig.name, 323 copy=False) 324 325 try: 326 if is_datetime64_dtype(data.dtype): 327 return DatetimeProperties(data, orig) 328 elif is_datetime64tz_dtype(data.dtype): 329 return DatetimeProperties(data, orig) 330 elif is_timedelta64_dtype(data.dtype): 331 return TimedeltaProperties(data, orig) 332 else: 333 if is_period_arraylike(data): 334 return PeriodProperties(data, orig) 335 if is_datetime_arraylike(data): 336 return DatetimeProperties(data, orig) 337 except Exception: 338 pass # we raise an attribute error anyway 339 340 raise AttributeError("Can only use .dt accessor with datetimelike " 341 "values") 342 [end of pandas/core/indexes/accessors.py] [start of pandas/core/tools/timedeltas.py] 1 """ 2 timedelta support tools 3 """ 4 5 import numpy as np 6 import pandas as pd 7 from pandas._libs import tslibs 8 from pandas._libs.tslibs.timedeltas import (convert_to_timedelta64, 9 array_to_timedelta64) 10 11 from pandas.core.dtypes.common import ( 12 ensure_object, 13 is_integer_dtype, 14 is_timedelta64_dtype, 15 is_list_like) 16 from pandas.core.dtypes.generic import ABCSeries, ABCIndexClass 17 18 19 def to_timedelta(arg, unit='ns', box=True, errors='raise'): 20 """ 21 Convert argument to timedelta 22 23 Parameters 24 ---------- 25 arg : string, timedelta, list, tuple, 1-d array, or Series 26 unit : unit of the arg (D,h,m,s,ms,us,ns) denote the unit, which is an 27 integer/float number 28 box : boolean, default True 29 - If True returns a Timedelta/TimedeltaIndex of the results 30 - if False returns a np.timedelta64 or ndarray of values of dtype 31 timedelta64[ns] 32 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 33 - If 'raise', then invalid parsing will raise an exception 34 - If 'coerce', then invalid parsing will be set as NaT 35 - If 'ignore', then invalid parsing will return the input 36 37 Returns 38 ------- 39 ret : timedelta64/arrays of timedelta64 if parsing succeeded 40 41 Examples 42 -------- 43 44 Parsing a single string to a Timedelta: 45 46 >>> pd.to_timedelta('1 days 06:05:01.00003') 47 Timedelta('1 days 06:05:01.000030') 48 >>> pd.to_timedelta('15.5us') 49 Timedelta('0 days 00:00:00.000015') 50 51 Parsing a list or array of strings: 52 53 >>> pd.to_timedelta(['1 days 06:05:01.00003', '15.5us', 'nan']) 54 TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015', NaT], 55 dtype='timedelta64[ns]', freq=None) 56 57 Converting numbers by specifying the `unit` keyword argument: 58 59 >>> pd.to_timedelta(np.arange(5), unit='s') 60 TimedeltaIndex(['00:00:00', '00:00:01', '00:00:02', 61 '00:00:03', '00:00:04'], 62 dtype='timedelta64[ns]', freq=None) 63 >>> pd.to_timedelta(np.arange(5), unit='d') 64 TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], 65 dtype='timedelta64[ns]', freq=None) 66 67 See also 68 -------- 69 pandas.DataFrame.astype : Cast argument to a specified dtype. 70 pandas.to_datetime : Convert argument to datetime. 
71 """ 72 unit = _validate_timedelta_unit(unit) 73 74 if errors not in ('ignore', 'raise', 'coerce'): 75 raise ValueError("errors must be one of 'ignore', " 76 "'raise', or 'coerce'}") 77 78 if arg is None: 79 return arg 80 elif isinstance(arg, ABCSeries): 81 from pandas import Series 82 values = _convert_listlike(arg._values, unit=unit, 83 box=False, errors=errors) 84 return Series(values, index=arg.index, name=arg.name) 85 elif isinstance(arg, ABCIndexClass): 86 return _convert_listlike(arg, unit=unit, box=box, 87 errors=errors, name=arg.name) 88 elif isinstance(arg, np.ndarray) and arg.ndim == 0: 89 # extract array scalar and process below 90 arg = arg.item() 91 elif is_list_like(arg) and getattr(arg, 'ndim', 1) == 1: 92 return _convert_listlike(arg, unit=unit, box=box, errors=errors) 93 elif getattr(arg, 'ndim', 1) > 1: 94 raise TypeError('arg must be a string, timedelta, list, tuple, ' 95 '1-d array, or Series') 96 97 # ...so it must be a scalar value. Return scalar. 98 return _coerce_scalar_to_timedelta_type(arg, unit=unit, 99 box=box, errors=errors) 100 101 102 _unit_map = { 103 'Y': 'Y', 104 'y': 'Y', 105 'W': 'W', 106 'w': 'W', 107 'D': 'D', 108 'd': 'D', 109 'days': 'D', 110 'Days': 'D', 111 'day': 'D', 112 'Day': 'D', 113 'M': 'M', 114 'H': 'h', 115 'h': 'h', 116 'm': 'm', 117 'T': 'm', 118 'S': 's', 119 's': 's', 120 'L': 'ms', 121 'MS': 'ms', 122 'ms': 'ms', 123 'US': 'us', 124 'us': 'us', 125 'NS': 'ns', 126 'ns': 'ns', 127 } 128 129 130 def _validate_timedelta_unit(arg): 131 """ provide validation / translation for timedelta short units """ 132 try: 133 return _unit_map[arg] 134 except (KeyError, TypeError): 135 if arg is None: 136 return 'ns' 137 raise ValueError("invalid timedelta unit {arg} provided" 138 .format(arg=arg)) 139 140 141 def _coerce_scalar_to_timedelta_type(r, unit='ns', box=True, errors='raise'): 142 """Convert string 'r' to a timedelta object.""" 143 144 try: 145 result = convert_to_timedelta64(r, unit) 146 except ValueError: 147 if errors == 'raise': 148 raise 149 elif errors == 'ignore': 150 return r 151 152 # coerce 153 result = pd.NaT 154 155 if box: 156 result = tslibs.Timedelta(result) 157 return result 158 159 160 def _convert_listlike(arg, unit='ns', box=True, errors='raise', name=None): 161 """Convert a list of objects to a timedelta index object.""" 162 163 if isinstance(arg, (list, tuple)) or not hasattr(arg, 'dtype'): 164 arg = np.array(list(arg), dtype='O') 165 166 # these are shortcut-able 167 if is_timedelta64_dtype(arg): 168 value = arg.astype('timedelta64[ns]') 169 elif is_integer_dtype(arg): 170 value = arg.astype('timedelta64[{unit}]'.format(unit=unit)).astype( 171 'timedelta64[ns]', copy=False) 172 else: 173 try: 174 value = array_to_timedelta64(ensure_object(arg), 175 unit=unit, errors=errors) 176 value = value.astype('timedelta64[ns]', copy=False) 177 except ValueError: 178 if errors == 'ignore': 179 return arg 180 else: 181 # This else-block accounts for the cases when errors='raise' 182 # and errors='coerce'. If errors == 'raise', these errors 183 # should be raised. If errors == 'coerce', we shouldn't 184 # expect any errors to be raised, since all parsing errors 185 # cause coercion to pd.NaT. However, if an error / bug is 186 # introduced that causes an Exception to be raised, we would 187 # like to surface it. 
188 raise 189 190 if box: 191 from pandas import TimedeltaIndex 192 value = TimedeltaIndex(value, unit='ns', name=name) 193 return value 194 [end of pandas/core/tools/timedeltas.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
cf1462767e61466746c9bba5e71ebd0f1e87d5e3
BUG: scalar assignment of a tz-aware is object dtype
[3] should be a ``datetime64[ns, UTC]``

```
In [1]: df = pd.DataFrame({'A': [0, 1]})

In [3]: df['now'] = pd.Timestamp('20130101', tz='UTC')

In [4]: df
Out[4]:
   A                       now
0  0 2013-01-01 00:00:00+00:00
1  1 2013-01-01 00:00:00+00:00

In [5]: df.dtypes
Out[5]:
A       int64
now    object
dtype: object

In [6]: df['now2'] = pd.DatetimeIndex([pd.Timestamp('20130101', tz='UTC')]).repeat(len(df))

In [7]: df.dtypes
Out[7]:
A                     int64
now                  object
now2    datetime64[ns, UTC]
dtype: object
```
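For contrast, here is a minimal sketch (not part of the original report; the column names `naive`/`aware` are illustrative) showing that the same assignment with a timezone-naive scalar already infers a datetime dtype, and only the tz-aware scalar falls back to object:

```python
import pandas as pd

df = pd.DataFrame({'A': [0, 1]})

# timezone-naive scalar: broadcast and inferred as datetime64[ns]
df['naive'] = pd.Timestamp('20130101')

# timezone-aware scalar: currently ends up as object dtype (the bug above)
df['aware'] = pd.Timestamp('20130101', tz='UTC')

print(df.dtypes)
# A                 int64
# naive    datetime64[ns]
# aware            object    (expected: datetime64[ns, UTC])
```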
I will try and fix this.

great!

Currently, ```infer_dtype_from_scalar``` (on datetimey/timestampy objects) returns a ```np.datetime64``` if no timezone is given, and defaults to ```np.object_``` on objects with timezones.

Fixing this problem means returning something else, rather than ```np.object_```. Ideally return ```DatetimeTZDtypeType```. However, this crashes on ```np.empty(shape, dtype=dtype)``` in ```cast_scalar_to_array```. Seems like this should work, but it doesn't.

Quick fix is returning ```np.datetime64``` rather than ```np.object_```. You lose the timezone name, but numpy applies the correct offset before saving so the numbers are correct. This change doesn't break any tests, and results in the following behavior:

```
In [1]: df = pd.DataFrame({'A': [0, 1]})

In [3]: df['now'] = pd.Timestamp('20130101', tz='UTC')

In [5]: df.dtypes
Out[5]:
A               int64
now    datetime64[ns]
dtype: object

In [6]: df['now2'] = pd.DatetimeIndex([pd.Timestamp('20130101', tz='UTC')]).repeat(len(df))

In [7]: df.dtypes
Out[7]:
A                     int64
now          datetime64[ns]
now2    datetime64[ns, UTC]
dtype: object
```

Raises some inconsistencies, potentially problems with mixing in timezone-naive datetimes. Is the quick fix good enough?

@DylanDmitri you don't want to *ever* have numpy deal with timezones, they are completely wrong. ``infer_dtype_from_scalar`` has a ``pandas_dtype`` parameter that will make this work. We should actually just change this to do this by default (though this might break other things)

Been busy the last week, sorry. Here's the problem code (from line 2874 of ```frame.py```)

```
# BEFORE
value = cast_scalar_to_array(len(self.index), value)
value = maybe_cast_to_datetime(value, value.dtype)
```

Main issue: ```cast_scalar_to_array``` defaults to dtype ```np.object_```, which is then ignored by ```maybe_cast_to_datetime```. Want to capture the real pandas dtype, and then pass that into ```maybe_cast_to_datetime```, which then works properly.

```
# AFTER
from pandas.core.dtypes.cast import infer_dtype_from_scalar

pandas_dtype, _ = infer_dtype_from_scalar(value, pandas_dtype=True)

value = cast_scalar_to_array(len(self.index), value)
value = maybe_cast_to_datetime(value, pandas_dtype)
```

This fixes the problem. Will check tests, and have a PR soon.
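A rough illustration of the ``pandas_dtype`` flag discussed above (a sketch against the pandas internals of that era; the exact printed representations are an assumption and may differ between versions): only the pandas-aware path preserves the timezone in the inferred dtype.

```python
import pandas as pd
from pandas.core.dtypes.cast import infer_dtype_from_scalar

ts = pd.Timestamp('20130101', tz='UTC')

# numpy-only inference: a tz-aware Timestamp cannot be represented by a
# numpy dtype, so it falls back to object (the behavior behind this bug)
dtype, _ = infer_dtype_from_scalar(ts)
print(dtype)   # <class 'numpy.object_'>

# pandas-aware inference: returns a DatetimeTZDtype that keeps the tz,
# which maybe_cast_to_datetime can then honor
dtype, _ = infer_dtype_from_scalar(ts, pandas_dtype=True)
print(dtype)   # datetime64[ns, UTC]
```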
2018-03-02T20:36:25Z
<patch>
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -559,6 +559,7 @@ Timezones
 - Bug in :class:`DatetimeIndex` where constructing with an integer and tz would not localize correctly (:issue:`12619`)
 - Fixed bug where :meth:`DataFrame.describe` and :meth:`Series.describe` on tz-aware datetimes did not show `first` and `last` result (:issue:`21328`)
 - Bug in :class:`DatetimeIndex` comparisons failing to raise ``TypeError`` when comparing timezone-aware ``DatetimeIndex`` against ``np.datetime64`` (:issue:`22074`)
+- Bug in ``DataFrame`` assignment with a timezone-aware scalar (:issue:`19843`)

 Offsets
 ^^^^^^^
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -27,6 +27,7 @@
     maybe_upcast,
     cast_scalar_to_array,
     construct_1d_arraylike_from_scalar,
+    infer_dtype_from_scalar,
     maybe_cast_to_datetime,
     maybe_infer_to_datetimelike,
     maybe_convert_platform,
@@ -3507,9 +3508,13 @@ def reindexer(value):
             value = maybe_infer_to_datetimelike(value)

         else:
-            # upcast the scalar
+            # cast ignores pandas dtypes. so save the dtype first
+            infer_dtype, _ = infer_dtype_from_scalar(
+                value, pandas_dtype=True)
+
+            # upcast
             value = cast_scalar_to_array(len(self.index), value)
-            value = maybe_cast_to_datetime(value, value.dtype)
+            value = maybe_cast_to_datetime(value, infer_dtype)

             # return internal types directly
             if is_extension_type(value) or is_extension_array_dtype(value):
</patch>
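A quick sanity check one could run with a fix along these lines applied (a hedged sketch, not part of the committed patch or its test suite):

```python
import pandas as pd

df = pd.DataFrame({'A': [0, 1]})
df['now'] = pd.Timestamp('20130101', tz='UTC')

# with the fix, the broadcast scalar keeps its timezone-aware dtype
print(df.dtypes)
# A                    int64
# now    datetime64[ns, UTC]
# dtype: object
assert str(df['now'].dtype) == 'datetime64[ns, UTC]'
```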
[]
[]
Lightning-AI__lightning-2842
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> More granular callbacks ## 🚀 Make callback system more granular ### Motivation I am currently implementing #765 (make progress bar into a callback) and I need additional callback methods to do this. ### Pitch introduce these new callback methods: - `on_train_batch_start` (currently named `on_batch_start`) - `on_train_batch_end` (currently named `on_batch_end`) - `on_val_batch_start` - `on_val_batch_end` - `on_test_batch_start` - `on_test_batch_end` and make `on_batch_start` run on any of the above `*_start` (same for `on_batch_end`) Further suggestions: - introduce `on_train_epoch_start`, `on_val_epoch_start`, `on_test_epoch_start` and corresponding `*_end` methods. ### Alternatives Keep as is, but I don't know how to implement the progress bar callback otherwise for validation/test updates. </issue> <code> [start of README.md] 1 <div align="center"> 2 3 ![Logo](docs/source/_images/logos/lightning_logo.svg) 4 5 # PyTorch Lightning 6 7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.** 8 9 10 [![PyPI Status](https://badge.fury.io/py/pytorch-lightning.svg)](https://badge.fury.io/py/pytorch-lightning) 11 [![PyPI Status](https://pepy.tech/badge/pytorch-lightning)](https://pepy.tech/project/pytorch-lightning) 12 [![codecov](https://codecov.io/gh/PyTorchLightning/pytorch-lightning/branch/master/graph/badge.svg)](https://codecov.io/gh/PyTorchLightning/pytorch-lightning) 13 14 [![ReadTheDocs](https://readthedocs.org/projects/pytorch-lightning/badge/?version=stable)](https://pytorch-lightning.readthedocs.io/en/stable/) 15 [![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A) 16 [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE) 17 [![Next Release](https://img.shields.io/badge/Next%20Release-May%2029-<COLOR>.svg)](https://shields.io/) 18 19 <!-- 20 [![CodeFactor](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning/badge)](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning) 21 --> 22 </div> 23 24 --- 25 ## Trending contributors 26 27 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/0)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/0) 28 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/1)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/1) 29 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/2)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/2) 30 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/3)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/3) 31 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/4)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/4) 32 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/5)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/5) 33 
[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/6)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/6) 34 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/7)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/7) 35 36 --- 37 38 ## Continuous Integration 39 <center> 40 41 | System / PyTorch ver. | 1.3 (min. req.)* | 1.4 | 1.5 | 1.6 (latest) | 42 | :---: | :---: | :---: | :---: | :---: | 43 | Conda py3.7 [linux] | ![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg) | ![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg) | ![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg) | ![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg) | 44 | Linux py3.7 [GPUs**] | - | - | - | [![Build Status](http://35.192.60.23/api/badges/PyTorchLightning/pytorch-lightning/status.svg)](http://35.192.60.23/PyTorchLightning/pytorch-lightning) | 45 | Linux py3.7 [TPUs***] | - | - | - | ![TPU tests](https://github.com/PyTorchLightning/pytorch-lightning/workflows/TPU%20tests/badge.svg) | 46 | Linux py3.6 / py3.7 / py3.8 | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | 47 | OSX py3.6 / py3.7 | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | 48 | Windows py3.6 / py3.7 / py3.8 | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) 49 50 - _\* `torch>=1.4` is the minimal pytorch version for Python 3.8_ 51 - _\** tests run on two NVIDIA K80_ 52 - _\*** tests run on Google GKE TPUv2/3_ 53 54 </center> 55 56 Simple installation from PyPI 57 ```bash 58 pip install pytorch-lightning 59 ``` 60 61 From Conda 62 ```bash 63 conda install pytorch-lightning -c conda-forge 64 ``` 65 66 ## Docs 67 - [master](https://pytorch-lightning.readthedocs.io/en/latest) 68 - [stable](https://pytorch-lightning.readthedocs.io/en/stable) 69 - [0.8.5](https://pytorch-lightning.readthedocs.io/en/0.8.5/) 70 - [0.8.4](https://pytorch-lightning.readthedocs.io/en/0.8.4/) 71 - [0.8.3](https://pytorch-lightning.readthedocs.io/en/0.8.3/) 72 - [0.8.1](https://pytorch-lightning.readthedocs.io/en/0.8.1/) 73 - 
[0.7.6](https://pytorch-lightning.readthedocs.io/en/0.7.6/) 74 75 ## PyTorch Lightning is just organized PyTorch 76 ![PT to PL](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/docs/source/_images/general/fast_2.gif) 77 78 Lightning is a way to organize your PyTorch code to decouple the science code from the engineering. 79 It's more of a PyTorch style-guide than a framework. 80 81 In Lightning, you organize your code into 3 distinct categories: 82 83 1. Research code (goes in the LightningModule). 84 2. Engineering code (you delete, and is handled by the Trainer). 85 3. Non-essential research code (logging, etc... this goes in Callbacks). 86 87 Once you do this, you can train on multiple-GPUs, TPUs, CPUs and even in 16-bit precision without changing your code! 88 89 Get started with our [QUICK START PAGE](https://pytorch-lightning.readthedocs.io/en/stable/new-project.html) 90 91 ## Refactoring your PyTorch code + benefits + full walk-through 92 [![Watch the video](docs/source/_images/general/tutorial_cover.jpg)](https://www.youtube.com/watch?v=QHww1JH7IDU) 93 94 ## Demo 95 Here's a minimal example without a validation or test loop. 96 97 ```python 98 # this is just a plain nn.Module with some structure 99 100 class LitClassifier(pl.LightningModule): 101 102 def __init__(self): 103 super().__init__() 104 self.l1 = torch.nn.Linear(28 * 28, 10) 105 106 def forward(self, x): 107 return torch.relu(self.l1(x.view(x.size(0), -1))) 108 109 def training_step(self, batch, batch_nb): 110 x, y = batch 111 loss = F.cross_entropy(self(x), y) 112 tensorboard_logs = {'train_loss': loss} 113 return {'loss': loss, 'log': tensorboard_logs} 114 115 def configure_optimizers(self): 116 return torch.optim.Adam(self.parameters(), lr=0.02) 117 118 # train! 119 train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32) 120 121 model = LitClassifier() 122 trainer = pl.Trainer(gpus=8, precision=16) 123 trainer.fit(model, train_loader) 124 ``` 125 126 Other examples: 127 [MNIST hello world](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=gEulmrbxwaYL) 128 [GAN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=P0bSmCw57aV5) 129 [BERT](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=7uQVI-xv9Ddj) 130 [DQN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=NWvMLBDySQI5) 131 [MNIST on TPUs](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3) 132 133 ## Testing Rigour 134 All the automated code by the Trainer is [tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). 135 136 For every PR we test all combinations of: 137 - PyTorch 1.3, 1.4, 1.5 138 - Python 3.6, 3.7, 3.8 139 - Linux, OSX, Windows 140 - Multiple GPUs 141 142 **How does performance compare with vanilla PyTorch?** 143 We have tests to ensure we get the EXACT same results in under 600 ms difference per epoch. In reality, lightning adds about a 300 ms overhead per epoch. 144 [Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks). 145 146 Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts. 147 148 ## How flexible is it? 149 As you see, you're just organizing your PyTorch code - there's no abstraction. 
150 151 And for the stuff that the Trainer abstracts out, you can [override any part](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#extensibility) you want to do things like implement your own distributed training, 16-bit precision, or even a custom backward pass. 152 153 For example, here you could do your own backward pass without worrying about GPUs, TPUs or 16-bit since we already handle it. 154 155 ```python 156 class LitModel(LightningModule): 157 def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx, 158 second_order_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False): 159 optimizer.step() 160 161 def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx): 162 optimizer.zero_grad() 163 ``` 164 165 For anything else you might need, we have an extensive [callback system](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#callbacks) you can use to add arbitrary functionality not implemented by our team in the Trainer. 166 167 ## Who is Lightning for? 168 - Professional researchers 169 - Ph.D. students 170 - Corporate production teams 171 172 If you're just getting into deep learning, we recommend you learn PyTorch first! Once you've implemented a few models, come back and use all the advanced features of Lightning :) 173 174 ## What does lightning control for me? 175 176 Everything in Blue! 177 This is how lightning separates the science (red) from engineering (blue). 178 179 ![Overview](docs/source/_images/general/pl_overview.gif) 180 181 ## How much effort is it to convert? 182 If your code is not a huge mess you should be able to organize it into a LightningModule in less than 1 hour. 183 If your code IS a mess, then you needed to clean up anyhow ;) 184 185 [Check out this step-by-step guide](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09). 186 [Or watch this video](https://www.youtube.com/watch?v=QHww1JH7IDU). 187 188 189 ## Starting a new project? 190 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed) 191 192 ## Why do I want to use lightning? 193 Although your research/production project might start simple, once you add things like GPU AND TPU training, 16-bit precision, etc, you end up spending more time engineering than researching. Lightning automates AND rigorously tests those parts for you. 194 195 ## Support 196 - [8 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, Ph.D. students from top AI labs. 197 - 100+ community contributors. 198 199 Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support. 
200 201 --- 202 203 ## README Table of Contents 204 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it) 205 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me) 206 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard) 207 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable) 208 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples) 209 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials) 210 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help) 211 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md) 212 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge) 213 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles) 214 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team) 215 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq) 216 217 --- 218 219 ## Realistic example 220 Here's how you would organize a realistic PyTorch project into Lightning. 221 222 ![PT to PL](docs/source/_images/mnist_imgs/pt_to_pl.jpg) 223 224 The LightningModule defines a *system* such as seq-2-seq, GAN, etc... 225 It can ALSO define a simple classifier. 226 227 In summary, you: 228 229 1. Define a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html) 230 ```python 231 class LitSystem(pl.LightningModule): 232 233 def __init__(self): 234 super().__init__() 235 # not the best model... 236 self.l1 = torch.nn.Linear(28 * 28, 10) 237 238 def forward(self, x): 239 return torch.relu(self.l1(x.view(x.size(0), -1))) 240 241 def training_step(self, batch, batch_idx): 242 ... 243 ``` 244 245 2. Fit it with a [Trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html) 246 ```python 247 from pytorch_lightning import Trainer 248 249 model = LitSystem() 250 251 # most basic trainer, uses good defaults 252 trainer = Trainer() 253 trainer.fit(model) 254 ``` 255 256 [Check out the COLAB demo here](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg) 257 258 ## What types of research works? 259 Anything! Remember, that this is just organized PyTorch code. 260 The Training step defines the core complexity found in the training loop. 261 262 #### Could be as complex as a seq2seq 263 264 ```python 265 # define what happens for training here 266 def training_step(self, batch, batch_idx): 267 x, y = batch 268 269 # define your own forward and loss calculation 270 hidden_states = self.encoder(x) 271 272 # even as complex as a seq-2-seq + attn model 273 # (this is just a toy, non-working example to illustrate) 274 start_token = '<SOS>' 275 last_hidden = torch.zeros(...) 
276 loss = 0 277 for step in range(max_seq_len): 278 attn_context = self.attention_nn(hidden_states, start_token) 279 pred = self.decoder(start_token, attn_context, last_hidden) 280 last_hidden = pred 281 pred = self.predict_nn(pred) 282 loss += self.loss(last_hidden, y[step]) 283 284 #toy example as well 285 loss = loss / max_seq_len 286 return {'loss': loss} 287 ``` 288 289 #### Or as basic as CNN image classification 290 291 ```python 292 # define what happens for validation here 293 def validation_step(self, batch, batch_idx): 294 x, y = batch 295 296 # or as basic as a CNN classification 297 out = self(x) 298 loss = my_loss(out, y) 299 return {'loss': loss} 300 ``` 301 302 And without changing a single line of code, you could run on CPUs 303 ```python 304 trainer = Trainer(max_epochs=1) 305 ``` 306 307 308 Or GPUs 309 ```python 310 # 8 GPUs 311 trainer = Trainer(max_epochs=1, gpus=8) 312 313 # 256 GPUs 314 trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32) 315 ``` 316 317 Or TPUs 318 ```python 319 # Distributes TPU core training 320 trainer = Trainer(tpu_cores=8) 321 322 # Single TPU core training 323 trainer = Trainer(tpu_cores=[1]) 324 ``` 325 326 When you're done training, run the test accuracy 327 ```python 328 trainer.test() 329 ``` 330 331 ## Visualization 332 Lightning has out-of-the-box integration with the popular logging/visualizing frameworks 333 334 - [Tensorboard](https://pytorch.org/docs/stable/tensorboard.html) 335 - [MLFlow](https://mlflow.org/) 336 - [Neptune.ai](https://neptune.ai/) 337 - [Comet.ml](https://www.comet.ml/site/) 338 - [Wandb](https://www.wandb.com/) 339 - ... 340 341 ![tensorboard-support](docs/source/_images/general/tf_loss.jpg) 342 343 344 ## Lightning automates 40+ parts of DL/ML research 345 - GPU training 346 - Distributed GPU (cluster) training 347 - TPU training 348 - EarlyStopping 349 - Logging/Visualizing 350 - Checkpointing 351 - Experiment management 352 - [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases) 353 354 355 ## Running speed 356 Migrating to lightning does not mean compromising on speed! You can expect an overhead of about 300 ms per epoch compared with pure PyTorch. 357 358 359 ## Examples 360 Check out this awesome list of research papers and implementations done with Lightning. 
361 362 - [Contextual Emotion Detection (DoubleDistilBert)](https://github.com/PyTorchLightning/emotion_transformer) 363 - [Generative Adversarial Network](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=TyYOdg8g77P0) 364 - [Hyperparameter optimization with Optuna](https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py) 365 - [Hyperparameter optimization with Ray Tune](https://docs.ray.io/en/master/tune/tutorials/tune-pytorch-lightning.html) 366 - [Image Inpainting using Partial Convolutions](https://github.com/ryanwongsa/Image-Inpainting) 367 - [MNIST on TPU](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3#scrollTo=BHBz1_AnamN_) 368 - [NER (transformers, TPU, huggingface)](https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D) 369 - [NeuralTexture (CVPR)](https://github.com/PyTorchLightning/neuraltexture) 370 - [Recurrent Attentive Neural Process](https://github.com/PyTorchLightning/attentive-neural-processes) 371 - [Siamese Nets for One-shot Image Recognition](https://github.com/PyTorchLightning/Siamese-Neural-Networks) 372 - [Speech Transformers](https://github.com/PyTorchLightning/speech-transformer-pytorch_lightning) 373 - [Transformers transfer learning (Huggingface)](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=yr7eaxkF-djf) 374 - [Transformers text classification](https://github.com/ricardorei/lightning-text-classification) 375 - [VAE Library of over 18+ VAE flavors](https://github.com/AntixK/PyTorch-VAE) 376 - [Transformers Question Answering (SQuAD)](https://github.com/tshrjn/Finetune-QA/) 377 - [Pytorch-Lightning + Microsoft NNI with Docker](https://github.com/davinnovation/pytorch-boilerplate) 378 379 ## Tutorials 380 Check out our [introduction guide](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) to get started. 381 Or jump straight into [our tutorials](https://pytorch-lightning.readthedocs.io/en/latest/#tutorials). 382 383 --- 384 385 ## Asking for help 386 Welcome to the Lightning community! 387 388 If you have any questions, feel free to: 389 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/). 390 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question). 391 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning. 392 4. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A). 393 394 --- 395 396 ## FAQ 397 **How do I use Lightning for rapid research?** 398 [Here's a walk-through](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) 399 400 **Why was Lightning created?** 401 Lightning has 3 goals in mind: 402 403 1. Maximal flexibility while abstracting out the common boilerplate across research projects. 404 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format. 405 3. Democratizing PyTorch power-user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning. 406 407 **How does Lightning compare with Ignite and fast.ai?** 408 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a). 
409 410 **Is this another library I have to learn?** 411 Nope! We use pure Pytorch everywhere and don't add unnecessary abstractions! 412 413 **Are there plans to support Python 2?** 414 Nope. 415 416 **Are there plans to support virtualenv?** 417 Nope. Please use anaconda or miniconda. 418 ```bash 419 conda activate my_env 420 pip install pytorch-lightning 421 ``` 422 423 ## Custom installation 424 425 ### Bleeding edge 426 427 If you can't wait for the next release, install the most up to date code with: 428 * using GIT (locally clone whole repo with full history) 429 ```bash 430 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade 431 ``` 432 * using instant zip (last state of the repo without git history) 433 ```bash 434 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade 435 ``` 436 437 ### Any release installation 438 439 You can also install any past release `0.X.Y` from this repository: 440 ```bash 441 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade 442 ``` 443 444 --- 445 446 ## Lightning team 447 448 #### Leads 449 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder) 450 - Jirka Borovec [(Borda)](https://github.com/Borda) (ghost :) 451 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder) 452 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder) 453 - Justus Schock [(justusschock)](https://github.com/justusschock) (Former Core Member PyTorch Ignite) 454 455 #### Core Maintainers 456 457 - Nick Eggert [(neggert)](https://github.com/neggert) 458 - Jeff Ling [(jeffling)](https://github.com/jeffling) 459 - Jeremy Jordan [(jeremyjordan)](https://github.com/jeremyjordan) 460 - Tullie Murrell [(tullie)](https://github.com/tullie) 461 - Adrian Wälchli [(awaelchli)](https://github.com/awaelchli) 462 - Nicki Skafte [(skaftenicki)](https://github.com/SkafteNicki) 463 - Peter Yu [(yukw777)](https://github.com/yukw777) 464 - Rohit Gupta [(rohitgr7)](https://github.com/rohitgr7) 465 466 --- 467 468 #### Funding 469 Building open-source software with only a few part-time people is hard! We've secured funding to make sure we can 470 hire a full-time staff, attend conferences, and move faster through implementing features you request. 471 472 Our goal is to build an incredible research platform and a big supportive community. Many open-source projects 473 have gone on to fund operations through things like support and special help for big corporations! 474 475 If you are one of these corporations, please feel free to reach out to [email protected]! 476 477 ## BibTeX 478 If you want to cite the framework feel free to use this (but only if you loved it 😊): 479 480 ```bibtex 481 @article{falcon2019pytorch, 482 title={PyTorch Lightning}, 483 author={Falcon, WA}, 484 journal={GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning Cited by}, 485 volume={3}, 486 year={2019} 487 } 488 ``` 489 [end of README.md] [start of pytorch_lightning/callbacks/base.py] 1 r""" 2 Callback Base 3 ------------- 4 5 Abstract base class used to build new callbacks. 6 7 """ 8 9 import abc 10 11 12 class Callback(abc.ABC): 13 r""" 14 Abstract base class used to build new callbacks. 
15 """ 16 17 def setup(self, trainer, stage: str): 18 """Called when fit or test begins""" 19 pass 20 21 def teardown(self, trainer, stage: str): 22 """Called when fit or test ends""" 23 pass 24 25 def on_init_start(self, trainer): 26 """Called when the trainer initialization begins, model has not yet been set.""" 27 pass 28 29 def on_init_end(self, trainer): 30 """Called when the trainer initialization ends, model has not yet been set.""" 31 pass 32 33 def on_fit_start(self, trainer): 34 """Called when fit begins""" 35 pass 36 37 def on_fit_end(self, trainer): 38 """Called when fit ends""" 39 pass 40 41 def on_sanity_check_start(self, trainer, pl_module): 42 """Called when the validation sanity check starts.""" 43 pass 44 45 def on_sanity_check_end(self, trainer, pl_module): 46 """Called when the validation sanity check ends.""" 47 pass 48 49 def on_train_epoch_start(self, trainer, pl_module): 50 """Called when the train epoch begins.""" 51 pass 52 53 def on_train_epoch_end(self, trainer, pl_module): 54 """Called when the train epoch ends.""" 55 pass 56 57 def on_validation_epoch_start(self, trainer, pl_module): 58 """Called when the val epoch begins.""" 59 pass 60 61 def on_validation_epoch_end(self, trainer, pl_module): 62 """Called when the val epoch ends.""" 63 pass 64 65 def on_test_epoch_start(self, trainer, pl_module): 66 """Called when the test epoch begins.""" 67 pass 68 69 def on_test_epoch_end(self, trainer, pl_module): 70 """Called when the test epoch ends.""" 71 pass 72 73 def on_epoch_start(self, trainer, pl_module): 74 """Called when the epoch begins.""" 75 pass 76 77 def on_epoch_end(self, trainer, pl_module): 78 """Called when the epoch ends.""" 79 pass 80 81 def on_batch_start(self, trainer, pl_module): 82 """Called when the training batch begins.""" 83 pass 84 85 def on_validation_batch_start(self, trainer, pl_module): 86 """Called when the validation batch begins.""" 87 pass 88 89 def on_validation_batch_end(self, trainer, pl_module): 90 """Called when the validation batch ends.""" 91 pass 92 93 def on_test_batch_start(self, trainer, pl_module): 94 """Called when the test batch begins.""" 95 pass 96 97 def on_test_batch_end(self, trainer, pl_module): 98 """Called when the test batch ends.""" 99 pass 100 101 def on_batch_end(self, trainer, pl_module): 102 """Called when the training batch ends.""" 103 pass 104 105 def on_train_start(self, trainer, pl_module): 106 """Called when the train begins.""" 107 pass 108 109 def on_train_end(self, trainer, pl_module): 110 """Called when the train ends.""" 111 pass 112 113 def on_validation_start(self, trainer, pl_module): 114 """Called when the validation loop begins.""" 115 pass 116 117 def on_validation_end(self, trainer, pl_module): 118 """Called when the validation loop ends.""" 119 pass 120 121 def on_test_start(self, trainer, pl_module): 122 """Called when the test begins.""" 123 pass 124 125 def on_test_end(self, trainer, pl_module): 126 """Called when the test ends.""" 127 pass 128 129 def on_keyboard_interrupt(self, trainer, pl_module): 130 """Called when the training is interrupted by KeyboardInterrupt.""" 131 [end of pytorch_lightning/callbacks/base.py] [start of pytorch_lightning/callbacks/progress.py] 1 """ 2 Progress Bars 3 ============= 4 5 Use or override one of the progress bar callbacks. 
6 7 """ 8 import importlib 9 import sys 10 11 12 # check if ipywidgets is installed before importing tqdm.auto 13 # to ensure it won't fail and a progress bar is displayed 14 if importlib.util.find_spec('ipywidgets') is not None: 15 from tqdm.auto import tqdm 16 else: 17 from tqdm import tqdm 18 19 from pytorch_lightning.callbacks import Callback 20 21 22 class ProgressBarBase(Callback): 23 r""" 24 The base class for progress bars in Lightning. It is a :class:`~pytorch_lightning.callbacks.Callback` 25 that keeps track of the batch progress in the :class:`~pytorch_lightning.trainer.trainer.Trainer`. 26 You should implement your highly custom progress bars with this as the base class. 27 28 Example:: 29 30 class LitProgressBar(ProgressBarBase): 31 32 def __init__(self): 33 super().__init__() # don't forget this :) 34 self.enable = True 35 36 def disable(self): 37 self.enable = False 38 39 def on_batch_end(self, trainer, pl_module): 40 super().on_batch_end(trainer, pl_module) # don't forget this :) 41 percent = (self.train_batch_idx / self.total_train_batches) * 100 42 sys.stdout.flush() 43 sys.stdout.write(f'{percent:.01f} percent complete \r') 44 45 bar = LitProgressBar() 46 trainer = Trainer(callbacks=[bar]) 47 48 """ 49 def __init__(self): 50 51 self._trainer = None 52 self._train_batch_idx = 0 53 self._val_batch_idx = 0 54 self._test_batch_idx = 0 55 56 @property 57 def trainer(self): 58 return self._trainer 59 60 @property 61 def train_batch_idx(self) -> int: 62 """ 63 The current batch index being processed during training. 64 Use this to update your progress bar. 65 """ 66 return self._train_batch_idx 67 68 @property 69 def val_batch_idx(self) -> int: 70 """ 71 The current batch index being processed during validation. 72 Use this to update your progress bar. 73 """ 74 return self._val_batch_idx 75 76 @property 77 def test_batch_idx(self) -> int: 78 """ 79 The current batch index being processed during testing. 80 Use this to update your progress bar. 81 """ 82 return self._test_batch_idx 83 84 @property 85 def total_train_batches(self) -> int: 86 """ 87 The total number of training batches during training, which may change from epoch to epoch. 88 Use this to set the total number of iterations in the progress bar. Can return ``inf`` if the 89 training dataloader is of infinite size. 90 """ 91 return self.trainer.num_training_batches 92 93 @property 94 def total_val_batches(self) -> int: 95 """ 96 The total number of training batches during validation, which may change from epoch to epoch. 97 Use this to set the total number of iterations in the progress bar. Can return ``inf`` if the 98 validation dataloader is of infinite size. 99 """ 100 total_val_batches = 0 101 if not self.trainer.disable_validation: 102 is_val_epoch = (self.trainer.current_epoch + 1) % self.trainer.check_val_every_n_epoch == 0 103 total_val_batches = sum(self.trainer.num_val_batches) if is_val_epoch else 0 104 return total_val_batches 105 106 @property 107 def total_test_batches(self) -> int: 108 """ 109 The total number of training batches during testing, which may change from epoch to epoch. 110 Use this to set the total number of iterations in the progress bar. Can return ``inf`` if the 111 test dataloader is of infinite size. 112 """ 113 return sum(self.trainer.num_test_batches) 114 115 def disable(self): 116 """ 117 You should provide a way to disable the progress bar. 
118 The :class:`~pytorch_lightning.trainer.trainer.Trainer` will call this to disable the 119 output on processes that have a rank different from 0, e.g., in multi-node training. 120 """ 121 raise NotImplementedError 122 123 def enable(self): 124 """ 125 You should provide a way to enable the progress bar. 126 The :class:`~pytorch_lightning.trainer.trainer.Trainer` will call this in e.g. pre-training 127 routines like the `learning rate finder <lr_finder.rst>`_ to temporarily enable and 128 disable the main progress bar. 129 """ 130 raise NotImplementedError 131 132 def on_init_end(self, trainer): 133 self._trainer = trainer 134 135 def on_train_start(self, trainer, pl_module): 136 self._train_batch_idx = trainer.batch_idx 137 138 def on_epoch_start(self, trainer, pl_module): 139 self._train_batch_idx = 0 140 141 def on_batch_end(self, trainer, pl_module): 142 self._train_batch_idx += 1 143 144 def on_validation_start(self, trainer, pl_module): 145 self._val_batch_idx = 0 146 147 def on_validation_batch_end(self, trainer, pl_module): 148 self._val_batch_idx += 1 149 150 def on_test_start(self, trainer, pl_module): 151 self._test_batch_idx = 0 152 153 def on_test_batch_end(self, trainer, pl_module): 154 self._test_batch_idx += 1 155 156 157 class ProgressBar(ProgressBarBase): 158 r""" 159 This is the default progress bar used by Lightning. It prints to `stdout` using the 160 :mod:`tqdm` package and shows up to four different bars: 161 162 - **sanity check progress:** the progress during the sanity check run 163 - **main progress:** shows training + validation progress combined. It also accounts for 164 multiple validation runs during training when 165 :paramref:`~pytorch_lightning.trainer.trainer.Trainer.val_check_interval` is used. 166 - **validation progress:** only visible during validation; 167 shows total progress over all validation datasets. 168 - **test progress:** only active when testing; shows total progress over all test datasets. 169 170 For infinite datasets, the progress bar never ends. 171 172 If you want to customize the default ``tqdm`` progress bars used by Lightning, you can override 173 specific methods of the callback class and pass your custom implementation to the 174 :class:`~pytorch_lightning.trainer.trainer.Trainer`: 175 176 Example:: 177 178 class LitProgressBar(ProgressBar): 179 180 def init_validation_tqdm(self): 181 bar = super().init_validation_tqdm() 182 bar.set_description('running validation ...') 183 return bar 184 185 bar = LitProgressBar() 186 trainer = Trainer(callbacks=[bar]) 187 188 Args: 189 refresh_rate: 190 Determines at which rate (in number of batches) the progress bars get updated. 191 Set it to ``0`` to disable the display. By default, the 192 :class:`~pytorch_lightning.trainer.trainer.Trainer` uses this implementation of the progress 193 bar and sets the refresh rate to the value provided to the 194 :paramref:`~pytorch_lightning.trainer.trainer.Trainer.progress_bar_refresh_rate` argument in the 195 :class:`~pytorch_lightning.trainer.trainer.Trainer`. 196 process_position: 197 Set this to a value greater than ``0`` to offset the progress bars by this many lines. 198 This is useful when you have progress bars defined elsewhere and want to show all of them 199 together. This corresponds to 200 :paramref:`~pytorch_lightning.trainer.trainer.Trainer.process_position` in the 201 :class:`~pytorch_lightning.trainer.trainer.Trainer`. 
202 203 """ 204 def __init__(self, refresh_rate: int = 1, process_position: int = 0): 205 super().__init__() 206 self._refresh_rate = refresh_rate 207 self._process_position = process_position 208 self._enabled = True 209 self.main_progress_bar = None 210 self.val_progress_bar = None 211 self.test_progress_bar = None 212 213 def __getstate__(self): 214 # can't pickle the tqdm objects 215 state = self.__dict__.copy() 216 state['main_progress_bar'] = None 217 state['val_progress_bar'] = None 218 state['test_progress_bar'] = None 219 return state 220 221 @property 222 def refresh_rate(self) -> int: 223 return self._refresh_rate 224 225 @property 226 def process_position(self) -> int: 227 return self._process_position 228 229 @property 230 def is_enabled(self) -> bool: 231 return self._enabled and self.refresh_rate > 0 232 233 @property 234 def is_disabled(self) -> bool: 235 return not self.is_enabled 236 237 def disable(self) -> None: 238 self._enabled = False 239 240 def enable(self) -> None: 241 self._enabled = True 242 243 def init_sanity_tqdm(self) -> tqdm: 244 """ Override this to customize the tqdm bar for the validation sanity run. """ 245 bar = tqdm( 246 desc='Validation sanity check', 247 position=(2 * self.process_position), 248 disable=self.is_disabled, 249 leave=False, 250 dynamic_ncols=True, 251 file=sys.stdout, 252 ) 253 return bar 254 255 def init_train_tqdm(self) -> tqdm: 256 """ Override this to customize the tqdm bar for training. """ 257 bar = tqdm( 258 desc='Training', 259 initial=self.train_batch_idx, 260 position=(2 * self.process_position), 261 disable=self.is_disabled, 262 leave=True, 263 dynamic_ncols=True, 264 file=sys.stdout, 265 smoothing=0, 266 ) 267 return bar 268 269 def init_validation_tqdm(self) -> tqdm: 270 """ Override this to customize the tqdm bar for validation. """ 271 bar = tqdm( 272 desc='Validating', 273 position=(2 * self.process_position + 1), 274 disable=self.is_disabled, 275 leave=False, 276 dynamic_ncols=True, 277 file=sys.stdout 278 ) 279 return bar 280 281 def init_test_tqdm(self) -> tqdm: 282 """ Override this to customize the tqdm bar for testing. 
""" 283 bar = tqdm( 284 desc='Testing', 285 position=(2 * self.process_position), 286 disable=self.is_disabled, 287 leave=True, 288 dynamic_ncols=True, 289 file=sys.stdout 290 ) 291 return bar 292 293 def on_sanity_check_start(self, trainer, pl_module): 294 super().on_sanity_check_start(trainer, pl_module) 295 self.val_progress_bar = self.init_sanity_tqdm() 296 self.val_progress_bar.total = convert_inf(trainer.num_sanity_val_steps * len(trainer.val_dataloaders)) 297 self.main_progress_bar = tqdm(disable=True) # dummy progress bar 298 299 def on_sanity_check_end(self, trainer, pl_module): 300 super().on_sanity_check_end(trainer, pl_module) 301 self.main_progress_bar.close() 302 self.val_progress_bar.close() 303 304 def on_train_start(self, trainer, pl_module): 305 super().on_train_start(trainer, pl_module) 306 self.main_progress_bar = self.init_train_tqdm() 307 308 def on_epoch_start(self, trainer, pl_module): 309 super().on_epoch_start(trainer, pl_module) 310 total_train_batches = self.total_train_batches 311 total_val_batches = self.total_val_batches 312 if total_train_batches != float('inf') and not trainer.fast_dev_run: 313 # val can be checked multiple times per epoch 314 val_checks_per_epoch = total_train_batches // trainer.val_check_batch 315 total_val_batches = total_val_batches * val_checks_per_epoch 316 total_batches = total_train_batches + total_val_batches 317 if not self.main_progress_bar.disable: 318 self.main_progress_bar.reset(convert_inf(total_batches)) 319 self.main_progress_bar.set_description(f'Epoch {trainer.current_epoch + 1}') 320 321 def on_batch_end(self, trainer, pl_module): 322 super().on_batch_end(trainer, pl_module) 323 if self.is_enabled and self.train_batch_idx % self.refresh_rate == 0: 324 self.main_progress_bar.update(self.refresh_rate) 325 self.main_progress_bar.set_postfix(trainer.progress_bar_dict) 326 327 def on_validation_start(self, trainer, pl_module): 328 super().on_validation_start(trainer, pl_module) 329 self.val_progress_bar = self.init_validation_tqdm() 330 self.val_progress_bar.total = convert_inf(self.total_val_batches) 331 332 def on_validation_batch_end(self, trainer, pl_module): 333 super().on_validation_batch_end(trainer, pl_module) 334 if self.is_enabled and self.val_batch_idx % self.refresh_rate == 0: 335 self.val_progress_bar.update(self.refresh_rate) 336 self.main_progress_bar.update(self.refresh_rate) 337 338 def on_validation_end(self, trainer, pl_module): 339 super().on_validation_end(trainer, pl_module) 340 self.main_progress_bar.set_postfix(trainer.progress_bar_dict) 341 self.val_progress_bar.close() 342 343 def on_train_end(self, trainer, pl_module): 344 super().on_train_end(trainer, pl_module) 345 self.main_progress_bar.close() 346 347 def on_test_start(self, trainer, pl_module): 348 super().on_test_start(trainer, pl_module) 349 self.test_progress_bar = self.init_test_tqdm() 350 self.test_progress_bar.total = convert_inf(self.total_test_batches) 351 352 def on_test_batch_end(self, trainer, pl_module): 353 super().on_test_batch_end(trainer, pl_module) 354 if self.is_enabled and self.test_batch_idx % self.refresh_rate == 0: 355 self.test_progress_bar.update(self.refresh_rate) 356 357 def on_test_end(self, trainer, pl_module): 358 super().on_test_end(trainer, pl_module) 359 self.test_progress_bar.close() 360 361 362 def convert_inf(x): 363 """ The tqdm doesn't support inf values. We have to convert it to None. 
""" 364 if x == float('inf'): 365 return None 366 return x 367 [end of pytorch_lightning/callbacks/progress.py] [start of pytorch_lightning/trainer/callback_config.py] 1 import os 2 from abc import ABC, abstractmethod 3 from typing import List, Callable, Optional 4 5 6 from pytorch_lightning.callbacks import Callback, ModelCheckpoint, EarlyStopping, ProgressBarBase, ProgressBar 7 from pytorch_lightning.loggers import LightningLoggerBase 8 from pytorch_lightning.utilities.exceptions import MisconfigurationException 9 10 11 class TrainerCallbackConfigMixin(ABC): 12 13 # this is just a summary on variables used in this abstract class, 14 # the proper values/initialisation should be done in child class 15 callbacks: List[Callback] 16 default_root_dir: str 17 logger: LightningLoggerBase 18 weights_save_path: Optional[str] 19 ckpt_path: str 20 checkpoint_callback: Optional[ModelCheckpoint] 21 22 @property 23 @abstractmethod 24 def slurm_job_id(self) -> int: 25 """Warning: this is just empty shell for code implemented in other class.""" 26 27 @abstractmethod 28 def save_checkpoint(self, *args): 29 """Warning: this is just empty shell for code implemented in other class.""" 30 31 @abstractmethod 32 def is_overridden(self, *args): 33 """Warning: this is just empty shell for code implemented in other class.""" 34 35 def configure_checkpoint_callback(self, checkpoint_callback): 36 if checkpoint_callback is True: 37 # when no val step is defined, use 'loss' otherwise 'val_loss' 38 train_step_only = not self.is_overridden('validation_step') 39 monitor_key = 'loss' if train_step_only else 'val_loss' 40 checkpoint_callback = ModelCheckpoint( 41 filepath=None, 42 monitor=monitor_key 43 ) 44 elif checkpoint_callback is False: 45 checkpoint_callback = None 46 47 if checkpoint_callback: 48 checkpoint_callback.save_function = self.save_checkpoint 49 50 return checkpoint_callback 51 52 def configure_early_stopping(self, early_stop_callback): 53 if early_stop_callback is True or None: 54 early_stop_callback = EarlyStopping( 55 monitor='val_loss', 56 patience=3, 57 strict=True, 58 verbose=True, 59 mode='min' 60 ) 61 elif not early_stop_callback: 62 early_stop_callback = None 63 else: 64 early_stop_callback = early_stop_callback 65 return early_stop_callback 66 67 def configure_progress_bar(self, refresh_rate=1, process_position=0): 68 progress_bars = [c for c in self.callbacks if isinstance(c, ProgressBarBase)] 69 if len(progress_bars) > 1: 70 raise MisconfigurationException( 71 'You added multiple progress bar callbacks to the Trainer, but currently only one' 72 ' progress bar is supported.' 73 ) 74 elif len(progress_bars) == 1: 75 progress_bar_callback = progress_bars[0] 76 elif refresh_rate > 0: 77 progress_bar_callback = ProgressBar( 78 refresh_rate=refresh_rate, 79 process_position=process_position, 80 ) 81 self.callbacks.append(progress_bar_callback) 82 else: 83 progress_bar_callback = None 84 85 return progress_bar_callback 86 [end of pytorch_lightning/trainer/callback_config.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Lightning-AI/lightning
a5f2b89ed08172c25fd1cdc3d884d0fbb60bc45c
More granular callbacks

## 🚀 Make callback system more granular

### Motivation
I am currently implementing #765 (make progress bar into a callback) and I need additional callback methods to do this.

### Pitch
introduce these new callback methods:
- `on_train_batch_start` (currently named `on_batch_start`)
- `on_train_batch_end` (currently named `on_batch_end`)
- `on_val_batch_start`
- `on_val_batch_end`
- `on_test_batch_start`
- `on_test_batch_end`

and make `on_batch_start` run on any of the above `*_start` (same for `on_batch_end`)

Further suggestions:
- introduce `on_train_epoch_start`, `on_val_epoch_start`, `on_test_epoch_start` and corresponding `*_end` methods.

### Alternatives
Keep as is, but I don't know how to implement the progress bar callback otherwise for validation/test updates.
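For illustration, a minimal sketch (not part of the original issue) of how a user callback could consume the hooks pitched above, assuming they keep the `(trainer, pl_module)` signature of the existing hooks in `pytorch_lightning/callbacks/base.py`; the `BatchCounterCallback` name and its counters are hypothetical.

```python
from pytorch_lightning.callbacks import Callback


class BatchCounterCallback(Callback):
    """Hypothetical callback relying on the per-loop batch hooks proposed in the issue."""

    def __init__(self):
        self.train_batches = 0
        self.val_batches = 0
        self.test_batches = 0

    # proposed rename of the current `on_batch_end` hook
    def on_train_batch_end(self, trainer, pl_module):
        self.train_batches += 1

    # per-loop hooks that already exist in callbacks/base.py, shown for contrast
    def on_validation_batch_end(self, trainer, pl_module):
        self.val_batches += 1

    def on_test_batch_end(self, trainer, pl_module):
        self.test_batches += 1
```

A `Trainer(callbacks=[BatchCounterCallback()])` would then receive these calls during training, mirroring how the existing hooks in `callbacks/base.py` are dispatched.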
For the record, #1450 introduced callbacks for validation_start/end, test_start/end, and sanity_check_start/end.

The question remains whether we should rename `on_batch_*` to `on_training_batch_*` for consistency with the naming of the others.

> The question remains whether we should rename `on_batch_*` to `on_training_batch_*` for consistency with the naming of the others.

that it shall be for all batch train/valid...

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
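On the renaming question discussed above, one hedged sketch of a backward-compatible path: during the training loop the trainer could invoke both the legacy `on_batch_end` and a new loop-specific `on_train_batch_end`, so existing callbacks keep working while new ones opt into the explicit name. The helper below is illustrative only, not code from the repository, and it assumes the base `Callback` class defines `on_train_batch_end` as a no-op.

```python
def run_batch_end_hooks(trainer, pl_module):
    """Illustrative only: call the legacy and the loop-specific hook side by side."""
    for callback in trainer.callbacks:
        # legacy name, kept so existing callbacks do not break
        callback.on_batch_end(trainer, pl_module)
        # explicit, training-loop-specific name proposed in the issue
        callback.on_train_batch_end(trainer, pl_module)
```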
2020-08-05T22:30:00Z
<patch> diff --git a/pytorch_lightning/callbacks/base.py b/pytorch_lightning/callbacks/base.py --- a/pytorch_lightning/callbacks/base.py +++ b/pytorch_lightning/callbacks/base.py @@ -46,6 +46,14 @@ def on_sanity_check_end(self, trainer, pl_module): """Called when the validation sanity check ends.""" pass + def on_train_batch_start(self, trainer, pl_module): + """Called when the validation batch begins.""" + pass + + def on_train_batch_end(self, trainer, pl_module): + """Called when the validation batch ends.""" + pass + def on_train_epoch_start(self, trainer, pl_module): """Called when the train epoch begins.""" pass diff --git a/pytorch_lightning/callbacks/lr_logger.py b/pytorch_lightning/callbacks/lr_logger.py --- a/pytorch_lightning/callbacks/lr_logger.py +++ b/pytorch_lightning/callbacks/lr_logger.py @@ -64,7 +64,7 @@ def on_train_start(self, trainer, pl_module): # Initialize for storing values self.lrs = {name: [] for name in names} - def on_batch_start(self, trainer, pl_module): + def on_train_batch_start(self, trainer, pl_module): latest_stat = self._extract_lr(trainer, 'step') if trainer.logger and latest_stat: trainer.logger.log_metrics(latest_stat, step=trainer.global_step) diff --git a/pytorch_lightning/callbacks/progress.py b/pytorch_lightning/callbacks/progress.py --- a/pytorch_lightning/callbacks/progress.py +++ b/pytorch_lightning/callbacks/progress.py @@ -36,8 +36,8 @@ def __init__(self): def disable(self): self.enable = False - def on_batch_end(self, trainer, pl_module): - super().on_batch_end(trainer, pl_module) # don't forget this :) + def on_train_batch_end(self, trainer, pl_module): + super().on_train_batch_end(trainer, pl_module) # don't forget this :) percent = (self.train_batch_idx / self.total_train_batches) * 100 sys.stdout.flush() sys.stdout.write(f'{percent:.01f} percent complete \r') @@ -138,7 +138,7 @@ def on_train_start(self, trainer, pl_module): def on_epoch_start(self, trainer, pl_module): self._train_batch_idx = 0 - def on_batch_end(self, trainer, pl_module): + def on_train_batch_end(self, trainer, pl_module): self._train_batch_idx += 1 def on_validation_start(self, trainer, pl_module): @@ -318,8 +318,8 @@ def on_epoch_start(self, trainer, pl_module): self.main_progress_bar.reset(convert_inf(total_batches)) self.main_progress_bar.set_description(f'Epoch {trainer.current_epoch + 1}') - def on_batch_end(self, trainer, pl_module): - super().on_batch_end(trainer, pl_module) + def on_train_batch_end(self, trainer, pl_module): + super().on_train_batch_end(trainer, pl_module) if self.is_enabled and self.train_batch_idx % self.refresh_rate == 0: self.main_progress_bar.update(self.refresh_rate) self.main_progress_bar.set_postfix(trainer.progress_bar_dict) diff --git a/pytorch_lightning/core/hooks.py b/pytorch_lightning/core/hooks.py --- a/pytorch_lightning/core/hooks.py +++ b/pytorch_lightning/core/hooks.py @@ -77,6 +77,23 @@ def on_train_end(self) -> None: """ # do something at the end of training + def on_train_batch_start(self, batch: Any) -> None: + """ + Called in the training loop before anything happens for that batch. + + If you return -1 here, you will skip training for the rest of the current epoch. + + Args: + batch: The batched data as it is returned by the training DataLoader. + """ + # do something when the batch starts + + def on_train_batch_end(self) -> None: + """ + Called in the training loop after the batch. 
+ """ + # do something when the batch end + def on_batch_start(self, batch: Any) -> None: """ Called in the training loop before anything happens for that batch. @@ -85,12 +102,16 @@ def on_batch_start(self, batch: Any) -> None: Args: batch: The batched data as it is returned by the training DataLoader. + + .. warning:: Deprecated in 0.9.0 will remove 1.0.0 (use `on_train_batch_start` instead) """ # do something when the batch starts def on_batch_end(self) -> None: """ Called in the training loop after the batch. + + .. warning:: Deprecated in 0.9.0 will remove 1.0.0 (use `on_train_batch_end` instead) """ # do something when the batch ends diff --git a/pytorch_lightning/core/lightning.py b/pytorch_lightning/core/lightning.py --- a/pytorch_lightning/core/lightning.py +++ b/pytorch_lightning/core/lightning.py @@ -1771,7 +1771,7 @@ def to_onnx(self, file_path: str, input_sample: Optional[Tensor] = None, **kwarg elif self.example_input_array is not None: input_data = self.example_input_array else: - raise ValueError(f'input_sample and example_input_array tensors are both missing.') + raise ValueError('input_sample and example_input_array tensors are both missing.') if 'example_outputs' not in kwargs: self.eval() diff --git a/pytorch_lightning/trainer/callback_hook.py b/pytorch_lightning/trainer/callback_hook.py --- a/pytorch_lightning/trainer/callback_hook.py +++ b/pytorch_lightning/trainer/callback_hook.py @@ -9,7 +9,7 @@ class TrainerCallbackHookMixin(ABC): # this is just a summary on variables used in this abstract class, # the proper values/initialisation should be done in child class callbacks: List[Callback] = [] - get_model: Callable = ... + get_model: Callable def setup(self, stage: str): """Called in the beginning of fit and test""" @@ -111,6 +111,16 @@ def on_batch_end(self): for callback in self.callbacks: callback.on_batch_end(self, self.get_model()) + def on_train_batch_start(self): + """Called when the training batch begins.""" + for callback in self.callbacks: + callback.on_train_batch_start(self, self.get_model()) + + def on_train_batch_end(self): + """Called when the training batch ends.""" + for callback in self.callbacks: + callback.on_train_batch_end(self, self.get_model()) + def on_validation_batch_start(self): """Called when the validation batch begins.""" for callback in self.callbacks: diff --git a/pytorch_lightning/trainer/lr_finder.py b/pytorch_lightning/trainer/lr_finder.py --- a/pytorch_lightning/trainer/lr_finder.py +++ b/pytorch_lightning/trainer/lr_finder.py @@ -382,7 +382,7 @@ def on_batch_start(self, trainer, pl_module): self.lrs.append(trainer.lr_schedulers[0]['scheduler'].lr[0]) - def on_batch_end(self, trainer, pl_module): + def on_train_batch_end(self, trainer, pl_module): """ Called when the training batch ends, logs the calculated loss """ if (trainer.batch_idx + 1) % trainer.accumulate_grad_batches != 0: return diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py --- a/pytorch_lightning/trainer/training_loop.py +++ b/pytorch_lightning/trainer/training_loop.py @@ -263,6 +263,8 @@ class TrainerTrainLoopMixin(ABC): on_train_end: Callable on_batch_start: Callable on_batch_end: Callable + on_train_batch_start: Callable + on_train_batch_end: Callable on_epoch_start: Callable on_epoch_end: Callable on_validation_end: Callable @@ -690,6 +692,7 @@ def run_training_batch(self, batch, batch_idx): return AttributeDict(signal=0, grad_norm_dic=grad_norm_dic) # Batch start events + # TODO: deprecate 1.0 with 
self.profiler.profile('on_batch_start'): # callbacks self.on_batch_start() @@ -699,6 +702,15 @@ def run_training_batch(self, batch, batch_idx): if response == -1: return AttributeDict(signal=-1, grad_norm_dic=grad_norm_dic) + with self.profiler.profile('on_train_batch_start'): + # callbacks + self.on_train_batch_start() + # hooks + if self.is_function_implemented('on_train_batch_start'): + response = self.get_model().on_train_batch_start(batch) + if response == -1: + return AttributeDict(signal=-1, grad_norm_dic=grad_norm_dic) + splits = [batch] if self.truncated_bptt_steps is not None: model_ref = self.get_model() @@ -785,6 +797,13 @@ def run_training_batch(self, batch, batch_idx): if self.is_function_implemented('on_batch_end'): self.get_model().on_batch_end() + with self.profiler.profile('on_train_batch_end'): + # callbacks + self.on_train_batch_end() + # model hooks + if self.is_function_implemented('on_train_batch_end'): + self.get_model().on_train_batch_end() + # collapse all metrics into one dict batch_log_metrics = {k: v for d in batch_log_metrics for k, v in d.items()} </patch>
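To make the diff above concrete, here is a minimal usage sketch, assuming the patched hooks land as shown: a `LightningModule` overriding the new `on_train_batch_start`/`on_train_batch_end` module hooks. Only the hook names and signatures come from the diff; the module body and class name are hypothetical.

```python
import torch
import pytorch_lightning as pl


class LitSketch(pl.LightningModule):
    """Hypothetical module exercising the hooks added by the patch above."""

    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(28 * 28, 10)
        self.seen_batches = 0

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self(x), y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    # new module hook from the diff: runs before each training batch;
    # per the docstring added in hooks.py, returning -1 skips the rest of the epoch
    def on_train_batch_start(self, batch):
        self.seen_batches += 1

    # new module hook from the diff: runs after each training batch
    def on_train_batch_end(self):
        pass
```

The same hook names are also added at the callback level in `callback_hook.py` with the `(trainer, pl_module)` signature, as shown in the diff.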
[]
[]
Lightning-AI__lightning-1104
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Support IterableDatasets for validation and test, not just train set [blocked by #953] ## 🚀 Feature Currently Lightning supports `IterableDatasets` only in the training set (see [code](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/data_loading.py#L177)). This makes them second-class citizens compared to the map-style datasets, and supporting them seems a low hanging fruit. ### Motivation This enables having larger test sets that may not fit into a machine's memory (they could be very large in production settings, or of modest size running in a student's cheap laptop). Moreover, datasets are usually generated together (eg train, val, test can come from the same process). It is very likely that the same process has the same signature, so you may end up having IterableDatasets even when their size may not deem it strictly necessary. ### Pitch <!-- A clear and concise description of what you want to happen. --> Changing a few lines of code by bringing in the checks we are doing for training should be enough unless I'm missing something. ### Additional context <!-- Add any other context or screenshots about the feature request here. --> Are there any gotchas that make this harder than it looks? </issue> <code> [start of README.md] 1 <div align="center"> 2 3 ![Logo](docs/source/_static/images/lightning_logo.svg) 4 5 # PyTorch Lightning 6 7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.** 8 9 10 [![PyPI Status](https://badge.fury.io/py/pytorch-lightning.svg)](https://badge.fury.io/py/pytorch-lightning) 11 [![PyPI Status](https://pepy.tech/badge/pytorch-lightning)](https://pepy.tech/project/pytorch-lightning) 12 [![Coverage](docs/source/_static/images/coverage.svg)](https://github.com/PytorchLightning/pytorch-lightning/tree/master/tests#running-coverage) 13 [![CodeFactor](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning/badge)](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning) 14 15 [![ReadTheDocs](https://readthedocs.org/projects/pytorch-lightning/badge/?version=0.7.1)](https://pytorch-lightning.readthedocs.io/en/0.7.1/) 16 [![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ) 17 [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE) 18 [![Next Release](https://img.shields.io/badge/Next%20Release-May%2006-<COLOR>.svg)](https://shields.io/) 19 20 <!-- 21 removed until codecov badge isn't empy. likely a config error showing nothing on master. 22 [![codecov](https://codecov.io/gh/Borda/pytorch-lightning/branch/master/graph/badge.svg)](https://codecov.io/gh/Borda/pytorch-lightning) 23 --> 24 </div> 25 26 --- 27 ## Continuous Integration 28 <center> 29 30 | System / PyTorch ver. 
| 1.1 | 1.2 | 1.3 | 1.4 | 31 | :---: | :---: | :---: | :---: | :---: | 32 | Linux py3.6 [CPU] | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | 33 | Linux py3.7 [GPU] | <center>—</center> | <center>—</center> | <center>—</center> | [![Build Status](http://35.192.60.23/api/badges/PyTorchLightning/pytorch-lightning/status.svg)](http://35.192.60.23/PyTorchLightning/pytorch-lightning) | 34 | Linux py3.6 / py3.7 | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | <center>—</center> | <center>—</center> | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | 35 | OSX py3.6 / py3.7| ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | <center>—</center> | <center>—</center> | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | 36 | Windows py3.6 / py3.7 | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | <center>—</center> | <center>—</center> | ![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push) | 37 38 </center> 39 40 Simple installation from PyPI 41 ```bash 42 pip install pytorch-lightning 43 ``` 44 45 ## Docs 46 - [master](https://pytorch-lightning.readthedocs.io/en/latest) 47 - [0.7.1](https://pytorch-lightning.readthedocs.io/en/0.7.1/) 48 - [0.6.0](https://pytorch-lightning.readthedocs.io/en/0.6.0/) 49 - [0.5.3.2](https://pytorch-lightning.readthedocs.io/en/0.5.3.2/) 50 51 ## Demo 52 [MNIST, GAN, BERT on COLAB!](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg) 53 [MNIST on TPUs](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3) 54 55 ## What is it? 56 Lightning is a way to organize your PyTorch code to decouple the science code from the engineering. It's more of a style-guide than a framework. 57 58 To use Lightning, first refactor your research code into a [LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/lightning-module.html). 59 60 ![PT to PL](docs/source/_images/lightning_module/pt_to_pl.png) 61 62 And Lightning automates the rest using the [Trainer](https://pytorch-lightning.readthedocs.io/en/latest/trainer.html)! 63 ![PT to PL](docs/source/_images/lightning_module/pt_trainer.png) 64 65 Lightning guarantees riguously tested, correct, modern best practices for the automated parts. 66 67 ## How flexible is it? 68 As you see, you're just organizing your PyTorch code - there's no abstraction. 69 70 And for the stuff that the Trainer abstracts out you can [override any part](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#extensibility) you want to do things like implement your own distributed training, 16-bit precision, or even a custom backwards pass. 
71 72 For anything else you might need, we have an extensive [callback system](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#callbacks) you can use to add arbitrary functionality not implemented by our team in the Trainer. 73 74 ## Who is Lightning for? 75 - Professional researchers 76 - PhD students 77 - Corporate production teams 78 79 If you're just getting into deep learning, we recommend you learn PyTorch first! Once you've implemented a few models, come back and use all the advanced features of Lightning :) 80 81 ## What does lightning control for me? 82 83 Everything in Blue! 84 This is how lightning separates the science (red) from the engineering (blue). 85 86 ![Overview](docs/source/_static/images/pl_overview.gif) 87 88 ## How much effort is it to convert? 89 If your code is not a huge mess you should be able to organize it into a LightningModule in less than 1 hour. 90 If your code IS a mess, then you needed to clean up anyhow ;) 91 92 [Check out this step-by-step guide](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09). 93 94 95 ## Starting a new project? 96 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed) 97 98 ## Why do I want to use lightning? 99 Although your research/production project might start simple, once you add things like GPU AND TPU training, 16-bit precision, etc, you end up spending more time engineering than researching. Lightning automates AND rigorously tests those parts for you. 100 101 ## Support 102 - [7 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, PhD students from top AI labs. 103 - 100+ community contributors. 104 105 Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support. 106 107 --- 108 109 ## README Table of Contents 110 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it) 111 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me) 112 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard) 113 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable) 114 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples) 115 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials) 116 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help) 117 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md) 118 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge) 119 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles) 120 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team) 121 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq) 122 123 --- 124 125 ## Realistic example 126 Here's how you would organize a realistic PyTorch project into Lightning. 127 128 ![PT to PL](docs/source/_images/mnist_imgs/pt_to_pl.jpg) 129 130 The LightningModule defines a *system* such as seq-2-seq, GAN, etc... 
131 It can ALSO define a simple classifier. 132 133 In summary, you: 134 135 1. Define a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html) 136 ```python 137 class LitSystem(pl.LightningModule): 138 139 def __init__(self): 140 super(CoolSystem, self).__init__() 141 # not the best model... 142 self.l1 = torch.nn.Linear(28 * 28, 10) 143 144 def forward(self, x): 145 return torch.relu(self.l1(x.view(x.size(0), -1))) 146 147 def training_step(self, batch, batch_idx): 148 ... 149 ``` 150 151 2. Fit it with a [Trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html) 152 ```python 153 from pytorch_lightning import Trainer 154 155 model = CoolSystem() 156 157 # most basic trainer, uses good defaults 158 trainer = Trainer() 159 trainer.fit(model) 160 ``` 161 162 [Check out the COLAB demo here](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg) 163 164 ## What types of research works? 165 Anything! Remember, that this is just organized PyTorch code. 166 The Training step defines the core complexity found in the training loop. 167 168 #### Could be as complex as a seq2seq 169 170 ```python 171 # define what happens for training here 172 def training_step(self, batch, batch_idx): 173 x, y = batch 174 175 # define your own forward and loss calculation 176 hidden_states = self.encoder(x) 177 178 # even as complex as a seq-2-seq + attn model 179 # (this is just a toy, non-working example to illustrate) 180 start_token = '<SOS>' 181 last_hidden = torch.zeros(...) 182 loss = 0 183 for step in range(max_seq_len): 184 attn_context = self.attention_nn(hidden_states, start_token) 185 pred = self.decoder(start_token, attn_context, last_hidden) 186 last_hidden = pred 187 pred = self.predict_nn(pred) 188 loss += self.loss(last_hidden, y[step]) 189 190 #toy example as well 191 loss = loss / max_seq_len 192 return {'loss': loss} 193 ``` 194 195 #### Or as basic as CNN image classification 196 197 ```python 198 # define what happens for validation here 199 def validation_step(self, batch, batch_idx): 200 x, y = batch 201 202 # or as basic as a CNN classification 203 out = self.forward(x) 204 loss = my_loss(out, y) 205 return {'loss': loss} 206 ``` 207 208 And without changing a single line of code, you could run on CPUs 209 ```python 210 trainer = Trainer(max_epochs=1) 211 ``` 212 213 214 Or GPUs 215 ```python 216 # 8 GPUs 217 trainer = Trainer(max_epochs=1, gpus=8) 218 219 # 256 GPUs 220 trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32) 221 ``` 222 223 Or TPUs 224 ```python 225 trainer = Trainer(num_tpu_cores=8) 226 ``` 227 228 When you're done training, run the test accuracy 229 ```python 230 trainer.test() 231 ``` 232 233 ## Visualization 234 Lightning has out-of-the-box integration with the popular logging/visualizing frameworks 235 236 - Tensorboard 237 - MLFlow 238 - Neptune.ai 239 - Comet.ml 240 - ... 241 242 ![tensorboard-support](docs/source/_static/images/tf_loss.png) 243 244 245 ## Lightning automates 40+ parts of DL/ML research 246 - GPU training 247 - Distributed GPU (cluster) training 248 - TPU training 249 - EarlyStopping 250 - Logging/Visualizing 251 - Checkpointing 252 - Experiment management 253 - [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases) 254 255 256 ## Examples 257 Check out this awesome list of research papers and implementations done with Lightning. 
258 259 - [Contextual Emotion Detection (DoubleDistilBert)](https://github.com/PyTorchLightning/emotion_transformer) 260 - [Generative Adversarial Network](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=TyYOdg8g77P0) 261 - [Hyperparameter optimization with Optuna](https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py) 262 - [Image Inpainting using Partial Convolutions](https://github.com/ryanwongsa/Image-Inpainting) 263 - [MNIST on TPU](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3#scrollTo=BHBz1_AnamN_) 264 - [NER (transformers, TPU, huggingface)](https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D) 265 - [NeuralTexture (CVPR)](https://github.com/PyTorchLightning/neuraltexture) 266 - [Recurrent Attentive Neural Process](https://github.com/PyTorchLightning/attentive-neural-processes) 267 - [Siamese Nets for One-shot Image Recognition](https://github.com/PyTorchLightning/Siamese-Neural-Networks) 268 - [Speech Transformers](https://github.com/PyTorchLightning/speech-transformer-pytorch_lightning) 269 - [Transformers transfer learning (Huggingface)](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=yr7eaxkF-djf) 270 - [Transformers text classification](https://github.com/ricardorei/lightning-text-classification) 271 - [VAE Library of over 18+ VAE flavors](https://github.com/AntixK/PyTorch-VAE) 272 273 ## Tutorials 274 Check out our [introduction guide](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) to get started. 275 Or jump straight into [our tutorials](https://pytorch-lightning.readthedocs.io/en/latest/#tutorials). 276 277 --- 278 279 ## Asking for help 280 Welcome to the Lightning community! 281 282 If you have any questions, feel free to: 283 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/). 284 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question). 285 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning. 286 4. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ). 287 288 --- 289 ## FAQ 290 **How do I use Lightning for rapid research?** 291 [Here's a walk-through](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) 292 293 **Why was Lightning created?** 294 Lightning has 3 goals in mind: 295 296 1. Maximal flexibility while abstracting out the common boilerplate across research projects. 297 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format. 298 3. Democratizing PyTorch power user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning. 299 300 **How does Lightning compare with Ignite and fast.ai?** 301 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a). 302 303 **Is this another library I have to learn?** 304 Nope! We use pure Pytorch everywhere and don't add unecessary abstractions! 305 306 **Are there plans to support Python 2?** 307 Nope. 
308 309 **Are there plans to support virtualenv?** 310 Nope. Please use anaconda or miniconda. 311 312 **Which PyTorch versions do you support?** 313 - **PyTorch 1.1.0** 314 ```bash 315 # install pytorch 1.1.0 using the official instructions 316 317 # install test-tube 0.6.7.6 which supports 1.1.0 318 pip install test-tube==0.6.7.6 319 320 # install latest Lightning version without upgrading deps 321 pip install -U --no-deps pytorch-lightning 322 ``` 323 - **PyTorch 1.2.0, 1.3.0,** 324 Install via pip as normal 325 326 ## Custom installation 327 328 ### Bleeding edge 329 330 If you can't wait for the next release, install the most up to date code with: 331 * using GIT (locally clone whole repo with full history) 332 ```bash 333 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade 334 ``` 335 * using instant zip (last state of the repo without git history) 336 ```bash 337 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade 338 ``` 339 340 ### Any release installation 341 342 You can also install any past release `0.X.Y` from this repository: 343 ```bash 344 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade 345 ``` 346 347 ### Lightning team 348 349 #### Leads 350 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder) 351 - Jirka Borovec [(Borda)](https://github.com/Borda) (-_-) 352 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder) 353 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder) 354 355 #### Core Maintainers 356 357 - Nick Eggert [(neggert)](https://github.com/neggert) 358 - Jeremy Jordan [(jeremyjordan)](https://github.com/jeremyjordan) 359 - Jeff Ling [(jeffling)](https://github.com/jeffling) 360 - Tullie Murrell [(tullie)](https://github.com/tullie) 361 362 ## Bibtex 363 If you want to cite the framework feel free to use this (but only if you loved it 😊): 364 ``` 365 @misc{Falcon2019, 366 author = {Falcon, W.A. et al.}, 367 title = {PyTorch Lightning}, 368 year = {2019}, 369 publisher = {GitHub}, 370 journal = {GitHub repository}, 371 howpublished = {\url{https://github.com/PytorchLightning/pytorch-lightning}} 372 } 373 ``` 374 [end of README.md] [start of pytorch_lightning/trainer/distrib_parts.py] 1 """ 2 Lightning makes multi-gpu training and 16 bit training trivial. 3 4 .. note:: None of the flags below require changing anything about your lightningModel definition. 5 6 Choosing a backend 7 ================== 8 9 Lightning supports two backends. DataParallel and DistributedDataParallel. 10 Both can be used for single-node multi-GPU training. 11 For multi-node training you must use DistributedDataParallel. 12 13 DataParallel (dp) 14 ----------------- 15 16 Splits a batch across multiple GPUs on the same node. Cannot be used for multi-node training. 17 18 DistributedDataParallel (ddp) 19 ----------------------------- 20 21 Trains a copy of the model on each GPU and only syncs gradients. If used with DistributedSampler, each GPU trains 22 on a subset of the full dataset. 23 24 DistributedDataParallel-2 (ddp2) 25 -------------------------------- 26 27 Works like DDP, except each node trains a single copy of the model using ALL GPUs on that node. 28 Very useful when dealing with negative samples, etc... 29 30 You can toggle between each mode by setting this flag. 31 32 .. 
code-block:: python 33 34 # DEFAULT (when using single GPU or no GPUs) 35 trainer = Trainer(distributed_backend=None) 36 37 # Change to DataParallel (gpus > 1) 38 trainer = Trainer(distributed_backend='dp') 39 40 # change to distributed data parallel (gpus > 1) 41 trainer = Trainer(distributed_backend='ddp') 42 43 # change to distributed data parallel (gpus > 1) 44 trainer = Trainer(distributed_backend='ddp2') 45 46 If you request multiple nodes, the back-end will auto-switch to ddp. 47 We recommend you use DistributedDataparallel even for single-node multi-GPU training. 48 It is MUCH faster than DP but *may* have configuration issues depending on your cluster. 49 50 For a deeper understanding of what lightning is doing, feel free to read this 51 `guide <https://medium.com/@_willfalcon/9-tips-for-training-lightning-fast-neural-networks-in-pytorch-8e63a502f565>`_. 52 53 Distributed and 16-bit precision 54 -------------------------------- 55 56 Due to an issue with apex and DistributedDataParallel (PyTorch and NVIDIA issue), Lightning does 57 not allow 16-bit and DP training. We tried to get this to work, but it's an issue on their end. 58 59 Below are the possible configurations we support. 60 61 +-------+---------+----+-----+---------+------------------------------------------------------------+ 62 | 1 GPU | 1+ GPUs | DP | DDP | 16-bit | command | 63 +=======+=========+====+=====+=========+============================================================+ 64 | Y | | | | | `Trainer(gpus=1)` | 65 +-------+---------+----+-----+---------+------------------------------------------------------------+ 66 | Y | | | | Y | `Trainer(gpus=1, use_amp=True)` | 67 +-------+---------+----+-----+---------+------------------------------------------------------------+ 68 | | Y | Y | | | `Trainer(gpus=k, distributed_backend='dp')` | 69 +-------+---------+----+-----+---------+------------------------------------------------------------+ 70 | | Y | | Y | | `Trainer(gpus=k, distributed_backend='ddp')` | 71 +-------+---------+----+-----+---------+------------------------------------------------------------+ 72 | | Y | | Y | Y | `Trainer(gpus=k, distributed_backend='ddp', use_amp=True)` | 73 +-------+---------+----+-----+---------+------------------------------------------------------------+ 74 75 You also have the option of specifying which GPUs to use by passing a list: 76 77 .. code-block:: python 78 79 # DEFAULT (int) specifies how many GPUs to use. 80 Trainer(gpus=k) 81 82 # Above is equivalent to 83 Trainer(gpus=list(range(k))) 84 85 # You specify which GPUs (don't use if running on cluster) 86 Trainer(gpus=[0, 1]) 87 88 # can also be a string 89 Trainer(gpus='0, 1') 90 91 # can also be -1 or '-1', this uses all available GPUs 92 # this is equivalent to list(range(torch.cuda.available_devices())) 93 Trainer(gpus=-1) 94 95 96 CUDA flags 97 ---------- 98 99 CUDA flags make certain GPUs visible to your script. 100 Lightning sets these for you automatically, there's NO NEED to do this yourself. 101 102 .. code-block:: python 103 104 # lightning will set according to what you give the trainer 105 os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" 106 os.environ["CUDA_VISIBLE_DEVICES"] = "0" 107 108 109 However, when using a cluster, Lightning will NOT set these flags (and you should not either). 110 SLURM will set these for you. 111 112 16-bit mixed precision 113 ---------------------- 114 115 16 bit precision can cut your memory footprint by half. 
If using volta architecture GPUs 116 it can give a dramatic training speed-up as well. 117 First, install apex (if install fails, look `here <https://github.com/NVIDIA/apex>`_:: 118 119 $ git clone https://github.com/NVIDIA/apex 120 $ cd apex 121 122 # ------------------------ 123 # OPTIONAL: on your cluster you might need to load cuda 10 or 9 124 # depending on how you installed PyTorch 125 126 # see available modules 127 module avail 128 129 # load correct cuda before install 130 module load cuda-10.0 131 # ------------------------ 132 133 # make sure you've loaded a cuda version > 4.0 and < 7.0 134 module load gcc-6.1.0 135 136 $ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ 137 138 139 then set this use_amp to True.:: 140 141 # DEFAULT 142 trainer = Trainer(amp_level='O2', use_amp=False) 143 144 145 Single-gpu 146 ---------- 147 148 Make sure you're on a GPU machine.:: 149 150 # DEFAULT 151 trainer = Trainer(gpus=1) 152 153 Multi-gpu 154 --------- 155 156 Make sure you're on a GPU machine. You can set as many GPUs as you want. 157 In this setting, the model will run on all 8 GPUs at once using DataParallel under the hood. 158 159 .. code-block:: python 160 161 # to use DataParallel 162 trainer = Trainer(gpus=8, distributed_backend='dp') 163 164 # RECOMMENDED use DistributedDataParallel 165 trainer = Trainer(gpus=8, distributed_backend='ddp') 166 167 Custom device selection 168 ----------------------- 169 170 The number of GPUs can also be selected with a list of indices or a string containing 171 a comma separated list of GPU ids. 172 The table below lists examples of possible input formats and how they are interpreted by Lightning. 173 Note in particular the difference between `gpus=0`, `gpus=[0]` and `gpus="0"`. 174 175 +---------------+-----------+---------------------+---------------------------------+ 176 | `gpus` | Type | Parsed | Meaning | 177 +===============+===========+=====================+=================================+ 178 | None | NoneType | None | CPU | 179 +---------------+-----------+---------------------+---------------------------------+ 180 | 0 | int | None | CPU | 181 +---------------+-----------+---------------------+---------------------------------+ 182 | 3 | int | [0, 1, 2] | first 3 GPUs | 183 +---------------+-----------+---------------------+---------------------------------+ 184 | -1 | int | [0, 1, 2, ...] | all available GPUs | 185 +---------------+-----------+---------------------+---------------------------------+ 186 | [0] | list | [0] | GPU 0 | 187 +---------------+-----------+---------------------+---------------------------------+ 188 | [1, 3] | list | [1, 3] | GPUs 1 and 3 | 189 +---------------+-----------+---------------------+---------------------------------+ 190 | "0" | str | [0] | GPU 0 | 191 +---------------+-----------+---------------------+---------------------------------+ 192 | "3" | str | [3] | GPU 3 | 193 +---------------+-----------+---------------------+---------------------------------+ 194 | "1, 3" | str | [1, 3] | GPUs 1 and 3 | 195 +---------------+-----------+---------------------+---------------------------------+ 196 | "-1" | str | [0, 1, 2, ...] | all available GPUs | 197 +---------------+-----------+---------------------+---------------------------------+ 198 199 200 Multi-node 201 ---------- 202 203 Multi-node training is easily done by specifying these flags. 204 205 .. 
code-block:: python 206 207 # train on 12*8 GPUs 208 trainer = Trainer(gpus=8, num_nodes=12, distributed_backend='ddp') 209 210 211 You must configure your job submission script correctly for the trainer to work. 212 Here is an example script for the above trainer configuration. 213 214 .. code-block:: bash 215 216 #!/bin/bash -l 217 218 # SLURM SUBMIT SCRIPT 219 #SBATCH --nodes=12 220 #SBATCH --gres=gpu:8 221 #SBATCH --ntasks-per-node=8 222 #SBATCH --mem=0 223 #SBATCH --time=0-02:00:00 224 225 # activate conda env 226 conda activate my_env 227 228 # ------------------------- 229 # OPTIONAL 230 # ------------------------- 231 # debugging flags (optional) 232 # export NCCL_DEBUG=INFO 233 # export PYTHONFAULTHANDLER=1 234 235 # PyTorch comes with prebuilt NCCL support... but if you have issues with it 236 # you might need to load the latest version from your modules 237 # module load NCCL/2.4.7-1-cuda.10.0 238 239 # on your cluster you might need these: 240 # set the network interface 241 # export NCCL_SOCKET_IFNAME=^docker0,lo 242 # ------------------------- 243 244 # random port between 12k and 20k 245 export MASTER_PORT=$((12000 + RANDOM % 20000)) 246 247 # run script from above 248 python my_main_file.py 249 250 .. note:: When running in DDP mode, any errors in your code will show up as an NCCL issue. 251 Set the `NCCL_DEBUG=INFO` flag to see the ACTUAL error. 252 253 Finally, make sure to add a distributed sampler to your dataset. The distributed sampler copies a 254 portion of your dataset onto each GPU. (World_size = gpus_per_node * nb_nodes). 255 256 .. code-block:: python 257 258 # ie: this: 259 dataset = myDataset() 260 dataloader = Dataloader(dataset) 261 262 # becomes: 263 dataset = myDataset() 264 dist_sampler = torch.utils.data.distributed.DistributedSampler(dataset) 265 dataloader = Dataloader(dataset, sampler=dist_sampler) 266 267 268 Auto-slurm-job-submission 269 ------------------------- 270 271 Instead of manually building SLURM scripts, you can use the 272 `SlurmCluster object <https://williamfalcon.github.io/test-tube/hpc/SlurmCluster>`_ 273 to do this for you. The SlurmCluster can also run a grid search if you pass 274 in a `HyperOptArgumentParser 275 <https://williamfalcon.github.io/test-tube/hyperparameter_optimization/HyperOptArgumentParser>`_. 276 277 Here is an example where you run a grid search of 9 combinations of hyperparams. 278 The full examples are 279 `here <https://git.io/Jv87p>`_. 280 281 .. code-block:: python 282 283 # grid search 3 values of learning rate and 3 values of number of layers for your net 284 # this generates 9 experiments (lr=1e-3, layers=16), (lr=1e-3, layers=32), 285 # (lr=1e-3, layers=64), ... 
(lr=1e-1, layers=64) 286 parser = HyperOptArgumentParser(strategy='grid_search', add_help=False) 287 parser.opt_list('--learning_rate', default=0.001, type=float, 288 options=[1e-3, 1e-2, 1e-1], tunable=True) 289 parser.opt_list('--layers', default=1, type=float, options=[16, 32, 64], tunable=True) 290 hyperparams = parser.parse_args() 291 292 # Slurm cluster submits 9 jobs, each with a set of hyperparams 293 cluster = SlurmCluster( 294 hyperparam_optimizer=hyperparams, 295 log_path='/some/path/to/save', 296 ) 297 298 # OPTIONAL FLAGS WHICH MAY BE CLUSTER DEPENDENT 299 # which interface your nodes use for communication 300 cluster.add_command('export NCCL_SOCKET_IFNAME=^docker0,lo') 301 302 # see output of the NCCL connection process 303 # NCCL is how the nodes talk to each other 304 cluster.add_command('export NCCL_DEBUG=INFO') 305 306 # setting a master port here is a good idea. 307 cluster.add_command('export MASTER_PORT=%r' % PORT) 308 309 # ************** DON'T FORGET THIS *************** 310 # MUST load the latest NCCL version 311 cluster.load_modules(['NCCL/2.4.7-1-cuda.10.0']) 312 313 # configure cluster 314 cluster.per_experiment_nb_nodes = 12 315 cluster.per_experiment_nb_gpus = 8 316 317 cluster.add_slurm_cmd(cmd='ntasks-per-node', value=8, comment='1 task per gpu') 318 319 # submit a script with 9 combinations of hyper params 320 # (lr=1e-3, layers=16), (lr=1e-3, layers=32), (lr=1e-3, layers=64), ... (lr=1e-1, layers=64) 321 cluster.optimize_parallel_cluster_gpu( 322 main, 323 nb_trials=9, # how many permutations of the grid search to run 324 job_name='name_for_squeue' 325 ) 326 327 328 The other option is that you generate scripts on your own via a bash command or use another library... 329 330 Self-balancing architecture 331 --------------------------- 332 333 Here lightning distributes parts of your module across available GPUs to optimize for speed and memory. 334 335 """ 336 337 from abc import ABC, abstractmethod 338 import logging as log 339 import os 340 import signal 341 342 import torch 343 344 from pytorch_lightning.overrides.data_parallel import ( 345 LightningDistributedDataParallel, 346 LightningDataParallel, 347 ) 348 from pytorch_lightning.utilities.debugging import MisconfigurationException 349 350 try: 351 from apex import amp 352 except ImportError: 353 APEX_AVAILABLE = False 354 else: 355 APEX_AVAILABLE = True 356 357 try: 358 import torch_xla.core.xla_model as xm 359 except ImportError: 360 XLA_AVAILABLE = False 361 else: 362 XLA_AVAILABLE = True 363 364 365 class TrainerDPMixin(ABC): 366 367 # this is just a summary on variables used in this abstract class, 368 # the proper values/initialisation should be done in child class 369 on_gpu: bool 370 use_dp: bool 371 use_ddp2: bool 372 use_ddp: bool 373 use_amp: bool 374 testing: bool 375 single_gpu: bool 376 root_gpu: ... 377 amp_level: str 378 precision: ... 379 current_tpu_idx: ... 380 proc_rank: int 381 tpu_local_core_rank: int 382 tpu_global_core_rank: int 383 use_tpu: bool 384 data_parallel_device_ids: ... 
385 386 @abstractmethod 387 def run_pretrain_routine(self, *args): 388 """Warning: this is just empty shell for code implemented in other class.""" 389 390 @abstractmethod 391 def init_optimizers(self, *args): 392 """Warning: this is just empty shell for code implemented in other class.""" 393 394 def copy_trainer_model_properties(self, model): 395 if isinstance(model, LightningDataParallel): 396 ref_model = model.module 397 elif isinstance(model, LightningDistributedDataParallel): 398 ref_model = model.module 399 else: 400 ref_model = model 401 402 for m in [model, ref_model]: 403 m.trainer = self 404 m.on_gpu = self.on_gpu 405 m.use_dp = self.use_dp 406 m.use_ddp2 = self.use_ddp2 407 m.use_ddp = self.use_ddp 408 m.use_amp = self.use_amp 409 m.testing = self.testing 410 m.single_gpu = self.single_gpu 411 m.use_tpu = self.use_tpu 412 m.tpu_local_core_rank = self.tpu_local_core_rank 413 m.tpu_global_core_rank = self.tpu_global_core_rank 414 415 def transfer_batch_to_tpu(self, batch): 416 return self.__transfer_data_to_device(batch, device='tpu') 417 418 def transfer_batch_to_gpu(self, batch, gpu_id): 419 return self.__transfer_data_to_device(batch, device='gpu', gpu_id=gpu_id) 420 421 def __transfer_data_to_device(self, batch, device, gpu_id=None): 422 if device == 'tpu' and XLA_AVAILABLE: 423 # base case: object can be directly moved using `to` 424 if callable(getattr(batch, 'to', None)): 425 return batch.to(xm.xla_device()) 426 427 if device == 'gpu': 428 # base case: object can be directly moved using `cuda` or `to` 429 if callable(getattr(batch, 'cuda', None)): 430 return batch.cuda(gpu_id) 431 432 if callable(getattr(batch, 'to', None)): 433 return batch.to(torch.device('cuda', gpu_id)) 434 435 # when list 436 if isinstance(batch, list): 437 for i, x in enumerate(batch): 438 batch[i] = self.__transfer_data_to_device(x, device, gpu_id) 439 return batch 440 441 # when tuple 442 if isinstance(batch, tuple): 443 batch = list(batch) 444 for i, x in enumerate(batch): 445 batch[i] = self.__transfer_data_to_device(x, device, gpu_id) 446 return tuple(batch) 447 448 # when dict 449 if isinstance(batch, dict): 450 for k, v in batch.items(): 451 batch[k] = self.__transfer_data_to_device(v, device, gpu_id) 452 453 return batch 454 455 # nothing matches, return the value as is without transform 456 return batch 457 458 def single_gpu_train(self, model): 459 model.cuda(self.root_gpu) 460 461 # CHOOSE OPTIMIZER 462 # allow for lr schedulers as well 463 self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers()) 464 465 if self.use_amp: 466 # An example 467 model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level) 468 self.optimizers = optimizers 469 470 self.run_pretrain_routine(model) 471 472 def tpu_train(self, tpu_core_idx, model): 473 # put model on tpu 474 model.to(xm.xla_device()) 475 476 # get the appropriate tpu ranks 477 self.tpu_local_core_rank = xm.get_local_ordinal() 478 self.tpu_global_core_rank = xm.get_ordinal() 479 480 # avoid duplicating progress bar 481 self.show_progress_bar = self.show_progress_bar and self.tpu_global_core_rank == 0 482 483 # track current tpu 484 self.current_tpu_idx = tpu_core_idx 485 self.proc_rank = self.tpu_local_core_rank 486 487 # CHOOSE OPTIMIZER 488 # allow for lr schedulers as well 489 self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers()) 490 491 # init 16 bit for TPU 492 if self.precision == 16: 493 os.environ['XLA_USE_BF16'] = str(1) 494 495 m = f'INIT TPU local 
core: {self.tpu_local_core_rank}, ' \ 496 f'global rank: {self.tpu_global_core_rank}' 497 log.info(m) 498 499 # continue training routine 500 self.run_pretrain_routine(model) 501 502 self.save_spawn_weights(model) 503 504 def dp_train(self, model): 505 506 # CHOOSE OPTIMIZER 507 # allow for lr schedulers as well 508 self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers()) 509 510 model.cuda(self.root_gpu) 511 512 # check for this bug (amp + dp + !01 doesn't work) 513 # https://github.com/NVIDIA/apex/issues/227 514 if self.use_dp and self.use_amp: 515 if self.amp_level == 'O2': # pragma: no cover 516 m = f""" 517 Amp level {self.amp_level} with DataParallel is not supported. 518 See this note from NVIDIA for more info: https://github.com/NVIDIA/apex/issues/227. 519 We recommend you switch to ddp if you want to use amp 520 """ 521 raise MisconfigurationException(m) 522 else: 523 model, optimizers = model.configure_apex(amp, model, self.optimizers, self.amp_level) 524 525 # create list of device ids 526 device_ids = self.data_parallel_device_ids 527 if isinstance(device_ids, int): 528 device_ids = list(range(device_ids)) 529 530 model = LightningDataParallel(model, device_ids=device_ids) 531 532 self.run_pretrain_routine(model) 533 534 535 def normalize_parse_gpu_string_input(s): 536 if isinstance(s, str): 537 if s == '-1': 538 return -1 539 else: 540 return [int(x.strip()) for x in s.split(',')] 541 else: 542 return s 543 544 545 def get_all_available_gpus(): 546 """ 547 :return: a list of all available gpus 548 """ 549 return list(range(torch.cuda.device_count())) 550 551 552 def check_gpus_data_type(gpus): 553 """ 554 :param gpus: gpus parameter as passed to the Trainer 555 Function checks that it is one of: None, Int, String or List 556 Throws otherwise 557 :return: return unmodified gpus variable 558 """ 559 560 if gpus is not None and type(gpus) not in (int, str, list): 561 raise MisconfigurationException("GPUs must be int, string or list of ints or None.") 562 563 564 def normalize_parse_gpu_input_to_list(gpus): 565 assert gpus is not None 566 if isinstance(gpus, list): 567 return gpus 568 569 # must be an int 570 if not gpus: # gpus==0 571 return None 572 if gpus == -1: 573 return get_all_available_gpus() 574 575 return list(range(gpus)) 576 577 578 def sanitize_gpu_ids(gpus): 579 """ 580 :param gpus: list of ints corresponding to GPU indices 581 Checks that each of the GPUs in the list is actually available. 582 Throws if any of the GPUs is not available. 583 :return: unmodified gpus variable 584 """ 585 all_available_gpus = get_all_available_gpus() 586 for gpu in gpus: 587 if gpu not in all_available_gpus: 588 message = f""" 589 You requested GPUs: {gpus} 590 But your machine only has: {all_available_gpus} 591 """ 592 raise MisconfigurationException(message) 593 return gpus 594 595 596 def parse_gpu_ids(gpus): 597 """ 598 :param gpus: Int, string or list 599 An int -1 or string '-1' indicate that all available GPUs should be used. 600 A list of ints or a string containing list of comma separated integers 601 indicates specific GPUs to use 602 An int 0 means that no GPUs should be used 603 Any int N > 0 indicates that GPUs [0..N) should be used. 604 :return: List of gpus to be used 605 606 If no GPUs are available but the value of gpus variable indicates request for GPUs 607 then a misconfiguration exception is raised. 
608 """ 609 610 # Check that gpus param is None, Int, String or List 611 check_gpus_data_type(gpus) 612 613 # Handle the case when no gpus are requested 614 if gpus is None or isinstance(gpus, int) and gpus == 0: 615 return None 616 617 # We know user requested GPUs therefore if some of the 618 # requested GPUs are not available an exception is thrown. 619 620 gpus = normalize_parse_gpu_string_input(gpus) 621 gpus = normalize_parse_gpu_input_to_list(gpus) 622 gpus = sanitize_gpu_ids(gpus) 623 624 if not gpus: 625 raise MisconfigurationException("GPUs requested but none are available.") 626 return gpus 627 628 629 def determine_root_gpu_device(gpus): 630 """ 631 :param gpus: non empty list of ints representing which gpus to use 632 :return: designated root GPU device 633 """ 634 if gpus is None: 635 return None 636 637 assert isinstance(gpus, list), "gpus should be a list" 638 assert len(gpus) > 0, "gpus should be a non empty list" 639 640 # set root gpu 641 root_gpu = gpus[0] 642 643 return root_gpu 644 [end of pytorch_lightning/trainer/distrib_parts.py] [start of pytorch_lightning/utilities/debugging.py] 1 """ 2 These flags are useful to help debug a model. 3 4 Fast dev run 5 ------------ 6 7 This flag is meant for debugging a full train/val/test loop. 8 It'll activate callbacks, everything but only with 1 training and 1 validation batch. 9 Use this to debug a full run of your program quickly 10 11 .. code-block:: python 12 13 # DEFAULT 14 trainer = Trainer(fast_dev_run=False) 15 16 17 Inspect gradient norms 18 ---------------------- 19 20 Looking at grad norms can help you figure out where training might be going wrong. 21 22 .. code-block:: python 23 24 # DEFAULT (-1 doesn't track norms) 25 trainer = Trainer(track_grad_norm=-1) 26 27 # track the LP norm (P=2 here) 28 trainer = Trainer(track_grad_norm=2) 29 30 31 Make model overfit on subset of data 32 ------------------------------------ 33 34 A useful debugging trick is to make your model overfit a tiny fraction of the data. 35 36 setting `overfit_pct > 0` will overwrite train_percent_check, val_percent_check, test_percent_check 37 38 .. code-block:: python 39 40 # DEFAULT don't overfit (ie: normal training) 41 trainer = Trainer(overfit_pct=0.0) 42 43 # overfit on 1% of data 44 trainer = Trainer(overfit_pct=0.01) 45 46 47 Print the parameter count by layer 48 ---------------------------------- 49 50 By default lightning prints a list of parameters *and submodules* when it starts training. 51 52 .. code-block:: python 53 54 # DEFAULT print a full list of all submodules and their parameters. 55 trainer = Trainer(weights_summary='full') 56 57 # only print the top-level modules (i.e. the children of LightningModule). 58 trainer = Trainer(weights_summary='top') 59 60 Print which gradients are nan 61 ----------------------------- 62 63 This option prints a list of tensors with nan gradients:: 64 65 # DEFAULT 66 trainer = Trainer(print_nan_grads=False) 67 68 Log GPU usage 69 ------------- 70 71 Lightning automatically logs gpu usage to the test tube logs. 72 It'll only do it at the metric logging interval, so it doesn't slow down training. 73 74 """ 75 76 77 class MisconfigurationException(Exception): 78 pass 79 [end of pytorch_lightning/utilities/debugging.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Lightning-AI/lightning
5e013f6e2ff555e56c75d6e6148830024d47bb29
Support IterableDatasets for validation and test, not just train set [blocked by #953] ## 🚀 Feature Currently Lightning supports `IterableDatasets` only in the training set (see [code](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/data_loading.py#L177)). This makes them second-class citizens compared to the map-style datasets, and supporting them seems a low hanging fruit. ### Motivation This enables having larger test sets that may not fit into a machine's memory (they could be very large in production settings, or of modest size running in a student's cheap laptop). Moreover, datasets are usually generated together (eg train, val, test can come from the same process). It is very likely that the same process has the same signature, so you may end up having IterableDatasets even when their size may not deem it strictly necessary. ### Pitch Changing a few lines of code by bringing in the checks we are doing for training should be enough unless I'm missing something. ### Additional context Are there any gotchas that make this harder than it looks?
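For concreteness, here is a minimal sketch (not part of the original report) of the kind of loader this request is about: a validation/test `DataLoader` backed by `torch.utils.data.IterableDataset`, which defines no `__len__` and therefore cannot be sized up front. The dataset class and the sizes below are made up for illustration.

```python
import torch
from torch.utils.data import IterableDataset, DataLoader

class StreamingValSet(IterableDataset):
    """Yields (features, label) pairs from a stream; no __len__ is defined."""
    def __iter__(self):
        for i in range(1000):              # could just as well be an unbounded source
            yield torch.randn(16), torch.tensor(i % 2)

val_loader = DataLoader(StreamingValSet(), batch_size=8)
# len(val_loader) raises TypeError because the underlying dataset is unsized,
# which is what the discussion below says trips up the validation/test paths.
```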
Hey, thanks for your contribution! Great first issue! @Darktex this looks straightforward! I can’t think if any gotchas right now. The only thing would be if you don’t have the length of a dataset up front but i think we’re refactoring to clear that up right now. want to do a PR? @ethanwharris @jeffling thoughts? fyi @srush @luiscape It seems there's an opportunity to clean stuff up a bit here. Really the only check we need is to see if `len(dataloader)` raises an error. If it does, then check if number of steps to run is set elsewhere and throw a warning if not (i.e. if not set elsewhere this will just run forever). That way you could get rid of the check for whether `IterableDataset` exists and the dependence on `DataLoader.dataset`, solving several issues. maybe step 1 is to refactor the code to minimize the len(dataloader) calls? we likely only need them to: - figure out when to do validation checks (percent into epoch) - set the tqdm bar length Agreed. Then it would be easier to see where the `IterableDataset` stuff will fall over, and just do something different when `len` is not available. Ok, #953 is blocking this issue at the moment. @ethanwharris @Darktex i think 0.7.1 fixed this problem. Mind checking now? @williamFalcon Not quite, still tires to call len on val / test dataloders - will PR in a bit is the easier thing to try catch for the len exception and set to inf if caught? then when the epoch ends, set the length when we know it? is the easier thing to try catch for the len exception and set to inf if caught? then when the epoch ends, set the length when we know it? Yeah, that's the plan - currently have the `is_infinite_dataloader` method which tries to call len and catches the exception, just need to get the TQDM stuff to not do `total=float('inf')` as that raises an error Not sure about setting the lenght once we know it - maybe in a seperate PR?
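The probe sketched in that exchange boils down to calling `len()` and treating a `TypeError` as "unsized". A minimal illustration follows; the helper name is invented here, but the check mirrors the `_has_len` helper added in the patch below.

```python
from torch.utils.data import DataLoader

def batches_or_inf(dataloader: DataLoader) -> float:
    """Return the number of batches, or float('inf') if the loader is unsized."""
    try:
        return len(dataloader)       # map-style datasets expose a length
    except TypeError:
        return float('inf')          # e.g. an IterableDataset without __len__
```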
2020-03-09T16:14:45Z
<patch> diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py --- a/pytorch_lightning/trainer/data_loading.py +++ b/pytorch_lightning/trainer/data_loading.py @@ -1,9 +1,11 @@ from abc import ABC, abstractmethod +from typing import Union, List, Tuple, Callable import torch.distributed as dist from torch.utils.data import SequentialSampler, DataLoader from torch.utils.data.distributed import DistributedSampler +from pytorch_lightning.core import LightningModule from pytorch_lightning.utilities.debugging import MisconfigurationException try: @@ -23,6 +25,15 @@ XLA_AVAILABLE = True +def _has_len(dataloader: DataLoader) -> bool: + try: + # try getting the length + _ = len(dataloader) + return True + except TypeError: + return False + + class TrainerDataLoadingMixin(ABC): # this is just a summary on variables used in this abstract class, @@ -35,27 +46,30 @@ class TrainerDataLoadingMixin(ABC): use_tpu: bool tpu_local_core_rank: int train_dataloader: DataLoader - num_training_batches: int + num_training_batches: Union[int, float] val_check_batch: ... - val_dataloaders: DataLoader - num_val_batches: int - test_dataloaders: DataLoader - num_test_batches: int + val_dataloaders: List[DataLoader] + num_val_batches: Union[int, float] + test_dataloaders: List[DataLoader] + num_test_batches: Union[int, float] + train_percent_check: float + val_percent_check: float + test_percent_check: float @abstractmethod def is_overriden(self, *args): """Warning: this is just empty shell for code implemented in other class.""" - def _percent_range_check(self, name): + def _percent_range_check(self, name: str) -> None: value = getattr(self, name) - msg = f"`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}." - if name == "val_check_interval": - msg += " If you want to disable validation set `val_percent_check` to 0.0 instead." + msg = f'`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}.' + if name == 'val_check_interval': + msg += ' If you want to disable validation set `val_percent_check` to 0.0 instead.' if not 0. <= value <= 1.: raise ValueError(msg) - def auto_add_sampler(self, dataloader, train): + def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader: if self.use_ddp or self.use_ddp2 or self.use_tpu: dl_args = { 'dataset': dataloader.dataset, @@ -88,14 +102,14 @@ def auto_add_sampler(self, dataloader, train): dataloader = DataLoader(**dl_args) return dataloader - def reset_train_dataloader(self, model): - """ - Dataloaders are provided by the model - :param model: - :return: - """ + def reset_train_dataloader(self, model: LightningModule) -> None: + """Resets the train dataloader and initialises required variables + (number of batches, when to validate, etc.). 
- self.train_dataloader = self.request_data_loader(model.train_dataloader) + Args: + model: The current `LightningModule` + """ + self.train_dataloader = self.request_dataloader(model.train_dataloader) self.num_training_batches = 0 # automatically add samplers @@ -103,7 +117,7 @@ def reset_train_dataloader(self, model): self._percent_range_check('train_percent_check') - if self.is_infinite_dataloader(self.train_dataloader): + if not _has_len(self.train_dataloader): self.num_training_batches = float('inf') else: # try getting the length @@ -117,122 +131,119 @@ def reset_train_dataloader(self, model): self.val_check_batch = self.val_check_interval if self.val_check_batch > self.num_training_batches: raise ValueError( - f"`val_check_interval` ({self.val_check_interval}) must be less than or equal " - f"to the number of the training batches ({self.num_training_batches}). " - f"If you want to disable validation set `val_percent_check` to 0.0 instead.") + f'`val_check_interval` ({self.val_check_interval}) must be less than or equal ' + f'to the number of the training batches ({self.num_training_batches}). ' + 'If you want to disable validation set `val_percent_check` to 0.0 instead.') else: - if self.is_infinite_dataloader(self.train_dataloader): - m = ''' - When using an infinite DataLoader (e.g. with an IterableDataset or when DataLoader - does not implement `__len__`) for `train_dataloader`, `Trainer(val_check_interval)` - must be an int. An int k specifies checking validation every k training batches. - ''' - raise MisconfigurationException(m) + if not _has_len(self.train_dataloader): + raise MisconfigurationException( + 'When using an infinite DataLoader (e.g. with an IterableDataset or when ' + 'DataLoader does not implement `__len__`) for `train_dataloader`, ' + '`Trainer(val_check_interval)` must be an int. An int k specifies checking ' + 'validation every k training batches.') self._percent_range_check('val_check_interval') self.val_check_batch = int(self.num_training_batches * self.val_check_interval) self.val_check_batch = max(1, self.val_check_batch) - def is_infinite_dataloader(self, dataloader): - try: - # try getting the length - _ = len(dataloader) - return False - except TypeError as e: - return True + def _reset_eval_dataloader(self, model: LightningModule, + mode: str) -> Tuple[int, List[DataLoader]]: + """Generic method to reset a dataloader for evaluation. 
- def reset_val_dataloader(self, model): - """ - Dataloaders are provided by the model - :param model: - :return: + Args: + model: The current `LightningModule` + mode: Either `'val'` or `'test'` + + Returns: + Tuple (num_batches, dataloaders) """ - if not self.is_overriden('validation_step'): - return + dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader')) - self.val_dataloaders = self.request_data_loader(model.val_dataloader) - if not isinstance(self.val_dataloaders, list): - self.val_dataloaders = [self.val_dataloaders] - self.num_val_batches = 0 + if not isinstance(dataloaders, list): + dataloaders = [dataloaders] # add samplers - self.val_dataloaders = [self.auto_add_sampler(dl, train=False) - for dl in self.val_dataloaders if dl] + dataloaders = [self.auto_add_sampler(dl, train=False) for dl in dataloaders if dl] - # determine number of validation batches - # val datasets could be none, 1 or 2+ - if self.val_dataloaders is not None: - self._percent_range_check('val_percent_check') + num_batches = 0 - self.num_val_batches = sum(len(dataloader) for dataloader in self.val_dataloaders) - self.num_val_batches = int(self.num_val_batches * self.val_percent_check) + # determine number of batches + # datasets could be none, 1 or 2+ + if len(dataloaders) != 0: + for dataloader in dataloaders: + if not _has_len(dataloader): + num_batches = float('inf') + break - def reset_test_dataloader(self, model): - """Dataloaders are provided by the model. + percent_check = getattr(self, f'{mode}_percent_check') - :param model: - """ - if not self.is_overriden('test_step'): - return + if num_batches != float('inf'): + self._percent_range_check(f'{mode}_percent_check') - # get actual loader - self.test_dataloaders = self.request_data_loader(model.test_dataloader) - if not isinstance(self.test_dataloaders, list): - self.test_dataloaders = [self.test_dataloaders] - self.num_test_batches = 0 + num_batches = sum(len(dataloader) for dataloader in dataloaders) + num_batches = int(num_batches * percent_check) + elif percent_check not in (0.0, 1.0): + raise MisconfigurationException( + 'When using an infinite DataLoader (e.g. with an IterableDataset or when ' + f'DataLoader does not implement `__len__`) for `{mode}_dataloader`, ' + f'`Trainer({mode}_percent_check)` must be `0.0` or `1.0`.') + return num_batches, dataloaders - # add samplers - self.test_dataloaders = [self.auto_add_sampler(dl, train=False) - for dl in self.test_dataloaders if dl] + def reset_val_dataloader(self, model: LightningModule) -> None: + """Resets the validation dataloader and determines the number of batches. - # determine number of test batches - if self.test_dataloaders is not None: - self._percent_range_check('test_percent_check') + Args: + model: The current `LightningModule` + """ + if self.is_overriden('validation_step'): + self.num_val_batches, self.val_dataloaders =\ + self._reset_eval_dataloader(model, 'val') - len_sum = sum(len(dataloader) for dataloader in self.test_dataloaders) - self.num_test_batches = len_sum - self.num_test_batches = int(self.num_test_batches * self.test_percent_check) + def reset_test_dataloader(self, model) -> None: + """Resets the validation dataloader and determines the number of batches. - def request_data_loader(self, data_loader_fx): + Args: + model: The current `LightningModule` """ - Handles downloading data in the GPU or TPU case. 
+ if self.is_overriden('test_step'): + self.num_test_batches, self.test_dataloaders =\ + self._reset_eval_dataloader(model, 'test') - :param data_loader_fx: - :return: + def request_dataloader(self, dataloader_fx: Callable) -> DataLoader: + """Handles downloading data in the GPU or TPU case. + + Args: + dataloader_fx: The bound dataloader getter + + Returns: + The dataloader """ + dataloader = dataloader_fx() + # get the function we'll use to get data if self.use_ddp or self.use_ddp2: - data_loader = data_loader_fx() - # all processes wait until data download has happened dist.barrier() # data download/load on TPU elif self.use_tpu and XLA_AVAILABLE: - data_loader = data_loader_fx() - # all processes wait until data download has happened - torch_xla.core.xla_model.rendezvous("pl.TrainerDataLoadingMixin.get_dataloaders") + torch_xla.core.xla_model.rendezvous('pl.TrainerDataLoadingMixin.get_dataloaders') - # regular start - else: - data_loader = data_loader_fx() - - return data_loader + return dataloader - def determine_data_use_amount(self, train_percent_check, val_percent_check, - test_percent_check, overfit_pct): - """ - Use less data for debugging purposes + def determine_data_use_amount(self, train_percent_check: float, val_percent_check: float, + test_percent_check: float, overfit_pct: float) -> None: + """Use less data for debugging purposes """ self.train_percent_check = train_percent_check self.val_percent_check = val_percent_check self.test_percent_check = test_percent_check if overfit_pct > 0: if overfit_pct > 1: - raise ValueError(f"`overfit_pct` must be not greater than 1.0, but got " - f"{overfit_pct:.3f}.") + raise ValueError( + f'`overfit_pct` must be not greater than 1.0, but got {overfit_pct:.3f}.') self.train_percent_check = overfit_pct self.val_percent_check = overfit_pct diff --git a/pytorch_lightning/trainer/evaluation_loop.py b/pytorch_lightning/trainer/evaluation_loop.py --- a/pytorch_lightning/trainer/evaluation_loop.py +++ b/pytorch_lightning/trainer/evaluation_loop.py @@ -359,9 +359,9 @@ def run_evaluation(self, test_mode: bool = False): # main progress bar will already be closed when testing so initial position is free position = 2 * self.process_position + (not test_mode) desc = 'Testing' if test_mode else 'Validating' - pbar = tqdm(desc=desc, total=max_batches, leave=test_mode, position=position, - disable=not self.show_progress_bar, dynamic_ncols=True, - file=sys.stdout) + total = max_batches if max_batches != float('inf') else None + pbar = tqdm(desc=desc, total=total, leave=test_mode, position=position, + disable=not self.show_progress_bar, dynamic_ncols=True, file=sys.stdout) setattr(self, f'{"test" if test_mode else "val"}_progress_bar', pbar) # run evaluation diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py --- a/pytorch_lightning/trainer/training_loop.py +++ b/pytorch_lightning/trainer/training_loop.py @@ -223,10 +223,6 @@ def get_model(self): def is_function_implemented(self, *args): """Warning: this is just empty shell for code implemented in other class.""" - @abstractmethod - def is_infinite_dataloader(self, *args): - """Warning: this is just empty shell for code implemented in other class.""" - @abstractmethod def run_evaluation(self, *args): """Warning: this is just empty shell for code implemented in other class.""" @@ -310,7 +306,7 @@ def train(self): total_val_batches = 0 is_val_epoch = False - if not self.disable_validation: + if not self.disable_validation and self.num_training_batches != 
float('inf'): # val can be checked multiple times in epoch is_val_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0 val_checks_per_epoch = self.num_training_batches // self.val_check_batch @@ -324,8 +320,8 @@ def train(self): if self.fast_dev_run: # limit the number of batches to 2 (1 train and 1 val) in fast_dev_run num_iterations = 2 - elif self.is_infinite_dataloader(self.train_dataloader): - # for infinite train loader, the progress bar never ends + elif self.total_batches == float('inf'): + # for infinite train or val loader, the progress bar never ends num_iterations = None else: num_iterations = self.total_batches @@ -334,7 +330,7 @@ def train(self): # .reset() doesn't work on disabled progress bar so we should check if not self.main_progress_bar.disable: self.main_progress_bar.reset(num_iterations) - desc = f'Epoch {epoch + 1}' if not self.is_infinite_dataloader(self.train_dataloader) else '' + desc = f'Epoch {epoch + 1}' self.main_progress_bar.set_description(desc) # ----------------- </patch>
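As a hedged usage sketch of what the patch above enforces (the argument names are the ones the patch itself references; exact Trainer defaults may differ by release): when the training loader is unsized, `val_check_interval` must be an absolute batch count, and when a validation or test loader is unsized its percent check may only be `0.0` or `1.0`.

```python
from pytorch_lightning import Trainer

trainer = Trainer(
    val_check_interval=100,   # an int: validate every 100 training batches
    val_percent_check=1.0,    # only 0.0 or 1.0 is accepted for an unsized val loader
)
```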
[]
[]
pandas-dev__pandas-22695
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Change in behavior of DatetimeIndex + Offset ```python import pandas as pd offset = pd.tseries.offsets.Week(weekday=6) idx = pd.DatetimeIndex(['1999-12-26', '2000-05-14']) assert (idx[0] + offset) == (idx + offset)[0] ``` bisected to https://github.com/pandas-dev/pandas/pull/18952 Before that the assertion was true, after it's false ``` # before >>> idx + offset DatetimeIndex(['2000-01-02', '2000-05-21'], dtype='datetime64[ns]', freq=None) # after >>> idx + offset DatetimeIndex(['1999-12-26', '2000-05-14'], dtype='datetime64[ns]', freq=None) ``` cc @reidy-p is this an expected change due to #18952, or a new bug? If it's intentional, we should document that. </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 30 </a> 31 </tr> 32 <tr> 33 <td>License</td> 34 <td> 35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 37 </a> 38 </td> 39 </tr> 40 <tr> 41 <td>Build Status</td> 42 <td> 43 <a href="https://travis-ci.org/pandas-dev/pandas"> 44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 45 </a> 46 </td> 47 </tr> 48 <tr> 49 <td></td> 50 <td> 51 <a href="https://circleci.com/gh/pandas-dev/pandas"> 52 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" /> 53 </a> 54 </td> 55 </tr> 56 <tr> 57 <td></td> 58 <td> 59 <a href="https://ci.appveyor.com/project/pandas-dev/pandas"> 60 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" /> 61 </a> 62 </td> 63 </tr> 64 <tr> 65 <td>Coverage</td> 66  <td> 67 <a href="https://codecov.io/gh/pandas-dev/pandas"> 68 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 69 </a> 70 </td> 71 </tr> 72 <tr> 73 <td>Downloads</td> 74 <td> 75 <a href="https://pandas.pydata.org"> 76 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 77 </a> 78 </td> 79 </tr> 80 <tr> 81 <td>Gitter</td> 82 <td> 83 <a href="https://gitter.im/pydata/pandas"> 84 <img src="https://badges.gitter.im/Join%20Chat.svg" 85 </a> 86 </td> 87 </tr> 88 </table> 89 90 91 92 ## What is it? 93 94 **pandas** is a Python package providing fast, flexible, and expressive data 95 structures designed to make working with "relational" or "labeled" data both 96 easy and intuitive. It aims to be the fundamental high-level building block for 97 doing practical, **real world** data analysis in Python. 
Additionally, it has 98 the broader goal of becoming **the most powerful and flexible open source data 99 analysis / manipulation tool available in any language**. It is already well on 100 its way toward this goal. 101 102 ## Main Features 103 Here are just a few of the things that pandas does well: 104 105 - Easy handling of [**missing data**][missing-data] (represented as 106 `NaN`) in floating point as well as non-floating point data 107 - Size mutability: columns can be [**inserted and 108 deleted**][insertion-deletion] from DataFrame and higher dimensional 109 objects 110 - Automatic and explicit [**data alignment**][alignment]: objects can 111 be explicitly aligned to a set of labels, or the user can simply 112 ignore the labels and let `Series`, `DataFrame`, etc. automatically 113 align the data for you in computations 114 - Powerful, flexible [**group by**][groupby] functionality to perform 115 split-apply-combine operations on data sets, for both aggregating 116 and transforming data 117 - Make it [**easy to convert**][conversion] ragged, 118 differently-indexed data in other Python and NumPy data structures 119 into DataFrame objects 120 - Intelligent label-based [**slicing**][slicing], [**fancy 121 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 122 large data sets 123 - Intuitive [**merging**][merging] and [**joining**][joining] data 124 sets 125 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 126 data sets 127 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 128 labels per tick) 129 - Robust IO tools for loading data from [**flat files**][flat-files] 130 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 131 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 132 - [**Time series**][timeseries]-specific functionality: date range 133 generation and frequency conversion, moving window statistics, 134 moving window linear regressions, date shifting and lagging, etc. 
135 136 137 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 138 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 139 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 140 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 141 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 142 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 143 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 144 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 145 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 146 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 147 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 148 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 149 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 150 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 151 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 152 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 153 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 154 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 155 156 ## Where to get it 157 The source code is currently hosted on GitHub at: 158 https://github.com/pandas-dev/pandas 159 160 Binary installers for the latest released version are available at the [Python 161 package index](https://pypi.org/project/pandas) and on conda. 162 163 ```sh 164 # conda 165 conda install pandas 166 ``` 167 168 ```sh 169 # or PyPI 170 pip install pandas 171 ``` 172 173 ## Dependencies 174 - [NumPy](https://www.numpy.org): 1.9.0 or higher 175 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 176 - [pytz](https://pythonhosted.org/pytz): 2011k or higher 177 178 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 179 for recommended and optional dependencies. 180 181 ## Installation from sources 182 To install pandas from source you need Cython in addition to the normal 183 dependencies above. Cython can be installed from pypi: 184 185 ```sh 186 pip install cython 187 ``` 188 189 In the `pandas` directory (same one where you found this file after 190 cloning the git repo), execute: 191 192 ```sh 193 python setup.py install 194 ``` 195 196 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 197 198 ```sh 199 python setup.py develop 200 ``` 201 202 Alternatively, you can use `pip` if you want all the dependencies pulled 203 in automatically (the `-e` option is for installing it in [development 204 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 205 206 ```sh 207 pip install -e . 208 ``` 209 210 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
211 212 ## License 213 [BSD 3](LICENSE) 214 215 ## Documentation 216 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 217 218 ## Background 219 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 220 has been under active development since then. 221 222 ## Getting Help 223 224 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 225 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 226 227 ## Discussion and Development 228 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 229 230 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 231 232 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 233 234 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)** 235 236 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 237 238 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 239 240 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 241 242 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 
243 [end of README.md] [start of pandas/core/tools/datetimes.py] 1 from functools import partial 2 from datetime import datetime, time 3 from collections import MutableMapping 4 5 import numpy as np 6 7 from pandas._libs import tslib, tslibs 8 from pandas._libs.tslibs.strptime import array_strptime 9 from pandas._libs.tslibs import parsing, conversion, Timestamp 10 from pandas._libs.tslibs.parsing import ( # noqa 11 parse_time_string, 12 DateParseError, 13 _format_is_iso, 14 _guess_datetime_format) 15 16 from pandas.core.dtypes.common import ( 17 ensure_object, 18 is_datetime64_ns_dtype, 19 is_datetime64_dtype, 20 is_datetime64tz_dtype, 21 is_integer_dtype, 22 is_integer, 23 is_float, 24 is_list_like, 25 is_scalar, 26 is_numeric_dtype, 27 is_object_dtype) 28 from pandas.core.dtypes.generic import ( 29 ABCIndexClass, ABCSeries, 30 ABCDataFrame) 31 from pandas.core.dtypes.missing import notna 32 from pandas.core import algorithms 33 from pandas.compat import zip 34 35 36 def _guess_datetime_format_for_array(arr, **kwargs): 37 # Try to guess the format based on the first non-NaN element 38 non_nan_elements = notna(arr).nonzero()[0] 39 if len(non_nan_elements): 40 return _guess_datetime_format(arr[non_nan_elements[0]], **kwargs) 41 42 43 def _maybe_cache(arg, format, cache, convert_listlike): 44 """ 45 Create a cache of unique dates from an array of dates 46 47 Parameters 48 ---------- 49 arg : integer, float, string, datetime, list, tuple, 1-d array, Series 50 format : string 51 Strftime format to parse time 52 cache : boolean 53 True attempts to create a cache of converted values 54 convert_listlike : function 55 Conversion function to apply on dates 56 57 Returns 58 ------- 59 cache_array : Series 60 Cache of converted, unique dates. Can be empty 61 """ 62 from pandas import Series 63 cache_array = Series() 64 if cache: 65 # Perform a quicker unique check 66 from pandas import Index 67 if not Index(arg).is_unique: 68 unique_dates = algorithms.unique(arg) 69 cache_dates = convert_listlike(unique_dates, True, format) 70 cache_array = Series(cache_dates, index=unique_dates) 71 return cache_array 72 73 74 def _convert_and_box_cache(arg, cache_array, box, errors, name=None): 75 """ 76 Convert array of dates with a cache and box the result 77 78 Parameters 79 ---------- 80 arg : integer, float, string, datetime, list, tuple, 1-d array, Series 81 cache_array : Series 82 Cache of converted, unique dates 83 box : boolean 84 True boxes result as an Index-like, False returns an ndarray 85 errors : string 86 'ignore' plus box=True will convert result to Index 87 name : string, default None 88 Name for a DatetimeIndex 89 90 Returns 91 ------- 92 result : datetime of converted dates 93 Returns: 94 95 - Index-like if box=True 96 - ndarray if box=False 97 """ 98 from pandas import Series, DatetimeIndex, Index 99 result = Series(arg).map(cache_array) 100 if box: 101 if errors == 'ignore': 102 return Index(result) 103 else: 104 return DatetimeIndex(result, name=name) 105 return result.values 106 107 108 def _return_parsed_timezone_results(result, timezones, box, tz): 109 """ 110 Return results from array_strptime if a %z or %Z directive was passed. 
111 112 Parameters 113 ---------- 114 result : ndarray 115 int64 date representations of the dates 116 timezones : ndarray 117 pytz timezone objects 118 box : boolean 119 True boxes result as an Index-like, False returns an ndarray 120 tz : object 121 None or pytz timezone object 122 Returns 123 ------- 124 tz_result : ndarray of parsed dates with timezone 125 Returns: 126 127 - Index-like if box=True 128 - ndarray of Timestamps if box=False 129 130 """ 131 if tz is not None: 132 raise ValueError("Cannot pass a tz argument when " 133 "parsing strings with timezone " 134 "information.") 135 tz_results = np.array([Timestamp(res).tz_localize(zone) for res, zone 136 in zip(result, timezones)]) 137 if box: 138 from pandas import Index 139 return Index(tz_results) 140 return tz_results 141 142 143 def _convert_listlike_datetimes(arg, box, format, name=None, tz=None, 144 unit=None, errors=None, 145 infer_datetime_format=None, dayfirst=None, 146 yearfirst=None, exact=None): 147 """ 148 Helper function for to_datetime. Performs the conversions of 1D listlike 149 of dates 150 151 Parameters 152 ---------- 153 arg : list, tuple, ndarray, Series, Index 154 date to be parced 155 box : boolean 156 True boxes result as an Index-like, False returns an ndarray 157 name : object 158 None or string for the Index name 159 tz : object 160 None or 'utc' 161 unit : string 162 None or string of the frequency of the passed data 163 errors : string 164 error handing behaviors from to_datetime, 'raise', 'coerce', 'ignore' 165 infer_datetime_format : boolean 166 inferring format behavior from to_datetime 167 dayfirst : boolean 168 dayfirst parsing behavior from to_datetime 169 yearfirst : boolean 170 yearfirst parsing behavior from to_datetime 171 exact : boolean 172 exact format matching behavior from to_datetime 173 174 Returns 175 ------- 176 ndarray of parsed dates 177 Returns: 178 179 - Index-like if box=True 180 - ndarray of Timestamps if box=False 181 """ 182 from pandas import DatetimeIndex 183 if isinstance(arg, (list, tuple)): 184 arg = np.array(arg, dtype='O') 185 186 # these are shortcutable 187 if is_datetime64tz_dtype(arg): 188 if not isinstance(arg, DatetimeIndex): 189 return DatetimeIndex(arg, tz=tz, name=name) 190 if tz == 'utc': 191 arg = arg.tz_convert(None).tz_localize(tz) 192 return arg 193 194 elif is_datetime64_ns_dtype(arg): 195 if box and not isinstance(arg, DatetimeIndex): 196 try: 197 return DatetimeIndex(arg, tz=tz, name=name) 198 except ValueError: 199 pass 200 201 return arg 202 203 elif unit is not None: 204 if format is not None: 205 raise ValueError("cannot specify both format and unit") 206 arg = getattr(arg, 'values', arg) 207 result = tslib.array_with_unit_to_datetime(arg, unit, 208 errors=errors) 209 if box: 210 if errors == 'ignore': 211 from pandas import Index 212 return Index(result) 213 214 return DatetimeIndex(result, tz=tz, name=name) 215 return result 216 elif getattr(arg, 'ndim', 1) > 1: 217 raise TypeError('arg must be a string, datetime, list, tuple, ' 218 '1-d array, or Series') 219 220 arg = ensure_object(arg) 221 require_iso8601 = False 222 223 if infer_datetime_format and format is None: 224 format = _guess_datetime_format_for_array(arg, dayfirst=dayfirst) 225 226 if format is not None: 227 # There is a special fast-path for iso8601 formatted 228 # datetime strings, so in those cases don't use the inferred 229 # format because this path makes process slower in this 230 # special case 231 format_is_iso8601 = _format_is_iso(format) 232 if format_is_iso8601: 233 
require_iso8601 = not infer_datetime_format 234 format = None 235 236 try: 237 result = None 238 239 if format is not None: 240 # shortcut formatting here 241 if format == '%Y%m%d': 242 try: 243 result = _attempt_YYYYMMDD(arg, errors=errors) 244 except: 245 raise ValueError("cannot convert the input to " 246 "'%Y%m%d' date format") 247 248 # fallback 249 if result is None: 250 try: 251 result, timezones = array_strptime( 252 arg, format, exact=exact, errors=errors) 253 if '%Z' in format or '%z' in format: 254 return _return_parsed_timezone_results( 255 result, timezones, box, tz) 256 except tslibs.OutOfBoundsDatetime: 257 if errors == 'raise': 258 raise 259 result = arg 260 except ValueError: 261 # if format was inferred, try falling back 262 # to array_to_datetime - terminate here 263 # for specified formats 264 if not infer_datetime_format: 265 if errors == 'raise': 266 raise 267 result = arg 268 269 if result is None and (format is None or infer_datetime_format): 270 result, tz_parsed = tslib.array_to_datetime( 271 arg, 272 errors=errors, 273 utc=tz == 'utc', 274 dayfirst=dayfirst, 275 yearfirst=yearfirst, 276 require_iso8601=require_iso8601 277 ) 278 if tz_parsed is not None: 279 if box: 280 # We can take a shortcut since the datetime64 numpy array 281 # is in UTC 282 return DatetimeIndex._simple_new(result, name=name, 283 tz=tz_parsed) 284 else: 285 # Convert the datetime64 numpy array to an numpy array 286 # of datetime objects 287 result = [Timestamp(ts, tz=tz_parsed).to_pydatetime() 288 for ts in result] 289 return np.array(result, dtype=object) 290 291 if box: 292 # Ensure we return an Index in all cases where box=True 293 if is_datetime64_dtype(result): 294 return DatetimeIndex(result, tz=tz, name=name) 295 elif is_object_dtype(result): 296 # e.g. an Index of datetime objects 297 from pandas import Index 298 return Index(result, name=name) 299 return result 300 301 except ValueError as e: 302 try: 303 values, tz = conversion.datetime_to_datetime64(arg) 304 return DatetimeIndex._simple_new(values, name=name, tz=tz) 305 except (ValueError, TypeError): 306 raise e 307 308 309 def _adjust_to_origin(arg, origin, unit): 310 """ 311 Helper function for to_datetime. 
312 Adjust input argument to the specified origin 313 314 Parameters 315 ---------- 316 arg : list, tuple, ndarray, Series, Index 317 date to be adjusted 318 origin : 'julian' or Timestamp 319 origin offset for the arg 320 unit : string 321 passed unit from to_datetime, must be 'D' 322 323 Returns 324 ------- 325 ndarray or scalar of adjusted date(s) 326 """ 327 if origin == 'julian': 328 original = arg 329 j0 = Timestamp(0).to_julian_date() 330 if unit != 'D': 331 raise ValueError("unit must be 'D' for origin='julian'") 332 try: 333 arg = arg - j0 334 except: 335 raise ValueError("incompatible 'arg' type for given " 336 "'origin'='julian'") 337 338 # premptively check this for a nice range 339 j_max = Timestamp.max.to_julian_date() - j0 340 j_min = Timestamp.min.to_julian_date() - j0 341 if np.any(arg > j_max) or np.any(arg < j_min): 342 raise tslibs.OutOfBoundsDatetime( 343 "{original} is Out of Bounds for " 344 "origin='julian'".format(original=original)) 345 else: 346 # arg must be numeric 347 if not ((is_scalar(arg) and (is_integer(arg) or is_float(arg))) or 348 is_numeric_dtype(np.asarray(arg))): 349 raise ValueError( 350 "'{arg}' is not compatible with origin='{origin}'; " 351 "it must be numeric with a unit specified ".format( 352 arg=arg, 353 origin=origin)) 354 355 # we are going to offset back to unix / epoch time 356 try: 357 offset = Timestamp(origin) 358 except tslibs.OutOfBoundsDatetime: 359 raise tslibs.OutOfBoundsDatetime( 360 "origin {origin} is Out of Bounds".format(origin=origin)) 361 except ValueError: 362 raise ValueError("origin {origin} cannot be converted " 363 "to a Timestamp".format(origin=origin)) 364 365 if offset.tz is not None: 366 raise ValueError( 367 "origin offset {} must be tz-naive".format(offset)) 368 offset -= Timestamp(0) 369 370 # convert the offset to the unit of the arg 371 # this should be lossless in terms of precision 372 offset = offset // tslibs.Timedelta(1, unit=unit) 373 374 # scalars & ndarray-like can handle the addition 375 if is_list_like(arg) and not isinstance( 376 arg, (ABCSeries, ABCIndexClass, np.ndarray)): 377 arg = np.asarray(arg) 378 arg = arg + offset 379 return arg 380 381 382 def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, 383 utc=None, box=True, format=None, exact=True, 384 unit=None, infer_datetime_format=False, origin='unix', 385 cache=False): 386 """ 387 Convert argument to datetime. 388 389 Parameters 390 ---------- 391 arg : integer, float, string, datetime, list, tuple, 1-d array, Series 392 393 .. versionadded:: 0.18.1 394 395 or DataFrame/dict-like 396 397 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 398 399 - If 'raise', then invalid parsing will raise an exception 400 - If 'coerce', then invalid parsing will be set as NaT 401 - If 'ignore', then invalid parsing will return the input 402 dayfirst : boolean, default False 403 Specify a date parse order if `arg` is str or its list-likes. 404 If True, parses dates with the day first, eg 10/11/12 is parsed as 405 2012-11-10. 406 Warning: dayfirst=True is not strict, but will prefer to parse 407 with day first (this is a known bug, based on dateutil behavior). 408 yearfirst : boolean, default False 409 Specify a date parse order if `arg` is str or its list-likes. 410 411 - If True parses dates with the year first, eg 10/11/12 is parsed as 412 2010-11-12. 413 - If both dayfirst and yearfirst are True, yearfirst is preceded (same 414 as dateutil). 
415 416 Warning: yearfirst=True is not strict, but will prefer to parse 417 with year first (this is a known bug, based on dateutil behavior). 418 419 .. versionadded:: 0.16.1 420 421 utc : boolean, default None 422 Return UTC DatetimeIndex if True (converting any tz-aware 423 datetime.datetime objects as well). 424 box : boolean, default True 425 426 - If True returns a DatetimeIndex or Index-like object 427 - If False returns ndarray of values. 428 format : string, default None 429 strftime to parse time, eg "%d/%m/%Y", note that "%f" will parse 430 all the way up to nanoseconds. 431 exact : boolean, True by default 432 433 - If True, require an exact format match. 434 - If False, allow the format to match anywhere in the target string. 435 436 unit : string, default 'ns' 437 unit of the arg (D,s,ms,us,ns) denote the unit, which is an 438 integer or float number. This will be based off the origin. 439 Example, with unit='ms' and origin='unix' (the default), this 440 would calculate the number of milliseconds to the unix epoch start. 441 infer_datetime_format : boolean, default False 442 If True and no `format` is given, attempt to infer the format of the 443 datetime strings, and if it can be inferred, switch to a faster 444 method of parsing them. In some cases this can increase the parsing 445 speed by ~5-10x. 446 origin : scalar, default is 'unix' 447 Define the reference date. The numeric values would be parsed as number 448 of units (defined by `unit`) since this reference date. 449 450 - If 'unix' (or POSIX) time; origin is set to 1970-01-01. 451 - If 'julian', unit must be 'D', and origin is set to beginning of 452 Julian Calendar. Julian day number 0 is assigned to the day starting 453 at noon on January 1, 4713 BC. 454 - If Timestamp convertible, origin is set to Timestamp identified by 455 origin. 456 457 .. versionadded:: 0.20.0 458 cache : boolean, default False 459 If True, use a cache of unique, converted dates to apply the datetime 460 conversion. May produce significant speed-up when parsing duplicate 461 date strings, especially ones with timezone offsets. 462 463 .. versionadded:: 0.23.0 464 465 Returns 466 ------- 467 ret : datetime if parsing succeeded. 468 Return type depends on input: 469 470 - list-like: DatetimeIndex 471 - Series: Series of datetime64 dtype 472 - scalar: Timestamp 473 474 In case when it is not possible to return designated types (e.g. when 475 any element of input is before Timestamp.min or after Timestamp.max) 476 return will have datetime.datetime type (or corresponding 477 array/Series). 478 479 Examples 480 -------- 481 Assembling a datetime from multiple columns of a DataFrame. The keys can be 482 common abbreviations like ['year', 'month', 'day', 'minute', 'second', 483 'ms', 'us', 'ns']) or plurals of the same 484 485 >>> df = pd.DataFrame({'year': [2015, 2016], 486 'month': [2, 3], 487 'day': [4, 5]}) 488 >>> pd.to_datetime(df) 489 0 2015-02-04 490 1 2016-03-05 491 dtype: datetime64[ns] 492 493 If a date does not meet the `timestamp limitations 494 <http://pandas.pydata.org/pandas-docs/stable/timeseries.html 495 #timeseries-timestamp-limits>`_, passing errors='ignore' 496 will return the original input instead of raising any exception. 497 498 Passing errors='coerce' will force an out-of-bounds date to NaT, 499 in addition to forcing non-dates (or non-parseable dates) to NaT. 
500 501 >>> pd.to_datetime('13000101', format='%Y%m%d', errors='ignore') 502 datetime.datetime(1300, 1, 1, 0, 0) 503 >>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce') 504 NaT 505 506 Passing infer_datetime_format=True can often-times speedup a parsing 507 if its not an ISO8601 format exactly, but in a regular format. 508 509 >>> s = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000']*1000) 510 511 >>> s.head() 512 0 3/11/2000 513 1 3/12/2000 514 2 3/13/2000 515 3 3/11/2000 516 4 3/12/2000 517 dtype: object 518 519 >>> %timeit pd.to_datetime(s,infer_datetime_format=True) 520 100 loops, best of 3: 10.4 ms per loop 521 522 >>> %timeit pd.to_datetime(s,infer_datetime_format=False) 523 1 loop, best of 3: 471 ms per loop 524 525 Using a unix epoch time 526 527 >>> pd.to_datetime(1490195805, unit='s') 528 Timestamp('2017-03-22 15:16:45') 529 >>> pd.to_datetime(1490195805433502912, unit='ns') 530 Timestamp('2017-03-22 15:16:45.433502912') 531 532 .. warning:: For float arg, precision rounding might happen. To prevent 533 unexpected behavior use a fixed-width exact type. 534 535 Using a non-unix epoch origin 536 537 >>> pd.to_datetime([1, 2, 3], unit='D', 538 origin=pd.Timestamp('1960-01-01')) 539 0 1960-01-02 540 1 1960-01-03 541 2 1960-01-04 542 543 See also 544 -------- 545 pandas.DataFrame.astype : Cast argument to a specified dtype. 546 pandas.to_timedelta : Convert argument to timedelta. 547 """ 548 if arg is None: 549 return None 550 551 if origin != 'unix': 552 arg = _adjust_to_origin(arg, origin, unit) 553 554 tz = 'utc' if utc else None 555 convert_listlike = partial(_convert_listlike_datetimes, tz=tz, unit=unit, 556 dayfirst=dayfirst, yearfirst=yearfirst, 557 errors=errors, exact=exact, 558 infer_datetime_format=infer_datetime_format) 559 560 if isinstance(arg, Timestamp): 561 result = arg 562 elif isinstance(arg, ABCSeries): 563 cache_array = _maybe_cache(arg, format, cache, convert_listlike) 564 if not cache_array.empty: 565 result = arg.map(cache_array) 566 else: 567 from pandas import Series 568 values = convert_listlike(arg._values, True, format) 569 result = Series(values, index=arg.index, name=arg.name) 570 elif isinstance(arg, (ABCDataFrame, MutableMapping)): 571 result = _assemble_from_unit_mappings(arg, errors=errors) 572 elif isinstance(arg, ABCIndexClass): 573 cache_array = _maybe_cache(arg, format, cache, convert_listlike) 574 if not cache_array.empty: 575 result = _convert_and_box_cache(arg, cache_array, box, errors, 576 name=arg.name) 577 else: 578 convert_listlike = partial(convert_listlike, name=arg.name) 579 result = convert_listlike(arg, box, format) 580 elif is_list_like(arg): 581 cache_array = _maybe_cache(arg, format, cache, convert_listlike) 582 if not cache_array.empty: 583 result = _convert_and_box_cache(arg, cache_array, box, errors) 584 else: 585 result = convert_listlike(arg, box, format) 586 else: 587 result = convert_listlike(np.array([arg]), box, format)[0] 588 589 return result 590 591 592 # mappings for assembling units 593 _unit_map = {'year': 'year', 594 'years': 'year', 595 'month': 'month', 596 'months': 'month', 597 'day': 'day', 598 'days': 'day', 599 'hour': 'h', 600 'hours': 'h', 601 'minute': 'm', 602 'minutes': 'm', 603 'second': 's', 604 'seconds': 's', 605 'ms': 'ms', 606 'millisecond': 'ms', 607 'milliseconds': 'ms', 608 'us': 'us', 609 'microsecond': 'us', 610 'microseconds': 'us', 611 'ns': 'ns', 612 'nanosecond': 'ns', 613 'nanoseconds': 'ns' 614 } 615 616 617 def _assemble_from_unit_mappings(arg, errors): 618 """ 619 assemble 
the unit specified fields from the arg (DataFrame) 620 Return a Series for actual parsing 621 622 Parameters 623 ---------- 624 arg : DataFrame 625 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 626 627 - If 'raise', then invalid parsing will raise an exception 628 - If 'coerce', then invalid parsing will be set as NaT 629 - If 'ignore', then invalid parsing will return the input 630 631 Returns 632 ------- 633 Series 634 """ 635 from pandas import to_timedelta, to_numeric, DataFrame 636 arg = DataFrame(arg) 637 if not arg.columns.is_unique: 638 raise ValueError("cannot assemble with duplicate keys") 639 640 # replace passed unit with _unit_map 641 def f(value): 642 if value in _unit_map: 643 return _unit_map[value] 644 645 # m is case significant 646 if value.lower() in _unit_map: 647 return _unit_map[value.lower()] 648 649 return value 650 651 unit = {k: f(k) for k in arg.keys()} 652 unit_rev = {v: k for k, v in unit.items()} 653 654 # we require at least Ymd 655 required = ['year', 'month', 'day'] 656 req = sorted(list(set(required) - set(unit_rev.keys()))) 657 if len(req): 658 raise ValueError("to assemble mappings requires at least that " 659 "[year, month, day] be specified: [{required}] " 660 "is missing".format(required=','.join(req))) 661 662 # keys we don't recognize 663 excess = sorted(list(set(unit_rev.keys()) - set(_unit_map.values()))) 664 if len(excess): 665 raise ValueError("extra keys have been passed " 666 "to the datetime assemblage: " 667 "[{excess}]".format(excess=','.join(excess))) 668 669 def coerce(values): 670 # we allow coercion to if errors allows 671 values = to_numeric(values, errors=errors) 672 673 # prevent overflow in case of int8 or int16 674 if is_integer_dtype(values): 675 values = values.astype('int64', copy=False) 676 return values 677 678 values = (coerce(arg[unit_rev['year']]) * 10000 + 679 coerce(arg[unit_rev['month']]) * 100 + 680 coerce(arg[unit_rev['day']])) 681 try: 682 values = to_datetime(values, format='%Y%m%d', errors=errors) 683 except (TypeError, ValueError) as e: 684 raise ValueError("cannot assemble the " 685 "datetimes: {error}".format(error=e)) 686 687 for u in ['h', 'm', 's', 'ms', 'us', 'ns']: 688 value = unit_rev.get(u) 689 if value is not None and value in arg: 690 try: 691 values += to_timedelta(coerce(arg[value]), 692 unit=u, 693 errors=errors) 694 except (TypeError, ValueError) as e: 695 raise ValueError("cannot assemble the datetimes [{value}]: " 696 "{error}".format(value=value, error=e)) 697 698 return values 699 700 701 def _attempt_YYYYMMDD(arg, errors): 702 """ try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like, 703 arg is a passed in as an object dtype, but could really be ints/strings 704 with nan-like/or floats (e.g. with nan) 705 706 Parameters 707 ---------- 708 arg : passed value 709 errors : 'raise','ignore','coerce' 710 """ 711 712 def calc(carg): 713 # calculate the actual result 714 carg = carg.astype(object) 715 parsed = parsing.try_parse_year_month_day(carg / 10000, 716 carg / 100 % 100, 717 carg % 100) 718 return tslib.array_to_datetime(parsed, errors=errors)[0] 719 720 def calc_with_mask(carg, mask): 721 result = np.empty(carg.shape, dtype='M8[ns]') 722 iresult = result.view('i8') 723 iresult[~mask] = tslibs.iNaT 724 result[mask] = calc(carg[mask].astype(np.float64).astype(np.int64)). 
\ 725 astype('M8[ns]') 726 return result 727 728 # try intlike / strings that are ints 729 try: 730 return calc(arg.astype(np.int64)) 731 except: 732 pass 733 734 # a float with actual np.nan 735 try: 736 carg = arg.astype(np.float64) 737 return calc_with_mask(carg, notna(carg)) 738 except: 739 pass 740 741 # string with NaN-like 742 try: 743 mask = ~algorithms.isin(arg, list(tslib.nat_strings)) 744 return calc_with_mask(arg, mask) 745 except: 746 pass 747 748 return None 749 750 751 # Fixed time formats for time parsing 752 _time_formats = ["%H:%M", "%H%M", "%I:%M%p", "%I%M%p", 753 "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", "%I%M%S%p"] 754 755 756 def _guess_time_format_for_array(arr): 757 # Try to guess the format based on the first non-NaN element 758 non_nan_elements = notna(arr).nonzero()[0] 759 if len(non_nan_elements): 760 element = arr[non_nan_elements[0]] 761 for time_format in _time_formats: 762 try: 763 datetime.strptime(element, time_format) 764 return time_format 765 except ValueError: 766 pass 767 768 return None 769 770 771 def to_time(arg, format=None, infer_time_format=False, errors='raise'): 772 """ 773 Parse time strings to time objects using fixed strptime formats ("%H:%M", 774 "%H%M", "%I:%M%p", "%I%M%p", "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", 775 "%I%M%S%p") 776 777 Use infer_time_format if all the strings are in the same format to speed 778 up conversion. 779 780 Parameters 781 ---------- 782 arg : string in time format, datetime.time, list, tuple, 1-d array, Series 783 format : str, default None 784 Format used to convert arg into a time object. If None, fixed formats 785 are used. 786 infer_time_format: bool, default False 787 Infer the time format based on the first non-NaN element. If all 788 strings are in the same format, this will speed up conversion. 
789 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 790 - If 'raise', then invalid parsing will raise an exception 791 - If 'coerce', then invalid parsing will be set as None 792 - If 'ignore', then invalid parsing will return the input 793 794 Returns 795 ------- 796 datetime.time 797 """ 798 from pandas.core.series import Series 799 800 def _convert_listlike(arg, format): 801 802 if isinstance(arg, (list, tuple)): 803 arg = np.array(arg, dtype='O') 804 805 elif getattr(arg, 'ndim', 1) > 1: 806 raise TypeError('arg must be a string, datetime, list, tuple, ' 807 '1-d array, or Series') 808 809 arg = ensure_object(arg) 810 811 if infer_time_format and format is None: 812 format = _guess_time_format_for_array(arg) 813 814 times = [] 815 if format is not None: 816 for element in arg: 817 try: 818 times.append(datetime.strptime(element, format).time()) 819 except (ValueError, TypeError): 820 if errors == 'raise': 821 msg = ("Cannot convert {element} to a time with given " 822 "format {format}").format(element=element, 823 format=format) 824 raise ValueError(msg) 825 elif errors == 'ignore': 826 return arg 827 else: 828 times.append(None) 829 else: 830 formats = _time_formats[:] 831 format_found = False 832 for element in arg: 833 time_object = None 834 for time_format in formats: 835 try: 836 time_object = datetime.strptime(element, 837 time_format).time() 838 if not format_found: 839 # Put the found format in front 840 fmt = formats.pop(formats.index(time_format)) 841 formats.insert(0, fmt) 842 format_found = True 843 break 844 except (ValueError, TypeError): 845 continue 846 847 if time_object is not None: 848 times.append(time_object) 849 elif errors == 'raise': 850 raise ValueError("Cannot convert arg {arg} to " 851 "a time".format(arg=arg)) 852 elif errors == 'ignore': 853 return arg 854 else: 855 times.append(None) 856 857 return times 858 859 if arg is None: 860 return arg 861 elif isinstance(arg, time): 862 return arg 863 elif isinstance(arg, Series): 864 values = _convert_listlike(arg._values, format) 865 return Series(values, index=arg.index, name=arg.name) 866 elif isinstance(arg, ABCIndexClass): 867 return _convert_listlike(arg, format) 868 elif is_list_like(arg): 869 return _convert_listlike(arg, format) 870 871 return _convert_listlike(np.array([arg]), format)[0] 872 [end of pandas/core/tools/datetimes.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
0473aab048e43c5bbecd58b1a42b61460ca03518
Change in behavior of DatetimeIndex + Offset ```python import pandas as pd offset = pd.tseries.offsets.Week(weekday=6) idx = pd.DatetimeIndex(['1999-12-26', '2000-05-14']) assert (idx[0] + offset) == (idx + offset)[0] ``` bisected to https://github.com/pandas-dev/pandas/pull/18952 Before that the assertion was true, after it's false ``` # before >>> idx + offset DatetimeIndex(['2000-01-02', '2000-05-21'], dtype='datetime64[ns]', freq=None) # after >>> idx + offset DatetimeIndex(['1999-12-26', '2000-05-14'], dtype='datetime64[ns]', freq=None) ``` cc @reidy-p is this an expected change due to #18952, or a new bug? If it's intentional, we should document that.
Hmm if I had to guess, https://github.com/pandas-dev/pandas/pull/18952/files#diff-63e1050a658278e30a3e7a744c4d6435R1324 is the line directly to blame, which seems pretty intentional :) @TomAugspurger thanks for spotting this. It seems like the problem is a bit deeper than the line you're pointing to - I'll take a closer look and see if I can fix it. Fails for me on master. ```python In [1]: import pandas as pd ...: ...: offset = pd.tseries.offsets.Week(weekday=6) ...: idx = pd.DatetimeIndex(['1999-12-26', '2000-05-14']) ...: ...: assert (idx[0] + offset) == (idx + offset)[0] ...: ...: --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-1-8c110b991a5b> in <module>() 4 idx = pd.DatetimeIndex(['1999-12-26', '2000-05-14']) 5 ----> 6 assert (idx[0] + offset) == (idx + offset)[0] AssertionError: ``` are your C-extensions up to date? @TomAugspurger Sorry that was a mistake on my part. Forgot that I had made some changes that I didn't stash before testing.
2018-09-13T17:17:07Z
<patch> diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py --- a/pandas/tseries/offsets.py +++ b/pandas/tseries/offsets.py @@ -1313,7 +1313,7 @@ def _end_apply_index(self, dtindex): base_period = dtindex.to_period(base) if self.n > 0: # when adding, dates on end roll to next - normed = dtindex - off + normed = dtindex - off + Timedelta(1, 'D') - Timedelta(1, 'ns') roll = np.where(base_period.to_timestamp(how='end') == normed, self.n, self.n - 1) else: </patch>
[]
[]
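A minimal sketch of the invariant the patch in the pandas row above restores — vectorised and element-wise addition of an anchored `Week` offset agreeing — using the objects from the problem statement plus the public pandas API (`DatetimeIndex.equals` and per-element `Timestamp + offset` are assumptions beyond the original report; the regression test added upstream is not reproduced here):

```python
import pandas as pd

# From the problem statement: an anchored weekly offset and an index whose
# dates already fall on that weekday (both inputs are Sundays).
offset = pd.tseries.offsets.Week(weekday=6)
idx = pd.DatetimeIndex(['1999-12-26', '2000-05-14'])

vectorised = idx + offset                                   # path touched by the patch
elementwise = pd.DatetimeIndex([ts + offset for ts in idx])

# Both should roll each Sunday forward to the next Sunday.
assert vectorised.equals(elementwise)
```

Per the before/after output in the problem statement, both sides should give `['2000-01-02', '2000-05-21']`; the regressed vectorised path returned the input dates unchanged.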
Qiskit__qiskit-10126
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Quantum shannon decomposition failing for some inputs ### Environment - **Qiskit Terra version**: 0.42.1 - **Python version**: 3.10.8 ### What is happening? qs_decomposition in [qsd.py](https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/quantum_info/synthesis/qsd.py) throws an error for some examples, seemingly due to a bug in the code ### How can we reproduce the issue? ``` from qiskit.quantum_info.synthesis.qsd import qs_decomposition qs_decomposition(np.array([[0,1],[1,0]])) ``` Output: ``` Traceback (most recent call last): Cell In[14], line 3 qs_decomposition(np.array([[0,1],[1,0]])) File /opt/conda/lib/python3.10/site-packages/qiskit/quantum_info/synthesis/qsd.py:122 in qs_decomposition return _apply_a2(circ) File /opt/conda/lib/python3.10/site-packages/qiskit/quantum_info/synthesis/qsd.py:253 in _apply_a2 qc3 = two_qubit_decompose.two_qubit_cnot_decompose(mat2) UnboundLocalError: local variable 'mat2' referenced before assignment Use %tb to get the full traceback. ``` Alternatively, ``` qs_decomposition(qiskit.quantum_info.random_unitary(4).to_matrix()) ``` Giving the same error ### What should happen? We should still be able to produce a decomposed circuit from these examples ### Any suggestions? This seems to occur when line 233 of qsd.py in the function ‘_apply_a2()’ does not transpile to include any of the ‘qsd2q’ gate type for an instance. To fix this add something such as the following before the loop on line 242: ``` if not ind2q: return ccirc ``` Additionally should add a test for this kind of case to the [test](https://github.com/Qiskit/qiskit-terra/blob/main/test/python/quantum_info/test_synthesis.py) </issue> <code> [start of README.md] 1 # Qiskit Terra 2 [![License](https://img.shields.io/github/license/Qiskit/qiskit-terra.svg?style=popout-square)](https://opensource.org/licenses/Apache-2.0)<!--- long-description-skip-begin -->[![Release](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases)[![Downloads](https://img.shields.io/pypi/dm/qiskit-terra.svg?style=popout-square)](https://pypi.org/project/qiskit-terra/)[![Coverage Status](https://coveralls.io/repos/github/Qiskit/qiskit-terra/badge.svg?branch=main)](https://coveralls.io/github/Qiskit/qiskit-terra?branch=main)[![Minimum rustc 1.61.0](https://img.shields.io/badge/rustc-1.61.0+-blue.svg)](https://rust-lang.github.io/rfcs/2495-min-rust-version.html)<!--- long-description-skip-end --> 3 4 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms. 5 6 This library is the core component of Qiskit, **Terra**, which contains the building blocks for creating 7 and working with quantum circuits, programs, and algorithms. It also contains a compiler that supports 8 different quantum computers and a common interface for running programs on different quantum computer architectures. 9 10 For more details on how to use Qiskit you can refer to the documentation located here: 11 12 https://qiskit.org/documentation/ 13 14 15 ## Installation 16 17 We encourage installing Qiskit via ``pip``. The following command installs the core Qiskit components, including Terra. 18 19 ```bash 20 pip install qiskit 21 ``` 22 23 Pip will handle all dependencies automatically and you will always install the latest (and well-tested) version. 
24 25 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-install-from-source-label). 26 27 ## Creating Your First Quantum Program in Qiskit Terra 28 29 Now that Qiskit is installed, it's time to begin working with Qiskit. To do this 30 we create a `QuantumCircuit` object to define a basic quantum program. 31 32 ```python 33 from qiskit import QuantumCircuit 34 qc = QuantumCircuit(2, 2) 35 qc.h(0) 36 qc.cx(0, 1) 37 qc.measure([0,1], [0,1]) 38 ``` 39 40 This simple example makes an entangled state, also called a [Bell state](https://qiskit.org/textbook/ch-gates/multiple-qubits-entangled-states.html#3.2-Entangled-States-). 41 42 Once you've made your first quantum circuit, you can then simulate it. 43 To do this, first we need to compile your circuit for the target backend we're going to run 44 on. In this case we are leveraging the built-in `BasicAer` simulator. However, this 45 simulator is primarily for testing and is limited in performance and functionality (as the name 46 implies). You should consider more sophisticated simulators, such as [`qiskit-aer`](https://github.com/Qiskit/qiskit-aer/), 47 for any real simulation work. 48 49 ```python 50 from qiskit import transpile 51 from qiskit.providers.basicaer import QasmSimulatorPy 52 backend_sim = QasmSimulatorPy() 53 transpiled_qc = transpile(qc, backend_sim) 54 ``` 55 56 After compiling the circuit we can then run this on the ``backend`` object with: 57 58 ```python 59 result = backend_sim.run(transpiled_qc).result() 60 print(result.get_counts(qc)) 61 ``` 62 63 The output from this execution will look similar to this: 64 65 ```python 66 {'00': 513, '11': 511} 67 ``` 68 69 For further examples of using Qiskit you can look at the example scripts in **examples/python**. You can start with 70 [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in the levels. Also 71 you can refer to the tutorials in the documentation here: 72 73 https://qiskit.org/documentation/tutorials.html 74 75 76 ### Executing your code on a real quantum chip 77 78 You can also use Qiskit to execute your code on a **real quantum processor**. 79 Qiskit provides an abstraction layer that lets users run quantum circuits on hardware from any 80 vendor that provides an interface to their systems through Qiskit. Using these ``providers`` you can run any Qiskit code against 81 real quantum computers. Some examples of published provider packages for running on real hardware are: 82 83 * https://github.com/Qiskit/qiskit-ibmq-provider 84 * https://github.com/Qiskit-Partners/qiskit-ionq 85 * https://github.com/Qiskit-Partners/qiskit-aqt-provider 86 * https://github.com/qiskit-community/qiskit-braket-provider 87 * https://github.com/qiskit-community/qiskit-quantinuum-provider 88 * https://github.com/rigetti/qiskit-rigetti 89 90 <!-- This is not an exhasutive list, and if you maintain a provider package please feel free to open a PR to add new providers --> 91 92 You can refer to the documentation of these packages for further instructions 93 on how to get access and use these systems. 94 95 ## Contribution Guidelines 96 97 If you'd like to contribute to Qiskit Terra, please take a look at our 98 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. 
99 100 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please 101 [join the Qiskit Slack community](https://qisk.it/join-slack) 102 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions. 103 For questions that are more suited for a forum we use the `qiskit` tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit). 104 105 ## Next Steps 106 107 Now you're set up and ready to check out some of the other examples from our 108 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository. 109 110 ## Authors and Citation 111 112 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 113 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](CITATION.bib). 114 115 ## Changelog and Release Notes 116 117 The changelog for a particular release is dynamically generated and gets 118 written to the release page on Github for each release. For example, you can 119 find the page for the `0.9.0` release here: 120 121 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0 122 123 The changelog for the current release can be found in the releases tab: 124 [![Releases](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases) 125 The changelog provides a quick overview of notable changes for a given 126 release. 127 128 Additionally, as part of each release detailed release notes are written to 129 document in detail what has changed as part of a release. This includes any 130 documentation on potential breaking changes on upgrade and new features. 131 For example, you can find the release notes for the `0.9.0` release in the 132 Qiskit documentation here: 133 134 https://qiskit.org/documentation/release_notes.html#terra-0-9 135 136 ## License 137 138 [Apache License 2.0](LICENSE.txt) 139 [end of README.md] [start of qiskit/qasm2/__init__.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2023. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 12 13 r""" 14 ================================ 15 OpenQASM 2 (:mod:`qiskit.qasm2`) 16 ================================ 17 18 Qiskit has support for interoperation with OpenQASM 2.0 programs, both parsing into Qiskit formats 19 and exporting back to OpenQASM 2. The parsing components live in this module, while currently the 20 export capabilities are limited to being the :meth:`.QuantumCircuit.qasm` method. 21 22 .. note:: 23 24 OpenQASM 2 is a simple language, and not suitable for general serialisation of Qiskit objects. 25 See :ref:`some discussion of alternatives below <qasm2-alternatives>`, if that is what you are 26 looking for. 27 28 Parsing API 29 =========== 30 31 This module contains two public functions, both of which create a :class:`.QuantumCircuit` from an 32 OpenQASM 2 program. :func:`load` takes a filename, while :func:`loads` takes the program itself as a 33 string. Their internals are very similar, so both offer almost the same API. 
34 35 .. autofunction:: load 36 37 .. autofunction:: loads 38 39 Both of these loading functions also take an argument ``include_path``, which is an iterable of 40 directory names to use when searching for files in ``include`` statements. The directories are 41 tried from index 0 onwards, and the first match is used. The import ``qelib1.inc`` is treated 42 specially; it is always found before looking in the include path, and contains exactly the content 43 of the `paper describing the OpenQASM 2 language <https://arxiv.org/abs/1707.03429>`__. The gates 44 in this include file are mapped to circuit-library gate objects defined by Qiskit. 45 46 .. _qasm2-custom-instructions: 47 48 Specifying custom instructions 49 ------------------------------ 50 51 You can extend the quantum components of the OpenQASM 2 language by passing an iterable of 52 information on custom instructions as the argument ``custom_instructions``. In files that have 53 compatible definitions for these instructions, the given ``constructor`` will be used in place of 54 whatever other handling :mod:`qiskit.qasm2` would have done. These instructions may optionally be 55 marked as ``builtin``, which causes them to not require an ``opaque`` or ``gate`` declaration, but 56 they will silently ignore a compatible declaration. Either way, it is an error to provide a custom 57 instruction that has a different number of parameters or qubits as a defined instruction in a parsed 58 program. Each element of the argument iterable should be a particular data class: 59 60 .. autoclass:: CustomInstruction 61 62 .. _qasm2-custom-classical: 63 64 Specifying custom classical functions 65 ------------------------------------- 66 67 Similar to the quantum extensions above, you can also extend the processing done to classical 68 expressions (arguments to gates) by passing an iterable to the argument ``custom_classical`` to either 69 loader. This needs the ``name`` (a valid OpenQASM 2 identifier), the number ``num_params`` of 70 parameters it takes, and a Python callable that implements the function. The Python callable must 71 be able to accept ``num_params`` positional floating-point arguments, and must return a float or 72 integer (which will be converted to a float). Builtin functions cannot be overridden. 73 74 .. autoclass:: CustomClassical 75 76 .. _qasm2-strict-mode: 77 78 Strict mode 79 ----------- 80 81 Both of the loader functions have an optional "strict" mode. By default, this parser is a little 82 bit more relaxed than the official specification: it allows trailing commas in parameter lists; 83 unnecessary (empty-statement) semicolons; the ``OPENQASM 2.0;`` version statement to be omitted; and 84 a couple of other quality-of-life improvements without emitting any errors. You can use the 85 letter-of-the-spec mode with ``strict=True``. 86 87 Errors 88 ====== 89 90 This module defines a generic error type that derives from :exc:`.QiskitError` that can be used as a 91 catch when you care about failures emitted by the interoperation layer specifically. 92 93 .. autoexception:: QASM2Error 94 95 In cases where the lexer or parser fails due to an invalid OpenQASM 2 file, the conversion functions 96 will raise a more specific error with a message explaining what the failure is, and where in the 97 file it occurred. 98 99 .. autoexception:: QASM2ParseError 100 101 .. _qasm2-examples: 102 103 Examples 104 ======== 105 106 Use :func:`loads` to import an OpenQASM 2 program in a string into a :class:`.QuantumCircuit`: 107 108 .. 
code-block:: python 109 110 import qiskit.qasm2 111 program = ''' 112 OPENQASM 2.0; 113 include "qelib1.inc"; 114 qreg q[2]; 115 creg c[2]; 116 117 h q[0]; 118 cx q[0], q[1]; 119 120 measure q -> c; 121 ''' 122 circuit = qiskit.qasm2.loads(program) 123 circuit.draw() 124 125 .. code-block:: text 126 127 ┌───┐ ┌─┐ 128 q_0: ┤ H ├──■──┤M├─── 129 └───┘┌─┴─┐└╥┘┌─┐ 130 q_1: ─────┤ X ├─╫─┤M├ 131 └───┘ ║ └╥┘ 132 c: 2/═══════════╩══╩═ 133 0 1 134 135 You can achieve the same thing if the program is stored in a file by using :func:`load` instead, 136 passing the filename as an argument: 137 138 .. code-block:: python 139 140 import qiskit.qasm2 141 circuit = qiskit.qasm2.load("myfile.qasm") 142 143 OpenQASM 2 files can include other OpenQASM 2 files via the ``include`` statement. You can 144 influence the search path used for finding these files with the ``include_path`` argument to both 145 :func:`load` and :func:`loads`. By default, only the current working directory is searched. 146 147 .. code-block:: python 148 149 import qiskit.qasm2 150 program = ''' 151 include "other.qasm"; 152 // ... and so on 153 ''' 154 circuit = qiskit.qasm2.loads(program, include_path=("/path/to/a", "/path/to/b", ".")) 155 156 For :func:`load` only, there is an extra argument ``include_input_directory``, which can be used to 157 either ``'append'``, ``'prepend'`` or ignore (``None``) the directory of the loaded file in the 158 include path. By default, this directory is appended to the search path, so it is tried last, but 159 you can change this. 160 161 .. code-block:: python 162 163 import qiskit.qasm2 164 filenames = ["./subdirectory/a.qasm", "/path/to/b.qasm", "~/my.qasm"] 165 # Search the directory of each file before other parts of the include path. 166 circuits = [ 167 qiskit.qasm2.load(filename, include_input_directory="prepend") for filename in filenames 168 ] 169 # Override the include path, and don't search the directory of each file unless it's in the 170 # absolute path list. 171 circuits = [ 172 qiskit.qasm2.load( 173 filename, 174 include_path=("/usr/include/qasm", "~/qasm/include"), 175 include_input_directory=None, 176 ) 177 for filename in filenames 178 ] 179 180 Sometimes you may want to influence the :class:`.Gate` objects that the importer emits for given 181 named instructions. Gates that are defined by the statement ``include "qelib1.inc";`` will 182 automatically be associated with a suitable Qiskit circuit-library gate, but you can extend this: 183 184 .. code-block:: python 185 186 from qiskit.circuit import Gate 187 from qiskit.qasm2 import loads, CustomInstruction 188 189 class MyGate(Gate): 190 def __init__(self, theta): 191 super().__init__("my", 2, [theta]) 192 193 class Builtin(Gate): 194 def __init__(self): 195 super().__init__("builtin", 1, []) 196 197 program = ''' 198 opaque my(theta) q1, q2; 199 qreg q[2]; 200 my(0.5) q[0], q[1]; 201 builtin q[0]; 202 ''' 203 customs = [ 204 CustomInstruction(name="my", num_params=1, num_qubits=2, constructor=MyGate), 205 # Setting 'builtin=True' means the instruction doesn't require a declaration to be usable. 206 CustomInstruction("builtin", 0, 1, Builtin, builtin=True), 207 ] 208 circuit = loads(program, custom_instructions=customs) 209 210 211 Similarly, you can add new classical functions used during the description of arguments to gates, 212 both in the main body of the program (which come out constant-folded) and within the bodies of 213 defined gates (which are computed on demand). 
Here we provide a Python version of ``atan2(y, x)``, 214 which mathematically is :math:`\atan(y/x)` but correctly handling angle quadrants and infinities, 215 and a custom ``add_one`` function: 216 217 .. code-block:: python 218 219 import math 220 from qiskit.qasm2 import loads, CustomClassical 221 222 program = ''' 223 include "qelib1.inc"; 224 qreg q[2]; 225 rx(atan2(pi, 3 + add_one(0.2))) q[0]; 226 cx q[0], q[1]; 227 ''' 228 229 def add_one(x): 230 return x + 1 231 232 customs = [ 233 # `atan2` takes two parameters, and `math.atan2` implements it. 234 CustomClassical("atan2", 2, math.atan2), 235 # Our `add_one` takes only one parameter. 236 CustomClassical("add_one", 1, add_one), 237 ] 238 circuit = loads(program, custom_classical=customs) 239 240 241 .. _qasm2-legacy-compatibility: 242 243 Legacy Compatibility 244 ==================== 245 246 :meth:`.QuantumCircuit.from_qasm_str` and :meth:`~.QuantumCircuit.from_qasm_file` used to make a few 247 additions on top of the raw specification. Qiskit originally tried to use OpenQASM 2 as a sort of 248 serialisation format, and expanded its behaviour as Qiskit expanded. The new parser under all its 249 defaults implements the specification more strictly. 250 251 The complete legacy code-paths are 252 253 .. code-block:: python 254 255 from qiskit.converters import ast_to_dag, dag_to_circuit 256 from qiskit.qasm import Qasm 257 258 def from_qasm_file(path: str): 259 dag_to_circuit(ast_to_dag(Qasm(filename=path).parse())) 260 261 def from_qasm_str(qasm_str: str): 262 dag_to_circuit(ast_to_dag(Qasm(data=qasm_str).parse())) 263 264 In particular, in the legacy importers: 265 266 * the `include_path` is effectively: 267 1. ``<qiskit>/qasm/libs``, where ``<qiskit>`` is the root of the installed ``qiskit`` package; 268 2. the current working directory. 269 270 * there are additional instructions defined in ``qelib1.inc``: 271 ``csx a, b`` 272 Controlled :math:`\sqrt X` gate, corresponding to :class:`.CSXGate`. 273 274 ``cu(theta, phi, lambda, gamma) c, t`` 275 The four-parameter version of a controlled-:math:`U`, corresponding to :class:`.CUGate`. 276 277 ``rxx(theta) a, b`` 278 Two-qubit rotation arond the :math:`XX` axis, corresponding to :class:`.RXXGate`. 279 280 ``rzz(theta) a, b`` 281 Two-qubit rotation arond the :math:`ZZ` axis, corresponding to :class:`.RZZGate`. 282 283 ``rccx a, b, c`` 284 The double-controlled :math:`X` gate, but with relative phase differences over the standard 285 Toffoli gate. This *should* correspond to the Qiskit gate :class:`~.RCCXGate`, but the legacy 286 converter wouldn't actually output this type. 287 288 ``rc3x a, b, c, d`` 289 The triple-controlled :math:`X` gate, but with relative phase differences over the standard 290 definition. Corresponds to :class:`.RC3XGate`. 291 292 ``c3x a, b, c, d`` 293 The triple-controlled :math:`X` gate, corresponding to :class:`.C3XGate`. 294 295 ``c3sqrtx a, b, c, d`` 296 The triple-controlled :math:`\sqrt X` gate, corresponding to :class:`.C3SXGate`. 297 298 ``c4x a, b, c, d, e`` 299 The quadruple-controlled :math:`X` gate., corresponding to :class:`.C4XGate`. 300 301 * if *any* ``opaque`` or ``gate`` definition was given for the name ``delay``, they attempt to 302 output a :class:`~qiskit.circuit.Delay` instruction at each call. To function, this expects a 303 definition compatible with ``opaque delay(t) q;``, where the time ``t`` is given in units of 304 ``dt``. 
The importer will raise errors on construction if there was not exactly one parameter 305 and one qubit, or if the parameter is not integer-valued. 306 307 * the additional scientific-calculator functions ``asin``, ``acos`` and ``atan`` are available. 308 309 * the parsed grammar is effectively the same as :ref:`the strict mode of the new importers 310 <qasm2-strict-mode>`. 311 312 You can emulate this behaviour in :func:`load` and :func:`loads` by setting `include_path` 313 appropriately (try inspecting the variable ``qiskit.__file__`` to find the installed location), and 314 by passing a list of :class:`CustomInstruction` instances for each of the custom gates you care 315 about. To make things easier we make three tuples available, which each contain one component of 316 a configuration that is equivalent to Qiskit's legacy converter behaviour. 317 318 .. py:data:: LEGACY_CUSTOM_INSTRUCTIONS 319 320 A tuple containing the extra `custom_instructions` that Qiskit's legacy built-in converters used 321 if ``qelib1.inc`` is included, and there is any definition of a ``delay`` instruction. The gates 322 in the paper version of ``qelib1.inc`` and ``delay`` all require a compatible declaration 323 statement to be present within the OpenQASM 2 program, but Qiskit's legacy additions are all 324 marked as builtins since they are not actually present in any include file this parser sees. 325 326 .. py:data:: LEGACY_CUSTOM_CLASSICAL 327 328 A tuple containing the extra `custom_classical` functions that Qiskit's legacy built-in 329 converters use beyond those specified by the paper. This is the three basic inverse 330 trigonometric functions: :math:`\asin`, :math:`\acos` and :math:`\atan`. 331 332 .. py:data:: LEGACY_INCLUDE_PATH 333 334 A tuple containing the exact `include_path` used by the legacy Qiskit converter. 335 336 On *all* the gates defined in Qiskit's legacy version of ``qelib1.inc`` and the ``delay`` 337 instruction, it does not matter how the gates are actually defined and used, the legacy importer 338 will always attempt to output its custom objects for them. This can result in errors during the 339 circuit construction, even after a successful parse. There is no way to emulate this buggy 340 behaviour with :mod:`qiskit.qasm2`; only an ``include "qelib1.inc";`` statement or the 341 `custom_instructions` argument can cause built-in Qiskit instructions to be used, and the signatures 342 of these match each other. 343 344 .. note:: 345 346 Circuits imported with :func:`load` and :func:`loads` with the above legacy-compability settings 347 should compare equal to those created by Qiskit's legacy importer, provided no non-``qelib1.inc`` 348 user gates are defined. User-defined gates are handled slightly differently in the new importer, 349 and while they should have equivalent :attr:`~.Instruction.definition` fields on inspection, this 350 module uses a custom class to lazily load the definition when it is requested (like most Qiskit 351 objects), rather than eagerly creating it during the parse. Qiskit's comparison rules for gates 352 will see these two objects as unequal, although any pass through :func:`.transpile` for a 353 particular backend should produce the same output circuits. 354 355 356 .. _qasm2-alternatives: 357 358 Alternatives 359 ============ 360 361 The parser components of this module started off as a separate PyPI package: `qiskit-qasm2 362 <https://pypi.org/project/qiskit-qasm2/>`__. This package at version 0.5.3 was vendored into Qiskit 363 Terra 0.24. 
Any subsequent changes between the two packages may not necessarily be kept in sync. 364 365 There is a newer version of the OpenQASM specification, version 3.0, which is described at 366 https://openqasm.com. This includes far more facilities for high-level classical programming. 367 Qiskit has some rudimentary support for OpenQASM 3 already; see :mod:`qiskit.qasm3` for that. 368 369 OpenQASM 2 is not a suitable serialization language for Qiskit's :class:`.QuantumCircuit`. This 370 module is provided for interoperability purposes, not as a general serialization format. If that is 371 what you need, consider using :mod:`qiskit.qpy` instead. 372 """ 373 374 __all__ = [ 375 "load", 376 "loads", 377 "CustomInstruction", 378 "CustomClassical", 379 "LEGACY_CUSTOM_INSTRUCTIONS", 380 "LEGACY_CUSTOM_CLASSICAL", 381 "LEGACY_INCLUDE_PATH", 382 "QASM2Error", 383 "QASM2ParseError", 384 "QASM2ExportError", 385 ] 386 387 import os 388 from pathlib import Path 389 from typing import Iterable, Union, Optional, Literal 390 391 # Pylint can't handle the C-extension introspection of `_qasm2` because there's a re-import through 392 # to `qiskit.qasm2.exceptions`, and pylint ends up trying to import `_qasm2` twice, which PyO3 393 # hates. If that gets fixed, this disable can be removed and `qiskit._qasm2` added to the allowed C 394 # extensions for loadings in the `pyproject.toml`. 395 # pylint: disable=c-extension-no-member 396 from qiskit import _qasm2 397 from qiskit.circuit import QuantumCircuit 398 from . import parse as _parse 399 from .exceptions import QASM2Error, QASM2ParseError, QASM2ExportError 400 from .parse import ( 401 CustomInstruction, 402 CustomClassical, 403 LEGACY_CUSTOM_INSTRUCTIONS, 404 LEGACY_CUSTOM_CLASSICAL, 405 ) 406 407 408 LEGACY_INCLUDE_PATH = ( 409 Path(__file__).parents[1] / "qasm" / "libs", 410 # This is deliberately left as a relative current-directory import until call time, so it 411 # respects changes the user might make from within the interpreter. 412 Path("."), 413 ) 414 415 416 def _normalize_path(path: Union[str, os.PathLike]) -> str: 417 """Normalise a given path into a path-like object that can be passed to Rust. 418 419 Ideally this would be something that we can convert to Rust's `OSString`, but in practice, 420 Python uses `os.fsencode` to produce a `bytes` object, but this doesn't map especially well. 421 """ 422 path = Path(path).expanduser().absolute() 423 if not path.exists(): 424 raise FileNotFoundError(str(path)) 425 return str(path) 426 427 428 def loads( 429 string: str, 430 *, 431 include_path: Iterable[Union[str, os.PathLike]] = (".",), 432 custom_instructions: Iterable[CustomInstruction] = (), 433 custom_classical: Iterable[CustomClassical] = (), 434 strict: bool = False, 435 ) -> QuantumCircuit: 436 """Parse an OpenQASM 2 program from a string into a :class:`.QuantumCircuit`. 437 438 Args: 439 string: The OpenQASM 2 program in a string. 440 include_path: order of directories to search when evluating ``include`` statements. 441 custom_instructions: any custom constructors that should be used for specific gates or 442 opaque instructions during circuit construction. See :ref:`qasm2-custom-instructions` 443 for more. 444 custom_classical: any custom classical functions that should be used during the parsing of 445 classical expressions. See :ref:`qasm2-custom-classical` for more. 446 strict: whether to run in :ref:`strict mode <qasm2-strict-mode>`. 447 448 Returns: 449 A circuit object representing the same OpenQASM 2 program. 
450 """ 451 custom_instructions = list(custom_instructions) 452 return _parse.from_bytecode( 453 _qasm2.bytecode_from_string( 454 string, 455 [_normalize_path(path) for path in include_path], 456 [ 457 _qasm2.CustomInstruction(x.name, x.num_params, x.num_qubits, x.builtin) 458 for x in custom_instructions 459 ], 460 tuple(custom_classical), 461 strict, 462 ), 463 custom_instructions, 464 ) 465 466 467 def load( 468 filename: Union[str, os.PathLike], 469 *, 470 include_path: Iterable[Union[str, os.PathLike]] = (".",), 471 include_input_directory: Optional[Literal["append", "prepend"]] = "append", 472 custom_instructions: Iterable[CustomInstruction] = (), 473 custom_classical: Iterable[CustomClassical] = (), 474 strict: bool = False, 475 ) -> QuantumCircuit: 476 """Parse an OpenQASM 2 program from a file into a :class:`.QuantumCircuit`. The given path 477 should be ASCII or UTF-8 encoded, and contain the OpenQASM 2 program. 478 479 Args: 480 filename: The OpenQASM 2 program in a string. 481 include_path: order of directories to search when evluating ``include`` statements. 482 include_input_directory: Whether to add the directory of the input file to the 483 ``include_path``, and if so, whether to *append* it to search last, or *prepend* it to 484 search first. Pass ``None`` to suppress adding this directory entirely. 485 custom_instructions: any custom constructors that should be used for specific gates or 486 opaque instructions during circuit construction. See :ref:`qasm2-custom-instructions` 487 for more. 488 custom_classical: any custom classical functions that should be used during the parsing of 489 classical expressions. See :ref:`qasm2-custom-classical` for more. 490 strict: whether to run in :ref:`strict mode <qasm2-strict-mode>`. 491 492 Returns: 493 A circuit object representing the same OpenQASM 2 program. 494 """ 495 filename = Path(filename) 496 include_path = [_normalize_path(path) for path in include_path] 497 if include_input_directory == "append": 498 include_path.append(str(filename.parent)) 499 elif include_input_directory == "prepend": 500 include_path.insert(0, str(filename.parent)) 501 elif include_input_directory is not None: 502 raise ValueError( 503 f"unknown value for include_input_directory: '{include_input_directory}'." 504 " Valid values are '\"append\"', '\"prepend\"' and 'None'." 505 ) 506 custom_instructions = tuple(custom_instructions) 507 return _parse.from_bytecode( 508 _qasm2.bytecode_from_file( 509 _normalize_path(filename), 510 include_path, 511 [ 512 _qasm2.CustomInstruction(x.name, x.num_params, x.num_qubits, x.builtin) 513 for x in custom_instructions 514 ], 515 tuple(custom_classical), 516 strict, 517 ), 518 custom_instructions, 519 ) 520 [end of qiskit/qasm2/__init__.py] [start of qiskit/quantum_info/synthesis/qsd.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2022. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 12 """ 13 Quantum Shannon Decomposition. 14 15 Method is described in arXiv:quant-ph/0406176. 
16 """ 17 import scipy 18 import numpy as np 19 from qiskit.circuit import QuantumCircuit, QuantumRegister 20 from qiskit.quantum_info.synthesis import two_qubit_decompose, one_qubit_decompose 21 from qiskit.quantum_info.operators.predicates import is_hermitian_matrix 22 from qiskit.extensions.quantum_initializer.uc_pauli_rot import UCPauliRotGate, _EPS 23 24 25 def qs_decomposition( 26 mat, opt_a1=True, opt_a2=True, decomposer_1q=None, decomposer_2q=None, *, _depth=0 27 ): 28 """ 29 Decomposes unitary matrix into one and two qubit gates using Quantum Shannon Decomposition. 30 31 ┌───┐ ┌───┐ ┌───┐ ┌───┐ 32 ─┤ ├─ ───────┤ Rz├─────┤ Ry├─────┤ Rz├───── 33 │ │ ≃ ┌───┐└─┬─┘┌───┐└─┬─┘┌───┐└─┬─┘┌───┐ 34 /─┤ ├─ /─┤ ├──□──┤ ├──□──┤ ├──□──┤ ├ 35 └───┘ └───┘ └───┘ └───┘ └───┘ 36 37 The number of CX gates generated with the decomposition without optimizations is, 38 39 .. math:: 40 41 \frac{9}{16} 4^n - frac{3}{2} 2^n 42 43 If opt_a1 = True, the default, the CX count is reduced by, 44 45 .. math:: 46 47 \frac{1}{3} 4^{n - 2} - 1. 48 49 If opt_a2 = True, the default, the CX count is reduced by, 50 51 .. math:: 52 53 4^{n-2} - 1. 54 55 This decomposition is described in arXiv:quant-ph/0406176. 56 57 Arguments: 58 mat (ndarray): unitary matrix to decompose 59 opt_a1 (bool): whether to try optimization A.1 from Shende. This should eliminate 1 cnot 60 per call. If True CZ gates are left in the output. If desired these can be further decomposed 61 to CX. 62 opt_a2 (bool): whether to try optimization A.2 from Shende. This decomposes two qubit 63 unitaries into a diagonal gate and a two cx unitary and reduces overal cx count by 64 4^(n-2) - 1. 65 decomposer_1q (None or Object): optional 1Q decomposer. If None, uses 66 :class:`~qiskit.quantum_info.synthesis.one_qubit_decomposer.OneQubitEulerDecomser` 67 decomposer_2q (None or Object): optional 2Q decomposer. If None, uses 68 :class:`~qiskit.quantum_info.synthesis.two_qubit_decomposer.two_qubit_cnot_decompose 69 70 Return: 71 QuantumCircuit: Decomposed quantum circuit. 72 """ 73 # _depth (int): Internal use parameter to track recursion depth. 
74 dim = mat.shape[0] 75 nqubits = int(np.log2(dim)) 76 if np.allclose(np.identity(dim), mat): 77 return QuantumCircuit(nqubits) 78 if dim == 2: 79 if decomposer_1q is None: 80 decomposer_1q = one_qubit_decompose.OneQubitEulerDecomposer() 81 circ = decomposer_1q(mat) 82 elif dim == 4: 83 if decomposer_2q is None: 84 if opt_a2: 85 from qiskit.extensions.unitary import UnitaryGate # pylint: disable=cyclic-import 86 87 def decomp_2q(mat): 88 ugate = UnitaryGate(mat) 89 qc = QuantumCircuit(2, name="qsd2q") 90 qc.append(ugate, [0, 1]) 91 return qc 92 93 decomposer_2q = decomp_2q 94 else: 95 decomposer_2q = two_qubit_decompose.two_qubit_cnot_decompose 96 circ = decomposer_2q(mat) 97 else: 98 qr = QuantumRegister(nqubits) 99 circ = QuantumCircuit(qr) 100 dim_o2 = dim // 2 101 # perform cosine-sine decomposition 102 (u1, u2), vtheta, (v1h, v2h) = scipy.linalg.cossin(mat, separate=True, p=dim_o2, q=dim_o2) 103 # left circ 104 left_circ = _demultiplex(v1h, v2h, opt_a1=opt_a1, opt_a2=opt_a2, _depth=_depth) 105 circ.append(left_circ.to_instruction(), qr) 106 # middle circ 107 if opt_a1: 108 nangles = len(vtheta) 109 half_size = nangles // 2 110 # get UCG in terms of CZ 111 circ_cz = _get_ucry_cz(nqubits, (2 * vtheta).tolist()) 112 circ.append(circ_cz.to_instruction(), range(nqubits)) 113 # merge final cz with right-side generic multiplexer 114 u2[:, half_size:] = np.negative(u2[:, half_size:]) 115 else: 116 circ.ucry((2 * vtheta).tolist(), qr[:-1], qr[-1]) 117 # right circ 118 right_circ = _demultiplex(u1, u2, opt_a1=opt_a1, opt_a2=opt_a2, _depth=_depth) 119 circ.append(right_circ.to_instruction(), qr) 120 121 if opt_a2 and _depth == 0: 122 return _apply_a2(circ) 123 return circ 124 125 126 def _demultiplex(um0, um1, opt_a1=False, opt_a2=False, *, _depth=0): 127 """Decompose a generic multiplexer. 128 129 ────□──── 130 ┌──┴──┐ 131 /─┤ ├─ 132 └─────┘ 133 134 represented by the block diagonal matrix 135 136 ┏ ┓ 137 ┃ um0 ┃ 138 ┃ um1 ┃ 139 ┗ ┛ 140 141 to 142 ┌───┐ 143 ───────┤ Rz├────── 144 ┌───┐└─┬─┘┌───┐ 145 /─┤ w ├──□──┤ v ├─ 146 └───┘ └───┘ 147 148 where v and w are general unitaries determined from decomposition. 149 150 Args: 151 um0 (ndarray): applied if MSB is 0 152 um1 (ndarray): applied if MSB is 1 153 opt_a1 (bool): whether to try optimization A.1 from Shende. This should elliminate 1 cnot 154 per call. If True CZ gates are left in the output. If desired these can be further decomposed 155 opt_a2 (bool): whether to try optimization A.2 from Shende. This decomposes two qubit 156 unitaries into a diagonal gate and a two cx unitary and reduces overal cx count by 157 4^(n-2) - 1. 158 _depth (int): This is an internal variable to track the recursion depth. 
159 160 Returns: 161 QuantumCircuit: decomposed circuit 162 """ 163 dim = um0.shape[0] + um1.shape[0] # these should be same dimension 164 nqubits = int(np.log2(dim)) 165 um0um1 = um0 @ um1.T.conjugate() 166 if is_hermitian_matrix(um0um1): 167 eigvals, vmat = scipy.linalg.eigh(um0um1) 168 else: 169 evals, vmat = scipy.linalg.schur(um0um1, output="complex") 170 eigvals = evals.diagonal() 171 dvals = np.lib.scimath.sqrt(eigvals) 172 dmat = np.diag(dvals) 173 wmat = dmat @ vmat.T.conjugate() @ um1 174 175 circ = QuantumCircuit(nqubits) 176 177 # left gate 178 left_gate = qs_decomposition( 179 wmat, opt_a1=opt_a1, opt_a2=opt_a2, _depth=_depth + 1 180 ).to_instruction() 181 circ.append(left_gate, range(nqubits - 1)) 182 183 # multiplexed Rz 184 angles = 2 * np.angle(np.conj(dvals)) 185 circ.ucrz(angles.tolist(), list(range(nqubits - 1)), [nqubits - 1]) 186 187 # right gate 188 right_gate = qs_decomposition( 189 vmat, opt_a1=opt_a1, opt_a2=opt_a2, _depth=_depth + 1 190 ).to_instruction() 191 circ.append(right_gate, range(nqubits - 1)) 192 193 return circ 194 195 196 def _get_ucry_cz(nqubits, angles): 197 """ 198 Get uniformly controlled Ry gate in in CZ-Ry as in UCPauliRotGate. 199 """ 200 nangles = len(angles) 201 qc = QuantumCircuit(nqubits) 202 q_controls = qc.qubits[:-1] 203 q_target = qc.qubits[-1] 204 if not q_controls: 205 if np.abs(angles[0]) > _EPS: 206 qc.ry(angles[0], q_target) 207 else: 208 angles = angles.copy() 209 UCPauliRotGate._dec_uc_rotations(angles, 0, len(angles), False) 210 for (i, angle) in enumerate(angles): 211 if np.abs(angle) > _EPS: 212 qc.ry(angle, q_target) 213 if not i == len(angles) - 1: 214 binary_rep = np.binary_repr(i + 1) 215 q_contr_index = len(binary_rep) - len(binary_rep.rstrip("0")) 216 else: 217 # Handle special case: 218 q_contr_index = len(q_controls) - 1 219 # leave off last CZ for merging with adjacent UCG 220 if i < nangles - 1: 221 qc.cz(q_controls[q_contr_index], q_target) 222 return qc 223 224 225 def _apply_a2(circ): 226 from qiskit import transpile 227 from qiskit.quantum_info import Operator 228 229 # from qiskit.extensions.unitary import UnitaryGate 230 import qiskit.extensions.unitary 231 232 decomposer = two_qubit_decompose.TwoQubitDecomposeUpToDiagonal() 233 ccirc = transpile(circ, basis_gates=["u", "cx", "qsd2q"], optimization_level=0) 234 ind2q = [] 235 # collect 2q instrs 236 for i, instruction in enumerate(ccirc.data): 237 if instruction.operation.name == "qsd2q": 238 ind2q.append(i) 239 # rolling over diagonals 240 ind2 = None # lint 241 for ind1, ind2 in zip(ind2q[0:-1:], ind2q[1::]): 242 # get neigboring 2q gates separated by controls 243 instr1 = ccirc.data[ind1] 244 mat1 = Operator(instr1.operation).data 245 instr2 = ccirc.data[ind2] 246 mat2 = Operator(instr2.operation).data 247 # rollover 248 dmat, qc2cx = decomposer(mat1) 249 ccirc.data[ind1] = instr1.replace(operation=qc2cx.to_gate()) 250 mat2 = mat2 @ dmat 251 ccirc.data[ind2] = instr2.replace(qiskit.extensions.unitary.UnitaryGate(mat2)) 252 qc3 = two_qubit_decompose.two_qubit_cnot_decompose(mat2) 253 ccirc.data[ind2] = ccirc.data[ind2].replace(operation=qc3.to_gate()) 254 return ccirc 255 [end of qiskit/quantum_info/synthesis/qsd.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Qiskit/qiskit
574da7ee5cfb58cf3d7eb5ef726d15166c5e247a
Quantum shannon decomposition failing for some inputs ### Environment - **Qiskit Terra version**: 0.42.1 - **Python version**: 3.10.8 ### What is happening? qs_decomposition in [qsd.py](https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/quantum_info/synthesis/qsd.py) throws an error for some examples, seemingly due to a bug in the code ### How can we reproduce the issue? ``` from qiskit.quantum_info.synthesis.qsd import qs_decomposition qs_decomposition(np.array([[0,1],[1,0]])) ``` Output: ``` Traceback (most recent call last): Cell In[14], line 3 qs_decomposition(np.array([[0,1],[1,0]])) File /opt/conda/lib/python3.10/site-packages/qiskit/quantum_info/synthesis/qsd.py:122 in qs_decomposition return _apply_a2(circ) File /opt/conda/lib/python3.10/site-packages/qiskit/quantum_info/synthesis/qsd.py:253 in _apply_a2 qc3 = two_qubit_decompose.two_qubit_cnot_decompose(mat2) UnboundLocalError: local variable 'mat2' referenced before assignment Use %tb to get the full traceback. ``` Alternatively, ``` qs_decomposition(qiskit.quantum_info.random_unitary(4).to_matrix()) ``` Giving the same error ### What should happen? We should still be able to produce a decomposed circuit from these examples ### Any suggestions? This seems to occur when line 233 of qsd.py in the function ‘_apply_a2()’ does not transpile to include any of the ‘qsd2q’ gate type for an instance. To fix this add something such as the following before the loop on line 242: ``` if not ind2q: return ccirc ``` Additionally should add a test for this kind of case to the [test](https://github.com/Qiskit/qiskit-terra/blob/main/test/python/quantum_info/test_synthesis.py)
I can correct this with the above indicated as long as the reasoning follows Yeah, I believe your reasoning is correct here, thanks - I don't think there's anything to do if there's nothing to decompose. @ewinston can check me, though, and I'll assign him to the PR if you're able to make it. Let us know if not, though, and one of us will. The `_apply_a2` function actually wasn't supposed to by applied for `dim == 2`. I can submit a pr for that fix. Did you ever notice this for dimension > 2?
2023-05-17T15:30:11Z
<patch> diff --git a/qiskit/quantum_info/synthesis/qsd.py b/qiskit/quantum_info/synthesis/qsd.py --- a/qiskit/quantum_info/synthesis/qsd.py +++ b/qiskit/quantum_info/synthesis/qsd.py @@ -81,7 +81,7 @@ def qs_decomposition( circ = decomposer_1q(mat) elif dim == 4: if decomposer_2q is None: - if opt_a2: + if opt_a2 and _depth > 0: from qiskit.extensions.unitary import UnitaryGate # pylint: disable=cyclic-import def decomp_2q(mat): @@ -118,7 +118,7 @@ def decomp_2q(mat): right_circ = _demultiplex(u1, u2, opt_a1=opt_a1, opt_a2=opt_a2, _depth=_depth) circ.append(right_circ.to_instruction(), qr) - if opt_a2 and _depth == 0: + if opt_a2 and _depth == 0 and dim > 4: return _apply_a2(circ) return circ @@ -236,6 +236,8 @@ def _apply_a2(circ): for i, instruction in enumerate(ccirc.data): if instruction.operation.name == "qsd2q": ind2q.append(i) + if not ind2q: + return ccirc # rolling over diagonals ind2 = None # lint for ind1, ind2 in zip(ind2q[0:-1:], ind2q[1::]): </patch>
[]
[]
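The Qiskit issue above explicitly asks for a test covering the inputs that crashed, and the patch makes `qs_decomposition` skip the A.2 pass for matrices of dimension 4 or less and return early from `_apply_a2` when no `qsd2q` gates are collected. A minimal sketch of such a regression check, built from the reproducers in the issue (the use of `Operator.equiv`, `random_unitary(..., seed=...)` and `.data` goes beyond the issue text and is an assumption; the test actually added upstream may be structured differently):

```python
import numpy as np
from qiskit.quantum_info import Operator, random_unitary
from qiskit.quantum_info.synthesis.qsd import qs_decomposition

# The two reproducers from the issue: a 1-qubit Pauli-X matrix and a random
# 2-qubit unitary, both of which previously raised UnboundLocalError.
cases = [
    np.array([[0, 1], [1, 0]]),
    random_unitary(4, seed=1234).data,  # seed chosen arbitrarily for reproducibility
]
for mat in cases:
    circ = qs_decomposition(mat)
    # The synthesised circuit should implement the input unitary
    # (equiv() tolerates a global-phase difference).
    assert Operator(circ).equiv(Operator(mat))
```

With the patch applied, neither case reaches `_apply_a2` at all (it now requires more than two qubits), so the assertions simply confirm that small inputs still synthesise to an equivalent circuit.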
mesonbuild__meson-9369
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Throws exception instead of parsing error meson git c6d74ac7e0890c323bd1190d5f5d3d938fc6d59a When building this tree, meson throws an exception instead of complaining about the parsing error and where it occurred. [grilo-wip-hadess-grlnet-disable-fix.zip](https://github.com/mesonbuild/meson/files/7278069/grilo-wip-hadess-grlnet-disable-fix.zip) ```sh $ ~/Projects/jhbuild/meson/meson.py --prefix /home/hadess/Projects/gnome-install --libdir lib --buildtype=debugoptimized /home/hadess/Downloads/grilo-wip-hadess-grlnet-disable-fix The Meson build system Version: 0.59.99 Source dir: /home/hadess/Downloads/grilo-wip-hadess-grlnet-disable-fix Build dir: /tmp/bug-repro Build type: native build Project name: grilo Project version: 0.3.14 C compiler for the host machine: ccache cc (gcc 11.2.1 "cc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)") C linker for the host machine: cc ld.bfd 2.37-10 Host machine cpu family: x86_64 Host machine cpu: x86_64 Found pkg-config: /usr/bin/pkg-config (1.8.0) Run-time dependency gio-2.0 found: YES 2.70.0 Run-time dependency glib-2.0 found: YES 2.70.0 Run-time dependency gmodule-2.0 found: YES 2.70.0 Run-time dependency gobject-2.0 found: YES 2.70.0 Run-time dependency libxml-2.0 found: YES 2.9.12 Run-time dependency libsoup-2.4 found: YES 2.74.0 Run-time dependency totem-plparser found: YES 3.26.6 Program g-ir-scanner found: YES (/usr/bin/g-ir-scanner) Program vapigen found: YES (/usr/bin/vapigen) Run-time dependency gtk+-3.0 found: YES 3.24.30 Run-time dependency oauth found: YES 1.0.3 Run-time dependency gobject-introspection-1.0 found: YES 1.70.0 Run-time dependency vapigen found: YES 0.54.1 Found pkg-config: /usr/bin/pkg-config (1.8.0) Program glib-genmarshal found: YES (/usr/bin/glib-genmarshal) Traceback (most recent call last): File "/home/hadess/Projects/jhbuild/meson/mesonbuild/mesonmain.py", line 228, in run return options.run_func(options) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 290, in run app.generate() File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 181, in generate self._generate(env) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 225, in _generate intr.run() File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreter.py", line 2456, in run super().run() File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 165, in run self.evaluate_codeblock(self.ast, start=1) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 190, in evaluate_codeblock raise e File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_codeblock self.evaluate_statement(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 196, in evaluate_statement return self.function_call(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 82, in wrapper res = f(self, node) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 629, in function_call return func(node, func_args, kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 697, in wrapped return f(*wrapped_args, **wrapped_kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 114, in wrapped 
return f(*wrapped_args, **wrapped_kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 275, in wrapper return f(*nargs, **wrapped_kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreter.py", line 1941, in func_subdir self.evaluate_codeblock(codeblock) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 190, in evaluate_codeblock raise e File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_codeblock self.evaluate_statement(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 198, in evaluate_statement self.assignment(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 848, in assignment value = self.evaluate_statement(node.value) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 200, in evaluate_statement return self.method_call(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 666, in method_call return self._holderify(obj.method_call(method_name, args, kwargs)) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreterobjects.py", line 751, in method_call ret = method(state, args, kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 114, in wrapped return f(*wrapped_args, **wrapped_kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/modules/gnome.py", line 1669, in genmarshal header = build.CustomTarget(output + '_h', state.subdir, state.subproject, custom_kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/build.py", line 2317, in __init__ self.process_kwargs(kwargs, backend) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/build.py", line 2426, in process_kwargs if isinstance(kwargs['install_dir'], list): KeyError: 'install_dir' ``` </issue> <code> [start of README.md] 1 <p align="center"> 2 <img src="https://mesonbuild.com/assets/images/meson_logo.png"> 3 </p> 4 Meson® is a project to create the best possible next-generation 5 build system. 6 7 #### Status 8 9 [![PyPI](https://img.shields.io/pypi/v/meson.svg)](https://pypi.python.org/pypi/meson) 10 [![Build Status](https://dev.azure.com/jussi0947/jussi/_apis/build/status/mesonbuild.meson)](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1) 11 [![Codecov](https://codecov.io/gh/mesonbuild/meson/coverage.svg?branch=master)](https://codecov.io/gh/mesonbuild/meson/branch/master) 12 [![Code Quality: Python](https://img.shields.io/lgtm/grade/python/g/mesonbuild/meson.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/mesonbuild/meson/context:python) 13 [![Total Alerts](https://img.shields.io/lgtm/alerts/g/mesonbuild/meson.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/mesonbuild/meson/alerts) 14 15 #### Dependencies 16 17 - [Python](https://python.org) (version 3.6 or newer) 18 - [Ninja](https://ninja-build.org) (version 1.8.2 or newer) 19 20 #### Installing from source 21 22 Meson is available on [PyPi](https://pypi.python.org/pypi/meson), so 23 it can be installed with `pip3 install meson`. The exact command to 24 type to install with `pip` can vary between systems, be sure to use 25 the Python 3 version of `pip`. 
26 27 If you wish you can install it locally with the standard Python command: 28 29 ```console 30 python3 -m pip install meson 31 ``` 32 33 For builds using Ninja, Ninja can be downloaded directly from Ninja 34 [GitHub release page](https://github.com/ninja-build/ninja/releases) 35 or via [PyPi](https://pypi.python.org/pypi/ninja) 36 37 ```console 38 python3 -m pip install ninja 39 ``` 40 41 More on Installing Meson build can be found at the 42 [getting meson page](https://mesonbuild.com/Getting-meson.html). 43 44 #### Running 45 46 Meson requires that you have a source directory and a build directory 47 and that these two are different. In your source root must exist a 48 file called `meson.build`. To generate the build system run this 49 command: 50 51 `meson setup <source directory> <build directory>` 52 53 Depending on how you obtained Meson the command might also be called 54 `meson.py` instead of plain `meson`. In the rest of this document we 55 are going to use the latter form. 56 57 You can omit either of the two directories, and Meson will substitute 58 the current directory and autodetect what you mean. This allows you to 59 do things like this: 60 61 ```console 62 cd <source root> 63 meson setup builddir 64 ``` 65 66 To compile, cd into your build directory and type `ninja`. To run unit 67 tests, type `ninja test`. 68 69 More on running Meson build system commands can be found at the 70 [running meson page](https://mesonbuild.com/Running-Meson.html) 71 or by typing `meson --help`. 72 73 #### Contributing 74 75 We love code contributions. See the [contribution 76 page](https://mesonbuild.com/Contributing.html) on the website for 77 details. 78 79 80 #### IRC 81 82 The channel to use is `#mesonbuild` either via Matrix ([web 83 interface][matrix_web]) or [OFTC IRC][oftc_irc]. 84 85 [matrix_web]: https://app.element.io/#/room/#mesonbuild:matrix.org 86 [oftc_irc]: https://www.oftc.net/ 87 88 #### Further info 89 90 More information about the Meson build system can be found at the 91 [project's home page](https://mesonbuild.com). 92 93 Meson is a registered trademark of ***Jussi Pakkanen***. 94 [end of README.md] [start of mesonbuild/interpreterbase/interpreterbase.py] 1 # Copyright 2016-2017 The Meson development team 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 7 # http://www.apache.org/licenses/LICENSE-2.0 8 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 # This class contains the basic functionality needed to run any interpreter 16 # or an interpreter-based tool. 17 18 from .. import mparser, mesonlib 19 from .. 
import environment 20 21 from .baseobjects import ( 22 InterpreterObject, 23 MesonInterpreterObject, 24 MutableInterpreterObject, 25 InterpreterObjectTypeVar, 26 ObjectHolder, 27 IterableObject, 28 29 TYPE_var, 30 TYPE_kwargs, 31 32 HoldableTypes, 33 ) 34 35 from .exceptions import ( 36 InterpreterException, 37 InvalidCode, 38 InvalidArguments, 39 SubdirDoneRequest, 40 ContinueRequest, 41 BreakRequest 42 ) 43 44 from .decorators import FeatureNew 45 from .disabler import Disabler, is_disabled 46 from .helpers import default_resolve_key, flatten, resolve_second_level_holders 47 from .operator import MesonOperator 48 from ._unholder import _unholder 49 50 import os, copy, re, pathlib 51 import typing as T 52 import textwrap 53 54 if T.TYPE_CHECKING: 55 # T.cast is not handled by flake8 to detect quoted annotation use 56 # see https://github.com/PyCQA/pyflakes/pull/632 57 from ..interpreter import Interpreter # noqa 58 59 HolderMapType = T.Dict[ 60 T.Union[ 61 T.Type[mesonlib.HoldableObject], 62 T.Type[int], 63 T.Type[bool], 64 T.Type[str], 65 T.Type[list], 66 T.Type[dict], 67 ], 68 # For some reason, this has to be a callable and can't just be ObjectHolder[InterpreterObjectTypeVar] 69 T.Callable[[InterpreterObjectTypeVar, 'Interpreter'], ObjectHolder[InterpreterObjectTypeVar]] 70 ] 71 72 FunctionType = T.Dict[ 73 str, 74 T.Callable[[mparser.BaseNode, T.List[TYPE_var], T.Dict[str, TYPE_var]], TYPE_var] 75 ] 76 77 class InterpreterBase: 78 def __init__(self, source_root: str, subdir: str, subproject: str): 79 self.source_root = source_root 80 self.funcs: FunctionType = {} 81 self.builtin: T.Dict[str, InterpreterObject] = {} 82 # Holder maps store a mapping from an HoldableObject to a class ObjectHolder 83 self.holder_map: HolderMapType = {} 84 self.bound_holder_map: HolderMapType = {} 85 self.subdir = subdir 86 self.root_subdir = subdir 87 self.subproject = subproject 88 self.variables: T.Dict[str, InterpreterObject] = {} 89 self.argument_depth = 0 90 self.current_lineno = -1 91 # Current node set during a function call. This can be used as location 92 # when printing a warning message during a method call. 93 self.current_node = None # type: mparser.BaseNode 94 # This is set to `version_string` when this statement is evaluated: 95 # meson.version().compare_version(version_string) 96 # If it was part of a if-clause, it is used to temporally override the 97 # current meson version target within that if-block. 98 self.tmp_meson_version = None # type: T.Optional[str] 99 100 def load_root_meson_file(self) -> None: 101 mesonfile = os.path.join(self.source_root, self.subdir, environment.build_filename) 102 if not os.path.isfile(mesonfile): 103 raise InvalidArguments('Missing Meson file in %s' % mesonfile) 104 with open(mesonfile, encoding='utf-8') as mf: 105 code = mf.read() 106 if code.isspace(): 107 raise InvalidCode('Builder file is empty.') 108 assert isinstance(code, str) 109 try: 110 self.ast = mparser.Parser(code, mesonfile).parse() 111 except mesonlib.MesonException as me: 112 me.file = mesonfile 113 raise me 114 115 def parse_project(self) -> None: 116 """ 117 Parses project() and initializes languages, compilers etc. Do this 118 early because we need this before we parse the rest of the AST. 119 """ 120 self.evaluate_codeblock(self.ast, end=1) 121 122 def sanity_check_ast(self) -> None: 123 if not isinstance(self.ast, mparser.CodeBlockNode): 124 raise InvalidCode('AST is of invalid type. 
Possibly a bug in the parser.') 125 if not self.ast.lines: 126 raise InvalidCode('No statements in code.') 127 first = self.ast.lines[0] 128 if not isinstance(first, mparser.FunctionNode) or first.func_name != 'project': 129 p = pathlib.Path(self.source_root).resolve() 130 found = p 131 for parent in p.parents: 132 if (parent / 'meson.build').is_file(): 133 with open(parent / 'meson.build', encoding='utf-8') as f: 134 if f.readline().startswith('project('): 135 found = parent 136 break 137 else: 138 break 139 140 error = 'first statement must be a call to project()' 141 if found != p: 142 raise InvalidCode(f'Not the project root: {error}\n\nDid you mean to run meson from the directory: "{found}"?') 143 else: 144 raise InvalidCode(f'Invalid source tree: {error}') 145 146 def run(self) -> None: 147 # Evaluate everything after the first line, which is project() because 148 # we already parsed that in self.parse_project() 149 try: 150 self.evaluate_codeblock(self.ast, start=1) 151 except SubdirDoneRequest: 152 pass 153 154 def evaluate_codeblock(self, node: mparser.CodeBlockNode, start: int = 0, end: T.Optional[int] = None) -> None: 155 if node is None: 156 return 157 if not isinstance(node, mparser.CodeBlockNode): 158 e = InvalidCode('Tried to execute a non-codeblock. Possibly a bug in the parser.') 159 e.lineno = node.lineno 160 e.colno = node.colno 161 raise e 162 statements = node.lines[start:end] 163 i = 0 164 while i < len(statements): 165 cur = statements[i] 166 try: 167 self.current_lineno = cur.lineno 168 self.evaluate_statement(cur) 169 except Exception as e: 170 if getattr(e, 'lineno', None) is None: 171 # We are doing the equivalent to setattr here and mypy does not like it 172 e.lineno = cur.lineno # type: ignore 173 e.colno = cur.colno # type: ignore 174 e.file = os.path.join(self.source_root, self.subdir, environment.build_filename) # type: ignore 175 raise e 176 i += 1 # In THE FUTURE jump over blocks and stuff. 
177 178 def evaluate_statement(self, cur: mparser.BaseNode) -> T.Optional[InterpreterObject]: 179 self.current_node = cur 180 if isinstance(cur, mparser.FunctionNode): 181 return self.function_call(cur) 182 elif isinstance(cur, mparser.AssignmentNode): 183 self.assignment(cur) 184 elif isinstance(cur, mparser.MethodNode): 185 return self.method_call(cur) 186 elif isinstance(cur, mparser.StringNode): 187 return self._holderify(cur.value) 188 elif isinstance(cur, mparser.BooleanNode): 189 return self._holderify(cur.value) 190 elif isinstance(cur, mparser.IfClauseNode): 191 return self.evaluate_if(cur) 192 elif isinstance(cur, mparser.IdNode): 193 return self.get_variable(cur.value) 194 elif isinstance(cur, mparser.ComparisonNode): 195 return self.evaluate_comparison(cur) 196 elif isinstance(cur, mparser.ArrayNode): 197 return self.evaluate_arraystatement(cur) 198 elif isinstance(cur, mparser.DictNode): 199 return self.evaluate_dictstatement(cur) 200 elif isinstance(cur, mparser.NumberNode): 201 return self._holderify(cur.value) 202 elif isinstance(cur, mparser.AndNode): 203 return self.evaluate_andstatement(cur) 204 elif isinstance(cur, mparser.OrNode): 205 return self.evaluate_orstatement(cur) 206 elif isinstance(cur, mparser.NotNode): 207 return self.evaluate_notstatement(cur) 208 elif isinstance(cur, mparser.UMinusNode): 209 return self.evaluate_uminusstatement(cur) 210 elif isinstance(cur, mparser.ArithmeticNode): 211 return self.evaluate_arithmeticstatement(cur) 212 elif isinstance(cur, mparser.ForeachClauseNode): 213 self.evaluate_foreach(cur) 214 elif isinstance(cur, mparser.PlusAssignmentNode): 215 self.evaluate_plusassign(cur) 216 elif isinstance(cur, mparser.IndexNode): 217 return self.evaluate_indexing(cur) 218 elif isinstance(cur, mparser.TernaryNode): 219 return self.evaluate_ternary(cur) 220 elif isinstance(cur, mparser.FormatStringNode): 221 return self.evaluate_fstring(cur) 222 elif isinstance(cur, mparser.ContinueNode): 223 raise ContinueRequest() 224 elif isinstance(cur, mparser.BreakNode): 225 raise BreakRequest() 226 else: 227 raise InvalidCode("Unknown statement.") 228 return None 229 230 def evaluate_arraystatement(self, cur: mparser.ArrayNode) -> InterpreterObject: 231 (arguments, kwargs) = self.reduce_arguments(cur.args) 232 if len(kwargs) > 0: 233 raise InvalidCode('Keyword arguments are invalid in array construction.') 234 return self._holderify([_unholder(x) for x in arguments]) 235 236 @FeatureNew('dict', '0.47.0') 237 def evaluate_dictstatement(self, cur: mparser.DictNode) -> InterpreterObject: 238 def resolve_key(key: mparser.BaseNode) -> str: 239 if not isinstance(key, mparser.StringNode): 240 FeatureNew.single_use('Dictionary entry using non literal key', '0.53.0', self.subproject) 241 str_key = _unholder(self.evaluate_statement(key)) 242 if not isinstance(str_key, str): 243 raise InvalidArguments('Key must be a string') 244 return str_key 245 arguments, kwargs = self.reduce_arguments(cur.args, key_resolver=resolve_key, duplicate_key_error='Duplicate dictionary key: {}') 246 assert not arguments 247 return self._holderify({k: _unholder(v) for k, v in kwargs.items()}) 248 249 def evaluate_notstatement(self, cur: mparser.NotNode) -> InterpreterObject: 250 v = self.evaluate_statement(cur.value) 251 if isinstance(v, Disabler): 252 return v 253 return self._holderify(v.operator_call(MesonOperator.NOT, None)) 254 255 def evaluate_if(self, node: mparser.IfClauseNode) -> T.Optional[Disabler]: 256 assert isinstance(node, mparser.IfClauseNode) 257 for i in node.ifs: 258 
# Reset self.tmp_meson_version to know if it gets set during this 259 # statement evaluation. 260 self.tmp_meson_version = None 261 result = self.evaluate_statement(i.condition) 262 if isinstance(result, Disabler): 263 return result 264 if not isinstance(result, InterpreterObject): 265 raise mesonlib.MesonBugException(f'Argument to not ({result}) is not an InterpreterObject but {type(result).__name__}.') 266 res = result.operator_call(MesonOperator.BOOL, None) 267 if not isinstance(res, bool): 268 raise InvalidCode(f'If clause {result!r} does not evaluate to true or false.') 269 if res: 270 prev_meson_version = mesonlib.project_meson_versions[self.subproject] 271 if self.tmp_meson_version: 272 mesonlib.project_meson_versions[self.subproject] = self.tmp_meson_version 273 try: 274 self.evaluate_codeblock(i.block) 275 finally: 276 mesonlib.project_meson_versions[self.subproject] = prev_meson_version 277 return None 278 if not isinstance(node.elseblock, mparser.EmptyNode): 279 self.evaluate_codeblock(node.elseblock) 280 return None 281 282 def evaluate_comparison(self, node: mparser.ComparisonNode) -> InterpreterObject: 283 val1 = self.evaluate_statement(node.left) 284 if isinstance(val1, Disabler): 285 return val1 286 val2 = self.evaluate_statement(node.right) 287 if isinstance(val2, Disabler): 288 return val2 289 290 # New code based on InterpreterObjects 291 operator = { 292 'in': MesonOperator.IN, 293 'notin': MesonOperator.NOT_IN, 294 '==': MesonOperator.EQUALS, 295 '!=': MesonOperator.NOT_EQUALS, 296 '>': MesonOperator.GREATER, 297 '<': MesonOperator.LESS, 298 '>=': MesonOperator.GREATER_EQUALS, 299 '<=': MesonOperator.LESS_EQUALS, 300 }[node.ctype] 301 302 # Check if the arguments should be reversed for simplicity (this essentially converts `in` to `contains`) 303 if operator in (MesonOperator.IN, MesonOperator.NOT_IN): 304 val1, val2 = val2, val1 305 306 return self._holderify(val1.operator_call(operator, _unholder(val2))) 307 308 def evaluate_andstatement(self, cur: mparser.AndNode) -> InterpreterObject: 309 l = self.evaluate_statement(cur.left) 310 if isinstance(l, Disabler): 311 return l 312 l_bool = l.operator_call(MesonOperator.BOOL, None) 313 if not l_bool: 314 return self._holderify(l_bool) 315 r = self.evaluate_statement(cur.right) 316 if isinstance(r, Disabler): 317 return r 318 return self._holderify(r.operator_call(MesonOperator.BOOL, None)) 319 320 def evaluate_orstatement(self, cur: mparser.OrNode) -> InterpreterObject: 321 l = self.evaluate_statement(cur.left) 322 if isinstance(l, Disabler): 323 return l 324 l_bool = l.operator_call(MesonOperator.BOOL, None) 325 if l_bool: 326 return self._holderify(l_bool) 327 r = self.evaluate_statement(cur.right) 328 if isinstance(r, Disabler): 329 return r 330 return self._holderify(r.operator_call(MesonOperator.BOOL, None)) 331 332 def evaluate_uminusstatement(self, cur: mparser.UMinusNode) -> InterpreterObject: 333 v = self.evaluate_statement(cur.value) 334 if isinstance(v, Disabler): 335 return v 336 return self._holderify(v.operator_call(MesonOperator.UMINUS, None)) 337 338 def evaluate_arithmeticstatement(self, cur: mparser.ArithmeticNode) -> InterpreterObject: 339 l = self.evaluate_statement(cur.left) 340 if isinstance(l, Disabler): 341 return l 342 r = self.evaluate_statement(cur.right) 343 if isinstance(r, Disabler): 344 return r 345 346 mapping: T.Dict[str, MesonOperator] = { 347 'add': MesonOperator.PLUS, 348 'sub': MesonOperator.MINUS, 349 'mul': MesonOperator.TIMES, 350 'div': MesonOperator.DIV, 351 'mod': 
MesonOperator.MOD, 352 } 353 res = l.operator_call(mapping[cur.operation], _unholder(r)) 354 return self._holderify(res) 355 356 def evaluate_ternary(self, node: mparser.TernaryNode) -> T.Optional[InterpreterObject]: 357 assert isinstance(node, mparser.TernaryNode) 358 result = self.evaluate_statement(node.condition) 359 if isinstance(result, Disabler): 360 return result 361 result_bool = result.operator_call(MesonOperator.BOOL, None) 362 if result_bool: 363 return self.evaluate_statement(node.trueblock) 364 else: 365 return self.evaluate_statement(node.falseblock) 366 367 @FeatureNew('format strings', '0.58.0') 368 def evaluate_fstring(self, node: mparser.FormatStringNode) -> InterpreterObject: 369 assert isinstance(node, mparser.FormatStringNode) 370 371 def replace(match: T.Match[str]) -> str: 372 var = str(match.group(1)) 373 try: 374 val = _unholder(self.variables[var]) 375 if not isinstance(val, (str, int, float, bool)): 376 raise InvalidCode(f'Identifier "{var}" does not name a formattable variable ' + 377 '(has to be an integer, a string, a floating point number or a boolean).') 378 379 return str(val) 380 except KeyError: 381 raise InvalidCode(f'Identifier "{var}" does not name a variable.') 382 383 res = re.sub(r'@([_a-zA-Z][_0-9a-zA-Z]*)@', replace, node.value) 384 return self._holderify(res) 385 386 def evaluate_foreach(self, node: mparser.ForeachClauseNode) -> None: 387 assert isinstance(node, mparser.ForeachClauseNode) 388 items = self.evaluate_statement(node.items) 389 if not isinstance(items, IterableObject): 390 raise InvalidArguments('Items of foreach loop do not support iterating') 391 392 tsize = items.iter_tuple_size() 393 if len(node.varnames) != (tsize or 1): 394 raise InvalidArguments(f'Foreach expects exactly {tsize or 1} variables for iterating over objects of type {items.display_name()}') 395 396 for i in items.iter_self(): 397 if tsize is None: 398 if isinstance(i, tuple): 399 raise mesonlib.MesonBugException(f'Iteration of {items} returned a tuple even though iter_tuple_size() is None') 400 self.set_variable(node.varnames[0], self._holderify(i)) 401 else: 402 if not isinstance(i, tuple): 403 raise mesonlib.MesonBugException(f'Iteration of {items} did not return a tuple even though iter_tuple_size() is {tsize}') 404 if len(i) != tsize: 405 raise mesonlib.MesonBugException(f'Iteration of {items} did not return a tuple even though iter_tuple_size() is {tsize}') 406 for j in range(tsize): 407 self.set_variable(node.varnames[j], self._holderify(i[j])) 408 try: 409 self.evaluate_codeblock(node.block) 410 except ContinueRequest: 411 continue 412 except BreakRequest: 413 break 414 415 def evaluate_plusassign(self, node: mparser.PlusAssignmentNode) -> None: 416 assert isinstance(node, mparser.PlusAssignmentNode) 417 varname = node.var_name 418 addition = self.evaluate_statement(node.value) 419 420 # Remember that all variables are immutable. We must always create a 421 # full new variable and then assign it. 
422 old_variable = self.get_variable(varname) 423 new_value = self._holderify(old_variable.operator_call(MesonOperator.PLUS, _unholder(addition))) 424 self.set_variable(varname, new_value) 425 426 def evaluate_indexing(self, node: mparser.IndexNode) -> InterpreterObject: 427 assert isinstance(node, mparser.IndexNode) 428 iobject = self.evaluate_statement(node.iobject) 429 if isinstance(iobject, Disabler): 430 return iobject 431 index = _unholder(self.evaluate_statement(node.index)) 432 433 if iobject is None: 434 raise InterpreterException('Tried to evaluate indexing on None') 435 return self._holderify(iobject.operator_call(MesonOperator.INDEX, index)) 436 437 def function_call(self, node: mparser.FunctionNode) -> T.Optional[InterpreterObject]: 438 func_name = node.func_name 439 (h_posargs, h_kwargs) = self.reduce_arguments(node.args) 440 (posargs, kwargs) = self._unholder_args(h_posargs, h_kwargs) 441 if is_disabled(posargs, kwargs) and func_name not in {'get_variable', 'set_variable', 'unset_variable', 'is_disabler'}: 442 return Disabler() 443 if func_name in self.funcs: 444 func = self.funcs[func_name] 445 func_args = posargs 446 if not getattr(func, 'no-args-flattening', False): 447 func_args = flatten(posargs) 448 if not getattr(func, 'no-second-level-holder-flattening', False): 449 func_args, kwargs = resolve_second_level_holders(func_args, kwargs) 450 res = func(node, func_args, kwargs) 451 return self._holderify(res) if res is not None else None 452 else: 453 self.unknown_function_called(func_name) 454 return None 455 456 def method_call(self, node: mparser.MethodNode) -> T.Optional[InterpreterObject]: 457 invokable = node.source_object 458 obj: T.Optional[InterpreterObject] 459 if isinstance(invokable, mparser.IdNode): 460 object_name = invokable.value 461 obj = self.get_variable(object_name) 462 else: 463 obj = self.evaluate_statement(invokable) 464 method_name = node.name 465 (h_args, h_kwargs) = self.reduce_arguments(node.args) 466 (args, kwargs) = self._unholder_args(h_args, h_kwargs) 467 if is_disabled(args, kwargs): 468 return Disabler() 469 if not isinstance(obj, InterpreterObject): 470 raise InvalidArguments('Variable "%s" is not callable.' % object_name) 471 # TODO: InterpreterBase **really** shouldn't be in charge of checking this 472 if method_name == 'extract_objects': 473 if isinstance(obj, ObjectHolder): 474 self.validate_extraction(obj.held_object) 475 elif not isinstance(obj, Disabler): 476 raise InvalidArguments(f'Invalid operation "extract_objects" on variable "{object_name}" of type {type(obj).__name__}') 477 obj.current_node = node 478 res = obj.method_call(method_name, args, kwargs) 479 return self._holderify(res) if res is not None else None 480 481 def _holderify(self, res: T.Union[TYPE_var, InterpreterObject]) -> InterpreterObject: 482 if isinstance(res, HoldableTypes): 483 # Always check for an exact match first. 484 cls = self.holder_map.get(type(res), None) 485 if cls is not None: 486 # Casts to Interpreter are required here since an assertion would 487 # not work for the `ast` module. 488 return cls(res, T.cast('Interpreter', self)) 489 # Try the boundary types next. 
490 for typ, cls in self.bound_holder_map.items(): 491 if isinstance(res, typ): 492 return cls(res, T.cast('Interpreter', self)) 493 raise mesonlib.MesonBugException(f'Object {res} of type {type(res).__name__} is neither in self.holder_map nor self.bound_holder_map.') 494 elif isinstance(res, ObjectHolder): 495 raise mesonlib.MesonBugException(f'Returned object {res} of type {type(res).__name__} is an object holder.') 496 elif isinstance(res, MesonInterpreterObject): 497 return res 498 raise mesonlib.MesonBugException(f'Unknown returned object {res} of type {type(res).__name__} in the parameters.') 499 500 def _unholder_args(self, 501 args: T.List[InterpreterObject], 502 kwargs: T.Dict[str, InterpreterObject]) -> T.Tuple[T.List[TYPE_var], TYPE_kwargs]: 503 return [_unholder(x) for x in args], {k: _unholder(v) for k, v in kwargs.items()} 504 505 def unknown_function_called(self, func_name: str) -> None: 506 raise InvalidCode('Unknown function "%s".' % func_name) 507 508 def reduce_arguments( 509 self, 510 args: mparser.ArgumentNode, 511 key_resolver: T.Callable[[mparser.BaseNode], str] = default_resolve_key, 512 duplicate_key_error: T.Optional[str] = None, 513 ) -> T.Tuple[ 514 T.List[InterpreterObject], 515 T.Dict[str, InterpreterObject] 516 ]: 517 assert isinstance(args, mparser.ArgumentNode) 518 if args.incorrect_order(): 519 raise InvalidArguments('All keyword arguments must be after positional arguments.') 520 self.argument_depth += 1 521 reduced_pos = [self.evaluate_statement(arg) for arg in args.arguments] 522 if any(x is None for x in reduced_pos): 523 raise InvalidArguments(f'At least one value in the arguments is void.') 524 reduced_kw: T.Dict[str, InterpreterObject] = {} 525 for key, val in args.kwargs.items(): 526 reduced_key = key_resolver(key) 527 assert isinstance(val, mparser.BaseNode) 528 reduced_val = self.evaluate_statement(val) 529 if reduced_val is None: 530 raise InvalidArguments(f'Value of key {reduced_key} is void.') 531 if duplicate_key_error and reduced_key in reduced_kw: 532 raise InvalidArguments(duplicate_key_error.format(reduced_key)) 533 reduced_kw[reduced_key] = reduced_val 534 self.argument_depth -= 1 535 final_kw = self.expand_default_kwargs(reduced_kw) 536 return reduced_pos, final_kw 537 538 def expand_default_kwargs(self, kwargs: T.Dict[str, T.Optional[InterpreterObject]]) -> T.Dict[str, T.Optional[InterpreterObject]]: 539 if 'kwargs' not in kwargs: 540 return kwargs 541 to_expand = _unholder(kwargs.pop('kwargs')) 542 if not isinstance(to_expand, dict): 543 raise InterpreterException('Value of "kwargs" must be dictionary.') 544 if 'kwargs' in to_expand: 545 raise InterpreterException('Kwargs argument must not contain a "kwargs" entry. Points for thinking meta, though. :P') 546 for k, v in to_expand.items(): 547 if k in kwargs: 548 raise InterpreterException(f'Entry "{k}" defined both as a keyword argument and in a "kwarg" entry.') 549 kwargs[k] = self._holderify(v) 550 return kwargs 551 552 def assignment(self, node: mparser.AssignmentNode) -> None: 553 assert isinstance(node, mparser.AssignmentNode) 554 if self.argument_depth != 0: 555 raise InvalidArguments(textwrap.dedent('''\ 556 Tried to assign values inside an argument list. 557 To specify a keyword argument, use : instead of =. 
558 ''')) 559 var_name = node.var_name 560 if not isinstance(var_name, str): 561 raise InvalidArguments('Tried to assign value to a non-variable.') 562 value = self.evaluate_statement(node.value) 563 # For mutable objects we need to make a copy on assignment 564 if isinstance(value, MutableInterpreterObject): 565 value = copy.deepcopy(value) 566 self.set_variable(var_name, value) 567 return None 568 569 def set_variable(self, varname: str, variable: T.Union[TYPE_var, InterpreterObject], *, holderify: bool = False) -> None: 570 if variable is None: 571 raise InvalidCode('Can not assign None to variable.') 572 if holderify: 573 variable = self._holderify(variable) 574 else: 575 # Ensure that we are always storing ObjectHolders 576 if not isinstance(variable, InterpreterObject): 577 raise mesonlib.MesonBugException(f'set_variable in InterpreterBase called with a non InterpreterObject {variable} of type {type(variable).__name__}') 578 if not isinstance(varname, str): 579 raise InvalidCode('First argument to set_variable must be a string.') 580 if re.match('[_a-zA-Z][_0-9a-zA-Z]*$', varname) is None: 581 raise InvalidCode('Invalid variable name: ' + varname) 582 if varname in self.builtin: 583 raise InvalidCode('Tried to overwrite internal variable "%s"' % varname) 584 self.variables[varname] = variable 585 586 def get_variable(self, varname: str) -> InterpreterObject: 587 if varname in self.builtin: 588 return self.builtin[varname] 589 if varname in self.variables: 590 return self.variables[varname] 591 raise InvalidCode('Unknown variable "%s".' % varname) 592 593 def validate_extraction(self, buildtarget: mesonlib.HoldableObject) -> None: 594 raise InterpreterException('validate_extraction is not implemented in this context (please file a bug)') 595 [end of mesonbuild/interpreterbase/interpreterbase.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
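The traceback above bottoms out in `CustomTarget.process_kwargs` evaluating `isinstance(kwargs['install_dir'], list)` on a dictionary that `gnome.genmarshal` assembled by hand, setting `install_dir` only when the caller supplied one. Because `KeyError` is not a `MesonException`, there is nothing for meson's error formatting to latch onto, which is presumably why the user sees a raw traceback instead of a located `ERROR:` message. The snippet below is a minimal sketch of that failure mode; `CustomTarget` and `genmarshal_header` here are simplified stand-ins named after the frames in the traceback, not meson's actual implementation.

```python
class CustomTarget:
    """Stand-in for mesonbuild.build.CustomTarget: process_kwargs assumes the
    interpreter has already populated every key it indexes."""

    def __init__(self, name, kwargs):
        self.name = name
        self.process_kwargs(kwargs)

    def process_kwargs(self, kwargs):
        # Direct indexing raises KeyError when a module hands in a partially
        # filled dict, and nothing upstream converts that into a user-facing
        # meson error.
        if isinstance(kwargs['install_dir'], list):
            self.install_dir = kwargs['install_dir']


def genmarshal_header(output, install_dir=None):
    """Stand-in for gnome.genmarshal: builds its custom_kwargs dict by hand."""
    custom_kwargs = {'output': output + '.h', 'install': False}
    if install_dir is not None:          # the key is only set conditionally
        custom_kwargs['install_dir'] = install_dir
    return CustomTarget(output + '_h', custom_kwargs)


try:
    genmarshal_header('grl-marshal')
except KeyError as err:
    print('escapes as a plain Python exception:', repr(err))
```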
mesonbuild/meson
98d1ec7a32e15e82b62a35d0288e8458321ebd23
Throws exception instead of parsing error meson git c6d74ac7e0890c323bd1190d5f5d3d938fc6d59a When building this tree, meson throws an exception instead of complaining about the parsing error and where it occurred. [grilo-wip-hadess-grlnet-disable-fix.zip](https://github.com/mesonbuild/meson/files/7278069/grilo-wip-hadess-grlnet-disable-fix.zip) ```sh $ ~/Projects/jhbuild/meson/meson.py --prefix /home/hadess/Projects/gnome-install --libdir lib --buildtype=debugoptimized /home/hadess/Downloads/grilo-wip-hadess-grlnet-disable-fix The Meson build system Version: 0.59.99 Source dir: /home/hadess/Downloads/grilo-wip-hadess-grlnet-disable-fix Build dir: /tmp/bug-repro Build type: native build Project name: grilo Project version: 0.3.14 C compiler for the host machine: ccache cc (gcc 11.2.1 "cc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)") C linker for the host machine: cc ld.bfd 2.37-10 Host machine cpu family: x86_64 Host machine cpu: x86_64 Found pkg-config: /usr/bin/pkg-config (1.8.0) Run-time dependency gio-2.0 found: YES 2.70.0 Run-time dependency glib-2.0 found: YES 2.70.0 Run-time dependency gmodule-2.0 found: YES 2.70.0 Run-time dependency gobject-2.0 found: YES 2.70.0 Run-time dependency libxml-2.0 found: YES 2.9.12 Run-time dependency libsoup-2.4 found: YES 2.74.0 Run-time dependency totem-plparser found: YES 3.26.6 Program g-ir-scanner found: YES (/usr/bin/g-ir-scanner) Program vapigen found: YES (/usr/bin/vapigen) Run-time dependency gtk+-3.0 found: YES 3.24.30 Run-time dependency oauth found: YES 1.0.3 Run-time dependency gobject-introspection-1.0 found: YES 1.70.0 Run-time dependency vapigen found: YES 0.54.1 Found pkg-config: /usr/bin/pkg-config (1.8.0) Program glib-genmarshal found: YES (/usr/bin/glib-genmarshal) Traceback (most recent call last): File "/home/hadess/Projects/jhbuild/meson/mesonbuild/mesonmain.py", line 228, in run return options.run_func(options) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 290, in run app.generate() File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 181, in generate self._generate(env) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/msetup.py", line 225, in _generate intr.run() File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreter.py", line 2456, in run super().run() File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 165, in run self.evaluate_codeblock(self.ast, start=1) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 190, in evaluate_codeblock raise e File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_codeblock self.evaluate_statement(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 196, in evaluate_statement return self.function_call(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 82, in wrapper res = f(self, node) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 629, in function_call return func(node, func_args, kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 697, in wrapped return f(*wrapped_args, **wrapped_kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 114, in wrapped return f(*wrapped_args, **wrapped_kwargs) File 
"/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 275, in wrapper return f(*nargs, **wrapped_kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreter.py", line 1941, in func_subdir self.evaluate_codeblock(codeblock) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 190, in evaluate_codeblock raise e File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 183, in evaluate_codeblock self.evaluate_statement(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 198, in evaluate_statement self.assignment(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 848, in assignment value = self.evaluate_statement(node.value) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 200, in evaluate_statement return self.method_call(cur) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/interpreterbase.py", line 666, in method_call return self._holderify(obj.method_call(method_name, args, kwargs)) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreter/interpreterobjects.py", line 751, in method_call ret = method(state, args, kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/interpreterbase/decorators.py", line 114, in wrapped return f(*wrapped_args, **wrapped_kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/modules/gnome.py", line 1669, in genmarshal header = build.CustomTarget(output + '_h', state.subdir, state.subproject, custom_kwargs) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/build.py", line 2317, in __init__ self.process_kwargs(kwargs, backend) File "/home/hadess/Projects/jhbuild/meson/mesonbuild/build.py", line 2426, in process_kwargs if isinstance(kwargs['install_dir'], list): KeyError: 'install_dir' ```
The fix is below, but I would have expected an error about the type mismatches before the ninja file generation. ```patch diff --git a/bindings/vala/meson.build b/bindings/vala/meson.build index f5723b3..493634c 100644 --- a/bindings/vala/meson.build +++ b/bindings/vala/meson.build @@ -10,7 +10,7 @@ vala_sources = [ # LIBRARY, GIR, DEPS ] if enable_grlnet - vala_sources += ['grilo-net-@0@'.format(grl_majorminor), grlnet_gir[0], ['gio-2.0']] + vala_sources += [['grilo-net-@0@'.format(grl_majorminor), grlnet_gir[0], ['gio-2.0']]] endif foreach s: vala_sources ``` Sounds like a regression since custom_target() got ported to typed_kwargs(). Modules are creating CustomTargets without going through those decorators. @dcbaker I think the fix is to add a wrapper on the ModuleState() object to create custom targets. Modules should stop using internal APIs like that. See ModuleState.test(), we did the same thing there. @xclaesse The plan is to make CustomTarget itself useful; we shouldn't have to add wrappers around the initializers of internal classes. They should stop doing the interpreter's job, stop taking a `kwargs` dict, and use keywords. I'll get it fixed. True, that's why state.test() takes Python arguments instead of a kwargs dict, even if internally it goes back to a kwargs dict for now. I think we should still wrap that with a ModuleState method for CustomTarget too; one reason is that I want - in the long term - to get rid of ModuleReturnValue: ModuleState should be responsible for adding targets into the build list instead of process_new_values().
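The discussion above points in two compatible directions: give every key the constructor will index a concrete default, and, in the long run, stop passing raw `kwargs` dicts from modules into `CustomTarget` at all. Below is a hedged sketch of the second idea; `custom_target_kwargs` is a hypothetical helper name used purely for illustration and is not an existing `ModuleState` method.

```python
from typing import Any, Dict, List, Optional


class ModuleState:
    """Illustrative wrapper: the helper owns the defaults, so a module can
    never hand a partially filled dict to an internal constructor."""

    def custom_target_kwargs(self, *, output: List[str], command: List[str],
                             install: bool = False,
                             install_dir: Optional[List[str]] = None) -> Dict[str, Any]:
        # Every key a downstream consumer indexes gets a concrete value here.
        return {
            'output': output,
            'command': command,
            'install': install,
            'install_dir': install_dir if install_dir is not None else [],
        }


state = ModuleState()
kwargs = state.custom_target_kwargs(output=['marshal.h'],
                                    command=['glib-genmarshal', '@INPUT@'])
assert isinstance(kwargs['install_dir'], list)   # always safe to index
```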
2021-10-07T16:28:29Z
<patch> diff --git a/mesonbuild/modules/gnome.py b/mesonbuild/modules/gnome.py --- a/mesonbuild/modules/gnome.py +++ b/mesonbuild/modules/gnome.py @@ -222,9 +222,9 @@ def compile_resources(self, state, args, kwargs): ifile = os.path.join(ifile.subdir, ifile.fname) elif isinstance(ifile, str): ifile = os.path.join(state.subdir, ifile) - elif isinstance(ifile, (interpreter.CustomTargetHolder, - interpreter.CustomTargetIndexHolder, - interpreter.GeneratedObjectsHolder)): + elif isinstance(ifile, (build.CustomTarget, + build.CustomTargetIndex, + build.GeneratedList)): m = 'Resource xml files generated at build-time cannot be used ' \ 'with gnome.compile_resources() because we need to scan ' \ 'the xml for dependencies. Use configure_file() instead ' \ @@ -286,7 +286,7 @@ def compile_resources(self, state, args, kwargs): kwargs['depend_files'] = depend_files kwargs['command'] = cmd else: - depfile = kwargs['output'] + '.d' + depfile = f'{output}.d' kwargs['depfile'] = depfile kwargs['command'] = copy.copy(cmd) + ['--dependency-file', '@DEPFILE@'] target_c = GResourceTarget(name, state.subdir, state.subproject, kwargs) @@ -1633,7 +1633,7 @@ def genmarshal(self, state, args, kwargs): raise MesonException(f'Genmarshal does not take a {arg} keyword argument.') install_header = kwargs.pop('install_header', False) - install_dir = kwargs.pop('install_dir', None) + install_dir = kwargs.pop('install_dir', []) custom_kwargs = { 'input': sources, @@ -1658,8 +1658,7 @@ def genmarshal(self, state, args, kwargs): body = build.CustomTarget(output + '_c', state.subdir, state.subproject, custom_kwargs) custom_kwargs['install'] = install_header - if install_dir is not None: - custom_kwargs['install_dir'] = install_dir + custom_kwargs['install_dir'] = install_dir if new_genmarshal: cmd += ['--pragma-once'] custom_kwargs['command'] = cmd + ['--header', '@INPUT@'] </patch>
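Two things are worth noting about the merged patch. `genmarshal` now pops `install_dir` with `[]` as its default and always forwards the key, so `process_kwargs` can index it unconditionally. And `compile_resources` tests the incoming file against `build.CustomTarget`, `build.CustomTargetIndex` and `build.GeneratedList` rather than the old `interpreter.*Holder` classes, consistent with `method_call` in the `interpreterbase.py` listing above, which unholders arguments before dispatching to module objects. A generic illustration of that second point, using hypothetical `Holder` and `GeneratedFile` classes rather than meson's own types:

```python
class Holder:
    """Wrapper an interpreter might pass around internally."""
    def __init__(self, held):
        self.held_object = held


class GeneratedFile:
    """Stand-in for a build-layer object such as a custom target output."""


value = GeneratedFile()   # module code now receives the held object itself

print(isinstance(value, Holder))         # False - a holder-based check never fires
print(isinstance(value, GeneratedFile))  # True  - test the build-layer class instead
```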
[]
[]
Qiskit__qiskit-10476
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Pulse's channel index validation fails ### Environment - **Qiskit Terra version**: 0.25.0 (main) - **Python version**: 3.9 - **Operating system**: Windows ### What is happening? Channel index validation doesn't produce a `PulseError` when the channel is either non-integer or negative. Only when both are true. ### How can we reproduce the issue? The following should produce a `PulseError`, but it doesn't: ``` from qiskit import pulse pulse.DriveChannel(0.5) pulse.DriveChannel(-1) ``` Because of [this line](https://github.com/Qiskit/qiskit-terra/blob/e55389f3f05e2d871fdea3814917c93b5c280e93/qiskit/pulse/channels.py#L124), only when the index is both not an integer *and* negative, an error is raised. This does raise a `PulseError`: ``` pulse.DriveChannel(-1.5) ``` ### What should happen? A `PulseError` should be raised if either of the conditions are met. ### Any suggestions? _No response_ </issue> <code> [start of README.md] 1 # Qiskit Terra 2 [![License](https://img.shields.io/github/license/Qiskit/qiskit-terra.svg?style=popout-square)](https://opensource.org/licenses/Apache-2.0)<!--- long-description-skip-begin -->[![Release](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases)[![Downloads](https://img.shields.io/pypi/dm/qiskit-terra.svg?style=popout-square)](https://pypi.org/project/qiskit-terra/)[![Coverage Status](https://coveralls.io/repos/github/Qiskit/qiskit-terra/badge.svg?branch=main)](https://coveralls.io/github/Qiskit/qiskit-terra?branch=main)[![Minimum rustc 1.61.0](https://img.shields.io/badge/rustc-1.61.0+-blue.svg)](https://rust-lang.github.io/rfcs/2495-min-rust-version.html)<!--- long-description-skip-end --> 3 4 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms. 5 6 This library is the core component of Qiskit, **Terra**, which contains the building blocks for creating 7 and working with quantum circuits, programs, and algorithms. It also contains a compiler that supports 8 different quantum computers and a common interface for running programs on different quantum computer architectures. 9 10 For more details on how to use Qiskit you can refer to the documentation located here: 11 12 https://qiskit.org/documentation/ 13 14 15 ## Installation 16 17 We encourage installing Qiskit via ``pip``. The following command installs the core Qiskit components, including Terra. 18 19 ```bash 20 pip install qiskit 21 ``` 22 23 Pip will handle all dependencies automatically and you will always install the latest (and well-tested) version. 24 25 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-install-from-source-label). 26 27 ## Creating Your First Quantum Program in Qiskit Terra 28 29 Now that Qiskit is installed, it's time to begin working with Qiskit. To do this 30 we create a `QuantumCircuit` object to define a basic quantum program. 31 32 ```python 33 from qiskit import QuantumCircuit 34 qc = QuantumCircuit(2, 2) 35 qc.h(0) 36 qc.cx(0, 1) 37 qc.measure([0,1], [0,1]) 38 ``` 39 40 This simple example makes an entangled state, also called a [Bell state](https://qiskit.org/textbook/ch-gates/multiple-qubits-entangled-states.html#3.2-Entangled-States-). 41 42 Once you've made your first quantum circuit, you can then simulate it. 
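Coming back to the channel-index issue above: the check on line 124 of `qiskit/pulse/channels.py` joins its two conditions with `and`, so an index is rejected only when it is simultaneously non-integer and negative. A minimal standalone sketch of the intended check (illustrative, and not necessarily the exact patch that closed the issue) simply joins the conditions with `or`; the real `Channel._validate_index` additionally handles `ParameterExpression` inputs, which is omitted here.

```python
import numpy as np

from qiskit.pulse.exceptions import PulseError


def validate_index(index) -> None:
    """Reject anything that is not a nonnegative integer."""
    if not isinstance(index, (int, np.integer)) or index < 0:
        raise PulseError("Channel index must be a nonnegative integer")


for bad in (0.5, -1, -1.5):
    try:
        validate_index(bad)
    except PulseError:
        print(f"{bad!r} correctly rejected")
```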
43 To do this, first we need to compile your circuit for the target backend we're going to run 44 on. In this case we are leveraging the built-in `BasicAer` simulator. However, this 45 simulator is primarily for testing and is limited in performance and functionality (as the name 46 implies). You should consider more sophisticated simulators, such as [`qiskit-aer`](https://github.com/Qiskit/qiskit-aer/), 47 for any real simulation work. 48 49 ```python 50 from qiskit import transpile 51 from qiskit.providers.basicaer import QasmSimulatorPy 52 backend_sim = QasmSimulatorPy() 53 transpiled_qc = transpile(qc, backend_sim) 54 ``` 55 56 After compiling the circuit we can then run this on the ``backend`` object with: 57 58 ```python 59 result = backend_sim.run(transpiled_qc).result() 60 print(result.get_counts(qc)) 61 ``` 62 63 The output from this execution will look similar to this: 64 65 ```python 66 {'00': 513, '11': 511} 67 ``` 68 69 For further examples of using Qiskit you can look at the example scripts in **examples/python**. You can start with 70 [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in the levels. Also 71 you can refer to the tutorials in the documentation here: 72 73 https://qiskit.org/documentation/tutorials.html 74 75 76 ### Executing your code on a real quantum chip 77 78 You can also use Qiskit to execute your code on a **real quantum processor**. 79 Qiskit provides an abstraction layer that lets users run quantum circuits on hardware from any 80 vendor that provides an interface to their systems through Qiskit. Using these ``providers`` you can run any Qiskit code against 81 real quantum computers. Some examples of published provider packages for running on real hardware are: 82 83 * https://github.com/Qiskit/qiskit-ibmq-provider 84 * https://github.com/Qiskit-Partners/qiskit-ionq 85 * https://github.com/Qiskit-Partners/qiskit-aqt-provider 86 * https://github.com/qiskit-community/qiskit-braket-provider 87 * https://github.com/qiskit-community/qiskit-quantinuum-provider 88 * https://github.com/rigetti/qiskit-rigetti 89 90 <!-- This is not an exhasutive list, and if you maintain a provider package please feel free to open a PR to add new providers --> 91 92 You can refer to the documentation of these packages for further instructions 93 on how to get access and use these systems. 94 95 ## Contribution Guidelines 96 97 If you'd like to contribute to Qiskit Terra, please take a look at our 98 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. 99 100 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please 101 [join the Qiskit Slack community](https://qisk.it/join-slack) 102 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions. 103 For questions that are more suited for a forum we use the `qiskit` tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit). 104 105 ## Next Steps 106 107 Now you're set up and ready to check out some of the other examples from our 108 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository. 109 110 ## Authors and Citation 111 112 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 113 to the project at different levels. 
If you use Qiskit, please cite as per the included [BibTeX file](CITATION.bib). 114 115 ## Changelog and Release Notes 116 117 The changelog for a particular release is dynamically generated and gets 118 written to the release page on Github for each release. For example, you can 119 find the page for the `0.9.0` release here: 120 121 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0 122 123 The changelog for the current release can be found in the releases tab: 124 [![Releases](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases) 125 The changelog provides a quick overview of notable changes for a given 126 release. 127 128 Additionally, as part of each release detailed release notes are written to 129 document in detail what has changed as part of a release. This includes any 130 documentation on potential breaking changes on upgrade and new features. 131 For example, you can find the release notes for the `0.9.0` release in the 132 Qiskit documentation here: 133 134 https://qiskit.org/documentation/release_notes.html#terra-0-9 135 136 ## License 137 138 [Apache License 2.0](LICENSE.txt) 139 [end of README.md] [start of qiskit/pulse/channels.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2017, 2019. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 12 13 """ 14 .. _pulse-channels: 15 16 ======================================= 17 Channels (:mod:`qiskit.pulse.channels`) 18 ======================================= 19 20 Pulse is meant to be agnostic to the underlying hardware implementation, while still allowing 21 low-level control. Therefore, our signal channels are *virtual* hardware channels. The backend 22 which executes our programs is responsible for mapping these virtual channels to the proper 23 physical channel within the quantum control hardware. 24 25 Channels are characterized by their type and their index. Channels include: 26 27 * transmit channels, which should subclass ``PulseChannel`` 28 * receive channels, such as :class:`AcquireChannel` 29 * non-signal "channels" such as :class:`SnapshotChannel`, :class:`MemorySlot` and 30 :class:`RegisterChannel`. 31 32 Novel channel types can often utilize the :class:`ControlChannel`, but if this is not sufficient, 33 new channel types can be created. Then, they must be supported in the PulseQobj schema and the 34 assembler. Channels are characterized by their type and their index. See each channel type below to 35 learn more. 36 37 .. autosummary:: 38 :toctree: ../stubs/ 39 40 DriveChannel 41 MeasureChannel 42 AcquireChannel 43 ControlChannel 44 RegisterSlot 45 MemorySlot 46 SnapshotChannel 47 48 All channels are children of the same abstract base class: 49 50 .. autoclass:: Channel 51 """ 52 from abc import ABCMeta 53 from typing import Any, Set, Union 54 55 import numpy as np 56 57 from qiskit.circuit.parameterexpression import ParameterExpression 58 from qiskit.pulse.exceptions import PulseError 59 60 61 class Channel(metaclass=ABCMeta): 62 """Base class of channels. 
Channels provide a Qiskit-side label for typical quantum control 63 hardware signal channels. The final label -> physical channel mapping is the responsibility 64 of the hardware backend. For instance, ``DriveChannel(0)`` holds instructions which the backend 65 should map to the signal line driving gate operations on the qubit labeled (indexed) 0. 66 67 When serialized channels are identified by their serialized name ``<prefix><index>``. 68 The type of the channel is interpreted from the prefix, 69 and the index often (but not always) maps to the qubit index. 70 All concrete channel classes must have a ``prefix`` class attribute 71 (and instances of that class have an index attribute). Base classes which have 72 ``prefix`` set to ``None`` are prevented from being instantiated. 73 74 To implement a new channel inherit from :class:`Channel` and provide a unique string identifier 75 for the ``prefix`` class attribute. 76 """ 77 78 prefix = None # type: Optional[str] 79 """A shorthand string prefix for characterizing the channel type.""" 80 81 # pylint: disable=unused-argument 82 def __new__(cls, *args, **kwargs): 83 if cls.prefix is None: 84 raise NotImplementedError( 85 "Cannot instantiate abstract channel. " 86 "See Channel documentation for more information." 87 ) 88 89 return super().__new__(cls) 90 91 def __init__(self, index: int): 92 """Channel class. 93 94 Args: 95 index: Index of channel. 96 """ 97 self._validate_index(index) 98 self._index = index 99 self._hash = hash((self.__class__.__name__, self._index)) 100 101 @property 102 def index(self) -> Union[int, ParameterExpression]: 103 """Return the index of this channel. The index is a label for a control signal line 104 typically mapped trivially to a qubit index. For instance, ``DriveChannel(0)`` labels 105 the signal line driving the qubit labeled with index 0. 106 """ 107 return self._index 108 109 def _validate_index(self, index: Any) -> None: 110 """Raise a PulseError if the channel index is invalid, namely, if it's not a positive 111 integer. 112 113 Raises: 114 PulseError: If ``index`` is not a nonnegative integer. 115 """ 116 if isinstance(index, ParameterExpression) and index.parameters: 117 # Parameters are unbound 118 return 119 elif isinstance(index, ParameterExpression): 120 index = float(index) 121 if index.is_integer(): 122 index = int(index) 123 124 if not isinstance(index, (int, np.integer)) and index < 0: 125 raise PulseError("Channel index must be a nonnegative integer") 126 127 @property 128 def parameters(self) -> Set: 129 """Parameters which determine the channel index.""" 130 if isinstance(self.index, ParameterExpression): 131 return self.index.parameters 132 return set() 133 134 def is_parameterized(self) -> bool: 135 """Return True iff the channel is parameterized.""" 136 return isinstance(self.index, ParameterExpression) 137 138 @property 139 def name(self) -> str: 140 """Return the shorthand alias for this channel, which is based on its type and index.""" 141 return f"{self.__class__.prefix}{self._index}" 142 143 def __repr__(self): 144 return f"{self.__class__.__name__}({self._index})" 145 146 def __eq__(self, other: "Channel") -> bool: 147 """Return True iff self and other are equal, specifically, iff they have the same type 148 and the same index. 149 150 Args: 151 other: The channel to compare to this channel. 152 153 Returns: 154 True iff equal. 
155 """ 156 return type(self) is type(other) and self._index == other._index 157 158 def __hash__(self): 159 return self._hash 160 161 162 class PulseChannel(Channel, metaclass=ABCMeta): 163 """Base class of transmit Channels. Pulses can be played on these channels.""" 164 165 pass 166 167 168 class ClassicalIOChannel(Channel, metaclass=ABCMeta): 169 """Base class of classical IO channels. These cannot have instructions scheduled on them.""" 170 171 pass 172 173 174 class DriveChannel(PulseChannel): 175 """Drive channels transmit signals to qubits which enact gate operations.""" 176 177 prefix = "d" 178 179 180 class MeasureChannel(PulseChannel): 181 """Measure channels transmit measurement stimulus pulses for readout.""" 182 183 prefix = "m" 184 185 186 class ControlChannel(PulseChannel): 187 """Control channels provide supplementary control over the qubit to the drive channel. 188 These are often associated with multi-qubit gate operations. They may not map trivially 189 to a particular qubit index. 190 """ 191 192 prefix = "u" 193 194 195 class AcquireChannel(Channel): 196 """Acquire channels are used to collect data.""" 197 198 prefix = "a" 199 200 201 class SnapshotChannel(ClassicalIOChannel): 202 """Snapshot channels are used to specify instructions for simulators.""" 203 204 prefix = "s" 205 206 def __init__(self): 207 """Create new snapshot channel.""" 208 super().__init__(0) 209 210 211 class MemorySlot(ClassicalIOChannel): 212 """Memory slot channels represent classical memory storage.""" 213 214 prefix = "m" 215 216 217 class RegisterSlot(ClassicalIOChannel): 218 """Classical resister slot channels represent classical registers (low-latency classical 219 memory). 220 """ 221 222 prefix = "c" 223 [end of qiskit/pulse/channels.py] [start of qiskit/pulse/parameter_manager.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2021. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 12 13 # pylint: disable=invalid-name 14 15 """"Management of pulse program parameters. 16 17 Background 18 ========== 19 20 In contrast to ``QuantumCircuit``, in pulse programs, parameter objects can be stored in 21 multiple places at different layers, for example 22 23 - program variables: ``ScheduleBlock.alignment_context._context_params`` 24 25 - instruction operands: ``ShiftPhase.phase``, ... 26 27 - operand parameters: ``pulse.parameters``, ``channel.index`` ... 28 29 This complexity is due to the tight coupling of the program to an underlying device Hamiltonian, 30 i.e. the variance of physical parameters between qubits and their couplings. 31 If we want to define a program that can be used with arbitrary qubits, 32 we should be able to parametrize every control parameter in the program. 33 34 Implementation 35 ============== 36 37 Managing parameters in each object within a program, i.e. the ``ParameterTable`` model, 38 makes the framework quite complicated. With the ``ParameterManager`` class within this module, 39 the parameter assignment operation is performed by a visitor instance. 40 41 The visitor pattern is a way of separating data processing from the object on which it operates. 
42 This removes the overhead of parameter management from each piece of the program. 43 The computational complexity of the parameter assignment operation may be increased 44 from the parameter table model of ~O(1), however, usually, this calculation occurs 45 only once before the program is executed. Thus this doesn't hurt user experience during 46 pulse programming. On the contrary, it removes parameter table object and associated logic 47 from each object, yielding smaller object creation cost and higher performance 48 as the data amount scales. 49 50 Note that we don't need to write any parameter management logic for each object, 51 and thus this parameter framework gives greater scalability to the pulse module. 52 """ 53 from copy import copy 54 from typing import List, Dict, Set, Any, Union 55 56 from qiskit.circuit.parameter import Parameter 57 from qiskit.circuit.parameterexpression import ParameterExpression, ParameterValueType 58 from qiskit.pulse import instructions, channels 59 from qiskit.pulse.exceptions import PulseError 60 from qiskit.pulse.library import ParametricPulse, SymbolicPulse, Waveform 61 from qiskit.pulse.schedule import Schedule, ScheduleBlock 62 from qiskit.pulse.transforms.alignments import AlignmentKind 63 from qiskit.pulse.utils import format_parameter_value 64 65 66 class NodeVisitor: 67 """A node visitor base class that walks instruction data in a pulse program and calls 68 visitor functions for every node. 69 70 Though this class implementation is based on Python AST, each node doesn't have 71 a dedicated node class due to the lack of an abstract syntax tree for pulse programs in 72 Qiskit. Instead of parsing pulse programs, this visitor class finds the associated visitor 73 function based on class name of the instruction node, i.e. ``Play``, ``Call``, etc... 74 The `.visit` method recursively checks superclass of given node since some parametrized 75 components such as ``DriveChannel`` may share a common superclass with other subclasses. 76 In this example, we can just define ``visit_Channel`` method instead of defining 77 the same visitor function for every subclasses. 78 79 Some instructions may have special logic or data structure to store parameter objects, 80 and visitor functions for these nodes should be individually defined. 81 82 Because pulse programs can be nested into another pulse program, 83 the visitor function should be able to recursively call proper visitor functions. 84 If visitor function is not defined for a given node, ``generic_visit`` 85 method is called. Usually, this method is provided for operating on object defined 86 outside of the Qiskit Pulse module. 87 """ 88 89 def visit(self, node: Any): 90 """Visit a node.""" 91 visitor = self._get_visitor(type(node)) 92 return visitor(node) 93 94 def _get_visitor(self, node_class): 95 """A helper function to recursively investigate superclass visitor method.""" 96 if node_class == object: 97 return self.generic_visit 98 99 try: 100 return getattr(self, f"visit_{node_class.__name__}") 101 except AttributeError: 102 # check super class 103 return self._get_visitor(node_class.__base__) 104 105 def visit_ScheduleBlock(self, node: ScheduleBlock): 106 """Visit ``ScheduleBlock``. Recursively visit context blocks and overwrite. 107 108 .. note:: ``ScheduleBlock`` can have parameters in blocks and its alignment. 109 """ 110 raise NotImplementedError 111 112 def visit_Schedule(self, node: Schedule): 113 """Visit ``Schedule``. 
Recursively visit schedule children and overwrite.""" 114 raise NotImplementedError 115 116 def generic_visit(self, node: Any): 117 """Called if no explicit visitor function exists for a node.""" 118 raise NotImplementedError 119 120 121 class ParameterSetter(NodeVisitor): 122 """Node visitor for parameter binding. 123 124 This visitor is initialized with a dictionary of parameters to be assigned, 125 and assign values to operands of nodes found. 126 """ 127 128 def __init__(self, param_map: Dict[ParameterExpression, ParameterValueType]): 129 self._param_map = param_map 130 131 # Top layer: Assign parameters to programs 132 133 def visit_ScheduleBlock(self, node: ScheduleBlock): 134 """Visit ``ScheduleBlock``. Recursively visit context blocks and overwrite. 135 136 .. note:: ``ScheduleBlock`` can have parameters in blocks and its alignment. 137 """ 138 node._alignment_context = self.visit_AlignmentKind(node.alignment_context) 139 for elm in node._blocks: 140 self.visit(elm) 141 142 self._update_parameter_manager(node) 143 return node 144 145 def visit_Schedule(self, node: Schedule): 146 """Visit ``Schedule``. Recursively visit schedule children and overwrite.""" 147 # accessing to private member 148 # TODO: consider updating Schedule to handle this more gracefully 149 node._Schedule__children = [(t0, self.visit(sched)) for t0, sched in node.instructions] 150 node._renew_timeslots() 151 152 self._update_parameter_manager(node) 153 return node 154 155 def visit_AlignmentKind(self, node: AlignmentKind): 156 """Assign parameters to block's ``AlignmentKind`` specification.""" 157 new_parameters = tuple(self.visit(param) for param in node._context_params) 158 node._context_params = new_parameters 159 160 return node 161 162 # Mid layer: Assign parameters to instructions 163 164 def visit_Call(self, node: instructions.Call): 165 """Assign parameters to ``Call`` instruction. 166 167 .. note:: ``Call`` instruction has a special parameter handling logic. 168 This instruction separately keeps program, i.e. parametrized schedule, 169 and bound parameters until execution. The parameter assignment operation doesn't 170 immediately override its operand data. 171 """ 172 if node.is_parameterized(): 173 new_table = copy(node.arguments) 174 175 for parameter, value in new_table.items(): 176 if isinstance(value, ParameterExpression): 177 new_table[parameter] = self._assign_parameter_expression(value) 178 node.arguments = new_table 179 180 return node 181 182 def visit_Instruction(self, node: instructions.Instruction): 183 """Assign parameters to general pulse instruction. 184 185 .. note:: All parametrized object should be stored in the operands. 186 Otherwise parameter cannot be detected. 
187 """ 188 if node.is_parameterized(): 189 node._operands = tuple(self.visit(op) for op in node.operands) 190 191 return node 192 193 # Lower layer: Assign parameters to operands 194 195 def visit_Channel(self, node: channels.Channel): 196 """Assign parameters to ``Channel`` object.""" 197 if node.is_parameterized(): 198 new_index = self._assign_parameter_expression(node.index) 199 200 # validate 201 if not isinstance(new_index, ParameterExpression): 202 if not isinstance(new_index, int) or new_index < 0: 203 raise PulseError("Channel index must be a nonnegative integer") 204 205 # return new instance to prevent accidentally override timeslots without evaluation 206 return node.__class__(index=new_index) 207 208 return node 209 210 def visit_ParametricPulse(self, node: ParametricPulse): 211 """Assign parameters to ``ParametricPulse`` object.""" 212 if node.is_parameterized(): 213 new_parameters = {} 214 for op, op_value in node.parameters.items(): 215 if isinstance(op_value, ParameterExpression): 216 op_value = self._assign_parameter_expression(op_value) 217 new_parameters[op] = op_value 218 219 return node.__class__(**new_parameters, name=node.name) 220 221 return node 222 223 def visit_SymbolicPulse(self, node: SymbolicPulse): 224 """Assign parameters to ``SymbolicPulse`` object.""" 225 if node.is_parameterized(): 226 # Assign duration 227 if isinstance(node.duration, ParameterExpression): 228 node.duration = self._assign_parameter_expression(node.duration) 229 # Assign other parameters 230 for name in node._params: 231 pval = node._params[name] 232 if isinstance(pval, ParameterExpression): 233 new_val = self._assign_parameter_expression(pval) 234 node._params[name] = new_val 235 node.validate_parameters() 236 237 return node 238 239 def visit_Waveform(self, node: Waveform): 240 """Assign parameters to ``Waveform`` object. 241 242 .. node:: No parameter can be assigned to ``Waveform`` object. 243 """ 244 return node 245 246 def generic_visit(self, node: Any): 247 """Assign parameters to object that doesn't belong to Qiskit Pulse module.""" 248 if isinstance(node, ParameterExpression): 249 return self._assign_parameter_expression(node) 250 else: 251 return node 252 253 def _assign_parameter_expression(self, param_expr: ParameterExpression): 254 """A helper function to assign parameter value to parameter expression.""" 255 new_value = copy(param_expr) 256 updated = param_expr.parameters & self._param_map.keys() 257 for param in updated: 258 new_value = new_value.assign(param, self._param_map[param]) 259 new_value = format_parameter_value(new_value) 260 return new_value 261 262 def _update_parameter_manager(self, node: Union[Schedule, ScheduleBlock]): 263 """A helper function to update parameter manager of pulse program.""" 264 if not hasattr(node, "_parameter_manager"): 265 raise PulseError(f"Node type {node.__class__.__name__} has no parameter manager.") 266 267 param_manager = node._parameter_manager 268 updated = param_manager.parameters & self._param_map.keys() 269 270 new_parameters = set() 271 for param in param_manager.parameters: 272 if param not in updated: 273 new_parameters.add(param) 274 continue 275 new_value = self._param_map[param] 276 if isinstance(new_value, ParameterExpression): 277 new_parameters |= new_value.parameters 278 param_manager._parameters = new_parameters 279 280 281 class ParameterGetter(NodeVisitor): 282 """Node visitor for parameter finding. 
283 284 This visitor initializes empty parameter array, and recursively visits nodes 285 and add parameters found to the array. 286 """ 287 288 def __init__(self): 289 self.parameters = set() 290 291 # Top layer: Get parameters from programs 292 293 def visit_ScheduleBlock(self, node: ScheduleBlock): 294 """Visit ``ScheduleBlock``. Recursively visit context blocks and search parameters. 295 296 .. note:: ``ScheduleBlock`` can have parameters in blocks and its alignment. 297 """ 298 # Note that node.parameters returns parameters of main program with subroutines. 299 # The manager of main program is not aware of parameters in subroutines. 300 self.parameters |= node._parameter_manager.parameters 301 302 def visit_Schedule(self, node: Schedule): 303 """Visit ``Schedule``. Recursively visit schedule children and search parameters.""" 304 self.parameters |= node.parameters 305 306 def visit_AlignmentKind(self, node: AlignmentKind): 307 """Get parameters from block's ``AlignmentKind`` specification.""" 308 for param in node._context_params: 309 if isinstance(param, ParameterExpression): 310 self.parameters |= param.parameters 311 312 # Mid layer: Get parameters from instructions 313 314 def visit_Call(self, node: instructions.Call): 315 """Get parameters from ``Call`` instruction. 316 317 .. note:: ``Call`` instruction has a special parameter handling logic. 318 This instruction separately keeps parameters and program. 319 """ 320 self.parameters |= node.parameters 321 322 def visit_Instruction(self, node: instructions.Instruction): 323 """Get parameters from general pulse instruction. 324 325 .. note:: All parametrized object should be stored in the operands. 326 Otherwise, parameter cannot be detected. 327 """ 328 for op in node.operands: 329 self.visit(op) 330 331 # Lower layer: Get parameters from operands 332 333 def visit_Channel(self, node: channels.Channel): 334 """Get parameters from ``Channel`` object.""" 335 self.parameters |= node.parameters 336 337 def visit_ParametricPulse(self, node: ParametricPulse): 338 """Get parameters from ``ParametricPulse`` object.""" 339 for op_value in node.parameters.values(): 340 if isinstance(op_value, ParameterExpression): 341 self.parameters |= op_value.parameters 342 343 def visit_SymbolicPulse(self, node: SymbolicPulse): 344 """Get parameters from ``SymbolicPulse`` object.""" 345 for op_value in node.parameters.values(): 346 if isinstance(op_value, ParameterExpression): 347 self.parameters |= op_value.parameters 348 349 def visit_Waveform(self, node: Waveform): 350 """Get parameters from ``Waveform`` object. 351 352 .. node:: No parameter can be assigned to ``Waveform`` object. 353 """ 354 pass 355 356 def generic_visit(self, node: Any): 357 """Get parameters from object that doesn't belong to Qiskit Pulse module.""" 358 if isinstance(node, ParameterExpression): 359 self.parameters |= node.parameters 360 361 362 class ParameterManager: 363 """Helper class to manage parameter objects associated with arbitrary pulse programs. 364 365 This object is implicitly initialized with the parameter object storage 366 that stores parameter objects added to the parent pulse program. 367 368 Parameter assignment logic is implemented based on the visitor pattern. 369 Instruction data and its location are not directly associated with this object. 
370 """ 371 372 def __init__(self): 373 """Create new parameter table for pulse programs.""" 374 self._parameters = set() 375 376 @property 377 def parameters(self) -> Set[Parameter]: 378 """Parameters which determine the schedule behavior.""" 379 return self._parameters 380 381 def clear(self): 382 """Remove the parameters linked to this manager.""" 383 self._parameters.clear() 384 385 def is_parameterized(self) -> bool: 386 """Return True iff the instruction is parameterized.""" 387 return bool(self.parameters) 388 389 def get_parameters(self, parameter_name: str) -> List[Parameter]: 390 """Get parameter object bound to this schedule by string name. 391 392 Because different ``Parameter`` objects can have the same name, 393 this method returns a list of ``Parameter`` s for the provided name. 394 395 Args: 396 parameter_name: Name of parameter. 397 398 Returns: 399 Parameter objects that have corresponding name. 400 """ 401 return [param for param in self.parameters if param.name == parameter_name] 402 403 def assign_parameters( 404 self, 405 pulse_program: Any, 406 value_dict: Dict[ParameterExpression, ParameterValueType], 407 ) -> Any: 408 """Modify and return program data with parameters assigned according to the input. 409 410 Args: 411 pulse_program: Arbitrary pulse program associated with this manager instance. 412 value_dict: A mapping from Parameters to either numeric values or another 413 Parameter expression. 414 415 Returns: 416 Updated program data. 417 """ 418 valid_map = {k: value_dict[k] for k in value_dict.keys() & self._parameters} 419 if valid_map: 420 visitor = ParameterSetter(param_map=valid_map) 421 return visitor.visit(pulse_program) 422 return pulse_program 423 424 def update_parameter_table(self, new_node: Any): 425 """A helper function to update parameter table with given data node. 426 427 Args: 428 new_node: A new data node to be added. 429 """ 430 visitor = ParameterGetter() 431 visitor.visit(new_node) 432 self._parameters |= visitor.parameters 433 [end of qiskit/pulse/parameter_manager.py] [start of qiskit/pulse/transforms/canonicalization.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2021. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 12 """Basic rescheduling functions which take schedule or instructions and return new schedules.""" 13 14 import warnings 15 from collections import defaultdict 16 from typing import List, Optional, Iterable, Union, Type 17 18 import numpy as np 19 20 from qiskit.pulse import channels as chans, exceptions, instructions 21 from qiskit.pulse.channels import ClassicalIOChannel 22 from qiskit.pulse.exceptions import PulseError 23 from qiskit.pulse.exceptions import UnassignedDurationError 24 from qiskit.pulse.instruction_schedule_map import InstructionScheduleMap 25 from qiskit.pulse.instructions import directives 26 from qiskit.pulse.schedule import Schedule, ScheduleBlock, ScheduleComponent 27 28 29 def block_to_schedule(block: ScheduleBlock) -> Schedule: 30 """Convert ``ScheduleBlock`` to ``Schedule``. 31 32 Args: 33 block: A ``ScheduleBlock`` to convert. 34 35 Returns: 36 Scheduled pulse program. 
37 38 Raises: 39 UnassignedDurationError: When any instruction duration is not assigned. 40 PulseError: When the alignment context duration is shorter than the schedule duration. 41 42 .. note:: This transform may insert barriers in between contexts. 43 """ 44 if not block.is_schedulable(): 45 raise UnassignedDurationError( 46 "All instruction durations should be assigned before creating `Schedule`." 47 "Please check `.parameters` to find unassigned parameter objects." 48 ) 49 50 schedule = Schedule.initialize_from(block) 51 52 for op_data in block.blocks: 53 if isinstance(op_data, ScheduleBlock): 54 context_schedule = block_to_schedule(op_data) 55 if hasattr(op_data.alignment_context, "duration"): 56 # context may have local scope duration, e.g. EquispacedAlignment for 1000 dt 57 post_buffer = op_data.alignment_context.duration - context_schedule.duration 58 if post_buffer < 0: 59 raise PulseError( 60 f"ScheduleBlock {op_data.name} has longer duration than " 61 "the specified context duration " 62 f"{context_schedule.duration} > {op_data.duration}." 63 ) 64 else: 65 post_buffer = 0 66 schedule.append(context_schedule, inplace=True) 67 68 # prevent interruption by following instructions. 69 # padding with delay instructions is no longer necessary, thanks to alignment context. 70 if post_buffer > 0: 71 context_boundary = instructions.RelativeBarrier(*op_data.channels) 72 schedule.append(context_boundary.shift(post_buffer), inplace=True) 73 else: 74 schedule.append(op_data, inplace=True) 75 76 # transform with defined policy 77 return block.alignment_context.align(schedule) 78 79 80 def compress_pulses(schedules: List[Schedule]) -> List[Schedule]: 81 """Optimization pass to replace identical pulses. 82 83 Args: 84 schedules: Schedules to compress. 85 86 Returns: 87 Compressed schedules. 88 """ 89 existing_pulses = [] 90 new_schedules = [] 91 92 for schedule in schedules: 93 new_schedule = Schedule.initialize_from(schedule) 94 95 for time, inst in schedule.instructions: 96 if isinstance(inst, instructions.Play): 97 if inst.pulse in existing_pulses: 98 idx = existing_pulses.index(inst.pulse) 99 identical_pulse = existing_pulses[idx] 100 new_schedule.insert( 101 time, 102 instructions.Play(identical_pulse, inst.channel, inst.name), 103 inplace=True, 104 ) 105 else: 106 existing_pulses.append(inst.pulse) 107 new_schedule.insert(time, inst, inplace=True) 108 else: 109 new_schedule.insert(time, inst, inplace=True) 110 111 new_schedules.append(new_schedule) 112 113 return new_schedules 114 115 116 def flatten(program: Schedule) -> Schedule: 117 """Flatten (inline) any called nodes into a Schedule tree with no nested children. 118 119 Args: 120 program: Pulse program to remove nested structure. 121 122 Returns: 123 Flatten pulse program. 124 125 Raises: 126 PulseError: When invalid data format is given. 127 """ 128 if isinstance(program, Schedule): 129 flat_sched = Schedule.initialize_from(program) 130 for time, inst in program.instructions: 131 flat_sched.insert(time, inst, inplace=True) 132 return flat_sched 133 else: 134 raise PulseError(f"Invalid input program {program.__class__.__name__} is specified.") 135 136 137 def inline_subroutines(program: Union[Schedule, ScheduleBlock]) -> Union[Schedule, ScheduleBlock]: 138 """Recursively remove call instructions and inline the respective subroutine instructions. 139 140 Assigned parameter values, which are stored in the parameter table, are also applied. 141 The subroutine is copied before the parameter assignment to avoid mutation problem. 
142 143 Args: 144 program: A program which may contain the subroutine, i.e. ``Call`` instruction. 145 146 Returns: 147 A schedule without subroutine. 148 149 Raises: 150 PulseError: When input program is not valid data format. 151 """ 152 if isinstance(program, Schedule): 153 return _inline_schedule(program) 154 elif isinstance(program, ScheduleBlock): 155 return _inline_block(program) 156 else: 157 raise PulseError(f"Invalid program {program.__class__.__name__} is specified.") 158 159 160 def _inline_schedule(schedule: Schedule) -> Schedule: 161 """A helper function to inline subroutine of schedule. 162 163 .. note:: If subroutine is ``ScheduleBlock`` it is converted into Schedule to get ``t0``. 164 """ 165 ret_schedule = Schedule.initialize_from(schedule) 166 for t0, inst in schedule.children: 167 # note that schedule.instructions unintentionally flatten the nested schedule. 168 # this should be performed by another transformer node. 169 if isinstance(inst, instructions.Call): 170 # bind parameter 171 subroutine = inst.assigned_subroutine() 172 # convert into schedule if block is given 173 if isinstance(subroutine, ScheduleBlock): 174 subroutine = block_to_schedule(subroutine) 175 # recursively inline the program 176 inline_schedule = _inline_schedule(subroutine) 177 ret_schedule.insert(t0, inline_schedule, inplace=True) 178 elif isinstance(inst, Schedule): 179 # recursively inline the program 180 inline_schedule = _inline_schedule(inst) 181 ret_schedule.insert(t0, inline_schedule, inplace=True) 182 else: 183 ret_schedule.insert(t0, inst, inplace=True) 184 return ret_schedule 185 186 187 def _inline_block(block: ScheduleBlock) -> ScheduleBlock: 188 """A helper function to inline subroutine of schedule block. 189 190 .. note:: If subroutine is ``Schedule`` the function raises an error. 191 """ 192 ret_block = ScheduleBlock.initialize_from(block) 193 for inst in block.blocks: 194 if isinstance(inst, instructions.Call): 195 # bind parameter 196 subroutine = inst.assigned_subroutine() 197 if isinstance(subroutine, Schedule): 198 raise PulseError( 199 f"A subroutine {subroutine.name} is a pulse Schedule. " 200 "This program cannot be inserted into ScheduleBlock because " 201 "t0 associated with instruction will be lost." 202 ) 203 # recursively inline the program 204 inline_block = _inline_block(subroutine) 205 ret_block.append(inline_block, inplace=True) 206 elif isinstance(inst, ScheduleBlock): 207 # recursively inline the program 208 inline_block = _inline_block(inst) 209 ret_block.append(inline_block, inplace=True) 210 else: 211 ret_block.append(inst, inplace=True) 212 return ret_block 213 214 215 def remove_directives(schedule: Schedule) -> Schedule: 216 """Remove directives. 217 218 Args: 219 schedule: A schedule to remove compiler directives. 220 221 Returns: 222 A schedule without directives. 223 """ 224 return schedule.exclude(instruction_types=[directives.Directive]) 225 226 227 def remove_trivial_barriers(schedule: Schedule) -> Schedule: 228 """Remove trivial barriers with 0 or 1 channels. 229 230 Args: 231 schedule: A schedule to remove trivial barriers. 
232 233 Returns: 234 schedule: A schedule without trivial barriers 235 """ 236 237 def filter_func(inst): 238 return isinstance(inst[1], directives.RelativeBarrier) and len(inst[1].channels) < 2 239 240 return schedule.exclude(filter_func) 241 242 243 def align_measures( 244 schedules: Iterable[ScheduleComponent], 245 inst_map: Optional[InstructionScheduleMap] = None, 246 cal_gate: str = "u3", 247 max_calibration_duration: Optional[int] = None, 248 align_time: Optional[int] = None, 249 align_all: Optional[bool] = True, 250 ) -> List[Schedule]: 251 """Return new schedules where measurements occur at the same physical time. 252 253 This transformation will align the first :class:`.Acquire` on 254 every channel to occur at the same time. 255 256 Minimum measurement wait time (to allow for calibration pulses) is enforced 257 and may be set with ``max_calibration_duration``. 258 259 By default only instructions containing a :class:`.AcquireChannel` or :class:`.MeasureChannel` 260 will be shifted. If you wish to keep the relative timing of all instructions in the schedule set 261 ``align_all=True``. 262 263 This method assumes that ``MeasureChannel(i)`` and ``AcquireChannel(i)`` 264 correspond to the same qubit and the acquire/play instructions 265 should be shifted together on these channels. 266 267 .. code-block:: 268 269 from qiskit import pulse 270 from qiskit.pulse import transforms 271 272 d0 = pulse.DriveChannel(0) 273 m0 = pulse.MeasureChannel(0) 274 a0 = pulse.AcquireChannel(0) 275 mem0 = pulse.MemorySlot(0) 276 277 sched = pulse.Schedule() 278 sched.append(pulse.Play(pulse.Constant(10, 0.5), d0), inplace=True) 279 sched.append(pulse.Play(pulse.Constant(10, 1.), m0).shift(sched.duration), inplace=True) 280 sched.append(pulse.Acquire(20, a0, mem0).shift(sched.duration), inplace=True) 281 282 sched_shifted = sched << 20 283 284 aligned_sched, aligned_sched_shifted = transforms.align_measures([sched, sched_shifted]) 285 286 assert aligned_sched == aligned_sched_shifted 287 288 If it is desired to only shift acquisition and measurement stimulus instructions 289 set the flag ``align_all=False``: 290 291 .. code-block:: 292 293 aligned_sched, aligned_sched_shifted = transforms.align_measures( 294 [sched, sched_shifted], 295 align_all=False, 296 ) 297 298 assert aligned_sched != aligned_sched_shifted 299 300 301 Args: 302 schedules: Collection of schedules to be aligned together 303 inst_map: Mapping of circuit operations to pulse schedules 304 cal_gate: The name of the gate to inspect for the calibration time 305 max_calibration_duration: If provided, inst_map and cal_gate will be ignored 306 align_time: If provided, this will be used as final align time. 307 align_all: Shift all instructions in the schedule such that they maintain 308 their relative alignment with the shifted acquisition instruction. 309 If ``False`` only the acquisition and measurement pulse instructions 310 will be shifted. 311 Returns: 312 The input list of schedules transformed to have their measurements aligned. 313 314 Raises: 315 PulseError: If the provided alignment time is negative. 
316 """ 317 318 def get_first_acquire_times(schedules): 319 """Return a list of first acquire times for each schedule.""" 320 acquire_times = [] 321 for schedule in schedules: 322 visited_channels = set() 323 qubit_first_acquire_times = defaultdict(lambda: None) 324 325 for time, inst in schedule.instructions: 326 if isinstance(inst, instructions.Acquire) and inst.channel not in visited_channels: 327 visited_channels.add(inst.channel) 328 qubit_first_acquire_times[inst.channel.index] = time 329 330 acquire_times.append(qubit_first_acquire_times) 331 return acquire_times 332 333 def get_max_calibration_duration(inst_map, cal_gate): 334 """Return the time needed to allow for readout discrimination calibration pulses.""" 335 # TODO (qiskit-terra #5472): fix behavior of this. 336 max_calibration_duration = 0 337 for qubits in inst_map.qubits_with_instruction(cal_gate): 338 cmd = inst_map.get(cal_gate, qubits, np.pi, 0, np.pi) 339 max_calibration_duration = max(cmd.duration, max_calibration_duration) 340 return max_calibration_duration 341 342 if align_time is not None and align_time < 0: 343 raise exceptions.PulseError("Align time cannot be negative.") 344 345 first_acquire_times = get_first_acquire_times(schedules) 346 # Extract the maximum acquire in every schedule across all acquires in the schedule. 347 # If there are no acquires in the schedule default to 0. 348 max_acquire_times = [max(0, *times.values()) for times in first_acquire_times] 349 if align_time is None: 350 if max_calibration_duration is None: 351 if inst_map: 352 max_calibration_duration = get_max_calibration_duration(inst_map, cal_gate) 353 else: 354 max_calibration_duration = 0 355 align_time = max(max_calibration_duration, *max_acquire_times) 356 357 # Shift acquires according to the new scheduled time 358 new_schedules = [] 359 for sched_idx, schedule in enumerate(schedules): 360 new_schedule = Schedule.initialize_from(schedule) 361 stop_time = schedule.stop_time 362 363 if align_all: 364 if first_acquire_times[sched_idx]: 365 shift = align_time - max_acquire_times[sched_idx] 366 else: 367 shift = align_time - stop_time 368 else: 369 shift = 0 370 371 for time, inst in schedule.instructions: 372 measurement_channels = { 373 chan.index 374 for chan in inst.channels 375 if isinstance(chan, (chans.MeasureChannel, chans.AcquireChannel)) 376 } 377 if measurement_channels: 378 sched_first_acquire_times = first_acquire_times[sched_idx] 379 max_start_time = max( 380 sched_first_acquire_times[chan] 381 for chan in measurement_channels 382 if chan in sched_first_acquire_times 383 ) 384 shift = align_time - max_start_time 385 386 if shift < 0: 387 warnings.warn( 388 "The provided alignment time is scheduling an acquire instruction " 389 "earlier than it was scheduled for in the original Schedule. " 390 "This may result in an instruction being scheduled before t=0 and " 391 "an error being raised." 392 ) 393 new_schedule.insert(time + shift, inst, inplace=True) 394 395 new_schedules.append(new_schedule) 396 397 return new_schedules 398 399 400 def add_implicit_acquires(schedule: ScheduleComponent, meas_map: List[List[int]]) -> Schedule: 401 """Return a new schedule with implicit acquires from the measurement mapping replaced by 402 explicit ones. 403 404 .. warning:: Since new acquires are being added, Memory Slots will be set to match the 405 qubit index. This may overwrite your specification. 406 407 Args: 408 schedule: Schedule to be aligned. 409 meas_map: List of lists of qubits that are measured together. 
410 411 Returns: 412 A ``Schedule`` with the additional acquisition instructions. 413 """ 414 new_schedule = Schedule.initialize_from(schedule) 415 acquire_map = {} 416 417 for time, inst in schedule.instructions: 418 if isinstance(inst, instructions.Acquire): 419 if inst.mem_slot and inst.mem_slot.index != inst.channel.index: 420 warnings.warn( 421 "One of your acquires was mapped to a memory slot which didn't match" 422 " the qubit index. I'm relabeling them to match." 423 ) 424 425 # Get the label of all qubits that are measured with the qubit(s) in this instruction 426 all_qubits = [] 427 for sublist in meas_map: 428 if inst.channel.index in sublist: 429 all_qubits.extend(sublist) 430 # Replace the old acquire instruction by a new one explicitly acquiring all qubits in 431 # the measurement group. 432 for i in all_qubits: 433 explicit_inst = instructions.Acquire( 434 inst.duration, 435 chans.AcquireChannel(i), 436 mem_slot=chans.MemorySlot(i), 437 kernel=inst.kernel, 438 discriminator=inst.discriminator, 439 ) 440 if time not in acquire_map: 441 new_schedule.insert(time, explicit_inst, inplace=True) 442 acquire_map = {time: {i}} 443 elif i not in acquire_map[time]: 444 new_schedule.insert(time, explicit_inst, inplace=True) 445 acquire_map[time].add(i) 446 else: 447 new_schedule.insert(time, inst, inplace=True) 448 449 return new_schedule 450 451 452 def pad( 453 schedule: Schedule, 454 channels: Optional[Iterable[chans.Channel]] = None, 455 until: Optional[int] = None, 456 inplace: bool = False, 457 pad_with: Optional[Type[instructions.Instruction]] = None, 458 ) -> Schedule: 459 """Pad the input Schedule with ``Delay``s on all unoccupied timeslots until 460 ``schedule.duration`` or ``until`` if not ``None``. 461 462 Args: 463 schedule: Schedule to pad. 464 channels: Channels to pad. Defaults to all channels in 465 ``schedule`` if not provided. If the supplied channel is not a member 466 of ``schedule`` it will be added. 467 until: Time to pad until. Defaults to ``schedule.duration`` if not provided. 468 inplace: Pad this schedule by mutating rather than returning a new schedule. 469 pad_with: Pulse ``Instruction`` subclass to be used for padding. 470 Default to :class:`~qiskit.pulse.instructions.Delay` instruction. 471 472 Returns: 473 The padded schedule. 474 475 Raises: 476 PulseError: When non pulse instruction is set to `pad_with`. 477 """ 478 until = until or schedule.duration 479 channels = channels or schedule.channels 480 481 if pad_with: 482 if issubclass(pad_with, instructions.Instruction): 483 pad_cls = pad_with 484 else: 485 raise PulseError( 486 f"'{pad_with.__class__.__name__}' is not valid pulse instruction to pad with." 
487 ) 488 else: 489 pad_cls = instructions.Delay 490 491 for channel in channels: 492 if isinstance(channel, ClassicalIOChannel): 493 continue 494 495 if channel not in schedule.channels: 496 schedule = schedule.insert(0, instructions.Delay(until, channel), inplace=inplace) 497 continue 498 499 prev_time = 0 500 timeslots = iter(schedule.timeslots[channel]) 501 to_pad = [] 502 while prev_time < until: 503 try: 504 t0, t1 = next(timeslots) 505 except StopIteration: 506 to_pad.append((prev_time, until - prev_time)) 507 break 508 if prev_time < t0: 509 to_pad.append((prev_time, min(t0, until) - prev_time)) 510 prev_time = t1 511 for t0, duration in to_pad: 512 schedule = schedule.insert(t0, pad_cls(duration, channel), inplace=inplace) 513 514 return schedule 515 [end of qiskit/pulse/transforms/canonicalization.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Qiskit/qiskit
2710765293bd4aaf92e377c313cfc359096c799a
Pulse's channel index validation fails ### Environment - **Qiskit Terra version**: 0.25.0 (main) - **Python version**: 3.9 - **Operating system**: Windows ### What is happening? Channel index validation doesn't produce a `PulseError` when the channel is either non-integer or negative. Only when both are true. ### How can we reproduce the issue? The following should produce a `PulseError`, but it doesn't: ``` from qiskit import pulse pulse.DriveChannel(0.5) pulse.DriveChannel(-1) ``` Because of [this line](https://github.com/Qiskit/qiskit-terra/blob/e55389f3f05e2d871fdea3814917c93b5c280e93/qiskit/pulse/channels.py#L124), only when the index is both not an integer *and* negative, an error is raised. This does raise a `PulseError`: ``` pulse.DriveChannel(-1.5) ``` ### What should happen? A `PulseError` should be raised if either of the conditions are met. ### Any suggestions? _No response_
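The root cause described above is plain boolean logic. A minimal standalone sketch (illustration only, not the Qiskit source; the names `buggy_raises` and `fixed_raises` are introduced here) shows why the current `and` check only fires when the index is both non-integer and negative:

```python
# Evaluate the two variants of the validation condition on the indices from the report.
for index in (0.5, -1, -1.5):
    buggy_raises = not isinstance(index, int) and index < 0   # current check
    fixed_raises = not isinstance(index, int) or index < 0    # check the report asks for
    print(index, "buggy:", buggy_raises, "fixed:", fixed_raises)

# 0.5  -> buggy: False, fixed: True   (non-integer, but not negative)
# -1   -> buggy: False, fixed: True   (negative, but an integer)
# -1.5 -> buggy: True,  fixed: True   (only case the current check rejects)
```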
2023-07-22T08:26:26Z
<patch> diff --git a/qiskit/pulse/channels.py b/qiskit/pulse/channels.py --- a/qiskit/pulse/channels.py +++ b/qiskit/pulse/channels.py @@ -121,7 +121,7 @@ def _validate_index(self, index: Any) -> None: if index.is_integer(): index = int(index) - if not isinstance(index, (int, np.integer)) and index < 0: + if not isinstance(index, (int, np.integer)) or index < 0: raise PulseError("Channel index must be a nonnegative integer") @property </patch>
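As a follow-up, a hedged usage sketch of the behaviour after the one-character change above (assumes a qiskit-terra build with this patch applied; not part of the dataset row):

```python
from qiskit import pulse
from qiskit.pulse.exceptions import PulseError

for bad_index in (0.5, -1, -1.5):
    try:
        pulse.DriveChannel(bad_index)
    except PulseError as err:
        # with the `or` check, every one of these invalid indices is rejected
        print(f"DriveChannel({bad_index}) rejected: {err}")
```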
[]
[]
pandas-dev__pandas-26298
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Series.at and DataFrame.at crash with CategoricalIndex #### Code Sample, a copy-pastable example if possible ```python Python 3.6.3 (default, Oct 3 2017, 21:45:48) [GCC 7.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pandas as pd >>> x = pd.Series([1, 2, 3], index=pd.CategoricalIndex(["A", "B", "C"])) >>> x.loc["A"] 1 >>> x.at["A"] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py", line 1869, in __getitem__ return self.obj._get_value(*key, takeable=self._takeable) File "/usr/local/lib/python3.6/dist-packages/pandas/core/series.py", line 929, in _get_value return self.index.get_value(self._values, label) File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/category.py", line 423, in get_value return series.iloc[indexer] AttributeError: 'numpy.ndarray' object has no attribute 'iloc' >>> x = pd.DataFrame([[1, 2], [3, 4], [5, 6]], index=pd.CategoricalIndex(["A", "B", "C"])) >>> x.loc["B", 1] 4 >>> x.at["B", 1] Traceback (most recent call last): File "pandas/_libs/index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/hashtable_class_helper.pxi", line 811, in pandas._libs.hashtable.Int64HashTable.get_item TypeError: an integer is required During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py", line 1869, in __getitem__ return self.obj._get_value(*key, takeable=self._takeable) File "/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py", line 1985, in _get_value return engine.get_value(series._values, index) File "pandas/_libs/index.pyx", line 83, in pandas._libs.index.IndexEngine.get_value File "pandas/_libs/index.pyx", line 91, in pandas._libs.index.IndexEngine.get_value File "pandas/_libs/index.pyx", line 141, in pandas._libs.index.IndexEngine.get_loc KeyError: 'B' ``` #### Problem description With a `CategoricalIndex`, `Series.at` raises `AttributeError` and `DataFrame.at` raises `TypeError` and `KeyError`. #### Expected Output `x.at["A"]` should work the same as `x.loc["A"]`, and `x.at["B", 1]` should work the same as `x.loc["B", 1]`. 
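The expected equivalence can be written as a small check. This is a sketch of the behaviour the report asks for, not part of the original issue, and it assumes a pandas build where the bug is fixed:

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=pd.CategoricalIndex(["A", "B", "C"]))
df = pd.DataFrame([[1, 2], [3, 4], [5, 6]],
                  index=pd.CategoricalIndex(["A", "B", "C"]))

# .at is the scalar fast path; it should agree with .loc for a single label
assert s.at["A"] == s.loc["A"] == 1
assert df.at["B", 1] == df.loc["B", 1] == 4
```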
#### Output of ``pd.show_versions()`` <details> commit: None python: 3.6.3.final.0 python-bits: 64 OS: Linux OS-release: 4.13.0-37-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 pandas: 0.22.0 pytest: 3.4.0 pip: 9.0.3 setuptools: 39.0.1 Cython: None numpy: 1.14.2 scipy: 1.0.0 pyarrow: None xarray: None IPython: None sphinx: 1.6.7 patsy: 0.4.1 dateutil: 2.7.2 pytz: 2018.3 blosc: None bottleneck: None tables: None numexpr: None feather: None matplotlib: 2.2.2 openpyxl: 2.4.9 xlrd: 1.1.0 xlwt: None xlsxwriter: None lxml: 4.0.0 bs4: 4.6.0 html5lib: 1.0.1 sqlalchemy: 1.2.0 pymysql: None psycopg2: None jinja2: 2.10 s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None </details> </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /> 30 </a> 31 </td> 32 </tr> 33 <tr> 34 <td>License</td> 35 <td> 36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 38 </a> 39 </td> 40 </tr> 41 <tr> 42 <td>Build Status</td> 43 <td> 44 <a href="https://travis-ci.org/pandas-dev/pandas"> 45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 46 </a> 47 </td> 48 </tr> 49 <tr> 50 <td></td> 51 <td> 52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master"> 53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" /> 54 </a> 55 </td> 56 </tr> 57 <tr> 58 <td>Coverage</td> 59  <td> 60 <a href="https://codecov.io/gh/pandas-dev/pandas"> 61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 62 </a> 63 </td> 64 </tr> 65 <tr> 66 <td>Downloads</td> 67 <td> 68 <a href="https://pandas.pydata.org"> 69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 70 </a> 71 </td> 72 </tr> 73 <tr> 74 <td>Gitter</td> 75 <td> 76 <a href="https://gitter.im/pydata/pandas"> 77 <img src="https://badges.gitter.im/Join%20Chat.svg" /> 78 </a> 79 </td> 80 </tr> 81 </table> 82 83 84 85 ## What is it? 86 87 **pandas** is a Python package providing fast, flexible, and expressive data 88 structures designed to make working with "relational" or "labeled" data both 89 easy and intuitive. It aims to be the fundamental high-level building block for 90 doing practical, **real world** data analysis in Python. Additionally, it has 91 the broader goal of becoming **the most powerful and flexible open source data 92 analysis / manipulation tool available in any language**. It is already well on 93 its way towards this goal. 
94 95 ## Main Features 96 Here are just a few of the things that pandas does well: 97 98 - Easy handling of [**missing data**][missing-data] (represented as 99 `NaN`) in floating point as well as non-floating point data 100 - Size mutability: columns can be [**inserted and 101 deleted**][insertion-deletion] from DataFrame and higher dimensional 102 objects 103 - Automatic and explicit [**data alignment**][alignment]: objects can 104 be explicitly aligned to a set of labels, or the user can simply 105 ignore the labels and let `Series`, `DataFrame`, etc. automatically 106 align the data for you in computations 107 - Powerful, flexible [**group by**][groupby] functionality to perform 108 split-apply-combine operations on data sets, for both aggregating 109 and transforming data 110 - Make it [**easy to convert**][conversion] ragged, 111 differently-indexed data in other Python and NumPy data structures 112 into DataFrame objects 113 - Intelligent label-based [**slicing**][slicing], [**fancy 114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 115 large data sets 116 - Intuitive [**merging**][merging] and [**joining**][joining] data 117 sets 118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 119 data sets 120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 121 labels per tick) 122 - Robust IO tools for loading data from [**flat files**][flat-files] 123 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 125 - [**Time series**][timeseries]-specific functionality: date range 126 generation and frequency conversion, moving window statistics, 127 moving window linear regressions, date shifting and lagging, etc. 
128 129 130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 148 149 ## Where to get it 150 The source code is currently hosted on GitHub at: 151 https://github.com/pandas-dev/pandas 152 153 Binary installers for the latest released version are available at the [Python 154 package index](https://pypi.org/project/pandas) and on conda. 155 156 ```sh 157 # conda 158 conda install pandas 159 ``` 160 161 ```sh 162 # or PyPI 163 pip install pandas 164 ``` 165 166 ## Dependencies 167 - [NumPy](https://www.numpy.org): 1.13.3 or higher 168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 169 - [pytz](https://pythonhosted.org/pytz): 2015.4 or higher 170 171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 172 for recommended and optional dependencies. 173 174 ## Installation from sources 175 To install pandas from source you need Cython in addition to the normal 176 dependencies above. Cython can be installed from pypi: 177 178 ```sh 179 pip install cython 180 ``` 181 182 In the `pandas` directory (same one where you found this file after 183 cloning the git repo), execute: 184 185 ```sh 186 python setup.py install 187 ``` 188 189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 190 191 ```sh 192 python setup.py develop 193 ``` 194 195 Alternatively, you can use `pip` if you want all the dependencies pulled 196 in automatically (the `-e` option is for installing it in [development 197 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 198 199 ```sh 200 pip install -e . 201 ``` 202 203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
204 205 ## License 206 [BSD 3](LICENSE) 207 208 ## Documentation 209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 210 211 ## Background 212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 213 has been under active development since then. 214 215 ## Getting Help 216 217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 219 220 ## Discussion and Development 221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 222 223 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 224 225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 226 227 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub. 228 229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 230 231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 232 233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 234 235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 236 [end of README.md] [start of pandas/core/computation/eval.py] 1 #!/usr/bin/env python 2 3 """Top level ``eval`` module. 4 """ 5 6 import tokenize 7 import warnings 8 9 from pandas.util._validators import validate_bool_kwarg 10 11 from pandas.core.computation.engines import _engines 12 from pandas.core.computation.scope import _ensure_scope 13 14 from pandas.io.formats.printing import pprint_thing 15 16 17 def _check_engine(engine): 18 """Make sure a valid engine is passed. 
19 20 Parameters 21 ---------- 22 engine : str 23 24 Raises 25 ------ 26 KeyError 27 * If an invalid engine is passed 28 ImportError 29 * If numexpr was requested but doesn't exist 30 31 Returns 32 ------- 33 string engine 34 35 """ 36 from pandas.core.computation.check import _NUMEXPR_INSTALLED 37 38 if engine is None: 39 if _NUMEXPR_INSTALLED: 40 engine = 'numexpr' 41 else: 42 engine = 'python' 43 44 if engine not in _engines: 45 valid = list(_engines.keys()) 46 raise KeyError('Invalid engine {engine!r} passed, valid engines are' 47 ' {valid}'.format(engine=engine, valid=valid)) 48 49 # TODO: validate this in a more general way (thinking of future engines 50 # that won't necessarily be import-able) 51 # Could potentially be done on engine instantiation 52 if engine == 'numexpr': 53 if not _NUMEXPR_INSTALLED: 54 raise ImportError("'numexpr' is not installed or an " 55 "unsupported version. Cannot use " 56 "engine='numexpr' for query/eval " 57 "if 'numexpr' is not installed") 58 59 return engine 60 61 62 def _check_parser(parser): 63 """Make sure a valid parser is passed. 64 65 Parameters 66 ---------- 67 parser : str 68 69 Raises 70 ------ 71 KeyError 72 * If an invalid parser is passed 73 """ 74 from pandas.core.computation.expr import _parsers 75 76 if parser not in _parsers: 77 raise KeyError('Invalid parser {parser!r} passed, valid parsers are' 78 ' {valid}'.format(parser=parser, valid=_parsers.keys())) 79 80 81 def _check_resolvers(resolvers): 82 if resolvers is not None: 83 for resolver in resolvers: 84 if not hasattr(resolver, '__getitem__'): 85 name = type(resolver).__name__ 86 raise TypeError('Resolver of type {name!r} does not implement ' 87 'the __getitem__ method'.format(name=name)) 88 89 90 def _check_expression(expr): 91 """Make sure an expression is not an empty string 92 93 Parameters 94 ---------- 95 expr : object 96 An object that can be converted to a string 97 98 Raises 99 ------ 100 ValueError 101 * If expr is an empty string 102 """ 103 if not expr: 104 raise ValueError("expr cannot be an empty string") 105 106 107 def _convert_expression(expr): 108 """Convert an object to an expression. 109 110 Thus function converts an object to an expression (a unicode string) and 111 checks to make sure it isn't empty after conversion. This is used to 112 convert operators to their string representation for recursive calls to 113 :func:`~pandas.eval`. 114 115 Parameters 116 ---------- 117 expr : object 118 The object to be converted to a string. 119 120 Returns 121 ------- 122 s : unicode 123 The string representation of an object. 124 125 Raises 126 ------ 127 ValueError 128 * If the expression is empty. 
129 """ 130 s = pprint_thing(expr) 131 _check_expression(s) 132 return s 133 134 135 def _check_for_locals(expr, stack_level, parser): 136 from pandas.core.computation.expr import tokenize_string 137 138 at_top_of_stack = stack_level == 0 139 not_pandas_parser = parser != 'pandas' 140 141 if not_pandas_parser: 142 msg = "The '@' prefix is only supported by the pandas parser" 143 elif at_top_of_stack: 144 msg = ("The '@' prefix is not allowed in " 145 "top-level eval calls, \nplease refer to " 146 "your variables by name without the '@' " 147 "prefix") 148 149 if at_top_of_stack or not_pandas_parser: 150 for toknum, tokval in tokenize_string(expr): 151 if toknum == tokenize.OP and tokval == '@': 152 raise SyntaxError(msg) 153 154 155 def eval(expr, parser='pandas', engine=None, truediv=True, 156 local_dict=None, global_dict=None, resolvers=(), level=0, 157 target=None, inplace=False): 158 """Evaluate a Python expression as a string using various backends. 159 160 The following arithmetic operations are supported: ``+``, ``-``, ``*``, 161 ``/``, ``**``, ``%``, ``//`` (python engine only) along with the following 162 boolean operations: ``|`` (or), ``&`` (and), and ``~`` (not). 163 Additionally, the ``'pandas'`` parser allows the use of :keyword:`and`, 164 :keyword:`or`, and :keyword:`not` with the same semantics as the 165 corresponding bitwise operators. :class:`~pandas.Series` and 166 :class:`~pandas.DataFrame` objects are supported and behave as they would 167 with plain ol' Python evaluation. 168 169 Parameters 170 ---------- 171 expr : str or unicode 172 The expression to evaluate. This string cannot contain any Python 173 `statements 174 <https://docs.python.org/3/reference/simple_stmts.html#simple-statements>`__, 175 only Python `expressions 176 <https://docs.python.org/3/reference/simple_stmts.html#expression-statements>`__. 177 parser : string, default 'pandas', {'pandas', 'python'} 178 The parser to use to construct the syntax tree from the expression. The 179 default of ``'pandas'`` parses code slightly different than standard 180 Python. Alternatively, you can parse an expression using the 181 ``'python'`` parser to retain strict Python semantics. See the 182 :ref:`enhancing performance <enhancingperf.eval>` documentation for 183 more details. 184 engine : string or None, default 'numexpr', {'python', 'numexpr'} 185 186 The engine used to evaluate the expression. Supported engines are 187 188 - None : tries to use ``numexpr``, falls back to ``python`` 189 - ``'numexpr'``: This default engine evaluates pandas objects using 190 numexpr for large speed ups in complex expressions 191 with large frames. 192 - ``'python'``: Performs operations as if you had ``eval``'d in top 193 level python. This engine is generally not that useful. 194 195 More backends may be available in the future. 196 197 truediv : bool, optional 198 Whether to use true division, like in Python >= 3 199 local_dict : dict or None, optional 200 A dictionary of local variables, taken from locals() by default. 201 global_dict : dict or None, optional 202 A dictionary of global variables, taken from globals() by default. 203 resolvers : list of dict-like or None, optional 204 A list of objects implementing the ``__getitem__`` special method that 205 you can use to inject an additional collection of namespaces to use for 206 variable lookup. 
For example, this is used in the 207 :meth:`~DataFrame.query` method to inject the 208 ``DataFrame.index`` and ``DataFrame.columns`` 209 variables that refer to their respective :class:`~pandas.DataFrame` 210 instance attributes. 211 level : int, optional 212 The number of prior stack frames to traverse and add to the current 213 scope. Most users will **not** need to change this parameter. 214 target : object, optional, default None 215 This is the target object for assignment. It is used when there is 216 variable assignment in the expression. If so, then `target` must 217 support item assignment with string keys, and if a copy is being 218 returned, it must also support `.copy()`. 219 inplace : bool, default False 220 If `target` is provided, and the expression mutates `target`, whether 221 to modify `target` inplace. Otherwise, return a copy of `target` with 222 the mutation. 223 224 Returns 225 ------- 226 ndarray, numeric scalar, DataFrame, Series 227 228 Raises 229 ------ 230 ValueError 231 There are many instances where such an error can be raised: 232 233 - `target=None`, but the expression is multiline. 234 - The expression is multiline, but not all them have item assignment. 235 An example of such an arrangement is this: 236 237 a = b + 1 238 a + 2 239 240 Here, there are expressions on different lines, making it multiline, 241 but the last line has no variable assigned to the output of `a + 2`. 242 - `inplace=True`, but the expression is missing item assignment. 243 - Item assignment is provided, but the `target` does not support 244 string item assignment. 245 - Item assignment is provided and `inplace=False`, but the `target` 246 does not support the `.copy()` method 247 248 See Also 249 -------- 250 DataFrame.query 251 DataFrame.eval 252 253 Notes 254 ----- 255 The ``dtype`` of any objects involved in an arithmetic ``%`` operation are 256 recursively cast to ``float64``. 257 258 See the :ref:`enhancing performance <enhancingperf.eval>` documentation for 259 more details. 
260 """ 261 from pandas.core.computation.expr import Expr 262 263 inplace = validate_bool_kwarg(inplace, "inplace") 264 265 if isinstance(expr, str): 266 _check_expression(expr) 267 exprs = [e.strip() for e in expr.splitlines() if e.strip() != ''] 268 else: 269 exprs = [expr] 270 multi_line = len(exprs) > 1 271 272 if multi_line and target is None: 273 raise ValueError("multi-line expressions are only valid in the " 274 "context of data, use DataFrame.eval") 275 276 ret = None 277 first_expr = True 278 target_modified = False 279 280 for expr in exprs: 281 expr = _convert_expression(expr) 282 engine = _check_engine(engine) 283 _check_parser(parser) 284 _check_resolvers(resolvers) 285 _check_for_locals(expr, level, parser) 286 287 # get our (possibly passed-in) scope 288 env = _ensure_scope(level + 1, global_dict=global_dict, 289 local_dict=local_dict, resolvers=resolvers, 290 target=target) 291 292 parsed_expr = Expr(expr, engine=engine, parser=parser, env=env, 293 truediv=truediv) 294 295 # construct the engine and evaluate the parsed expression 296 eng = _engines[engine] 297 eng_inst = eng(parsed_expr) 298 ret = eng_inst.evaluate() 299 300 if parsed_expr.assigner is None: 301 if multi_line: 302 raise ValueError("Multi-line expressions are only valid" 303 " if all expressions contain an assignment") 304 elif inplace: 305 raise ValueError("Cannot operate inplace " 306 "if there is no assignment") 307 308 # assign if needed 309 assigner = parsed_expr.assigner 310 if env.target is not None and assigner is not None: 311 target_modified = True 312 313 # if returning a copy, copy only on the first assignment 314 if not inplace and first_expr: 315 try: 316 target = env.target.copy() 317 except AttributeError: 318 raise ValueError("Cannot return a copy of the target") 319 else: 320 target = env.target 321 322 # TypeError is most commonly raised (e.g. int, list), but you 323 # get IndexError if you try to do this assignment on np.ndarray. 324 # we will ignore numpy warnings here; e.g. if trying 325 # to use a non-numeric indexer 326 try: 327 with warnings.catch_warnings(record=True): 328 # TODO: Filter the warnings we actually care about here. 329 target[assigner] = ret 330 except (TypeError, IndexError): 331 raise ValueError("Cannot assign expression output to target") 332 333 if not resolvers: 334 resolvers = ({assigner: ret},) 335 else: 336 # existing resolver needs updated to handle 337 # case of mutating existing column in copy 338 for resolver in resolvers: 339 if assigner in resolver: 340 resolver[assigner] = ret 341 break 342 else: 343 resolvers += ({assigner: ret},) 344 345 ret = None 346 first_expr = False 347 348 # We want to exclude `inplace=None` as being False. 
349 if inplace is False: 350 return target if target_modified else ret 351 [end of pandas/core/computation/eval.py] [start of pandas/util/_print_versions.py] 1 import codecs 2 import importlib 3 import locale 4 import os 5 import platform 6 import struct 7 import subprocess 8 import sys 9 10 11 def get_sys_info(): 12 "Returns system information as a dict" 13 14 blob = [] 15 16 # get full commit hash 17 commit = None 18 if os.path.isdir(".git") and os.path.isdir("pandas"): 19 try: 20 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "), 21 stdout=subprocess.PIPE, 22 stderr=subprocess.PIPE) 23 so, serr = pipe.communicate() 24 except (OSError, ValueError): 25 pass 26 else: 27 if pipe.returncode == 0: 28 commit = so 29 try: 30 commit = so.decode('utf-8') 31 except ValueError: 32 pass 33 commit = commit.strip().strip('"') 34 35 blob.append(('commit', commit)) 36 37 try: 38 (sysname, nodename, release, 39 version, machine, processor) = platform.uname() 40 blob.extend([ 41 ("python", '.'.join(map(str, sys.version_info))), 42 ("python-bits", struct.calcsize("P") * 8), 43 ("OS", "{sysname}".format(sysname=sysname)), 44 ("OS-release", "{release}".format(release=release)), 45 # ("Version", "{version}".format(version=version)), 46 ("machine", "{machine}".format(machine=machine)), 47 ("processor", "{processor}".format(processor=processor)), 48 ("byteorder", "{byteorder}".format(byteorder=sys.byteorder)), 49 ("LC_ALL", "{lc}".format(lc=os.environ.get('LC_ALL', "None"))), 50 ("LANG", "{lang}".format(lang=os.environ.get('LANG', "None"))), 51 ("LOCALE", '.'.join(map(str, locale.getlocale()))), 52 ]) 53 except (KeyError, ValueError): 54 pass 55 56 return blob 57 58 59 def show_versions(as_json=False): 60 sys_info = get_sys_info() 61 62 deps = [ 63 # (MODULE_NAME, f(mod) -> mod version) 64 ("pandas", lambda mod: mod.__version__), 65 ("pytest", lambda mod: mod.__version__), 66 ("pip", lambda mod: mod.__version__), 67 ("setuptools", lambda mod: mod.__version__), 68 ("Cython", lambda mod: mod.__version__), 69 ("numpy", lambda mod: mod.version.version), 70 ("scipy", lambda mod: mod.version.version), 71 ("pyarrow", lambda mod: mod.__version__), 72 ("xarray", lambda mod: mod.__version__), 73 ("IPython", lambda mod: mod.__version__), 74 ("sphinx", lambda mod: mod.__version__), 75 ("patsy", lambda mod: mod.__version__), 76 ("dateutil", lambda mod: mod.__version__), 77 ("pytz", lambda mod: mod.VERSION), 78 ("blosc", lambda mod: mod.__version__), 79 ("bottleneck", lambda mod: mod.__version__), 80 ("tables", lambda mod: mod.__version__), 81 ("numexpr", lambda mod: mod.__version__), 82 ("feather", lambda mod: mod.__version__), 83 ("matplotlib", lambda mod: mod.__version__), 84 ("openpyxl", lambda mod: mod.__version__), 85 ("xlrd", lambda mod: mod.__VERSION__), 86 ("xlwt", lambda mod: mod.__VERSION__), 87 ("xlsxwriter", lambda mod: mod.__version__), 88 ("lxml.etree", lambda mod: mod.__version__), 89 ("bs4", lambda mod: mod.__version__), 90 ("html5lib", lambda mod: mod.__version__), 91 ("sqlalchemy", lambda mod: mod.__version__), 92 ("pymysql", lambda mod: mod.__version__), 93 ("psycopg2", lambda mod: mod.__version__), 94 ("jinja2", lambda mod: mod.__version__), 95 ("s3fs", lambda mod: mod.__version__), 96 ("fastparquet", lambda mod: mod.__version__), 97 ("pandas_gbq", lambda mod: mod.__version__), 98 ("pandas_datareader", lambda mod: mod.__version__), 99 ("gcsfs", lambda mod: mod.__version__), 100 ] 101 102 deps_blob = list() 103 for (modname, ver_f) in deps: 104 try: 105 if modname in sys.modules: 106 mod = 
sys.modules[modname] 107 else: 108 mod = importlib.import_module(modname) 109 ver = ver_f(mod) 110 deps_blob.append((modname, ver)) 111 except ImportError: 112 deps_blob.append((modname, None)) 113 114 if (as_json): 115 try: 116 import json 117 except ImportError: 118 import simplejson as json 119 120 j = dict(system=dict(sys_info), dependencies=dict(deps_blob)) 121 122 if as_json is True: 123 print(j) 124 else: 125 with codecs.open(as_json, "wb", encoding='utf8') as f: 126 json.dump(j, f, indent=2) 127 128 else: 129 130 print("\nINSTALLED VERSIONS") 131 print("------------------") 132 133 for k, stat in sys_info: 134 print("{k}: {stat}".format(k=k, stat=stat)) 135 136 print("") 137 for k, stat in deps_blob: 138 print("{k}: {stat}".format(k=k, stat=stat)) 139 140 141 def main(): 142 from optparse import OptionParser 143 parser = OptionParser() 144 parser.add_option("-j", "--json", metavar="FILE", nargs=1, 145 help="Save output as JSON into file, pass in " 146 "'-' to output to stdout") 147 148 (options, args) = parser.parse_args() 149 150 if options.json == "-": 151 options.json = True 152 153 show_versions(as_json=options.json) 154 155 return 0 156 157 158 if __name__ == "__main__": 159 sys.exit(main()) 160 [end of pandas/util/_print_versions.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
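As context for the `pandas.eval` docstring reproduced in the listing above, a minimal usage sketch (the frame and column names here are invented for illustration, not part of the task instance):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

# Top-level eval: names are resolved from the calling scope by default,
# using the 'pandas' parser and the numexpr engine when it is installed.
print(pd.eval("df.a + df.b"))

# DataFrame.eval routes through the same machinery, injecting the frame's
# columns as resolvers; with inplace=False (the default) a copy is returned.
print(df.eval("c = a + b"))
```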
pandas-dev/pandas
b99e84fb1d3bfa3506bf067bcdd0628ec7ace40c
Series.at and DataFrame.at crash with CategoricalIndex #### Code Sample, a copy-pastable example if possible ```python Python 3.6.3 (default, Oct 3 2017, 21:45:48) [GCC 7.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pandas as pd >>> x = pd.Series([1, 2, 3], index=pd.CategoricalIndex(["A", "B", "C"])) >>> x.loc["A"] 1 >>> x.at["A"] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py", line 1869, in __getitem__ return self.obj._get_value(*key, takeable=self._takeable) File "/usr/local/lib/python3.6/dist-packages/pandas/core/series.py", line 929, in _get_value return self.index.get_value(self._values, label) File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/category.py", line 423, in get_value return series.iloc[indexer] AttributeError: 'numpy.ndarray' object has no attribute 'iloc' >>> x = pd.DataFrame([[1, 2], [3, 4], [5, 6]], index=pd.CategoricalIndex(["A", "B", "C"])) >>> x.loc["B", 1] 4 >>> x.at["B", 1] Traceback (most recent call last): File "pandas/_libs/index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc File "pandas/_libs/hashtable_class_helper.pxi", line 811, in pandas._libs.hashtable.Int64HashTable.get_item TypeError: an integer is required During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py", line 1869, in __getitem__ return self.obj._get_value(*key, takeable=self._takeable) File "/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py", line 1985, in _get_value return engine.get_value(series._values, index) File "pandas/_libs/index.pyx", line 83, in pandas._libs.index.IndexEngine.get_value File "pandas/_libs/index.pyx", line 91, in pandas._libs.index.IndexEngine.get_value File "pandas/_libs/index.pyx", line 141, in pandas._libs.index.IndexEngine.get_loc KeyError: 'B' ``` #### Problem description With a `CategoricalIndex`, `Series.at` raises `AttributeError` and `DataFrame.at` raises `TypeError` and `KeyError`. #### Expected Output `x.at["A"]` should work the same as `x.loc["A"]`, and `x.at["B", 1]` should work the same as `x.loc["B", 1]`. #### Output of ``pd.show_versions()`` <details> commit: None python: 3.6.3.final.0 python-bits: 64 OS: Linux OS-release: 4.13.0-37-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 pandas: 0.22.0 pytest: 3.4.0 pip: 9.0.3 setuptools: 39.0.1 Cython: None numpy: 1.14.2 scipy: 1.0.0 pyarrow: None xarray: None IPython: None sphinx: 1.6.7 patsy: 0.4.1 dateutil: 2.7.2 pytz: 2018.3 blosc: None bottleneck: None tables: None numexpr: None feather: None matplotlib: 2.2.2 openpyxl: 2.4.9 xlrd: 1.1.0 xlwt: None xlsxwriter: None lxml: 4.0.0 bs4: 4.6.0 html5lib: 1.0.1 sqlalchemy: 1.2.0 pymysql: None psycopg2: None jinja2: 2.10 s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None </details>
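Condensed into a single runnable check (a sketch mirroring the snippets in the report; the expected values come from the `.loc` calls shown above):

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=pd.CategoricalIndex(["A", "B", "C"]))
df = pd.DataFrame([[1, 2], [3, 4], [5, 6]],
                  index=pd.CategoricalIndex(["A", "B", "C"]))

# .loc already works with a CategoricalIndex; .at should agree with it.
assert s.loc["A"] == 1
assert df.loc["B", 1] == 4

# On affected versions these raise AttributeError / KeyError instead of
# returning the same scalars.
print(s.at["A"])       # expected: 1
print(df.at["B", 1])   # expected: 4
```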
Thanks for the report, that's clearly a bug. Related issue (on indexing with categorical index): https://github.com/pandas-dev/pandas/issues/14865
2019-05-06T20:43:42Z
<patch> diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst --- a/doc/source/whatsnew/v0.25.0.rst +++ b/doc/source/whatsnew/v0.25.0.rst @@ -301,7 +301,7 @@ Bug Fixes Categorical ^^^^^^^^^^^ -- +- Bug in :func:`DataFrame.at` and :func:`Series.at` that would raise exception if the index was a :class:`CategoricalIndex` (:issue:`20629`) - - diff --git a/pandas/_typing.py b/pandas/_typing.py --- a/pandas/_typing.py +++ b/pandas/_typing.py @@ -8,8 +8,14 @@ from pandas._libs.tslibs.timedeltas import Timedelta from pandas.core.dtypes.dtypes import ExtensionDtype -from pandas.core.dtypes.generic import ABCExtensionArray +from pandas.core.dtypes.generic import ( + ABCExtensionArray, ABCIndexClass, ABCSeries, ABCSparseSeries) +AnyArrayLike = Union[ABCExtensionArray, + ABCIndexClass, + ABCSeries, + ABCSparseSeries, + np.ndarray] ArrayLike = Union[ABCExtensionArray, np.ndarray] DatetimeLikeScalar = Type[Union[Period, Timestamp, Timedelta]] Dtype = Union[str, np.dtype, ExtensionDtype] diff --git a/pandas/core/frame.py b/pandas/core/frame.py --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -2694,13 +2694,19 @@ def _get_value(self, index, col, takeable=False): try: return engine.get_value(series._values, index) + except KeyError: + # GH 20629 + if self.index.nlevels > 1: + # partial indexing forbidden + raise except (TypeError, ValueError): + pass - # we cannot handle direct indexing - # use positional - col = self.columns.get_loc(col) - index = self.index.get_loc(index) - return self._get_value(index, col, takeable=True) + # we cannot handle direct indexing + # use positional + col = self.columns.get_loc(col) + index = self.index.get_loc(index) + return self._get_value(index, col, takeable=True) _get_value.__doc__ = get_value.__doc__ def set_value(self, index, col, value, takeable=False): diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py --- a/pandas/core/indexes/category.py +++ b/pandas/core/indexes/category.py @@ -1,4 +1,5 @@ import operator +from typing import Any import warnings import numpy as np @@ -17,6 +18,7 @@ from pandas.core.dtypes.generic import ABCCategorical, ABCSeries from pandas.core.dtypes.missing import isna +from pandas._typing import AnyArrayLike from pandas.core import accessor from pandas.core.algorithms import take_1d from pandas.core.arrays.categorical import Categorical, contains @@ -494,16 +496,31 @@ def get_loc(self, key, method=None): except KeyError: raise KeyError(key) - def get_value(self, series, key): + def get_value(self, + series: AnyArrayLike, + key: Any): """ Fast lookup of value from 1-dimensional ndarray. Only use this if you know what you're doing + + Parameters + ---------- + series : Series, ExtensionArray, Index, or ndarray + 1-dimensional array to take values from + key: : scalar + The value of this index at the position of the desired value, + otherwise the positional index of the desired value + + Returns + ------- + Any + The element of the series at the position indicated by the key """ try: k = com.values_from_object(key) k = self._convert_scalar_indexer(k, kind='getitem') indexer = self.get_loc(k) - return series.iloc[indexer] + return series.take([indexer])[0] except (KeyError, TypeError): pass </patch>
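The core of the patch above replaces `series.iloc[indexer]` with `series.take([indexer])[0]` in `CategoricalIndex.get_value`. A small standalone sketch (made-up values) of why that helps: the `series` argument may arrive as a bare ndarray or an ExtensionArray rather than a Series, and only `take` exists on all of them.

```python
import numpy as np
import pandas as pd

indexer = 1  # positional location, as resolved by CategoricalIndex.get_loc

ndarray_values = np.array([10, 20, 30])   # what Series.at ends up passing in
ea_values = pd.array(["x", "y", "z"])     # an ExtensionArray behaves the same way

# ndarray_values.iloc[indexer]            # AttributeError: ndarray has no .iloc
print(ndarray_values.take([indexer])[0])  # 20
print(ea_values.take([indexer])[0])       # 'y'
```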
[]
[]
pypa__pip-9569
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Set encoding for tar file and use unicode path for unpacking When tarfile.TarFile decodes filenames in Python 2.7 by default it uses sys.getfilesystemencoding. On Windows this returns "mbcs", which is lossy when converting from proper utf-8 to bytes (results in '?' for out of range characters). We now pass an encoding to tarfile.open which will be used instead. Since the encoding argument is only ever used for the PAX format, and since the PAX format guarantees utf-8 encoded information, this should work in all circumstances. For filesystem APIs in Python 2, the type of the path object passed dictates the underlying Windows API that is called. For `str` it is the `*A` (for ANSI) APIs. For `unicode` it is the `*W` (for Wide character) APIs. To use the second set of APIs, which properly handles unicode filenames, we convert the byte path to utf-8. Fixes #7667. Filename encoding error in some environments with PAX sdist **Environment** * pip version: any * Python version: 2.7 * OS: Windows, non-Windows in C locale (pip Windows CI hits this) **Description** The PAX format wheel 0.34.1 sdists fail to install on Python 2.7 on Windows with a UnicodeEncodeError, or on non-Windows systems in a non-utf-8 locale: https://github.com/pypa/wheel/issues/331 **Expected behavior** Unicode filename from the PAX tarball is correctly encoded for the local filesystem. **How to Reproduce** Attempt to install a PAX formatted tarball containing a file name that cannot be encoded to the default code page (Windows) or the default locale encoding (non-Windows). In GNU tar, the affected paths are pre-mangled to something ASCII compatible, but PAX tar preserves them correctly, so the installer needs to handle them itself. **Output** See https://dev.azure.com/pypa/pip/_build/results?buildId=18040&view=logs&j=404e6841-f5ba-57d9-f2c8-8c5322057572&t=0219f6bf-240d-5b08-c877-377b12af5079&l=309 for a Windows example in the pip test suite. The wheel issue linked above has some Linux examples. </issue> <code> [start of README.rst] 1 pip - The Python Package Installer 2 ================================== 3 4 .. image:: https://img.shields.io/pypi/v/pip.svg 5 :target: https://pypi.org/project/pip/ 6 7 .. image:: https://readthedocs.org/projects/pip/badge/?version=latest 8 :target: https://pip.pypa.io/en/latest 9 10 pip is the `package installer`_ for Python. You can use pip to install packages from the `Python Package Index`_ and other indexes. 11 12 Please take a look at our documentation for how to install and use pip: 13 14 * `Installation`_ 15 * `Usage`_ 16 17 We release updates regularly, with a new version every 3 months. Find more details in our documentation: 18 19 * `Release notes`_ 20 * `Release process`_ 21 22 In pip 20.3, we've `made a big improvement to the heart of pip`_; `learn more`_. We want your input, so `sign up for our user experience research studies`_ to help us do it right. 23 24 **Note**: pip 21.0, in January 2021, removed Python 2 support, per pip's `Python 2 support policy`_. Please migrate to Python 3. 
25 26 If you find bugs, need help, or want to talk to the developers, please use our mailing lists or chat rooms: 27 28 * `Issue tracking`_ 29 * `Discourse channel`_ 30 * `User IRC`_ 31 32 If you want to get involved head over to GitHub to get the source code, look at our development documentation and feel free to jump on the developer mailing lists and chat rooms: 33 34 * `GitHub page`_ 35 * `Development documentation`_ 36 * `Development mailing list`_ 37 * `Development IRC`_ 38 39 Code of Conduct 40 --------------- 41 42 Everyone interacting in the pip project's codebases, issue trackers, chat 43 rooms, and mailing lists is expected to follow the `PSF Code of Conduct`_. 44 45 .. _package installer: https://packaging.python.org/guides/tool-recommendations/ 46 .. _Python Package Index: https://pypi.org 47 .. _Installation: https://pip.pypa.io/en/stable/installing.html 48 .. _Usage: https://pip.pypa.io/en/stable/ 49 .. _Release notes: https://pip.pypa.io/en/stable/news.html 50 .. _Release process: https://pip.pypa.io/en/latest/development/release-process/ 51 .. _GitHub page: https://github.com/pypa/pip 52 .. _Development documentation: https://pip.pypa.io/en/latest/development 53 .. _made a big improvement to the heart of pip: https://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html 54 .. _learn more: https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-3-2020 55 .. _sign up for our user experience research studies: https://pyfound.blogspot.com/2020/03/new-pip-resolver-to-roll-out-this-year.html 56 .. _Python 2 support policy: https://pip.pypa.io/en/latest/development/release-process/#python-2-support 57 .. _Issue tracking: https://github.com/pypa/pip/issues 58 .. _Discourse channel: https://discuss.python.org/c/packaging 59 .. _Development mailing list: https://mail.python.org/mailman3/lists/distutils-sig.python.org/ 60 .. _User IRC: https://webchat.freenode.net/?channels=%23pypa 61 .. _Development IRC: https://webchat.freenode.net/?channels=%23pypa-dev 62 .. _PSF Code of Conduct: https://github.com/pypa/.github/blob/main/CODE_OF_CONDUCT.md 63 [end of README.rst] [start of src/pip/_vendor/requests/utils.py] 1 # -*- coding: utf-8 -*- 2 3 """ 4 requests.utils 5 ~~~~~~~~~~~~~~ 6 7 This module provides utility functions that are used within Requests 8 that are also useful for external consumption. 9 """ 10 11 import codecs 12 import contextlib 13 import io 14 import os 15 import re 16 import socket 17 import struct 18 import sys 19 import tempfile 20 import warnings 21 import zipfile 22 from collections import OrderedDict 23 24 from .__version__ import __version__ 25 from . 
import certs 26 # to_native_string is unused here, but imported here for backwards compatibility 27 from ._internal_utils import to_native_string 28 from .compat import parse_http_list as _parse_list_header 29 from .compat import ( 30 quote, urlparse, bytes, str, unquote, getproxies, 31 proxy_bypass, urlunparse, basestring, integer_types, is_py3, 32 proxy_bypass_environment, getproxies_environment, Mapping) 33 from .cookies import cookiejar_from_dict 34 from .structures import CaseInsensitiveDict 35 from .exceptions import ( 36 InvalidURL, InvalidHeader, FileModeWarning, UnrewindableBodyError) 37 38 NETRC_FILES = ('.netrc', '_netrc') 39 40 DEFAULT_CA_BUNDLE_PATH = certs.where() 41 42 DEFAULT_PORTS = {'http': 80, 'https': 443} 43 44 45 if sys.platform == 'win32': 46 # provide a proxy_bypass version on Windows without DNS lookups 47 48 def proxy_bypass_registry(host): 49 try: 50 if is_py3: 51 import winreg 52 else: 53 import _winreg as winreg 54 except ImportError: 55 return False 56 57 try: 58 internetSettings = winreg.OpenKey(winreg.HKEY_CURRENT_USER, 59 r'Software\Microsoft\Windows\CurrentVersion\Internet Settings') 60 # ProxyEnable could be REG_SZ or REG_DWORD, normalizing it 61 proxyEnable = int(winreg.QueryValueEx(internetSettings, 62 'ProxyEnable')[0]) 63 # ProxyOverride is almost always a string 64 proxyOverride = winreg.QueryValueEx(internetSettings, 65 'ProxyOverride')[0] 66 except OSError: 67 return False 68 if not proxyEnable or not proxyOverride: 69 return False 70 71 # make a check value list from the registry entry: replace the 72 # '<local>' string by the localhost entry and the corresponding 73 # canonical entry. 74 proxyOverride = proxyOverride.split(';') 75 # now check if we match one of the registry values. 76 for test in proxyOverride: 77 if test == '<local>': 78 if '.' not in host: 79 return True 80 test = test.replace(".", r"\.") # mask dots 81 test = test.replace("*", r".*") # change glob sequence 82 test = test.replace("?", r".") # change glob char 83 if re.match(test, host, re.I): 84 return True 85 return False 86 87 def proxy_bypass(host): # noqa 88 """Return True, if the host should be bypassed. 89 90 Checks proxy settings gathered from the environment, if specified, 91 or the registry. 92 """ 93 if getproxies_environment(): 94 return proxy_bypass_environment(host) 95 else: 96 return proxy_bypass_registry(host) 97 98 99 def dict_to_sequence(d): 100 """Returns an internal sequence dictionary update.""" 101 102 if hasattr(d, 'items'): 103 d = d.items() 104 105 return d 106 107 108 def super_len(o): 109 total_length = None 110 current_position = 0 111 112 if hasattr(o, '__len__'): 113 total_length = len(o) 114 115 elif hasattr(o, 'len'): 116 total_length = o.len 117 118 elif hasattr(o, 'fileno'): 119 try: 120 fileno = o.fileno() 121 except io.UnsupportedOperation: 122 pass 123 else: 124 total_length = os.fstat(fileno).st_size 125 126 # Having used fstat to determine the file length, we need to 127 # confirm that this file was opened up in binary mode. 128 if 'b' not in o.mode: 129 warnings.warn(( 130 "Requests has determined the content-length for this " 131 "request using the binary size of the file: however, the " 132 "file has been opened in text mode (i.e. without the 'b' " 133 "flag in the mode). This may lead to an incorrect " 134 "content-length. 
In Requests 3.0, support will be removed " 135 "for files in text mode."), 136 FileModeWarning 137 ) 138 139 if hasattr(o, 'tell'): 140 try: 141 current_position = o.tell() 142 except (OSError, IOError): 143 # This can happen in some weird situations, such as when the file 144 # is actually a special file descriptor like stdin. In this 145 # instance, we don't know what the length is, so set it to zero and 146 # let requests chunk it instead. 147 if total_length is not None: 148 current_position = total_length 149 else: 150 if hasattr(o, 'seek') and total_length is None: 151 # StringIO and BytesIO have seek but no useable fileno 152 try: 153 # seek to end of file 154 o.seek(0, 2) 155 total_length = o.tell() 156 157 # seek back to current position to support 158 # partially read file-like objects 159 o.seek(current_position or 0) 160 except (OSError, IOError): 161 total_length = 0 162 163 if total_length is None: 164 total_length = 0 165 166 return max(0, total_length - current_position) 167 168 169 def get_netrc_auth(url, raise_errors=False): 170 """Returns the Requests tuple auth for a given url from netrc.""" 171 172 netrc_file = os.environ.get('NETRC') 173 if netrc_file is not None: 174 netrc_locations = (netrc_file,) 175 else: 176 netrc_locations = ('~/{}'.format(f) for f in NETRC_FILES) 177 178 try: 179 from netrc import netrc, NetrcParseError 180 181 netrc_path = None 182 183 for f in netrc_locations: 184 try: 185 loc = os.path.expanduser(f) 186 except KeyError: 187 # os.path.expanduser can fail when $HOME is undefined and 188 # getpwuid fails. See https://bugs.python.org/issue20164 & 189 # https://github.com/psf/requests/issues/1846 190 return 191 192 if os.path.exists(loc): 193 netrc_path = loc 194 break 195 196 # Abort early if there isn't one. 197 if netrc_path is None: 198 return 199 200 ri = urlparse(url) 201 202 # Strip port numbers from netloc. This weird `if...encode`` dance is 203 # used for Python 3.2, which doesn't support unicode literals. 204 splitstr = b':' 205 if isinstance(url, str): 206 splitstr = splitstr.decode('ascii') 207 host = ri.netloc.split(splitstr)[0] 208 209 try: 210 _netrc = netrc(netrc_path).authenticators(host) 211 if _netrc: 212 # Return with login / password 213 login_i = (0 if _netrc[0] else 1) 214 return (_netrc[login_i], _netrc[2]) 215 except (NetrcParseError, IOError): 216 # If there was a parsing error or a permissions issue reading the file, 217 # we'll just skip netrc auth unless explicitly asked to raise errors. 218 if raise_errors: 219 raise 220 221 # App Engine hackiness. 222 except (ImportError, AttributeError): 223 pass 224 225 226 def guess_filename(obj): 227 """Tries to guess the filename of the given object.""" 228 name = getattr(obj, 'name', None) 229 if (name and isinstance(name, basestring) and name[0] != '<' and 230 name[-1] != '>'): 231 return os.path.basename(name) 232 233 234 def extract_zipped_paths(path): 235 """Replace nonexistent paths that look like they refer to a member of a zip 236 archive with the location of an extracted copy of the target, or else 237 just return the provided path unchanged. 
238 """ 239 if os.path.exists(path): 240 # this is already a valid path, no need to do anything further 241 return path 242 243 # find the first valid part of the provided path and treat that as a zip archive 244 # assume the rest of the path is the name of a member in the archive 245 archive, member = os.path.split(path) 246 while archive and not os.path.exists(archive): 247 archive, prefix = os.path.split(archive) 248 member = '/'.join([prefix, member]) 249 250 if not zipfile.is_zipfile(archive): 251 return path 252 253 zip_file = zipfile.ZipFile(archive) 254 if member not in zip_file.namelist(): 255 return path 256 257 # we have a valid zip archive and a valid member of that archive 258 tmp = tempfile.gettempdir() 259 extracted_path = os.path.join(tmp, *member.split('/')) 260 if not os.path.exists(extracted_path): 261 extracted_path = zip_file.extract(member, path=tmp) 262 263 return extracted_path 264 265 266 def from_key_val_list(value): 267 """Take an object and test to see if it can be represented as a 268 dictionary. Unless it can not be represented as such, return an 269 OrderedDict, e.g., 270 271 :: 272 273 >>> from_key_val_list([('key', 'val')]) 274 OrderedDict([('key', 'val')]) 275 >>> from_key_val_list('string') 276 Traceback (most recent call last): 277 ... 278 ValueError: cannot encode objects that are not 2-tuples 279 >>> from_key_val_list({'key': 'val'}) 280 OrderedDict([('key', 'val')]) 281 282 :rtype: OrderedDict 283 """ 284 if value is None: 285 return None 286 287 if isinstance(value, (str, bytes, bool, int)): 288 raise ValueError('cannot encode objects that are not 2-tuples') 289 290 return OrderedDict(value) 291 292 293 def to_key_val_list(value): 294 """Take an object and test to see if it can be represented as a 295 dictionary. If it can be, return a list of tuples, e.g., 296 297 :: 298 299 >>> to_key_val_list([('key', 'val')]) 300 [('key', 'val')] 301 >>> to_key_val_list({'key': 'val'}) 302 [('key', 'val')] 303 >>> to_key_val_list('string') 304 Traceback (most recent call last): 305 ... 306 ValueError: cannot encode objects that are not 2-tuples 307 308 :rtype: list 309 """ 310 if value is None: 311 return None 312 313 if isinstance(value, (str, bytes, bool, int)): 314 raise ValueError('cannot encode objects that are not 2-tuples') 315 316 if isinstance(value, Mapping): 317 value = value.items() 318 319 return list(value) 320 321 322 # From mitsuhiko/werkzeug (used with permission). 323 def parse_list_header(value): 324 """Parse lists as described by RFC 2068 Section 2. 325 326 In particular, parse comma-separated lists where the elements of 327 the list may include quoted-strings. A quoted-string could 328 contain a comma. A non-quoted string could have quotes in the 329 middle. Quotes are removed automatically after parsing. 330 331 It basically works like :func:`parse_set_header` just that items 332 may appear multiple times and case sensitivity is preserved. 333 334 The return value is a standard :class:`list`: 335 336 >>> parse_list_header('token, "quoted value"') 337 ['token', 'quoted value'] 338 339 To create a header from the :class:`list` again, use the 340 :func:`dump_header` function. 341 342 :param value: a string with a list header. 343 :return: :class:`list` 344 :rtype: list 345 """ 346 result = [] 347 for item in _parse_list_header(value): 348 if item[:1] == item[-1:] == '"': 349 item = unquote_header_value(item[1:-1]) 350 result.append(item) 351 return result 352 353 354 # From mitsuhiko/werkzeug (used with permission). 
355 def parse_dict_header(value): 356 """Parse lists of key, value pairs as described by RFC 2068 Section 2 and 357 convert them into a python dict: 358 359 >>> d = parse_dict_header('foo="is a fish", bar="as well"') 360 >>> type(d) is dict 361 True 362 >>> sorted(d.items()) 363 [('bar', 'as well'), ('foo', 'is a fish')] 364 365 If there is no value for a key it will be `None`: 366 367 >>> parse_dict_header('key_without_value') 368 {'key_without_value': None} 369 370 To create a header from the :class:`dict` again, use the 371 :func:`dump_header` function. 372 373 :param value: a string with a dict header. 374 :return: :class:`dict` 375 :rtype: dict 376 """ 377 result = {} 378 for item in _parse_list_header(value): 379 if '=' not in item: 380 result[item] = None 381 continue 382 name, value = item.split('=', 1) 383 if value[:1] == value[-1:] == '"': 384 value = unquote_header_value(value[1:-1]) 385 result[name] = value 386 return result 387 388 389 # From mitsuhiko/werkzeug (used with permission). 390 def unquote_header_value(value, is_filename=False): 391 r"""Unquotes a header value. (Reversal of :func:`quote_header_value`). 392 This does not use the real unquoting but what browsers are actually 393 using for quoting. 394 395 :param value: the header value to unquote. 396 :rtype: str 397 """ 398 if value and value[0] == value[-1] == '"': 399 # this is not the real unquoting, but fixing this so that the 400 # RFC is met will result in bugs with internet explorer and 401 # probably some other browsers as well. IE for example is 402 # uploading files with "C:\foo\bar.txt" as filename 403 value = value[1:-1] 404 405 # if this is a filename and the starting characters look like 406 # a UNC path, then just return the value without quotes. Using the 407 # replace sequence below on a UNC path has the effect of turning 408 # the leading double slash into a single slash and then 409 # _fix_ie_filename() doesn't work correctly. See #458. 410 if not is_filename or value[:2] != '\\\\': 411 return value.replace('\\\\', '\\').replace('\\"', '"') 412 return value 413 414 415 def dict_from_cookiejar(cj): 416 """Returns a key/value dictionary from a CookieJar. 417 418 :param cj: CookieJar object to extract cookies from. 419 :rtype: dict 420 """ 421 422 cookie_dict = {} 423 424 for cookie in cj: 425 cookie_dict[cookie.name] = cookie.value 426 427 return cookie_dict 428 429 430 def add_dict_to_cookiejar(cj, cookie_dict): 431 """Returns a CookieJar from a key/value dictionary. 432 433 :param cj: CookieJar to insert cookies into. 434 :param cookie_dict: Dict of key/values to insert into CookieJar. 435 :rtype: CookieJar 436 """ 437 438 return cookiejar_from_dict(cookie_dict, cj) 439 440 441 def get_encodings_from_content(content): 442 """Returns encodings from given content string. 443 444 :param content: bytestring to extract encodings from. 445 """ 446 warnings.warn(( 447 'In requests 3.0, get_encodings_from_content will be removed. For ' 448 'more information, please see the discussion on issue #2266. 
(This' 449 ' warning should only appear once.)'), 450 DeprecationWarning) 451 452 charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I) 453 pragma_re = re.compile(r'<meta.*?content=["\']*;?charset=(.+?)["\'>]', flags=re.I) 454 xml_re = re.compile(r'^<\?xml.*?encoding=["\']*(.+?)["\'>]') 455 456 return (charset_re.findall(content) + 457 pragma_re.findall(content) + 458 xml_re.findall(content)) 459 460 461 def _parse_content_type_header(header): 462 """Returns content type and parameters from given header 463 464 :param header: string 465 :return: tuple containing content type and dictionary of 466 parameters 467 """ 468 469 tokens = header.split(';') 470 content_type, params = tokens[0].strip(), tokens[1:] 471 params_dict = {} 472 items_to_strip = "\"' " 473 474 for param in params: 475 param = param.strip() 476 if param: 477 key, value = param, True 478 index_of_equals = param.find("=") 479 if index_of_equals != -1: 480 key = param[:index_of_equals].strip(items_to_strip) 481 value = param[index_of_equals + 1:].strip(items_to_strip) 482 params_dict[key.lower()] = value 483 return content_type, params_dict 484 485 486 def get_encoding_from_headers(headers): 487 """Returns encodings from given HTTP Header Dict. 488 489 :param headers: dictionary to extract encoding from. 490 :rtype: str 491 """ 492 493 content_type = headers.get('content-type') 494 495 if not content_type: 496 return None 497 498 content_type, params = _parse_content_type_header(content_type) 499 500 if 'charset' in params: 501 return params['charset'].strip("'\"") 502 503 if 'text' in content_type: 504 return 'ISO-8859-1' 505 506 if 'application/json' in content_type: 507 # Assume UTF-8 based on RFC 4627: https://www.ietf.org/rfc/rfc4627.txt since the charset was unset 508 return 'utf-8' 509 510 511 def stream_decode_response_unicode(iterator, r): 512 """Stream decodes a iterator.""" 513 514 if r.encoding is None: 515 for item in iterator: 516 yield item 517 return 518 519 decoder = codecs.getincrementaldecoder(r.encoding)(errors='replace') 520 for chunk in iterator: 521 rv = decoder.decode(chunk) 522 if rv: 523 yield rv 524 rv = decoder.decode(b'', final=True) 525 if rv: 526 yield rv 527 528 529 def iter_slices(string, slice_length): 530 """Iterate over slices of a string.""" 531 pos = 0 532 if slice_length is None or slice_length <= 0: 533 slice_length = len(string) 534 while pos < len(string): 535 yield string[pos:pos + slice_length] 536 pos += slice_length 537 538 539 def get_unicode_from_response(r): 540 """Returns the requested content back in unicode. 541 542 :param r: Response object to get unicode content from. 543 544 Tried: 545 546 1. charset from content-type 547 2. fall back and replace all unicode characters 548 549 :rtype: str 550 """ 551 warnings.warn(( 552 'In requests 3.0, get_unicode_from_response will be removed. For ' 553 'more information, please see the discussion on issue #2266. 
(This' 554 ' warning should only appear once.)'), 555 DeprecationWarning) 556 557 tried_encodings = [] 558 559 # Try charset from content-type 560 encoding = get_encoding_from_headers(r.headers) 561 562 if encoding: 563 try: 564 return str(r.content, encoding) 565 except UnicodeError: 566 tried_encodings.append(encoding) 567 568 # Fall back: 569 try: 570 return str(r.content, encoding, errors='replace') 571 except TypeError: 572 return r.content 573 574 575 # The unreserved URI characters (RFC 3986) 576 UNRESERVED_SET = frozenset( 577 "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" + "0123456789-._~") 578 579 580 def unquote_unreserved(uri): 581 """Un-escape any percent-escape sequences in a URI that are unreserved 582 characters. This leaves all reserved, illegal and non-ASCII bytes encoded. 583 584 :rtype: str 585 """ 586 parts = uri.split('%') 587 for i in range(1, len(parts)): 588 h = parts[i][0:2] 589 if len(h) == 2 and h.isalnum(): 590 try: 591 c = chr(int(h, 16)) 592 except ValueError: 593 raise InvalidURL("Invalid percent-escape sequence: '%s'" % h) 594 595 if c in UNRESERVED_SET: 596 parts[i] = c + parts[i][2:] 597 else: 598 parts[i] = '%' + parts[i] 599 else: 600 parts[i] = '%' + parts[i] 601 return ''.join(parts) 602 603 604 def requote_uri(uri): 605 """Re-quote the given URI. 606 607 This function passes the given URI through an unquote/quote cycle to 608 ensure that it is fully and consistently quoted. 609 610 :rtype: str 611 """ 612 safe_with_percent = "!#$%&'()*+,/:;=?@[]~" 613 safe_without_percent = "!#$&'()*+,/:;=?@[]~" 614 try: 615 # Unquote only the unreserved characters 616 # Then quote only illegal characters (do not quote reserved, 617 # unreserved, or '%') 618 return quote(unquote_unreserved(uri), safe=safe_with_percent) 619 except InvalidURL: 620 # We couldn't unquote the given URI, so let's try quoting it, but 621 # there may be unquoted '%'s in the URI. We need to make sure they're 622 # properly quoted so they do not cause issues elsewhere. 623 return quote(uri, safe=safe_without_percent) 624 625 626 def address_in_network(ip, net): 627 """This function allows you to check if an IP belongs to a network subnet 628 629 Example: returns True if ip = 192.168.1.1 and net = 192.168.1.0/24 630 returns False if ip = 192.168.1.1 and net = 192.168.100.0/24 631 632 :rtype: bool 633 """ 634 ipaddr = struct.unpack('=L', socket.inet_aton(ip))[0] 635 netaddr, bits = net.split('/') 636 netmask = struct.unpack('=L', socket.inet_aton(dotted_netmask(int(bits))))[0] 637 network = struct.unpack('=L', socket.inet_aton(netaddr))[0] & netmask 638 return (ipaddr & netmask) == (network & netmask) 639 640 641 def dotted_netmask(mask): 642 """Converts mask from /xx format to xxx.xxx.xxx.xxx 643 644 Example: if mask is 24 function returns 255.255.255.0 645 646 :rtype: str 647 """ 648 bits = 0xffffffff ^ (1 << 32 - mask) - 1 649 return socket.inet_ntoa(struct.pack('>I', bits)) 650 651 652 def is_ipv4_address(string_ip): 653 """ 654 :rtype: bool 655 """ 656 try: 657 socket.inet_aton(string_ip) 658 except socket.error: 659 return False 660 return True 661 662 663 def is_valid_cidr(string_network): 664 """ 665 Very simple check of the cidr format in no_proxy variable. 
666 667 :rtype: bool 668 """ 669 if string_network.count('/') == 1: 670 try: 671 mask = int(string_network.split('/')[1]) 672 except ValueError: 673 return False 674 675 if mask < 1 or mask > 32: 676 return False 677 678 try: 679 socket.inet_aton(string_network.split('/')[0]) 680 except socket.error: 681 return False 682 else: 683 return False 684 return True 685 686 687 @contextlib.contextmanager 688 def set_environ(env_name, value): 689 """Set the environment variable 'env_name' to 'value' 690 691 Save previous value, yield, and then restore the previous value stored in 692 the environment variable 'env_name'. 693 694 If 'value' is None, do nothing""" 695 value_changed = value is not None 696 if value_changed: 697 old_value = os.environ.get(env_name) 698 os.environ[env_name] = value 699 try: 700 yield 701 finally: 702 if value_changed: 703 if old_value is None: 704 del os.environ[env_name] 705 else: 706 os.environ[env_name] = old_value 707 708 709 def should_bypass_proxies(url, no_proxy): 710 """ 711 Returns whether we should bypass proxies or not. 712 713 :rtype: bool 714 """ 715 # Prioritize lowercase environment variables over uppercase 716 # to keep a consistent behaviour with other http projects (curl, wget). 717 get_proxy = lambda k: os.environ.get(k) or os.environ.get(k.upper()) 718 719 # First check whether no_proxy is defined. If it is, check that the URL 720 # we're getting isn't in the no_proxy list. 721 no_proxy_arg = no_proxy 722 if no_proxy is None: 723 no_proxy = get_proxy('no_proxy') 724 parsed = urlparse(url) 725 726 if parsed.hostname is None: 727 # URLs don't always have hostnames, e.g. file:/// urls. 728 return True 729 730 if no_proxy: 731 # We need to check whether we match here. We need to see if we match 732 # the end of the hostname, both with and without the port. 733 no_proxy = ( 734 host for host in no_proxy.replace(' ', '').split(',') if host 735 ) 736 737 if is_ipv4_address(parsed.hostname): 738 for proxy_ip in no_proxy: 739 if is_valid_cidr(proxy_ip): 740 if address_in_network(parsed.hostname, proxy_ip): 741 return True 742 elif parsed.hostname == proxy_ip: 743 # If no_proxy ip was defined in plain IP notation instead of cidr notation & 744 # matches the IP of the index 745 return True 746 else: 747 host_with_port = parsed.hostname 748 if parsed.port: 749 host_with_port += ':{}'.format(parsed.port) 750 751 for host in no_proxy: 752 if parsed.hostname.endswith(host) or host_with_port.endswith(host): 753 # The URL does match something in no_proxy, so we don't want 754 # to apply the proxies on this URL. 755 return True 756 757 with set_environ('no_proxy', no_proxy_arg): 758 # parsed.hostname can be `None` in cases such as a file URI. 759 try: 760 bypass = proxy_bypass(parsed.hostname) 761 except (TypeError, socket.gaierror): 762 bypass = False 763 764 if bypass: 765 return True 766 767 return False 768 769 770 def get_environ_proxies(url, no_proxy=None): 771 """ 772 Return a dict of environment proxies. 773 774 :rtype: dict 775 """ 776 if should_bypass_proxies(url, no_proxy=no_proxy): 777 return {} 778 else: 779 return getproxies() 780 781 782 def select_proxy(url, proxies): 783 """Select a proxy for the url, if applicable. 
784 785 :param url: The url being for the request 786 :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs 787 """ 788 proxies = proxies or {} 789 urlparts = urlparse(url) 790 if urlparts.hostname is None: 791 return proxies.get(urlparts.scheme, proxies.get('all')) 792 793 proxy_keys = [ 794 urlparts.scheme + '://' + urlparts.hostname, 795 urlparts.scheme, 796 'all://' + urlparts.hostname, 797 'all', 798 ] 799 proxy = None 800 for proxy_key in proxy_keys: 801 if proxy_key in proxies: 802 proxy = proxies[proxy_key] 803 break 804 805 return proxy 806 807 808 def default_user_agent(name="python-requests"): 809 """ 810 Return a string representing the default user agent. 811 812 :rtype: str 813 """ 814 return '%s/%s' % (name, __version__) 815 816 817 def default_headers(): 818 """ 819 :rtype: requests.structures.CaseInsensitiveDict 820 """ 821 return CaseInsensitiveDict({ 822 'User-Agent': default_user_agent(), 823 'Accept-Encoding': ', '.join(('gzip', 'deflate')), 824 'Accept': '*/*', 825 'Connection': 'keep-alive', 826 }) 827 828 829 def parse_header_links(value): 830 """Return a list of parsed link headers proxies. 831 832 i.e. Link: <http:/.../front.jpeg>; rel=front; type="image/jpeg",<http://.../back.jpeg>; rel=back;type="image/jpeg" 833 834 :rtype: list 835 """ 836 837 links = [] 838 839 replace_chars = ' \'"' 840 841 value = value.strip(replace_chars) 842 if not value: 843 return links 844 845 for val in re.split(', *<', value): 846 try: 847 url, params = val.split(';', 1) 848 except ValueError: 849 url, params = val, '' 850 851 link = {'url': url.strip('<> \'"')} 852 853 for param in params.split(';'): 854 try: 855 key, value = param.split('=') 856 except ValueError: 857 break 858 859 link[key.strip(replace_chars)] = value.strip(replace_chars) 860 861 links.append(link) 862 863 return links 864 865 866 # Null bytes; no need to recreate these on each call to guess_json_utf 867 _null = '\x00'.encode('ascii') # encoding to ASCII for Python 3 868 _null2 = _null * 2 869 _null3 = _null * 3 870 871 872 def guess_json_utf(data): 873 """ 874 :rtype: str 875 """ 876 # JSON always starts with two ASCII characters, so detection is as 877 # easy as counting the nulls and from their location and count 878 # determine the encoding. Also detect a BOM, if present. 879 sample = data[:4] 880 if sample in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE): 881 return 'utf-32' # BOM included 882 if sample[:3] == codecs.BOM_UTF8: 883 return 'utf-8-sig' # BOM included, MS style (discouraged) 884 if sample[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE): 885 return 'utf-16' # BOM included 886 nullcount = sample.count(_null) 887 if nullcount == 0: 888 return 'utf-8' 889 if nullcount == 2: 890 if sample[::2] == _null2: # 1st and 3rd are null 891 return 'utf-16-be' 892 if sample[1::2] == _null2: # 2nd and 4th are null 893 return 'utf-16-le' 894 # Did not detect 2 valid UTF-16 ascii-range characters 895 if nullcount == 3: 896 if sample[:3] == _null3: 897 return 'utf-32-be' 898 if sample[1:] == _null3: 899 return 'utf-32-le' 900 # Did not detect a valid UTF-32 ascii-range character 901 return None 902 903 904 def prepend_scheme_if_needed(url, new_scheme): 905 """Given a URL that may or may not have a scheme, prepend the given scheme. 906 Does not replace a present scheme with the one provided as an argument. 
907 908 :rtype: str 909 """ 910 scheme, netloc, path, params, query, fragment = urlparse(url, new_scheme) 911 912 # urlparse is a finicky beast, and sometimes decides that there isn't a 913 # netloc present. Assume that it's being over-cautious, and switch netloc 914 # and path if urlparse decided there was no netloc. 915 if not netloc: 916 netloc, path = path, netloc 917 918 return urlunparse((scheme, netloc, path, params, query, fragment)) 919 920 921 def get_auth_from_url(url): 922 """Given a url with authentication components, extract them into a tuple of 923 username,password. 924 925 :rtype: (str,str) 926 """ 927 parsed = urlparse(url) 928 929 try: 930 auth = (unquote(parsed.username), unquote(parsed.password)) 931 except (AttributeError, TypeError): 932 auth = ('', '') 933 934 return auth 935 936 937 # Moved outside of function to avoid recompile every call 938 _CLEAN_HEADER_REGEX_BYTE = re.compile(b'^\\S[^\\r\\n]*$|^$') 939 _CLEAN_HEADER_REGEX_STR = re.compile(r'^\S[^\r\n]*$|^$') 940 941 942 def check_header_validity(header): 943 """Verifies that header value is a string which doesn't contain 944 leading whitespace or return characters. This prevents unintended 945 header injection. 946 947 :param header: tuple, in the format (name, value). 948 """ 949 name, value = header 950 951 if isinstance(value, bytes): 952 pat = _CLEAN_HEADER_REGEX_BYTE 953 else: 954 pat = _CLEAN_HEADER_REGEX_STR 955 try: 956 if not pat.match(value): 957 raise InvalidHeader("Invalid return character or leading space in header: %s" % name) 958 except TypeError: 959 raise InvalidHeader("Value for header {%s: %s} must be of type str or " 960 "bytes, not %s" % (name, value, type(value))) 961 962 963 def urldefragauth(url): 964 """ 965 Given a url remove the fragment and the authentication part. 966 967 :rtype: str 968 """ 969 scheme, netloc, path, params, query, fragment = urlparse(url) 970 971 # see func:`prepend_scheme_if_needed` 972 if not netloc: 973 netloc, path = path, netloc 974 975 netloc = netloc.rsplit('@', 1)[-1] 976 977 return urlunparse((scheme, netloc, path, params, query, '')) 978 979 980 def rewind_body(prepared_request): 981 """Move file pointer back to its recorded starting position 982 so it can be read again on redirect. 983 """ 984 body_seek = getattr(prepared_request.body, 'seek', None) 985 if body_seek is not None and isinstance(prepared_request._body_position, integer_types): 986 try: 987 body_seek(prepared_request._body_position) 988 except (IOError, OSError): 989 raise UnrewindableBodyError("An error occurred when rewinding request " 990 "body for redirect.") 991 else: 992 raise UnrewindableBodyError("Unable to rewind request body for redirect.") 993 [end of src/pip/_vendor/requests/utils.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
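A couple of the helpers in the vendored `requests.utils` listing above are easiest to read alongside a concrete call (values invented; outputs follow the docstrings):

```python
from pip._vendor.requests.utils import address_in_network, requote_uri

print(address_in_network("192.168.1.1", "192.168.1.0/24"))  # True
print(address_in_network("10.0.0.5", "192.168.1.0/24"))     # False

# requote_uri un-escapes only unreserved characters, then re-quotes, so an
# already-encoded URL comes out consistently quoted rather than double-quoted.
print(requote_uri("https://example.com/a b?x=1%2B1"))       # .../a%20b?x=1%2B1
```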
pypa/pip
4b8004a1b994f44673ddb3f775fda220fea208e0
Set encoding for tar file and use unicode path for unpacking When tarfile.TarFile decodes filenames in Python 2.7 by default it uses sys.getfilesystemencoding. On Windows this returns "mbcs", which is lossy when converting from proper utf-8 to bytes (results in '?' for out of range characters). We now pass an encoding to tarfile.open which will be used instead. Since the encoding argument is only ever used for the PAX format, and since the PAX format guarantees utf-8 encoded information, this should work in all circumstances. For filesystem APIs in Python 2, the type of the path object passed dictates the underlying Windows API that is called. For `str` it is the `*A` (for ANSI) APIs. For `unicode` it is the `*W` (for Wide character) APIs. To use the second set of APIs, which properly handles unicode filenames, we convert the byte path to utf-8. Fixes #7667. Filename encoding error in some environments with PAX sdist **Environment** * pip version: any * Python version: 2.7 * OS: Windows, non-Windows in C locale (pip Windows CI hits this) **Description** The PAX format wheel 0.34.1 sdists fail to install on Python 2.7 on Windows with a UnicodeEncodeError, or on non-Windows systems in a non-utf-8 locale: https://github.com/pypa/wheel/issues/331 **Expected behavior** Unicode filename from the PAX tarball is correctly encoded for the local filesystem. **How to Reproduce** Attempt to install a PAX formatted tarball containing a file name that cannot be encoded to the default code page (Windows) or the default locale encoding (non-Windows). In GNU tar, the affected paths are pre-mangled to something ASCII compatible, but PAX tar preserves them correctly, so the installer needs to handle them itself. **Output** See https://dev.azure.com/pypa/pip/_build/results?buildId=18040&view=logs&j=404e6841-f5ba-57d9-f2c8-8c5322057572&t=0219f6bf-240d-5b08-c877-377b12af5079&l=309 for a Windows example in the pip test suite. The wheel issue linked above has some Linux examples.
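The failure mode can be exercised without building a wheel sdist; a self-contained sketch that writes a PAX archive with a non-ASCII member name and reads it back (the file name is invented):

```python
import io
import tarfile

buf = io.BytesIO()
# PAX archives record member names as UTF-8 -- this is what wheel 0.34.1 emits.
with tarfile.open(fileobj=buf, mode="w", format=tarfile.PAX_FORMAT) as tar:
    tar.addfile(tarfile.TarInfo(u"caf\u00e9/\u00e9clair.txt"), io.BytesIO(b""))

buf.seek(0)
# Without encoding="utf-8", older interpreters fall back to
# sys.getfilesystemencoding() ("mbcs" on Windows, often ASCII under LANG=C),
# which mangles or rejects the name.
with tarfile.open(fileobj=buf, mode="r:*", encoding="utf-8") as tar:
    print(tar.getnames())  # the accented name survives regardless of locale
```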
This should resolve the issue on Windows, but non-Windows systems have the opposite problem: if the locale uses a non-universal encoding (e.g. ascii), then they'll trigger an encoding error when attempting to open the Unicode path. There's no obviously correct answer for what to do in that case. Failing loudly at install time at least highlights the potential filename corruption issue, but another reasonable alternative would be to try encoding with the locale encoding first, and fall back to utf-8 if that fails (emitting a warning that the filename may not be correctly encoded due to locale issues). I suggest that we ignore this because it's a Python 2 specific issue, and there's are more impactful tasks for volunteers to work on, than this problem that's only occurring on EoL Python versions. It's our documented policy that pip's maintainers won't necessarily be solving problems that are Python 2 only. According to #7667 this also affects Python < 3.7 on systems using less capable encodings (e.g. `C`) (Back on a real computer rather than my phone) The Windows aspect is genuinely fixed at the interpreter level, as of CPython 3.6 (the filesystem encoding is always UTF-8 instead of mbcs). The problem still exists in Python 3.5 as well as in 2.7 (Chris's patch fixes that, but at the risk of causing new problems on non-Windows systems). For non-Windows systems, the problem exists by default in 3.5 and 3.6, and can still be induced in 3.7 or later by setting "LC_ALL=C" (since that will not only set a bad filesystem encoding, it will also turn off locale coercion). (I'm not actually sure what Python 2.7 will do, but I suspect it will just unconditionally pass the UTF-8 bytes from the PAX file to the local filesystem) On non-Windows systems, the problem is amenable to an environment fix, which is "Ensure your installs run with a proper locale set". At the CPython level, we officially gave up on ever getting the C locale to actually work properly in a Unicode-centric world: https://www.python.org/dev/peps/pep-0011/#legacy-c-locale So while I do think it would be nice to actually handle this at the pip level, it's also reasonable to tell people that hit these kinds of encoding issues to make sure that they at least set "LANG=C.UTF-8" in their build environments. Closing since this is a Python-2 only fix, and, well... it's not been updated in quite a while! I’d say this affects Python 3 as well (3.6 is supported for at least another year) since more and more people are running pip in containers, where `LANG` is usually set to `C`. Fixing this would save us from answering to support tickets asking this question, which IMO is worth it given the small changeset. Reopening so I don’t forget to do something. I’ll probably file another PR against master later today to replace this. Hello! I am an automated bot and I have noticed that this pull request is not currently able to be merged. If you are able to either merge the ``master`` branch into this pull request or rebase this pull request against ``master`` then it will be eligible for code review and hopefully merging! @ncoghlan Just an FYI, the issue I noted on https://github.com/pypa/wheel/issues/331 was using Python 3.6 (in case that has any bearing here). In the process of justifying not fixing this, I figured out enough to fix it. :( See #7668. @johnthagen Yeah, the non-universal locale encoding problem I mention in https://github.com/pypa/pip/pull/7668#issuecomment-579706165 will apply Python 3 as well. 
However 3.7+ mitigate it significantly, as they don't believe the OS when it claims to be using ASCII, and automatically switch to using UTF-8 instead.
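A quick diagnostic for the locale half of the discussion above (a sketch, not part of the fix): the encodings below decide whether a PAX member name survives the trip to the filesystem.

```python
import locale
import sys

# Under LANG=C / LC_ALL=C on Python <= 3.6 both usually report an ASCII codec;
# Python 3.7+ generally coerces to UTF-8 (PEP 538 locale coercion, PEP 540).
print(sys.getfilesystemencoding())
print(locale.getpreferredencoding(False))

try:
    u"caf\u00e9".encode(sys.getfilesystemencoding())
    print("non-ASCII member names will round-trip here")
except UnicodeEncodeError:
    print("this environment hits the reported encoding failure")
```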
2021-02-08T09:59:34Z
<patch> diff --git a/src/pip/_internal/utils/unpacking.py b/src/pip/_internal/utils/unpacking.py --- a/src/pip/_internal/utils/unpacking.py +++ b/src/pip/_internal/utils/unpacking.py @@ -178,7 +178,7 @@ def untar_file(filename, location): filename, ) mode = "r:*" - tar = tarfile.open(filename, mode) + tar = tarfile.open(filename, mode, encoding="utf-8") try: leading = has_leading_dir([member.name for member in tar.getmembers()]) for member in tar.getmembers(): </patch>
[]
[]
googleapis__google-cloud-python-371
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Key name w/ slash quoted twice during upload The fix for #354 in #364 mistakenly quotes the `Key.name` when adding it the the `query_params` for the upload API call, but those params are already going to be quoted inside `Connection.make_api_request` (by way of `urllib.urlencode`). </issue> <code> [start of README.rst] 1 Google Cloud Python Client 2 ========================== 3 4 Python idiomatic client for Google Cloud Platform services. 5 6 |build| |coverage| 7 ------------------ 8 9 - `Homepage <https://googlecloudplatform.github.io/gcloud-python/>`__ 10 11 This client supports the following Google Cloud Platform services: 12 13 - `Google Cloud 14 Datastore <https://cloud.google.com/products/cloud-datastore/>`__ 15 - `Google Cloud 16 Storage <https://cloud.google.com/products/cloud-storage/>`__ 17 18 If you need support for other Google APIs, check out the `Google APIs 19 Python Client 20 library <https://github.com/google/google-api-python-client>`__. 21 22 Quickstart 23 ---------- 24 25 :: 26 27 $ pip install gcloud 28 29 Google Cloud Datastore 30 ---------------------- 31 32 `Google Cloud Datastore <https://developers.google.com/datastore/>`__ is 33 a fully managed, schemaless database for storing non-relational data. 34 Cloud Datastore automatically scales with your users and supports ACID 35 transactions, high availability of reads and writes, strong consistency 36 for reads and ancestor queries, and eventual consistency for all other 37 queries. 38 39 See the `Google Cloud Datastore 40 docs <https://developers.google.com/datastore/docs/activate>`__ for more 41 details on how to activate Cloud Datastore for your project. 42 43 See `the gcloud-python API 44 documentation <https://googlecloudplatform.github.io/gcloud-python/datastore-api.html>`__ 45 to learn how to interact with the Cloud Datastore using this Client 46 Library. 47 48 .. code:: python 49 50 from gcloud import datastore 51 dataset = datastore.get_dataset('dataset-id-here', 52 '[email protected]', 53 '/path/to/private.key') 54 # Then do other things... 55 query = dataset.query().kind('EntityKind') 56 entity = dataset.entity('EntityKind') 57 58 Google Cloud Storage 59 -------------------- 60 61 `Google Cloud Storage <https://developers.google.com/storage/>`__ allows 62 you to store data on Google infrastructure with very high reliability, 63 performance and availability, and can be used to distribute large data 64 objects to users via direct download. 65 66 You need to create a Google Cloud Storage bucket to use this client 67 library. Follow the steps on the `Google Cloud Storage 68 docs <https://developers.google.com/storage/docs/cloud-console#_creatingbuckets>`__ 69 to learn how to create a bucket. 70 71 See `the gcloud-python API 72 documentation <https://googlecloudplatform.github.io/gcloud-python/storage-api.html>`__ 73 to learn how to connect to the Cloud Storage using this Client Library. 74 75 .. code:: python 76 77 import gcloud.storage 78 bucket = gcloud.storage.get_bucket('bucket-id-here', 79 '[email protected]', 80 '/path/to/private.key') 81 # Then do other things... 82 key = bucket.get_key('/remote/path/to/file.txt') 83 print key.get_contents_as_string() 84 key.set_contents_from_string('New contents!') 85 bucket.upload_file('/remote/path/storage.txt', '/local/path.txt') 86 87 Contributing 88 ------------ 89 90 Contributions to this library are always welcome and highly encouraged. 
91 92 See `CONTRIBUTING <CONTRIBUTING.rst>`__ for more information on how to 93 get started. 94 95 License 96 ------- 97 98 Apache 2.0 - See `LICENSE <LICENSE>`__ for more information. 99 100 .. |build| image:: https://travis-ci.org/GoogleCloudPlatform/gcloud-python.svg?branch=master 101 :target: https://travis-ci.org/GoogleCloudPlatform/gcloud-python 102 .. |coverage| image:: https://coveralls.io/repos/GoogleCloudPlatform/gcloud-python/badge.png?branch=master 103 :target: https://coveralls.io/r/GoogleCloudPlatform/gcloud-python?branch=master 104 [end of README.rst] [start of gcloud/storage/bucket.py] 1 """Create / interact with gcloud storage buckets.""" 2 3 import os 4 5 from gcloud.storage._helpers import _PropertyMixin 6 from gcloud.storage._helpers import _scalar_property 7 from gcloud.storage import exceptions 8 from gcloud.storage.acl import BucketACL 9 from gcloud.storage.acl import DefaultObjectACL 10 from gcloud.storage.iterator import Iterator 11 from gcloud.storage.key import Key 12 13 14 class _KeyIterator(Iterator): 15 """An iterator listing keys in a bucket 16 17 You shouldn't have to use this directly, but instead should use the 18 helper methods on :class:`gcloud.storage.key.Bucket` objects. 19 20 :type bucket: :class:`gcloud.storage.bucket.Bucket` 21 :param bucket: The bucket from which to list keys. 22 """ 23 def __init__(self, bucket, extra_params=None): 24 self.bucket = bucket 25 self.prefixes = () 26 super(_KeyIterator, self).__init__( 27 connection=bucket.connection, path=bucket.path + '/o', 28 extra_params=extra_params) 29 30 def get_items_from_response(self, response): 31 """Yield :class:`.storage.key.Key` items from response. 32 33 :type response: dict 34 :param response: The JSON API response for a page of keys. 35 """ 36 self.prefixes = tuple(response.get('prefixes', ())) 37 for item in response.get('items', []): 38 yield Key.from_dict(item, bucket=self.bucket) 39 40 41 class Bucket(_PropertyMixin): 42 """A class representing a Bucket on Cloud Storage. 43 44 :type connection: :class:`gcloud.storage.connection.Connection` 45 :param connection: The connection to use when sending requests. 46 47 :type name: string 48 :param name: The name of the bucket. 49 """ 50 _iterator_class = _KeyIterator 51 52 CUSTOM_PROPERTY_ACCESSORS = { 53 'acl': 'get_acl()', 54 'cors': 'get_cors()', 55 'defaultObjectAcl': 'get_default_object_acl()', 56 'etag': 'etag', 57 'id': 'id', 58 'lifecycle': 'get_lifecycle()', 59 'location': 'location', 60 'logging': 'get_logging()', 61 'metageneration': 'metageneration', 62 'name': 'name', 63 'owner': 'owner', 64 'projectNumber': 'project_number', 65 'selfLink': 'self_link', 66 'storageClass': 'storage_class', 67 'timeCreated': 'time_created', 68 'versioning': 'versioning_enabled', 69 } 70 """Map field name -> accessor for fields w/ custom accessors.""" 71 72 # ACL rules are lazily retrieved. 73 _acl = _default_object_acl = None 74 75 def __init__(self, connection=None, name=None, properties=None): 76 super(Bucket, self).__init__(name=name, properties=properties) 77 self._connection = connection 78 79 @classmethod 80 def from_dict(cls, bucket_dict, connection=None): 81 """Construct a new bucket from a dictionary of data from Cloud Storage. 82 83 :type bucket_dict: dict 84 :param bucket_dict: The dictionary of data to construct a bucket from. 85 86 :rtype: :class:`Bucket` 87 :returns: A bucket constructed from the data provided. 
88 """ 89 return cls(connection=connection, name=bucket_dict['name'], 90 properties=bucket_dict) 91 92 def __repr__(self): 93 return '<Bucket: %s>' % self.name 94 95 def __iter__(self): 96 return iter(self._iterator_class(bucket=self)) 97 98 def __contains__(self, key): 99 return self.get_key(key) is not None 100 101 @property 102 def acl(self): 103 """Create our ACL on demand.""" 104 if self._acl is None: 105 self._acl = BucketACL(self) 106 return self._acl 107 108 @property 109 def default_object_acl(self): 110 """Create our defaultObjectACL on demand.""" 111 if self._default_object_acl is None: 112 self._default_object_acl = DefaultObjectACL(self) 113 return self._default_object_acl 114 115 @property 116 def connection(self): 117 """Getter property for the connection to use with this Bucket. 118 119 :rtype: :class:`gcloud.storage.connection.Connection` 120 :returns: The connection to use. 121 """ 122 return self._connection 123 124 @property 125 def path(self): 126 """The URL path to this bucket.""" 127 if not self.name: 128 raise ValueError('Cannot determine path without bucket name.') 129 130 return '/b/' + self.name 131 132 def get_key(self, key): 133 """Get a key object by name. 134 135 This will return None if the key doesn't exist:: 136 137 >>> from gcloud import storage 138 >>> connection = storage.get_connection(project, email, key_path) 139 >>> bucket = connection.get_bucket('my-bucket') 140 >>> print bucket.get_key('/path/to/key.txt') 141 <Key: my-bucket, /path/to/key.txt> 142 >>> print bucket.get_key('/does-not-exist.txt') 143 None 144 145 :type key: string or :class:`gcloud.storage.key.Key` 146 :param key: The name of the key to retrieve. 147 148 :rtype: :class:`gcloud.storage.key.Key` or None 149 :returns: The key object if it exists, otherwise None. 150 """ 151 # Coerce this to a key object (either from a Key or a string). 152 key = self.new_key(key) 153 154 try: 155 response = self.connection.api_request(method='GET', path=key.path) 156 return Key.from_dict(response, bucket=self) 157 except exceptions.NotFound: 158 return None 159 160 def get_all_keys(self): 161 """List all the keys in this bucket. 162 163 This will **not** retrieve all the data for all the keys, it 164 will only retrieve the keys. 165 166 This is equivalent to:: 167 168 keys = [key for key in bucket] 169 170 :rtype: list of :class:`gcloud.storage.key.Key` 171 :returns: A list of all the Key objects in this bucket. 172 """ 173 return list(self) 174 175 def iterator(self, prefix=None, delimiter=None, max_results=None, 176 versions=None): 177 """Return an iterator used to find keys in the bucket. 178 179 :type prefix: string or None 180 :param prefix: optional prefix used to filter keys. 181 182 :type delimiter: string or None 183 :param delimiter: optional delimter, used with ``prefix`` to 184 emulate hierarchy. 185 186 :type max_results: integer or None 187 :param max_results: maximum number of keys to return. 188 189 :type versions: boolean or None 190 :param versions: whether object versions should be returned as 191 separate keys. 
192 193 :rtype: :class:`_KeyIterator` 194 """ 195 extra_params = {} 196 197 if prefix is not None: 198 extra_params['prefix'] = prefix 199 200 if delimiter is not None: 201 extra_params['delimiter'] = delimiter 202 203 if max_results is not None: 204 extra_params['maxResults'] = max_results 205 206 if versions is not None: 207 extra_params['versions'] = versions 208 209 return self._iterator_class(self, extra_params=extra_params) 210 211 def new_key(self, key): 212 """Given path name (or Key), return a :class:`.storage.key.Key` object. 213 214 This is really useful when you're not sure if you have a Key 215 object or a string path name. Given either of those types, this 216 returns the corresponding Key object. 217 218 :type key: string or :class:`gcloud.storage.key.Key` 219 :param key: A path name or actual key object. 220 221 :rtype: :class:`gcloud.storage.key.Key` 222 :returns: A Key object with the path provided. 223 """ 224 if isinstance(key, Key): 225 return key 226 227 # Support Python 2 and 3. 228 try: 229 string_type = basestring 230 except NameError: # pragma: NO COVER PY3k 231 string_type = str 232 233 if isinstance(key, string_type): 234 return Key(bucket=self, name=key) 235 236 raise TypeError('Invalid key: %s' % key) 237 238 def delete(self, force=False): 239 """Delete this bucket. 240 241 The bucket **must** be empty in order to delete it. If the 242 bucket doesn't exist, this will raise a 243 :class:`gcloud.storage.exceptions.NotFound`. If the bucket 244 is not empty, this will raise an Exception. 245 246 If you want to delete a non-empty bucket you can pass in a force 247 parameter set to true. This will iterate through the bucket's 248 keys and delete the related objects, before deleting the bucket. 249 250 :type force: bool 251 :param full: If True, empties the bucket's objects then deletes it. 252 253 :raises: :class:`gcloud.storage.exceptions.NotFound` if the 254 bucket does not exist, or 255 :class:`gcloud.storage.exceptions.Conflict` if the 256 bucket has keys and `force` is not passed. 257 """ 258 return self.connection.delete_bucket(self.name, force=force) 259 260 def delete_key(self, key): 261 """Deletes a key from the current bucket. 262 263 If the key isn't found, 264 this will throw a :class:`gcloud.storage.exceptions.NotFound`. 265 266 For example:: 267 268 >>> from gcloud import storage 269 >>> from gcloud.storage import exceptions 270 >>> connection = storage.get_connection(project, email, key_path) 271 >>> bucket = connection.get_bucket('my-bucket') 272 >>> print bucket.get_all_keys() 273 [<Key: my-bucket, my-file.txt>] 274 >>> bucket.delete_key('my-file.txt') 275 >>> try: 276 ... bucket.delete_key('doesnt-exist') 277 ... except exceptions.NotFound: 278 ... pass 279 280 281 :type key: string or :class:`gcloud.storage.key.Key` 282 :param key: A key name or Key object to delete. 283 284 :rtype: :class:`gcloud.storage.key.Key` 285 :returns: The key that was just deleted. 286 :raises: :class:`gcloud.storage.exceptions.NotFound` (to suppress 287 the exception, call ``delete_keys``, passing a no-op 288 ``on_error`` callback, e.g.:: 289 290 >>> bucket.delete_keys([key], on_error=lambda key: pass) 291 """ 292 key = self.new_key(key) 293 self.connection.api_request(method='DELETE', path=key.path) 294 return key 295 296 def delete_keys(self, keys, on_error=None): 297 """Deletes a list of keys from the current bucket. 298 299 Uses :func:`Bucket.delete_key` to delete each individual key. 
300 301 :type keys: list of string or :class:`gcloud.storage.key.Key` 302 :param keys: A list of key names or Key objects to delete. 303 304 :type on_error: a callable taking (key) 305 :param on_error: If not ``None``, called once for each key raising 306 :class:`gcloud.storage.exceptions.NotFound`; 307 otherwise, the exception is propagated. 308 309 :raises: :class:`gcloud.storage.exceptions.NotFound` (if 310 `on_error` is not passed). 311 """ 312 for key in keys: 313 try: 314 self.delete_key(key) 315 except exceptions.NotFound: 316 if on_error is not None: 317 on_error(key) 318 else: 319 raise 320 321 def copy_key(self, key, destination_bucket, new_name=None): 322 """Copy the given key to the given bucket, optionally with a new name. 323 324 :type key: string or :class:`gcloud.storage.key.Key` 325 :param key: The key to be copied. 326 327 :type destination_bucket: :class:`gcloud.storage.bucket.Bucket` 328 :param destination_bucket: The bucket into which the key should be 329 copied. 330 331 :type new_name: string 332 :param new_name: (optional) the new name for the copied file. 333 334 :rtype: :class:`gcloud.storage.key.Key` 335 :returns: The new Key. 336 """ 337 if new_name is None: 338 new_name = key.name 339 new_key = destination_bucket.new_key(new_name) 340 api_path = key.path + '/copyTo' + new_key.path 341 self.connection.api_request(method='POST', path=api_path) 342 return new_key 343 344 def upload_file(self, filename, key=None): 345 """Shortcut method to upload a file into this bucket. 346 347 Use this method to quickly put a local file in Cloud Storage. 348 349 For example:: 350 351 >>> from gcloud import storage 352 >>> connection = storage.get_connection(project, email, key_path) 353 >>> bucket = connection.get_bucket('my-bucket') 354 >>> bucket.upload_file('~/my-file.txt', 'remote-text-file.txt') 355 >>> print bucket.get_all_keys() 356 [<Key: my-bucket, remote-text-file.txt>] 357 358 If you don't provide a key value, we will try to upload the file 359 using the local filename as the key (**not** the complete 360 path):: 361 362 >>> from gcloud import storage 363 >>> connection = storage.get_connection(project, email, key_path) 364 >>> bucket = connection.get_bucket('my-bucket') 365 >>> bucket.upload_file('~/my-file.txt') 366 >>> print bucket.get_all_keys() 367 [<Key: my-bucket, my-file.txt>] 368 369 :type filename: string 370 :param filename: Local path to the file you want to upload. 371 372 :type key: string or :class:`gcloud.storage.key.Key` 373 :param key: The key (either an object or a remote path) of where 374 to put the file. If this is blank, we will try to 375 upload the file to the root of the bucket with the 376 same name as on your local file system. 377 """ 378 if key is None: 379 key = os.path.basename(filename) 380 key = self.new_key(key) 381 key.upload_from_filename(filename) 382 return key 383 384 def upload_file_object(self, file_obj, key=None): 385 """Shortcut method to upload a file object into this bucket. 386 387 Use this method to quickly put a local file in Cloud Storage. 
388 389 For example:: 390 391 >>> from gcloud import storage 392 >>> connection = storage.get_connection(project, email, key_path) 393 >>> bucket = connection.get_bucket('my-bucket') 394 >>> bucket.upload_file(open('~/my-file.txt'), 'remote-text-file.txt') 395 >>> print bucket.get_all_keys() 396 [<Key: my-bucket, remote-text-file.txt>] 397 398 If you don't provide a key value, we will try to upload the file 399 using the local filename as the key (**not** the complete 400 path):: 401 402 >>> from gcloud import storage 403 >>> connection = storage.get_connection(project, email, key_path) 404 >>> bucket = connection.get_bucket('my-bucket') 405 >>> bucket.upload_file(open('~/my-file.txt')) 406 >>> print bucket.get_all_keys() 407 [<Key: my-bucket, my-file.txt>] 408 409 :type file_obj: file 410 :param file_obj: A file handle open for reading. 411 412 :type key: string or :class:`gcloud.storage.key.Key` 413 :param key: The key (either an object or a remote path) of where 414 to put the file. If this is blank, we will try to 415 upload the file to the root of the bucket with the 416 same name as on your local file system. 417 """ 418 if key: 419 key = self.new_key(key) 420 else: 421 key = self.new_key(os.path.basename(file_obj.name)) 422 return key.upload_from_file(file_obj) 423 424 def get_cors(self): 425 """Retrieve CORS policies configured for this bucket. 426 427 See: http://www.w3.org/TR/cors/ and 428 https://cloud.google.com/storage/docs/json_api/v1/buckets 429 430 :rtype: list(dict) 431 :returns: A sequence of mappings describing each CORS policy. 432 """ 433 return [policy.copy() for policy in self.properties.get('cors', ())] 434 435 def update_cors(self, entries): 436 """Update CORS policies configured for this bucket. 437 438 See: http://www.w3.org/TR/cors/ and 439 https://cloud.google.com/storage/docs/json_api/v1/buckets 440 441 :type entries: list(dict) 442 :param entries: A sequence of mappings describing each CORS policy. 443 """ 444 self._patch_properties({'cors': entries}) 445 446 def get_default_object_acl(self): 447 """Get the current Default Object ACL rules. 448 449 If the acl isn't available locally, this method will reload it from 450 Cloud Storage. 451 452 :rtype: :class:`gcloud.storage.acl.DefaultObjectACL` 453 :returns: A DefaultObjectACL object for this bucket. 454 """ 455 if not self.default_object_acl.loaded: 456 self.default_object_acl.reload() 457 return self.default_object_acl 458 459 @property 460 def etag(self): 461 """Retrieve the ETag for the bucket. 462 463 See: http://tools.ietf.org/html/rfc2616#section-3.11 and 464 https://cloud.google.com/storage/docs/json_api/v1/buckets 465 466 :rtype: string 467 """ 468 return self.properties['etag'] 469 470 @property 471 def id(self): 472 """Retrieve the ID for the bucket. 473 474 See: https://cloud.google.com/storage/docs/json_api/v1/buckets 475 476 :rtype: string 477 """ 478 return self.properties['id'] 479 480 def get_lifecycle(self): 481 """Retrieve lifecycle rules configured for this bucket. 482 483 See: https://cloud.google.com/storage/docs/lifecycle and 484 https://cloud.google.com/storage/docs/json_api/v1/buckets 485 486 :rtype: list(dict) 487 :returns: A sequence of mappings describing each lifecycle rule. 488 """ 489 info = self.properties.get('lifecycle', {}) 490 return [rule.copy() for rule in info.get('rule', ())] 491 492 def update_lifecycle(self, rules): 493 """Update CORS policies configured for this bucket. 
494 495 See: https://cloud.google.com/storage/docs/lifecycle and 496 https://cloud.google.com/storage/docs/json_api/v1/buckets 497 498 :type rules: list(dict) 499 :param rules: A sequence of mappings describing each lifecycle rule. 500 """ 501 self._patch_properties({'lifecycle': {'rule': rules}}) 502 503 location = _scalar_property('location') 504 """Retrieve location configured for this bucket. 505 506 See: https://cloud.google.com/storage/docs/json_api/v1/buckets and 507 https://cloud.google.com/storage/docs/concepts-techniques#specifyinglocations 508 509 :rtype: string 510 """ 511 512 def get_logging(self): 513 """Return info about access logging for this bucket. 514 515 See: https://cloud.google.com/storage/docs/accesslogs#status 516 517 :rtype: dict or None 518 :returns: a dict w/ keys, ``logBucket`` and ``logObjectPrefix`` 519 (if logging is enabled), or None (if not). 520 """ 521 info = self.properties.get('logging') 522 if info is not None: 523 return info.copy() 524 525 def enable_logging(self, bucket_name, object_prefix=''): 526 """Enable access logging for this bucket. 527 528 See: https://cloud.google.com/storage/docs/accesslogs#delivery 529 530 :type bucket_name: string 531 :param bucket_name: name of bucket in which to store access logs 532 533 :type object_prefix: string 534 :param object_prefix: prefix for access log filenames 535 """ 536 info = {'logBucket': bucket_name, 'logObjectPrefix': object_prefix} 537 self._patch_properties({'logging': info}) 538 539 def disable_logging(self): 540 """Disable access logging for this bucket. 541 542 See: https://cloud.google.com/storage/docs/accesslogs#disabling 543 """ 544 self._patch_properties({'logging': None}) 545 546 @property 547 def metageneration(self): 548 """Retrieve the metageneration for the bucket. 549 550 See: https://cloud.google.com/storage/docs/json_api/v1/buckets 551 552 :rtype: integer 553 """ 554 return self.properties['metageneration'] 555 556 @property 557 def owner(self): 558 """Retrieve info about the owner of the bucket. 559 560 See: https://cloud.google.com/storage/docs/json_api/v1/buckets 561 562 :rtype: dict 563 :returns: mapping of owner's role/ID. 564 """ 565 return self.properties['owner'].copy() 566 567 @property 568 def project_number(self): 569 """Retrieve the number of the project to which the bucket is assigned. 570 571 See: https://cloud.google.com/storage/docs/json_api/v1/buckets 572 573 :rtype: integer 574 """ 575 return self.properties['projectNumber'] 576 577 @property 578 def self_link(self): 579 """Retrieve the URI for the bucket. 580 581 See: https://cloud.google.com/storage/docs/json_api/v1/buckets 582 583 :rtype: string 584 """ 585 return self.properties['selfLink'] 586 587 @property 588 def storage_class(self): 589 """Retrieve the storage class for the bucket. 590 591 See: https://cloud.google.com/storage/docs/json_api/v1/buckets and 592 https://cloud.google.com/storage/docs/durable-reduced-availability 593 594 :rtype: string 595 :returns: Currently one of "STANDARD", "DURABLE_REDUCED_AVAILABILITY" 596 """ 597 return self.properties['storageClass'] 598 599 @property 600 def time_created(self): 601 """Retrieve the timestamp at which the bucket was created. 602 603 See: https://cloud.google.com/storage/docs/json_api/v1/buckets 604 605 :rtype: string 606 :returns: timestamp in RFC 3339 format. 607 """ 608 return self.properties['timeCreated'] 609 610 @property 611 def versioning_enabled(self): 612 """Is versioning enabled for this bucket? 
613 614 See: https://cloud.google.com/storage/docs/object-versioning for 615 details. 616 617 :rtype: boolean 618 :returns: True if enabled, else False. 619 """ 620 versioning = self.properties.get('versioning', {}) 621 return versioning.get('enabled', False) 622 623 @versioning_enabled.setter 624 def versioning_enabled(self, value): 625 """Enable versioning for this bucket. 626 627 See: https://cloud.google.com/storage/docs/object-versioning for 628 details. 629 630 :type value: convertible to bool 631 :param value: should versioning be anabled for the bucket? 632 """ 633 self._patch_properties({'versioning': {'enabled': bool(value)}}) 634 635 def configure_website(self, main_page_suffix=None, not_found_page=None): 636 """Configure website-related properties. 637 638 See: https://developers.google.com/storage/docs/website-configuration 639 640 .. note:: 641 This (apparently) only works 642 if your bucket name is a domain name 643 (and to do that, you need to get approved somehow...). 644 645 If you want this bucket to host a website, just provide the name 646 of an index page and a page to use when a key isn't found:: 647 648 >>> from gcloud import storage 649 >>> connection = storage.get_connection(project, email, 650 private_key_path) 651 >>> bucket = connection.get_bucket(bucket_name) 652 >>> bucket.configure_website('index.html', '404.html') 653 654 You probably should also make the whole bucket public:: 655 656 >>> bucket.make_public(recursive=True, future=True) 657 658 This says: "Make the bucket public, and all the stuff already in 659 the bucket, and anything else I add to the bucket. Just make it 660 all public." 661 662 :type main_page_suffix: string 663 :param main_page_suffix: The page to use as the main page 664 of a directory. 665 Typically something like index.html. 666 667 :type not_found_page: string 668 :param not_found_page: The file to use when a page isn't found. 669 """ 670 data = { 671 'website': { 672 'mainPageSuffix': main_page_suffix, 673 'notFoundPage': not_found_page, 674 }, 675 } 676 return self._patch_properties(data) 677 678 def disable_website(self): 679 """Disable the website configuration for this bucket. 680 681 This is really just a shortcut for setting the website-related 682 attributes to ``None``. 683 """ 684 return self.configure_website(None, None) 685 686 def make_public(self, recursive=False, future=False): 687 """Make a bucket public. 688 689 :type recursive: bool 690 :param recursive: If True, this will make all keys inside the bucket 691 public as well. 692 693 :type future: bool 694 :param future: If True, this will make all objects created in the 695 future public as well. 
696 """ 697 self.get_acl().all().grant_read() 698 self.acl.save() 699 700 if future: 701 doa = self.get_default_object_acl() 702 doa.all().grant_read() 703 doa.save() 704 705 if recursive: 706 for key in self: 707 key.get_acl().all().grant_read() 708 key.save_acl() 709 [end of gcloud/storage/bucket.py] [start of gcloud/storage/connection.py] 1 """Create / interact with gcloud storage connections.""" 2 3 import base64 4 import calendar 5 import datetime 6 import json 7 import urllib 8 9 from Crypto.Hash import SHA256 10 from Crypto.PublicKey import RSA 11 from Crypto.Signature import PKCS1_v1_5 12 from OpenSSL import crypto 13 import pytz 14 15 from gcloud.connection import Connection as _Base 16 from gcloud.storage import exceptions 17 from gcloud.storage.bucket import Bucket 18 from gcloud.storage.iterator import Iterator 19 20 21 def _utcnow(): # pragma: NO COVER testing replaces 22 """Returns current time as UTC datetime. 23 24 NOTE: on the module namespace so tests can replace it. 25 """ 26 return datetime.datetime.utcnow() 27 28 29 class Connection(_Base): 30 """A connection to Google Cloud Storage via the JSON REST API. 31 32 This class should understand only the basic types (and protobufs) 33 in method arguments, however should be capable of returning advanced types. 34 35 See :class:`gcloud.connection.Connection` for a full list of parameters. 36 :class:`Connection` differs only in needing a project name 37 (which you specify when creating a project in the Cloud Console). 38 39 A typical use of this is to operate on 40 :class:`gcloud.storage.bucket.Bucket` objects:: 41 42 >>> from gcloud import storage 43 >>> connection = storage.get_connection(project, email, key_path) 44 >>> bucket = connection.create_bucket('my-bucket-name') 45 46 You can then delete this bucket:: 47 48 >>> bucket.delete() 49 >>> # or 50 >>> connection.delete_bucket(bucket) 51 52 If you want to access an existing bucket:: 53 54 >>> bucket = connection.get_bucket('my-bucket-name') 55 56 A :class:`Connection` is actually iterable and will return the 57 :class:`gcloud.storage.bucket.Bucket` objects inside the project:: 58 59 >>> for bucket in connection: 60 >>> print bucket 61 <Bucket: my-bucket-name> 62 63 In that same way, you can check for whether a bucket exists inside 64 the project using Python's ``in`` operator:: 65 66 >>> print 'my-bucket-name' in connection 67 True 68 """ 69 70 API_VERSION = 'v1' 71 """The version of the API, used in building the API call's URL.""" 72 73 API_URL_TEMPLATE = '{api_base_url}/storage/{api_version}{path}' 74 """A template for the URL of a particular API call.""" 75 76 API_ACCESS_ENDPOINT = 'https://storage.googleapis.com' 77 78 def __init__(self, project, *args, **kwargs): 79 """:type project: string 80 81 :param project: The project name to connect to. 82 """ 83 super(Connection, self).__init__(*args, **kwargs) 84 self.project = project 85 86 def __iter__(self): 87 return iter(_BucketIterator(connection=self)) 88 89 def __contains__(self, bucket_name): 90 return self.lookup(bucket_name) is not None 91 92 def build_api_url(self, path, query_params=None, api_base_url=None, 93 api_version=None): 94 """Construct an API url given a few components, some optional. 95 96 Typically, you shouldn't need to use this method. 97 98 :type path: string 99 :param path: The path to the resource (ie, ``'/b/bucket-name'``). 100 101 :type query_params: dict 102 :param query_params: A dictionary of keys and values to insert into 103 the query string of the URL. 
104 105 :type api_base_url: string 106 :param api_base_url: The base URL for the API endpoint. 107 Typically you won't have to provide this. 108 109 :type api_version: string 110 :param api_version: The version of the API to call. 111 Typically you shouldn't provide this and instead 112 use the default for the library. 113 114 :rtype: string 115 :returns: The URL assembled from the pieces provided. 116 """ 117 url = self.API_URL_TEMPLATE.format( 118 api_base_url=(api_base_url or self.API_BASE_URL), 119 api_version=(api_version or self.API_VERSION), 120 path=path) 121 122 query_params = query_params or {} 123 query_params.update({'project': self.project}) 124 url += '?' + urllib.urlencode(query_params) 125 126 return url 127 128 def make_request(self, method, url, data=None, content_type=None, 129 headers=None): 130 """A low level method to send a request to the API. 131 132 Typically, you shouldn't need to use this method. 133 134 :type method: string 135 :param method: The HTTP method to use in the request. 136 137 :type url: string 138 :param url: The URL to send the request to. 139 140 :type data: string 141 :param data: The data to send as the body of the request. 142 143 :type content_type: string 144 :param content_type: The proper MIME type of the data provided. 145 146 :type headers: dict 147 :param headers: A dictionary of HTTP headers to send with the request. 148 149 :rtype: tuple of ``response`` (a dictionary of sorts) 150 and ``content`` (a string). 151 :returns: The HTTP response object and the content of the response. 152 """ 153 headers = headers or {} 154 headers['Accept-Encoding'] = 'gzip' 155 156 if data: 157 content_length = len(str(data)) 158 else: 159 content_length = 0 160 161 headers['Content-Length'] = content_length 162 163 if content_type: 164 headers['Content-Type'] = content_type 165 166 headers['User-Agent'] = self.USER_AGENT 167 168 return self.http.request(uri=url, method=method, headers=headers, 169 body=data) 170 171 def api_request(self, method, path, query_params=None, 172 data=None, content_type=None, 173 api_base_url=None, api_version=None, 174 expect_json=True): 175 """Make a request over the HTTP transport to the Cloud Storage API. 176 177 You shouldn't need to use this method, but if you plan to 178 interact with the API using these primitives, this is the 179 correct one to use... 180 181 :type method: string 182 :param method: The HTTP method name (ie, ``GET``, ``POST``, etc). 183 Required. 184 185 :type path: string 186 :param path: The path to the resource (ie, ``'/b/bucket-name'``). 187 Required. 188 189 :type query_params: dict 190 :param query_params: A dictionary of keys and values to insert into 191 the query string of the URL. Default is 192 empty dict. 193 194 :type data: string 195 :param data: The data to send as the body of the request. Default is 196 the empty string. 197 198 :type content_type: string 199 :param content_type: The proper MIME type of the data provided. Default 200 is None. 201 202 :type api_base_url: string 203 :param api_base_url: The base URL for the API endpoint. 204 Typically you won't have to provide this. 205 Default is the standard API base URL. 206 207 :type api_version: string 208 :param api_version: The version of the API to call. Typically 209 you shouldn't provide this and instead use 210 the default for the library. Default is the 211 latest API version supported by 212 gcloud-python. 
213 214 :type expect_json: bool 215 :param expect_json: If True, this method will try to parse the 216 response as JSON and raise an exception if 217 that cannot be done. Default is True. 218 219 :raises: Exception if the response code is not 200 OK. 220 """ 221 url = self.build_api_url(path=path, query_params=query_params, 222 api_base_url=api_base_url, 223 api_version=api_version) 224 225 # Making the executive decision that any dictionary 226 # data will be sent properly as JSON. 227 if data and isinstance(data, dict): 228 data = json.dumps(data) 229 content_type = 'application/json' 230 231 response, content = self.make_request( 232 method=method, url=url, data=data, content_type=content_type) 233 234 if not 200 <= response.status < 300: 235 raise exceptions.make_exception(response, content) 236 237 if content and expect_json: 238 content_type = response.get('content-type', '') 239 if not content_type.startswith('application/json'): 240 raise TypeError('Expected JSON, got %s' % content_type) 241 return json.loads(content) 242 243 return content 244 245 def get_all_buckets(self): 246 """Get all buckets in the project. 247 248 This will not populate the list of keys available in each 249 bucket. 250 251 You can also iterate over the connection object, so these two 252 operations are identical:: 253 254 >>> from gcloud import storage 255 >>> connection = storage.get_connection(project, email, key_path) 256 >>> for bucket in connection.get_all_buckets(): 257 >>> print bucket 258 >>> # ... is the same as ... 259 >>> for bucket in connection: 260 >>> print bucket 261 262 :rtype: list of :class:`gcloud.storage.bucket.Bucket` objects. 263 :returns: All buckets belonging to this project. 264 """ 265 return list(self) 266 267 def get_bucket(self, bucket_name): 268 """Get a bucket by name. 269 270 If the bucket isn't found, this will raise a 271 :class:`gcloud.storage.exceptions.NotFound`. If you would 272 rather get a bucket by name, and return ``None`` if the bucket 273 isn't found (like ``{}.get('...')``) then use 274 :func:`Connection.lookup`. 275 276 For example:: 277 278 >>> from gcloud import storage 279 >>> from gcloud.storage import exceptions 280 >>> connection = storage.get_connection(project, email, key_path) 281 >>> try: 282 >>> bucket = connection.get_bucket('my-bucket') 283 >>> except exceptions.NotFound: 284 >>> print 'Sorry, that bucket does not exist!' 285 286 :type bucket_name: string 287 :param bucket_name: The name of the bucket to get. 288 289 :rtype: :class:`gcloud.storage.bucket.Bucket` 290 :returns: The bucket matching the name provided. 291 :raises: :class:`gcloud.storage.exceptions.NotFound` 292 """ 293 bucket = self.new_bucket(bucket_name) 294 response = self.api_request(method='GET', path=bucket.path) 295 return Bucket.from_dict(response, connection=self) 296 297 def lookup(self, bucket_name): 298 """Get a bucket by name, returning None if not found. 299 300 You can use this if you would rather checking for a None value 301 than catching an exception:: 302 303 >>> from gcloud import storage 304 >>> connection = storage.get_connection(project, email, key_path) 305 >>> bucket = connection.get_bucket('doesnt-exist') 306 >>> print bucket 307 None 308 >>> bucket = connection.get_bucket('my-bucket') 309 >>> print bucket 310 <Bucket: my-bucket> 311 312 :type bucket_name: string 313 :param bucket_name: The name of the bucket to get. 314 315 :rtype: :class:`gcloud.storage.bucket.Bucket` 316 :returns: The bucket matching the name provided or None if not found. 
317 """ 318 try: 319 return self.get_bucket(bucket_name) 320 except exceptions.NotFound: 321 return None 322 323 def create_bucket(self, bucket): 324 """Create a new bucket. 325 326 For example:: 327 328 >>> from gcloud import storage 329 >>> connection = storage.get_connection(project, client, key_path) 330 >>> bucket = connection.create_bucket('my-bucket') 331 >>> print bucket 332 <Bucket: my-bucket> 333 334 :type bucket: string or :class:`gcloud.storage.bucket.Bucket` 335 :param bucket: The bucket name (or bucket object) to create. 336 337 :rtype: :class:`gcloud.storage.bucket.Bucket` 338 :returns: The newly created bucket. 339 :raises: :class:`gcloud.storage.exceptions.Conflict` if 340 there is a confict (bucket already exists, invalid name, etc.) 341 """ 342 bucket = self.new_bucket(bucket) 343 response = self.api_request(method='POST', path='/b', 344 data={'name': bucket.name}) 345 return Bucket.from_dict(response, connection=self) 346 347 def delete_bucket(self, bucket, force=False): 348 """Delete a bucket. 349 350 You can use this method to delete a bucket by name, or to delete 351 a bucket object:: 352 353 >>> from gcloud import storage 354 >>> connection = storage.get_connection(project, email, key_path) 355 >>> connection.delete_bucket('my-bucket') 356 True 357 358 You can also delete pass in the bucket object:: 359 360 >>> bucket = connection.get_bucket('other-bucket') 361 >>> connection.delete_bucket(bucket) 362 True 363 364 If the bucket doesn't exist, this will raise a 365 :class:`gcloud.storage.exceptions.NotFound`:: 366 367 >>> from gcloud.storage import exceptions 368 >>> try: 369 >>> connection.delete_bucket('my-bucket') 370 >>> except exceptions.NotFound: 371 >>> print 'That bucket does not exist!' 372 373 :type bucket: string or :class:`gcloud.storage.bucket.Bucket` 374 :param bucket: The bucket name (or bucket object) to create. 375 376 :type force: bool 377 :param full: If True, empties the bucket's objects then deletes it. 378 379 :rtype: bool 380 :returns: True if the bucket was deleted. 381 :raises: :class:`gcloud.storage.exceptions.NotFound` if the 382 bucket doesn't exist, or 383 :class:`gcloud.storage.exceptions.Conflict` if the 384 bucket has keys and `force` is not passed. 385 """ 386 bucket = self.new_bucket(bucket) 387 388 # This force delete operation is slow. 389 if force: 390 for key in bucket: 391 key.delete() 392 393 self.api_request(method='DELETE', path=bucket.path) 394 return True 395 396 def new_bucket(self, bucket): 397 """Factory method for creating a new (unsaved) bucket object. 398 399 This method is really useful when you're not sure whether you 400 have an actual :class:`gcloud.storage.bucket.Bucket` object or 401 just a name of a bucket. It always returns the object:: 402 403 >>> bucket = connection.new_bucket('bucket') 404 >>> print bucket 405 <Bucket: bucket> 406 >>> bucket = connection.new_bucket(bucket) 407 >>> print bucket 408 <Bucket: bucket> 409 410 :type bucket: string or :class:`gcloud.storage.bucket.Bucket` 411 :param bucket: A name of a bucket or an existing Bucket object. 412 """ 413 if isinstance(bucket, Bucket): 414 return bucket 415 416 # Support Python 2 and 3. 
417 try: 418 string_type = basestring 419 except NameError: # pragma: NO COVER PY3k 420 string_type = str 421 422 if isinstance(bucket, string_type): 423 return Bucket(connection=self, name=bucket) 424 425 raise TypeError('Invalid bucket: %s' % bucket) 426 427 def generate_signed_url(self, resource, expiration, 428 method='GET', content_md5=None, 429 content_type=None): 430 """Generate signed URL to provide query-string auth'n to a resource. 431 432 :type resource: string 433 :param resource: A pointer to a specific resource 434 (typically, ``/bucket-name/path/to/key.txt``). 435 436 :type expiration: int, long, datetime.datetime, datetime.timedelta 437 :param expiration: When the signed URL should expire. 438 439 :type method: string 440 :param method: The HTTP verb that will be used when requesting the URL. 441 442 :type content_md5: string 443 :param content_md5: The MD5 hash of the object referenced by 444 ``resource``. 445 446 :type content_type: string 447 :param content_type: The content type of the object referenced by 448 ``resource``. 449 450 :rtype: string 451 :returns: A signed URL you can use to access the resource 452 until expiration. 453 """ 454 expiration = _get_expiration_seconds(expiration) 455 456 # Generate the string to sign. 457 signature_string = '\n'.join([ 458 method, 459 content_md5 or '', 460 content_type or '', 461 str(expiration), 462 resource]) 463 464 # Take our PKCS12 (.p12) key and make it into a RSA key we can use... 465 pkcs12 = crypto.load_pkcs12( 466 base64.b64decode(self.credentials.private_key), 467 'notasecret') 468 pem = crypto.dump_privatekey( 469 crypto.FILETYPE_PEM, pkcs12.get_privatekey()) 470 pem_key = RSA.importKey(pem) 471 472 # Sign the string with the RSA key. 473 signer = PKCS1_v1_5.new(pem_key) 474 signature_hash = SHA256.new(signature_string) 475 signature_bytes = signer.sign(signature_hash) 476 signature = base64.b64encode(signature_bytes) 477 478 # Set the right query parameters. 479 query_params = { 480 'GoogleAccessId': self.credentials.service_account_name, 481 'Expires': str(expiration), 482 'Signature': signature, 483 } 484 485 # Return the built URL. 486 return '{endpoint}{resource}?{querystring}'.format( 487 endpoint=self.API_ACCESS_ENDPOINT, resource=resource, 488 querystring=urllib.urlencode(query_params)) 489 490 491 class _BucketIterator(Iterator): 492 """An iterator listing all buckets. 493 494 You shouldn't have to use this directly, but instead should use the helper 495 methods on :class:`gcloud.storage.connection.Connection` objects. 496 497 :type connection: :class:`gcloud.storage.connection.Connection` 498 :param connection: The connection to use for querying the list of buckets. 499 """ 500 501 def __init__(self, connection): 502 super(_BucketIterator, self).__init__(connection=connection, path='/b') 503 504 def get_items_from_response(self, response): 505 """Factory method which yields :class:`.Bucket` items from a response. 506 507 :type response: dict 508 :param response: The JSON API response for a page of buckets. 509 """ 510 for item in response.get('items', []): 511 yield Bucket.from_dict(item, connection=self.connection) 512 513 514 def _get_expiration_seconds(expiration): 515 """Convert 'expiration' to a number of seconds in the future. 516 517 :type expiration: int, long, datetime.datetime, datetime.timedelta 518 :param expiration: When the signed URL should expire. 519 520 :rtype: int 521 :returns: a timestamp as an absolute number of seconds. 522 """ 523 # If it's a timedelta, add it to `now` in UTC. 
524 if isinstance(expiration, datetime.timedelta): 525 now = _utcnow().replace(tzinfo=pytz.utc) 526 expiration = now + expiration 527 528 # If it's a datetime, convert to a timestamp. 529 if isinstance(expiration, datetime.datetime): 530 # Make sure the timezone on the value is UTC 531 # (either by converting or replacing the value). 532 if expiration.tzinfo: 533 expiration = expiration.astimezone(pytz.utc) 534 else: 535 expiration = expiration.replace(tzinfo=pytz.utc) 536 537 # Turn the datetime into a timestamp (seconds, not microseconds). 538 expiration = int(calendar.timegm(expiration.timetuple())) 539 540 if not isinstance(expiration, (int, long)): 541 raise TypeError('Expected an integer timestamp, datetime, or ' 542 'timedelta. Got %s' % type(expiration)) 543 return expiration 544 [end of gcloud/storage/connection.py] [start of gcloud/storage/demo/demo.py] 1 # pragma NO COVER 2 # Welcome to the gCloud Storage Demo! (hit enter) 3 4 # We're going to walk through some of the basics..., 5 # Don't worry though. You don't need to do anything, just keep hitting enter... 6 7 # Let's start by importing the demo module and getting a connection: 8 from gcloud.storage import demo 9 connection = demo.get_connection() 10 11 # OK, now let's look at all of the buckets... 12 print connection.get_all_buckets() # This might take a second... 13 14 # Now let's create a new bucket... 15 import time 16 bucket_name = ("bucket-%s" % time.time()).replace(".", "") # Get rid of dots. 17 print bucket_name 18 bucket = connection.create_bucket(bucket_name) 19 print bucket 20 21 # Let's look at all of the buckets again... 22 print connection.get_all_buckets() 23 24 # How about we create a new key inside this bucket. 25 key = bucket.new_key("my-new-file.txt") 26 27 # Now let's put some data in there. 28 key.set_contents_from_string("this is some data!") 29 30 # ... and we can read that data back again. 31 print key.get_contents_as_string() 32 33 # Now let's delete that key. 34 print key.delete() 35 36 # And now that we're done, let's delete that bucket... 37 print bucket.delete() 38 39 # Alright! That's all! 40 # Here's an interactive prompt for you now... 41 [end of gcloud/storage/demo/demo.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
googleapis/google-cloud-python
cf0fca04d7365515754024f5d7b50f05b1a2765f
Key name w/ slash quoted twice during upload The fix for #354 in #364 mistakenly quotes the `Key.name` when adding it to the `query_params` for the upload API call, but those params are already going to be quoted inside `Connection.make_api_request` (by way of `urllib.urlencode`).
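The double quoting described above can be reproduced with the standard library alone. The following is a minimal illustrative sketch, not code from the repository: it uses Python 3's `urllib.parse` equivalents of the `urllib.quote_plus` / `urllib.urlencode` calls named in the report, and an assumed key name that contains slashes:

    from urllib.parse import quote_plus, urlencode  # Python 3 stand-ins for urllib.quote_plus / urllib.urlencode

    key_name = "remote/path/to/file.txt"  # assumed example Key.name containing slashes

    # What the pre-fix code effectively does: quote the name, then let urlencode quote it again.
    double_quoted = urlencode({"uploadType": "resumable", "name": quote_plus(key_name)})
    # What the fix does: pass the raw name and let urlencode quote it exactly once.
    single_quoted = urlencode({"uploadType": "resumable", "name": key_name})

    print(double_quoted)  # uploadType=resumable&name=remote%252Fpath%252Fto%252Ffile.txt  ('%2F' re-encoded to '%252F')
    print(single_quoted)  # uploadType=resumable&name=remote%2Fpath%2Fto%2Ffile.txt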
2014-11-12T02:12:02Z
<patch> diff --git a/gcloud/storage/key.py b/gcloud/storage/key.py --- a/gcloud/storage/key.py +++ b/gcloud/storage/key.py @@ -285,7 +285,7 @@ def upload_from_file(self, file_obj, rewind=False, size=None, query_params = { 'uploadType': 'resumable', - 'name': urllib.quote_plus(self.name), + 'name': self.name, } upload_url = self.connection.build_api_url( </patch>
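For context, the patch above only removes the manual `quote_plus` call; the single round of quoting still happens when the connection urlencodes the query parameters while assembling the request URL (see `Connection.build_api_url` earlier in this record). Below is a rough, hypothetical sketch of how the raw name ends up percent-encoded exactly once in the query string; the bucket, project and base-URL values are made up, the real upload endpoint may differ, and Python 3's `urllib.parse.urlencode` stands in for Python 2's `urllib.urlencode`:

    from urllib.parse import urlencode

    API_URL_TEMPLATE = '{api_base_url}/storage/{api_version}{path}'  # same template as Connection.API_URL_TEMPLATE

    path = '/b/my-bucket/o'                              # assumed path for an example bucket
    query_params = {'uploadType': 'resumable',
                    'name': 'remote/path/to/file.txt',   # raw Key.name, no pre-quoting
                    'project': 'my-project'}             # build_api_url adds the project to every request

    url = (API_URL_TEMPLATE.format(api_base_url='https://www.googleapis.com',  # assumed base URL
                                   api_version='v1', path=path)
           + '?' + urlencode(query_params))
    print(url)
    # https://www.googleapis.com/storage/v1/b/my-bucket/o?uploadType=resumable&name=remote%2Fpath%2Fto%2Ffile.txt&project=my-project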
[]
[]
conan-io__conan-9596
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Default ``cmake_layout()`` source folder This layout is assuming that the main library ``CMakeLists.txt`` is inside a ``src`` subfolder in the repo. This is probably not the most common user layout, in which they will have a CMakeLists.txt in the root of the folder: - How to model this? - Should we change the ``cmake_layout()`` default to ``folders.source = "."``? - Check implications of defining the root as source folder. </issue> <code> [start of README.rst] 1 |Logo| 2 3 Conan 4 ===== 5 6 Decentralized, open-source (MIT), C/C++ package manager. 7 8 - Homepage: https://conan.io/ 9 - Github: https://github.com/conan-io/conan 10 - Docs: https://docs.conan.io/en/latest/ 11 - Slack: https://cpplang-inviter.cppalliance.org/ (#conan channel) 12 - Twitter: https://twitter.com/conan_io 13 14 15 Conan is a package manager for C and C++ developers: 16 17 - It is fully decentralized. Users can host their packages on their servers, privately. Integrates with Artifactory and Bintray. 18 - Portable. Works across all platforms, including Linux, OSX, Windows (with native and first-class support, WSL, MinGW), 19 Solaris, FreeBSD, embedded and cross-compiling, docker, WSL 20 - Manage binaries. It can create, upload and download binaries for any configuration and platform, 21 even cross-compiling, saving lots of time in development and continuous integration. The binary compatibility can be configured 22 and customized. Manage all your artifacts in the same way on all platforms. 23 - Integrates with any build system, including any proprietary and custom one. Provides tested support for major build systems 24 (CMake, MSBuild, Makefiles, Meson, etc). 25 - Extensible: Its python based recipes, together with extensions points allows for great power and flexibility. 26 - Large and active community, especially in Github (https://github.com/conan-io/conan) and Slack (https://cpplang-inviter.cppalliance.org/ #conan channel). 27 This community also creates and maintains packages in ConanCenter and Bincrafters repositories in Bintray. 28 - Stable. Used in production by many companies, since 1.0 there is a commitment not to break package recipes and documented behavior. 29 30 31 32 +-------------------------+-------------------------+ 33 | **develop** | **Code Climate** | 34 +=========================+=========================+ 35 | |Build Status Develop| | |Develop climate| | 36 +-------------------------+-------------------------+ 37 38 39 Setup 40 ===== 41 42 Please read https://docs.conan.io/en/latest/installation.html to know how to 43 install and start using Conan. TL;DR: 44 45 .. code-block:: 46 47 $ pip install conan 48 49 50 Install a development version 51 ----------------------------- 52 53 You can run **Conan** client and server in Windows, MacOS, and Linux. 54 55 - **Install pip following** `pip docs`_. 56 57 - **Clone Conan repository:** 58 59 .. code-block:: bash 60 61 $ git clone https://github.com/conan-io/conan.git conan-io 62 63 NOTE: repository directory name matters, some directories are known to be problematic to run tests (e.g. `conan`). `conan-io` directory name was tested and guaranteed to be working. 64 65 - **Install in editable mode** 66 67 .. code-block:: bash 68 69 $ cd conan-io && sudo pip install -e . 70 71 If you are in Windows, using ``sudo`` is not required. 72 73 - **You are ready, try to run Conan:** 74 75 .. 
code-block:: 76 77 $ conan --help 78 79 Consumer commands 80 install Installs the requirements specified in a conanfile (.py or .txt). 81 config Manages configuration. Edits the conan.conf or installs config files. 82 get Gets a file or list a directory of a given reference or package. 83 info Gets information about the dependency graph of a recipe. 84 search Searches package recipes and binaries in the local cache or in a remote. 85 Creator commands 86 new Creates a new package recipe template with a 'conanfile.py'. 87 create Builds a binary package for a recipe (conanfile.py) located in the current dir. 88 upload Uploads a recipe and binary packages to a remote. 89 export Copies the recipe (conanfile.py & associated files) to your local cache. 90 export-pkg Exports a recipe & creates a package with given files calling 'package'. 91 test Test a package, consuming it with a conanfile recipe with a test() method. 92 Package development commands 93 source Calls your local conanfile.py 'source()' method. 94 build Calls your local conanfile.py 'build()' method. 95 package Calls your local conanfile.py 'package()' method. 96 Misc commands 97 profile Lists profiles in the '.conan/profiles' folder, or shows profile details. 98 remote Manages the remote list and the package recipes associated with a remote. 99 user Authenticates against a remote with user/pass, caching the auth token. 100 imports Calls your local conanfile.py or conanfile.txt 'imports' method. 101 copy Copies conan recipes and packages to another user/channel. 102 remove Removes packages or binaries matching pattern from local cache or remote. 103 alias Creates and exports an 'alias recipe'. 104 download Downloads recipe and binaries to the local cache, without using settings. 105 106 Conan commands. Type "conan <command> -h" for help 107 108 Contributing to the project 109 =========================== 110 111 Feedback and contribution are always welcome in this project. 112 Please read our `contributing guide <https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md>`_. 113 Also, if you plan to contribute, please add some testing for your changes. You can read the `Conan 114 tests guidelines section <https://github.com/conan-io/conan/blob/develop/conans/test/README.md>`_ for 115 some advise on how to write tests for Conan. 116 117 Running the tests 118 ================= 119 120 Using tox 121 --------- 122 123 .. code-block:: bash 124 125 $ python -m tox 126 127 It will install the needed requirements and launch `pytest` skipping some heavy and slow tests. 128 If you want to run the full test suite: 129 130 .. code-block:: bash 131 132 $ python -m tox -e full 133 134 Without tox 135 ----------- 136 137 **Install python requirements** 138 139 .. code-block:: bash 140 141 $ python -m pip install -r conans/requirements.txt 142 $ python -m pip install -r conans/requirements_server.txt 143 $ python -m pip install -r conans/requirements_dev.txt 144 145 If you are not Windows and you are not using a python virtual environment, you will need to run these 146 commands using `sudo`. 147 148 Before you can run the tests, you need to set a few environment variables first. 149 150 .. code-block:: bash 151 152 $ export PYTHONPATH=$PYTHONPATH:$(pwd) 153 154 On Windows it would be (while being in the Conan root directory): 155 156 .. code-block:: bash 157 158 $ set PYTHONPATH=. 159 160 Ensure that your ``cmake`` has version 2.8 or later. You can see the 161 version with the following command: 162 163 .. 
code-block:: bash 164 165 $ cmake --version 166 167 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your 168 operating system and your requirements. 169 170 These should work for the GCC from ``build-essential`` on Ubuntu 14.04: 171 172 .. code-block:: bash 173 174 $ export CONAN_COMPILER=gcc 175 $ export CONAN_COMPILER_VERSION=4.8 176 177 These should work for OS X: 178 179 .. code-block:: bash 180 181 $ export CONAN_COMPILER=clang 182 $ export CONAN_COMPILER_VERSION=3.5 183 184 You can run the actual tests like this: 185 186 .. code-block:: bash 187 188 $ python -m pytest . 189 190 191 There are a couple of test attributes defined, as ``slow`` that you can use 192 to filter the tests, and do not execute them: 193 194 .. code-block:: bash 195 196 $ python -m pytest . -m "not slow" 197 198 A few minutes later it should print ``OK``: 199 200 .. code-block:: bash 201 202 ............................................................................................ 203 ---------------------------------------------------------------------- 204 Ran 146 tests in 50.993s 205 206 OK 207 208 To run specific tests, you can specify the test name too, something like: 209 210 .. code-block:: bash 211 212 $ python -m pytest conans/test/unittests/client/cmd/export_test.py::ExportTest::test_export_warning -s 213 214 The ``-s`` argument can be useful to see some output that otherwise is captured by pytest. 215 216 Also, you can run tests against an instance of Artifactory. Those tests should add the attribute 217 ``artifactory_ready``. 218 219 .. code-block:: bash 220 221 $ python -m pytest . -m artifactory_ready 222 223 Some environment variables have to be defined to run them. For example, for an 224 Artifactory instance that is running on the localhost with default user and password configured, the 225 variables could take the values: 226 227 .. code-block:: bash 228 229 $ export CONAN_TEST_WITH_ARTIFACTORY=1 230 $ export ARTIFACTORY_DEFAULT_URL=http://localhost:8081/artifactory 231 $ export ARTIFACTORY_DEFAULT_USER=admin 232 $ export ARTIFACTORY_DEFAULT_PASSWORD=password 233 234 ``ARTIFACTORY_DEFAULT_URL`` is the base url for the Artifactory repo, not one for a specific 235 repository. Running the tests with a real Artifactory instance will create repos on the fly so please 236 use a separate server for testing purposes. 237 238 License 239 ------- 240 241 `MIT LICENSE <./LICENSE.md>`__ 242 243 .. |Build Status Develop| image:: https://ci.conan.io/buildStatus/icon?job=ConanTestSuite/develop 244 :target: https://ci.conan.io/job/ConanTestSuite/job/develop/ 245 246 .. |Develop climate| image:: https://api.codeclimate.com/v1/badges/081b53e570d5220b34e4/maintainability.svg 247 :target: https://codeclimate.com/github/conan-io/conan/maintainability 248 249 .. |Logo| image:: https://conan.io/img/jfrog_conan_logo.png 250 251 252 .. _`pip docs`: https://pip.pypa.io/en/stable/installation/ 253 [end of README.rst] [start of conans/client/generators/ycm.py] 1 import json 2 3 from conans.model import Generator 4 5 6 class YouCompleteMeGenerator(Generator): 7 template = ''' 8 # This file is NOT licensed under the GPLv3, which is the license for the rest 9 # of YouCompleteMe. 10 # 11 # Here's the license text for this file: 12 # 13 # This is free and unencumbered software released into the public domain. 
14 # 15 # Anyone is free to copy, modify, publish, use, compile, sell, or 16 # distribute this software, either in source code form or as a compiled 17 # binary, for any purpose, commercial or non-commercial, and by any 18 # means. 19 # 20 # In jurisdictions that recognize copyright laws, the author or authors 21 # of this software dedicate any and all copyright interest in the 22 # software to the public domain. We make this dedication for the benefit 23 # of the public at large and to the detriment of our heirs and 24 # successors. We intend this dedication to be an overt act of 25 # relinquishment in perpetuity of all present and future rights to this 26 # software under copyright law. 27 # 28 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 29 # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 30 # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 31 # IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR 32 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 33 # ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 34 # OTHER DEALINGS IN THE SOFTWARE. 35 # 36 # For more information, please refer to <http://unlicense.org/> 37 38 import os 39 import json 40 import ycm_core 41 import logging 42 43 44 _logger = logging.getLogger(__name__) 45 46 47 def DirectoryOfThisScript(): 48 return os.path.dirname( os.path.abspath( __file__ ) ) 49 50 51 # These are the compilation flags that will be used in case there's no 52 # compilation database set (by default, one is not set). 53 # CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR. 54 flags = [ 55 '-x', 'c++' 56 ] 57 58 conan_flags = json.loads(open("conan_ycm_flags.json", "r").read()) 59 60 flags.extend(conan_flags["flags"]) 61 flags.extend(conan_flags["defines"]) 62 flags.extend(conan_flags["includes"]) 63 64 65 # Set this to the absolute path to the folder (NOT the file!) containing the 66 # compile_commands.json file to use that instead of 'flags'. See here for 67 # more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html 68 # 69 # You can get CMake to generate this file for you by adding: 70 # set( CMAKE_EXPORT_COMPILE_COMMANDS 1 ) 71 # to your CMakeLists.txt file. 72 # 73 # Most projects will NOT need to set this to anything; you can just change the 74 # 'flags' list of compilation flags. Notice that YCM itself uses that approach. 
75 compilation_database_folder = os.path.join(DirectoryOfThisScript(), 'Debug') 76 77 if os.path.exists( compilation_database_folder ): 78 database = ycm_core.CompilationDatabase( compilation_database_folder ) 79 if not database.DatabaseSuccessfullyLoaded(): 80 _logger.warn("Failed to load database") 81 database = None 82 else: 83 database = None 84 85 SOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm' ] 86 87 def GetAbsolutePath(include_path, working_directory): 88 if os.path.isabs(include_path): 89 return include_path 90 return os.path.join(working_directory, include_path) 91 92 93 def MakeRelativePathsInFlagsAbsolute( flags, working_directory ): 94 if not working_directory: 95 return list( flags ) 96 new_flags = [] 97 make_next_absolute = False 98 path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ] 99 for flag in flags: 100 new_flag = flag 101 102 if make_next_absolute: 103 make_next_absolute = False 104 new_flag = GetAbsolutePath(flag, working_directory) 105 106 for path_flag in path_flags: 107 if flag == path_flag: 108 make_next_absolute = True 109 break 110 111 if flag.startswith( path_flag ): 112 path = flag[ len( path_flag ): ] 113 new_flag = flag[:len(path_flag)] + GetAbsolutePath(path, working_directory) 114 break 115 116 if new_flag: 117 new_flags.append( new_flag ) 118 return new_flags 119 120 121 def IsHeaderFile( filename ): 122 extension = os.path.splitext( filename )[ 1 ] 123 return extension.lower() in [ '.h', '.hxx', '.hpp', '.hh' ] 124 125 126 def GetCompilationInfoForFile( filename ): 127 # The compilation_commands.json file generated by CMake does not have entries 128 # for header files. So we do our best by asking the db for flags for a 129 # corresponding source file, if any. If one exists, the flags for that file 130 # should be good enough. 
131 if IsHeaderFile( filename ): 132 basename = os.path.splitext( filename )[ 0 ] 133 for extension in SOURCE_EXTENSIONS: 134 replacement_file = basename + extension 135 if os.path.exists( replacement_file ): 136 compilation_info = database.GetCompilationInfoForFile( replacement_file ) 137 if compilation_info.compiler_flags_: 138 return compilation_info 139 return None 140 return database.GetCompilationInfoForFile( filename ) 141 142 143 def Settings( filename, **kwargs ): 144 relative_to = None 145 compiler_flags = None 146 147 if database: 148 # Bear in mind that compilation_info.compiler_flags_ does NOT return a 149 # python list, but a "list-like" StringVec object 150 compilation_info = GetCompilationInfoForFile( filename ) 151 if compilation_info is None: 152 relative_to = DirectoryOfThisScript() 153 compiler_flags = flags 154 else: 155 relative_to = compilation_info.compiler_working_dir_ 156 compiler_flags = compilation_info.compiler_flags_ 157 158 else: 159 relative_to = DirectoryOfThisScript() 160 compiler_flags = flags 161 162 final_flags = MakeRelativePathsInFlagsAbsolute( compiler_flags, relative_to ) 163 for flag in final_flags: 164 if flag.startswith("-W"): 165 final_flags.remove(flag) 166 _logger.info("Final flags for %s are %s" % (filename, ' '.join(final_flags))) 167 168 return {{ 169 'flags': final_flags + ["-I/usr/include", "-I/usr/include/c++/{cxx_version}"], 170 'do_cache': True 171 }} 172 ''' 173 174 @property 175 def filename(self): 176 pass 177 178 @property 179 def content(self): 180 def prefixed(prefix, values): 181 return [prefix + x for x in values] 182 183 conan_flags = { 184 "includes": prefixed("-isystem", self.deps_build_info.include_paths), 185 "defines": prefixed("-D", self.deps_build_info.defines), 186 "flags": self.deps_build_info.cxxflags 187 } 188 189 cxx_version = '' 190 try: 191 cxx_version = str(self.settings.compiler.version).split('.')[0] 192 except Exception: 193 pass 194 195 ycm_data = self.template.format(cxx_version=cxx_version) 196 return {"conan_ycm_extra_conf.py": ycm_data, 197 "conan_ycm_flags.json": json.dumps(conan_flags, indent=2)} 198 [end of conans/client/generators/ycm.py] [start of conans/client/installer.py] 1 import os 2 import shutil 3 import textwrap 4 import time 5 from multiprocessing.pool import ThreadPool 6 7 from conans.client import tools 8 from conans.client.conanfile.build import run_build_method 9 from conans.client.conanfile.package import run_package_method 10 from conans.client.file_copier import report_copied_files 11 from conans.client.generators import TXTGenerator, write_toolchain 12 from conans.client.graph.graph import BINARY_BUILD, BINARY_CACHE, BINARY_DOWNLOAD, BINARY_EDITABLE, \ 13 BINARY_MISSING, BINARY_SKIP, BINARY_UPDATE, BINARY_UNKNOWN, CONTEXT_HOST, BINARY_INVALID 14 from conans.client.importer import remove_imports, run_imports 15 from conans.client.packager import update_package_metadata 16 from conans.client.recorder.action_recorder import INSTALL_ERROR_BUILDING, INSTALL_ERROR_MISSING, \ 17 INSTALL_ERROR_MISSING_BUILD_FOLDER 18 from conans.client.source import retrieve_exports_sources, config_source 19 from conans.client.tools.env import pythonpath 20 from conans.errors import (ConanException, ConanExceptionInUserConanfileMethod, 21 conanfile_exception_formatter, ConanInvalidConfiguration) 22 from conans.model.build_info import CppInfo, DepCppInfo, CppInfoDefaultValues 23 from conans.model.conan_file import ConanFile 24 from conans.model.editable_layout import EditableLayout 25 from 
conans.model.env_info import EnvInfo 26 from conans.model.graph_info import GraphInfo 27 from conans.model.graph_lock import GraphLockFile 28 from conans.model.info import PACKAGE_ID_UNKNOWN 29 from conans.model.new_build_info import NewCppInfo, fill_old_cppinfo 30 from conans.model.ref import PackageReference 31 from conans.model.user_info import DepsUserInfo 32 from conans.model.user_info import UserInfo 33 from conans.paths import BUILD_INFO, CONANINFO, RUN_LOG_NAME 34 from conans.util.env_reader import get_env 35 from conans.util.files import clean_dirty, is_dirty, make_read_only, mkdir, rmdir, save, set_dirty 36 from conans.util.log import logger 37 from conans.util.tracer import log_package_built, log_package_got_from_local_cache 38 39 40 def build_id(conan_file): 41 if hasattr(conan_file, "build_id"): 42 # construct new ConanInfo 43 build_id_info = conan_file.info.copy() 44 conan_file.info_build = build_id_info 45 # effectively call the user function to change the package values 46 with conanfile_exception_formatter(str(conan_file), "build_id"): 47 conan_file.build_id() 48 # compute modified ID 49 return build_id_info.package_id() 50 return None 51 52 53 def add_env_conaninfo(conan_file, subtree_libnames): 54 for package_name, env_vars in conan_file._conan_env_values.data.items(): 55 for name, value in env_vars.items(): 56 if not package_name or package_name in subtree_libnames or \ 57 package_name == conan_file.name: 58 conan_file.info.env_values.add(name, value, package_name) 59 60 61 class _PackageBuilder(object): 62 def __init__(self, cache, output, hook_manager, remote_manager, generators): 63 self._cache = cache 64 self._output = output 65 self._hook_manager = hook_manager 66 self._remote_manager = remote_manager 67 self._generator_manager = generators 68 69 def _get_build_folder(self, conanfile, package_layout, pref, keep_build, recorder): 70 # Build folder can use a different package_ID if build_id() is defined. 
71 # This function decides if the build folder should be re-used (not build again) 72 # and returns the build folder 73 new_id = build_id(conanfile) 74 build_pref = PackageReference(pref.ref, new_id) if new_id else pref 75 build_folder = package_layout.build(build_pref) 76 77 if is_dirty(build_folder): 78 self._output.warn("Build folder is dirty, removing it: %s" % build_folder) 79 rmdir(build_folder) 80 clean_dirty(build_folder) 81 82 # Decide if the build folder should be kept 83 skip_build = conanfile.develop and keep_build 84 if skip_build: 85 self._output.info("Won't be built as specified by --keep-build") 86 if not os.path.exists(build_folder): 87 msg = "--keep-build specified, but build folder not found" 88 recorder.package_install_error(pref, INSTALL_ERROR_MISSING_BUILD_FOLDER, 89 msg, remote_name=None) 90 raise ConanException(msg) 91 elif build_pref != pref and os.path.exists(build_folder) and hasattr(conanfile, "build_id"): 92 self._output.info("Won't be built, using previous build folder as defined in build_id()") 93 skip_build = True 94 95 return build_folder, skip_build 96 97 def _prepare_sources(self, conanfile, pref, package_layout, remotes): 98 export_folder = package_layout.export() 99 export_source_folder = package_layout.export_sources() 100 scm_sources_folder = package_layout.scm_sources() 101 conanfile_path = package_layout.conanfile() 102 source_folder = package_layout.source() 103 104 retrieve_exports_sources(self._remote_manager, self._cache, conanfile, pref.ref, remotes) 105 106 conanfile.folders.set_base_source(source_folder) 107 conanfile.folders.set_base_build(None) 108 conanfile.folders.set_base_package(None) 109 110 config_source(export_folder, export_source_folder, scm_sources_folder, 111 conanfile, self._output, conanfile_path, pref.ref, 112 self._hook_manager, self._cache) 113 114 @staticmethod 115 def _copy_sources(conanfile, source_folder, build_folder): 116 # Copies the sources to the build-folder, unless no_copy_source is defined 117 _remove_folder_raising(build_folder) 118 if not getattr(conanfile, 'no_copy_source', False): 119 conanfile.output.info('Copying sources to build folder') 120 try: 121 shutil.copytree(source_folder, build_folder, symlinks=True) 122 except Exception as e: 123 msg = str(e) 124 if "206" in msg: # System error shutil.Error 206: Filename or extension too long 125 msg += "\nUse short_paths=True if paths too long" 126 raise ConanException("%s\nError copying sources to build folder" % msg) 127 logger.debug("BUILD: Copied to %s", build_folder) 128 logger.debug("BUILD: Files copied %s", ",".join(os.listdir(build_folder))) 129 130 def _build(self, conanfile, pref): 131 # Read generators from conanfile and generate the needed files 132 logger.info("GENERATORS: Writing generators") 133 self._generator_manager.write_generators(conanfile, conanfile.build_folder, 134 conanfile.generators_folder, self._output) 135 136 logger.info("TOOLCHAIN: Writing toolchain") 137 write_toolchain(conanfile, conanfile.generators_folder, self._output) 138 139 # Build step might need DLLs, binaries as protoc to generate source files 140 # So execute imports() before build, storing the list of copied_files 141 142 copied_files = run_imports(conanfile) 143 144 try: 145 mkdir(conanfile.build_folder) 146 with tools.chdir(conanfile.build_folder): 147 run_build_method(conanfile, self._hook_manager, reference=pref.ref, package_id=pref.id) 148 self._output.success("Package '%s' built" % pref.id) 149 self._output.info("Build folder %s" % conanfile.build_folder) 150 
except Exception as exc: 151 self._output.writeln("") 152 self._output.error("Package '%s' build failed" % pref.id) 153 self._output.warn("Build folder %s" % conanfile.build_folder) 154 if isinstance(exc, ConanExceptionInUserConanfileMethod): 155 raise exc 156 raise ConanException(exc) 157 finally: 158 # Now remove all files that were imported with imports() 159 remove_imports(conanfile, copied_files, self._output) 160 161 def _package(self, conanfile, pref, package_layout, conanfile_path): 162 # FIXME: Is weak to assign here the recipe_hash 163 manifest = package_layout.recipe_manifest() 164 conanfile.info.recipe_hash = manifest.summary_hash 165 166 # Creating ***info.txt files 167 save(os.path.join(conanfile.folders.base_build, CONANINFO), conanfile.info.dumps()) 168 self._output.info("Generated %s" % CONANINFO) 169 save(os.path.join(conanfile.folders.base_build, BUILD_INFO), 170 TXTGenerator(conanfile).content) 171 self._output.info("Generated %s" % BUILD_INFO) 172 173 package_id = pref.id 174 # Do the actual copy, call the conanfile.package() method 175 # While installing, the infos goes to build folder 176 conanfile.folders.set_base_install(conanfile.folders.base_build) 177 178 prev = run_package_method(conanfile, package_id, self._hook_manager, conanfile_path, 179 pref.ref) 180 181 update_package_metadata(prev, package_layout, package_id, pref.ref.revision) 182 183 if get_env("CONAN_READ_ONLY_CACHE", False): 184 make_read_only(conanfile.folders.base_package) 185 # FIXME: Conan 2.0 Clear the registry entry (package ref) 186 return prev 187 188 def build_package(self, node, keep_build, recorder, remotes): 189 t1 = time.time() 190 191 conanfile = node.conanfile 192 pref = node.pref 193 194 package_layout = self._cache.package_layout(pref.ref, conanfile.short_paths) 195 base_source = package_layout.source() 196 conanfile_path = package_layout.conanfile() 197 base_package = package_layout.package(pref) 198 199 base_build, skip_build = self._get_build_folder(conanfile, package_layout, 200 pref, keep_build, recorder) 201 # PREPARE SOURCES 202 if not skip_build: 203 with package_layout.conanfile_write_lock(self._output): 204 set_dirty(base_build) 205 self._prepare_sources(conanfile, pref, package_layout, remotes) 206 self._copy_sources(conanfile, base_source, base_build) 207 208 # BUILD & PACKAGE 209 with package_layout.conanfile_read_lock(self._output): 210 self._output.info('Building your package in %s' % base_build) 211 try: 212 if getattr(conanfile, 'no_copy_source', False): 213 conanfile.folders.set_base_source(base_source) 214 else: 215 conanfile.folders.set_base_source(base_build) 216 217 conanfile.folders.set_base_build(base_build) 218 conanfile.folders.set_base_imports(base_build) 219 conanfile.folders.set_base_package(base_package) 220 221 if not skip_build: 222 # In local cache, generators folder always in build_folder 223 conanfile.folders.set_base_generators(base_build) 224 # In local cache, install folder always is build_folder 225 conanfile.folders.set_base_install(base_build) 226 self._build(conanfile, pref) 227 clean_dirty(base_build) 228 229 prev = self._package(conanfile, pref, package_layout, conanfile_path) 230 assert prev 231 node.prev = prev 232 log_file = os.path.join(base_build, RUN_LOG_NAME) 233 log_file = log_file if os.path.exists(log_file) else None 234 log_package_built(pref, time.time() - t1, log_file) 235 recorder.package_built(pref) 236 except ConanException as exc: 237 recorder.package_install_error(pref, INSTALL_ERROR_BUILDING, str(exc), 238 
remote_name=None) 239 raise exc 240 241 return node.pref 242 243 244 def _remove_folder_raising(folder): 245 try: 246 rmdir(folder) 247 except OSError as e: 248 raise ConanException("%s\n\nCouldn't remove folder, might be busy or open\n" 249 "Close any app using it, and retry" % str(e)) 250 251 252 def _handle_system_requirements(conan_file, pref, cache, out): 253 """ check first the system_reqs/system_requirements.txt existence, if not existing 254 check package/sha1/ 255 256 Used after remote package retrieving and before package building 257 """ 258 # TODO: Check if this idiom should be generalize to all methods defined in base ConanFile 259 # Instead of calling empty methods 260 if type(conan_file).system_requirements == ConanFile.system_requirements: 261 return 262 263 package_layout = cache.package_layout(pref.ref) 264 system_reqs_path = package_layout.system_reqs() 265 system_reqs_package_path = package_layout.system_reqs_package(pref) 266 if os.path.exists(system_reqs_path) or os.path.exists(system_reqs_package_path): 267 return 268 269 ret = call_system_requirements(conan_file, out) 270 271 try: 272 ret = str(ret or "") 273 except Exception: 274 out.warn("System requirements didn't return a string") 275 ret = "" 276 if getattr(conan_file, "global_system_requirements", None): 277 save(system_reqs_path, ret) 278 else: 279 save(system_reqs_package_path, ret) 280 281 282 def call_system_requirements(conanfile, output): 283 try: 284 return conanfile.system_requirements() 285 except Exception as e: 286 output.error("while executing system_requirements(): %s" % str(e)) 287 raise ConanException("Error in system requirements") 288 289 290 class BinaryInstaller(object): 291 """ main responsible of retrieving binary packages or building them from source 292 locally in case they are not found in remotes 293 """ 294 def __init__(self, app, recorder): 295 self._cache = app.cache 296 self._out = app.out 297 self._remote_manager = app.remote_manager 298 self._recorder = recorder 299 self._binaries_analyzer = app.binaries_analyzer 300 self._hook_manager = app.hook_manager 301 self._generator_manager = app.generator_manager 302 # Load custom generators from the cache, generators are part of the binary 303 # build and install. 
Generators loaded here from the cache will have precedence 304 # and overwrite possible generators loaded from packages (requires) 305 for generator_path in app.cache.generators: 306 app.loader.load_generators(generator_path) 307 308 def install(self, deps_graph, remotes, build_mode, update, profile_host, profile_build, 309 graph_lock, keep_build=False): 310 # order by levels and separate the root node (ref=None) from the rest 311 nodes_by_level = deps_graph.by_levels() 312 root_level = nodes_by_level.pop() 313 root_node = root_level[0] 314 # Get the nodes in order and if we have to build them 315 self._out.info("Installing (downloading, building) binaries...") 316 self._build(nodes_by_level, keep_build, root_node, profile_host, profile_build, 317 graph_lock, remotes, build_mode, update) 318 319 @staticmethod 320 def _classify(nodes_by_level): 321 missing, invalid, downloads = [], [], [] 322 for level in nodes_by_level: 323 for node in level: 324 if node.binary == BINARY_MISSING: 325 missing.append(node) 326 elif node.binary == BINARY_INVALID: 327 invalid.append(node) 328 elif node.binary in (BINARY_UPDATE, BINARY_DOWNLOAD): 329 downloads.append(node) 330 return missing, invalid, downloads 331 332 def _raise_missing(self, missing): 333 if not missing: 334 return 335 336 missing_prefs = set(n.pref for n in missing) # avoid duplicated 337 missing_prefs = list(sorted(missing_prefs)) 338 for pref in missing_prefs: 339 self._out.error("Missing binary: %s" % str(pref)) 340 self._out.writeln("") 341 342 # Report details just the first one 343 node = missing[0] 344 package_id = node.package_id 345 ref, conanfile = node.ref, node.conanfile 346 dependencies = [str(dep.dst) for dep in node.dependencies] 347 348 settings_text = ", ".join(conanfile.info.full_settings.dumps().splitlines()) 349 options_text = ", ".join(conanfile.info.full_options.dumps().splitlines()) 350 dependencies_text = ', '.join(dependencies) 351 requires_text = ", ".join(conanfile.info.requires.dumps().splitlines()) 352 353 msg = textwrap.dedent('''\ 354 Can't find a '%s' package for the specified settings, options and dependencies: 355 - Settings: %s 356 - Options: %s 357 - Dependencies: %s 358 - Requirements: %s 359 - Package ID: %s 360 ''' % (ref, settings_text, options_text, dependencies_text, requires_text, package_id)) 361 conanfile.output.warn(msg) 362 self._recorder.package_install_error(PackageReference(ref, package_id), 363 INSTALL_ERROR_MISSING, msg) 364 missing_pkgs = "', '".join([str(pref.ref) for pref in missing_prefs]) 365 if len(missing_prefs) >= 5: 366 build_str = "--build=missing" 367 else: 368 build_str = " ".join(["--build=%s" % pref.ref.name for pref in missing_prefs]) 369 370 raise ConanException(textwrap.dedent('''\ 371 Missing prebuilt package for '%s' 372 Try to build from sources with '%s' 373 Use 'conan search <reference> --table table.html' 374 Or read 'http://docs.conan.io/en/latest/faq/troubleshooting.html#error-missing-prebuilt-package' 375 ''' % (missing_pkgs, build_str))) 376 377 def _download(self, downloads, processed_package_refs): 378 """ executes the download of packages (both download and update), only once for a given 379 PREF, even if node duplicated 380 :param downloads: all nodes to be downloaded or updated, included repetitions 381 """ 382 if not downloads: 383 return 384 385 download_nodes = [] 386 for node in downloads: 387 pref = node.pref 388 bare_pref = PackageReference(pref.ref, pref.id) 389 if bare_pref in processed_package_refs: 390 continue 391 processed_package_refs[bare_pref] 
= pref.revision 392 assert node.prev, "PREV for %s is None" % str(node.pref) 393 download_nodes.append(node) 394 395 def _download(n): 396 npref = n.pref 397 layout = self._cache.package_layout(npref.ref, n.conanfile.short_paths) 398 # We cannot embed the package_lock inside the remote.get_package() 399 # because the handle_node_cache has its own lock 400 with layout.package_lock(pref): 401 self._download_pkg(layout, n) 402 403 parallel = self._cache.config.parallel_download 404 if parallel is not None: 405 self._out.info("Downloading binary packages in %s parallel threads" % parallel) 406 thread_pool = ThreadPool(parallel) 407 thread_pool.map(_download, [n for n in download_nodes]) 408 thread_pool.close() 409 thread_pool.join() 410 else: 411 for node in download_nodes: 412 _download(node) 413 414 def _download_pkg(self, layout, node): 415 self._remote_manager.get_package(node.conanfile, node.pref, layout, node.binary_remote, 416 node.conanfile.output, self._recorder) 417 418 def _build(self, nodes_by_level, keep_build, root_node, profile_host, profile_build, graph_lock, 419 remotes, build_mode, update): 420 using_build_profile = bool(profile_build) 421 missing, invalid, downloads = self._classify(nodes_by_level) 422 if invalid: 423 msg = ["There are invalid packages (packages that cannot exist for this configuration):"] 424 for node in invalid: 425 msg.append("{}: Invalid ID: {}".format(node.conanfile, node.conanfile.info.invalid)) 426 raise ConanInvalidConfiguration("\n".join(msg)) 427 self._raise_missing(missing) 428 processed_package_refs = {} 429 self._download(downloads, processed_package_refs) 430 431 for level in nodes_by_level: 432 for node in level: 433 ref, conan_file = node.ref, node.conanfile 434 output = conan_file.output 435 436 self._propagate_info(node, using_build_profile) 437 if node.binary == BINARY_EDITABLE: 438 self._handle_node_editable(node, profile_host, profile_build, graph_lock) 439 # Need a temporary package revision for package_revision_mode 440 # Cannot be PREV_UNKNOWN otherwise the consumers can't compute their packageID 441 node.prev = "editable" 442 else: 443 if node.binary == BINARY_SKIP: # Privates not necessary 444 continue 445 assert ref.revision is not None, "Installer should receive RREV always" 446 if node.binary == BINARY_UNKNOWN: 447 self._binaries_analyzer.reevaluate_node(node, remotes, build_mode, update) 448 if node.binary == BINARY_MISSING: 449 self._raise_missing([node]) 450 _handle_system_requirements(conan_file, node.pref, self._cache, output) 451 self._handle_node_cache(node, keep_build, processed_package_refs, remotes) 452 453 # Finally, propagate information to root node (ref=None) 454 self._propagate_info(root_node, using_build_profile) 455 456 def _handle_node_editable(self, node, profile_host, profile_build, graph_lock): 457 # Get source of information 458 conanfile = node.conanfile 459 ref = node.ref 460 package_layout = self._cache.package_layout(ref) 461 base_path = package_layout.base_folder() 462 self._call_package_info(conanfile, package_folder=base_path, ref=ref, is_editable=True) 463 464 # New editables mechanism based on Folders 465 if hasattr(conanfile, "layout"): 466 conanfile.folders.set_base_package(base_path) 467 conanfile.folders.set_base_source(base_path) 468 conanfile.folders.set_base_build(base_path) 469 conanfile.folders.set_base_install(base_path) 470 conanfile.folders.set_base_imports(base_path) 471 472 output = conanfile.output 473 output.info("Rewriting files of editable package " 474 "'{}' at 
'{}'".format(conanfile.name, conanfile.generators_folder)) 475 self._generator_manager.write_generators(conanfile, conanfile.install_folder, 476 conanfile.generators_folder, output) 477 write_toolchain(conanfile, conanfile.generators_folder, output) 478 output.info("Generated toolchain") 479 graph_info_node = GraphInfo(profile_host, root_ref=node.ref) 480 graph_info_node.options = node.conanfile.options.values 481 graph_info_node.graph_lock = graph_lock 482 graph_info_node.save(base_path) 483 output.info("Generated conan.lock") 484 copied_files = run_imports(conanfile) 485 report_copied_files(copied_files, output) 486 return 487 488 node.conanfile.cpp_info.filter_empty = False 489 # OLD EDITABLE LAYOUTS: 490 # Try with package-provided file 491 editable_cpp_info = package_layout.editable_cpp_info() 492 if editable_cpp_info: 493 editable_cpp_info.apply_to(ref, 494 conanfile.cpp_info, 495 settings=conanfile.settings, 496 options=conanfile.options) 497 build_folder = editable_cpp_info.folder(ref, EditableLayout.BUILD_FOLDER, 498 settings=conanfile.settings, 499 options=conanfile.options) 500 if build_folder is not None: 501 build_folder = os.path.join(base_path, build_folder) 502 output = conanfile.output 503 self._generator_manager.write_generators(conanfile, build_folder, build_folder, output) 504 write_toolchain(conanfile, build_folder, output) 505 save(os.path.join(build_folder, CONANINFO), conanfile.info.dumps()) 506 output.info("Generated %s" % CONANINFO) 507 508 graph_info_node = GraphInfo(profile_host, root_ref=node.ref) 509 graph_info_node.options = node.conanfile.options.values 510 graph_info_node.graph_lock = graph_lock 511 graph_info_node.save(build_folder) 512 output.info("Generated graphinfo") 513 graph_lock_file = GraphLockFile(profile_host, profile_build, graph_lock) 514 graph_lock_file.save(os.path.join(build_folder, "conan.lock")) 515 516 save(os.path.join(build_folder, BUILD_INFO), TXTGenerator(conanfile).content) 517 output.info("Generated %s" % BUILD_INFO) 518 # Build step might need DLLs, binaries as protoc to generate source files 519 # So execute imports() before build, storing the list of copied_files 520 conanfile.folders.set_base_imports(build_folder) 521 copied_files = run_imports(conanfile) 522 report_copied_files(copied_files, output) 523 524 def _handle_node_cache(self, node, keep_build, processed_package_references, remotes): 525 pref = node.pref 526 assert pref.id, "Package-ID without value" 527 assert pref.id != PACKAGE_ID_UNKNOWN, "Package-ID error: %s" % str(pref) 528 conanfile = node.conanfile 529 output = conanfile.output 530 531 layout = self._cache.package_layout(pref.ref, conanfile.short_paths) 532 533 with layout.package_lock(pref): 534 bare_pref = PackageReference(pref.ref, pref.id) 535 processed_prev = processed_package_references.get(bare_pref) 536 if processed_prev is None: # This package-id has not been processed before 537 if node.binary == BINARY_BUILD: 538 assert node.prev is None, "PREV for %s to be built should be None" % str(pref) 539 layout.package_remove(pref) 540 with layout.set_dirty_context_manager(pref): 541 pref = self._build_package(node, output, keep_build, remotes) 542 assert node.prev, "Node PREV shouldn't be empty" 543 assert node.pref.revision, "Node PREF revision shouldn't be empty" 544 assert pref.revision is not None, "PREV for %s to be built is None" % str(pref) 545 elif node.binary in (BINARY_UPDATE, BINARY_DOWNLOAD): 546 # this can happen after a re-evaluation of packageID with Package_ID_unknown 547 
self._download_pkg(layout, node) 548 elif node.binary == BINARY_CACHE: 549 assert node.prev, "PREV for %s is None" % str(pref) 550 output.success('Already installed!') 551 log_package_got_from_local_cache(pref) 552 self._recorder.package_fetched_from_cache(pref) 553 processed_package_references[bare_pref] = node.prev 554 else: 555 # We need to update the PREV of this node, as its processing has been skipped, 556 # but it could be that another node with same PREF was built and obtained a new PREV 557 node.prev = processed_prev 558 559 package_folder = layout.package(pref) 560 assert os.path.isdir(package_folder), ("Package '%s' folder must exist: %s\n" 561 % (str(pref), package_folder)) 562 # Call the info method 563 self._call_package_info(conanfile, package_folder, ref=pref.ref, is_editable=False) 564 self._recorder.package_cpp_info(pref, conanfile.cpp_info) 565 566 def _build_package(self, node, output, keep_build, remotes): 567 conanfile = node.conanfile 568 # It is necessary to complete the sources of python requires, which might be used 569 # Only the legacy python_requires allow this 570 python_requires = getattr(conanfile, "python_requires", None) 571 if python_requires and isinstance(python_requires, dict): # Old legacy python_requires 572 for python_require in python_requires.values(): 573 assert python_require.ref.revision is not None, \ 574 "Installer should receive python_require.ref always" 575 retrieve_exports_sources(self._remote_manager, self._cache, 576 python_require.conanfile, python_require.ref, remotes) 577 578 builder = _PackageBuilder(self._cache, output, self._hook_manager, self._remote_manager, 579 self._generator_manager) 580 pref = builder.build_package(node, keep_build, self._recorder, remotes) 581 if node.graph_lock_node: 582 node.graph_lock_node.prev = pref.revision 583 return pref 584 585 def _propagate_info(self, node, using_build_profile): 586 # it is necessary to recompute 587 # the node transitive information necessary to compute the package_id 588 # as it will be used by reevaluate_node() when package_revision_mode is used and 589 # PACKAGE_ID_UNKNOWN happens due to unknown revisions 590 self._binaries_analyzer.package_id_transitive_reqs(node) 591 # Get deps_cpp_info from upstream nodes 592 node_order = [n for n in node.public_closure if n.binary != BINARY_SKIP] 593 # List sort is stable, will keep the original order of the closure, but prioritize levels 594 conan_file = node.conanfile 595 # FIXME: Not the best place to assign the _conan_using_build_profile 596 conan_file._conan_using_build_profile = using_build_profile 597 transitive = [it for it in node.transitive_closure.values()] 598 599 br_host = [] 600 for it in node.dependencies: 601 if it.require.build_require_context == CONTEXT_HOST: 602 br_host.extend(it.dst.transitive_closure.values()) 603 604 # Initialize some members if we are using different contexts 605 if using_build_profile: 606 conan_file.user_info_build = DepsUserInfo() 607 608 for n in node_order: 609 if n not in transitive: 610 conan_file.output.info("Applying build-requirement: %s" % str(n.ref)) 611 612 dep_cpp_info = n.conanfile._conan_dep_cpp_info 613 614 if not using_build_profile: # Do not touch anything 615 conan_file.deps_user_info[n.ref.name] = n.conanfile.user_info 616 conan_file.deps_cpp_info.add(n.ref.name, dep_cpp_info) 617 conan_file.deps_env_info.update(n.conanfile.env_info, n.ref.name) 618 else: 619 if n in transitive or n in br_host: 620 conan_file.deps_user_info[n.ref.name] = n.conanfile.user_info 621 
conan_file.deps_cpp_info.add(n.ref.name, dep_cpp_info) 622 else: 623 conan_file.user_info_build[n.ref.name] = n.conanfile.user_info 624 env_info = EnvInfo() 625 env_info._values_ = n.conanfile.env_info._values_.copy() 626 # Add cpp_info.bin_paths/lib_paths to env_info (it is needed for runtime) 627 env_info.DYLD_LIBRARY_PATH.extend(dep_cpp_info.lib_paths) 628 env_info.DYLD_FRAMEWORK_PATH.extend(dep_cpp_info.framework_paths) 629 env_info.LD_LIBRARY_PATH.extend(dep_cpp_info.lib_paths) 630 env_info.PATH.extend(dep_cpp_info.bin_paths) 631 conan_file.deps_env_info.update(env_info, n.ref.name) 632 633 # Update the info but filtering the package values that not apply to the subtree 634 # of this current node and its dependencies. 635 subtree_libnames = [node.ref.name for node in node_order] 636 add_env_conaninfo(conan_file, subtree_libnames) 637 638 def _call_package_info(self, conanfile, package_folder, ref, is_editable): 639 conanfile.cpp_info = CppInfo(conanfile.name, package_folder) 640 conanfile.cpp_info.version = conanfile.version 641 conanfile.cpp_info.description = conanfile.description 642 643 conanfile.folders.set_base_package(package_folder) 644 conanfile.folders.set_base_source(None) 645 conanfile.folders.set_base_build(None) 646 conanfile.folders.set_base_install(None) 647 648 conanfile.env_info = EnvInfo() 649 conanfile.user_info = UserInfo() 650 651 # Get deps_cpp_info from upstream nodes 652 public_deps = [name for name, req in conanfile.requires.items() if not req.private 653 and not req.override] 654 conanfile.cpp_info.public_deps = public_deps 655 # Once the node is build, execute package info, so it has access to the 656 # package folder and artifacts 657 # Minimal pythonpath, not the whole context, make it 50% slower 658 # FIXME Conan 2.0, Remove old ways of reusing python code 659 with pythonpath(conanfile): 660 with tools.chdir(package_folder): 661 with conanfile_exception_formatter(str(conanfile), "package_info"): 662 self._hook_manager.execute("pre_package_info", conanfile=conanfile, 663 reference=ref) 664 if hasattr(conanfile, "layout"): 665 # Old cpp info without defaults (the defaults are in the new one) 666 conanfile.cpp_info = CppInfo(conanfile.name, package_folder, 667 default_values=CppInfoDefaultValues()) 668 if not is_editable: 669 package_cppinfo = conanfile.cpp.package.copy() 670 package_cppinfo.set_relative_base_folder(conanfile.folders.package) 671 # Copy the infos.package into the old cppinfo 672 fill_old_cppinfo(conanfile.cpp.package, conanfile.cpp_info) 673 else: 674 conanfile.cpp_info.filter_empty = False 675 676 conanfile.package_info() 677 678 if hasattr(conanfile, "layout") and is_editable: 679 # Adjust the folders of the layout to consolidate the rootfolder of the 680 # cppinfos inside 681 conanfile.folders.set_base_build(package_folder) 682 conanfile.folders.set_base_source(package_folder) 683 conanfile.folders.set_base_generators(package_folder) 684 685 # convert directory entries to be relative to the declared folders.build 686 build_cppinfo = conanfile.cpp.build.copy() 687 build_cppinfo.set_relative_base_folder(conanfile.folders.build) 688 689 # convert directory entries to be relative to the declared folders.source 690 source_cppinfo = conanfile.cpp.source.copy() 691 source_cppinfo.set_relative_base_folder(conanfile.folders.source) 692 693 full_editable_cppinfo = NewCppInfo() 694 full_editable_cppinfo.merge(source_cppinfo) 695 full_editable_cppinfo.merge(build_cppinfo) 696 # Paste the editable cpp_info but prioritizing it, only if a 697 # 
variable is not declared at build/source, the package will keep the value 698 fill_old_cppinfo(full_editable_cppinfo, conanfile.cpp_info) 699 700 if conanfile._conan_dep_cpp_info is None: 701 try: 702 if not is_editable and not hasattr(conanfile, "layout"): 703 # FIXME: The default for the cppinfo from build are not the same 704 # so this check fails when editable 705 # FIXME: Remove when new cppinfo model. If using the layout method 706 # the cppinfo object is filled from self.cpp.package new 707 # model and we cannot check if the defaults have been modified 708 # because it doesn't exist in the new model where the defaults 709 # for the components are always empty 710 conanfile.cpp_info._raise_incorrect_components_definition( 711 conanfile.name, conanfile.requires) 712 except ConanException as e: 713 raise ConanException("%s package_info(): %s" % (str(conanfile), e)) 714 conanfile._conan_dep_cpp_info = DepCppInfo(conanfile.cpp_info) 715 self._hook_manager.execute("post_package_info", conanfile=conanfile, 716 reference=ref) 717 [end of conans/client/installer.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conan-io/conan
10dc5360a487d1da00b10c4e3c87694dc0171e0d
Default ``cmake_layout()`` source folder

This layout assumes that the main library ``CMakeLists.txt`` is inside a ``src`` subfolder of the repo. That is probably not the most common user layout; most users will have a ``CMakeLists.txt`` in the root of the folder:

- How to model this?
- Should we change the ``cmake_layout()`` default to ``folders.source = "."``?
- Check the implications of defining the root as the source folder.
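For illustration only, a minimal sketch of a recipe that calls ``cmake_layout()``. The package name, version and ``build()`` body are invented, and the imports assume the ``conan.tools.layout`` / ``conan.tools.cmake`` modules present in this codebase:

```python
# Hypothetical recipe, only to show where cmake_layout() expects CMakeLists.txt.
from conans import ConanFile
from conan.tools.cmake import CMake
from conan.tools.layout import cmake_layout


class HelloConan(ConanFile):
    name = "hello"      # made-up name for illustration
    version = "0.1"
    settings = "os", "compiler", "build_type", "arch"
    exports_sources = "CMakeLists.txt", "src/*"

    def layout(self):
        # With the current default (folders.source = "src") the top-level
        # CMakeLists.txt must live under ./src; with the proposed default
        # (folders.source = ".") it would be expected at the repository root.
        cmake_layout(self)

    def build(self):
        cmake = CMake(self)
        cmake.configure()   # configure runs against the declared source folder
        cmake.build()
```

The practical difference between the two candidate defaults is only where ``cmake.configure()`` looks for the top-level ``CMakeLists.txt``: under ``src/`` today, at the repository root if ``folders.source`` becomes ``"."``.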
A good reason for having a `CMakeLists.txt` file in the project root is that it is used as the entry point by most IDEs. Having it in the `src` subdirectory could be problematic for some IDEs.
2021-09-13T10:27:15Z
<patch> diff --git a/conan/tools/layout/__init__.py b/conan/tools/layout/__init__.py --- a/conan/tools/layout/__init__.py +++ b/conan/tools/layout/__init__.py @@ -14,7 +14,7 @@ def cmake_layout(conanfile, generator=None): else: multi = False - conanfile.folders.source = "src" + conanfile.folders.source = "." if multi: conanfile.folders.build = "build" conanfile.folders.generators = "build/conan" @@ -23,7 +23,7 @@ def cmake_layout(conanfile, generator=None): conanfile.folders.build = "cmake-build-{}".format(build_type) conanfile.folders.generators = os.path.join(conanfile.folders.build, "conan") - conanfile.cpp.source.includedirs = ["."] + conanfile.cpp.source.includedirs = ["src"] if multi: conanfile.cpp.build.libdirs = ["{}".format(conanfile.settings.build_type)] conanfile.cpp.build.bindirs = ["{}".format(conanfile.settings.build_type)] diff --git a/conans/assets/templates/new_v2_cmake.py b/conans/assets/templates/new_v2_cmake.py --- a/conans/assets/templates/new_v2_cmake.py +++ b/conans/assets/templates/new_v2_cmake.py @@ -20,7 +20,7 @@ class {package_name}Conan(ConanFile): default_options = {{"shared": False, "fPIC": True}} # Sources are located in the same place as this recipe, copy them to the recipe - exports_sources = "src/*" + exports_sources = "CMakeLists.txt", "src/*" def config_options(self): if self.settings.os == "Windows": @@ -81,7 +81,7 @@ def test(self): find_package({name} CONFIG REQUIRED) -add_executable(example example.cpp) +add_executable(example src/example.cpp) target_link_libraries(example {name}::{name}) """ @@ -89,9 +89,9 @@ def test(self): cmake_v2 = """cmake_minimum_required(VERSION 3.15) project({name} CXX) -add_library({name} {name}.cpp) +add_library({name} src/{name}.cpp) -set_target_properties({name} PROPERTIES PUBLIC_HEADER "{name}.h") +set_target_properties({name} PROPERTIES PUBLIC_HEADER "src/{name}.h") install(TARGETS {name} DESTINATION "." PUBLIC_HEADER DESTINATION include RUNTIME DESTINATION bin @@ -216,12 +216,12 @@ def get_cmake_lib_files(name, version, package_name="Pkg"): package_name=package_name), "src/{}.cpp".format(name): source_cpp.format(name=name, version=version), "src/{}.h".format(name): source_h.format(name=name, version=version), - "src/CMakeLists.txt": cmake_v2.format(name=name, version=version), + "CMakeLists.txt": cmake_v2.format(name=name, version=version), "test_package/conanfile.py": test_conanfile_v2.format(name=name, version=version, package_name=package_name), "test_package/src/example.cpp": test_main.format(name=name), - "test_package/src/CMakeLists.txt": test_cmake_v2.format(name=name)} + "test_package/CMakeLists.txt": test_cmake_v2.format(name=name)} return files @@ -245,7 +245,7 @@ class {package_name}Conan(ConanFile): settings = "os", "compiler", "build_type", "arch" # Sources are located in the same place as this recipe, copy them to the recipe - exports_sources = "src/*" + exports_sources = "CMakeLists.txt", "src/*" def layout(self): cmake_layout(self) @@ -267,7 +267,7 @@ def package(self): cmake_exe_v2 = """cmake_minimum_required(VERSION 3.15) project({name} CXX) -add_executable({name} {name}.cpp main.cpp) +add_executable({name} src/{name}.cpp src/main.cpp) install(TARGETS {name} DESTINATION "." 
RUNTIME DESTINATION bin @@ -299,7 +299,7 @@ def get_cmake_exe_files(name, version, package_name="Pkg"): "src/{}.cpp".format(name): source_cpp.format(name=name, version=version), "src/{}.h".format(name): source_h.format(name=name, version=version), "src/main.cpp": test_main.format(name=name), - "src/CMakeLists.txt": cmake_exe_v2.format(name=name, version=version), + "CMakeLists.txt": cmake_exe_v2.format(name=name, version=version), "test_package/conanfile.py": test_conanfile_exe_v2.format(name=name, version=version, package_name=package_name) </patch>
[]
[]
conan-io__conan-3876
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [python_requires] imports inside python_required modules Title: [python_requires] imports inside python_required modules - [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md). - [x] I've specified the Conan version, operating system version and any tool that can be relevant. - [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion. Conan 1.8.2 Python 3.6.5 Windows 7 according to the official Conan documentation for python_requires, we created a base class package with helper functions in a separate python file (helper.py) which is imported from the base class with a regular Python import statement like described in the documentation. This worked perfectly for us as long as we had only one version of the base package. The problem occured after we created version 2.0 of the base package with one module using version 2.0 and another module still using version 1.0. The helper.py import statement inside the base package cannot distinguish from which version of the base package it is called and therefore always helper.py from the first version mentioned in a Conan file is used. Here are the steps to reproduce this problem. I hope it gets a little bit more clear then: **base/conanfile.py** ```python from conans import ConanFile import helper class Base(ConanFile): exports = "*.py" ``` **base/helper.py** ```python def getVersion(): return "1.0" ``` Conan command: `conan export . Base/1.0@user/channel` This exports Base/1.0@user/channel correctly. **module1/conanfile.py** ```python from conans import ConanFile, python_requires base = python_requires("Base/1.0@user/channel") class Module1(ConanFile): name = "module1" version = base.helper.getVersion() ``` Conan command: `conan export . user/channel`. This exports module1/1.0@user/channel correctly. **module2/conanfile.py** ```python from conans import ConanFile, python_requires base = python_requires("Base/1.0@user/channel") module1 = python_requires("module1/1.0@user/channel") class MyPackage(ConanFile): name = "module2" version = base.helper.getVersion() ``` Conan command: `conan export . user/channel`. This exports module2/1.0@user/channel correctly. So far everthing works well. Now we create a new version 2.0 of the Base package. In the new version we rename the helper.py method getVersion() to getVersionInADifferentWay(): **base/helper.py** ```python def getVersionInADifferentWay(): return "2.0" ``` Conan command: `conan export . Base/2.0@user/channel` This exports Base/2.0@user/channel correctly. Now module2 should use the new Base package 2.0 whereas module1 should still use the old version Base/1.0: **module2/conanfile.py** ```python from conans import ConanFile, python_requires base = python_requires("Base/2.0@user/channel") module1 = python_requires("module1/1.0@user/channel") class MyPackage(ConanFile): name = "module2" version = base.helper.getVersionInADifferentWay() ``` Conan command: `conan export . user/channel`. 
This leads to the following error: ``` ERROR: Unable to load conanfile in module2/V2.0/conanfile.py KeyError: 'module1/1.0@user/channel' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "Python3/lib/site-packages/conans/client/loader.py", line 235, in _parse_file loaded = imp.load_source(filename, conan_file_path) File "Python3/lib/imp.py", line 172, in load_source module = _load(spec) File "<frozen importlib._bootstrap>", line 684, in _load File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "module2/V2.0/conanfile.py", line 3, in <module> module1 = python_requires("module1/1.0@user/channel") File "Python3/lib/site-packages/conans/client/graph/python_requires.py", line 41, in __call__ module = imp.load_source(str(r).replace(".", "*"), path) File "Python3/lib/imp.py", line 172, in load_source module = _load(spec) File "<frozen importlib._bootstrap>", line 684, in _load File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File ".conan/data/module1/1.0/user/channel/export/conanfile.py", line 3, in <module> class Module1(ConanFile): File ".conan/data/module1/1.0/user/channel/export/conanfile.py", line 5, in Module1 version = base.helper.getVersion() AttributeError: module 'helper' has no attribute 'getVersion' ``` Could you please give us a hint or suggestions how we could solve this problem with Conan Thank You! </issue> <code> [start of README.rst] 1 Conan 2 ===== 3 4 A distributed, open-source, C/C++ package manager. 5 6 +------------------------+-------------------------+ 7 | **master** | **develop** | 8 +========================+=========================+ 9 | |Build Status Master| | |Build Status Develop| | 10 +------------------------+-------------------------+ 11 12 13 +------------------------+---------------------------+---------------------------------------------+ 14 | **Coverage master** | **Coverage develop** | **Coverage graph** | 15 +========================+===========================+=============================================+ 16 | |Master coverage| | |Develop coverage| | |Coverage graph| | 17 +------------------------+---------------------------+---------------------------------------------+ 18 19 20 Setup 21 ====== 22 23 From binaries 24 ------------- 25 26 We have installers for `most platforms here <http://conan.io>`__ but you 27 can run **conan** from sources if you want. 28 29 From pip 30 -------- 31 32 Conan is compatible with Python 2 and Python 3. 33 34 - Install pip following `pip docs`_. 35 - Install conan: 36 37 .. code-block:: bash 38 39 $ pip install conan 40 41 From Homebrew (OSx) 42 ------------------- 43 44 - Install Homebrew following `brew homepage`_. 45 46 .. code-block:: bash 47 48 $ brew update 49 $ brew install conan 50 51 From source 52 ----------- 53 54 You can run **conan** client and server in Windows, MacOS, and Linux. 55 56 - **Install pip following** `pip docs`_. 57 58 - **Clone conan repository:** 59 60 .. code-block:: bash 61 62 $ git clone https://github.com/conan-io/conan.git 63 64 - **Install in editable mode** 65 66 .. code-block:: bash 67 68 $ cd conan && sudo pip install -e . 69 70 If you are in Windows, using ``sudo`` is not required. 
71 72 - **You are ready, try to run conan:** 73 74 .. code-block:: 75 76 $ conan --help 77 78 Consumer commands 79 install Installs the requirements specified in a conanfile (.py or .txt). 80 config Manages configuration. Edits the conan.conf or installs config files. 81 get Gets a file or list a directory of a given reference or package. 82 info Gets information about the dependency graph of a recipe. 83 search Searches package recipes and binaries in the local cache or in a remote. 84 Creator commands 85 new Creates a new package recipe template with a 'conanfile.py'. 86 create Builds a binary package for recipe (conanfile.py) located in current dir. 87 upload Uploads a recipe and binary packages to a remote. 88 export Copies the recipe (conanfile.py & associated files) to your local cache. 89 export-pkg Exports a recipe & creates a package with given files calling 'package'. 90 test Test a package, consuming it with a conanfile recipe with a test() method. 91 Package development commands 92 source Calls your local conanfile.py 'source()' method. 93 build Calls your local conanfile.py 'build()' method. 94 package Calls your local conanfile.py 'package()' method. 95 Misc commands 96 profile Lists profiles in the '.conan/profiles' folder, or shows profile details. 97 remote Manages the remote list and the package recipes associated to a remote. 98 user Authenticates against a remote with user/pass, caching the auth token. 99 imports Calls your local conanfile.py or conanfile.txt 'imports' method. 100 copy Copies conan recipes and packages to another user/channel. 101 remove Removes packages or binaries matching pattern from local cache or remote. 102 alias Creates and exports an 'alias recipe'. 103 download Downloads recipe and binaries to the local cache, without using settings. 104 105 Conan commands. Type "conan <command> -h" for help 106 107 Running the tests 108 ================= 109 110 **Install python requirements** 111 112 .. code-block:: bash 113 114 $ pip install -r conans/requirements.txt 115 $ pip install -r conans/requirements_server.txt 116 $ pip install -r conans/requirements_dev.txt 117 118 119 Only in OSX: 120 121 122 .. code-block:: bash 123 124 $ pip install -r conans/requirements_osx.txt # You can omit this one if not running OSX 125 126 127 If you are not Windows and you are not using a python virtual environment, you will need to run these 128 commands using `sudo`. 129 130 Before you can run the tests, you need to set a few environment variables first. 131 132 .. code-block:: bash 133 134 $ export PYTHONPATH=$PYTHONPATH:$(pwd) 135 136 On Windows it would be (while being in the conan root directory): 137 138 .. code-block:: bash 139 140 $ set PYTHONPATH=. 141 142 Ensure that your ``cmake`` has version 2.8 or later. You can see the 143 version with the following command: 144 145 .. code-block:: bash 146 147 $ cmake --version 148 149 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your 150 operating system and your requirements. 151 152 These should work for the GCC from ``build-essential`` on Ubuntu 14.04: 153 154 .. code-block:: bash 155 156 $ export CONAN_COMPILER=gcc 157 $ export CONAN_COMPILER_VERSION=4.8 158 159 These should work for OS X: 160 161 .. code-block:: bash 162 163 $ export CONAN_COMPILER=clang 164 $ export CONAN_COMPILER_VERSION=3.5 165 166 Finally, there are some tests that use conan to package Go-lang 167 libraries, so you might **need to install go-lang** in your computer and 168 add it to the path. 
169 170 You can run the actual tests like this: 171 172 .. code-block:: bash 173 174 $ nosetests . 175 176 177 There are a couple of test attributes defined, as ``slow``, or ``golang`` that you can use 178 to filter the tests, and do not execute them: 179 180 .. code-block:: bash 181 182 $ nosetests . -a !golang 183 184 A few minutes later it should print ``OK``: 185 186 .. code-block:: bash 187 188 ............................................................................................ 189 ---------------------------------------------------------------------- 190 Ran 146 tests in 50.993s 191 192 OK 193 194 To run specific tests, you can specify the test name too, something like: 195 196 .. code-block:: bash 197 198 $ nosetests conans.test.command.config_install_test:ConfigInstallTest.install_file_test --nocapture 199 200 The ``--nocapture`` argument can be useful to see some output that otherwise is captured by nosetests. 201 202 License 203 ------- 204 205 `MIT LICENSE <./LICENSE.md>`__ 206 207 .. |Build Status Master| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/master 208 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/master 209 210 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop 211 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop 212 213 .. |Master coverage| image:: https://codecov.io/gh/conan-io/conan/branch/master/graph/badge.svg 214 :target: https://codecov.io/gh/conan-io/conan/branch/master 215 216 .. |Develop coverage| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graph/badge.svg 217 :target: https://codecov.io/gh/conan-io/conan/branch/develop 218 219 .. |Coverage graph| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graphs/tree.svg 220 :height: 50px 221 :width: 50 px 222 :alt: Conan develop coverage 223 224 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/ 225 226 .. 
_`brew homepage`: http://brew.sh/ 227 [end of README.rst] [start of conans/client/cmd/export.py] 1 import ast 2 import os 3 import shutil 4 import six 5 6 from conans.client.cmd.export_linter import conan_linter 7 from conans.client.file_copier import FileCopier 8 from conans.client.output import ScopedOutput 9 from conans.client.source import get_scm_data 10 from conans.errors import ConanException 11 from conans.model.manifest import FileTreeManifest 12 from conans.model.scm import SCM 13 from conans.paths import CONAN_MANIFEST, CONANFILE 14 from conans.search.search import search_recipes 15 from conans.util.files import save, rmdir, is_dirty, set_dirty, mkdir, load 16 from conans.util.log import logger 17 18 19 def export_alias(reference, target_reference, client_cache): 20 if reference.name != target_reference.name: 21 raise ConanException("An alias can only be defined to a package with the same name") 22 conanfile = """ 23 from conans import ConanFile 24 25 class AliasConanfile(ConanFile): 26 alias = "%s" 27 """ % str(target_reference) 28 29 export_path = client_cache.export(reference) 30 mkdir(export_path) 31 save(os.path.join(export_path, CONANFILE), conanfile) 32 mkdir(client_cache.export_sources(reference)) 33 digest = FileTreeManifest.create(export_path) 34 digest.save(export_path) 35 36 37 def cmd_export(conanfile_path, conanfile, reference, keep_source, output, client_cache, 38 hook_manager, registry): 39 """ Export the recipe 40 param conanfile_path: the original source directory of the user containing a 41 conanfile.py 42 """ 43 hook_manager.execute("pre_export", conanfile=conanfile, conanfile_path=conanfile_path, 44 reference=reference) 45 logger.debug("Exporting %s" % conanfile_path) 46 output.highlight("Exporting package recipe") 47 48 conan_linter(conanfile_path, output) 49 # Maybe a platform check could be added, but depends on disk partition 50 conan_ref_str = str(reference) 51 refs = search_recipes(client_cache, conan_ref_str, ignorecase=True) 52 if refs and reference not in refs: 53 raise ConanException("Cannot export package with same name but different case\n" 54 "You exported '%s' but already existing '%s'" 55 % (conan_ref_str, " ".join(str(s) for s in refs))) 56 57 with client_cache.conanfile_write_lock(reference): 58 _export_conanfile(conanfile_path, conanfile.output, client_cache, conanfile, reference, 59 keep_source) 60 conanfile_cache_path = client_cache.conanfile(reference) 61 hook_manager.execute("post_export", conanfile=conanfile, conanfile_path=conanfile_cache_path, 62 reference=reference) 63 64 65 def _capture_export_scm_data(conanfile, conanfile_dir, destination_folder, output, paths, conan_ref): 66 67 scm_src_file = paths.scm_folder(conan_ref) 68 if os.path.exists(scm_src_file): 69 os.unlink(scm_src_file) 70 71 scm_data = get_scm_data(conanfile) 72 73 if not scm_data or not (scm_data.capture_origin or scm_data.capture_revision): 74 return 75 76 scm = SCM(scm_data, conanfile_dir) 77 78 if scm_data.url == "auto": 79 origin = scm.get_qualified_remote_url() 80 if not origin: 81 raise ConanException("Repo origin cannot be deduced by 'auto'") 82 if scm.is_local_repository(): 83 output.warn("Repo origin looks like a local path: %s" % origin) 84 output.success("Repo origin deduced by 'auto': %s" % origin) 85 scm_data.url = origin 86 if scm_data.revision == "auto": 87 if not scm.is_pristine(): 88 output.warn("Repo status is not pristine: there might be modified files") 89 scm_data.revision = scm.get_revision() 90 output.success("Revision deduced by 'auto': %s" % 
scm_data.revision) 91 92 # Generate the scm_folder.txt file pointing to the src_path 93 src_path = scm.get_repo_root() 94 save(scm_src_file, src_path.replace("\\", "/")) 95 _replace_scm_data_in_conanfile(os.path.join(destination_folder, "conanfile.py"), 96 scm_data) 97 98 99 def _replace_scm_data_in_conanfile(conanfile_path, scm_data): 100 # Parsing and replacing the SCM field 101 content = load(conanfile_path) 102 headers = [] 103 104 if six.PY2: 105 # Workaround for https://bugs.python.org/issue22221 106 lines_without_headers = [] 107 lines = content.splitlines(True) 108 for line in lines: 109 if not lines_without_headers and line.startswith("#"): 110 headers.append(line) 111 else: 112 lines_without_headers.append(line) 113 content = ''.join(lines_without_headers) 114 115 lines = content.splitlines(True) 116 tree = ast.parse(content) 117 to_replace = [] 118 for i_body, item in enumerate(tree.body): 119 if isinstance(item, ast.ClassDef): 120 statements = item.body 121 for i, stmt in enumerate(item.body): 122 if isinstance(stmt, ast.Assign) and len(stmt.targets) == 1: 123 if isinstance(stmt.targets[0], ast.Name) and stmt.targets[0].id == "scm": 124 try: 125 if i + 1 == len(statements): # Last statement in my ClassDef 126 if i_body + 1 == len(tree.body): # Last statement over all 127 next_line = len(lines) 128 else: 129 next_line = tree.body[i_body+1].lineno - 1 130 else: 131 next_line = statements[i+1].lineno - 1 132 except IndexError: 133 next_line = stmt.lineno 134 replace = [line for line in lines[(stmt.lineno-1):next_line] 135 if line.strip()] 136 to_replace.append("".join(replace).lstrip()) 137 break 138 if len(to_replace) != 1: 139 raise ConanException("The conanfile.py defines more than one class level 'scm' attribute") 140 141 new_text = "scm = " + ",\n ".join(str(scm_data).split(",")) + "\n" 142 content = content.replace(to_replace[0], new_text) 143 content = content if not headers else ''.join(headers) + content 144 save(conanfile_path, content) 145 146 147 def _export_conanfile(conanfile_path, output, paths, conanfile, conan_ref, keep_source): 148 exports_folder = paths.export(conan_ref) 149 exports_source_folder = paths.export_sources(conan_ref, conanfile.short_paths) 150 previous_digest = _init_export_folder(exports_folder, exports_source_folder) 151 origin_folder = os.path.dirname(conanfile_path) 152 export_recipe(conanfile, origin_folder, exports_folder, output) 153 export_source(conanfile, origin_folder, exports_source_folder, output) 154 shutil.copy2(conanfile_path, os.path.join(exports_folder, CONANFILE)) 155 156 _capture_export_scm_data(conanfile, os.path.dirname(conanfile_path), exports_folder, 157 output, paths, conan_ref) 158 159 digest = FileTreeManifest.create(exports_folder, exports_source_folder) 160 161 if previous_digest and previous_digest == digest: 162 output.info("The stored package has not changed") 163 modified_recipe = False 164 digest = previous_digest # Use the old one, keep old timestamp 165 else: 166 output.success('A new %s version was exported' % CONANFILE) 167 output.info('Folder: %s' % exports_folder) 168 modified_recipe = True 169 digest.save(exports_folder) 170 171 # FIXME: Conan 2.0 Clear the registry entry if the recipe has changed 172 173 source = paths.source(conan_ref, conanfile.short_paths) 174 remove = False 175 if is_dirty(source): 176 output.info("Source folder is corrupted, forcing removal") 177 remove = True 178 elif modified_recipe and not keep_source and os.path.exists(source): 179 output.info("Package recipe modified in export, 
forcing source folder removal") 180 output.info("Use the --keep-source, -k option to skip it") 181 remove = True 182 if remove: 183 output.info("Removing 'source' folder, this can take a while for big packages") 184 try: 185 # remove only the internal 186 rmdir(source) 187 except BaseException as e: 188 output.error("Unable to delete source folder. " 189 "Will be marked as corrupted for deletion") 190 output.warn(str(e)) 191 set_dirty(source) 192 193 194 def _init_export_folder(destination_folder, destination_src_folder): 195 previous_digest = None 196 try: 197 if os.path.exists(destination_folder): 198 if os.path.exists(os.path.join(destination_folder, CONAN_MANIFEST)): 199 previous_digest = FileTreeManifest.load(destination_folder) 200 # Maybe here we want to invalidate cache 201 rmdir(destination_folder) 202 os.makedirs(destination_folder) 203 except Exception as e: 204 raise ConanException("Unable to create folder %s\n%s" % (destination_folder, str(e))) 205 try: 206 if os.path.exists(destination_src_folder): 207 rmdir(destination_src_folder) 208 os.makedirs(destination_src_folder) 209 except Exception as e: 210 raise ConanException("Unable to create folder %s\n%s" % (destination_src_folder, str(e))) 211 return previous_digest 212 213 214 def _classify_patterns(patterns): 215 patterns = patterns or [] 216 included, excluded = [], [] 217 for p in patterns: 218 if p.startswith("!"): 219 excluded.append(p[1:]) 220 else: 221 included.append(p) 222 return included, excluded 223 224 225 def export_source(conanfile, origin_folder, destination_source_folder, output): 226 if isinstance(conanfile.exports_sources, str): 227 conanfile.exports_sources = (conanfile.exports_sources, ) 228 229 included_sources, excluded_sources = _classify_patterns(conanfile.exports_sources) 230 copier = FileCopier(origin_folder, destination_source_folder) 231 for pattern in included_sources: 232 copier(pattern, links=True, excludes=excluded_sources) 233 package_output = ScopedOutput("%s exports_sources" % output.scope, output) 234 copier.report(package_output) 235 236 237 def export_recipe(conanfile, origin_folder, destination_folder, output): 238 if isinstance(conanfile.exports, str): 239 conanfile.exports = (conanfile.exports, ) 240 241 included_exports, excluded_exports = _classify_patterns(conanfile.exports) 242 243 try: 244 os.unlink(os.path.join(origin_folder, CONANFILE + 'c')) 245 except OSError: 246 pass 247 248 copier = FileCopier(origin_folder, destination_folder) 249 for pattern in included_exports: 250 copier(pattern, links=True, excludes=excluded_exports) 251 package_output = ScopedOutput("%s exports" % output.scope, output) 252 copier.report(package_output) 253 [end of conans/client/cmd/export.py] [start of conans/client/loader.py] 1 import imp 2 import inspect 3 import os 4 import sys 5 6 import uuid 7 8 from conans.client.generators import registered_generators 9 from conans.client.loader_txt import ConanFileTextLoader 10 from conans.client.output import ScopedOutput 11 from conans.client.tools.files import chdir 12 from conans.errors import ConanException, NotFoundException 13 from conans.model.conan_file import ConanFile 14 from conans.model.conan_generator import Generator 15 from conans.model.options import OptionsValues 16 from conans.model.profile import Profile 17 from conans.model.ref import ConanFileReference 18 from conans.model.settings import Settings 19 from conans.model.values import Values 20 from conans.util.files import load 21 22 23 class ProcessedProfile(object): 24 def 
__init__(self, settings=None, profile=None, create_reference=None): 25 settings = settings or Settings() 26 profile = profile or Profile() 27 assert isinstance(settings, Settings) 28 # assert package_settings is None or isinstance(package_settings, dict) 29 self._settings = settings 30 self._user_options = profile.options.copy() 31 32 self._package_settings = profile.package_settings_values 33 self._env_values = profile.env_values 34 # Make sure the paths are normalized first, so env_values can be just a copy 35 self._dev_reference = create_reference 36 37 38 class ConanFileLoader(object): 39 def __init__(self, runner, output, python_requires): 40 self._runner = runner 41 self._output = output 42 self._python_requires = python_requires 43 sys.modules["conans"].python_requires = python_requires 44 45 def load_class(self, conanfile_path): 46 loaded, filename = _parse_file(conanfile_path) 47 try: 48 conanfile = _parse_module(loaded, filename) 49 conanfile.python_requires = self._python_requires.requires 50 return conanfile 51 except Exception as e: # re-raise with file name 52 raise ConanException("%s: %s" % (conanfile_path, str(e))) 53 54 def load_export(self, conanfile_path, name, version, user, channel): 55 conanfile = self.load_class(conanfile_path) 56 57 # check name and version were specified 58 if not conanfile.name: 59 if name: 60 conanfile.name = name 61 else: 62 raise ConanException("conanfile didn't specify name") 63 elif name and name != conanfile.name: 64 raise ConanException("Package recipe exported with name %s!=%s" % (name, conanfile.name)) 65 66 if not conanfile.version: 67 if version: 68 conanfile.version = version 69 else: 70 raise ConanException("conanfile didn't specify version") 71 elif version and version != conanfile.version: 72 raise ConanException("Package recipe exported with version %s!=%s" 73 % (version, conanfile.version)) 74 75 conan_ref = ConanFileReference(conanfile.name, conanfile.version, user, channel) 76 output = ScopedOutput(str(conan_ref), self._output) 77 return conan_ref, conanfile(output, self._runner, user, channel) 78 79 def load_basic(self, conanfile_path, output, reference=None): 80 result = self.load_class(conanfile_path) 81 try: 82 if reference: 83 result.name, result.version, user, channel = reference 84 else: 85 user, channel = None, None 86 result.in_local_cache = False 87 88 # Instance the conanfile 89 result = result(output, self._runner, user, channel) 90 return result 91 except Exception as e: # re-raise with file name 92 raise ConanException("%s: %s" % (conanfile_path, str(e))) 93 94 def load_conanfile(self, conanfile_path, output, processed_profile, 95 consumer=False, reference=None, local=False): 96 """ loads a ConanFile object from the given file 97 """ 98 conanfile = self.load_basic(conanfile_path, output, reference) 99 if processed_profile._dev_reference and processed_profile._dev_reference == reference: 100 conanfile.develop = True 101 try: 102 # Prepare the settings for the loaded conanfile 103 # Mixing the global settings with the specified for that name if exist 104 tmp_settings = processed_profile._settings.copy() 105 if (processed_profile._package_settings and 106 conanfile.name in processed_profile._package_settings): 107 # Update the values, keeping old ones (confusing assign) 108 values_tuple = processed_profile._package_settings[conanfile.name] 109 tmp_settings.values = Values.from_list(values_tuple) 110 111 conanfile.initialize(tmp_settings, processed_profile._env_values, local) 112 113 if consumer: 114 conanfile.develop 
= True 115 processed_profile._user_options.descope_options(conanfile.name) 116 conanfile.options.initialize_upstream(processed_profile._user_options, local=local, 117 name=conanfile.name) 118 processed_profile._user_options.clear_unscoped_options() 119 120 return conanfile 121 except Exception as e: # re-raise with file name 122 raise ConanException("%s: %s" % (conanfile_path, str(e))) 123 124 def load_conanfile_txt(self, conan_txt_path, output, processed_profile): 125 if not os.path.exists(conan_txt_path): 126 raise NotFoundException("Conanfile not found!") 127 128 contents = load(conan_txt_path) 129 path = os.path.dirname(conan_txt_path) 130 131 conanfile = self._parse_conan_txt(contents, path, output, processed_profile) 132 return conanfile 133 134 def _parse_conan_txt(self, contents, path, output, processed_profile): 135 conanfile = ConanFile(output, self._runner) 136 conanfile.initialize(Settings(), processed_profile._env_values) 137 # It is necessary to copy the settings, because the above is only a constraint of 138 # conanfile settings, and a txt doesn't define settings. Necessary for generators, 139 # as cmake_multi, that check build_type. 140 conanfile.settings = processed_profile._settings.copy_values() 141 142 try: 143 parser = ConanFileTextLoader(contents) 144 except Exception as e: 145 raise ConanException("%s:\n%s" % (path, str(e))) 146 for requirement_text in parser.requirements: 147 ConanFileReference.loads(requirement_text) # Raise if invalid 148 conanfile.requires.add(requirement_text) 149 for build_requirement_text in parser.build_requirements: 150 ConanFileReference.loads(build_requirement_text) 151 if not hasattr(conanfile, "build_requires"): 152 conanfile.build_requires = [] 153 conanfile.build_requires.append(build_requirement_text) 154 155 conanfile.generators = parser.generators 156 157 options = OptionsValues.loads(parser.options) 158 conanfile.options.values = options 159 conanfile.options.initialize_upstream(processed_profile._user_options) 160 161 # imports method 162 conanfile.imports = parser.imports_method(conanfile) 163 conanfile._conan_env_values.update(processed_profile._env_values) 164 return conanfile 165 166 def load_virtual(self, references, processed_profile, scope_options=True, 167 build_requires_options=None): 168 # If user don't specify namespace in options, assume that it is 169 # for the reference (keep compatibility) 170 conanfile = ConanFile(None, self._runner, processed_profile._settings.copy()) 171 conanfile.initialize(processed_profile._settings.copy(), processed_profile._env_values) 172 conanfile.settings = processed_profile._settings.copy_values() 173 174 for reference in references: 175 conanfile.requires.add(str(reference)) # Convert to string necessary 176 # Allows options without package namespace in conan install commands: 177 # conan install zlib/1.2.8@lasote/stable -o shared=True 178 if scope_options: 179 assert len(references) == 1 180 processed_profile._user_options.scope_options(references[0].name) 181 if build_requires_options: 182 conanfile.options.initialize_upstream(build_requires_options) 183 else: 184 conanfile.options.initialize_upstream(processed_profile._user_options) 185 186 conanfile.generators = [] # remove the default txt generator 187 188 return conanfile 189 190 191 def _parse_module(conanfile_module, module_id): 192 """ Parses a python in-memory module, to extract the classes, mainly the main 193 class defining the Recipe, but also process possible existing generators 194 @param conanfile_module: the module to 
be processed 195 @return: the main ConanFile class from the module 196 """ 197 result = None 198 for name, attr in conanfile_module.__dict__.items(): 199 if (name.startswith("_") or not inspect.isclass(attr) or 200 attr.__dict__.get("__module__") != module_id): 201 continue 202 203 if issubclass(attr, ConanFile) and attr != ConanFile: 204 if result is None: 205 result = attr 206 else: 207 raise ConanException("More than 1 conanfile in the file") 208 elif issubclass(attr, Generator) and attr != Generator: 209 registered_generators.add(attr.__name__, attr) 210 211 if result is None: 212 raise ConanException("No subclass of ConanFile") 213 214 return result 215 216 217 def _invalid_python_requires(require): 218 raise ConanException("Invalid use of python_requires(%s)" % require) 219 220 221 def _parse_file(conan_file_path): 222 """ From a given path, obtain the in memory python import module 223 """ 224 225 if not os.path.exists(conan_file_path): 226 raise NotFoundException("%s not found!" % conan_file_path) 227 228 try: 229 module_id = str(uuid.uuid1()) 230 current_dir = os.path.dirname(conan_file_path) 231 sys.path.append(current_dir) 232 old_modules = list(sys.modules.keys()) 233 with chdir(current_dir): 234 sys.dont_write_bytecode = True 235 loaded = imp.load_source(module_id, conan_file_path) 236 loaded.python_requires = _invalid_python_requires 237 sys.dont_write_bytecode = False 238 239 # These lines are necessary, otherwise local conanfile imports with same name 240 # collide, but no error, and overwrite other packages imports!! 241 added_modules = set(sys.modules).difference(old_modules) 242 for added in added_modules: 243 module = sys.modules[added] 244 if module: 245 try: 246 folder = os.path.dirname(module.__file__) 247 except AttributeError: # some module doesn't have __file__ 248 pass 249 else: 250 if folder.startswith(current_dir): 251 module = sys.modules.pop(added) 252 sys.modules["%s.%s" % (module_id, added)] = module 253 except Exception: 254 import traceback 255 trace = traceback.format_exc().split('\n') 256 raise ConanException("Unable to load conanfile in %s\n%s" % (conan_file_path, 257 '\n'.join(trace[3:]))) 258 finally: 259 sys.path.pop() 260 261 return loaded, module_id 262 [end of conans/client/loader.py] [start of conans/model/conan_file.py] 1 import os 2 from contextlib import contextmanager 3 4 from conans import tools # @UnusedImport KEEP THIS! Needed for pyinstaller to copy to exe. 
5 from conans.client.tools.env import pythonpath 6 from conans.errors import ConanException 7 from conans.model.build_info import DepsCppInfo 8 from conans.model.env_info import DepsEnvInfo 9 from conans.model.options import Options, PackageOptions, OptionsValues 10 from conans.model.requires import Requirements 11 from conans.model.user_info import DepsUserInfo 12 from conans.paths import RUN_LOG_NAME 13 from conans.tools import environment_append, no_op 14 from conans.client.output import Color 15 from conans.client.tools.oss import os_info 16 17 18 def create_options(conanfile): 19 try: 20 package_options = PackageOptions(getattr(conanfile, "options", None)) 21 options = Options(package_options) 22 23 default_options = getattr(conanfile, "default_options", None) 24 if default_options: 25 if isinstance(default_options, (list, tuple, dict)): 26 default_values = OptionsValues(default_options) 27 elif isinstance(default_options, str): 28 default_values = OptionsValues.loads(default_options) 29 else: 30 raise ConanException("Please define your default_options as list, " 31 "multiline string or dictionary") 32 options.values = default_values 33 return options 34 except Exception as e: 35 raise ConanException("Error while initializing options. %s" % str(e)) 36 37 38 def create_requirements(conanfile): 39 try: 40 # Actual requirements of this package 41 if not hasattr(conanfile, "requires"): 42 return Requirements() 43 else: 44 if not conanfile.requires: 45 return Requirements() 46 if isinstance(conanfile.requires, tuple): 47 return Requirements(*conanfile.requires) 48 else: 49 return Requirements(conanfile.requires, ) 50 except Exception as e: 51 raise ConanException("Error while initializing requirements. %s" % str(e)) 52 53 54 def create_settings(conanfile, settings, local): 55 try: 56 defined_settings = getattr(conanfile, "settings", None) 57 if isinstance(defined_settings, str): 58 defined_settings = [defined_settings] 59 current = defined_settings or {} 60 settings.constraint(current, raise_undefined_field=not local) 61 return settings 62 except Exception as e: 63 raise ConanException("Error while initializing settings. 
%s" % str(e)) 64 65 66 @contextmanager 67 def _env_and_python(conanfile): 68 with environment_append(conanfile.env): 69 with pythonpath(conanfile): 70 yield 71 72 73 def get_env_context_manager(conanfile, without_python=False): 74 if not conanfile.apply_env: 75 return no_op() 76 if without_python: 77 return environment_append(conanfile.env) 78 return _env_and_python(conanfile) 79 80 81 class ConanFile(object): 82 """ The base class for all package recipes 83 """ 84 85 name = None 86 version = None # Any str, can be "1.1" or whatever 87 url = None # The URL where this File is located, as github, to collaborate in package 88 # The license of the PACKAGE, just a shortcut, does not replace or 89 # change the actual license of the source code 90 license = None 91 author = None # Main maintainer/responsible for the package, any format 92 description = None 93 topics = None 94 homepage = None 95 build_policy = None 96 short_paths = False 97 apply_env = True # Apply environment variables from requires deps_env_info and profiles 98 exports = None 99 exports_sources = None 100 generators = ["txt"] 101 102 # Vars to control the build steps (build(), package()) 103 should_configure = True 104 should_build = True 105 should_install = True 106 should_test = True 107 in_local_cache = True 108 develop = False 109 110 # Defaulting the reference fields 111 default_channel = None 112 default_user = None 113 114 def __init__(self, output, runner, user=None, channel=None): 115 # an output stream (writeln, info, warn error) 116 self.output = output 117 # something that can run commands, as os.sytem 118 self._conan_runner = runner 119 self._conan_user = user 120 self._conan_channel = channel 121 122 def initialize(self, settings, env, local=None): 123 if isinstance(self.generators, str): 124 self.generators = [self.generators] 125 # User defined options 126 self.options = create_options(self) 127 self.requires = create_requirements(self) 128 self.settings = create_settings(self, settings, local) 129 try: 130 if self.settings.os_build and self.settings.os: 131 self.output.writeln("*"*60, front=Color.BRIGHT_RED) 132 self.output.writeln(" This package defines both 'os' and 'os_build' ", 133 front=Color.BRIGHT_RED) 134 self.output.writeln(" Please use 'os' for libraries and 'os_build'", 135 front=Color.BRIGHT_RED) 136 self.output.writeln(" only for build-requires used for cross-building", 137 front=Color.BRIGHT_RED) 138 self.output.writeln("*"*60, front=Color.BRIGHT_RED) 139 except ConanException: 140 pass 141 142 # needed variables to pack the project 143 self.cpp_info = None # Will be initialized at processing time 144 self.deps_cpp_info = DepsCppInfo() 145 146 # environment variables declared in the package_info 147 self.env_info = None # Will be initialized at processing time 148 self.deps_env_info = DepsEnvInfo() 149 150 # user declared variables 151 self.user_info = None 152 # Keys are the package names, and the values a dict with the vars 153 self.deps_user_info = DepsUserInfo() 154 155 # user specified env variables 156 self._conan_env_values = env.copy() # user specified -e 157 158 @property 159 def env(self): 160 """Apply the self.deps_env_info into a copy of self._conan_env_values (will prioritize the 161 self._conan_env_values, user specified from profiles or -e first, then inherited)""" 162 # Cannot be lazy cached, because it's called in configure node, and we still don't have 163 # the deps_env_info objects available 164 tmp_env_values = self._conan_env_values.copy() 165 
tmp_env_values.update(self.deps_env_info) 166 167 ret, multiple = tmp_env_values.env_dicts(self.name) 168 ret.update(multiple) 169 return ret 170 171 @property 172 def channel(self): 173 if not self._conan_channel: 174 self._conan_channel = os.getenv("CONAN_CHANNEL") or self.default_channel 175 if not self._conan_channel: 176 raise ConanException("CONAN_CHANNEL environment variable not defined, " 177 "but self.channel is used in conanfile") 178 return self._conan_channel 179 180 @property 181 def user(self): 182 if not self._conan_user: 183 self._conan_user = os.getenv("CONAN_USERNAME") or self.default_user 184 if not self._conan_user: 185 raise ConanException("CONAN_USERNAME environment variable not defined, " 186 "but self.user is used in conanfile") 187 return self._conan_user 188 189 def collect_libs(self, folder=None): 190 self.output.warn("'self.collect_libs' is deprecated, " 191 "use 'tools.collect_libs(self)' instead") 192 return tools.collect_libs(self, folder=folder) 193 194 @property 195 def build_policy_missing(self): 196 return self.build_policy == "missing" 197 198 @property 199 def build_policy_always(self): 200 return self.build_policy == "always" 201 202 def source(self): 203 pass 204 205 def system_requirements(self): 206 """ this method can be overwritten to implement logic for system package 207 managers, as apt-get 208 209 You can define self.global_system_requirements = True, if you want the installation 210 to be for all packages (not depending on settings/options/requirements) 211 """ 212 213 def config_options(self): 214 """ modify options, probably conditioned to some settings. This call is executed 215 before config_settings. E.g. 216 if self.settings.os == "Windows": 217 del self.options.shared # shared/static not supported in win 218 """ 219 220 def configure(self): 221 """ modify settings, probably conditioned to some options. This call is executed 222 after config_options. E.g. 223 if self.options.header_only: 224 self.settings.clear() 225 This is also the place for conditional requirements 226 """ 227 228 def build(self): 229 """ build your project calling the desired build tools as done in the command line. 230 E.g. self.run("cmake --build .") Or use the provided build helpers. E.g. cmake.build() 231 """ 232 self.output.warn("This conanfile has no build step") 233 234 def package(self): 235 """ package the needed files from source and build folders. 236 E.g. 
self.copy("*.h", src="src/includes", dst="includes") 237 """ 238 self.output.warn("This conanfile has no package step") 239 240 def package_info(self): 241 """ define cpp_build_info, flags, etc 242 """ 243 244 def run(self, command, output=True, cwd=None, win_bash=False, subsystem=None, msys_mingw=True, 245 ignore_errors=False, run_environment=False): 246 def _run(): 247 if not win_bash: 248 return self._conan_runner(command, output, os.path.abspath(RUN_LOG_NAME), cwd) 249 # FIXME: run in windows bash is not using output 250 return tools.run_in_windows_bash(self, bashcmd=command, cwd=cwd, subsystem=subsystem, 251 msys_mingw=msys_mingw) 252 if run_environment: 253 with tools.run_environment(self): 254 if os_info.is_macos: 255 command = 'DYLD_LIBRARY_PATH="%s" %s' % (os.environ.get('DYLD_LIBRARY_PATH', ''), 256 command) 257 retcode = _run() 258 else: 259 retcode = _run() 260 261 if not ignore_errors and retcode != 0: 262 raise ConanException("Error %d while executing %s" % (retcode, command)) 263 264 return retcode 265 266 def package_id(self): 267 """ modify the conans info, typically to narrow values 268 eg.: conaninfo.package_references = [] 269 """ 270 271 def test(self): 272 """ test the generated executable. 273 E.g. self.run("./example") 274 """ 275 raise ConanException("You need to create a method 'test' in your test/conanfile.py") 276 277 def __repr__(self): 278 if self.name and self.version and self._conan_channel and self._conan_user: 279 return "%s/%s@%s/%s" % (self.name, self.version, self.user, self.channel) 280 elif self.name and self.version: 281 return "%s/%s@PROJECT" % (self.name, self.version) 282 else: 283 return "PROJECT" 284 [end of conans/model/conan_file.py] [start of setup.py] 1 """A setuptools based setup module. 2 See: 3 https://packaging.python.org/en/latest/distributing.html 4 https://github.com/pypa/sampleproject 5 """ 6 7 # Always prefer setuptools over distutils 8 from setuptools import setup, find_packages 9 # To use a consistent encoding 10 from codecs import open 11 from os import path 12 import os 13 import re 14 import platform 15 16 17 here = path.abspath(path.dirname(__file__)) 18 19 20 def get_requires(filename): 21 requirements = [] 22 with open(filename, "rt") as req_file: 23 for line in req_file.read().splitlines(): 24 if not line.strip().startswith("#"): 25 requirements.append(line) 26 return requirements 27 28 29 project_requirements = get_requires("conans/requirements.txt") 30 if platform.system() == "Darwin": 31 project_requirements.extend(get_requires("conans/requirements_osx.txt")) 32 project_requirements.extend(get_requires("conans/requirements_server.txt")) 33 dev_requirements = get_requires("conans/requirements_dev.txt") 34 35 36 def load_version(): 37 '''Loads a file content''' 38 filename = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), 39 "conans", "__init__.py")) 40 with open(filename, "rt") as version_file: 41 conan_init = version_file.read() 42 version = re.search("__version__ = '([0-9a-z.-]+)'", conan_init).group(1) 43 return version 44 45 46 # def generate_long_description_file(): 47 # import pypandoc 48 # 49 # output = pypandoc.convert('README.md', 'rst') 50 # return output 51 52 setup( 53 name='conan', 54 # Versions should comply with PEP440. 
For a discussion on single-sourcing 55 # the version across setup.py and the project code, see 56 # https://packaging.python.org/en/latest/single_source_version.html 57 version=load_version(), # + ".rc1", 58 59 description='Conan C/C++ package manager', 60 # long_description="An open source, decentralized package manager, to automate building and sharing of packages", 61 # long_description=generate_long_description_file(), 62 63 # The project's main homepage. 64 url='https://conan.io', 65 66 # Author details 67 author='JFrog LTD', 68 author_email='[email protected]', 69 70 # Choose your license 71 license='MIT', 72 73 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers 74 classifiers=[ 75 'Development Status :: 5 - Production/Stable', 76 'Intended Audience :: Developers', 77 'Topic :: Software Development :: Build Tools', 78 'License :: OSI Approved :: MIT License', 79 'Programming Language :: Python :: 2', 80 'Programming Language :: Python :: 2.7', 81 'Programming Language :: Python :: 3', 82 'Programming Language :: Python :: 3.6' 83 ], 84 85 # What does your project relate to? 86 keywords=['C/C++', 'package', 'libraries', 'developer', 'manager', 87 'dependency', 'tool', 'c', 'c++', 'cpp'], 88 89 # You can just specify the packages manually here if your project is 90 # simple. Or you can use find_packages(). 91 packages=find_packages(), 92 93 # Alternatively, if you want to distribute just a my_module.py, uncomment 94 # this: 95 # py_modules=["my_module"], 96 97 # List run-time dependencies here. These will be installed by pip when 98 # your project is installed. For an analysis of "install_requires" vs pip's 99 # requirements files see: 100 # https://packaging.python.org/en/latest/requirements.html 101 install_requires=project_requirements, 102 103 # List additional groups of dependencies here (e.g. development 104 # dependencies). You can install these using the following syntax, 105 # for example: 106 # $ pip install -e .[dev,test] 107 extras_require={ 108 'dev': dev_requirements, 109 'test': dev_requirements, 110 }, 111 112 # If there are data files included in your packages that need to be 113 # installed, specify them here. If using Python 2.6 or less, then these 114 # have to be included in MANIFEST.in as well. 115 package_data={ 116 'conans': ['*.txt'], 117 }, 118 119 # Although 'package_data' is the preferred approach, in some case you may 120 # need to place data files outside of your packages. See: 121 # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa 122 # In this case, 'data_file' will be installed into '<sys.prefix>/my_data' 123 # data_files=[('my_data', ['data/data_file'])], 124 125 # To provide executable scripts, use entry points in preference to the 126 # "scripts" keyword. Entry points provide cross-platform support and allow 127 # pip to create the appropriate form of executable for the target platform. 128 entry_points={ 129 'console_scripts': [ 130 'conan=conans.conan:run', 131 'conan_server=conans.conan_server:run', 132 'conan_build_info=conans.build_info.command:run' 133 ], 134 }, 135 ) 136 [end of setup.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conan-io/conan
2a9f3d734d7a4607d1db2ff8130425d3220e0e31
[python_requires] imports inside python_required modules Title: [python_requires] imports inside python_required modules - [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md). - [x] I've specified the Conan version, operating system version and any tool that can be relevant. - [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion. Conan 1.8.2 Python 3.6.5 Windows 7 According to the official Conan documentation for python_requires, we created a base class package with helper functions in a separate python file (helper.py) which is imported from the base class with a regular Python import statement as described in the documentation. This worked perfectly for us as long as we had only one version of the base package. The problem occurred after we created version 2.0 of the base package with one module using version 2.0 and another module still using version 1.0. The helper.py import statement inside the base package cannot distinguish from which version of the base package it is called and therefore always helper.py from the first version mentioned in a Conan file is used. Here are the steps to reproduce this problem. I hope this makes it a little clearer: **base/conanfile.py** ```python from conans import ConanFile import helper class Base(ConanFile): exports = "*.py" ``` **base/helper.py** ```python def getVersion(): return "1.0" ``` Conan command: `conan export . Base/1.0@user/channel` This exports Base/1.0@user/channel correctly. **module1/conanfile.py** ```python from conans import ConanFile, python_requires base = python_requires("Base/1.0@user/channel") class Module1(ConanFile): name = "module1" version = base.helper.getVersion() ``` Conan command: `conan export . user/channel`. This exports module1/1.0@user/channel correctly. **module2/conanfile.py** ```python from conans import ConanFile, python_requires base = python_requires("Base/1.0@user/channel") module1 = python_requires("module1/1.0@user/channel") class MyPackage(ConanFile): name = "module2" version = base.helper.getVersion() ``` Conan command: `conan export . user/channel`. This exports module2/1.0@user/channel correctly. So far everything works well. Now we create a new version 2.0 of the Base package. In the new version we rename the helper.py method getVersion() to getVersionInADifferentWay(): **base/helper.py** ```python def getVersionInADifferentWay(): return "2.0" ``` Conan command: `conan export . Base/2.0@user/channel` This exports Base/2.0@user/channel correctly. Now module2 should use the new Base package 2.0 whereas module1 should still use the old version Base/1.0: **module2/conanfile.py** ```python from conans import ConanFile, python_requires base = python_requires("Base/2.0@user/channel") module1 = python_requires("module1/1.0@user/channel") class MyPackage(ConanFile): name = "module2" version = base.helper.getVersionInADifferentWay() ``` Conan command: `conan export . user/channel`.
This leads to the following error: ``` ERROR: Unable to load conanfile in module2/V2.0/conanfile.py KeyError: 'module1/1.0@user/channel' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "Python3/lib/site-packages/conans/client/loader.py", line 235, in _parse_file loaded = imp.load_source(filename, conan_file_path) File "Python3/lib/imp.py", line 172, in load_source module = _load(spec) File "<frozen importlib._bootstrap>", line 684, in _load File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "module2/V2.0/conanfile.py", line 3, in <module> module1 = python_requires("module1/1.0@user/channel") File "Python3/lib/site-packages/conans/client/graph/python_requires.py", line 41, in __call__ module = imp.load_source(str(r).replace(".", "*"), path) File "Python3/lib/imp.py", line 172, in load_source module = _load(spec) File "<frozen importlib._bootstrap>", line 684, in _load File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File ".conan/data/module1/1.0/user/channel/export/conanfile.py", line 3, in <module> class Module1(ConanFile): File ".conan/data/module1/1.0/user/channel/export/conanfile.py", line 5, in Module1 version = base.helper.getVersion() AttributeError: module 'helper' has no attribute 'getVersion' ``` Could you please give us a hint or suggestions how we could solve this problem with Conan Thank You!
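The root cause is Python's module cache rather than anything Conan-specific: a plain `import helper` is resolved through `sys.modules` by name, so once the first exported `helper.py` has been loaded, every later recipe that imports a module called `helper` gets that cached copy back, whatever folder its own copy lives in. Below is a minimal, self-contained sketch of that collision; the temporary folders and file contents are purely illustrative, not Conan code.

```python
# Minimal sketch of the name collision: two different helper.py files,
# but Python caches modules by name in sys.modules.
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg_v1 = os.path.join(tmp, "base_1_0")
pkg_v2 = os.path.join(tmp, "base_2_0")
for folder, body in ((pkg_v1, "def getVersion():\n    return '1.0'\n"),
                     (pkg_v2, "def getVersionInADifferentWay():\n    return '2.0'\n")):
    os.makedirs(folder)
    with open(os.path.join(folder, "helper.py"), "w") as f:
        f.write(body)

sys.path.insert(0, pkg_v1)
import helper                                        # loads base_1_0/helper.py, cached under the name "helper"
print(hasattr(helper, "getVersion"))                 # True

sys.path.insert(0, pkg_v2)
import helper                                        # cache hit in sys.modules: still the 1.0 module
print(hasattr(helper, "getVersionInADifferentWay"))  # False -- the 2.0 helper is shadowed
```

Running this prints `True` and then `False`, which is the same shadowing that surfaces as the `AttributeError` in the traceback above (there the 2.0 helper wins because `Base/2.0@user/channel` is loaded first, so it is `getVersion()` that goes missing).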
2018-10-29T17:12:14Z
<patch> diff --git a/conans/client/graph/python_requires.py b/conans/client/graph/python_requires.py --- a/conans/client/graph/python_requires.py +++ b/conans/client/graph/python_requires.py @@ -1,11 +1,9 @@ -import imp -import sys -import os +from collections import namedtuple -from conans.model.ref import ConanFileReference +from conans.client.loader import parse_conanfile from conans.client.recorder.action_recorder import ActionRecorder +from conans.model.ref import ConanFileReference from conans.model.requires import Requirement -from collections import namedtuple PythonRequire = namedtuple("PythonRequire", "conan_ref module") @@ -36,13 +34,7 @@ def __call__(self, require): result = self._proxy.get_recipe(r, False, False, remote_name=None, recorder=ActionRecorder()) path, _, _, reference = result - try: - dirname = os.path.dirname(path) - sys.path.append(dirname) - # replace avoid warnings in Py2 with dots - module = imp.load_source(str(r).replace(".", "*"), path) - finally: - sys.path.pop() + module, _ = parse_conanfile(path) python_require = PythonRequire(reference, module) self._cached_requires[require] = python_require self._requires.append(python_require) diff --git a/conans/client/loader.py b/conans/client/loader.py --- a/conans/client/loader.py +++ b/conans/client/loader.py @@ -43,7 +43,7 @@ def __init__(self, runner, output, python_requires): sys.modules["conans"].python_requires = python_requires def load_class(self, conanfile_path): - loaded, filename = _parse_file(conanfile_path) + loaded, filename = parse_conanfile(conanfile_path) try: conanfile = _parse_module(loaded, filename) conanfile.python_requires = self._python_requires.requires @@ -218,17 +218,17 @@ def _invalid_python_requires(require): raise ConanException("Invalid use of python_requires(%s)" % require) -def _parse_file(conan_file_path): +def parse_conanfile(conan_file_path): """ From a given path, obtain the in memory python import module """ if not os.path.exists(conan_file_path): raise NotFoundException("%s not found!" % conan_file_path) + module_id = str(uuid.uuid1()) + current_dir = os.path.dirname(conan_file_path) + sys.path.insert(0, current_dir) try: - module_id = str(uuid.uuid1()) - current_dir = os.path.dirname(conan_file_path) - sys.path.append(current_dir) old_modules = list(sys.modules.keys()) with chdir(current_dir): sys.dont_write_bytecode = True @@ -256,6 +256,6 @@ def _parse_file(conan_file_path): raise ConanException("Unable to load conanfile in %s\n%s" % (conan_file_path, '\n'.join(trace[3:]))) finally: - sys.path.pop() + sys.path.pop(0) return loaded, module_id </patch>
[]
[]
googleapis__google-cloud-python-9033
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Synthesis failed for automl Hello! Autosynth couldn't regenerate automl. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth-automl' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/automl/synth.py. synthtool > Ensuring dependencies. synthtool > Pulling artman image. latest: Pulling from googleapis/artman Digest: sha256:45263333b058a4b3c26a8b7680a2710f43eae3d250f791a6cb66423991dcb2df Status: Image is up to date for googleapis/artman:latest synthtool > Cloning googleapis. synthtool > Running generator for google/cloud/automl/artman_automl_v1beta1.yaml. synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1. synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/text_extraction.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/text_extraction.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/io.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/io.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/classification.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/classification.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/operations.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/operations.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/tables.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/tables.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/data_stats.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/data_stats.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/ranges.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/ranges.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/column_spec.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/column_spec.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/detection.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/detection.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/dataset.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/dataset.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/model.proto to 
/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/model.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/data_items.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/data_items.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/annotation_payload.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/annotation_payload.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/temporal.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/temporal.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/text_sentiment.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/text_sentiment.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/annotation_spec.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/annotation_spec.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/model_evaluation.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/model_evaluation.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/translation.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/translation.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/service.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/service.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/image.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/image.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/prediction_service.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/prediction_service.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/table_spec.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/table_spec.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/text.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/text.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/regression.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/regression.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/data_types.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/data_types.proto synthtool > Copy: 
/home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/geometry.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/geometry.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/text_segment.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/text_segment.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/video.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/video.proto synthtool > Placed proto files into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto. synthtool > Replaced 'metadata_type=operations_pb2.OperationMetadata' in google/cloud/automl_v1beta1/gapic/auto_ml_client.py. synthtool > No replacements made in google/cloud/automl_v1beta1/gapic/prediction_service_client.py for pattern ^\s+::, maybe replacement is not longer needed? synthtool > No replacements made in google/cloud/automl_v1beta1/gapic/auto_ml_client.py for pattern ^(\s+)(::) \s+?([^\s]), maybe replacement is not longer needed? synthtool > Replaced 'Sample in-line\n JSON Lines file.*?\\}`\\n' in google/cloud/automl_v1beta1/proto/io_pb2.py. synthtool > Replaced 'Sample\n in-line JSON Lines file.*?\\}`\\n' in google/cloud/automl_v1beta1/proto/io_pb2.py. synthtool > Replaced '__doc__ = \\"\\"\\"- For Translation: CSV file ``translation\\.csv``, with each ' in google/cloud/automl_v1beta1/proto/io_pb2.py. synthtool > Replaced ':raw-latex:`\\\\t `' in google/cloud/automl_v1beta1/proto/io_pb2.py. .coveragerc .flake8 MANIFEST.in noxfile.py.j2 setup.cfg Running session blacken Creating virtualenv using python3.6 in .nox/blacken pip install black black docs google tests noxfile.py setup.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/__init__.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/enums.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/docs/conf.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/prediction_service_client_config.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/auto_ml_client_config.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/transports/prediction_service_grpc_transport.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/transports/auto_ml_grpc_transport.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/annotation_payload_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/prediction_service_client.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/annotation_spec_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/annotation_spec_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/classification_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/annotation_payload_pb2.py reformatted 
/tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/column_spec_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/column_spec_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_items_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_items_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_stats_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/classification_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_types_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_types_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/dataset_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/auto_ml_client.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/detection_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/geometry_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/geometry_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_stats_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/image_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/dataset_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/io_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/detection_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/model_evaluation_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/image_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/model_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/model_evaluation_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/operations_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/model_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/prediction_service_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/ranges_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/ranges_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/prediction_service_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/regression_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/io_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/regression_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/table_spec_pb2.py reformatted 
/tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/table_spec_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/operations_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/tables_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/service_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/temporal_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/temporal_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_extraction_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_extraction_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_segment_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_segment_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_sentiment_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/tables_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/translation_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/video_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/video_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_sentiment_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/types.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/translation_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/noxfile.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/tests/unit/gapic/v1beta1/test_prediction_service_client_v1beta1.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/tests/unit/gapic/v1beta1/test_auto_ml_client_v1beta1.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/service_pb2.py All done! ✨ 🍰 ✨ 70 files reformatted, 6 files left unchanged. Session blacken was successful. synthtool > Cleaned up 2 temporary directories. synthtool > Wrote metadata to synth.metadata. Changed files: M automl/google/cloud/automl_v1beta1/gapic/auto_ml_client.py M automl/google/cloud/automl_v1beta1/gapic/prediction_service_client.py M automl/synth.metadata [autosynth-automl 1aa594d] [CHANGE ME] Re-generated automl to pick up changes in the API or client library generator. 
3 files changed, 57 insertions(+), 27 deletions(-) To https://github.com/googleapis/google-cloud-python.git + 9a77677...1aa594d autosynth-automl -> autosynth-automl (forced update) Traceback (most recent call last): File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 603, in urlopen chunked=chunked) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request six.raise_from(e, None) File "<string>", line 2, in raise_from File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 383, in _make_request httplib_response = conn.getresponse() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 1331, in getresponse response.begin() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 297, in begin version, status, reason = self._read_status() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 266, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 641, in urlopen _stacktrace=sys.exc_info()[2]) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/util/retry.py", line 368, in increment raise six.reraise(type(error), error, _stacktrace) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/packages/six.py", line 685, in reraise raise value.with_traceback(tb) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 603, in urlopen chunked=chunked) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request six.raise_from(e, None) File "<string>", line 2, in raise_from File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 383, in _make_request httplib_response = conn.getresponse() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 1331, in getresponse response.begin() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 297, in begin version, status, reason = self._read_status() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 266, in _read_status raise RemoteDisconnected("Remote end closed connection without" urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 223, in <module> main() File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 211, in main args.repository, branch=branch, title=pr_title, body=pr_body File "/tmpfs/src/git/autosynth/autosynth/github.py", line 62, in create_pull_request "maintainer_can_modify": True, File 
"/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/sessions.py", line 581, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/sessions.py", line 533, in request resp = self.send(prep, **send_kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/sessions.py", line 646, in send r = adapter.send(request, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/adapters.py", line 498, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',)) ``` Google internal developers can see the full log [here](https://sponge/a0f45fe9-789a-48ac-a9e2-b6d0338a6994). </issue> <code> [start of README.rst] 1 Google Cloud Python Client 2 ========================== 3 4 Python idiomatic clients for `Google Cloud Platform`_ services. 5 6 .. _Google Cloud Platform: https://cloud.google.com/ 7 8 **Heads up**! These libraries are supported on App Engine standard's `Python 3 runtime`_ but are *not* supported on App Engine's `Python 2 runtime`_. 9 10 .. _Python 3 runtime: https://cloud.google.com/appengine/docs/standard/python3 11 .. _Python 2 runtime: https://cloud.google.com/appengine/docs/standard/python 12 13 General Availability 14 -------------------- 15 16 **GA** (general availability) indicates that the client library for a 17 particular service is stable, and that the code surface will not change in 18 backwards-incompatible ways unless either absolutely necessary (e.g. because 19 of critical security issues) or with an extensive deprecation period. 20 Issues and requests against GA libraries are addressed with the highest 21 priority. 22 23 .. note:: 24 25 Sub-components of GA libraries explicitly marked as beta in the 26 import path (e.g. ``google.cloud.language_v1beta2``) should be considered 27 to be beta. 28 29 The following client libraries have **GA** support: 30 31 - `Google BigQuery`_ (`BigQuery README`_) 32 - `Google Cloud Datastore`_ (`Datastore README`_) 33 - `Google Cloud KMS`_ (`KMS README`_) 34 - `Google Cloud Natural Language`_ (`Natural Language README`_) 35 - `Google Cloud Scheduler`_ (`Scheduler README`_) 36 - `Google Cloud Spanner`_ (`Spanner README`_) 37 - `Google Cloud Speech`_ (`Speech README`_) 38 - `Google Cloud Storage`_ (`Storage README`_) 39 - `Google Cloud Tasks`_ (`Tasks README`_) 40 - `Google Cloud Translation`_ (`Translation README`_) 41 - `Stackdriver Logging`_ (`Logging README`_) 42 43 .. _Google BigQuery: https://pypi.org/project/google-cloud-bigquery/ 44 .. _BigQuery README: https://github.com/googleapis/google-cloud-python/tree/master/bigquery 45 .. _Google Cloud Datastore: https://pypi.org/project/google-cloud-datastore/ 46 .. _Datastore README: https://github.com/googleapis/google-cloud-python/tree/master/datastore 47 .. _Google Cloud KMS: https://pypi.org/project/google-cloud-kms/ 48 .. _KMS README: https://github.com/googleapis/google-cloud-python/tree/master/kms 49 .. _Google Cloud Natural Language: https://pypi.org/project/google-cloud-language/ 50 .. _Natural Language README: https://github.com/googleapis/google-cloud-python/tree/master/language 51 .. _Google Cloud Spanner: https://pypi.org/project/google-cloud-spanner 52 .. _Spanner README: https://github.com/googleapis/google-cloud-python/tree/master/spanner 53 .. 
_Google Cloud Speech: https://pypi.org/project/google-cloud-speech/ 54 .. _Speech README: https://github.com/googleapis/google-cloud-python/tree/master/speech 55 .. _Google Cloud Storage: https://pypi.org/project/google-cloud-storage/ 56 .. _Storage README: https://github.com/googleapis/google-cloud-python/tree/master/storage 57 .. _Google Cloud Tasks: https://pypi.org/project/google-cloud-tasks/ 58 .. _Tasks README: https://github.com/googleapis/google-cloud-python/tree/master/tasks 59 .. _Google Cloud Translation: https://pypi.org/project/google-cloud-translate/ 60 .. _Translation README: https://github.com/googleapis/google-cloud-python/tree/master/translate 61 .. _Google Cloud Scheduler: https://pypi.org/project/google-cloud-scheduler/ 62 .. _Scheduler README: https://github.com/googleapis/google-cloud-python/tree/master/scheduler 63 .. _Stackdriver Logging: https://pypi.org/project/google-cloud-logging/ 64 .. _Logging README: https://github.com/googleapis/google-cloud-python/tree/master/logging 65 66 Beta Support 67 ------------ 68 69 **Beta** indicates that the client library for a particular service is 70 mostly stable and is being prepared for release. Issues and requests 71 against beta libraries are addressed with a higher priority. 72 73 The following client libraries have **beta** support: 74 75 - `Google Cloud Bigtable`_ (`Bigtable README`_) 76 - `Google Cloud Firestore`_ (`Firestore README`_) 77 - `Google Cloud Pub/Sub`_ (`Pub/Sub README`_) 78 - `Google Cloud Video Intelligence`_ (`Video Intelligence README`_) 79 - `Google Cloud Vision`_ (`Vision README`_) 80 81 .. _Google Cloud Bigtable: https://pypi.org/project/google-cloud-bigtable/ 82 .. _Bigtable README: https://github.com/googleapis/google-cloud-python/tree/master/bigtable 83 .. _Google Cloud Firestore: https://pypi.org/project/google-cloud-firestore/ 84 .. _Firestore README: https://github.com/googleapis/google-cloud-python/tree/master/firestore 85 .. _Google Cloud Pub/Sub: https://pypi.org/project/google-cloud-pubsub/ 86 .. _Pub/Sub README: https://github.com/googleapis/google-cloud-python/tree/master/pubsub 87 .. _Google Cloud Video Intelligence: https://pypi.org/project/google-cloud-videointelligence 88 .. _Video Intelligence README: https://github.com/googleapis/google-cloud-python/tree/master/videointelligence 89 .. _Google Cloud Vision: https://pypi.org/project/google-cloud-vision/ 90 .. _Vision README: https://github.com/googleapis/google-cloud-python/tree/master/vision 91 92 93 Alpha Support 94 ------------- 95 96 **Alpha** indicates that the client library for a particular service is 97 still a work-in-progress and is more likely to get backwards-incompatible 98 updates. See `versioning`_ for more details. 
99 100 The following client libraries have **alpha** support: 101 102 - `Google Cloud Asset`_ (`Asset README`_) 103 - `Google Cloud AutoML`_ (`AutoML README`_) 104 - `Google BigQuery Data Transfer`_ (`BigQuery Data Transfer README`_) 105 - `Google Cloud Bigtable - HappyBase`_ (`HappyBase README`_) 106 - `Google Cloud Container`_ (`Container README`_) 107 - `Google Cloud Container Analysis`_ (`Container Analysis README`_) 108 - `Google Cloud Dataproc`_ (`Dataproc README`_) 109 - `Google Cloud DLP`_ (`DLP README`_) 110 - `Google Cloud DNS`_ (`DNS README`_) 111 - `Google Cloud IoT`_ (`IoT README`_) 112 - `Google Cloud Memorystore for Redis`_ (`Redis README`_) 113 - `Google Cloud Resource Manager`_ (`Resource Manager README`_) 114 - `Google Cloud Runtime Configuration`_ (`Runtime Config README`_) 115 - `Google Cloud Security Scanner`_ (`Security Scanner README`_ ) 116 - `Google Cloud Trace`_ (`Trace README`_) 117 - `Google Cloud Text-to-Speech`_ (`Text-to-Speech README`_) 118 - `Grafeas`_ (`Grafeas README`_) 119 - `Stackdriver Error Reporting`_ (`Error Reporting README`_) 120 - `Stackdriver Monitoring`_ (`Monitoring README`_) 121 122 .. _Google Cloud Asset: https://pypi.org/project/google-cloud-asset/ 123 .. _Asset README: https://github.com/googleapis/google-cloud-python/blob/master/asset 124 .. _Google Cloud AutoML: https://pypi.org/project/google-cloud-automl/ 125 .. _AutoML README: https://github.com/googleapis/google-cloud-python/blob/master/automl 126 .. _Google BigQuery Data Transfer: https://pypi.org/project/google-cloud-bigquery-datatransfer/ 127 .. _BigQuery Data Transfer README: https://github.com/googleapis/google-cloud-python/tree/master/bigquery_datatransfer 128 .. _Google Cloud Bigtable - HappyBase: https://pypi.org/project/google-cloud-happybase/ 129 .. _HappyBase README: https://github.com/googleapis/google-cloud-python-happybase 130 .. _Google Cloud Container: https://pypi.org/project/google-cloud-container/ 131 .. _Container README: https://github.com/googleapis/google-cloud-python/tree/master/container 132 .. _Google Cloud Container Analysis: https://pypi.org/project/google-cloud-containeranalysis/ 133 .. _Container Analysis README: https://github.com/googleapis/google-cloud-python/tree/master/containeranalysis 134 .. _Google Cloud Dataproc: https://pypi.org/project/google-cloud-dataproc/ 135 .. _Dataproc README: https://github.com/googleapis/google-cloud-python/tree/master/dataproc 136 .. _Google Cloud DLP: https://pypi.org/project/google-cloud-dlp/ 137 .. _DLP README: https://github.com/googleapis/google-cloud-python/tree/master/dlp 138 .. _Google Cloud DNS: https://pypi.org/project/google-cloud-dns/ 139 .. _DNS README: https://github.com/googleapis/google-cloud-python/tree/master/dns 140 .. _Google Cloud IoT: https://pypi.org/project/google-cloud-iot/ 141 .. _IoT README: https://github.com/googleapis/google-cloud-python/tree/master/iot 142 .. _Google Cloud Memorystore for Redis: https://pypi.org/project/google-cloud-redis/ 143 .. _Redis README: https://github.com/googleapis/google-cloud-python/tree/master/redis 144 .. _Google Cloud Resource Manager: https://pypi.org/project/google-cloud-resource-manager/ 145 .. _Resource Manager README: https://github.com/googleapis/google-cloud-python/tree/master/resource_manager 146 .. _Google Cloud Runtime Configuration: https://pypi.org/project/google-cloud-runtimeconfig/ 147 .. _Runtime Config README: https://github.com/googleapis/google-cloud-python/tree/master/runtimeconfig 148 .. 
_Google Cloud Security Scanner: https://pypi.org/project/google-cloud-websecurityscanner/ 149 .. _Security Scanner README: https://github.com/googleapis/google-cloud-python/blob/master/websecurityscanner 150 .. _Google Cloud Text-to-Speech: https://pypi.org/project/google-cloud-texttospeech/ 151 .. _Text-to-Speech README: https://github.com/googleapis/google-cloud-python/tree/master/texttospeech 152 .. _Google Cloud Trace: https://pypi.org/project/google-cloud-trace/ 153 .. _Trace README: https://github.com/googleapis/google-cloud-python/tree/master/trace 154 .. _Grafeas: https://pypi.org/project/grafeas/ 155 .. _Grafeas README: https://github.com/googleapis/google-cloud-python/tree/master/grafeas 156 .. _Stackdriver Error Reporting: https://pypi.org/project/google-cloud-error-reporting/ 157 .. _Error Reporting README: https://github.com/googleapis/google-cloud-python/tree/master/error_reporting 158 .. _Stackdriver Monitoring: https://pypi.org/project/google-cloud-monitoring/ 159 .. _Monitoring README: https://github.com/googleapis/google-cloud-python/tree/master/monitoring 160 161 .. _versioning: https://github.com/googleapis/google-cloud-python/blob/master/CONTRIBUTING.rst#versioning 162 163 If you need support for other Google APIs, check out the 164 `Google APIs Python Client library`_. 165 166 .. _Google APIs Python Client library: https://github.com/google/google-api-python-client 167 168 169 Example Applications 170 -------------------- 171 172 - `getting-started-python`_ - A sample and `tutorial`_ that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine. 173 - `google-cloud-python-expenses-demo`_ - A sample expenses demo using Cloud Datastore and Cloud Storage 174 175 .. _getting-started-python: https://github.com/GoogleCloudPlatform/getting-started-python 176 .. _tutorial: https://cloud.google.com/python 177 .. _google-cloud-python-expenses-demo: https://github.com/GoogleCloudPlatform/google-cloud-python-expenses-demo 178 179 180 Authentication 181 -------------- 182 183 With ``google-cloud-python`` we try to make authentication as painless as possible. 184 Check out the `Authentication section`_ in our documentation to learn more. 185 You may also find the `authentication document`_ shared by all the 186 ``google-cloud-*`` libraries to be helpful. 187 188 .. _Authentication section: https://google-cloud-python.readthedocs.io/en/latest/core/auth.html 189 .. _authentication document: https://github.com/googleapis/google-cloud-common/tree/master/authentication 190 191 Contributing 192 ------------ 193 194 Contributions to this library are always welcome and highly encouraged. 195 196 See the `CONTRIBUTING doc`_ for more information on how to get started. 197 198 .. _CONTRIBUTING doc: https://github.com/googleapis/google-cloud-python/blob/master/CONTRIBUTING.rst 199 200 201 Community 202 --------- 203 204 Google Cloud Platform Python developers hang out in `Slack`_ in the ``#python`` 205 channel, click here to `get an invitation`_. 206 207 .. _Slack: https://googlecloud-community.slack.com 208 .. _get an invitation: https://gcp-slack.appspot.com/ 209 210 211 License 212 ------- 213 214 Apache 2.0 - See `the LICENSE`_ for more information. 215 216 .. 
_the LICENSE: https://github.com/googleapis/google-cloud-python/blob/master/LICENSE 217 [end of README.rst] [start of automl/google/cloud/automl_v1beta1/proto/video_pb2.py] 1 # -*- coding: utf-8 -*- 2 # Generated by the protocol buffer compiler. DO NOT EDIT! 3 # source: google/cloud/automl_v1beta1/proto/video.proto 4 5 import sys 6 7 _b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) 8 from google.protobuf import descriptor as _descriptor 9 from google.protobuf import message as _message 10 from google.protobuf import reflection as _reflection 11 from google.protobuf import symbol_database as _symbol_database 12 13 # @@protoc_insertion_point(imports) 14 15 _sym_db = _symbol_database.Default() 16 17 18 from google.cloud.automl_v1beta1.proto import ( 19 classification_pb2 as google_dot_cloud_dot_automl__v1beta1_dot_proto_dot_classification__pb2, 20 ) 21 from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 22 23 24 DESCRIPTOR = _descriptor.FileDescriptor( 25 name="google/cloud/automl_v1beta1/proto/video.proto", 26 package="google.cloud.automl.v1beta1", 27 syntax="proto3", 28 serialized_options=_b( 29 "\n\037com.google.cloud.automl.v1beta1B\nVideoProtoP\001ZAgoogle.golang.org/genproto/googleapis/cloud/automl/v1beta1;automl\312\002\033Google\\Cloud\\AutoMl\\V1beta1\352\002\036Google::Cloud::AutoML::V1beta1" 30 ), 31 serialized_pb=_b( 32 '\n-google/cloud/automl_v1beta1/proto/video.proto\x12\x1bgoogle.cloud.automl.v1beta1\x1a\x36google/cloud/automl_v1beta1/proto/classification.proto\x1a\x1cgoogle/api/annotations.proto"$\n"VideoClassificationDatasetMetadata"$\n"VideoObjectTrackingDatasetMetadata""\n VideoClassificationModelMetadata""\n VideoObjectTrackingModelMetadataB\xb1\x01\n\x1f\x63om.google.cloud.automl.v1beta1B\nVideoProtoP\x01ZAgoogle.golang.org/genproto/googleapis/cloud/automl/v1beta1;automl\xca\x02\x1bGoogle\\Cloud\\AutoMl\\V1beta1\xea\x02\x1eGoogle::Cloud::AutoML::V1beta1b\x06proto3' 33 ), 34 dependencies=[ 35 google_dot_cloud_dot_automl__v1beta1_dot_proto_dot_classification__pb2.DESCRIPTOR, 36 google_dot_api_dot_annotations__pb2.DESCRIPTOR, 37 ], 38 ) 39 40 41 _VIDEOCLASSIFICATIONDATASETMETADATA = _descriptor.Descriptor( 42 name="VideoClassificationDatasetMetadata", 43 full_name="google.cloud.automl.v1beta1.VideoClassificationDatasetMetadata", 44 filename=None, 45 file=DESCRIPTOR, 46 containing_type=None, 47 fields=[], 48 extensions=[], 49 nested_types=[], 50 enum_types=[], 51 serialized_options=None, 52 is_extendable=False, 53 syntax="proto3", 54 extension_ranges=[], 55 oneofs=[], 56 serialized_start=164, 57 serialized_end=200, 58 ) 59 60 61 _VIDEOOBJECTTRACKINGDATASETMETADATA = _descriptor.Descriptor( 62 name="VideoObjectTrackingDatasetMetadata", 63 full_name="google.cloud.automl.v1beta1.VideoObjectTrackingDatasetMetadata", 64 filename=None, 65 file=DESCRIPTOR, 66 containing_type=None, 67 fields=[], 68 extensions=[], 69 nested_types=[], 70 enum_types=[], 71 serialized_options=None, 72 is_extendable=False, 73 syntax="proto3", 74 extension_ranges=[], 75 oneofs=[], 76 serialized_start=202, 77 serialized_end=238, 78 ) 79 80 81 _VIDEOCLASSIFICATIONMODELMETADATA = _descriptor.Descriptor( 82 name="VideoClassificationModelMetadata", 83 full_name="google.cloud.automl.v1beta1.VideoClassificationModelMetadata", 84 filename=None, 85 file=DESCRIPTOR, 86 containing_type=None, 87 fields=[], 88 extensions=[], 89 nested_types=[], 90 enum_types=[], 91 serialized_options=None, 92 is_extendable=False, 93 syntax="proto3", 94 extension_ranges=[], 95 
oneofs=[], 96 serialized_start=240, 97 serialized_end=274, 98 ) 99 100 101 _VIDEOOBJECTTRACKINGMODELMETADATA = _descriptor.Descriptor( 102 name="VideoObjectTrackingModelMetadata", 103 full_name="google.cloud.automl.v1beta1.VideoObjectTrackingModelMetadata", 104 filename=None, 105 file=DESCRIPTOR, 106 containing_type=None, 107 fields=[], 108 extensions=[], 109 nested_types=[], 110 enum_types=[], 111 serialized_options=None, 112 is_extendable=False, 113 syntax="proto3", 114 extension_ranges=[], 115 oneofs=[], 116 serialized_start=276, 117 serialized_end=310, 118 ) 119 120 DESCRIPTOR.message_types_by_name[ 121 "VideoClassificationDatasetMetadata" 122 ] = _VIDEOCLASSIFICATIONDATASETMETADATA 123 DESCRIPTOR.message_types_by_name[ 124 "VideoObjectTrackingDatasetMetadata" 125 ] = _VIDEOOBJECTTRACKINGDATASETMETADATA 126 DESCRIPTOR.message_types_by_name[ 127 "VideoClassificationModelMetadata" 128 ] = _VIDEOCLASSIFICATIONMODELMETADATA 129 DESCRIPTOR.message_types_by_name[ 130 "VideoObjectTrackingModelMetadata" 131 ] = _VIDEOOBJECTTRACKINGMODELMETADATA 132 _sym_db.RegisterFileDescriptor(DESCRIPTOR) 133 134 VideoClassificationDatasetMetadata = _reflection.GeneratedProtocolMessageType( 135 "VideoClassificationDatasetMetadata", 136 (_message.Message,), 137 dict( 138 DESCRIPTOR=_VIDEOCLASSIFICATIONDATASETMETADATA, 139 __module__="google.cloud.automl_v1beta1.proto.video_pb2", 140 __doc__="""Dataset metadata specific to video classification. All Video 141 Classification datasets are treated as multi label. 142 """, 143 # @@protoc_insertion_point(class_scope:google.cloud.automl.v1beta1.VideoClassificationDatasetMetadata) 144 ), 145 ) 146 _sym_db.RegisterMessage(VideoClassificationDatasetMetadata) 147 148 VideoObjectTrackingDatasetMetadata = _reflection.GeneratedProtocolMessageType( 149 "VideoObjectTrackingDatasetMetadata", 150 (_message.Message,), 151 dict( 152 DESCRIPTOR=_VIDEOOBJECTTRACKINGDATASETMETADATA, 153 __module__="google.cloud.automl_v1beta1.proto.video_pb2", 154 __doc__="""Dataset metadata specific to video object tracking. 155 """, 156 # @@protoc_insertion_point(class_scope:google.cloud.automl.v1beta1.VideoObjectTrackingDatasetMetadata) 157 ), 158 ) 159 _sym_db.RegisterMessage(VideoObjectTrackingDatasetMetadata) 160 161 VideoClassificationModelMetadata = _reflection.GeneratedProtocolMessageType( 162 "VideoClassificationModelMetadata", 163 (_message.Message,), 164 dict( 165 DESCRIPTOR=_VIDEOCLASSIFICATIONMODELMETADATA, 166 __module__="google.cloud.automl_v1beta1.proto.video_pb2", 167 __doc__="""Model metadata specific to video classification. 168 """, 169 # @@protoc_insertion_point(class_scope:google.cloud.automl.v1beta1.VideoClassificationModelMetadata) 170 ), 171 ) 172 _sym_db.RegisterMessage(VideoClassificationModelMetadata) 173 174 VideoObjectTrackingModelMetadata = _reflection.GeneratedProtocolMessageType( 175 "VideoObjectTrackingModelMetadata", 176 (_message.Message,), 177 dict( 178 DESCRIPTOR=_VIDEOOBJECTTRACKINGMODELMETADATA, 179 __module__="google.cloud.automl_v1beta1.proto.video_pb2", 180 __doc__="""Model metadata specific to video object tracking. 
181 """, 182 # @@protoc_insertion_point(class_scope:google.cloud.automl.v1beta1.VideoObjectTrackingModelMetadata) 183 ), 184 ) 185 _sym_db.RegisterMessage(VideoObjectTrackingModelMetadata) 186 187 188 DESCRIPTOR._options = None 189 # @@protoc_insertion_point(module_scope) 190 [end of automl/google/cloud/automl_v1beta1/proto/video_pb2.py] [start of automl/synth.py] 1 # Copyright 2018 Google LLC 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 """This script is used to synthesize generated parts of this library.""" 16 17 import re 18 19 import synthtool as s 20 from synthtool import gcp 21 22 gapic = gcp.GAPICGenerator() 23 common = gcp.CommonTemplates() 24 versions = ["v1beta1"] 25 26 27 # ---------------------------------------------------------------------------- 28 # Generate automl GAPIC layer 29 # ---------------------------------------------------------------------------- 30 for version in versions: 31 library = gapic.py_library("automl", version, include_protos=True) 32 s.move(library / f"google/cloud/automl_{version}") 33 s.move(library / f"tests/unit/gapic/{version}") 34 s.move(library / f"docs/gapic/{version}") 35 36 s.replace( 37 f"google/cloud/automl_{version}/__init__.py", 38 f"from google.cloud.automl_v1beta1.gapic import prediction_service_client", 39 f"from google.cloud.automl_v1beta1.gapic import prediction_service_client" 40 f"from google.cloud.automl_v1beta1.tables import tables_client" 41 f"\n\n" 42 f"class TablesClient(tables_client.TablesClient):" 43 f" __doc__ = tables_client.TablesClient.__doc__" 44 ) 45 46 s.replace( 47 f"google/cloud/automl_{version}/__init__.py", 48 f"__all__ = (\"enums\", \"types\", \"AutoMlClient\", \"PredictionServiceClient\")", 49 f"__all__ = (\"enums\", \"types\", \"AutoMlClient\", \"PredictionServiceClient\", \"TablesClient\")" 50 ) 51 52 s.move(library / f"docs/conf.py") 53 54 # Use the highest version library to generate import alias. 
55 s.move(library / "google/cloud/automl.py") 56 57 # Fixup issues in generated code 58 s.replace( 59 "**/gapic/*_client.py", 60 r"metadata_type=operations_pb2.OperationMetadata", 61 r"metadata_type=proto_operations_pb2.OperationMetadata", 62 ) 63 64 # Fix spacing/'::' issues in docstrings 65 s.replace( 66 "google/cloud/automl_v1beta1/gapic/prediction_service_client.py", "^\s+::", "" 67 ) 68 69 s.replace( 70 "google/cloud/automl_v1beta1/gapic/auto_ml_client.py", 71 "^(\s+)(::)\n\n\s+?([^\s])", 72 " \g<1>\g<2>\n \g<1>\g<3>", 73 ) 74 75 # Remove 'raw-latex' sections with sample JSON Lines files 76 s.replace( 77 "google/cloud/**/io_pb2.py", 78 r"""Sample in-line 79 JSON Lines file.*?\}`\n""", 80 "\n", 81 flags=re.DOTALL, 82 ) 83 84 # Remove 'raw-latex' sections with sample JSON Lines files 85 s.replace( 86 "google/cloud/**/io_pb2.py", 87 r"""Sample 88 in-line JSON Lines file.*?\}`\n""", 89 "\n", 90 flags=re.DOTALL, 91 ) 92 93 # Replace docstring with no summary line 94 s.replace( 95 "google/cloud/**/io_pb2.py", 96 r"""__doc__ = \"\"\"- For Translation: CSV file ``translation\.csv``, with each """, 97 r'''__doc__ = """ 98 - For Translation: CSV file ``translation.csv``, with each ''', 99 flags=re.DOTALL, 100 ) 101 102 s.replace( 103 "google/cloud/**/io_pb2.py", 104 r":raw-latex:`\\t `", 105 r"\\\\t") 106 # ---------------------------------------------------------------------------- 107 # Add templated files 108 # ---------------------------------------------------------------------------- 109 templated_files = common.py_library(unit_cov_level=82, cov_level=83) 110 s.move(templated_files) 111 112 s.shell.run(["nox", "-s", "blacken"], hide_output=False) 113 [end of automl/synth.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
googleapis/google-cloud-python
5a063e1124014d88e98468bc8f1196dab7592ea1
Synthesis failed for automl Hello! Autosynth couldn't regenerate automl. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth-automl' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/automl/synth.py. synthtool > Ensuring dependencies. synthtool > Pulling artman image. latest: Pulling from googleapis/artman Digest: sha256:45263333b058a4b3c26a8b7680a2710f43eae3d250f791a6cb66423991dcb2df Status: Image is up to date for googleapis/artman:latest synthtool > Cloning googleapis. synthtool > Running generator for google/cloud/automl/artman_automl_v1beta1.yaml. synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1. synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/text_extraction.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/text_extraction.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/io.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/io.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/classification.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/classification.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/operations.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/operations.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/tables.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/tables.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/data_stats.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/data_stats.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/ranges.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/ranges.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/column_spec.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/column_spec.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/detection.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/detection.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/dataset.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/dataset.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/model.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/model.proto synthtool > Copy: 
/home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/data_items.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/data_items.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/annotation_payload.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/annotation_payload.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/temporal.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/temporal.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/text_sentiment.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/text_sentiment.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/annotation_spec.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/annotation_spec.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/model_evaluation.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/model_evaluation.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/translation.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/translation.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/service.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/service.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/image.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/image.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/prediction_service.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/prediction_service.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/table_spec.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/table_spec.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/text.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/text.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/regression.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/regression.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/data_types.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/data_types.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/geometry.proto to 
/home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/geometry.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/text_segment.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/text_segment.proto synthtool > Copy: /home/kbuilder/.cache/synthtool/googleapis/google/cloud/automl/v1beta1/video.proto to /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto/video.proto synthtool > Placed proto files into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/automl-v1beta1/google/cloud/automl_v1beta1/proto. synthtool > Replaced 'metadata_type=operations_pb2.OperationMetadata' in google/cloud/automl_v1beta1/gapic/auto_ml_client.py. synthtool > No replacements made in google/cloud/automl_v1beta1/gapic/prediction_service_client.py for pattern ^\s+::, maybe replacement is not longer needed? synthtool > No replacements made in google/cloud/automl_v1beta1/gapic/auto_ml_client.py for pattern ^(\s+)(::) \s+?([^\s]), maybe replacement is not longer needed? synthtool > Replaced 'Sample in-line\n JSON Lines file.*?\\}`\\n' in google/cloud/automl_v1beta1/proto/io_pb2.py. synthtool > Replaced 'Sample\n in-line JSON Lines file.*?\\}`\\n' in google/cloud/automl_v1beta1/proto/io_pb2.py. synthtool > Replaced '__doc__ = \\"\\"\\"- For Translation: CSV file ``translation\\.csv``, with each ' in google/cloud/automl_v1beta1/proto/io_pb2.py. synthtool > Replaced ':raw-latex:`\\\\t `' in google/cloud/automl_v1beta1/proto/io_pb2.py. .coveragerc .flake8 MANIFEST.in noxfile.py.j2 setup.cfg Running session blacken Creating virtualenv using python3.6 in .nox/blacken pip install black black docs google tests noxfile.py setup.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/__init__.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/enums.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/docs/conf.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/prediction_service_client_config.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/auto_ml_client_config.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/transports/prediction_service_grpc_transport.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/transports/auto_ml_grpc_transport.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/annotation_payload_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/prediction_service_client.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/annotation_spec_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/annotation_spec_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/classification_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/annotation_payload_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/column_spec_pb2_grpc.py 
reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/column_spec_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_items_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_items_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_stats_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/classification_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_types_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_types_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/dataset_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/gapic/auto_ml_client.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/detection_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/geometry_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/geometry_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/data_stats_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/image_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/dataset_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/io_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/detection_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/model_evaluation_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/image_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/model_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/model_evaluation_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/operations_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/model_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/prediction_service_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/ranges_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/ranges_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/prediction_service_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/regression_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/io_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/regression_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/table_spec_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/table_spec_pb2_grpc.py reformatted 
/tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/operations_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/tables_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/service_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/temporal_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/temporal_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_extraction_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_extraction_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_segment_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_segment_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_sentiment_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/tables_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/translation_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/video_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/video_pb2_grpc.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/text_sentiment_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/types.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/translation_pb2.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/noxfile.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/tests/unit/gapic/v1beta1/test_prediction_service_client_v1beta1.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/tests/unit/gapic/v1beta1/test_auto_ml_client_v1beta1.py reformatted /tmpfs/src/git/autosynth/working_repo/automl/google/cloud/automl_v1beta1/proto/service_pb2.py All done! ✨ 🍰 ✨ 70 files reformatted, 6 files left unchanged. Session blacken was successful. synthtool > Cleaned up 2 temporary directories. synthtool > Wrote metadata to synth.metadata. Changed files: M automl/google/cloud/automl_v1beta1/gapic/auto_ml_client.py M automl/google/cloud/automl_v1beta1/gapic/prediction_service_client.py M automl/synth.metadata [autosynth-automl 1aa594d] [CHANGE ME] Re-generated automl to pick up changes in the API or client library generator. 
3 files changed, 57 insertions(+), 27 deletions(-) To https://github.com/googleapis/google-cloud-python.git + 9a77677...1aa594d autosynth-automl -> autosynth-automl (forced update) Traceback (most recent call last): File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 603, in urlopen chunked=chunked) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request six.raise_from(e, None) File "<string>", line 2, in raise_from File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 383, in _make_request httplib_response = conn.getresponse() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 1331, in getresponse response.begin() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 297, in begin version, status, reason = self._read_status() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 266, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 641, in urlopen _stacktrace=sys.exc_info()[2]) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/util/retry.py", line 368, in increment raise six.reraise(type(error), error, _stacktrace) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/packages/six.py", line 685, in reraise raise value.with_traceback(tb) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 603, in urlopen chunked=chunked) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request six.raise_from(e, None) File "<string>", line 2, in raise_from File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/urllib3/connectionpool.py", line 383, in _make_request httplib_response = conn.getresponse() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 1331, in getresponse response.begin() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 297, in begin version, status, reason = self._read_status() File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/http/client.py", line 266, in _read_status raise RemoteDisconnected("Remote end closed connection without" urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 223, in <module> main() File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 211, in main args.repository, branch=branch, title=pr_title, body=pr_body File "/tmpfs/src/git/autosynth/autosynth/github.py", line 62, in create_pull_request "maintainer_can_modify": True, File 
"/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/sessions.py", line 581, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/sessions.py", line 533, in request resp = self.send(prep, **send_kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/sessions.py", line 646, in send r = adapter.send(request, **kwargs) File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/adapters.py", line 498, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',)) ``` Google internal developers can see the full log [here](https://sponge/a0f45fe9-789a-48ac-a9e2-b6d0338a6994).
GitHub is treating the bot hammering it as a DoS. This is a bug in the bot configuration / automl: it needs to retry such cases with backoff.
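The comment above names the fix only in passing, so here is a rough, hedged sketch of what retrying the pull-request POST with exponential backoff could look like. This is not the actual autosynth code; the helper name, retry counts, and the idea of wrapping `requests.post` directly are assumptions for illustration only.

```python
import time

import requests


def post_with_backoff(url, max_retries=5, base_delay=1.0, **kwargs):
    """POST with exponential backoff on dropped connections and 5xx replies.

    Illustrative helper only -- not part of autosynth or google-cloud-python.
    """
    for attempt in range(max_retries):
        try:
            response = requests.post(url, **kwargs)
            if response.status_code < 500:
                # Success, or a client error we should not blindly retry.
                return response
        except requests.exceptions.ConnectionError:
            # Covers the 'Remote end closed connection without response'
            # case seen in the traceback above.
            pass
        # Back off 1s, 2s, 4s, ... before trying again.
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"POST to {url} still failing after {max_retries} attempts")
```

In the failing run, a wrapper along these lines around the `create_pull_request` call would turn the one-off `RemoteDisconnected` into a few spaced-out retries instead of a hard failure.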
2019-08-15T16:52:32Z
<patch> diff --git a/automl/synth.py b/automl/synth.py --- a/automl/synth.py +++ b/automl/synth.py @@ -33,27 +33,33 @@ s.move(library / f"tests/unit/gapic/{version}") s.move(library / f"docs/gapic/{version}") - s.replace( - f"google/cloud/automl_{version}/__init__.py", - f"from google.cloud.automl_v1beta1.gapic import prediction_service_client", - f"from google.cloud.automl_v1beta1.gapic import prediction_service_client" - f"from google.cloud.automl_v1beta1.tables import tables_client" - f"\n\n" - f"class TablesClient(tables_client.TablesClient):" - f" __doc__ = tables_client.TablesClient.__doc__" - ) - - s.replace( - f"google/cloud/automl_{version}/__init__.py", - f"__all__ = (\"enums\", \"types\", \"AutoMlClient\", \"PredictionServiceClient\")", - f"__all__ = (\"enums\", \"types\", \"AutoMlClient\", \"PredictionServiceClient\", \"TablesClient\")" - ) - s.move(library / f"docs/conf.py") # Use the highest version library to generate import alias. s.move(library / "google/cloud/automl.py") +# Add tables client to v1beta1 +s.replace( + f"google/cloud/automl_v1beta1/__init__.py", + f"from google.cloud.automl_v1beta1.gapic import prediction_service_client", + f"from google.cloud.automl_v1beta1.gapic import prediction_service_client\n" + f"from google.cloud.automl_v1beta1.tables import tables_client" + f"\n\n" + f"class TablesClient(tables_client.TablesClient):" + f" __doc__ = tables_client.TablesClient.__doc__" +) + +s.replace( + f"google/cloud/automl_v1beta1/__init__.py", + f"""__all__ = \( + 'enums', + 'types', + 'AutoMlClient', + 'PredictionServiceClient', +\)""", + f"__all__ = (\"enums\", \"types\", \"AutoMlClient\", \"PredictionServiceClient\", \"TablesClient\")" +) + # Fixup issues in generated code s.replace( "**/gapic/*_client.py", </patch>
[]
[]
Lightning-AI__lightning-1797
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Customizing hparams after loading checkpoint ## ❓ Questions and Help ### Before asking: 1. search the issues. 2. search the docs. #### What is your question? I'm wondering what the best practice for loading a model with different hparams than what is stored in the checkpoint? I realize I could just load the model and set them afterwards e.g.: ``` model = model.load_from_checkpoint(args.checkpoint_file) # Load model # Set hparams etc.. model.hparams.arg1 = 0.0 model.hparams.arg2 = 1.0 ``` But the problem is that my model __init__ function depends on the hparams arg1 and arg2 so they're set too late. I could also do ``` checkpoint = torch.load(args.checkpoint_file) checkpoint['hparams']['arg1'] = 0.0 checkpoint['hparams']['arg2'] = 1.0 model = model._load_state_dict(checkpoint) ``` The problem here is that i'm using the protected function _load_state_dict. Is there another way of solving this that i've missed? Or could we consider making _load_state_dict public? </issue> <code> [start of README.md] 1 <div align="center"> 2 3 ![Logo](docs/source/_images/logos/lightning_logo.svg) 4 5 # PyTorch Lightning 6 7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.** 8 9 10 [![PyPI Status](https://badge.fury.io/py/pytorch-lightning.svg)](https://badge.fury.io/py/pytorch-lightning) 11 [![PyPI Status](https://pepy.tech/badge/pytorch-lightning)](https://pepy.tech/project/pytorch-lightning) 12 [![codecov](https://codecov.io/gh/PyTorchLightning/pytorch-lightning/branch/master/graph/badge.svg)](https://codecov.io/gh/PyTorchLightning/pytorch-lightning) 13 [![CodeFactor](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning/badge)](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning) 14 15 [![ReadTheDocs](https://readthedocs.org/projects/pytorch-lightning/badge/?version=0.7.5)](https://pytorch-lightning.readthedocs.io/en/stable/) 16 [![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ) 17 [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE) 18 [![Next Release](https://img.shields.io/badge/Next%20Release-May%2020-<COLOR>.svg)](https://shields.io/) 19 20 <!-- 21 removed until codecov badge isn't empy. likely a config error showing nothing on master. 22 [![codecov](https://codecov.io/gh/Borda/pytorch-lightning/branch/master/graph/badge.svg)](https://codecov.io/gh/Borda/pytorch-lightning) 23 --> 24 </div> 25 26 --- 27 ## Continuous Integration 28 <center> 29 30 | System / PyTorch ver. | 1.1 (min. 
reg) | 1.2 | 1.3 | 1.4 | 1.5 (latest) | 31 | :---: | :---: | :---: | :---: | :---: | :---: | 32 | Linux py3.6 [CPU] | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | 33 | Linux py3.7 [GPU] | - | - | - | - | [![Build Status](http://35.192.60.23/api/badges/PyTorchLightning/pytorch-lightning/status.svg)](http://35.192.60.23/PyTorchLightning/pytorch-lightning) | 34 | Linux py3.6 / py3.7 / py3.8 | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | 35 | OSX py3.6 / py3.7 / py3.8| [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | 36 | Windows py3.6 / py3.7 / py3.8 | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | 37 38 </center> 39 40 Simple installation from PyPI 41 ```bash 42 pip install pytorch-lightning 43 ``` 44 45 ## Docs 46 - [master](https://pytorch-lightning.readthedocs.io/en/latest) 47 - [0.7.5](https://pytorch-lightning.readthedocs.io/en/0.7.5/) 48 - [0.7.3](https://pytorch-lightning.readthedocs.io/en/0.7.3/) 49 - [0.7.1](https://pytorch-lightning.readthedocs.io/en/0.7.1/) 50 - [0.6.0](https://pytorch-lightning.readthedocs.io/en/0.6.0/) 51 - [0.5.3.2](https://pytorch-lightning.readthedocs.io/en/0.5.3.2/) 52 53 ## Refactoring your PyTorch code + benefits + full walk-through 54 [![Watch the video](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/docs/source/_images/general/tutorial_cover.png)](https://www.youtube.com/watch?v=QHww1JH7IDU) 55 56 ## Demo 57 Here's a minimal example without a validation or test loop. 
58 59 ```python 60 # this is just a plain nn.Module with some structure 61 62 class LitClassifier(pl.LightningModule): 63 64 def __init__(self): 65 super().__init__() 66 self.l1 = torch.nn.Linear(28 * 28, 10) 67 68 def forward(self, x): 69 return torch.relu(self.l1(x.view(x.size(0), -1))) 70 71 def training_step(self, batch, batch_nb): 72 x, y = batch 73 loss = F.cross_entropy(self(x), y) 74 tensorboard_logs = {'train_loss': loss} 75 return {'loss': loss, 'log': tensorboard_logs} 76 77 def configure_optimizers(self): 78 return torch.optim.Adam(self.parameters(), lr=0.02) 79 80 # train! 81 train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32) 82 83 model = LitClassifier() 84 trainer = pl.Trainer(gpus=8, precision=16) 85 trainer.fit(model, train_loader) 86 ``` 87 88 Other examples: 89 [GAN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=P0bSmCw57aV5) 90 [BERT](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=7uQVI-xv9Ddj) 91 [DQN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=NWvMLBDySQI5) 92 [MNIST on TPUs](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3) 93 94 ## What is it? 95 [READ THIS QUICK START PAGE](https://pytorch-lightning.readthedocs.io/en/stable/new-project.html) 96 97 Lightning is a way to organize your PyTorch code to decouple the science code from the engineering. 98 It's more of a PyTorch style-guide than a framework. 99 100 In Lightning, you organize your code into 3 distinct categories: 101 102 1. Research code (goes in the LightningModule). 103 2. Engineering code (you delete, and is handled by the Trainer). 104 3. Non-essential research code (logging, etc... this goes in Callbacks). 105 106 Here's an example of how to refactor your research code into a [LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/lightning-module.html). 107 108 ![PT to PL](docs/source/_images/lightning_module/pt_to_pl.png) 109 110 The rest of the code is automated by the [Trainer](https://pytorch-lightning.readthedocs.io/en/latest/trainer.html)! 111 ![PT to PL](docs/source/_images/lightning_module/pt_trainer.png) 112 113 ## Testing Rigour 114 All the automated code by the Trainer is [tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). 115 116 In fact, we also train a few models using a vanilla PyTorch loop and compare with the same model trained using the Trainer to make sure we achieve the EXACT same results. [Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks). 117 118 Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts. 119 120 ## How flexible is it? 121 As you see, you're just organizing your PyTorch code - there's no abstraction. 122 123 And for the stuff that the Trainer abstracts out you can [override any part](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#extensibility) you want to do things like implement your own distributed training, 16-bit precision, or even a custom backwards pass. 
124 125 For example, here you could do your own backward pass 126 127 ```python 128 class LitModel(LightningModule): 129 def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx, 130 second_order_closure=None): 131 optimizer.step() 132 optimizer.zero_grad() 133 ``` 134 135 For anything else you might need, we have an extensive [callback system](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#callbacks) you can use to add arbitrary functionality not implemented by our team in the Trainer. 136 137 ## Who is Lightning for? 138 - Professional researchers 139 - PhD students 140 - Corporate production teams 141 142 If you're just getting into deep learning, we recommend you learn PyTorch first! Once you've implemented a few models, come back and use all the advanced features of Lightning :) 143 144 ## What does lightning control for me? 145 146 Everything in Blue! 147 This is how lightning separates the science (red) from the engineering (blue). 148 149 ![Overview](docs/source/_images/general/pl_overview.gif) 150 151 ## How much effort is it to convert? 152 If your code is not a huge mess you should be able to organize it into a LightningModule in less than 1 hour. 153 If your code IS a mess, then you needed to clean up anyhow ;) 154 155 [Check out this step-by-step guide](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09). 156 [Or watch this video](https://www.youtube.com/watch?v=QHww1JH7IDU). 157 158 159 ## Starting a new project? 160 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed) 161 162 ## Why do I want to use lightning? 163 Although your research/production project might start simple, once you add things like GPU AND TPU training, 16-bit precision, etc, you end up spending more time engineering than researching. Lightning automates AND rigorously tests those parts for you. 164 165 ## Support 166 - [8 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, PhD students from top AI labs. 167 - 100+ community contributors. 168 169 Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support. 
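The callback system mentioned above is where this kind of non-essential code lives. Below is a minimal sketch of a custom callback, assuming the `Callback` base class and the `callbacks=` Trainer argument available in this release of the library; the hook names and print statements are illustrative only:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import Callback


class PrintingCallback(Callback):
    # hooks receive the trainer and the LightningModule being trained;
    # check the docs of your installed version for the full list of hooks
    def on_train_start(self, trainer, pl_module):
        print('Training is starting')

    def on_train_end(self, trainer, pl_module):
        print('Training is done')


# callbacks are passed to the Trainer, keeping the LightningModule clean
trainer = Trainer(callbacks=[PrintingCallback()])
```

Putting logging and bookkeeping in callbacks rather than in the LightningModule is what keeps the research code (category 1 above) separate from the non-essential code (category 3).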
170 171 --- 172 173 ## README Table of Contents 174 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it) 175 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me) 176 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard) 177 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable) 178 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples) 179 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials) 180 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help) 181 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md) 182 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge) 183 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles) 184 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team) 185 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq) 186 187 --- 188 189 ## Realistic example 190 Here's how you would organize a realistic PyTorch project into Lightning. 191 192 ![PT to PL](docs/source/_images/mnist_imgs/pt_to_pl.jpg) 193 194 The LightningModule defines a *system* such as seq-2-seq, GAN, etc... 195 It can ALSO define a simple classifier. 196 197 In summary, you: 198 199 1. Define a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html) 200 ```python 201 class LitSystem(pl.LightningModule): 202 203 def __init__(self): 204 super().__init__() 205 # not the best model... 206 self.l1 = torch.nn.Linear(28 * 28, 10) 207 208 def forward(self, x): 209 return torch.relu(self.l1(x.view(x.size(0), -1))) 210 211 def training_step(self, batch, batch_idx): 212 ... 213 ``` 214 215 2. Fit it with a [Trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html) 216 ```python 217 from pytorch_lightning import Trainer 218 219 model = LitSystem() 220 221 # most basic trainer, uses good defaults 222 trainer = Trainer() 223 trainer.fit(model) 224 ``` 225 226 [Check out the COLAB demo here](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg) 227 228 ## What types of research works? 229 Anything! Remember, that this is just organized PyTorch code. 230 The Training step defines the core complexity found in the training loop. 231 232 #### Could be as complex as a seq2seq 233 234 ```python 235 # define what happens for training here 236 def training_step(self, batch, batch_idx): 237 x, y = batch 238 239 # define your own forward and loss calculation 240 hidden_states = self.encoder(x) 241 242 # even as complex as a seq-2-seq + attn model 243 # (this is just a toy, non-working example to illustrate) 244 start_token = '<SOS>' 245 last_hidden = torch.zeros(...) 
246 loss = 0 247 for step in range(max_seq_len): 248 attn_context = self.attention_nn(hidden_states, start_token) 249 pred = self.decoder(start_token, attn_context, last_hidden) 250 last_hidden = pred 251 pred = self.predict_nn(pred) 252 loss += self.loss(last_hidden, y[step]) 253 254 #toy example as well 255 loss = loss / max_seq_len 256 return {'loss': loss} 257 ``` 258 259 #### Or as basic as CNN image classification 260 261 ```python 262 # define what happens for validation here 263 def validation_step(self, batch, batch_idx): 264 x, y = batch 265 266 # or as basic as a CNN classification 267 out = self(x) 268 loss = my_loss(out, y) 269 return {'loss': loss} 270 ``` 271 272 And without changing a single line of code, you could run on CPUs 273 ```python 274 trainer = Trainer(max_epochs=1) 275 ``` 276 277 278 Or GPUs 279 ```python 280 # 8 GPUs 281 trainer = Trainer(max_epochs=1, gpus=8) 282 283 # 256 GPUs 284 trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32) 285 ``` 286 287 Or TPUs 288 ```python 289 trainer = Trainer(num_tpu_cores=8) 290 ``` 291 292 When you're done training, run the test accuracy 293 ```python 294 trainer.test() 295 ``` 296 297 ## Visualization 298 Lightning has out-of-the-box integration with the popular logging/visualizing frameworks 299 300 - [Tensorboard](https://pytorch.org/docs/stable/tensorboard.html) 301 - [MLFlow](https://mlflow.org/) 302 - [Neptune.ai](https://neptune.ai/) 303 - [Comet.ml](https://www.comet.ml/site/) 304 - [Wandb](https://www.wandb.com/) 305 - [Trains](https://github.com/allegroai/trains) 306 - ... 307 308 ![tensorboard-support](docs/source/_images/general/tf_loss.png) 309 310 311 ## Lightning automates 40+ parts of DL/ML research 312 - GPU training 313 - Distributed GPU (cluster) training 314 - TPU training 315 - EarlyStopping 316 - Logging/Visualizing 317 - Checkpointing 318 - Experiment management 319 - [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases) 320 321 322 ## Examples 323 Check out this awesome list of research papers and implementations done with Lightning. 
324 325 - [Contextual Emotion Detection (DoubleDistilBert)](https://github.com/PyTorchLightning/emotion_transformer) 326 - [Generative Adversarial Network](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=TyYOdg8g77P0) 327 - [Hyperparameter optimization with Optuna](https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py) 328 - [Image Inpainting using Partial Convolutions](https://github.com/ryanwongsa/Image-Inpainting) 329 - [MNIST on TPU](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3#scrollTo=BHBz1_AnamN_) 330 - [NER (transformers, TPU, huggingface)](https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D) 331 - [NeuralTexture (CVPR)](https://github.com/PyTorchLightning/neuraltexture) 332 - [Recurrent Attentive Neural Process](https://github.com/PyTorchLightning/attentive-neural-processes) 333 - [Siamese Nets for One-shot Image Recognition](https://github.com/PyTorchLightning/Siamese-Neural-Networks) 334 - [Speech Transformers](https://github.com/PyTorchLightning/speech-transformer-pytorch_lightning) 335 - [Transformers transfer learning (Huggingface)](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=yr7eaxkF-djf) 336 - [Transformers text classification](https://github.com/ricardorei/lightning-text-classification) 337 - [VAE Library of over 18+ VAE flavors](https://github.com/AntixK/PyTorch-VAE) 338 339 ## Tutorials 340 Check out our [introduction guide](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) to get started. 341 Or jump straight into [our tutorials](https://pytorch-lightning.readthedocs.io/en/latest/#tutorials). 342 343 --- 344 345 ## Asking for help 346 Welcome to the Lightning community! 347 348 If you have any questions, feel free to: 349 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/). 350 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question). 351 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning. 352 4. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ). 353 354 --- 355 ## FAQ 356 **How do I use Lightning for rapid research?** 357 [Here's a walk-through](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) 358 359 **Why was Lightning created?** 360 Lightning has 3 goals in mind: 361 362 1. Maximal flexibility while abstracting out the common boilerplate across research projects. 363 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format. 364 3. Democratizing PyTorch power user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning. 365 366 **How does Lightning compare with Ignite and fast.ai?** 367 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a). 368 369 **Is this another library I have to learn?** 370 Nope! We use pure Pytorch everywhere and don't add unnecessary abstractions! 371 372 **Are there plans to support Python 2?** 373 Nope. 
374 375 **Are there plans to support virtualenv?** 376 Nope. Please use anaconda or miniconda. 377 ```bash 378 conda activate my_env 379 pip install pytorch-lightning 380 ``` 381 382 **Which PyTorch versions do you support?** 383 - **PyTorch 1.1.0** 384 ```bash 385 # install pytorch 1.1.0 using the official instructions 386 387 # install test-tube 0.6.7.6 which supports 1.1.0 388 pip install test-tube==0.6.7.6 389 390 # install latest Lightning version without upgrading deps 391 pip install -U --no-deps pytorch-lightning 392 ``` 393 - **PyTorch 1.2.0+** 394 ```python 395 pip install pytorch-lightning 396 ``` 397 398 ## Custom installation 399 400 ### Bleeding edge 401 402 If you can't wait for the next release, install the most up to date code with: 403 * using GIT (locally clone whole repo with full history) 404 ```bash 405 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade 406 ``` 407 * using instant zip (last state of the repo without git history) 408 ```bash 409 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade 410 ``` 411 412 ### Any release installation 413 414 You can also install any past release `0.X.Y` from this repository: 415 ```bash 416 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade 417 ``` 418 419 ### Lightning team 420 421 #### Leads 422 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder) 423 - Jirka Borovec [(Borda)](https://github.com/Borda) (ghost :) 424 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder) 425 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder) 426 - Justus Schock [(justusschock)](https://github.com/justusschock) (Former Core Member PyTorch Ignite) 427 428 #### Core Maintainers 429 430 - Nick Eggert [(neggert)](https://github.com/neggert) 431 - Jeff Ling [(jeffling)](https://github.com/jeffling) 432 - Jeremy Jordan [(jeremyjordan)](https://github.com/jeremyjordan) 433 - Tullie Murrell [(tullie)](https://github.com/tullie) 434 - Adrian Wälchli [(awaelchli)](https://github.com/awaelchli) 435 436 #### Funding 437 Building open-source software with only a few part-time people is hard! We've secured funding to make sure we can 438 hire a full-time staff, attend conferences, and move faster through implementing features you request. 439 440 Our goal is to build an incredible research platform and a big supportive community. Many open-source projects 441 have gone on to fund operations through things like support and special help for big corporations! 442 443 If you are one of these corporations, please feel free to reach out to [email protected]! 444 445 ## Bibtex 446 If you want to cite the framework feel free to use this (but only if you loved it 😊): 447 448 ```bibtex 449 @article{falcon2019pytorch, 450 title={PyTorch Lightning}, 451 author={Falcon, WA}, 452 journal={GitHub. Note: https://github. com/williamFalcon/pytorch-lightning Cited by}, 453 volume={3}, 454 year={2019} 455 } 456 ``` 457 [end of README.md] [start of pl_examples/__init__.py] 1 """ 2 Template model definition 3 ------------------------- 4 5 In 99% of cases you want to just copy `one of the examples 6 <https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples>`_ 7 to start a new lightningModule and change the core of what your model is actually trying to do. 8 9 .. 
code-block:: bash 10 11 # get a copy of the module template 12 wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/pl_examples/new_project_templates/lightning_module_template.py # noqa: E501 13 14 15 Trainer Example 16 --------------- 17 18 **`__main__` function** 19 20 Normally, we want to let the `__main__` function start the training. 21 Inside the main we parse training arguments with whatever hyperparameters we want. 22 Your LightningModule will have a chance to add hyperparameters. 23 24 .. code-block:: python 25 26 from test_tube import HyperOptArgumentParser 27 28 if __name__ == '__main__': 29 30 # use default args given by lightning 31 root_dir = os.path.split(os.path.dirname(sys.modules['__main__'].__file__))[0] 32 parent_parser = HyperOptArgumentParser(strategy='random_search', add_help=False) 33 add_default_args(parent_parser, root_dir) 34 35 # allow model to overwrite or extend args 36 parser = ExampleModel.add_model_specific_args(parent_parser) 37 hyperparams = parser.parse_args() 38 39 # train model 40 main(hyperparams) 41 42 **Main Function** 43 44 The main function is your entry into the program. This is where you init your model, checkpoint directory, 45 and launch the training. The main function should have 3 arguments: 46 47 - hparams: a configuration of hyperparameters. 48 - slurm_manager: Slurm cluster manager object (can be None) 49 - dict: for you to return any values you want (useful in meta-learning, otherwise set to) 50 51 .. code-block:: python 52 53 def main(hparams, cluster, results_dict): 54 # build model 55 model = MyLightningModule(hparams) 56 57 # configure trainer 58 trainer = Trainer() 59 60 # train model 61 trainer.fit(model) 62 63 64 The `__main__` function will start training on your **main** function. 65 If you use the HyperParameterOptimizer in hyper parameter optimization mode, 66 this main function will get one set of hyperparameters. If you use it as a simple 67 argument parser you get the default arguments in the argument parser. 68 69 So, calling main(hyperparams) runs the model with the default argparse arguments.:: 70 71 main(hyperparams) 72 73 74 CPU hyperparameter search 75 ------------------------- 76 77 .. code-block:: python 78 79 # run a grid search over 20 hyperparameter combinations. 80 hyperparams.optimize_parallel_cpu( 81 main_local, 82 nb_trials=20, 83 nb_workers=1 84 ) 85 86 87 Hyperparameter search on a single or multiple GPUs 88 -------------------------------------------------- 89 90 .. code-block:: python 91 92 # run a grid search over 20 hyperparameter combinations. 93 hyperparams.optimize_parallel_gpu( 94 main_local, 95 nb_trials=20, 96 nb_workers=1, 97 gpus=[0,1,2,3] 98 ) 99 100 101 Hyperparameter search on a SLURM HPC cluster 102 -------------------------------------------- 103 104 .. 
code-block:: python 105 106 def optimize_on_cluster(hyperparams): 107 # enable cluster training 108 cluster = SlurmCluster( 109 hyperparam_optimizer=hyperparams, 110 log_path=hyperparams.tt_save_path, 111 test_tube_exp_name=hyperparams.tt_name 112 ) 113 114 # email for cluster coms 115 cluster.notify_job_status(email='add_email_here', on_done=True, on_fail=True) 116 117 # configure cluster 118 cluster.per_experiment_nb_gpus = hyperparams.per_experiment_nb_gpus 119 cluster.job_time = '48:00:00' 120 cluster.gpu_type = '1080ti' 121 cluster.memory_mb_per_node = 48000 122 123 # any modules for code to run in env 124 cluster.add_command('source activate pytorch_lightning') 125 126 # name of exp 127 job_display_name = hyperparams.tt_name.split('_')[0] 128 job_display_name = job_display_name[0:3] 129 130 # run hopt 131 logging.info('submitting jobs...') 132 cluster.optimize_parallel_cluster_gpu( 133 main, 134 nb_trials=hyperparams.nb_hopt_trials, 135 job_name=job_display_name 136 ) 137 138 # run cluster hyperparameter search 139 optimize_on_cluster(hyperparams) 140 141 """ 142 143 from pl_examples.models.lightning_template import LightningTemplateModel 144 145 __all__ = [ 146 'LightningTemplateModel' 147 ] 148 [end of pl_examples/__init__.py] [start of pytorch_lightning/trainer/training_io.py] 1 """ 2 Lightning can automate saving and loading checkpoints 3 ===================================================== 4 5 Checkpointing is enabled by default to the current working directory. 6 To change the checkpoint path pass in:: 7 8 Trainer(default_root_dir='/your/path/to/save/checkpoints') 9 10 11 To modify the behavior of checkpointing pass in your own callback. 12 13 .. code-block:: python 14 15 from pytorch_lightning.callbacks import ModelCheckpoint 16 17 # DEFAULTS used by the Trainer 18 checkpoint_callback = ModelCheckpoint( 19 filepath=os.getcwd(), 20 save_top_k=1, 21 verbose=True, 22 monitor='val_loss', 23 mode='min', 24 prefix='' 25 ) 26 27 trainer = Trainer(checkpoint_callback=checkpoint_callback) 28 29 30 Restoring training session 31 -------------------------- 32 33 You might want to not only load a model but also continue training it. Use this method to 34 restore the trainer state as well. This will continue from the epoch and global step you last left off. 35 However, the dataloaders will start from the first batch again (if you shuffled it shouldn't matter). 36 37 Lightning will restore the session if you pass a logger with the same version and there's a saved checkpoint. 38 39 .. code-block:: python 40 41 from pytorch_lightning import Trainer 42 43 trainer = Trainer( 44 resume_from_checkpoint=PATH 45 ) 46 47 # this fit call loads model weights and trainer state 48 # the trainer continues seamlessly from where you left off 49 # without having to do anything else. 50 trainer.fit(model) 51 52 53 The trainer restores: 54 55 - global_step 56 - current_epoch 57 - All optimizers 58 - All lr_schedulers 59 - Model weights 60 61 You can even change the logic of your model as long as the weights and "architecture" of 62 the system isn't different. If you add a layer, for instance, it might not work. 63 64 At a rough level, here's what happens inside Trainer :py:mod:`pytorch_lightning.base_module.model_saving.py`: 65 66 .. 
code-block:: python 67 68 self.global_step = checkpoint['global_step'] 69 self.current_epoch = checkpoint['epoch'] 70 71 # restore the optimizers 72 optimizer_states = checkpoint['optimizer_states'] 73 for optimizer, opt_state in zip(self.optimizers, optimizer_states): 74 optimizer.load_state_dict(opt_state) 75 76 # restore the lr schedulers 77 lr_schedulers = checkpoint['lr_schedulers'] 78 for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers): 79 scheduler['scheduler'].load_state_dict(lrs_state) 80 81 # uses the model you passed into trainer 82 model.load_state_dict(checkpoint['state_dict']) 83 84 """ 85 86 import os 87 import re 88 import signal 89 from abc import ABC 90 from argparse import Namespace 91 from subprocess import call 92 from typing import Union 93 94 import torch 95 import torch.distributed as torch_distrib 96 97 from pytorch_lightning import _logger as log 98 from pytorch_lightning.core.lightning import LightningModule 99 from pytorch_lightning.loggers import LightningLoggerBase 100 from pytorch_lightning.overrides.data_parallel import ( 101 LightningDistributedDataParallel, 102 LightningDataParallel, 103 ) 104 from pytorch_lightning.utilities import rank_zero_warn, parsing 105 106 try: 107 import torch_xla 108 import torch_xla.core.xla_model as xm 109 import torch_xla.distributed.xla_multiprocessing as xmp 110 except ImportError: 111 XLA_AVAILABLE = False 112 else: 113 XLA_AVAILABLE = True 114 115 try: 116 import horovod.torch as hvd 117 except ImportError: 118 HOROVOD_AVAILABLE = False 119 else: 120 HOROVOD_AVAILABLE = True 121 122 123 class TrainerIOMixin(ABC): 124 125 # this is just a summary on variables used in this abstract class, 126 # the proper values/initialisation should be done in child class 127 model: LightningModule 128 on_gpu: bool 129 root_gpu: ... 130 resume_from_checkpoint: ... 131 use_ddp: bool 132 use_ddp2: bool 133 use_horovod: bool 134 checkpoint_callback: ... 135 proc_rank: int 136 weights_save_path: str 137 logger: Union[LightningLoggerBase, bool] 138 early_stop_callback: ... 139 lr_schedulers: ... 140 optimizers: ... 141 on_tpu: bool 142 num_training_batches: int 143 accumulate_grad_batches: int 144 145 def get_model(self): 146 is_dp_module = isinstance(self.model, (LightningDistributedDataParallel, 147 LightningDataParallel)) 148 model = self.model.module if is_dp_module else self.model 149 return model 150 151 # -------------------- 152 # CHECK-POINTING 153 # -------------------- 154 def restore_weights(self, model: LightningModule): 155 """ 156 We attempt to restore weights in this order: 157 1. HPC weights. 158 2. if no HPC weights restore checkpoint_path weights 159 3. 
otherwise don't restore weights 160 """ 161 # clear cache before restore 162 if self.on_gpu: 163 torch.cuda.empty_cache() 164 165 # if script called from hpc resubmit, load weights 166 did_restore_hpc_weights = self.restore_hpc_weights_if_needed(model) 167 168 # clear cache after restore 169 if self.on_gpu: 170 torch.cuda.empty_cache() 171 172 if not did_restore_hpc_weights: 173 if self.resume_from_checkpoint is not None: 174 self.restore(self.resume_from_checkpoint, on_gpu=self.on_gpu) 175 176 # wait for all models to restore weights 177 if self.use_ddp or self.use_ddp2: 178 # wait for all processes to catch up 179 torch_distrib.barrier() 180 181 # wait for all models to restore weights 182 if self.on_tpu and XLA_AVAILABLE: 183 # wait for all processes to catch up 184 torch_xla.core.xla_model.rendezvous("pl.TrainerIOMixin.restore_weights") 185 186 elif self.use_horovod: 187 # wait for all processes to catch up 188 hvd.join() 189 190 # clear cache after restore 191 if self.on_gpu: 192 torch.cuda.empty_cache() 193 194 # -------------------- 195 # HPC SIGNAL HANDLING 196 # -------------------- 197 def register_slurm_signal_handlers(self): 198 # see if we're using slurm (not interactive) 199 on_slurm = False 200 try: 201 job_name = os.environ['SLURM_JOB_NAME'] 202 if job_name != 'bash': 203 on_slurm = True 204 except Exception as e: 205 pass 206 207 if on_slurm: 208 log.info('Set SLURM handle signals.') 209 signal.signal(signal.SIGUSR1, self.sig_handler) 210 signal.signal(signal.SIGTERM, self.term_handler) 211 212 def sig_handler(self, signum, frame): # pragma: no-cover 213 if self.proc_rank == 0: 214 # save weights 215 log.info('handling SIGUSR1') 216 self.hpc_save(self.weights_save_path, self.logger) 217 218 # find job id 219 job_id = os.environ['SLURM_JOB_ID'] 220 cmd = 'scontrol requeue {}'.format(job_id) 221 222 # requeue job 223 log.info(f'requeing job {job_id}...') 224 result = call(cmd, shell=True) 225 226 # print result text 227 if result == 0: 228 log.info(f'requeued exp {job_id}') 229 else: 230 log.warning('requeue failed...') 231 232 # close experiment to avoid issues 233 self.logger.close() 234 235 def term_handler(self, signum, frame): 236 # save 237 log.info("bypassing sigterm") 238 239 # -------------------- 240 # MODEL SAVE CHECKPOINT 241 # -------------------- 242 def _atomic_save(self, checkpoint, filepath: str): 243 """Saves a checkpoint atomically, avoiding the creation of incomplete checkpoints. 244 245 This will create a temporary checkpoint with a suffix of ``.part``, then copy it to the final location once 246 saving is finished. 247 248 Args: 249 checkpoint: The object to save. 250 Built to be used with the ``dump_checkpoint`` method, but can deal with anything which ``torch.save`` 251 accepts. 252 filepath: The path to which the checkpoint will be saved. 253 This points to the file that the checkpoint will be stored in. 254 """ 255 tmp_path = str(filepath) + ".part" 256 torch.save(checkpoint, tmp_path) 257 os.replace(tmp_path, filepath) 258 259 def save_checkpoint(self, filepath): 260 checkpoint = self.dump_checkpoint() 261 262 if self.proc_rank == 0: 263 # do the actual save 264 try: 265 self._atomic_save(checkpoint, filepath) 266 except AttributeError as e: 267 if 'hparams' in checkpoint: 268 del checkpoint['hparams'] 269 rank_zero_warn('warning, `hparams` dropped from checkpoint.' 
270 f' An attribute is not picklable {e}') 271 272 self._atomic_save(checkpoint, filepath) 273 274 def restore(self, checkpoint_path: str, on_gpu: bool): 275 """ 276 Restore training state from checkpoint. 277 Also restores all training state like: 278 - epoch 279 - callbacks 280 - schedulers 281 - optimizer 282 """ 283 284 # if on_gpu: 285 # checkpoint = torch.load(checkpoint_path) 286 # else: 287 # load on CPU first 288 checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage) 289 290 # load model state 291 model = self.get_model() 292 293 # load the state_dict on the model automatically 294 model.load_state_dict(checkpoint['state_dict']) 295 296 # give model a chance to load something 297 model.on_load_checkpoint(checkpoint) 298 299 if on_gpu: 300 model.cuda(self.root_gpu) 301 302 # restore amp scaling 303 if self.use_amp and self.use_native_amp and 'native_amp_scaling_state' in checkpoint: 304 self.scaler.load_state_dict(checkpoint['native_amp_scaling_state']) 305 306 # load training state (affects trainer only) 307 self.restore_training_state(checkpoint) 308 309 def dump_checkpoint(self): 310 checkpoint = { 311 'epoch': self.current_epoch + 1, 312 'global_step': self.global_step + 1, 313 } 314 315 if self.checkpoint_callback is not None and self.checkpoint_callback is not False: 316 checkpoint['checkpoint_callback_best'] = self.checkpoint_callback.best 317 318 if self.early_stop_callback is not None and self.checkpoint_callback is not False: 319 checkpoint['early_stop_callback_wait'] = self.early_stop_callback.wait 320 checkpoint['early_stop_callback_patience'] = self.early_stop_callback.patience 321 322 # save optimizers 323 optimizer_states = [] 324 for i, optimizer in enumerate(self.optimizers): 325 optimizer_states.append(optimizer.state_dict()) 326 327 checkpoint['optimizer_states'] = optimizer_states 328 329 # save lr schedulers 330 lr_schedulers = [] 331 for scheduler in self.lr_schedulers: 332 lr_schedulers.append(scheduler['scheduler'].state_dict()) 333 334 checkpoint['lr_schedulers'] = lr_schedulers 335 336 # add the hparams and state_dict from the model 337 model = self.get_model() 338 339 checkpoint['state_dict'] = model.state_dict() 340 341 # save native amp scaling 342 if self.use_amp and self.use_native_amp: 343 checkpoint['native_amp_scaling_state'] = self.scaler.state_dict() 344 345 if hasattr(model, "hparams"): 346 parsing.clean_namespace(model.hparams) 347 is_namespace = isinstance(model.hparams, Namespace) 348 checkpoint['hparams'] = vars(model.hparams) if is_namespace else model.hparams 349 checkpoint['hparams_type'] = 'namespace' if is_namespace else 'dict' 350 else: 351 rank_zero_warn( 352 "Did not find hyperparameters at model hparams. Saving checkpoint without hyperparameters." 
353 ) 354 355 # give the model a chance to add a few things 356 model.on_save_checkpoint(checkpoint) 357 358 return checkpoint 359 360 # -------------------- 361 # HPC IO 362 # -------------------- 363 def restore_hpc_weights_if_needed(self, model: LightningModule): 364 """If there is a set of hpc weights, use as signal to restore model.""" 365 did_restore = False 366 367 # look for hpc weights 368 folderpath = self.weights_save_path 369 if os.path.exists(folderpath): 370 files = os.listdir(folderpath) 371 hpc_weight_paths = [x for x in files if 'hpc_ckpt' in x] 372 373 # if hpc weights exist restore model 374 if len(hpc_weight_paths) > 0: 375 self.hpc_load(folderpath, self.on_gpu) 376 did_restore = True 377 return did_restore 378 379 def restore_training_state(self, checkpoint): 380 """ 381 Restore trainer state. 382 Model will get its change to update 383 :param checkpoint: 384 :return: 385 """ 386 if self.checkpoint_callback is not None and self.checkpoint_callback is not False: 387 self.checkpoint_callback.best = checkpoint['checkpoint_callback_best'] 388 389 if self.early_stop_callback is not None and self.early_stop_callback is not False: 390 self.early_stop_callback.wait = checkpoint['early_stop_callback_wait'] 391 self.early_stop_callback.patience = checkpoint['early_stop_callback_patience'] 392 393 self.global_step = checkpoint['global_step'] 394 self.current_epoch = checkpoint['epoch'] 395 396 # Division deals with global step stepping once per accumulated batch 397 # Inequality deals with different global step for odd vs even num_training_batches 398 n_accum = 1 if self.accumulate_grad_batches is None else self.accumulate_grad_batches 399 expected_steps = self.num_training_batches / n_accum 400 if self.num_training_batches != 0 and self.global_step % expected_steps > 1: 401 rank_zero_warn( 402 "You're resuming from a checkpoint that ended mid-epoch. " 403 "This can cause unreliable results if further training is done, " 404 "consider using an end of epoch checkpoint. 
" 405 ) 406 407 # restore the optimizers 408 optimizer_states = checkpoint['optimizer_states'] 409 for optimizer, opt_state in zip(self.optimizers, optimizer_states): 410 optimizer.load_state_dict(opt_state) 411 412 # move optimizer to GPU 1 weight at a time 413 # avoids OOM 414 if self.root_gpu is not None: 415 for state in optimizer.state.values(): 416 for k, v in state.items(): 417 if isinstance(v, torch.Tensor): 418 state[k] = v.cuda(self.root_gpu) 419 420 # restore the lr schedulers 421 lr_schedulers = checkpoint['lr_schedulers'] 422 for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers): 423 scheduler['scheduler'].load_state_dict(lrs_state) 424 425 # ---------------------------------- 426 # PRIVATE OPS 427 # ---------------------------------- 428 def hpc_save(self, folderpath: str, logger): 429 # make sure the checkpoint folder exists 430 os.makedirs(folderpath, exist_ok=True) 431 432 # save logger to make sure we get all the metrics 433 logger.save() 434 435 ckpt_number = self.max_ckpt_in_folder(folderpath) + 1 436 437 if not os.path.exists(folderpath): 438 os.makedirs(folderpath, exist_ok=True) 439 filepath = os.path.join(folderpath, f'hpc_ckpt_{ckpt_number}.ckpt') 440 441 # give model a chance to do something on hpc_save 442 model = self.get_model() 443 checkpoint = self.dump_checkpoint() 444 445 model.on_hpc_save(checkpoint) 446 447 # do the actual save 448 # TODO: fix for anything with multiprocess DP, DDP, DDP2 449 try: 450 self._atomic_save(checkpoint, filepath) 451 except AttributeError as e: 452 if 'hparams' in checkpoint: 453 del checkpoint['hparams'] 454 rank_zero_warn('warning, `hparams` dropped from checkpoint.' 455 f' An attribute is not picklable {e}') 456 457 self._atomic_save(checkpoint, filepath) 458 459 return filepath 460 461 def hpc_load(self, folderpath, on_gpu): 462 filepath = '{}/hpc_ckpt_{}.ckpt'.format(folderpath, self.max_ckpt_in_folder(folderpath)) 463 464 # load on CPU first 465 checkpoint = torch.load(filepath, map_location=lambda storage, loc: storage) 466 467 # load model state 468 model = self.get_model() 469 470 # load the state_dict on the model automatically 471 model.load_state_dict(checkpoint['state_dict']) 472 473 # restore amp scaling 474 if self.use_amp and self.use_native_amp and 'native_amp_scaling_state' in checkpoint: 475 self.scaler.load_state_dict(checkpoint['native_amp_scaling_state']) 476 477 if self.root_gpu is not None: 478 model.cuda(self.root_gpu) 479 480 # load training state (affects trainer only) 481 self.restore_training_state(checkpoint) 482 483 # call model hook 484 model.on_hpc_load(checkpoint) 485 486 log.info(f'restored hpc model from: {filepath}') 487 488 def max_ckpt_in_folder(self, path, name_key='ckpt_'): 489 files = os.listdir(path) 490 files = [x for x in files if name_key in x] 491 if len(files) == 0: 492 return 0 493 494 ckpt_vs = [] 495 for name in files: 496 name = name.split(name_key)[-1] 497 name = re.sub('[^0-9]', '', name) 498 ckpt_vs.append(int(name)) 499 500 return max(ckpt_vs) 501 [end of pytorch_lightning/trainer/training_io.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Lightning-AI/lightning
10ce1c0256e562b42cdedbc81beba626ca959f63
Customizing hparams after loading checkpoint

## ❓ Questions and Help

### Before asking:
1. Search the issues.
2. Search the docs.

#### What is your question?
I'm wondering what the best practice is for loading a model with different hparams than what is stored in the checkpoint? I realize I could just load the model and set them afterwards, e.g.:
```
model = model.load_from_checkpoint(args.checkpoint_file)  # Load model

# Set hparams, etc.
model.hparams.arg1 = 0.0
model.hparams.arg2 = 1.0
```
But the problem is that my model's __init__ function depends on the hparams arg1 and arg2, so they're set too late. I could also do
```
checkpoint = torch.load(args.checkpoint_file)
checkpoint['hparams']['arg1'] = 0.0
checkpoint['hparams']['arg2'] = 1.0
model = model._load_state_dict(checkpoint)
```
The problem here is that I'm using the protected function _load_state_dict. Is there another way of solving this that I've missed? Or could we consider making _load_state_dict public?
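A minimal sketch of the workaround described in the question, using only public APIs by editing the checkpoint on disk before loading it; the file paths and `MyLightningModule` are placeholders, and the `'hparams'` key layout follows the question above:

```python
import torch

# placeholder paths standing in for args.checkpoint_file
src_ckpt = 'original.ckpt'
patched_ckpt = 'patched.ckpt'

# load the raw checkpoint dict, tweak the stored hyperparameters, save a copy
checkpoint = torch.load(src_ckpt, map_location='cpu')
checkpoint['hparams']['arg1'] = 0.0
checkpoint['hparams']['arg2'] = 1.0
torch.save(checkpoint, patched_ckpt)

# MyLightningModule is a placeholder for your own LightningModule subclass;
# it now sees the modified hparams when __init__ runs
model = MyLightningModule.load_from_checkpoint(patched_ckpt)
```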
@tullie good question, maybe we can have a flag for disabling hparam use?
```python
load_from_checkpoint(PATH, auto_hparam=False, arg1=my_arg, arg2=my_arg2, hparam={...})
```
`load_from_checkpoint` currently allows passing in the args directly, as shown above:
```python
load_from_checkpoint(PATH, auto_hparam=False, arg1=my_arg, arg2=my_arg2)
```
so, in this case, an arg (hparam) would be a regular arg which you could then construct to be whatever you want.

Alternative 2: we add an `hparam_updates` arg which sets those updates in the hparams:
```python
load_from_checkpoint(PATH, hparam_updates={'arg1': 0.0, 'arg2': 0.0})
```
I've been playing around with both of these options and prefer alternative 2 the most. I'll send out a PR soon.
2020-05-12T13:07:30Z
<patch> diff --git a/pytorch_lightning/core/lightning.py b/pytorch_lightning/core/lightning.py --- a/pytorch_lightning/core/lightning.py +++ b/pytorch_lightning/core/lightning.py @@ -16,8 +16,8 @@ from pytorch_lightning.core.grads import GradInformation from pytorch_lightning.core.hooks import ModelHooks from pytorch_lightning.core.memory import ModelSummary +from pytorch_lightning.core.saving import ModelIO, load_hparams_from_tags_csv, update_hparams from pytorch_lightning.core.properties import DeviceDtypeModuleMixin -from pytorch_lightning.core.saving import ModelIO, load_hparams_from_tags_csv from pytorch_lightning.overrides.data_parallel import LightningDistributedDataParallel from pytorch_lightning.utilities.exceptions import MisconfigurationException from pytorch_lightning.utilities import rank_zero_warn @@ -1439,6 +1439,7 @@ def load_from_checkpoint( checkpoint_path: str, map_location: Optional[Union[Dict[str, str], str, torch.device, int, Callable]] = None, tags_csv: Optional[str] = None, + hparam_overrides: Optional[Dict] = None, *args, **kwargs ) -> 'LightningModule': r""" @@ -1480,6 +1481,7 @@ def __init__(self, hparams): use this method to pass in a .csv file with the hparams you'd like to use. These will be converted into a :class:`~argparse.Namespace` and passed into your :class:`LightningModule` for use. + hparam_overrides: A dictionary with keys to override in the hparams Return: :class:`LightningModule` with loaded weights and hyperparameters (if available). @@ -1503,6 +1505,12 @@ def __init__(self, hparams): tags_csv='/path/to/hparams_file.csv' ) + # override some of the params with new values + MyLightningModule.load_from_checkpoint( + PATH, + hparam_overrides={'num_layers': 128, 'pretrained_ckpt_path': NEW_PATH} + ) + # or load passing whatever args the model takes to load MyLightningModule.load_from_checkpoint( 'path/to/checkpoint.ckpt', @@ -1521,12 +1529,16 @@ def __init__(self, hparams): else: checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage) + # add the hparams from csv file to checkpoint if tags_csv is not None: - # add the hparams from csv file to checkpoint hparams = load_hparams_from_tags_csv(tags_csv) hparams.__setattr__('on_gpu', False) checkpoint['hparams'] = vars(hparams) + # override the hparam keys that were passed in + if hparam_overrides is not None: + update_hparams(hparams, hparam_overrides) + model = cls._load_model_state(checkpoint, *args, **kwargs) return model diff --git a/pytorch_lightning/core/saving.py b/pytorch_lightning/core/saving.py --- a/pytorch_lightning/core/saving.py +++ b/pytorch_lightning/core/saving.py @@ -48,6 +48,37 @@ def on_hpc_load(self, checkpoint: Dict[str, Any]) -> None: """ +def update_hparams(hparams: dict, updates: dict) -> None: + """ + Overrides hparams with new values + + >>> hparams = {'c': 4} + >>> update_hparams(hparams, {'a': {'b': 2}, 'c': 1}) + >>> hparams['a']['b'], hparams['c'] + (2, 1) + >>> update_hparams(hparams, {'a': {'b': 4}, 'c': 7}) + >>> hparams['a']['b'], hparams['c'] + (4, 7) + + Args: + hparams: the original params and also target object + updates: new params to be used as update + + """ + for k, v in updates.items(): + # if missing, add the key + if k not in hparams: + hparams[k] = v + continue + + # recurse if dictionary + if isinstance(v, dict): + update_hparams(hparams[k], updates[k]) + else: + # update the value + hparams.update({k: v}) + + def load_hparams_from_tags_csv(tags_csv: str) -> Namespace: if not os.path.isfile(tags_csv): log.warning(f'Missing Tags: 
{tags_csv}.') </patch>
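With the patch above applied, stored hyperparameters can be overridden at load time through the new `hparam_overrides` keyword; the patch's updated docstring shows `MyLightningModule.load_from_checkpoint(PATH, hparam_overrides={'num_layers': 128, 'pretrained_ckpt_path': NEW_PATH})`. The merge itself is done by the new `update_hparams` helper, which updates a plain dict in place and recurses into nested dicts. A small sketch of its behaviour, extending the doctest added in the patch (the key names here are arbitrary):

```python
from pytorch_lightning.core.saving import update_hparams

# an existing hparams dict, updated in place
hparams = {'c': 4, 'optimizer': {'lr': 1e-3}}
update_hparams(hparams, {'a': {'b': 2}, 'c': 1, 'optimizer': {'lr': 1e-4}})

# missing keys are added, scalars replaced, nested dicts merged recursively
assert hparams == {'c': 1, 'a': {'b': 2}, 'optimizer': {'lr': 1e-4}}
```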
[]
[]
Qiskit__qiskit-2978
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> DAGCircuit Documentation types outdated <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues --> ### Information The documentation of DAGCircuit still references tuples instead of the QuantumRegisters as input type for wires and qargs. This does not work any more. - **Qiskit Terra version**: 0.8.2 - **Python version**: 3.7.4 - **Operating system**: macOS 10.14 ### What is the current behavior? An excerpt from the current documentation: ``` def apply_operation_back(self, op, qargs=None, cargs=None, condition=None): """Apply an operation to the output of the circuit. Args: op (Instruction): the operation associated with the DAG node qargs (list[tuple]): qubits that op will be applied to cargs (list[tuple]): cbits that op will be applied to condition (tuple or None): optional condition (ClassicalRegister, int) Returns: DAGNode: the current max node Raises: DAGCircuitError: if a leaf node is connected to multiple outputs """ ``` which lists tuples as expected type. </issue> <code> [start of README.md] 1 # Qiskit Terra 2 3 [![License](https://img.shields.io/github/license/Qiskit/qiskit-terra.svg?style=popout-square)](https://opensource.org/licenses/Apache-2.0)[![Build Status](https://img.shields.io/travis/com/Qiskit/qiskit-terra/master.svg?style=popout-square)](https://travis-ci.com/Qiskit/qiskit-terra)[![](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases)[![](https://img.shields.io/pypi/dm/qiskit-terra.svg?style=popout-square)](https://pypi.org/project/qiskit-terra/)[![Coverage Status](https://coveralls.io/repos/github/Qiskit/qiskit-terra/badge.svg?branch=master)](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master) 4 5 **Qiskit** is an open-source framework for working with Noisy Intermediate-Scale Quantum (NISQ) computers at the level of pulses, circuits, and algorithms. 6 7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built. 8 9 ## Installation 10 11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra. 12 13 ```bash 14 pip install qiskit 15 ``` 16 17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version. 18 19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source). 20 21 ## Creating Your First Quantum Program in Qiskit Terra 22 23 Now that Qiskit is installed, it's time to begin working with Terra. 24 25 We are ready to try out a quantum circuit example, which is simulated locally using 26 the Qiskit BasicAer element. This is a simple example that makes an entangled state. 
27 28 ``` 29 $ python 30 ``` 31 32 ```python 33 >>> from qiskit import * 34 >>> qc = QuantumCircuit(2, 2) 35 >>> qc.h(0) 36 >>> qc.cx(0, 1) 37 >>> qc.measure([0,1], [0,1]) 38 >>> backend_sim = BasicAer.get_backend('qasm_simulator') 39 >>> result = execute(qc, backend_sim).result() 40 >>> print(result.get_counts(qc)) 41 ``` 42 43 In this case, the output will be: 44 45 ```python 46 {'00': 513, '11': 511} 47 ``` 48 49 A script is available [here](examples/python/hello_quantum.py), where we also show how to 50 run the same program on a real quantum computer via IBMQ. 51 52 ### Executing your code on a real quantum chip 53 54 You can also use Qiskit to execute your code on a 55 **real quantum chip**. 56 In order to do so, you need to configure Qiskit for using the credentials in 57 your IBM Q account: 58 59 #### Configure your IBMQ credentials 60 61 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so. 62 63 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account. 64 65 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run: 66 67 ```python 68 >>> from qiskit import IBMQ 69 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL') 70 ``` 71 72 After calling `IBMQ.save_account()`, your credentials will be stored on disk. 73 Once they are stored, at any point in the future you can load and use them 74 in your program simply via: 75 76 ```python 77 >>> from qiskit import IBMQ 78 >>> IBMQ.load_account() 79 ``` 80 81 Those who do not want to save their credentials to disk should use instead: 82 83 ```python 84 >>> from qiskit import IBMQ 85 >>> IBMQ.enable_account('MY_API_TOKEN') 86 ``` 87 88 and the token will only be active for the session. For examples using Terra with real 89 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in 90 the levels. 91 92 ## Contribution Guidelines 93 94 If you'd like to contribute to Qiskit Terra, please take a look at our 95 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. 96 97 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please 98 [join the Qiskit Slack community](https://join.slack.com/t/qiskit/shared_invite/enQtNjQ5OTc5ODM1ODYyLTc2YWJhOWViZDA2OWI5N2EyMjIxN2YwODM5MWQyN2Q3MjczOGRlMDU4MzMxMWE5MzZjMzEzYzM3MmJiMzU5MzU) 99 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions. 100 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit). 101 102 ## Next Steps 103 104 Now you're set up and ready to check out some of the other examples from our 105 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository. 106 107 ## Authors and Citation 108 109 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 110 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib). 
111 112 ## License 113 114 [Apache License 2.0](LICENSE.txt) 115 [end of README.md] [start of qiskit/circuit/quantumcircuit.py] 1 # -*- coding: utf-8 -*- 2 3 # This code is part of Qiskit. 4 # 5 # (C) Copyright IBM 2017. 6 # 7 # This code is licensed under the Apache License, Version 2.0. You may 8 # obtain a copy of this license in the LICENSE.txt file in the root directory 9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 10 # 11 # Any modifications or derivative works of this code must retain this 12 # copyright notice, and modified files need to carry a notice indicating 13 # that they have been altered from the originals. 14 15 """Quantum circuit object.""" 16 17 from copy import deepcopy 18 import itertools 19 import sys 20 import multiprocessing as mp 21 from warnings import warn 22 from collections import OrderedDict 23 from qiskit.circuit.instruction import Instruction 24 from qiskit.qasm.qasm import Qasm 25 from qiskit.exceptions import QiskitError 26 from .parameterexpression import ParameterExpression 27 from .quantumregister import QuantumRegister, Qubit 28 from .classicalregister import ClassicalRegister, Clbit 29 from .parametertable import ParameterTable 30 from .parametervector import ParameterVector 31 from .instructionset import InstructionSet 32 from .register import Register 33 from .bit import Bit 34 35 36 def _is_bit(obj): 37 """Determine if obj is a bit""" 38 # If there is a bit type this could be replaced by isinstance. 39 if isinstance(obj, tuple) and len(obj) == 2: 40 if isinstance(obj[0], Register) and isinstance(obj[1], int) and obj[1] < len(obj[0]): 41 warn('Referring to a bit as a tuple is being deprecated. ' 42 'Instead go of (qr, 0), use qr[0].', DeprecationWarning) 43 return True 44 return False 45 46 47 class QuantumCircuit: 48 """Quantum circuit.""" 49 instances = 0 50 prefix = 'circuit' 51 52 # Class variable OPENQASM header 53 header = "OPENQASM 2.0;" 54 extension_lib = "include \"qelib1.inc\";" 55 56 def __init__(self, *regs, name=None): 57 """Create a new circuit. 58 A circuit is a list of instructions bound to some registers. 59 Args: 60 *regs (list(Register) or list(Int)): To be included in the circuit. 61 - If [Register], the QuantumRegister and/or ClassicalRegister 62 to include in the circuit. 63 E.g.: QuantumCircuit(QuantumRegister(4)) 64 QuantumCircuit(QuantumRegister(4), ClassicalRegister(3)) 65 QuantumCircuit(QuantumRegister(4, 'qr0'), QuantumRegister(2, 'qr1')) 66 - If [Int], the amount of qubits and/or classical bits to include 67 in the circuit. It can be (Int, ) or (Int, Int). 68 E.g.: QuantumCircuit(4) # A QuantumCircuit with 4 qubits 69 QuantumCircuit(4, 3) # A QuantumCircuit with 4 qubits and 3 classical bits 70 name (str or None): the name of the quantum circuit. If 71 None, an automatically generated string will be assigned. 72 73 Raises: 74 QiskitError: if the circuit name, if given, is not valid. 75 """ 76 if name is None: 77 name = self.cls_prefix() + str(self.cls_instances()) 78 # pylint: disable=not-callable 79 # (known pylint bug: https://github.com/PyCQA/pylint/issues/1699) 80 if sys.platform != "win32" and isinstance(mp.current_process(), mp.context.ForkProcess): 81 name += '-{}'.format(mp.current_process().pid) 82 self._increment_instances() 83 84 if not isinstance(name, str): 85 raise QiskitError("The circuit name should be a string " 86 "(or None to auto-generate a name).") 87 88 self.name = name 89 90 # Data contains a list of instructions and their contexts, 91 # in the order they were applied. 
92 self.data = [] 93 94 # This is a map of registers bound to this circuit, by name. 95 self.qregs = [] 96 self.cregs = [] 97 self.add_register(*regs) 98 99 # Parameter table tracks instructions with variable parameters. 100 self._parameter_table = ParameterTable() 101 102 self._layout = None 103 104 def __str__(self): 105 return str(self.draw(output='text')) 106 107 def __eq__(self, other): 108 # TODO: remove the DAG from this function 109 from qiskit.converters import circuit_to_dag 110 return circuit_to_dag(self) == circuit_to_dag(other) 111 112 @classmethod 113 def _increment_instances(cls): 114 cls.instances += 1 115 116 @classmethod 117 def cls_instances(cls): 118 """Return the current number of instances of this class, 119 useful for auto naming.""" 120 return cls.instances 121 122 @classmethod 123 def cls_prefix(cls): 124 """Return the prefix to use for auto naming.""" 125 return cls.prefix 126 127 def has_register(self, register): 128 """ 129 Test if this circuit has the register r. 130 131 Args: 132 register (Register): a quantum or classical register. 133 134 Returns: 135 bool: True if the register is contained in this circuit. 136 """ 137 has_reg = False 138 if (isinstance(register, QuantumRegister) and 139 register in self.qregs): 140 has_reg = True 141 elif (isinstance(register, ClassicalRegister) and 142 register in self.cregs): 143 has_reg = True 144 return has_reg 145 146 def mirror(self): 147 """Mirror the circuit by reversing the instructions. 148 149 This is done by recursively mirroring all instructions. 150 It does not invert any gate. 151 152 Returns: 153 QuantumCircuit: the mirrored circuit 154 """ 155 reverse_circ = self.copy(name=self.name + '_mirror') 156 reverse_circ.data = [] 157 for inst, qargs, cargs in reversed(self.data): 158 reverse_circ.data.append((inst.mirror(), qargs, cargs)) 159 return reverse_circ 160 161 def inverse(self): 162 """Invert this circuit. 163 164 This is done by recursively inverting all gates. 165 166 Returns: 167 QuantumCircuit: the inverted circuit 168 169 Raises: 170 QiskitError: if the circuit cannot be inverted. 171 """ 172 inverse_circ = self.copy(name=self.name + '_dg') 173 inverse_circ.data = [] 174 for inst, qargs, cargs in reversed(self.data): 175 inverse_circ.data.append((inst.inverse(), qargs, cargs)) 176 return inverse_circ 177 178 def combine(self, rhs): 179 """ 180 Append rhs to self if self contains compatible registers. 181 182 Two circuits are compatible if they contain the same registers 183 or if they contain different registers with unique names. The 184 returned circuit will contain all unique registers between both 185 circuits. 186 187 Return self + rhs as a new object. 188 """ 189 # Check registers in LHS are compatible with RHS 190 self._check_compatible_regs(rhs) 191 192 # Make new circuit with combined registers 193 combined_qregs = deepcopy(self.qregs) 194 combined_cregs = deepcopy(self.cregs) 195 196 for element in rhs.qregs: 197 if element not in self.qregs: 198 combined_qregs.append(element) 199 for element in rhs.cregs: 200 if element not in self.cregs: 201 combined_cregs.append(element) 202 circuit = QuantumCircuit(*combined_qregs, *combined_cregs) 203 for instruction_context in itertools.chain(self.data, rhs.data): 204 circuit.append(*instruction_context) 205 return circuit 206 207 def extend(self, rhs): 208 """ 209 Append rhs to self if self contains compatible registers. 
210 211 Two circuits are compatible if they contain the same registers 212 or if they contain different registers with unique names. The 213 returned circuit will contain all unique registers between both 214 circuits. 215 216 Modify and return self. 217 """ 218 # Check registers in LHS are compatible with RHS 219 self._check_compatible_regs(rhs) 220 221 # Add new registers 222 for element in rhs.qregs: 223 if element not in self.qregs: 224 self.qregs.append(element) 225 for element in rhs.cregs: 226 if element not in self.cregs: 227 self.cregs.append(element) 228 229 # Add new gates 230 for instruction_context in rhs.data: 231 self.append(*instruction_context) 232 return self 233 234 @property 235 def qubits(self): 236 """ 237 Returns a list of quantum bits in the order that the registers had been added. 238 """ 239 return [qbit for qreg in self.qregs for qbit in qreg] 240 241 @property 242 def clbits(self): 243 """ 244 Returns a list of classical bits in the order that the registers had been added. 245 """ 246 return [cbit for creg in self.cregs for cbit in creg] 247 248 def __add__(self, rhs): 249 """Overload + to implement self.combine.""" 250 return self.combine(rhs) 251 252 def __iadd__(self, rhs): 253 """Overload += to implement self.extend.""" 254 return self.extend(rhs) 255 256 def __len__(self): 257 """Return number of operations in circuit.""" 258 return len(self.data) 259 260 def __getitem__(self, item): 261 """Return indexed operation.""" 262 return self.data[item] 263 264 @staticmethod 265 def cast(value, _type): 266 """Best effort to cast value to type. Otherwise, returns the value.""" 267 try: 268 return _type(value) 269 except (ValueError, TypeError): 270 return value 271 272 @staticmethod 273 def _bit_argument_conversion(bit_representation, in_array): 274 ret = None 275 try: 276 if isinstance(bit_representation, Bit): 277 # circuit.h(qr[0]) -> circuit.h([qr[0]]) 278 ret = [bit_representation] 279 elif isinstance(bit_representation, Register): 280 # circuit.h(qr) -> circuit.h([qr[0], qr[1]]) 281 ret = bit_representation[:] 282 elif isinstance(QuantumCircuit.cast(bit_representation, int), int): 283 # circuit.h(0) -> circuit.h([qr[0]]) 284 ret = [in_array[bit_representation]] 285 elif isinstance(bit_representation, slice): 286 # circuit.h(slice(0,2)) -> circuit.h([qr[0], qr[1]]) 287 ret = in_array[bit_representation] 288 elif _is_bit(bit_representation): 289 # circuit.h((qr, 0)) -> circuit.h([qr[0]]) 290 ret = [bit_representation[0][bit_representation[1]]] 291 elif isinstance(bit_representation, list) and \ 292 all(_is_bit(bit) for bit in bit_representation): 293 ret = [bit[0][bit[1]] for bit in bit_representation] 294 elif isinstance(bit_representation, list) and \ 295 all(isinstance(bit, Bit) for bit in bit_representation): 296 # circuit.h([qr[0], qr[1]]) -> circuit.h([qr[0], qr[1]]) 297 ret = bit_representation 298 elif isinstance(QuantumCircuit.cast(bit_representation, list), (range, list)): 299 # circuit.h([0, 1]) -> circuit.h([qr[0], qr[1]]) 300 # circuit.h(range(0,2)) -> circuit.h([qr[0], qr[1]]) 301 ret = [in_array[index] for index in bit_representation] 302 else: 303 raise QiskitError('Not able to expand a %s (%s)' % (bit_representation, 304 type(bit_representation))) 305 except IndexError: 306 raise QiskitError('Index out of range.') 307 except TypeError: 308 raise QiskitError('Type error handling %s (%s)' % (bit_representation, 309 type(bit_representation))) 310 return ret 311 312 def qbit_argument_conversion(self, qubit_representation): 313 """ 314 Converts 
several qubit representations (such as indexes, range, etc) 315 into a list of qubits. 316 317 Args: 318 qubit_representation (Object): representation to expand 319 320 Returns: 321 List(tuple): Where each tuple is a qubit. 322 """ 323 return QuantumCircuit._bit_argument_conversion(qubit_representation, self.qubits) 324 325 def cbit_argument_conversion(self, clbit_representation): 326 """ 327 Converts several classical bit representations (such as indexes, range, etc) 328 into a list of classical bits. 329 330 Args: 331 clbit_representation (Object): representation to expand 332 333 Returns: 334 List(tuple): Where each tuple is a classical bit. 335 """ 336 return QuantumCircuit._bit_argument_conversion(clbit_representation, self.clbits) 337 338 def append(self, instruction, qargs=None, cargs=None): 339 """Append one or more instructions to the end of the circuit, modifying 340 the circuit in place. Expands qargs and cargs. 341 342 Args: 343 instruction (Instruction or Operation): Instruction instance to append 344 qargs (list(argument)): qubits to attach instruction to 345 cargs (list(argument)): clbits to attach instruction to 346 347 Returns: 348 Instruction: a handle to the instruction that was just added 349 """ 350 # Convert input to instruction 351 if not isinstance(instruction, Instruction) and hasattr(instruction, 'to_instruction'): 352 instruction = instruction.to_instruction() 353 354 expanded_qargs = [self.qbit_argument_conversion(qarg) for qarg in qargs or []] 355 expanded_cargs = [self.cbit_argument_conversion(carg) for carg in cargs or []] 356 357 instructions = InstructionSet() 358 for (qarg, carg) in instruction.broadcast_arguments(expanded_qargs, expanded_cargs): 359 instructions.add(self._append(instruction, qarg, carg), qarg, carg) 360 return instructions 361 362 def _append(self, instruction, qargs, cargs): 363 """Append an instruction to the end of the circuit, modifying 364 the circuit in place. 365 366 Args: 367 instruction (Instruction or Operator): Instruction instance to append 368 qargs (list(tuple)): qubits to attach instruction to 369 cargs (list(tuple)): clbits to attach instruction to 370 371 Returns: 372 Instruction: a handle to the instruction that was just added 373 374 Raises: 375 QiskitError: if the gate is of a different shape than the wires 376 it is being attached to. 
377 """ 378 if not isinstance(instruction, Instruction): 379 raise QiskitError('object is not an Instruction.') 380 381 # do some compatibility checks 382 self._check_dups(qargs) 383 self._check_qargs(qargs) 384 self._check_cargs(cargs) 385 386 # add the instruction onto the given wires 387 instruction_context = instruction, qargs, cargs 388 self.data.append(instruction_context) 389 390 # track variable parameters in instruction 391 for param_index, param in enumerate(instruction.params): 392 if isinstance(param, ParameterExpression): 393 current_parameters = self.parameters 394 395 for parameter in param.parameters: 396 if parameter in current_parameters: 397 self._parameter_table[parameter].append((instruction, param_index)) 398 else: 399 if parameter.name in {p.name for p in current_parameters}: 400 raise QiskitError( 401 'Name conflict on adding parameter: {}'.format(parameter.name)) 402 self._parameter_table[parameter] = [(instruction, param_index)] 403 404 return instruction 405 406 def add_register(self, *regs): 407 """Add registers.""" 408 if not regs: 409 return 410 411 if any([isinstance(reg, int) for reg in regs]): 412 # QuantumCircuit defined without registers 413 if len(regs) == 1 and isinstance(regs[0], int): 414 # QuantumCircuit with anonymous quantum wires e.g. QuantumCircuit(2) 415 regs = (QuantumRegister(regs[0], 'q'),) 416 elif len(regs) == 2 and all([isinstance(reg, int) for reg in regs]): 417 # QuantumCircuit with anonymous wires e.g. QuantumCircuit(2, 3) 418 regs = (QuantumRegister(regs[0], 'q'), ClassicalRegister(regs[1], 'c')) 419 else: 420 raise QiskitError("QuantumCircuit parameters can be Registers or Integers." 421 " If Integers, up to 2 arguments. QuantumCircuit was called" 422 " with %s." % (regs,)) 423 424 for register in regs: 425 if register.name in [reg.name for reg in self.qregs + self.cregs]: 426 raise QiskitError("register name \"%s\" already exists" 427 % register.name) 428 if isinstance(register, QuantumRegister): 429 self.qregs.append(register) 430 elif isinstance(register, ClassicalRegister): 431 self.cregs.append(register) 432 else: 433 raise QiskitError("expected a register") 434 435 def _check_dups(self, qubits): 436 """Raise exception if list of qubits contains duplicates.""" 437 squbits = set(qubits) 438 if len(squbits) != len(qubits): 439 raise QiskitError("duplicate qubit arguments") 440 441 def _check_qargs(self, qargs): 442 """Raise exception if a qarg is not in this circuit or bad format.""" 443 if not all(isinstance(i, Qubit) for i in qargs): 444 raise QiskitError("qarg is not a Qubit") 445 if not all(self.has_register(i.register) for i in qargs): 446 raise QiskitError("register not in this circuit") 447 448 def _check_cargs(self, cargs): 449 """Raise exception if clbit is not in this circuit or bad format.""" 450 if not all(isinstance(i, Clbit) for i in cargs): 451 raise QiskitError("carg is not a Clbit") 452 if not all(self.has_register(i.register) for i in cargs): 453 raise QiskitError("register not in this circuit") 454 455 def to_instruction(self, parameter_map=None): 456 """Create an Instruction out of this circuit. 457 458 Args: 459 parameter_map(dict): For parameterized circuits, a mapping from 460 parameters in the circuit to parameters to be used in the 461 instruction. If None, existing circuit parameters will also 462 parameterize the instruction. 
463 464 Returns: 465 Instruction: a composite instruction encapsulating this circuit 466 (can be decomposed back) 467 """ 468 from qiskit.converters.circuit_to_instruction import circuit_to_instruction 469 return circuit_to_instruction(self, parameter_map) 470 471 def decompose(self): 472 """Call a decomposition pass on this circuit, 473 to decompose one level (shallow decompose). 474 475 Returns: 476 QuantumCircuit: a circuit one level decomposed 477 """ 478 from qiskit.transpiler.passes.decompose import Decompose 479 from qiskit.converters.circuit_to_dag import circuit_to_dag 480 from qiskit.converters.dag_to_circuit import dag_to_circuit 481 pass_ = Decompose() 482 decomposed_dag = pass_.run(circuit_to_dag(self)) 483 return dag_to_circuit(decomposed_dag) 484 485 def _check_compatible_regs(self, rhs): 486 """Raise exception if the circuits are defined on incompatible registers""" 487 list1 = self.qregs + self.cregs 488 list2 = rhs.qregs + rhs.cregs 489 for element1 in list1: 490 for element2 in list2: 491 if element2.name == element1.name: 492 if element1 != element2: 493 raise QiskitError("circuits are not compatible") 494 495 def qasm(self): 496 """Return OpenQASM string.""" 497 string_temp = self.header + "\n" 498 string_temp += self.extension_lib + "\n" 499 for register in self.qregs: 500 string_temp += register.qasm() + "\n" 501 for register in self.cregs: 502 string_temp += register.qasm() + "\n" 503 for instruction, qargs, cargs in self.data: 504 if instruction.name == 'measure': 505 qubit = qargs[0] 506 clbit = cargs[0] 507 string_temp += "%s %s[%d] -> %s[%d];\n" % (instruction.qasm(), 508 qubit.register.name, qubit.index, 509 clbit.register.name, clbit.index) 510 else: 511 string_temp += "%s %s;\n" % (instruction.qasm(), 512 ",".join(["%s[%d]" % (j.register.name, j.index) 513 for j in qargs + cargs])) 514 return string_temp 515 516 def draw(self, scale=0.7, filename=None, style=None, output=None, 517 interactive=False, line_length=None, plot_barriers=True, 518 reverse_bits=False, justify=None, vertical_compression='medium', idle_wires=True, 519 with_layout=True): 520 """Draw the quantum circuit 521 522 Using the output parameter you can specify the format. The choices are: 523 0. text: ASCII art string 524 1. latex: high-quality images, but heavy external software dependencies 525 2. matplotlib: purely in Python with no external dependencies 526 527 Defaults to an overcomplete basis, in order to not alter gates. 528 529 Args: 530 scale (float): scale of image to draw (shrink if < 1) 531 filename (str): file path to save image to 532 style (dict or str): dictionary of style or file name of style 533 file. You can refer to the 534 :ref:`Style Dict Doc <style-dict-doc>` for more information 535 on the contents. 536 output (str): Select the output method to use for drawing the 537 circuit. Valid choices are `text`, `latex`, `latex_source`, 538 `mpl`. By default the 'text' drawer is used unless a user 539 config file has an alternative backend set as the default. If 540 the output is passed in that backend will always be used. 541 interactive (bool): when set true show the circuit in a new window 542 (for `mpl` this depends on the matplotlib backend being used 543 supporting this). Note when used with either the `text` or the 544 `latex_source` output type this has no effect and will be 545 silently ignored. 
546 line_length (int): sets the length of the lines generated by `text` 547 reverse_bits (bool): When set to True reverse the bit order inside 548 registers for the output visualization. 549 plot_barriers (bool): Enable/disable drawing barriers in the output 550 circuit. Defaults to True. 551 justify (string): Options are `left`, `right` or `none`, if anything 552 else is supplied it defaults to left justified. It refers to where 553 gates should be placed in the output circuit if there is an option. 554 `none` results in each gate being placed in its own column. Currently 555 only supported by text drawer. 556 vertical_compression (string): `high`, `medium` or `low`. It merges the 557 lines generated by `text` so the drawing will take less vertical room. 558 Default is `medium`. It is ignored if output is not `text`. 559 idle_wires (bool): Include idle wires. Default is True. 560 with_layout (bool): Include layout information, with labels on the physical 561 layout. Default is True. 562 Returns: 563 PIL.Image or matplotlib.figure or str or TextDrawing: 564 * PIL.Image: (output `latex`) an in-memory representation of the 565 image of the circuit diagram. 566 * matplotlib.figure: (output `mpl`) a matplotlib figure object 567 for the circuit diagram. 568 * str: (output `latex_source`). The LaTeX source code. 569 * TextDrawing: (output `text`). A drawing that can be printed as 570 ascii art 571 572 Raises: 573 VisualizationError: when an invalid output method is selected 574 """ 575 # pylint: disable=cyclic-import 576 from qiskit.visualization import circuit_drawer 577 return circuit_drawer(self, scale=scale, 578 filename=filename, style=style, 579 output=output, 580 interactive=interactive, 581 line_length=line_length, 582 plot_barriers=plot_barriers, 583 reverse_bits=reverse_bits, 584 justify=justify, 585 vertical_compression=vertical_compression, 586 idle_wires=idle_wires, 587 with_layout=with_layout) 588 589 def size(self): 590 """Returns total number of gate operations in circuit. 591 592 Returns: 593 int: Total number of gate operations. 594 """ 595 gate_ops = 0 596 for instr, _, _ in self.data: 597 if instr.name not in ['barrier', 'snapshot']: 598 gate_ops += 1 599 return gate_ops 600 601 def depth(self): 602 """Return circuit depth (i.e. length of critical path). 603 This does not include compiler or simulator directives 604 such as 'barrier' or 'snapshot'. 605 606 Returns: 607 int: Depth of circuit. 608 609 Notes: 610 The circuit depth and the DAG depth need not bt the 611 same. 612 """ 613 # Labels the registers by ints 614 # and then the qubit position in 615 # a register is given by reg_int+qubit_num 616 reg_offset = 0 617 reg_map = {} 618 for reg in self.qregs + self.cregs: 619 reg_map[reg.name] = reg_offset 620 reg_offset += reg.size 621 622 # A list that holds the height of each qubit 623 # and classical bit. 624 op_stack = [0] * reg_offset 625 # Here we are playing a modified version of 626 # Tetris where we stack gates, but multi-qubit 627 # gates, or measurements have a block for each 628 # qubit or cbit that are connected by a virtual 629 # line so that they all stacked at the same depth. 630 # Conditional gates act on all cbits in the register 631 # they are conditioned on. 632 # We treat barriers or snapshots different as 633 # They are transpiler and simulator directives. 634 # The max stack height is the circuit depth. 
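        # A toy illustration of the bookkeeping above (added for clarity;
        # not in the qiskit source), for a 2-qubit circuit with the gates
        # h(q0); cx(q0, q1); x(q1):
        #   op_stack starts as [0, 0]
        #   h(q0)       -> levels = [1]    -> op_stack = [1, 0]
        #   cx(q0, q1)  -> levels = [2, 1] -> op_stack = [2, 2]
        #   x(q1)       -> levels = [3]    -> op_stack = [2, 3]
        # so the reported depth is max(op_stack) == 3.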
635 for instr, qargs, cargs in self.data: 636 levels = [] 637 reg_ints = [] 638 # If count then add one to stack heights 639 count = True 640 if instr.name in ['barrier', 'snapshot']: 641 count = False 642 for ind, reg in enumerate(qargs + cargs): 643 # Add to the stacks of the qubits and 644 # cbits used in the gate. 645 reg_ints.append(reg_map[reg.register.name] + reg.index) 646 if count: 647 levels.append(op_stack[reg_ints[ind]] + 1) 648 else: 649 levels.append(op_stack[reg_ints[ind]]) 650 # Assuming here that there is no controlled 651 # snapshots or barriers ever. 652 if instr.control: 653 # Controls operate over all bits in the 654 # classical register they use. 655 cint = reg_map[instr.control[0].name] 656 for off in range(instr.control[0].size): 657 if cint + off not in reg_ints: 658 reg_ints.append(cint + off) 659 levels.append(op_stack[cint + off] + 1) 660 661 max_level = max(levels) 662 for ind in reg_ints: 663 op_stack[ind] = max_level 664 665 return max(op_stack) 666 667 def width(self): 668 """Return number of qubits plus clbits in circuit. 669 670 Returns: 671 int: Width of circuit. 672 673 """ 674 return sum(reg.size for reg in self.qregs + self.cregs) 675 676 @property 677 def n_qubits(self): 678 """ 679 Return number of qubits. 680 """ 681 qubits = 0 682 for reg in self.qregs: 683 qubits += reg.size 684 return qubits 685 686 def count_ops(self): 687 """Count each operation kind in the circuit. 688 689 Returns: 690 OrderedDict: a breakdown of how many operations of each kind, sorted by amount. 691 """ 692 count_ops = {} 693 for instr, _, _ in self.data: 694 if instr.name in count_ops.keys(): 695 count_ops[instr.name] += 1 696 else: 697 count_ops[instr.name] = 1 698 return OrderedDict(sorted(count_ops.items(), key=lambda kv: kv[1], reverse=True)) 699 700 def num_connected_components(self, unitary_only=False): 701 """How many non-entangled subcircuits can the circuit be factored to. 702 703 Args: 704 unitary_only (bool): Compute only unitary part of graph. 705 706 Returns: 707 int: Number of connected components in circuit. 708 """ 709 # Convert registers to ints (as done in depth). 710 reg_offset = 0 711 reg_map = {} 712 713 if unitary_only: 714 regs = self.qregs 715 else: 716 regs = self.qregs + self.cregs 717 718 for reg in regs: 719 reg_map[reg.name] = reg_offset 720 reg_offset += reg.size 721 # Start with each qubit or cbit being its own subgraph. 722 sub_graphs = [[bit] for bit in range(reg_offset)] 723 724 num_sub_graphs = len(sub_graphs) 725 726 # Here we are traversing the gates and looking to see 727 # which of the sub_graphs the gate joins together. 728 for instr, qargs, cargs in self.data: 729 if unitary_only: 730 args = qargs 731 num_qargs = len(args) 732 else: 733 args = qargs + cargs 734 num_qargs = len(args) + (1 if instr.control else 0) 735 736 if num_qargs >= 2 and instr.name not in ['barrier', 'snapshot']: 737 graphs_touched = [] 738 num_touched = 0 739 # Controls necessarily join all the cbits in the 740 # register that they use. 
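                # A toy illustration of how the subgraphs get merged
                # (added for clarity; not in the qiskit source). With a
                # 2-qubit register q and a 1-bit register c, the wires map
                # to the ints 0, 1, 2 and sub_graphs starts as [[0], [1], [2]]:
                #   cx q[0], q[1]       -> joins 0 and 1      -> [[2], [0, 1]]
                #   measure q[1] -> c   -> joins [0, 1] and [2] -> [[0, 1, 2]]
                # so num_connected_components() returns 1.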
741 if instr.control and not unitary_only: 742 creg = instr.control[0] 743 creg_int = reg_map[creg.name] 744 for coff in range(creg.size): 745 temp_int = creg_int + coff 746 for k in range(num_sub_graphs): 747 if temp_int in sub_graphs[k]: 748 graphs_touched.append(k) 749 num_touched += 1 750 break 751 752 for item in args: 753 reg_int = reg_map[item.register.name] + item.index 754 for k in range(num_sub_graphs): 755 if reg_int in sub_graphs[k]: 756 if k not in graphs_touched: 757 graphs_touched.append(k) 758 num_touched += 1 759 break 760 761 # If the gate touches more than one subgraph 762 # join those graphs together and return 763 # reduced number of subgraphs 764 if num_touched > 1: 765 connections = [] 766 for idx in graphs_touched: 767 connections.extend(sub_graphs[idx]) 768 _sub_graphs = [] 769 for idx in range(num_sub_graphs): 770 if idx not in graphs_touched: 771 _sub_graphs.append(sub_graphs[idx]) 772 _sub_graphs.append(connections) 773 sub_graphs = _sub_graphs 774 num_sub_graphs -= (num_touched - 1) 775 # Cannot go lower than one so break 776 if num_sub_graphs == 1: 777 break 778 return num_sub_graphs 779 780 def num_unitary_factors(self): 781 """Computes the number of tensor factors in the unitary 782 (quantum) part of the circuit only. 783 """ 784 return self.num_connected_components(unitary_only=True) 785 786 def num_tensor_factors(self): 787 """Computes the number of tensor factors in the unitary 788 (quantum) part of the circuit only. 789 790 Notes: 791 This is here for backwards compatibility, and will be 792 removed in a future release of qiskit. You should call 793 `num_unitary_factors` instead. 794 """ 795 return self.num_unitary_factors() 796 797 def copy(self, name=None): 798 """ 799 Args: 800 name (str): name to be given to the copied circuit, if None then the name stays the same 801 Returns: 802 QuantumCircuit: a deepcopy of the current circuit, with the name updated if 803 it was provided 804 """ 805 cpy = deepcopy(self) 806 if name: 807 cpy.name = name 808 return cpy 809 810 @staticmethod 811 def from_qasm_file(path): 812 """Take in a QASM file and generate a QuantumCircuit object. 813 814 Args: 815 path (str): Path to the file for a QASM program 816 Return: 817 QuantumCircuit: The QuantumCircuit object for the input QASM 818 """ 819 qasm = Qasm(filename=path) 820 return _circuit_from_qasm(qasm) 821 822 @staticmethod 823 def from_qasm_str(qasm_str): 824 """Take in a QASM string and generate a QuantumCircuit object. 825 826 Args: 827 qasm_str (str): A QASM program string 828 Return: 829 QuantumCircuit: The QuantumCircuit object for the input QASM 830 """ 831 qasm = Qasm(data=qasm_str) 832 return _circuit_from_qasm(qasm) 833 834 @property 835 def parameters(self): 836 """convenience function to get the parameters defined in the parameter table""" 837 return set(self._parameter_table.keys()) 838 839 def bind_parameters(self, value_dict): 840 """Assign parameters to values yielding a new circuit. 841 842 Args: 843 value_dict (dict): {parameter: value, ...} 844 845 Raises: 846 QiskitError: If value_dict contains parameters not present in the circuit 847 848 Returns: 849 QuantumCircuit: copy of self with assignment substitution. 
850 """ 851 new_circuit = self.copy() 852 unrolled_value_dict = self._unroll_param_dict(value_dict) 853 854 if unrolled_value_dict.keys() > self.parameters: 855 raise QiskitError('Cannot bind parameters ({}) not present in the circuit.'.format( 856 [str(p) for p in value_dict.keys() - self.parameters])) 857 858 for parameter, value in unrolled_value_dict.items(): 859 new_circuit._bind_parameter(parameter, value) 860 # clear evaluated expressions 861 for parameter in unrolled_value_dict: 862 del new_circuit._parameter_table[parameter] 863 return new_circuit 864 865 def _unroll_param_dict(self, value_dict): 866 unrolled_value_dict = {} 867 for (param, value) in value_dict.items(): 868 if isinstance(param, ParameterExpression): 869 unrolled_value_dict[param] = value 870 if isinstance(param, ParameterVector): 871 if not len(param) == len(value): 872 raise QiskitError('ParameterVector {} has length {}, which ' 873 'differs from value list {} of ' 874 'len {}'.format(param, len(param), value, len(value))) 875 unrolled_value_dict.update(zip(param, value)) 876 return unrolled_value_dict 877 878 def _bind_parameter(self, parameter, value): 879 """Assigns a parameter value to matching instructions in-place.""" 880 for (instr, param_index) in self._parameter_table[parameter]: 881 instr.params[param_index] = instr.params[param_index].bind({parameter: value}) 882 883 def _substitute_parameters(self, parameter_map): 884 """For every {existing_parameter: replacement_parameter} pair in 885 parameter_map, substitute replacement for existing in all 886 circuit instructions and the parameter table. 887 """ 888 for old_parameter, new_parameter in parameter_map.items(): 889 for (instr, param_index) in self._parameter_table[old_parameter]: 890 new_param = instr.params[param_index].subs({old_parameter: new_parameter}) 891 instr.params[param_index] = new_param 892 self._parameter_table[new_parameter] = self._parameter_table.pop(old_parameter) 893 894 895 def _circuit_from_qasm(qasm): 896 # pylint: disable=cyclic-import 897 from qiskit.converters import ast_to_dag 898 from qiskit.converters import dag_to_circuit 899 ast = qasm.parse() 900 dag = ast_to_dag(ast) 901 return dag_to_circuit(dag) 902 [end of qiskit/circuit/quantumcircuit.py] [start of qiskit/dagcircuit/dagnode.py] 1 # -*- coding: utf-8 -*- 2 3 # This code is part of Qiskit. 4 # 5 # (C) Copyright IBM 2017, 2019. 6 # 7 # This code is licensed under the Apache License, Version 2.0. You may 8 # obtain a copy of this license in the LICENSE.txt file in the root directory 9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 10 # 11 # Any modifications or derivative works of this code must retain this 12 # copyright notice, and modified files need to carry a notice indicating 13 # that they have been altered from the originals. 14 15 """Object to represent the information at a node in the DAGCircuit""" 16 17 from qiskit.exceptions import QiskitError 18 19 20 class DAGNode: 21 """Object to represent the information at a node in the DAGCircuit 22 23 It is used as the return value from `*_nodes()` functions and can 24 be supplied to functions that take a node. 
25 """ 26 27 def __init__(self, data_dict, nid=-1): 28 """Create a node """ 29 self._node_id = nid 30 self.data_dict = data_dict 31 32 @property 33 def type(self): 34 """Returns a str which is the type of the node else None""" 35 return self.data_dict.get('type') 36 37 @property 38 def op(self): 39 """Returns the Instruction object corresponding to the op for the node else None""" 40 if 'type' not in self.data_dict or self.data_dict['type'] != 'op': 41 raise QiskitError("The node %s is not an op node" % (str(self))) 42 return self.data_dict.get('op') 43 44 @property 45 def name(self): 46 """Returns a str which is the name of the node else None""" 47 return self.data_dict.get('name') 48 49 @name.setter 50 def name(self, new_name): 51 """Sets the name of the node to be the given value""" 52 self.data_dict['name'] = new_name 53 54 @property 55 def qargs(self): 56 """ 57 Returns list of (QuantumRegister, int) tuples where the int is the index 58 of the qubit else an empty list 59 """ 60 return self.data_dict.get('qargs', []) 61 62 @qargs.setter 63 def qargs(self, new_qargs): 64 """Sets the qargs to be the given list of qargs""" 65 self.data_dict['qargs'] = new_qargs 66 67 @property 68 def cargs(self): 69 """ 70 Returns list of (ClassicalRegister, int) tuples where the int is the index 71 of the cbit else an empty list 72 """ 73 return self.data_dict.get('cargs', []) 74 75 @property 76 def condition(self): 77 """ 78 Returns a tuple (ClassicalRegister, int) where the int is the 79 value of the condition else None 80 """ 81 return self.data_dict.get('condition') 82 83 @property 84 def wire(self): 85 """ 86 Returns (Register, int) tuple where the int is the index of 87 the wire else None 88 """ 89 if self.data_dict['type'] not in ['in', 'out']: 90 raise QiskitError('The node %s is not an input/output node' % str(self)) 91 return self.data_dict.get('wire') 92 93 def __lt__(self, other): 94 return self._node_id < other._node_id 95 96 def __gt__(self, other): 97 return self._node_id > other._node_id 98 99 def __hash__(self): 100 """Needed for ancestors function, which returns a set 101 to be in a set requires the object to be hashable 102 """ 103 return hash(id(self)) 104 105 def __str__(self): 106 # TODO is this used anywhere other than in DAG drawing? 107 # needs to be unique as it is what pydot uses to distinguish nodes 108 return str(id(self)) 109 110 def pop(self, val): 111 """Remove the provided value from the dictionary""" 112 del self.data_dict[val] 113 114 @staticmethod 115 def semantic_eq(node1, node2): 116 """ 117 Check if DAG nodes are considered equivalent, e.g. as a node_match for nx.is_isomorphic. 118 119 Args: 120 node1 (DAGNode): A node to compare. 121 node2 (DAGNode): The other node to compare. 122 123 Return: 124 Bool: If node1 == node2 125 """ 126 # For barriers, qarg order is not significant so compare as sets 127 if 'barrier' == node1.name == node2.name: 128 return set(node1.qargs) == set(node2.qargs) 129 return node1.data_dict == node2.data_dict 130 [end of qiskit/dagcircuit/dagnode.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Qiskit/qiskit
453c69f8d83e2f18f9db01ad501d53e270de1924
DAGCircuit Documentation types outdated

### Information

The documentation of DAGCircuit still references tuples instead of the QuantumRegisters as the input type for wires and qargs. This does not work any more.

- **Qiskit Terra version**: 0.8.2
- **Python version**: 3.7.4
- **Operating system**: macOS 10.14

### What is the current behavior?

An excerpt from the current documentation:

```
def apply_operation_back(self, op, qargs=None, cargs=None, condition=None):
    """Apply an operation to the output of the circuit.

    Args:
        op (Instruction): the operation associated with the DAG node
        qargs (list[tuple]): qubits that op will be applied to
        cargs (list[tuple]): cbits that op will be applied to
        condition (tuple or None): optional condition (ClassicalRegister, int)

    Returns:
        DAGNode: the current max node

    Raises:
        DAGCircuitError: if a leaf node is connected to multiple outputs
    """
```

which lists tuples as the expected type.
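For contrast with the excerpt above, here is a minimal sketch of how `apply_operation_back` is called once the Bit classes are in place, passing `Qubit` objects rather than `(register, index)` tuples. The register name, the use of `HGate`, and the import paths are illustrative assumptions for that era of Terra, not taken from the issue:

```
from qiskit.circuit import QuantumRegister
from qiskit.dagcircuit import DAGCircuit
from qiskit.extensions.standard import HGate  # assumed import path for this era

qreg = QuantumRegister(2, 'q')
dag = DAGCircuit()
dag.add_qreg(qreg)

# qreg[0] is a Qubit object, not a (register, index) tuple.
dag.apply_operation_back(HGate(), qargs=[qreg[0]])
```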
I just took a look at the current master docstring for `apply_operation_back()` and it is correctly listing the input type as a list of Qubit objects and Clbit objects: https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/dagcircuit/dagcircuit.py#L239-L254

@eddieschoute what do you think we need here? We can update the docstring to include more exposition to try and make it clearer (it's a bit terse now), but nothing there looks wrong to me. If it's just a matter of the [hosted documentation](https://qiskit.org/documentation/autodoc/qiskit.dagcircuit.dagcircuit.html) and pip release not being up to date, that will get corrected automatically after we release 0.9 with the bit classes. They didn't exist in 0.8.x, so the published documentation is correct for the released version.

I checked out the latest version and you're right about `apply_operation_back`; however, `apply_operation_front` still has the old description, see https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/dagcircuit/dagcircuit.py#L285. It also looks like `compose_back` still uses the old language, see https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/dagcircuit/dagcircuit.py#L437. I suspect this should be qubits or quantum registers that are being connected by edge_map.

Sorry for the slow response. On taking another look at this, the `edge_map` case is actually a bit weird. Looking at the code that actually consumes the argument (`_check_edgemap_registers()` and `_check_wiremap_validity()`), it looks like the key is still a tuple of `(register, index)` (although they can be bit classes too, because the access patterns from tuples were preserved with deprecated functions on the new classes), but the values are the bit classes (either Qubit or Clbit). I'll update the docs for both `compose_back()` and `apply_operation_front()` shortly.
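To make the `edge_map` discussion above concrete, here is a rough sketch of the wire map `compose_back()` receives once the Bit classes exist; the register names and sizes are made up for illustration. Values are bits of the receiving circuit, and the keys shown here are bits of the circuit being composed in (the old `(register, index)` tuples are still accepted because the new classes preserve that access pattern):

```
from qiskit.circuit import QuantumRegister

qr_other = QuantumRegister(2, 'a')  # wires of the circuit being composed in
qr_self = QuantumRegister(2, 'q')   # wires of the receiving circuit

edge_map = {
    qr_other[0]: qr_self[0],  # Qubit -> Qubit
    qr_other[1]: qr_self[1],
}
```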
2019-08-13T21:51:03Z
<patch> diff --git a/qiskit/dagcircuit/dagcircuit.py b/qiskit/dagcircuit/dagcircuit.py --- a/qiskit/dagcircuit/dagcircuit.py +++ b/qiskit/dagcircuit/dagcircuit.py @@ -190,7 +190,7 @@ def _check_bits(self, args, amap): Args: args (list): (register,idx) tuples - amap (dict): a dictionary keyed on (register,idx) tuples + amap (dict): a dictionary keyed on bit objects Raises: DAGCircuitError: if a qubit is not contained in amap @@ -199,7 +199,7 @@ def _check_bits(self, args, amap): for wire in args: if wire not in amap: raise DAGCircuitError("(qu)bit %s[%d] not found" % - (wire[0].name, wire[1])) + (wire.register.name, wire.index)) def _bits_in_condition(self, cond): """Return a list of bits in the given condition. @@ -287,8 +287,8 @@ def apply_operation_front(self, op, qargs, cargs, condition=None): Args: op (Instruction): the operation associated with the DAG node - qargs (list[tuple]): qubits that op will be applied to - cargs (list[tuple]): cbits that op will be applied to + qargs (list[Qubit]): Qubits that op will be applied to + cargs (list[Clbit]): Clbits that op will be applied to condition (tuple or None): optional condition (ClassicalRegister, value) Returns: @@ -442,9 +442,10 @@ def compose_back(self, input_circuit, edge_map=None): Args: input_circuit (DAGCircuit): circuit to append - edge_map (dict): map {(Register, int): (Register, int)} - from the output wires of input_circuit to input wires - of self. + edge_map (dict): map {Bit: Bit} from the output wires of + input_circuit to input wires of self. The key and value + can either be of type Qubit or Clbit depending on the + type of the node. Raises: DAGCircuitError: if missing, duplicate or inconsistent wire @@ -477,11 +478,12 @@ def compose_back(self, input_circuit, edge_map=None): m_wire = edge_map.get(nd.wire, nd.wire) # the mapped wire should already exist if m_wire not in self.output_map: - raise DAGCircuitError("wire %s[%d] not in self" % (m_wire[0].name, m_wire[1])) + raise DAGCircuitError("wire %s[%d] not in self" % ( + m_wire.register.name, m_wire.index)) if nd.wire not in input_circuit.wires: raise DAGCircuitError("inconsistent wire type for %s[%d] in input_circuit" - % (nd.wire[0].name, nd.wire[1])) + % (nd.register.name, nd.wire.index)) elif nd.type == "out": # ignore output nodes @@ -540,12 +542,13 @@ def compose_front(self, input_circuit, edge_map=None): m_name = edge_map.get(nd.wire, nd.wire) # the mapped wire should already exist if m_name not in self.input_map: - raise DAGCircuitError("wire %s[%d] not in self" % (m_name[0].name, m_name[1])) + raise DAGCircuitError("wire %s[%d] not in self" % ( + m_name.register.name, m_name.index)) if nd.wire not in input_circuit.wires: raise DAGCircuitError( "inconsistent wire for %s[%d] in input_circuit" - % (nd.wire[0].name, nd.wire[1])) + % (nd.wire.register.name, nd.wire.index)) elif nd.type == "in": # ignore input nodes @@ -692,7 +695,7 @@ def _full_pred_succ_maps(self, pred_map, succ_map, input_circuit, self.output_map[w])[0] if len(list(self._multi_graph.predecessors(self.output_map[w]))) != 1: raise DAGCircuitError("too many predecessors for %s[%d] " - "output node" % (w[0], w[1])) + "output node" % (w.register, w.index)) return full_pred_map, full_succ_map </patch>
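The fix itself is mechanical: the error messages stop indexing a wire as a `(register, index)` tuple and read the bit object's attributes instead, as the patch's `wire.register.name` / `wire.index` lines show. A tiny sketch of the two access styles (the register name is made up):

```
from qiskit.circuit import QuantumRegister

wire = QuantumRegister(1, 'q')[0]  # a Qubit object

# Attribute style used by the patch:
print(wire.register.name, wire.index)  # -> q 0

# Old tuple style the messages previously assumed:
# wire[0].name, wire[1]
```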
[]
[]
docker__compose-2051
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Extend an entire Compose file It is a common pattern to define a base Compose file, and then define Compose files for different environments with a few small changes. It is currently possible to extend single services, but it is very verbose to include a large number of services and make a small change to one of them (for example, setting `RAILS_ENV=production`). It should be possible to extend a Compose file with a complete set of services from another Compose file. All of those services will be copied into the Compose file, as if you were extending each of the services individually: - If you don't define a service in the child file, it is copied as-is. - If you do define a service, it should behave as if you've extended that single service. - There is no way of undefining a service (yet) This is an intentionally simple first step, and I am intentionally not defining a syntax so we can discuss. Design questions: - This is the first top-level configuration we have added to Compose. How should we do this? Related issues / suggested designs: https://github.com/docker/compose/issues/318 #1380 [dcao-merge](https://github.com/dnephin/compose-addons#dcao-merge) [(This is part of an initiative to define an app once in a way that can be used across dev, test and prod.)](https://github.com/docker/compose/issues/1784) </issue> <code> [start of README.md] 1 Docker Compose 2 ============== 3 ![Docker Compose](logo.png?raw=true "Docker Compose Logo") 4 5 *(Previously known as Fig)* 6 7 Compose is a tool for defining and running multi-container applications with 8 Docker. With Compose, you define a multi-container application in a single 9 file, then spin your application up in a single command which does everything 10 that needs to be done to get it running. 11 12 Compose is great for development environments, staging servers, and CI. We don't 13 recommend that you use it in production yet. 14 15 Using Compose is basically a three-step process. 16 17 1. Define your app's environment with a `Dockerfile` so it can be 18 reproduced anywhere. 19 2. Define the services that make up your app in `docker-compose.yml` so 20 they can be run together in an isolated environment: 21 3. Lastly, run `docker-compose up` and Compose will start and run your entire app. 22 23 A `docker-compose.yml` looks like this: 24 25 web: 26 build: . 27 ports: 28 - "5000:5000" 29 volumes: 30 - .:/code 31 links: 32 - redis 33 redis: 34 image: redis 35 36 Compose has commands for managing the whole lifecycle of your application: 37 38 * Start, stop and rebuild services 39 * View the status of running services 40 * Stream the log output of running services 41 * Run a one-off command on a service 42 43 Installation and documentation 44 ------------------------------ 45 46 - Full documentation is available on [Docker's website](http://docs.docker.com/compose/). 47 - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose) 48 49 Contributing 50 ------------ 51 52 [![Build Status](http://jenkins.dockerproject.org/buildStatus/icon?job=Compose%20Master)](http://jenkins.dockerproject.org/job/Compose%20Master/) 53 54 Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md). 
55 56 Releasing 57 --------- 58 59 Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/RELEASE_PROCESS.md). 60 [end of README.md] [start of compose/cli/command.py] 1 from __future__ import absolute_import 2 from __future__ import unicode_literals 3 4 import logging 5 import os 6 import re 7 8 import six 9 from requests.exceptions import ConnectionError 10 from requests.exceptions import SSLError 11 12 from . import errors 13 from . import verbose_proxy 14 from .. import __version__ 15 from .. import config 16 from ..project import Project 17 from ..service import ConfigError 18 from .docker_client import docker_client 19 from .docopt_command import DocoptCommand 20 from .utils import call_silently 21 from .utils import is_mac 22 from .utils import is_ubuntu 23 24 log = logging.getLogger(__name__) 25 26 27 class Command(DocoptCommand): 28 base_dir = '.' 29 30 def dispatch(self, *args, **kwargs): 31 try: 32 super(Command, self).dispatch(*args, **kwargs) 33 except SSLError as e: 34 raise errors.UserError('SSL error: %s' % e) 35 except ConnectionError: 36 if call_silently(['which', 'docker']) != 0: 37 if is_mac(): 38 raise errors.DockerNotFoundMac() 39 elif is_ubuntu(): 40 raise errors.DockerNotFoundUbuntu() 41 else: 42 raise errors.DockerNotFoundGeneric() 43 elif call_silently(['which', 'boot2docker']) == 0: 44 raise errors.ConnectionErrorBoot2Docker() 45 else: 46 raise errors.ConnectionErrorGeneric(self.get_client().base_url) 47 48 def perform_command(self, options, handler, command_options): 49 if options['COMMAND'] in ('help', 'version'): 50 # Skip looking up the compose file. 51 handler(None, command_options) 52 return 53 54 if 'FIG_FILE' in os.environ: 55 log.warn('The FIG_FILE environment variable is deprecated.') 56 log.warn('Please use COMPOSE_FILE instead.') 57 58 explicit_config_path = options.get('--file') or os.environ.get('COMPOSE_FILE') or os.environ.get('FIG_FILE') 59 project = self.get_project( 60 explicit_config_path, 61 project_name=options.get('--project-name'), 62 verbose=options.get('--verbose')) 63 64 handler(project, command_options) 65 66 def get_client(self, verbose=False): 67 client = docker_client() 68 if verbose: 69 version_info = six.iteritems(client.version()) 70 log.info("Compose version %s", __version__) 71 log.info("Docker base_url: %s", client.base_url) 72 log.info("Docker version: %s", 73 ", ".join("%s=%s" % item for item in version_info)) 74 return verbose_proxy.VerboseProxy('docker', client) 75 return client 76 77 def get_project(self, config_path=None, project_name=None, verbose=False): 78 config_details = config.find(self.base_dir, config_path) 79 80 try: 81 return Project.from_dicts( 82 self.get_project_name(config_details.working_dir, project_name), 83 config.load(config_details), 84 self.get_client(verbose=verbose)) 85 except ConfigError as e: 86 raise errors.UserError(six.text_type(e)) 87 88 def get_project_name(self, working_dir, project_name=None): 89 def normalize_name(name): 90 return re.sub(r'[^a-z0-9]', '', name.lower()) 91 92 if 'FIG_PROJECT_NAME' in os.environ: 93 log.warn('The FIG_PROJECT_NAME environment variable is deprecated.') 94 log.warn('Please use COMPOSE_PROJECT_NAME instead.') 95 96 project_name = ( 97 project_name or 98 os.environ.get('COMPOSE_PROJECT_NAME') or 99 os.environ.get('FIG_PROJECT_NAME')) 100 if project_name is not None: 101 return normalize_name(project_name) 102 103 project = os.path.basename(os.path.abspath(working_dir)) 104 if project: 105 
return normalize_name(project) 106 107 return 'default' 108 [end of compose/cli/command.py] [start of compose/cli/main.py] 1 from __future__ import print_function 2 from __future__ import unicode_literals 3 4 import logging 5 import re 6 import signal 7 import sys 8 from inspect import getdoc 9 from operator import attrgetter 10 11 import dockerpty 12 from docker.errors import APIError 13 from requests.exceptions import ReadTimeout 14 15 from .. import __version__ 16 from .. import legacy 17 from ..config import parse_environment 18 from ..const import DEFAULT_TIMEOUT 19 from ..const import HTTP_TIMEOUT 20 from ..progress_stream import StreamOutputError 21 from ..project import ConfigurationError 22 from ..project import NoSuchService 23 from ..service import BuildError 24 from ..service import ConvergenceStrategy 25 from ..service import NeedsBuildError 26 from .command import Command 27 from .docopt_command import NoSuchCommand 28 from .errors import UserError 29 from .formatter import Formatter 30 from .log_printer import LogPrinter 31 from .utils import get_version_info 32 from .utils import yesno 33 34 log = logging.getLogger(__name__) 35 console_handler = logging.StreamHandler(sys.stderr) 36 37 INSECURE_SSL_WARNING = """ 38 Warning: --allow-insecure-ssl is deprecated and has no effect. 39 It will be removed in a future version of Compose. 40 """ 41 42 43 def main(): 44 setup_logging() 45 try: 46 command = TopLevelCommand() 47 command.sys_dispatch() 48 except KeyboardInterrupt: 49 log.error("\nAborting.") 50 sys.exit(1) 51 except (UserError, NoSuchService, ConfigurationError, legacy.LegacyError) as e: 52 log.error(e.msg) 53 sys.exit(1) 54 except NoSuchCommand as e: 55 log.error("No such command: %s", e.command) 56 log.error("") 57 log.error("\n".join(parse_doc_section("commands:", getdoc(e.supercommand)))) 58 sys.exit(1) 59 except APIError as e: 60 log.error(e.explanation) 61 sys.exit(1) 62 except BuildError as e: 63 log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason)) 64 sys.exit(1) 65 except StreamOutputError as e: 66 log.error(e) 67 sys.exit(1) 68 except NeedsBuildError as e: 69 log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name) 70 sys.exit(1) 71 except ReadTimeout as e: 72 log.error( 73 "An HTTP request took too long to complete. Retry with --verbose to obtain debug information.\n" 74 "If you encounter this issue regularly because of slow network conditions, consider setting " 75 "COMPOSE_HTTP_TIMEOUT to a higher value (current value: %s)." % HTTP_TIMEOUT 76 ) 77 78 79 def setup_logging(): 80 root_logger = logging.getLogger() 81 root_logger.addHandler(console_handler) 82 root_logger.setLevel(logging.DEBUG) 83 84 # Disable requests logging 85 logging.getLogger("requests").propagate = False 86 87 88 # stolen from docopt master 89 def parse_doc_section(name, source): 90 pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)', 91 re.IGNORECASE | re.MULTILINE) 92 return [s.strip() for s in pattern.findall(source)] 93 94 95 class TopLevelCommand(Command): 96 """Define and run multi-container applications with Docker. 97 98 Usage: 99 docker-compose [options] [COMMAND] [ARGS...] 
100 docker-compose -h|--help 101 102 Options: 103 -f, --file FILE Specify an alternate compose file (default: docker-compose.yml) 104 -p, --project-name NAME Specify an alternate project name (default: directory name) 105 --verbose Show more output 106 -v, --version Print version and exit 107 108 Commands: 109 build Build or rebuild services 110 help Get help on a command 111 kill Kill containers 112 logs View output from containers 113 pause Pause services 114 port Print the public port for a port binding 115 ps List containers 116 pull Pulls service images 117 restart Restart services 118 rm Remove stopped containers 119 run Run a one-off command 120 scale Set number of containers for a service 121 start Start services 122 stop Stop services 123 unpause Unpause services 124 up Create and start containers 125 migrate-to-labels Recreate containers to add labels 126 version Show the Docker-Compose version information 127 128 """ 129 def docopt_options(self): 130 options = super(TopLevelCommand, self).docopt_options() 131 options['version'] = get_version_info('compose') 132 return options 133 134 def perform_command(self, options, *args, **kwargs): 135 if options.get('--verbose'): 136 console_handler.setFormatter(logging.Formatter('%(name)s.%(funcName)s: %(message)s')) 137 console_handler.setLevel(logging.DEBUG) 138 else: 139 console_handler.setFormatter(logging.Formatter()) 140 console_handler.setLevel(logging.INFO) 141 142 return super(TopLevelCommand, self).perform_command(options, *args, **kwargs) 143 144 def build(self, project, options): 145 """ 146 Build or rebuild services. 147 148 Services are built once and then tagged as `project_service`, 149 e.g. `composetest_db`. If you change a service's `Dockerfile` or the 150 contents of its build directory, you can run `docker-compose build` to rebuild it. 151 152 Usage: build [options] [SERVICE...] 153 154 Options: 155 --no-cache Do not use cache when building the image. 156 --pull Always attempt to pull a newer version of the image. 157 """ 158 no_cache = bool(options.get('--no-cache', False)) 159 pull = bool(options.get('--pull', False)) 160 project.build(service_names=options['SERVICE'], no_cache=no_cache, pull=pull) 161 162 def help(self, project, options): 163 """ 164 Get help on a command. 165 166 Usage: help COMMAND 167 """ 168 handler = self.get_handler(options['COMMAND']) 169 raise SystemExit(getdoc(handler)) 170 171 def kill(self, project, options): 172 """ 173 Force stop service containers. 174 175 Usage: kill [options] [SERVICE...] 176 177 Options: 178 -s SIGNAL SIGNAL to send to the container. 179 Default signal is SIGKILL. 180 """ 181 signal = options.get('-s', 'SIGKILL') 182 183 project.kill(service_names=options['SERVICE'], signal=signal) 184 185 def logs(self, project, options): 186 """ 187 View output from containers. 188 189 Usage: logs [options] [SERVICE...] 190 191 Options: 192 --no-color Produce monochrome output. 193 """ 194 containers = project.containers(service_names=options['SERVICE'], stopped=True) 195 196 monochrome = options['--no-color'] 197 print("Attaching to", list_containers(containers)) 198 LogPrinter(containers, monochrome=monochrome).run() 199 200 def pause(self, project, options): 201 """ 202 Pause services. 203 204 Usage: pause [SERVICE...] 205 """ 206 project.pause(service_names=options['SERVICE']) 207 208 def port(self, project, options): 209 """ 210 Print the public port for a port binding. 
211 212 Usage: port [options] SERVICE PRIVATE_PORT 213 214 Options: 215 --protocol=proto tcp or udp [default: tcp] 216 --index=index index of the container if there are multiple 217 instances of a service [default: 1] 218 """ 219 index = int(options.get('--index')) 220 service = project.get_service(options['SERVICE']) 221 try: 222 container = service.get_container(number=index) 223 except ValueError as e: 224 raise UserError(str(e)) 225 print(container.get_local_port( 226 options['PRIVATE_PORT'], 227 protocol=options.get('--protocol') or 'tcp') or '') 228 229 def ps(self, project, options): 230 """ 231 List containers. 232 233 Usage: ps [options] [SERVICE...] 234 235 Options: 236 -q Only display IDs 237 """ 238 containers = sorted( 239 project.containers(service_names=options['SERVICE'], stopped=True) + 240 project.containers(service_names=options['SERVICE'], one_off=True), 241 key=attrgetter('name')) 242 243 if options['-q']: 244 for container in containers: 245 print(container.id) 246 else: 247 headers = [ 248 'Name', 249 'Command', 250 'State', 251 'Ports', 252 ] 253 rows = [] 254 for container in containers: 255 command = container.human_readable_command 256 if len(command) > 30: 257 command = '%s ...' % command[:26] 258 rows.append([ 259 container.name, 260 command, 261 container.human_readable_state, 262 container.human_readable_ports, 263 ]) 264 print(Formatter().table(headers, rows)) 265 266 def pull(self, project, options): 267 """ 268 Pulls images for services. 269 270 Usage: pull [options] [SERVICE...] 271 272 Options: 273 --allow-insecure-ssl Deprecated - no effect. 274 """ 275 if options['--allow-insecure-ssl']: 276 log.warn(INSECURE_SSL_WARNING) 277 278 project.pull( 279 service_names=options['SERVICE'], 280 ) 281 282 def rm(self, project, options): 283 """ 284 Remove stopped service containers. 285 286 Usage: rm [options] [SERVICE...] 287 288 Options: 289 -f, --force Don't ask to confirm removal 290 -v Remove volumes associated with containers 291 """ 292 all_containers = project.containers(service_names=options['SERVICE'], stopped=True) 293 stopped_containers = [c for c in all_containers if not c.is_running] 294 295 if len(stopped_containers) > 0: 296 print("Going to remove", list_containers(stopped_containers)) 297 if options.get('--force') \ 298 or yesno("Are you sure? [yN] ", default=False): 299 project.remove_stopped( 300 service_names=options['SERVICE'], 301 v=options.get('-v', False) 302 ) 303 else: 304 print("No stopped containers") 305 306 def run(self, project, options): 307 """ 308 Run a one-off command on a service. 309 310 For example: 311 312 $ docker-compose run web python manage.py shell 313 314 By default, linked services will be started, unless they are already 315 running. If you do not want to start linked services, use 316 `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`. 317 318 Usage: run [options] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...] 319 320 Options: 321 --allow-insecure-ssl Deprecated - no effect. 322 -d Detached mode: Run container in the background, print 323 new container name. 324 --name NAME Assign a name to the container 325 --entrypoint CMD Override the entrypoint of the image. 326 -e KEY=VAL Set an environment variable (can be used multiple times) 327 -u, --user="" Run as specified username or uid 328 --no-deps Don't start linked services. 329 --rm Remove container after run. Ignored in detached mode. 
330 -p, --publish=[] Publish a container's port(s) to the host 331 --service-ports Run command with the service's ports enabled and mapped 332 to the host. 333 -T Disable pseudo-tty allocation. By default `docker-compose run` 334 allocates a TTY. 335 """ 336 service = project.get_service(options['SERVICE']) 337 338 if options['--allow-insecure-ssl']: 339 log.warn(INSECURE_SSL_WARNING) 340 341 if not options['--no-deps']: 342 deps = service.get_linked_service_names() 343 344 if len(deps) > 0: 345 project.up( 346 service_names=deps, 347 start_deps=True, 348 strategy=ConvergenceStrategy.never, 349 ) 350 351 tty = True 352 if options['-d'] or options['-T'] or not sys.stdin.isatty(): 353 tty = False 354 355 if options['COMMAND']: 356 command = [options['COMMAND']] + options['ARGS'] 357 else: 358 command = service.options.get('command') 359 360 container_options = { 361 'command': command, 362 'tty': tty, 363 'stdin_open': not options['-d'], 364 'detach': options['-d'], 365 } 366 367 if options['-e']: 368 container_options['environment'] = parse_environment(options['-e']) 369 370 if options['--entrypoint']: 371 container_options['entrypoint'] = options.get('--entrypoint') 372 373 if options['--rm']: 374 container_options['restart'] = None 375 376 if options['--user']: 377 container_options['user'] = options.get('--user') 378 379 if not options['--service-ports']: 380 container_options['ports'] = [] 381 382 if options['--publish']: 383 container_options['ports'] = options.get('--publish') 384 385 if options['--publish'] and options['--service-ports']: 386 raise UserError( 387 'Service port mapping and manual port mapping ' 388 'can not be used togather' 389 ) 390 391 if options['--name']: 392 container_options['name'] = options['--name'] 393 394 try: 395 container = service.create_container( 396 quiet=True, 397 one_off=True, 398 **container_options 399 ) 400 except APIError as e: 401 legacy.check_for_legacy_containers( 402 project.client, 403 project.name, 404 [service.name], 405 allow_one_off=False, 406 ) 407 408 raise e 409 410 if options['-d']: 411 service.start_container(container) 412 print(container.name) 413 else: 414 dockerpty.start(project.client, container.id, interactive=not options['-T']) 415 exit_code = container.wait() 416 if options['--rm']: 417 project.client.remove_container(container.id) 418 sys.exit(exit_code) 419 420 def scale(self, project, options): 421 """ 422 Set number of containers to run for a service. 423 424 Numbers are specified in the form `service=num` as arguments. 425 For example: 426 427 $ docker-compose scale web=2 worker=3 428 429 Usage: scale [options] [SERVICE=NUM...] 430 431 Options: 432 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. 433 (default: 10) 434 """ 435 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) 436 437 for s in options['SERVICE=NUM']: 438 if '=' not in s: 439 raise UserError('Arguments to scale should be in the form service=num') 440 service_name, num = s.split('=', 1) 441 try: 442 num = int(num) 443 except ValueError: 444 raise UserError('Number of containers for service "%s" is not a ' 445 'number' % service_name) 446 project.get_service(service_name).scale(num, timeout=timeout) 447 448 def start(self, project, options): 449 """ 450 Start existing containers. 451 452 Usage: start [SERVICE...] 453 """ 454 project.start(service_names=options['SERVICE']) 455 456 def stop(self, project, options): 457 """ 458 Stop running containers without removing them. 459 460 They can be started again with `docker-compose start`. 
461 462 Usage: stop [options] [SERVICE...] 463 464 Options: 465 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. 466 (default: 10) 467 """ 468 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) 469 project.stop(service_names=options['SERVICE'], timeout=timeout) 470 471 def restart(self, project, options): 472 """ 473 Restart running containers. 474 475 Usage: restart [options] [SERVICE...] 476 477 Options: 478 -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. 479 (default: 10) 480 """ 481 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) 482 project.restart(service_names=options['SERVICE'], timeout=timeout) 483 484 def unpause(self, project, options): 485 """ 486 Unpause services. 487 488 Usage: unpause [SERVICE...] 489 """ 490 project.unpause(service_names=options['SERVICE']) 491 492 def up(self, project, options): 493 """ 494 Builds, (re)creates, starts, and attaches to containers for a service. 495 496 Unless they are already running, this command also starts any linked services. 497 498 The `docker-compose up` command aggregates the output of each container. When 499 the command exits, all containers are stopped. Running `docker-compose up -d` 500 starts the containers in the background and leaves them running. 501 502 If there are existing containers for a service, and the service's configuration 503 or image was changed after the container's creation, `docker-compose up` picks 504 up the changes by stopping and recreating the containers (preserving mounted 505 volumes). To prevent Compose from picking up changes, use the `--no-recreate` 506 flag. 507 508 If you want to force Compose to stop and recreate all containers, use the 509 `--force-recreate` flag. 510 511 Usage: up [options] [SERVICE...] 512 513 Options: 514 --allow-insecure-ssl Deprecated - no effect. 515 -d Detached mode: Run containers in the background, 516 print new container names. 517 --no-color Produce monochrome output. 518 --no-deps Don't start linked services. 519 --force-recreate Recreate containers even if their configuration and 520 image haven't changed. Incompatible with --no-recreate. 521 --no-recreate If containers already exist, don't recreate them. 522 Incompatible with --force-recreate. 523 --no-build Don't build an image, even if it's missing 524 -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown 525 when attached or when containers are already 526 running. (default: 10) 527 """ 528 if options['--allow-insecure-ssl']: 529 log.warn(INSECURE_SSL_WARNING) 530 531 monochrome = options['--no-color'] 532 start_deps = not options['--no-deps'] 533 service_names = options['SERVICE'] 534 timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) 535 536 to_attach = project.up( 537 service_names=service_names, 538 start_deps=start_deps, 539 strategy=convergence_strategy_from_opts(options), 540 do_build=not options['--no-build'], 541 timeout=timeout 542 ) 543 544 if not options['-d']: 545 log_printer = build_log_printer(to_attach, service_names, monochrome) 546 attach_to_logs(project, log_printer, service_names, timeout) 547 548 def migrate_to_labels(self, project, _options): 549 """ 550 Recreate containers to add labels 551 552 If you're coming from Compose 1.2 or earlier, you'll need to remove or 553 migrate your existing containers after upgrading Compose. This is 554 because, as of version 1.3, Compose uses Docker labels to keep track 555 of containers, and so they need to be recreated with labels added. 
556 557 If Compose detects containers that were created without labels, it 558 will refuse to run so that you don't end up with two sets of them. If 559 you want to keep using your existing containers (for example, because 560 they have data volumes you want to preserve) you can migrate them with 561 the following command: 562 563 docker-compose migrate-to-labels 564 565 Alternatively, if you're not worried about keeping them, you can 566 remove them - Compose will just create new ones. 567 568 docker rm -f myapp_web_1 myapp_db_1 ... 569 570 Usage: migrate-to-labels 571 """ 572 legacy.migrate_project_to_labels(project) 573 574 def version(self, project, options): 575 """ 576 Show version informations 577 578 Usage: version [--short] 579 580 Options: 581 --short Shows only Compose's version number. 582 """ 583 if options['--short']: 584 print(__version__) 585 else: 586 print(get_version_info('full')) 587 588 589 def convergence_strategy_from_opts(options): 590 no_recreate = options['--no-recreate'] 591 force_recreate = options['--force-recreate'] 592 if force_recreate and no_recreate: 593 raise UserError("--force-recreate and --no-recreate cannot be combined.") 594 595 if force_recreate: 596 return ConvergenceStrategy.always 597 598 if no_recreate: 599 return ConvergenceStrategy.never 600 601 return ConvergenceStrategy.changed 602 603 604 def build_log_printer(containers, service_names, monochrome): 605 if service_names: 606 containers = [c for c in containers if c.service in service_names] 607 return LogPrinter(containers, monochrome=monochrome) 608 609 610 def attach_to_logs(project, log_printer, service_names, timeout): 611 print("Attaching to", list_containers(log_printer.containers)) 612 try: 613 log_printer.run() 614 finally: 615 def handler(signal, frame): 616 project.kill(service_names=service_names) 617 sys.exit(0) 618 signal.signal(signal.SIGINT, handler) 619 620 print("Gracefully stopping... (press Ctrl+C again to force)") 621 project.stop(service_names=service_names, timeout=timeout) 622 623 624 def list_containers(containers): 625 return ", ".join(c.name for c in containers) 626 [end of compose/cli/main.py] [start of compose/cli/multiplexer.py] 1 from __future__ import absolute_import 2 3 from threading import Thread 4 5 from six.moves import _thread as thread 6 7 try: 8 from Queue import Queue, Empty 9 except ImportError: 10 from queue import Queue, Empty # Python 3.x 11 12 13 STOP = object() 14 15 16 class Multiplexer(object): 17 """ 18 Create a single iterator from several iterators by running all of them in 19 parallel and yielding results as they come in. 
20 """ 21 22 def __init__(self, iterators): 23 self.iterators = iterators 24 self._num_running = len(iterators) 25 self.queue = Queue() 26 27 def loop(self): 28 self._init_readers() 29 30 while self._num_running > 0: 31 try: 32 item, exception = self.queue.get(timeout=0.1) 33 34 if exception: 35 raise exception 36 37 if item is STOP: 38 self._num_running -= 1 39 else: 40 yield item 41 except Empty: 42 pass 43 # See https://github.com/docker/compose/issues/189 44 except thread.error: 45 raise KeyboardInterrupt() 46 47 def _init_readers(self): 48 for iterator in self.iterators: 49 t = Thread(target=_enqueue_output, args=(iterator, self.queue)) 50 t.daemon = True 51 t.start() 52 53 54 def _enqueue_output(iterator, queue): 55 try: 56 for item in iterator: 57 queue.put((item, None)) 58 queue.put((STOP, None)) 59 except Exception as e: 60 queue.put((None, e)) 61 [end of compose/cli/multiplexer.py] [start of compose/config/errors.py] 1 class ConfigurationError(Exception): 2 def __init__(self, msg): 3 self.msg = msg 4 5 def __str__(self): 6 return self.msg 7 8 9 class CircularReference(ConfigurationError): 10 def __init__(self, trail): 11 self.trail = trail 12 13 @property 14 def msg(self): 15 lines = [ 16 "{} in {}".format(service_name, filename) 17 for (filename, service_name) in self.trail 18 ] 19 return "Circular reference:\n {}".format("\n extends ".join(lines)) 20 21 22 class ComposeFileNotFound(ConfigurationError): 23 def __init__(self, supported_filenames): 24 super(ComposeFileNotFound, self).__init__(""" 25 Can't find a suitable configuration file in this directory or any parent. Are you in the right directory? 26 27 Supported filenames: %s 28 """ % ", ".join(supported_filenames)) 29 [end of compose/config/errors.py] [start of compose/config/interpolation.py] 1 import logging 2 import os 3 from string import Template 4 5 import six 6 7 from .errors import ConfigurationError 8 log = logging.getLogger(__name__) 9 10 11 def interpolate_environment_variables(config): 12 mapping = BlankDefaultDict(os.environ) 13 14 return dict( 15 (service_name, process_service(service_name, service_dict, mapping)) 16 for (service_name, service_dict) in config.items() 17 ) 18 19 20 def process_service(service_name, service_dict, mapping): 21 if not isinstance(service_dict, dict): 22 raise ConfigurationError( 23 'Service "%s" doesn\'t have any configuration options. ' 24 'All top level keys in your docker-compose.yml must map ' 25 'to a dictionary of configuration options.' 
% service_name 26 ) 27 28 return dict( 29 (key, interpolate_value(service_name, key, val, mapping)) 30 for (key, val) in service_dict.items() 31 ) 32 33 34 def interpolate_value(service_name, config_key, value, mapping): 35 try: 36 return recursive_interpolate(value, mapping) 37 except InvalidInterpolation as e: 38 raise ConfigurationError( 39 'Invalid interpolation format for "{config_key}" option ' 40 'in service "{service_name}": "{string}"' 41 .format( 42 config_key=config_key, 43 service_name=service_name, 44 string=e.string, 45 ) 46 ) 47 48 49 def recursive_interpolate(obj, mapping): 50 if isinstance(obj, six.string_types): 51 return interpolate(obj, mapping) 52 elif isinstance(obj, dict): 53 return dict( 54 (key, recursive_interpolate(val, mapping)) 55 for (key, val) in obj.items() 56 ) 57 elif isinstance(obj, list): 58 return [recursive_interpolate(val, mapping) for val in obj] 59 else: 60 return obj 61 62 63 def interpolate(string, mapping): 64 try: 65 return Template(string).substitute(mapping) 66 except ValueError: 67 raise InvalidInterpolation(string) 68 69 70 class BlankDefaultDict(dict): 71 def __init__(self, *args, **kwargs): 72 super(BlankDefaultDict, self).__init__(*args, **kwargs) 73 self.missing_keys = [] 74 75 def __getitem__(self, key): 76 try: 77 return super(BlankDefaultDict, self).__getitem__(key) 78 except KeyError: 79 if key not in self.missing_keys: 80 log.warn( 81 "The {} variable is not set. Substituting a blank string." 82 .format(key) 83 ) 84 self.missing_keys.append(key) 85 86 return "" 87 88 89 class InvalidInterpolation(Exception): 90 def __init__(self, string): 91 self.string = string 92 [end of compose/config/interpolation.py] [start of compose/config/validation.py] 1 import json 2 import logging 3 import os 4 import sys 5 from functools import wraps 6 7 from docker.utils.ports import split_port 8 from jsonschema import Draft4Validator 9 from jsonschema import FormatChecker 10 from jsonschema import RefResolver 11 from jsonschema import ValidationError 12 13 from .errors import ConfigurationError 14 15 16 log = logging.getLogger(__name__) 17 18 19 DOCKER_CONFIG_HINTS = { 20 'cpu_share': 'cpu_shares', 21 'add_host': 'extra_hosts', 22 'hosts': 'extra_hosts', 23 'extra_host': 'extra_hosts', 24 'device': 'devices', 25 'link': 'links', 26 'memory_swap': 'memswap_limit', 27 'port': 'ports', 28 'privilege': 'privileged', 29 'priviliged': 'privileged', 30 'privilige': 'privileged', 31 'volume': 'volumes', 32 'workdir': 'working_dir', 33 } 34 35 36 VALID_NAME_CHARS = '[a-zA-Z0-9\._\-]' 37 38 39 @FormatChecker.cls_checks( 40 format="ports", 41 raises=ValidationError( 42 "Invalid port formatting, it should be " 43 "'[[remote_ip:]remote_port:]port[/protocol]'")) 44 def format_ports(instance): 45 try: 46 split_port(instance) 47 except ValueError: 48 return False 49 return True 50 51 52 @FormatChecker.cls_checks(format="environment") 53 def format_boolean_in_environment(instance): 54 """ 55 Check if there is a boolean in the environment and display a warning. 56 Always return True here so the validation won't raise an error. 57 """ 58 if isinstance(instance, bool): 59 log.warn( 60 "Warning: There is a boolean value, {0} in the 'environment' key.\n" 61 "Environment variables can only be strings.\nPlease add quotes to any boolean values to make them string " 62 "(eg, '{0}').\nThis warning will become an error in a future release. 
\r\n".format(instance) 63 ) 64 return True 65 66 67 def validate_service_names(func): 68 @wraps(func) 69 def func_wrapper(config): 70 for service_name in config.keys(): 71 if type(service_name) is int: 72 raise ConfigurationError( 73 "Service name: {} needs to be a string, eg '{}'".format(service_name, service_name) 74 ) 75 return func(config) 76 return func_wrapper 77 78 79 def validate_top_level_object(func): 80 @wraps(func) 81 def func_wrapper(config): 82 if not isinstance(config, dict): 83 raise ConfigurationError( 84 "Top level object needs to be a dictionary. Check your .yml file that you have defined a service at the top level." 85 ) 86 return func(config) 87 return func_wrapper 88 89 90 def validate_extends_file_path(service_name, extends_options, filename): 91 """ 92 The service to be extended must either be defined in the config key 'file', 93 or within 'filename'. 94 """ 95 error_prefix = "Invalid 'extends' configuration for %s:" % service_name 96 97 if 'file' not in extends_options and filename is None: 98 raise ConfigurationError( 99 "%s you need to specify a 'file', e.g. 'file: something.yml'" % error_prefix 100 ) 101 102 103 def validate_extended_service_exists(extended_service_name, full_extended_config, extended_config_path): 104 if extended_service_name not in full_extended_config: 105 msg = ( 106 "Cannot extend service '%s' in %s: Service not found" 107 ) % (extended_service_name, extended_config_path) 108 raise ConfigurationError(msg) 109 110 111 def get_unsupported_config_msg(service_name, error_key): 112 msg = "Unsupported config option for '{}' service: '{}'".format(service_name, error_key) 113 if error_key in DOCKER_CONFIG_HINTS: 114 msg += " (did you mean '{}'?)".format(DOCKER_CONFIG_HINTS[error_key]) 115 return msg 116 117 118 def anglicize_validator(validator): 119 if validator in ["array", "object"]: 120 return 'an ' + validator 121 return 'a ' + validator 122 123 124 def process_errors(errors, service_name=None): 125 """ 126 jsonschema gives us an error tree full of information to explain what has 127 gone wrong. Process each error and pull out relevant information and re-write 128 helpful error messages that are relevant. 129 """ 130 def _parse_key_from_error_msg(error): 131 return error.message.split("'")[1] 132 133 def _clean_error_message(message): 134 return message.replace("u'", "'") 135 136 def _parse_valid_types_from_validator(validator): 137 """ 138 A validator value can be either an array of valid types or a string of 139 a valid type. Parse the valid types and prefix with the correct article. 140 """ 141 if isinstance(validator, list): 142 if len(validator) >= 2: 143 first_type = anglicize_validator(validator[0]) 144 last_type = anglicize_validator(validator[-1]) 145 types_from_validator = "{}{}".format(first_type, ", ".join(validator[1:-1])) 146 147 msg = "{} or {}".format( 148 types_from_validator, 149 last_type 150 ) 151 else: 152 msg = "{}".format(anglicize_validator(validator[0])) 153 else: 154 msg = "{}".format(anglicize_validator(validator)) 155 156 return msg 157 158 def _parse_oneof_validator(error): 159 """ 160 oneOf has multiple schemas, so we need to reason about which schema, sub 161 schema or constraint the validation is failing on. 162 Inspecting the context value of a ValidationError gives us information about 163 which sub schema failed and which kind of error it is. 
164 """ 165 constraint = [context for context in error.context if len(context.path) > 0] 166 if constraint: 167 valid_types = _parse_valid_types_from_validator(constraint[0].validator_value) 168 msg = "contains {}, which is an invalid type, it should be {}".format( 169 constraint[0].instance, 170 valid_types 171 ) 172 return msg 173 174 uniqueness = [context for context in error.context if context.validator == 'uniqueItems'] 175 if uniqueness: 176 msg = "contains non unique items, please remove duplicates from {}".format( 177 uniqueness[0].instance 178 ) 179 return msg 180 181 types = [context.validator_value for context in error.context if context.validator == 'type'] 182 valid_types = _parse_valid_types_from_validator(types) 183 184 msg = "contains an invalid type, it should be {}".format(valid_types) 185 186 return msg 187 188 root_msgs = [] 189 invalid_keys = [] 190 required = [] 191 type_errors = [] 192 other_errors = [] 193 194 for error in errors: 195 # handle root level errors 196 if len(error.path) == 0 and not error.instance.get('name'): 197 if error.validator == 'type': 198 msg = "Top level object needs to be a dictionary. Check your .yml file that you have defined a service at the top level." 199 root_msgs.append(msg) 200 elif error.validator == 'additionalProperties': 201 invalid_service_name = _parse_key_from_error_msg(error) 202 msg = "Invalid service name '{}' - only {} characters are allowed".format(invalid_service_name, VALID_NAME_CHARS) 203 root_msgs.append(msg) 204 else: 205 root_msgs.append(_clean_error_message(error.message)) 206 207 else: 208 if not service_name: 209 # field_schema errors will have service name on the path 210 service_name = error.path[0] 211 error.path.popleft() 212 else: 213 # service_schema errors have the service name passed in, as that 214 # is not available on error.path or necessarily error.instance 215 service_name = service_name 216 217 if error.validator == 'additionalProperties': 218 invalid_config_key = _parse_key_from_error_msg(error) 219 invalid_keys.append(get_unsupported_config_msg(service_name, invalid_config_key)) 220 elif error.validator == 'anyOf': 221 if 'image' in error.instance and 'build' in error.instance: 222 required.append( 223 "Service '{}' has both an image and build path specified. " 224 "A service can either be built to image or use an existing " 225 "image, not both.".format(service_name)) 226 elif 'image' not in error.instance and 'build' not in error.instance: 227 required.append( 228 "Service '{}' has neither an image nor a build path " 229 "specified. Exactly one must be provided.".format(service_name)) 230 elif 'image' in error.instance and 'dockerfile' in error.instance: 231 required.append( 232 "Service '{}' has both an image and alternate Dockerfile. 
" 233 "A service can either be built to image or use an existing " 234 "image, not both.".format(service_name)) 235 else: 236 required.append(_clean_error_message(error.message)) 237 elif error.validator == 'oneOf': 238 config_key = error.path[0] 239 msg = _parse_oneof_validator(error) 240 241 type_errors.append("Service '{}' configuration key '{}' {}".format( 242 service_name, config_key, msg) 243 ) 244 elif error.validator == 'type': 245 msg = _parse_valid_types_from_validator(error.validator_value) 246 247 if len(error.path) > 0: 248 config_key = " ".join(["'%s'" % k for k in error.path]) 249 type_errors.append( 250 "Service '{}' configuration key {} contains an invalid " 251 "type, it should be {}".format( 252 service_name, 253 config_key, 254 msg)) 255 else: 256 root_msgs.append( 257 "Service '{}' doesn\'t have any configuration options. " 258 "All top level keys in your docker-compose.yml must map " 259 "to a dictionary of configuration options.'".format(service_name)) 260 elif error.validator == 'required': 261 config_key = error.path[0] 262 required.append( 263 "Service '{}' option '{}' is invalid, {}".format( 264 service_name, 265 config_key, 266 _clean_error_message(error.message))) 267 elif error.validator == 'dependencies': 268 dependency_key = list(error.validator_value.keys())[0] 269 required_keys = ",".join(error.validator_value[dependency_key]) 270 required.append("Invalid '{}' configuration for '{}' service: when defining '{}' you must set '{}' as well".format( 271 dependency_key, service_name, dependency_key, required_keys)) 272 else: 273 config_key = " ".join(["'%s'" % k for k in error.path]) 274 err_msg = "Service '{}' configuration key {} value {}".format(service_name, config_key, error.message) 275 other_errors.append(err_msg) 276 277 return "\n".join(root_msgs + invalid_keys + required + type_errors + other_errors) 278 279 280 def validate_against_fields_schema(config): 281 schema_filename = "fields_schema.json" 282 format_checkers = ["ports", "environment"] 283 return _validate_against_schema(config, schema_filename, format_checkers) 284 285 286 def validate_against_service_schema(config, service_name): 287 schema_filename = "service_schema.json" 288 format_checkers = ["ports"] 289 return _validate_against_schema(config, schema_filename, format_checkers, service_name) 290 291 292 def _validate_against_schema(config, schema_filename, format_checker=[], service_name=None): 293 config_source_dir = os.path.dirname(os.path.abspath(__file__)) 294 295 if sys.platform == "win32": 296 file_pre_fix = "///" 297 config_source_dir = config_source_dir.replace('\\', '/') 298 else: 299 file_pre_fix = "//" 300 301 resolver_full_path = "file:{}{}/".format(file_pre_fix, config_source_dir) 302 schema_file = os.path.join(config_source_dir, schema_filename) 303 304 with open(schema_file, "r") as schema_fh: 305 schema = json.load(schema_fh) 306 307 resolver = RefResolver(resolver_full_path, schema) 308 validation_output = Draft4Validator(schema, resolver=resolver, format_checker=FormatChecker(format_checker)) 309 310 errors = [error for error in sorted(validation_output.iter_errors(config), key=str)] 311 if errors: 312 error_msg = process_errors(errors, service_name) 313 raise ConfigurationError("Validation failed, reason(s):\n{}".format(error_msg)) 314 [end of compose/config/validation.py] [start of compose/legacy.py] 1 import logging 2 import re 3 4 from .const import LABEL_VERSION 5 from .container import Container 6 from .container import get_container_name 7 8 9 log = 
logging.getLogger(__name__) 10 11 12 # TODO: remove this section when migrate_project_to_labels is removed 13 NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$') 14 15 ERROR_MESSAGE_FORMAT = """ 16 Compose found the following containers without labels: 17 18 {names_list} 19 20 As of Compose 1.3.0, containers are identified with labels instead of naming 21 convention. If you want to continue using these containers, run: 22 23 $ docker-compose migrate-to-labels 24 25 Alternatively, remove them: 26 27 $ docker rm -f {rm_args} 28 """ 29 30 ONE_OFF_ADDENDUM_FORMAT = """ 31 You should also remove your one-off containers: 32 33 $ docker rm -f {rm_args} 34 """ 35 36 ONE_OFF_ERROR_MESSAGE_FORMAT = """ 37 Compose found the following containers without labels: 38 39 {names_list} 40 41 As of Compose 1.3.0, containers are identified with labels instead of naming convention. 42 43 Remove them before continuing: 44 45 $ docker rm -f {rm_args} 46 """ 47 48 49 def check_for_legacy_containers( 50 client, 51 project, 52 services, 53 allow_one_off=True): 54 """Check if there are containers named using the old naming convention 55 and warn the user that those containers may need to be migrated to 56 using labels, so that compose can find them. 57 """ 58 containers = get_legacy_containers(client, project, services, one_off=False) 59 60 if containers: 61 one_off_containers = get_legacy_containers(client, project, services, one_off=True) 62 63 raise LegacyContainersError( 64 [c.name for c in containers], 65 [c.name for c in one_off_containers], 66 ) 67 68 if not allow_one_off: 69 one_off_containers = get_legacy_containers(client, project, services, one_off=True) 70 71 if one_off_containers: 72 raise LegacyOneOffContainersError( 73 [c.name for c in one_off_containers], 74 ) 75 76 77 class LegacyError(Exception): 78 def __unicode__(self): 79 return self.msg 80 81 __str__ = __unicode__ 82 83 84 class LegacyContainersError(LegacyError): 85 def __init__(self, names, one_off_names): 86 self.names = names 87 self.one_off_names = one_off_names 88 89 self.msg = ERROR_MESSAGE_FORMAT.format( 90 names_list="\n".join(" {}".format(name) for name in names), 91 rm_args=" ".join(names), 92 ) 93 94 if one_off_names: 95 self.msg += ONE_OFF_ADDENDUM_FORMAT.format(rm_args=" ".join(one_off_names)) 96 97 98 class LegacyOneOffContainersError(LegacyError): 99 def __init__(self, one_off_names): 100 self.one_off_names = one_off_names 101 102 self.msg = ONE_OFF_ERROR_MESSAGE_FORMAT.format( 103 names_list="\n".join(" {}".format(name) for name in one_off_names), 104 rm_args=" ".join(one_off_names), 105 ) 106 107 108 def add_labels(project, container): 109 project_name, service_name, one_off, number = NAME_RE.match(container.name).groups() 110 if project_name != project.name or service_name not in project.service_names: 111 return 112 service = project.get_service(service_name) 113 service.recreate_container(container) 114 115 116 def migrate_project_to_labels(project): 117 log.info("Running migration to labels for project %s", project.name) 118 119 containers = get_legacy_containers( 120 project.client, 121 project.name, 122 project.service_names, 123 one_off=False, 124 ) 125 126 for container in containers: 127 add_labels(project, container) 128 129 130 def get_legacy_containers( 131 client, 132 project, 133 services, 134 one_off=False): 135 136 return list(_get_legacy_containers_iter( 137 client, 138 project, 139 services, 140 one_off=one_off, 141 )) 142 143 144 def _get_legacy_containers_iter( 145 client, 146 project, 147 services, 148 
one_off=False): 149 150 containers = client.containers(all=True) 151 152 for service in services: 153 for container in containers: 154 if LABEL_VERSION in (container.get('Labels') or {}): 155 continue 156 157 name = get_container_name(container) 158 if has_container(project, service, name, one_off=one_off): 159 yield Container.from_ps(client, container) 160 161 162 def has_container(project, service, name, one_off=False): 163 if not name or not is_valid_name(name, one_off): 164 return False 165 container_project, container_service, _container_number = parse_name(name) 166 return container_project == project and container_service == service 167 168 169 def is_valid_name(name, one_off=False): 170 match = NAME_RE.match(name) 171 if match is None: 172 return False 173 if one_off: 174 return match.group(3) == 'run_' 175 else: 176 return match.group(3) is None 177 178 179 def parse_name(name): 180 match = NAME_RE.match(name) 181 (project, service_name, _, suffix) = match.groups() 182 return (project, service_name, int(suffix)) 183 [end of compose/legacy.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
docker/compose
bbc8b74c17b1fb6c29dd4013d470d591c6a417a0
Extend an entire Compose file

It is a common pattern to define a base Compose file, and then define Compose files for different environments with a few small changes. It is currently possible to extend single services, but it is very verbose to include a large number of services and make a small change to one of them (for example, setting `RAILS_ENV=production`).

It should be possible to extend a Compose file with a complete set of services from another Compose file. All of those services will be copied into the Compose file, as if you were extending each of the services individually:

- If you don't define a service in the child file, it is copied as-is.
- If you do define a service, it should behave as if you've extended that single service.
- There is no way of undefining a service (yet)

This is an intentionally simple first step, and I am intentionally not defining a syntax so we can discuss.

Design questions:

- This is the first top-level configuration we have added to Compose. How should we do this?

Related issues / suggested designs: https://github.com/docker/compose/issues/318 #1380 [dcao-merge](https://github.com/dnephin/compose-addons#dcao-merge)

[(This is part of an initiative to define an app once in a way that can be used across dev, test and prod.)](https://github.com/docker/compose/issues/1784)
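To make the intended semantics above concrete, here is a minimal, hypothetical Python sketch (not taken from the Compose code base; `merge_compose_files` and the service dicts are invented for illustration). Services absent from the child file are copied as-is, and services that are redefined override the base definition; the per-key merge is deliberately simplified to `dict.update`, whereas the real feature would reuse the existing `extends` merging rules.

```python
# Illustration only: service-level semantics of extending a whole Compose file.
base = {
    "web": {"build": ".", "links": ["db"], "environment": {"RAILS_ENV": "development"}},
    "db": {"image": "postgres"},
}
override = {
    # Only `web` is redefined; `db` is not mentioned, so it is copied as-is.
    "web": {"environment": {"RAILS_ENV": "production"}},
}

def merge_compose_files(base, override):
    merged = {}
    for name in set(base) | set(override):
        service = dict(base.get(name, {}))
        service.update(override.get(name, {}))  # simplified per-service merge
        merged[name] = service
    return merged

print(merge_compose_files(base, override))
# e.g. {'db': {'image': 'postgres'},
#       'web': {'build': '.', 'links': ['db'], 'environment': {'RAILS_ENV': 'production'}}}
```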
+100

#1 BUT, I guess you imply this but it should be mentioned: it should keep the volume and link setup so you don't need to redefine that. #2 because of #1 it will have different semantics than extend. And personally I think it should allow composition, so that means "import" semantics. For example: import applications 1 and 2 and make them talk together by means of links/own-container.

@bfirsh Let's assume a dev has a Compose file `comp-dev.yml` and an op has a different one `comp-op.yml`; will it be possible to run Compose as: `docker-compose --extends op-comp.yml -f comp-dev.yml up`?

If one config depends on another, I would expect that dependency to be declared as part of the contents of the file, instead of a command line argument.

It doesn't, it belongs to the operator. This way, the operator only needs to have its own compose file for its own infrastructure. Thus, for 10000 developers with 10000 different compose files, the single operator could easily override all the options he intended on any developer's compose file as:

```
for file in *-dev.yml; do
    docker-compose --extends op-comp.yml -f $file up -d
done
```

After some discussion, @dnephin and I came up with the following:

1. As @aanm suggests, extending at the command-line is more flexible than doing it in the file, because you can re-use the same overrides with multiple bases. It also means we don't have to do any disruptive new design on the file format. If you could pass the `-f FILE` flag multiple times, that would allow you to apply arbitrarily many overrides:

    ```
    $ docker-compose -f docker-compose.yml -f docker-compose.overrides.yml up
    ```

2. However, it's really verbose. If we want the idiomatic setup to be a `docker-compose.yml` with just the base configuration, plus a dev-specific file with overrides, that's a lot of typing, and users are going to have a terrible time. So there should be a sensible default: if `docker-compose.overrides.yml` exists, we apply it as if the user had typed the full command above; if not, we behave as we currently do (i.e. as if they'd just typed `docker-compose -f docker-compose.yml up`).

3. In non-development environments, whatever spins up `docker-compose` should explicitly pass the set of files in:

    ```
    # don't apply any overrides
    $ docker-compose -f docker-compose.yml up

    # apply production overrides
    $ docker-compose -f docker-compose.yml -f docker-compose.production.yml up
    ```

This caters to the use case of wanting to configure an app for multiple environments: put the core stuff in `docker-compose.yml`, the development overrides in `docker-compose.overrides.yml`, and any other environment-specific overrides in other files (e.g. `docker-compose.production.yml`).

It also caters to the use case of wanting to distribute an app's code with sensible defaults, but allowing people to override it when running it locally (for which one solution has already been proposed in #1999):

- put the defaults in `docker-compose.yml`
- add `docker-compose.overrides.yml` to `.gitignore`
- users can create `docker-compose.overrides.yml` and override stuff locally if they want to

One limitation is that it can't currently serve _both_ use cases at once.
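The comments above amount to a simple file-selection rule; the following hypothetical Python sketch (`config_files_to_load` is an invented name, not Compose's API) restates it: explicit `-f` files win, otherwise `docker-compose.yml` is used plus the override file when it exists. Note that the patch below settles on the singular filename `docker-compose.override.yml` (see `DEFAULT_OVERRIDE_FILENAME`) rather than the `docker-compose.overrides.yml` used in the discussion.

```python
import os

def config_files_to_load(explicit_files, base_dir="."):
    # Files given with -f are used exactly as passed, in order.
    if explicit_files:
        return [os.path.join(base_dir, f) for f in explicit_files]

    # Otherwise fall back to docker-compose.yml ...
    files = [os.path.join(base_dir, "docker-compose.yml")]

    # ... plus the override file, but only if it exists.
    override = os.path.join(base_dir, "docker-compose.overrides.yml")
    if os.path.exists(override):
        files.append(override)
    return files

# With no -f flags and an override file present this yields, e.g.:
# ['./docker-compose.yml', './docker-compose.overrides.yml']
```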
2015-09-15T19:03:01Z
<patch> diff --git a/compose/cli/command.py b/compose/cli/command.py --- a/compose/cli/command.py +++ b/compose/cli/command.py @@ -51,57 +51,68 @@ def perform_command(self, options, handler, command_options): handler(None, command_options) return - if 'FIG_FILE' in os.environ: - log.warn('The FIG_FILE environment variable is deprecated.') - log.warn('Please use COMPOSE_FILE instead.') - - explicit_config_path = options.get('--file') or os.environ.get('COMPOSE_FILE') or os.environ.get('FIG_FILE') - project = self.get_project( - explicit_config_path, + project = get_project( + self.base_dir, + get_config_path(options.get('--file')), project_name=options.get('--project-name'), verbose=options.get('--verbose')) handler(project, command_options) - def get_client(self, verbose=False): - client = docker_client() - if verbose: - version_info = six.iteritems(client.version()) - log.info("Compose version %s", __version__) - log.info("Docker base_url: %s", client.base_url) - log.info("Docker version: %s", - ", ".join("%s=%s" % item for item in version_info)) - return verbose_proxy.VerboseProxy('docker', client) - return client - def get_project(self, config_path=None, project_name=None, verbose=False): - config_details = config.find(self.base_dir, config_path) +def get_config_path(file_option): + if file_option: + return file_option - try: - return Project.from_dicts( - self.get_project_name(config_details.working_dir, project_name), - config.load(config_details), - self.get_client(verbose=verbose)) - except ConfigError as e: - raise errors.UserError(six.text_type(e)) - - def get_project_name(self, working_dir, project_name=None): - def normalize_name(name): - return re.sub(r'[^a-z0-9]', '', name.lower()) - - if 'FIG_PROJECT_NAME' in os.environ: - log.warn('The FIG_PROJECT_NAME environment variable is deprecated.') - log.warn('Please use COMPOSE_PROJECT_NAME instead.') - - project_name = ( - project_name or - os.environ.get('COMPOSE_PROJECT_NAME') or - os.environ.get('FIG_PROJECT_NAME')) - if project_name is not None: - return normalize_name(project_name) - - project = os.path.basename(os.path.abspath(working_dir)) - if project: - return normalize_name(project) - - return 'default' + if 'FIG_FILE' in os.environ: + log.warn('The FIG_FILE environment variable is deprecated.') + log.warn('Please use COMPOSE_FILE instead.') + + config_file = os.environ.get('COMPOSE_FILE') or os.environ.get('FIG_FILE') + return [config_file] if config_file else None + + +def get_client(verbose=False): + client = docker_client() + if verbose: + version_info = six.iteritems(client.version()) + log.info("Compose version %s", __version__) + log.info("Docker base_url: %s", client.base_url) + log.info("Docker version: %s", + ", ".join("%s=%s" % item for item in version_info)) + return verbose_proxy.VerboseProxy('docker', client) + return client + + +def get_project(base_dir, config_path=None, project_name=None, verbose=False): + config_details = config.find(base_dir, config_path) + + try: + return Project.from_dicts( + get_project_name(config_details.working_dir, project_name), + config.load(config_details), + get_client(verbose=verbose)) + except ConfigError as e: + raise errors.UserError(six.text_type(e)) + + +def get_project_name(working_dir, project_name=None): + def normalize_name(name): + return re.sub(r'[^a-z0-9]', '', name.lower()) + + if 'FIG_PROJECT_NAME' in os.environ: + log.warn('The FIG_PROJECT_NAME environment variable is deprecated.') + log.warn('Please use COMPOSE_PROJECT_NAME instead.') + + project_name = ( + 
project_name or + os.environ.get('COMPOSE_PROJECT_NAME') or + os.environ.get('FIG_PROJECT_NAME')) + if project_name is not None: + return normalize_name(project_name) + + project = os.path.basename(os.path.abspath(working_dir)) + if project: + return normalize_name(project) + + return 'default' diff --git a/compose/cli/main.py b/compose/cli/main.py --- a/compose/cli/main.py +++ b/compose/cli/main.py @@ -96,7 +96,7 @@ class TopLevelCommand(Command): """Define and run multi-container applications with Docker. Usage: - docker-compose [options] [COMMAND] [ARGS...] + docker-compose [-f=<arg>...] [options] [COMMAND] [ARGS...] docker-compose -h|--help Options: diff --git a/compose/cli/utils.py b/compose/cli/utils.py --- a/compose/cli/utils.py +++ b/compose/cli/utils.py @@ -36,25 +36,6 @@ def yesno(prompt, default=None): return None -def find_candidates_in_parent_dirs(filenames, path): - """ - Given a directory path to start, looks for filenames in the - directory, and then each parent directory successively, - until found. - - Returns tuple (candidates, path). - """ - candidates = [filename for filename in filenames - if os.path.exists(os.path.join(path, filename))] - - if len(candidates) == 0: - parent_dir = os.path.join(path, '..') - if os.path.abspath(parent_dir) != os.path.abspath(path): - return find_candidates_in_parent_dirs(filenames, parent_dir) - - return (candidates, path) - - def split_buffer(reader, separator): """ Given a generator which yields strings and a separator string, diff --git a/compose/config/config.py b/compose/config/config.py --- a/compose/config/config.py +++ b/compose/config/config.py @@ -16,7 +16,6 @@ from .validation import validate_extends_file_path from .validation import validate_service_names from .validation import validate_top_level_object -from compose.cli.utils import find_candidates_in_parent_dirs DOCKER_CONFIG_KEYS = [ @@ -77,6 +76,7 @@ 'fig.yaml', ] +DEFAULT_OVERRIDE_FILENAME = 'docker-compose.override.yml' PATH_START_CHARS = [ '/', @@ -88,24 +88,45 @@ log = logging.getLogger(__name__) -ConfigDetails = namedtuple('ConfigDetails', 'config working_dir filename') +class ConfigDetails(namedtuple('_ConfigDetails', 'working_dir config_files')): + """ + :param working_dir: the directory to use for relative paths in the config + :type working_dir: string + :param config_files: list of configuration files to load + :type config_files: list of :class:`ConfigFile` + """ + + +class ConfigFile(namedtuple('_ConfigFile', 'filename config')): + """ + :param filename: filename of the config file + :type filename: string + :param config: contents of the config file + :type config: :class:`dict` + """ -def find(base_dir, filename): - if filename == '-': - return ConfigDetails(yaml.safe_load(sys.stdin), os.getcwd(), None) +def find(base_dir, filenames): + if filenames == ['-']: + return ConfigDetails( + os.getcwd(), + [ConfigFile(None, yaml.safe_load(sys.stdin))]) - if filename: - filename = os.path.join(base_dir, filename) + if filenames: + filenames = [os.path.join(base_dir, f) for f in filenames] else: - filename = get_config_path(base_dir) - return ConfigDetails(load_yaml(filename), os.path.dirname(filename), filename) + filenames = get_default_config_files(base_dir) + log.debug("Using configuration files: {}".format(",".join(filenames))) + return ConfigDetails( + os.path.dirname(filenames[0]), + [ConfigFile(f, load_yaml(f)) for f in filenames]) -def get_config_path(base_dir): + +def get_default_config_files(base_dir): (candidates, path) = 
find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir) - if len(candidates) == 0: + if not candidates: raise ComposeFileNotFound(SUPPORTED_FILENAMES) winner = candidates[0] @@ -123,7 +144,31 @@ def get_config_path(base_dir): log.warn("%s is deprecated and will not be supported in future. " "Please rename your config file to docker-compose.yml\n" % winner) - return os.path.join(path, winner) + return [os.path.join(path, winner)] + get_default_override_file(path) + + +def get_default_override_file(path): + override_filename = os.path.join(path, DEFAULT_OVERRIDE_FILENAME) + return [override_filename] if os.path.exists(override_filename) else [] + + +def find_candidates_in_parent_dirs(filenames, path): + """ + Given a directory path to start, looks for filenames in the + directory, and then each parent directory successively, + until found. + + Returns tuple (candidates, path). + """ + candidates = [filename for filename in filenames + if os.path.exists(os.path.join(path, filename))] + + if not candidates: + parent_dir = os.path.join(path, '..') + if os.path.abspath(parent_dir) != os.path.abspath(path): + return find_candidates_in_parent_dirs(filenames, parent_dir) + + return (candidates, path) @validate_top_level_object @@ -133,29 +178,49 @@ def pre_process_config(config): Pre validation checks and processing of the config file to interpolate env vars returning a config dict ready to be tested against the schema. """ - config = interpolate_environment_variables(config) - return config + return interpolate_environment_variables(config) def load(config_details): - config, working_dir, filename = config_details - - processed_config = pre_process_config(config) - validate_against_fields_schema(processed_config) + """Load the configuration from a working directory and a list of + configuration files. Files are loaded in order, and merged on top + of each other to create the final configuration. - service_dicts = [] + Return a fully interpolated, extended and validated configuration. + """ - for service_name, service_dict in list(processed_config.items()): + def build_service(filename, service_name, service_dict): loader = ServiceLoader( - working_dir=working_dir, - filename=filename, - service_name=service_name, - service_dict=service_dict) + config_details.working_dir, + filename, + service_name, + service_dict) service_dict = loader.make_service_dict() validate_paths(service_dict) - service_dicts.append(service_dict) - - return service_dicts + return service_dict + + def load_file(filename, config): + processed_config = pre_process_config(config) + validate_against_fields_schema(processed_config) + return [ + build_service(filename, name, service_config) + for name, service_config in processed_config.items() + ] + + def merge_services(base, override): + all_service_names = set(base) | set(override) + return { + name: merge_service_dicts(base.get(name, {}), override.get(name, {})) + for name in all_service_names + } + + config_file = config_details.config_files[0] + for next_file in config_details.config_files[1:]: + config_file = ConfigFile( + config_file.filename, + merge_services(config_file.config, next_file.config)) + + return load_file(config_file.filename, config_file.config) class ServiceLoader(object): </patch>
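For orientation, here is a rough, untested sketch (not part of the patch itself) of how the `ConfigFile`/`ConfigDetails`/`load` pieces introduced above are meant to fit together: later files are merged onto earlier ones service by service, then each service is extended and validated as before. The example service definitions are invented.

```python
from compose.config.config import ConfigDetails, ConfigFile, load

base = ConfigFile("docker-compose.yml", {
    "web": {"image": "busybox", "links": ["db"]},
    "db": {"image": "postgres"},
})
override = ConfigFile("docker-compose.override.yml", {
    "web": {"environment": ["DEBUG=1"]},
})

# Files are merged left to right before the usual per-service processing.
service_dicts = load(ConfigDetails(working_dir=".", config_files=[base, override]))
```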
[]
[]
huggingface__transformers-15085
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Addition of Swin Transformer for Computer Vision # 🌟 Addition Swin Transformer ## Model description Swin Transformer (the name Swin stands for Shifted window) is initially described in an arXiv paper, and it capably serves as a general-purpose backbone for computer vision. It is basically a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on Val), surpassing previous models by a large margin. ## Open source status * [x] the model implementation is available: https://github.com/microsoft/Swin-Transformer * [x] the model weights are available: https://github.com/microsoft/Swin-Transformer * [x] who are the authors: [Swin Transformer](https://arxiv.org/pdf/2103.14030.pdf) ## Possible Task Support: the open-source version supports the tasks below * Image Classification * Object Detection * Instance Segmentation * Semantic Segmentation * Video Recognition </issue> <code> [start of README.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License.
15 --> 16 17 <p align="center"> 18 <br> 19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 20 <br> 21 <p> 22 <p align="center"> 23 <a href="https://circleci.com/gh/huggingface/transformers"> 24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"> 25 </a> 26 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE"> 27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 28 </a> 29 <a href="https://huggingface.co/docs/transformers/index"> 30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 31 </a> 32 <a href="https://github.com/huggingface/transformers/releases"> 33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 34 </a> 35 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md"> 36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 37 </a> 38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 39 </p> 40 41 <h4 align="center"> 42 <p> 43 <b>English</b> | 44 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hans.md">简体中文</a> | 45 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hant.md">繁體中文</a> | 46 <a href="https://github.com/huggingface/transformers/blob/master/README_ko.md">한국어</a> 47 <p> 48 </h4> 49 50 <h3 align="center"> 51 <p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p> 52 </h3> 53 54 <h3 align="center"> 55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 56 </h3> 57 58 🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. 59 60 These models can be applied on: 61 62 * 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages. 63 * 🖼️ Images, for tasks like image classification, object detection, and segmentation. 64 * 🗣️ Audio, for tasks like speech recognition and audio classification. 65 66 Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering. 67 68 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments. 69 70 🤗 Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other. 
71 72 ## Online demos 73 74 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models. 75 76 Here are a few examples: 77 78 In Natural Language Processing: 79 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 80 - [Name Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 81 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 82 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 83 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 84 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 85 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 86 87 In Computer Vision: 88 - [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224) 89 - [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50) 90 - [Image Segmentation with DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic) 91 92 In Audio: 93 - [Automatic Speech Recognition with 
Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) 94 - [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) 95 96 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities. 97 98 ## If you are looking for custom support from the Hugging Face team 99 100 <a target="_blank" href="https://huggingface.co/support"> 101 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 102 </a><br> 103 104 ## Quick tour 105 106 To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts: 107 108 ```python 109 >>> from transformers import pipeline 110 111 # Allocate a pipeline for sentiment-analysis 112 >>> classifier = pipeline('sentiment-analysis') 113 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 114 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 115 ``` 116 117 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%. 118 119 Many NLP tasks have a pre-trained `pipeline` ready to go. For example, we can easily extract question answers given context: 120 121 ``` python 122 >>> from transformers import pipeline 123 124 # Allocate a pipeline for question-answering 125 >>> question_answerer = pipeline('question-answering') 126 >>> question_answerer({ 127 ... 'question': 'What is the name of the repository ?', 128 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 129 ... }) 130 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 131 132 ``` 133 134 In addition to the answer, the pretrained model used here returned its confidence score, along with the start position and end position of the answer in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary). 135 136 To download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version: 137 ```python 138 >>> from transformers import AutoTokenizer, AutoModel 139 140 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 141 >>> model = AutoModel.from_pretrained("bert-base-uncased") 142 143 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 144 >>> outputs = model(**inputs) 145 ``` 146 And here is the equivalent code for TensorFlow: 147 ```python 148 >>> from transformers import AutoTokenizer, TFAutoModel 149 150 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 151 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 152 153 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 154 >>> outputs = model(**inputs) 155 ``` 156 157 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. 
It will output a dictionary that you can use in downstream code or simply directly pass to your model using the ** argument unpacking operator. 158 159 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use normally. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset. 160 161 ## Why should I use transformers? 162 163 1. Easy-to-use state-of-the-art models: 164 - High performance on natural language understanding & generation, computer vision, and audio tasks. 165 - Low barrier to entry for educators and practitioners. 166 - Few user-facing abstractions with just three classes to learn. 167 - A unified API for using all our pretrained models. 168 169 1. Lower compute costs, smaller carbon footprint: 170 - Researchers can share trained models instead of always retraining. 171 - Practitioners can reduce compute time and production costs. 172 - Dozens of architectures with over 20,000 pretrained models, some in more than 100 languages. 173 174 1. Choose the right framework for every part of a model's lifetime: 175 - Train state-of-the-art models in 3 lines of code. 176 - Move a single model between TF2.0/PyTorch/JAX frameworks at will. 177 - Seamlessly pick the right framework for training, evaluation and production. 178 179 1. Easily customize a model or an example to your needs: 180 - We provide examples for each architecture to reproduce the results published by its original authors. 181 - Model internals are exposed as consistently as possible. 182 - Model files can be used independently of the library for quick experiments. 183 184 ## Why shouldn't I use transformers? 185 186 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files. 187 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library. 188 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/master/examples) are just that: examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. 189 190 ## Installation 191 192 ### With pip 193 194 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+. 195 196 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). 197 198 First, create a virtual environment with the version of Python you're going to use and activate it. 199 200 Then, you will need to install at least one of Flax, PyTorch or TensorFlow. 
201 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific install command for your platform. 202 203 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows: 204 205 ```bash 206 pip install transformers 207 ``` 208 209 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source). 210 211 ### With conda 212 213 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`. 214 215 🤗 Transformers can be installed using conda as follows: 216 217 ```shell script 218 conda install -c huggingface transformers 219 ``` 220 221 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda. 222 223 ## Model architectures 224 225 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations). 226 227 Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 228 229 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each them): 230 231 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 232 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 233 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 234 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 235 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 236 1. 
**[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 237 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 238 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 239 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 240 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 241 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 242 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 243 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 244 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 245 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 246 1. 
**[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 247 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 248 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 249 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 250 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 251 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 252 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 253 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 254 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 255 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 256 1. 
**[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation) and a German version of DistilBERT. 257 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval 258 for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon 259 Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 260 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 261 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 262 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 263 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 264 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 265 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 266 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 267 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 268 1. 
**[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 269 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 270 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 271 1. **[ImageGPT](https://huggingface.co/docs/transformers/master/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 272 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 273 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 274 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 275 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 276 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 277 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 278 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 279 1. 
**[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 280 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 281 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 282 1. **[MBart](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 283 1. **[MBart-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 284 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 285 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 286 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 287 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 288 1. **[Nyströmformer](https://huggingface.co/docs/transformers/master/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 289 1. 
**[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 290 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 291 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 292 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 293 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 294 1. **[REALM](https://huggingface.co/transformers/master/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 295 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 296 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 297 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 298 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 299 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. 
Alvarez, Ping Luo. 300 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 301 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 302 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 303 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 304 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 305 1. **[SqueezeBert](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 306 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 307 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 308 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 309 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 310 1. 
**[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
 311 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
 312 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER
 313 AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
 314 1. **[ViLT](https://huggingface.co/docs/transformers/master/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
 315 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
 316 1. **[ViTMAE](https://huggingface.co/docs/transformers/master/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
 317 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
 318 1. **[WavLM](https://huggingface.co/docs/transformers/master/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
 319 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
 320 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/master/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
 321 1. 
**[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
 322 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
 323 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
 324 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
 325 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
 326 1. **[XLS-R](https://huggingface.co/docs/transformers/master/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
 327 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
 328 
 329 To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
 330 
 331 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
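Every checkpoint of the architectures listed above is loaded through the same `Auto*` classes, which resolve the matching implementation from the checkpoint's configuration. The snippet below is only a rough sketch of that mechanism; `distilbert-base-uncased` is just an example model id from the hub, and any checkpoint with a PyTorch implementation would behave the same way:

```python
# Minimal sketch: the Auto* classes pick the architecture that matches a checkpoint.
from transformers import AutoConfig, AutoModel, AutoTokenizer

checkpoint = "distilbert-base-uncased"  # example model id, not prescribed by this README

config = AutoConfig.from_pretrained(checkpoint)       # resolves to the DistilBERT config class
tokenizer = AutoTokenizer.from_pretrained(checkpoint) # resolves to the matching tokenizer
model = AutoModel.from_pretrained(checkpoint)         # resolves to the DistilBERT base model

print(type(config).__name__, type(model).__name__)    # e.g. DistilBertConfig DistilBertModel
```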
 332 
 333 
 334 ## Learn more
 335 
 336 | Section | Description |
 337 |-|-|
 338 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials |
 339 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
 340 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
 341 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API |
 342 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/master/examples) | Example scripts for fine-tuning models on a wide range of tasks |
 343 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
 344 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |
 345 
 346 ## Citation
 347 
 348 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
 349 ```bibtex
 350 @inproceedings{wolf-etal-2020-transformers,
 351     title = "Transformers: State-of-the-Art Natural Language Processing",
 352     author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
 353     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
 354     month = oct,
 355     year = "2020",
 356     address = "Online",
 357     publisher = "Association for Computational Linguistics",
 358     url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
 359     pages = "38--45"
 360 }
 361 ```
 362 
[end of README.md]
[start of README_ko.md]
 1 <!---
 2 Copyright 2020 The HuggingFace Team. All rights reserved.
 3 
 4 Licensed under the Apache License, Version 2.0 (the "License");
 5 you may not use this file except in compliance with the License.
 6 You may obtain a copy of the License at
 7 
 8     http://www.apache.org/licenses/LICENSE-2.0
 9 
 10 Unless required by applicable law or agreed to in writing, software
 11 distributed under the License is distributed on an "AS IS" BASIS,
 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 13 See the License for the specific language governing permissions and
 14 limitations under the License.
15 --> 16 17 <p align="center"> 18 <br> 19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 20 <br> 21 <p> 22 <p align="center"> 23 <a href="https://circleci.com/gh/huggingface/transformers"> 24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"> 25 </a> 26 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE"> 27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 28 </a> 29 <a href="https://huggingface.co/docs/transformers/index"> 30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 31 </a> 32 <a href="https://github.com/huggingface/transformers/releases"> 33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 34 </a> 35 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md"> 36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 37 </a> 38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 39 </p> 40 41 <h4 align="center"> 42 <p> 43 <a href="https://github.com/huggingface/transformers/">English</a> | 44 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hans.md">简体中文</a> | 45 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hant.md">繁體中文</a> | 46 <b>한국어</b> 47 <p> 48 </h4> 49 50 <h3 align="center"> 51 <p> Jax, Pytorch, TensorFlow를 위한 최첨단 자연어처리</p> 52 </h3> 53 54 <h3 align="center"> 55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 56 </h3> 57 58 🤗 Transformers는 분류, 정보 추출, 질문 답변, 요약, 번역, 문장 생성 등을 100개 이상의 언어로 수행할 수 있는 수천개의 사전학습된 모델을 제공합니다. 우리의 목표는 모두가 최첨단의 NLP 기술을 쉽게 사용하는 것입니다. 59 60 🤗 Transformers는 이러한 사전학습 모델을 빠르게 다운로드해 특정 텍스트에 사용하고, 원하는 데이터로 fine-tuning해 커뮤니티나 우리의 [모델 허브](https://huggingface.co/models)에 공유할 수 있도록 API를 제공합니다. 또한, 모델 구조를 정의하는 각 파이썬 모듈은 완전히 독립적이여서 연구 실험을 위해 손쉽게 수정할 수 있습니다. 61 62 🤗 Transformers는 가장 유명한 3개의 딥러닝 라이브러리를 지원합니다. 이들은 서로 완벽히 연동됩니다 — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/). 간단하게 이 라이브러리 중 하나로 모델을 학습하고, 또 다른 라이브러리로 추론을 위해 모델을 불러올 수 있습니다. 63 64 ## 온라인 데모 65 66 대부분의 모델을 [모델 허브](https://huggingface.co/models) 페이지에서 바로 테스트해볼 수 있습니다. 공개 및 비공개 모델을 위한 [비공개 모델 호스팅, 버전 관리, 추론 API](https://huggingface.co/pricing)도 제공합니다. 
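위에서 언급한 호스팅 추론 API는 HTTP 요청만으로도 호출할 수 있습니다. 아래는 `requests`를 이용한 최소한의 예시 스케치입니다. 엔드포인트 패턴, 예시 모델 ID, 토큰 자리표시자는 이 README에 없는 가정이므로 실제 사용 시 확인이 필요합니다:

```python
# A minimal sketch of calling the hosted Inference API. The endpoint pattern,
# model id and token below are assumptions / placeholders, not part of this README.
import requests

API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
headers = {"Authorization": "Bearer hf_xxx"}  # replace with your own access token

response = requests.post(API_URL, headers=headers, json={"inputs": "I love this library!"})
print(response.json())  # e.g. [[{'label': 'POSITIVE', 'score': ...}, ...]]
```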
67 68 예시: 69 - [BERT로 마스킹된 단어 완성하기](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 70 - [Electra를 이용한 개체명 인식](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 71 - [GPT-2로 텍스트 생성하기](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 72 - [RoBERTa로 자연어 추론하기](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 73 - [BART를 이용한 요약](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 74 - [DistilBERT를 이용한 질문 답변](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 75 - [T5로 번역하기](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 76 77 **[Transformer와 글쓰기](https://transformer.huggingface.co)** 는 이 저장소의 텍스트 생성 능력에 관한 Hugging Face 팀의 공식 데모입니다. 78 79 ## Hugging Face 팀의 커스텀 지원을 원한다면 80 81 <a target="_blank" href="https://huggingface.co/support"> 82 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 83 </a><br> 84 85 ## 퀵 투어 86 87 원하는 텍스트에 바로 모델을 사용할 수 있도록, 우리는 `pipeline` API를 제공합니다. Pipeline은 사전학습 모델과 그 모델을 학습할 때 적용한 전처리 방식을 하나로 합칩니다. 
다음은 긍정적인 텍스트와 부정적인 텍스트를 분류하기 위해 pipeline을 사용한 간단한 예시입니다: 88 89 ```python 90 >>> from transformers import pipeline 91 92 # Allocate a pipeline for sentiment-analysis 93 >>> classifier = pipeline('sentiment-analysis') 94 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 95 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 96 ``` 97 98 코드의 두번째 줄은 pipeline이 사용하는 사전학습 모델을 다운로드하고 캐시로 저장합니다. 세번째 줄에선 그 모델이 주어진 텍스트를 평가합니다. 여기서 모델은 99.97%의 확률로 텍스트가 긍정적이라고 평가했습니다. 99 100 많은 NLP 과제들을 `pipeline`으로 바로 수행할 수 있습니다. 예를 들어, 질문과 문맥이 주어지면 손쉽게 답변을 추출할 수 있습니다: 101 102 ``` python 103 >>> from transformers import pipeline 104 105 # Allocate a pipeline for question-answering 106 >>> question_answerer = pipeline('question-answering') 107 >>> question_answerer({ 108 ... 'question': 'What is the name of the repository ?', 109 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 110 ... }) 111 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 112 113 ``` 114 115 답변뿐만 아니라, 여기에 사용된 사전학습 모델은 확신도와 토크나이즈된 문장 속 답변의 시작점, 끝점까지 반환합니다. [이 튜토리얼](https://huggingface.co/docs/transformers/task_summary)에서 `pipeline` API가 지원하는 다양한 과제를 확인할 수 있습니다. 116 117 코드 3줄로 원하는 과제에 맞게 사전학습 모델을 다운로드 받고 사용할 수 있습니다. 다음은 PyTorch 버전입니다: 118 ```python 119 >>> from transformers import AutoTokenizer, AutoModel 120 121 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 122 >>> model = AutoModel.from_pretrained("bert-base-uncased") 123 124 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 125 >>> outputs = model(**inputs) 126 ``` 127 다음은 TensorFlow 버전입니다: 128 ```python 129 >>> from transformers import AutoTokenizer, TFAutoModel 130 131 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 132 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 133 134 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 135 >>> outputs = model(**inputs) 136 ``` 137 138 토크나이저는 사전학습 모델의 모든 전처리를 책임집니다. 그리고 (위의 예시처럼) 1개의 스트링이나 리스트도 처리할 수 있습니다. 토크나이저는 딕셔너리를 반환하는데, 이는 다운스트림 코드에 사용하거나 언패킹 연산자 ** 를 이용해 모델에 바로 전달할 수도 있습니다. 139 140 모델 자체는 일반적으로 사용되는 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)나 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)입니다. [이 튜토리얼](https://huggingface.co/transformers/training.html)은 이러한 모델을 표준적인 PyTorch나 TensorFlow 학습 과정에서 사용하는 방법, 또는 새로운 데이터로 fine-tune하기 위해 `Trainer` API를 사용하는 방법을 설명해줍니다. 141 142 ## 왜 transformers를 사용해야 할까요? 143 144 1. 손쉽게 사용할 수 있는 최첨단 모델: 145 - NLU와 NLG 과제에서 뛰어난 성능을 보입니다. 146 - 교육자 실무자에게 진입 장벽이 낮습니다. 147 - 3개의 클래스만 배우면 바로 사용할 수 있습니다. 148 - 하나의 API로 모든 사전학습 모델을 사용할 수 있습니다. 149 150 1. 더 적은 계산 비용, 더 적은 탄소 발자국: 151 - 연구자들은 모델을 계속 다시 학습시키는 대신 학습된 모델을 공유할 수 있습니다. 152 - 실무자들은 학습에 필요한 시간과 비용을 절약할 수 있습니다. 153 - 수십개의 모델 구조, 2,000개 이상의 사전학습 모델, 100개 이상의 언어로 학습된 모델 등. 154 155 1. 모델의 각 생애주기에 적합한 프레임워크: 156 - 코드 3줄로 최첨단 모델을 학습하세요. 157 - 자유롭게 모델을 TF2.0나 PyTorch 프레임워크로 변환하세요. 158 - 학습, 평가, 공개 등 각 단계에 맞는 프레임워크를 원하는대로 선택하세요. 159 160 1. 필요한 대로 모델이나 예시를 커스터마이즈하세요: 161 - 우리는 저자가 공개한 결과를 재현하기 위해 각 모델 구조의 예시를 제공합니다. 162 - 모델 내부 구조는 가능한 일관적으로 공개되어 있습니다. 163 - 빠른 실험을 위해 모델 파일은 라이브러리와 독립적으로 사용될 수 있습니다. 164 165 ## 왜 transformers를 사용하지 말아야 할까요? 166 167 - 이 라이브러리는 신경망 블록을 만들기 위한 모듈이 아닙니다. 연구자들이 여러 파일을 살펴보지 않고 바로 각 모델을 사용할 수 있도록, 모델 파일 코드의 추상화 수준을 적정하게 유지했습니다. 168 - 학습 API는 모든 모델에 적용할 수 있도록 만들어지진 않았지만, 라이브러리가 제공하는 모델들에 적용할 수 있도록 최적화되었습니다. 일반적인 머신 러닝을 위해선, 다른 라이브러리를 사용하세요. 
169 - 가능한 많은 사용 예시를 보여드리고 싶어서, [예시 폴더](https://github.com/huggingface/transformers/tree/master/examples)의 스크립트를 준비했습니다. 이 스크립트들을 수정 없이 특정한 문제에 바로 적용하지 못할 수 있습니다. 필요에 맞게 일부 코드를 수정해야 할 수 있습니다. 170 171 ## 설치 172 173 ### pip로 설치하기 174 175 이 저장소는 Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+, TensorFlow 2.3+에서 테스트 되었습니다. 176 177 [가상 환경](https://docs.python.org/3/library/venv.html)에 🤗 Transformers를 설치하세요. Python 가상 환경에 익숙하지 않다면, [사용자 가이드](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)를 확인하세요. 178 179 우선, 사용할 Python 버전으로 가상 환경을 만들고 실행하세요. 180 181 그 다음, Flax, PyTorch, TensorFlow 중 적어도 하나는 설치해야 합니다. 182 플랫폼에 맞는 설치 명령어를 확인하기 위해 [TensorFlow 설치 페이지](https://www.tensorflow.org/install/), [PyTorch 설치 페이지](https://pytorch.org/get-started/locally/#start-locally), [Flax 설치 페이지](https://github.com/google/flax#quick-install)를 확인하세요. 183 184 이들 중 적어도 하나가 설치되었다면, 🤗 Transformers는 다음과 같이 pip을 이용해 설치할 수 있습니다: 185 186 ```bash 187 pip install transformers 188 ``` 189 190 예시들을 체험해보고 싶거나, 최최최첨단 코드를 원하거나, 새로운 버전이 나올 때까지 기다릴 수 없다면 [라이브러리를 소스에서 바로 설치](https://huggingface.co/docs/transformers/installation#installing-from-source)하셔야 합니다. 191 192 ### conda로 설치하기 193 194 Transformers 버전 v4.0.0부터, conda 채널이 생겼습니다: `huggingface`. 195 196 🤗 Transformers는 다음과 같이 conda로 설치할 수 있습니다: 197 198 ```shell script 199 conda install -c huggingface transformers 200 ``` 201 202 Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는 방법을 확인하세요. 203 204 ## 모델 구조 205 206 **🤗 Transformers가 제공하는 [모든 모델 체크포인트](https://huggingface.co/models)** 는 huggingface.co [모델 허브](https://huggingface.co)에 완벽히 연동되어 있습니다. [개인](https://huggingface.co/users)과 [기관](https://huggingface.co/organizations)이 모델 허브에 직접 업로드할 수 있습니다. 207 208 현재 사용 가능한 모델 체크포인트의 개수: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 209 210 🤗 Transformers는 다음 모델들을 제공합니다 (각 모델의 요약은 [여기](https://huggingface.co/docs/transformers/model_summary)서 확인하세요): 211 212 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 213 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 214 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 215 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 216 1. 
**[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 217 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 218 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 219 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 220 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 221 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 222 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 223 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 224 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 225 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 226 1. 
**[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 227 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 228 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 229 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 230 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 231 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 232 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 233 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 234 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 235 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 236 1. 
**[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 237 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT. 238 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 239 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 240 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 241 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 242 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 243 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 244 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 245 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 246 1. 
**[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 247 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 248 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 249 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 250 1. **[ImageGPT](https://huggingface.co/docs/transformers/master/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 251 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 252 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 253 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 254 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 255 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 256 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 257 1. 
**[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 258 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 259 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 260 1. **[MBart](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 261 1. **[MBart-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 262 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 263 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 264 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 265 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 266 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 267 1. 
**[Nyströmformer](https://huggingface.co/docs/transformers/master/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 268 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 269 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 270 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 271 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 272 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 273 1. **[REALM](https://huggingface.co/transformers/master/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 274 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 275 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 276 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 277 1. 
**[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 278 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 279 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 280 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 281 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 282 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 283 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 284 1. **[SqueezeBert](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 285 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 286 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 287 1. 
**[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 288 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 289 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 290 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 291 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 292 1. **[ViLT](https://huggingface.co/docs/transformers/master/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 293 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 294 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 295 1. **[ViTMAE](https://huggingface.co/docs/transformers/master/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 296 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 297 1.
**[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/master/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 298 1. **[WavLM](https://huggingface.co/docs/transformers/master/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 299 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 300 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 301 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 302 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 303 1. **[XLS-R](https://huggingface.co/docs/transformers/master/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 304 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 305 1. 새로운 모델을 올리고 싶나요? 우리가 **상세한 가이드와 템플릿** 으로 새로운 모델을 올리도록 도와드릴게요. 가이드와 템플릿은 이 저장소의 [`templates`](./templates) 폴더에서 확인하실 수 있습니다. [컨트리뷰션 가이드라인](./CONTRIBUTING.md)을 꼭 확인해주시고, PR을 올리기 전에 메인테이너에게 연락하거나 이슈를 오픈해 피드백을 받으시길 바랍니다. 306 307 각 모델이 Flax, PyTorch, TensorFlow으로 구현되었는지 또는 🤗 Tokenizers 라이브러리가 지원하는 토크나이저를 사용하는지 확인하려면, [이 표](https://huggingface.co/docs/transformers/index#supported-frameworks)를 확인하세요. 308 309 이 구현은 여러 데이터로 검증되었고 (예시 스크립트를 참고하세요) 오리지널 구현의 성능과 같아야 합니다. [도큐먼트](https://huggingface.co/docs/transformers/examples)의 Examples 섹션에서 성능에 대한 자세한 설명을 확인할 수 있습니다.
310 311 ## 더 알아보기 312 313 | 섹션 | 설명 | 314 |-|-| 315 | [도큐먼트](https://huggingface.co/transformers/) | 전체 API 도큐먼트와 튜토리얼 | 316 | [과제 요약](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers가 지원하는 과제들 | 317 | [전처리 튜토리얼](https://huggingface.co/docs/transformers/preprocessing) | `Tokenizer` 클래스를 이용해 모델을 위한 데이터 준비하기 | 318 | [학습과 fine-tuning](https://huggingface.co/docs/transformers/training) | 🤗 Transformers가 제공하는 모델을 PyTorch/TensorFlow 학습 과정과 `Trainer` API에서 사용하기 | 319 | [퀵 투어: Fine-tuning/사용 스크립트](https://github.com/huggingface/transformers/tree/master/examples) | 다양한 과제에서 모델 fine-tuning하는 예시 스크립트 | 320 | [모델 공유 및 업로드](https://huggingface.co/docs/transformers/model_sharing) | 커뮤니티에 fine-tune된 모델을 업로드 및 공유하기 | 321 | [마이그레이션](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`나 `pytorch-pretrained-bert`에서 🤗 Transformers로 이동하기 | 322 323 ## 인용 324 325 🤗 Transformers 라이브러리를 인용하고 싶다면, 이 [논문](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)을 인용해 주세요: 326 ```bibtex 327 @inproceedings{wolf-etal-2020-transformers, 328 title = "Transformers: State-of-the-Art Natural Language Processing", 329 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 330 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 331 month = oct, 332 year = "2020", 333 address = "Online", 334 publisher = "Association for Computational Linguistics", 335 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 336 pages = "38--45" 337 } 338 ``` 339 [end of README_ko.md] [start of README_zh-hans.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 15 --> 16 17 <!--- 18 A useful guide for English-Chinese translation of Hugging Face documentation 19 - Add space around English words and numbers when they appear between Chinese characters.
E.g., 共 100 多种语言; 使用 transformers 库。 20 - Use square quotes, e.g.,「引用」 21 22 Dictionary 23 24 Hugging Face: 抱抱脸 25 token: 词符(并用括号标注原英文) 26 tokenize: 词符化(并用括号标注原英文) 27 tokenizer: 词符化器(并用括号标注原英文) 28 transformer: transformer(不翻译) 29 pipeline: 流水线 30 API: API (不翻译) 31 inference: 推理 32 Trainer: 训练器。当作为类名出现时不翻译。 33 pretrained/pretrain: 预训练 34 finetune: 微调 35 community: 社区 36 example: 当特指仓库中 example 目录时翻译为「用例」 37 Python data structures (e.g., list, set, dict): 翻译为列表,集合,词典,并用括号标注原英文 38 NLP/Natural Language Processing: 以 NLP 出现时不翻译,以 Natural Language Processing 出现时翻译为自然语言处理 39 checkpoint: 检查点 40 --> 41 42 <p align="center"> 43 <br> 44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 45 <br> 46 <p> 47 <p align="center"> 48 <a href="https://circleci.com/gh/huggingface/transformers"> 49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"> 50 </a> 51 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE"> 52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 53 </a> 54 <a href="https://huggingface.co/docs/transformers/index"> 55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 56 </a> 57 <a href="https://github.com/huggingface/transformers/releases"> 58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 59 </a> 60 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md"> 61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 62 </a> 63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 64 </p> 65 66 <h4 align="center"> 67 <p> 68 <a href="https://github.com/huggingface/transformers/">English</a> | 69 <b>简体中文</b> | 70 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hant.md">繁體中文</a> | 71 <a href="https://github.com/huggingface/transformers/blob/master/README_ko.md">한국어</a> 72 <p> 73 </h4> 74 75 <h3 align="center"> 76 <p>为 Jax、PyTorch 和 TensorFlow 打造的先进的自然语言处理</p> 77 </h3> 78 79 <h3 align="center"> 80 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 81 </h3> 82 83 🤗 Transformers 提供了数以千计的预训练模型,支持 100 多种语言的文本分类、信息抽取、问答、摘要、翻译、文本生成。它的宗旨让最先进的 NLP 技术人人易用。 84 85 🤗 Transformers 提供了便于快速下载和使用的API,让你可以把预训练模型用在给定文本、在你的数据集上微调然后通过 [model hub](https://huggingface.co/models) 与社区共享。同时,每个定义的 Python 模块均完全独立,方便修改和快速研究实验。 86 87 🤗 Transformers 支持三个最热门的深度学习库: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — 并与之无缝整合。你可以直接使用一个框架训练你的模型然后用另一个加载和推理。 88 89 ## 在线演示 90 91 你可以直接在模型页面上测试大多数 [model hub](https://huggingface.co/models) 上的模型。 我们也提供了 [私有模型托管、模型版本管理以及推理API](https://huggingface.co/pricing)。 92 93 这里是一些例子: 94 - [用 BERT 做掩码填词](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 95 - [用 Electra 做命名实体识别](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 96 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 97 - [用 RoBERTa 
做自然语言推理](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 98 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 99 - [用 DistilBERT 做问答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 100 - [用 T5 做翻译](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 101 102 **[Write With Transformer](https://transformer.huggingface.co)**,由抱抱脸团队打造,是一个文本生成的官方 demo。 103 104 ## 如果你在寻找由抱抱脸团队提供的定制化支持服务 105 106 <a target="_blank" href="https://huggingface.co/support"> 107 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 108 </a><br> 109 110 ## 快速上手 111 112 我们为快速使用模型提供了 `pipeline` (流水线)API。流水线聚合了预训练模型和对应的文本预处理。下面是一个快速使用流水线去判断正负面情绪的例子: 113 114 ```python 115 >>> from transformers import pipeline 116 117 # 使用情绪分析流水线 118 >>> classifier = pipeline('sentiment-analysis') 119 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 120 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 121 ``` 122 123 第二行代码下载并缓存了流水线使用的预训练模型,而第三行代码则在给定的文本上进行了评估。这里的答案“正面” (positive) 具有 99 的置信度。 124 125 许多的 NLP 任务都有开箱即用的预训练流水线。比如说,我们可以轻松的从给定文本中抽取问题答案: 126 127 ``` python 128 >>> from transformers import pipeline 129 130 # 使用问答流水线 131 >>> question_answerer = pipeline('question-answering') 132 >>> question_answerer({ 133 ... 
'question': 'What is the name of the repository ?', 134 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 135 ... }) 136 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 137 138 ``` 139 140 除了给出答案,预训练模型还给出了对应的置信度分数、答案在词符化 (tokenized) 后的文本中开始和结束的位置。你可以从[这个教程](https://huggingface.co/docs/transformers/task_summary)了解更多流水线API支持的任务。 141 142 要在你的任务上下载和使用任意预训练模型也很简单,只需三行代码。这里是 PyTorch 版的示例: 143 ```python 144 >>> from transformers import AutoTokenizer, AutoModel 145 146 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 147 >>> model = AutoModel.from_pretrained("bert-base-uncased") 148 149 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 150 >>> outputs = model(**inputs) 151 ``` 152 这里是等效的 TensorFlow 代码: 153 ```python 154 >>> from transformers import AutoTokenizer, TFAutoModel 155 156 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 157 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 158 159 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 160 >>> outputs = model(**inputs) 161 ``` 162 163 词符化器 (tokenizer) 为所有的预训练模型提供了预处理,并可以直接对单个字符串进行调用(比如上面的例子)或对列表 (list) 调用。它会输出一个你可以在下游代码里使用或直接通过 `**` 解包表达式传给模型的词典 (dict)。 164 165 模型本身是一个常规的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取决于你的后端),可以常规方式使用。 [这个教程](https://huggingface.co/transformers/training.html)解释了如何将这样的模型整合到经典的 PyTorch 或 TensorFlow 训练循环中,或是如何使用我们的 `Trainer`(训练器)API 来在一个新的数据集上快速微调。 166 167 ## 为什么要用 transformers? 168 169 1. 便于使用的先进模型: 170 - NLU 和 NLG 上表现优越 171 - 对教学和实践友好且低门槛 172 - 高级抽象,只需了解三个类 173 - 对所有模型统一的API 174 175 1. 更低计算开销,更少的碳排放: 176 - 研究人员可以分享已训练的模型而非次次从头开始训练 177 - 工程师可以减少计算用时和生产环境开销 178 - 数十种模型架构、两千多个预训练模型、100多种语言支持 179 180 1. 对于模型生命周期的每一个部分都面面俱到: 181 - 训练先进的模型,只需 3 行代码 182 - 模型在不同深度学习框架间任意转移,随你心意 183 - 为训练、评估和生产选择最适合的框架,衔接无缝 184 185 1. 为你的需求轻松定制专属模型和用例: 186 - 我们为每种模型架构提供了多个用例来复现原论文结果 187 - 模型内部结构保持透明一致 188 - 模型文件可单独使用,方便魔改和快速实验 189 190 ## 什么情况下我不该用 transformers?
191 192 - 本库并不是模块化的神经网络工具箱。模型文件中的代码特意呈若璞玉,未经额外抽象封装,以便研究人员快速迭代魔改而不致溺于抽象和文件跳转之中。 193 - `Trainer` API 并非兼容任何模型,只为本库之模型优化。若是在寻找适用于通用机器学习的训练循环实现,请另觅他库。 194 - 尽管我们已尽力而为,[examples 目录](https://github.com/huggingface/transformers/tree/master/examples)中的脚本也仅为用例而已。对于你的特定问题,它们并不一定开箱即用,可能需要改几行代码以适之。 195 196 ## 安装 197 198 ### 使用 pip 199 200 这个仓库已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下经过测试。 201 202 你可以在[虚拟环境](https://docs.python.org/3/library/venv.html)中安装 🤗 Transformers。如果你还不熟悉 Python 的虚拟环境,请阅此[用户说明](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。 203 204 首先,用你打算使用的版本的 Python 创建一个虚拟环境并激活。 205 206 然后,你需要安装 Flax、PyTorch 或 TensorFlow 其中之一。关于在你使用的平台上安装这些框架,请参阅 [TensorFlow 安装页](https://www.tensorflow.org/install/), [PyTorch 安装页](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安装页](https://github.com/google/flax#quick-install)。 207 208 当这些后端之一安装成功后, 🤗 Transformers 可依此安装: 209 210 ```bash 211 pip install transformers 212 ``` 213 214 如果你想要试试用例或者想在正式发布前使用最新的开发中代码,你得[从源代码安装](https://huggingface.co/docs/transformers/installation#installing-from-source)。 215 216 ### 使用 conda 217 218 自 Transformers 4.0.0 版始,我们有了一个 conda 频道: `huggingface`。 219 220 🤗 Transformers 可以通过 conda 依此安装: 221 222 ```shell script 223 conda install -c huggingface transformers 224 ``` 225 226 要通过 conda 安装 Flax、PyTorch 或 TensorFlow 其中之一,请参阅它们各自安装页的说明。 227 228 ## 模型架构 229 230 **🤗 Transformers 支持的[所有的模型检查点](https://huggingface.co/models)** 由[用户](https://huggingface.co/users)和[组织](https://huggingface.co/organizations)上传,均与 huggingface.co [model hub](https://huggingface.co) 无缝整合。 231 232 目前的检查点数量: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 233 234 🤗 Transformers 目前支持如下的架构(模型概述请阅[这里](https://huggingface.co/docs/transformers/model_summary)): 235 236 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。 237 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。 238 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。 239 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。 240 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。 241 1. 
**[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。 242 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。 243 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。 244 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。 245 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。 246 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。 247 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。 248 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。 249 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。 250 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。 251 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。 252 1. 
**[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。 253 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。 254 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。 255 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。 256 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。 257 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。 258 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。 259 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。 260 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。 261 1. 
**[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) 和德语版 DistilBERT。 262 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。 263 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。 264 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。 265 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。 266 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。 267 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。 268 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。 269 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布。作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy 发布。 270 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。 271 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 伴随论文 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 由 Ben Wang and Aran Komatsuzaki 发布。 272 1. 
**[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。 273 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。 274 1. **[ImageGPT](https://huggingface.co/docs/transformers/master/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。 275 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。 276 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。 277 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。 278 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。 279 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。 280 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。 281 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。 282 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。 283 1. 
**[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。 284 1. **[MBart](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。 285 1. **[MBart-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。 286 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。 287 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。 288 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (来自 Studio Ousia) 伴随论文 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 由 Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 发布。 289 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。 290 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。 291 1. **[Nyströmformer](https://huggingface.co/docs/transformers/master/model_doc/nystromformer)** (来自 the University of Wisconsin - Madison) 伴随论文 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 由 Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 发布。 292 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。 293 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。 294 1. 
**[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。 295 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。 296 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。 297 1. **[REALM](https://huggingface.co/transformers/master/model_doc/realm.html)** (来自 Google Research) 伴随论文 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 由 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 发布。 298 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。 299 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。 300 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。 301 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。 302 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。 303 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。 304 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。 305 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。 306 1. 
**[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。 307 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。 308 1. **[SqueezeBert](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。 309 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。 310 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。 311 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。 312 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。 313 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。 314 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。 315 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。 316 1. 
**[ViLT)](https://huggingface.co/docs/transformers/master/model_doc/vilt)** (来自 NAVER AI Lab/Kakao Enterprise/Kakao Brain) 伴随论文 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 由 Wonjae Kim, Bokyung Son, Ildoo Kim 发布。 317 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。 318 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。 319 1. **[ViTMAE)](https://huggingface.co/docs/transformers/master/model_doc/vit_mae)** (来自 Meta AI) 伴随论文 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 由 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 发布。 320 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。 321 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/master/transformers/model_doc/wav2vec2_phoneme)** (来自 Facebook AI) 伴随论文 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 由 Qiantong Xu, Alexei Baevski, Michael Auli 发布。 322 1. **[WavLM](https://huggingface.co/docs/transformers/master/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 323 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。 324 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。 325 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。 326 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. 
Le 发布。 327 1. **[XLS-R](https://huggingface.co/docs/transformers/master/model_doc/xls_r)** (来自 Facebook AI) 伴随论文 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 由 Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 发布。 328 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。 329 1. 想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到他们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。 330 331 要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。 332 333 这些实现均已于多个数据集测试(请参看用例脚本)并应于原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。 334 335 336 ## 了解更多 337 338 | 章节 | 描述 | 339 |-|-| 340 | [文档](https://huggingface.co/transformers/) | 完整的 API 文档和教程 | 341 | [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 | 342 | [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 | 343 | [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 | 344 | [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/master/examples) | 为各种任务提供的用例脚本 | 345 | [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 和社区上传和分享你微调的模型 | 346 | [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers | 347 348 ## 引用 349 350 我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用: 351 ```bibtex 352 @inproceedings{wolf-etal-2020-transformers, 353 title = "Transformers: State-of-the-Art Natural Language Processing", 354 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 355 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 356 month = oct, 357 year = "2020", 358 address = "Online", 359 publisher = "Association for Computational Linguistics", 360 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 361 pages = "38--45" 362 } 363 ``` 364 [end of README_zh-hans.md] [start of README_zh-hant.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and 14 limitations under the License. 15 --> 16 17 <!--- 18 A useful guide for English-Traditional Chinese translation of Hugging Face documentation 19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多種語言; 使用 transformers 函式庫。 20 - Use square quotes, e.g.,「引用」 21 - Some of terms in the file can be found at National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese. 22 23 Dictionary 24 25 API: API (不翻譯) 26 add: 加入 27 checkpoint: 檢查點 28 code: 程式碼 29 community: 社群 30 confidence: 信賴度 31 dataset: 資料集 32 documentation: 文件 33 example: 基本翻譯為「範例」,或依語意翻為「例子」 34 finetune: 微調 35 Hugging Face: Hugging Face(不翻譯) 36 implementation: 實作 37 inference: 推論 38 library: 函式庫 39 module: 模組 40 NLP/Natural Language Processing: 以 NLP 出現時不翻譯,以 Natural Language Processing 出現時翻譯為自然語言處理 41 online demos: 線上Demo 42 pipeline: pipeline(不翻譯) 43 pretrained/pretrain: 預訓練 44 Python data structures (e.g., list, set, dict): 翻譯為串列,集合,字典,並用括號標註原英文 45 repository: repository(不翻譯) 46 summary: 概覽 47 token-: token-(不翻譯) 48 Trainer: Trainer(不翻譯) 49 transformer: transformer(不翻譯) 50 tutorial: 教學 51 user: 使用者 52 --> 53 54 <p align="center"> 55 <br> 56 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 57 <br> 58 <p> 59 <p align="center"> 60 <a href="https://circleci.com/gh/huggingface/transformers"> 61 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"> 62 </a> 63 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE"> 64 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 65 </a> 66 <a href="https://huggingface.co/docs/transformers/index"> 67 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 68 </a> 69 <a href="https://github.com/huggingface/transformers/releases"> 70 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 71 </a> 72 <a href="https://github.com/huggingface/transformers/blob/master/CODE_OF_CONDUCT.md"> 73 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 74 </a> 75 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 76 </p> 77 78 <h4 align="center"> 79 <p> 80 <a href="https://github.com/huggingface/transformers/">English</a> | 81 <a href="https://github.com/huggingface/transformers/blob/master/README_zh-hans.md">简体中文</a> | 82 <b>繁體中文</b> | 83 <a href="https://github.com/huggingface/transformers/blob/master/README_ko.md">한국어</a> 84 <p> 85 </h4> 86 87 <h3 align="center"> 88 <p>為 Jax、PyTorch 以及 TensorFlow 打造的先進自然語言處理函式庫</p> 89 </h3> 90 91 <h3 align="center"> 92 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 93 </h3> 94 95 🤗 Transformers 提供了數以千計的預訓練模型,支援 100 多種語言的文本分類、資訊擷取、問答、摘要、翻譯、文本生成。它的宗旨是讓最先進的 NLP 技術人人易用。 96 97 🤗 Transformers 提供了便於快速下載和使用的API,讓你可以將預訓練模型用在給定文本、在你的資料集上微調然後經由 [model hub](https://huggingface.co/models) 與社群共享。同時,每個定義的 Python 模組架構均完全獨立,方便修改和快速研究實驗。 98 99 🤗 Transformers 支援三個最熱門的深度學習函式庫: 
[Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 並與之完美整合。你可以直接使用其中一個框架訓練你的模型,然後用另一個載入和推論。 100 101 ## 線上Demo 102 103 你可以直接在 [model hub](https://huggingface.co/models) 上測試大多數的模型。我們也提供了 [私有模型託管、模型版本管理以及推論API](https://huggingface.co/pricing)。 104 105 這裡是一些範例: 106 - [用 BERT 做遮蓋填詞](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 107 - [用 Electra 做專有名詞辨識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 108 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 109 - [用 RoBERTa 做自然語言推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 110 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 111 - [用 DistilBERT 做問答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 112 - [用 T5 做翻譯](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 113 114 **[Write With Transformer](https://transformer.huggingface.co)**,由 Hugging Face 團隊所打造,是一個文本生成的官方 demo。 115 116 ## 如果你在尋找由 Hugging Face 團隊所提供的客製化支援服務 117 118 <a target="_blank" href="https://huggingface.co/support"> 119 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 120 </a><br> 121 122 ## 快速上手 123 124 
我們為快速使用模型提供了 `pipeline` API。 Pipeline 包含了預訓練模型和對應的文本預處理。下面是一個快速使用 pipeline 去判斷正負面情緒的例子: 125 126 ```python 127 >>> from transformers import pipeline 128 129 # 使用情緒分析 pipeline 130 >>> classifier = pipeline('sentiment-analysis') 131 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 132 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 133 ``` 134 135 第二行程式碼下載並快取 pipeline 使用的預訓練模型,而第三行程式碼則在給定的文本上進行了評估。這裡的答案“正面” (positive) 具有 99.97% 的信賴度。 136 137 許多的 NLP 任務都有隨選即用的預訓練 `pipeline`。例如,我們可以輕鬆地從給定文本中擷取問題答案: 138 139 ``` python 140 >>> from transformers import pipeline 141 142 # 使用問答 pipeline 143 >>> question_answerer = pipeline('question-answering') 144 >>> question_answerer({ 145 ... 'question': 'What is the name of the repository ?', 146 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 147 ... }) 148 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 149 150 ``` 151 152 除了提供問題解答,預訓練模型還提供了對應的信賴度分數以及解答在 tokenized 後的文本中開始和結束的位置。你可以從[這個教學](https://huggingface.co/docs/transformers/task_summary)了解更多 `pipeline` API支援的任務。 153 154 要在你的任務中下載和使用任何預訓練模型很簡單,只需三行程式碼。這裡是 PyTorch 版的範例: 155 ```python 156 >>> from transformers import AutoTokenizer, AutoModel 157 158 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 159 >>> model = AutoModel.from_pretrained("bert-base-uncased") 160 161 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 162 >>> outputs = model(**inputs) 163 ``` 164 這裡是對應的 TensorFlow 程式碼: 165 ```python 166 >>> from transformers import AutoTokenizer, TFAutoModel 167 168 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 169 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 170 171 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 172 >>> outputs = model(**inputs) 173 ``` 174 175 Tokenizer 為所有的預訓練模型提供了預處理,並可以直接轉換單一字串(比如上面的例子)或串列 (list)。它會輸出一個的字典 (dict) 讓你可以在下游程式碼裡使用或直接藉由 `**` 運算式傳給模型。 176 177 模型本身是一個常規的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取決於你的後端),可依常規方式使用。 [這個教學](https://huggingface.co/transformers/training.html)解釋了如何將這樣的模型整合到一般的 PyTorch 或 TensorFlow 訓練迴圈中,或是如何使用我們的 `Trainer` API 在一個新的資料集上快速進行微調。 178 179 ## 為什麼要用 transformers? 180 181 1. 便於使用的先進模型: 182 - NLU 和 NLG 上性能卓越 183 - 對教學和實作友好且低門檻 184 - 高度抽象,使用者只須學習 3 個類別 185 - 對所有模型使用的制式化API 186 187 1. 更低的運算成本,更少的碳排放: 188 - 研究人員可以分享預訓練的模型而非從頭開始訓練 189 - 工程師可以減少計算時間以及生產成本 190 - 數十種模型架構、兩千多個預訓練模型、100多種語言支援 191 192 1. 對於模型生命週期的每一個部分都面面俱到: 193 - 訓練先進的模型,只需 3 行程式碼 194 - 模型可以在不同深度學習框架之間任意轉換 195 - 為訓練、評估和生產選擇最適合的框架,並完美銜接 196 197 1. 為你的需求輕鬆客製化專屬模型和範例: 198 - 我們為每種模型架構提供了多個範例來重現原論文結果 199 - 一致的模型內部架構 200 - 模型檔案可單獨使用,便於修改和快速實驗 201 202 ## 什麼情況下我不該用 transformers? 
203 204 - 本函式庫並不是模組化的神經網絡工具箱。模型文件中的程式碼並未做額外的抽象封裝,以便研究人員快速地翻閱及修改程式碼,而不會深陷複雜的類別包裝之中。 205 - `Trainer` API 並非相容任何模型,它只為本函式庫中的模型最佳化。對於一般的機器學習用途,請使用其他函式庫。 206 - 儘管我們已盡力而為,[examples 目錄](https://github.com/huggingface/transformers/tree/master/examples)中的腳本也僅為範例而已。對於特定問題,它們並不一定隨選即用,可能需要修改幾行程式碼以符合需求。 207 208 ## 安裝 209 210 ### 使用 pip 211 212 這個 Repository 已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下經過測試。 213 214 你可以在[虛擬環境](https://docs.python.org/3/library/venv.html)中安裝 🤗 Transformers。如果你還不熟悉 Python 的虛擬環境,請閱此[使用者指引](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。 215 216 首先,用你打算使用的版本的 Python 創建一個虛擬環境並進入。 217 218 然後,你需要安裝 Flax、PyTorch 或 TensorFlow 其中之一。對於該如何在你使用的平台上安裝這些框架,請參閱 [TensorFlow 安裝頁面](https://www.tensorflow.org/install/), [PyTorch 安裝頁面](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安裝頁面](https://github.com/google/flax#quick-install)。 219 220 當其中一個後端安裝成功後,🤗 Transformers 可依此安裝: 221 222 ```bash 223 pip install transformers 224 ``` 225 226 如果你想要試試範例或者想在正式發布前使用最新開發中的程式碼,你必須[從原始碼安裝](https://huggingface.co/docs/transformers/installation#installing-from-source)。 227 228 ### 使用 conda 229 230 自 Transformers 4.0.0 版始,我們有了一個 conda channel: `huggingface`。 231 232 🤗 Transformers 可以藉由 conda 依此安裝: 233 234 ```shell script 235 conda install -c huggingface transformers 236 ``` 237 238 要藉由 conda 安裝 Flax、PyTorch 或 TensorFlow 其中之一,請參閱它們各自安裝頁面的說明。 239 240 ## 模型架構 241 242 **🤗 Transformers 支援的[所有的模型檢查點](https://huggingface.co/models)**,由[使用者](https://huggingface.co/users)和[組織](https://huggingface.co/organizations)上傳,均與 huggingface.co [model hub](https://huggingface.co) 完美結合。 243 244 目前的檢查點數量: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 245 246 🤗 Transformers 目前支援以下的架構(模型概覽請參閱[這裡](https://huggingface.co/docs/transformers/model_summary)): 247 248 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 249 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 250 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 251 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 252 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 253 1. 
**[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 254 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 255 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 256 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 257 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 258 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 259 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 260 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 261 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 262 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 263 1. 
**[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 264 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 265 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 266 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 267 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 268 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 269 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 270 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 271 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 272 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 273 1. 
**[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT. 274 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 275 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 276 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 277 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 278 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 279 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 280 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 281 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 282 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 283 1. 
**[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released with the paper [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 284 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 285 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 286 1. **[ImageGPT](https://huggingface.co/docs/transformers/master/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 287 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 288 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 289 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 290 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 291 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 292 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 293 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 294 1. 
**[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 295 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 296 1. **[MBart](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 297 1. **[MBart-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 298 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 299 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 300 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 301 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 302 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 303 1. **[Nyströmformer](https://huggingface.co/docs/transformers/master/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 304 1. 
**[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 305 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 306 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 307 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 308 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 309 1. **[REALM](https://huggingface.co/transformers/master/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 310 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 311 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 312 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 313 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 314 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. 
Alvarez, Ping Luo. 315 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 316 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 317 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 318 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 319 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 320 1. **[SqueezeBert](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 321 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 322 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released with the paper [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 323 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 324 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 325 1. 
**[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 326 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 327 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 328 1. **[ViLT)](https://huggingface.co/docs/transformers/master/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 329 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 330 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 331 1. **[ViTMAE)](https://huggingface.co/docs/transformers/master/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 332 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 333 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/master/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 334 1. **[WavLM](https://huggingface.co/docs/transformers/master/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 335 1. 
**[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 336 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 337 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 338 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 339 1. **[XLS-R](https://huggingface.co/docs/master/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 340 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 341 1. 
想要貢獻新的模型?我們這裡有一份**詳細指引和模板**來引導你加入新的模型。你可以在 [`templates`](./templates) 目錄中找到它們。記得查看[貢獻指引](./CONTRIBUTING.md)並在開始寫 PR 前聯繫維護人員或開一個新的 issue 來獲得 feedbacks。 342 343 要檢查某個模型是否已有 Flax、PyTorch 或 TensorFlow 的實作,或其是否在🤗 Tokenizers 函式庫中有對應的 tokenizer,敬請參閱[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。 344 345 這些實作均已於多個資料集測試(請參閱範例腳本)並應與原版實作表現相當。你可以在範例文件的[此節](https://huggingface.co/docs/transformers/examples)中了解實作的細節。 346 347 348 ## 了解更多 349 350 | 章節 | 描述 | 351 |-|-| 352 | [文件](https://huggingface.co/transformers/) | 完整的 API 文件和教學 | 353 | [任務概覽](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支援的任務 | 354 | [預處理教學](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 來為模型準備資料 | 355 | [訓練和微調](https://huggingface.co/docs/transformers/training) | 使用 PyTorch/TensorFlow 的內建的訓練方式或於 `Trainer` API 中使用 🤗 Transformers 提供的模型 | 356 | [快速上手:微調和範例腳本](https://github.com/huggingface/transformers/tree/master/examples) | 為各種任務提供的範例腳本 | 357 | [模型分享和上傳](https://huggingface.co/docs/transformers/model_sharing) | 上傳並與社群分享你微調的模型 | 358 | [遷移](https://huggingface.co/docs/transformers/migration) | 從 `pytorch-transformers` 或 `pytorch-pretrained-bert` 遷移到 🤗 Transformers | 359 360 ## 引用 361 362 我們已將此函式庫的[論文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式發表。如果你使用了 🤗 Transformers 函式庫,可以引用: 363 ```bibtex 364 @inproceedings{wolf-etal-2020-transformers, 365 title = "Transformers: State-of-the-Art Natural Language Processing", 366 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 367 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 368 month = oct, 369 year = "2020", 370 address = "Online", 371 publisher = "Association for Computational Linguistics", 372 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 373 pages = "38--45" 374 } 375 ``` 376 [end of README_zh-hant.md] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
huggingface/transformers
515ed3ad2a11a6b0cd9800b2ad4d3b313fdaea8c
Addition of Swin Transformer for Computer Vision # 🌟 New model addition: Swin Transformer ## Model description Swin Transformer (the name Swin stands for Shifted window), initially described in the arXiv paper, serves as a general-purpose backbone for computer vision. It is a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while still allowing cross-window connections. Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin. ## Open source status * [x] the model implementation is available: https://github.com/microsoft/Swin-Transformer * [x] the model weights are available: https://github.com/microsoft/Swin-Transformer * [x] who are the authors: [Swin Transformer](https://arxiv.org/pdf/2103.14030.pdf) ## Possible task support The open-source implementation supports the tasks below: * Image Classification * Object Detection * Instance Segmentation * Semantic Segmentation * Video Recognition
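As a minimal sketch of how the requested model could be used once integrated, the snippet below assumes the `SwinForImageClassification` head, the `AutoFeatureExtractor` mapping, and the `microsoft/swin-tiny-patch4-window7-224` checkpoint id that appear in the patch further below; until that PR is merged these names are illustrative, not an existing API.

```python
import requests
import torch
from PIL import Image

from transformers import AutoFeatureExtractor, SwinForImageClassification

# Checkpoint name taken from the conversion script in the patch below (hypothetical until merged).
checkpoint = "microsoft/swin-tiny-patch4-window7-224"

feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = SwinForImageClassification.from_pretrained(checkpoint)

# Standard COCO test image used throughout the transformers docs.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Resize/normalize the image and run a single forward pass.
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to a human-readable ImageNet label.
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```

This mirrors the usage pattern of the existing ViT classification models, which is also the pattern the conversion script in the patch relies on.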
Maybe of interest to @NielsRogge Hello, I would like to work on adding Swin. I will put out a PR sometime soon. Hey @novice03, thanks for your effort! I believe that @FrancescoSaverioZuppichini is in the process of adding the `Mask2Former` model which depends on Swin, so he's probably working on that too. I'll let him share more about his work.
2022-01-10T08:38:01Z
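Among other changes, the patch below adds a `convert_swin_timm_to_pytorch.py` script that ports timm Swin checkpoints into the new `SwinForImageClassification` class. A minimal sketch of driving that converter from Python, assuming the module path, function name, and default timm model name defined in the patch (the output directory is an arbitrary placeholder, and `timm` plus hub access are required):

```python
# Hypothetical driver for the conversion script added by the patch below.
# Assumes the module is importable at this path once the PR is merged.
from transformers.models.swin.convert_swin_timm_to_pytorch import convert_swin_checkpoint

# "swin_tiny_patch4_window7_224" is the script's default timm model name;
# the dump folder is a placeholder chosen for this example.
convert_swin_checkpoint(
    swin_name="swin_tiny_patch4_window7_224",
    pytorch_dump_folder_path="./swin-tiny-patch4-window7-224",
)
```

The same conversion can be run from the command line with the `--swin_name` and `--pytorch_dump_folder_path` arguments that the script's argparse section defines.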
<patch> diff --git a/src/transformers/__init__.py b/src/transformers/__init__.py --- a/src/transformers/__init__.py +++ b/src/transformers/__init__.py @@ -287,6 +287,7 @@ ], "models.splinter": ["SPLINTER_PRETRAINED_CONFIG_ARCHIVE_MAP", "SplinterConfig", "SplinterTokenizer"], "models.squeezebert": ["SQUEEZEBERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "SqueezeBertConfig", "SqueezeBertTokenizer"], + "models.swin": ["SWIN_PRETRAINED_CONFIG_ARCHIVE_MAP", "SwinConfig"], "models.t5": ["T5_PRETRAINED_CONFIG_ARCHIVE_MAP", "T5Config"], "models.tapas": ["TAPAS_PRETRAINED_CONFIG_ARCHIVE_MAP", "TapasConfig", "TapasTokenizer"], "models.transfo_xl": [ @@ -1338,6 +1339,14 @@ "SqueezeBertPreTrainedModel", ] ) + _import_structure["models.swin"].extend( + [ + "SWIN_PRETRAINED_MODEL_ARCHIVE_LIST", + "SwinForImageClassification", + "SwinModel", + "SwinPreTrainedModel", + ] + ) _import_structure["models.t5"].extend( [ "T5_PRETRAINED_MODEL_ARCHIVE_LIST", @@ -2412,6 +2421,7 @@ ) from .models.splinter import SPLINTER_PRETRAINED_CONFIG_ARCHIVE_MAP, SplinterConfig, SplinterTokenizer from .models.squeezebert import SQUEEZEBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, SqueezeBertConfig, SqueezeBertTokenizer + from .models.swin import SWIN_PRETRAINED_CONFIG_ARCHIVE_MAP, SwinConfig from .models.t5 import T5_PRETRAINED_CONFIG_ARCHIVE_MAP, T5Config from .models.tapas import TAPAS_PRETRAINED_CONFIG_ARCHIVE_MAP, TapasConfig, TapasTokenizer from .models.transfo_xl import ( @@ -3282,6 +3292,12 @@ SqueezeBertModule, SqueezeBertPreTrainedModel, ) + from .models.swin import ( + SWIN_PRETRAINED_MODEL_ARCHIVE_LIST, + SwinForImageClassification, + SwinModel, + SwinPreTrainedModel, + ) from .models.t5 import ( T5_PRETRAINED_MODEL_ARCHIVE_LIST, T5EncoderModel, diff --git a/src/transformers/models/__init__.py b/src/transformers/models/__init__.py --- a/src/transformers/models/__init__.py +++ b/src/transformers/models/__init__.py @@ -98,6 +98,7 @@ speech_to_text_2, splinter, squeezebert, + swin, t5, tapas, transfo_xl, diff --git a/src/transformers/models/auto/configuration_auto.py b/src/transformers/models/auto/configuration_auto.py --- a/src/transformers/models/auto/configuration_auto.py +++ b/src/transformers/models/auto/configuration_auto.py @@ -30,6 +30,7 @@ CONFIG_MAPPING_NAMES = OrderedDict( [ # Add configs here + ("swin", "SwinConfig"), ("vilt", "ViltConfig"), ("vit_mae", "ViTMAEConfig"), ("realm", "RealmConfig"), @@ -120,6 +121,7 @@ CONFIG_ARCHIVE_MAP_MAPPING_NAMES = OrderedDict( [ # Add archive maps here + ("swin", "SWIN_PRETRAINED_CONFIG_ARCHIVE_MAP"), ("vilt", "VILT_PRETRAINED_CONFIG_ARCHIVE_MAP"), ("vit_mae", "VIT_MAE_PRETRAINED_CONFIG_ARCHIVE_MAP"), ("realm", "REALM_PRETRAINED_CONFIG_ARCHIVE_MAP"), @@ -198,6 +200,7 @@ MODEL_NAMES_MAPPING = OrderedDict( [ # Add full (and cased) model names here + ("swin", "Swin"), ("vilt", "ViLT"), ("vit_mae", "ViTMAE"), ("realm", "Realm"), diff --git a/src/transformers/models/auto/feature_extraction_auto.py b/src/transformers/models/auto/feature_extraction_auto.py --- a/src/transformers/models/auto/feature_extraction_auto.py +++ b/src/transformers/models/auto/feature_extraction_auto.py @@ -44,6 +44,7 @@ ("layoutlmv2", "LayoutLMv2FeatureExtractor"), ("clip", "CLIPFeatureExtractor"), ("perceiver", "PerceiverFeatureExtractor"), + ("swin", "ViTFeatureExtractor"), ("vit_mae", "ViTFeatureExtractor"), ] ) diff --git a/src/transformers/models/auto/modeling_auto.py b/src/transformers/models/auto/modeling_auto.py --- a/src/transformers/models/auto/modeling_auto.py +++ b/src/transformers/models/auto/modeling_auto.py 
@@ -28,6 +28,7 @@ MODEL_MAPPING_NAMES = OrderedDict( [ # Base model mapping + ("swin", "SwinModel"), ("vilt", "ViltModel"), ("vit_mae", "ViTMAEModel"), ("nystromformer", "NystromformerModel"), @@ -263,6 +264,7 @@ "PerceiverForImageClassificationConvProcessing", ), ), + ("swin", "SwinForImageClassification"), ] ) diff --git a/src/transformers/models/swin/__init__.py b/src/transformers/models/swin/__init__.py new file mode 100644 --- /dev/null +++ b/src/transformers/models/swin/__init__.py @@ -0,0 +1,53 @@ +# flake8: noqa +# There's no way to ignore "F401 '...' imported but unused" warnings in this +# module, but to preserve other warnings. So, don't check this module at all. + +# Copyright 2022 The HuggingFace Team. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +from typing import TYPE_CHECKING + +# rely on isort to merge the imports +from ...file_utils import _LazyModule, is_flax_available, is_tf_available, is_torch_available, is_vision_available + + +_import_structure = { + "configuration_swin": ["SWIN_PRETRAINED_CONFIG_ARCHIVE_MAP", "SwinConfig"], +} + + +if is_torch_available(): + _import_structure["modeling_swin"] = [ + "SWIN_PRETRAINED_MODEL_ARCHIVE_LIST", + "SwinForImageClassification", + "SwinModel", + "SwinPreTrainedModel", + ] + + +if TYPE_CHECKING: + from .configuration_swin import SWIN_PRETRAINED_CONFIG_ARCHIVE_MAP, SwinConfig + + if is_torch_available(): + from .modeling_swin import ( + SWIN_PRETRAINED_MODEL_ARCHIVE_LIST, + SwinForImageClassification, + SwinModel, + SwinPreTrainedModel, + ) + + +else: + import sys + + sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) diff --git a/src/transformers/models/swin/configuration_swin.py b/src/transformers/models/swin/configuration_swin.py new file mode 100644 --- /dev/null +++ b/src/transformers/models/swin/configuration_swin.py @@ -0,0 +1,132 @@ +# coding=utf-8 +# Copyright 2022 The HuggingFace Inc. team. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+""" Swin Transformer model configuration""" + +from ...configuration_utils import PretrainedConfig +from ...utils import logging + + +logger = logging.get_logger(__name__) + +SWIN_PRETRAINED_CONFIG_ARCHIVE_MAP = { + "microsoft/swin-tiny-patch4-window7-224": "https://huggingface.co/microsoft/swin-tiny-patch4-window7-224/resolve/main/config.json", + # See all Swin models at https://huggingface.co/models?filter=swin +} + + +class SwinConfig(PretrainedConfig): + r""" + This is the configuration class to store the configuration of a [`SwinModel`]. It is used to instantiate a Swin + model according to the specified arguments, defining the model architecture. Instantiating a configuration with the + defaults will yield a similar configuration to that of the Swin + [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) + architecture. + + Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the + documentation from [`PretrainedConfig`] for more information. + + Args: + image_size (`int`, *optional*, defaults to 224): + The size (resolution) of each image. + patch_size (`int`, *optional*, defaults to 4): + The size (resolution) of each patch. + num_channels (`int`, *optional*, defaults to 3): + The number of input channels. + embed_dim (`int`, *optional*, defaults to 96): + Dimensionality of patch embedding. + depths (`list(int)`, *optional*, defaults to [2, 2, 6, 2]): + Depth of each layer in the Transformer encoder. + num_heads (`list(int)`, *optional*, defaults to [3, 6, 12, 24]): + Number of attention heads in each layer of the Transformer encoder. + window_size (`int`, *optional*, defaults to 7): + Size of windows. + mlp_ratio (`float`, *optional*, defaults to 4.0): + Ratio of MLP hidden dimesionality to embedding dimensionality. + qkv_bias (`bool`, *optional*, defaults to True): + Whether or not a learnable bias should be added to the queries, keys and values. + hidden_dropout_prob (`float`, *optional*, defaults to 0.0): + The dropout probability for all fully connected layers in the embeddings and encoder. + attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): + The dropout ratio for the attention probabilities. + drop_path_rate (`float`, *optional*, defaults to 0.1): + Stochastic depth rate. + hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): + The non-linear activation function (function or string) in the encoder. If string, `"gelu"`, `"relu"`, + `"selu"` and `"gelu_new"` are supported. + use_absolute_embeddings (`bool`, *optional*, defaults to False): + Whether or not to add absolute position embeddings to the patch embeddings. + patch_norm (`bool`, *optional*, defaults to True): + Whether or not to add layer normalization after patch embedding. + initializer_range (`float`, *optional*, defaults to 0.02): + The standard deviation of the truncated_normal_initializer for initializing all weight matrices. + layer_norm_eps (`float`, *optional*, defaults to 1e-12): + The epsilon used by the layer normalization layers. 
+ + Example: + + ```python + >>> from transformers import SwinModel, SwinConfig + + >>> # Initializing a Swin microsoft/swin-tiny-patch4-window7-224 style configuration + >>> configuration = SwinConfig() + + >>> # Initializing a model from the microsoft/swin-tiny-patch4-window7-224 style configuration + >>> model = SwinModel(configuration) + + >>> # Accessing the model configuration + >>> configuration = model.config + ```""" + model_type = "swin" + + def __init__( + self, + image_size=224, + patch_size=4, + num_channels=3, + embed_dim=96, + depths=[2, 2, 6, 2], + num_heads=[3, 6, 12, 24], + window_size=7, + mlp_ratio=4.0, + qkv_bias=True, + hidden_dropout_prob=0.0, + attention_probs_dropout_prob=0.0, + drop_path_rate=0.1, + hidden_act="gelu", + use_absolute_embeddings=False, + patch_norm=True, + initializer_range=0.02, + layer_norm_eps=1e-5, + **kwargs + ): + super().__init__(**kwargs) + + self.image_size = image_size + self.patch_size = patch_size + self.num_channels = num_channels + self.embed_dim = embed_dim + self.depths = depths + self.num_heads = num_heads + self.window_size = window_size + self.mlp_ratio = mlp_ratio + self.qkv_bias = qkv_bias + self.hidden_dropout_prob = hidden_dropout_prob + self.attention_probs_dropout_prob = attention_probs_dropout_prob + self.drop_path_rate = drop_path_rate + self.hidden_act = hidden_act + self.use_absolute_embeddings = use_absolute_embeddings + self.path_norm = patch_norm + self.layer_norm_eps = layer_norm_eps + self.initializer_range = initializer_range diff --git a/src/transformers/models/swin/convert_swin_timm_to_pytorch.py b/src/transformers/models/swin/convert_swin_timm_to_pytorch.py new file mode 100644 --- /dev/null +++ b/src/transformers/models/swin/convert_swin_timm_to_pytorch.py @@ -0,0 +1,173 @@ +import argparse +import json + +import torch +from PIL import Image + +import requests +import timm +from huggingface_hub import cached_download, hf_hub_url +from transformers import AutoFeatureExtractor, SwinConfig, SwinForImageClassification + + +def get_swin_config(swin_name): + config = SwinConfig() + name_split = swin_name.split("_") + + model_size = name_split[1] + img_size = int(name_split[4]) + window_size = int(name_split[3][-1]) + + if model_size == "tiny": + embed_dim = 96 + depths = (2, 2, 6, 2) + num_heads = (3, 6, 12, 24) + elif model_size == "small": + embed_dim = 96 + depths = (2, 2, 18, 2) + num_heads = (3, 6, 12, 24) + elif model_size == "base": + embed_dim = 128 + depths = (2, 2, 18, 2) + num_heads = (4, 8, 16, 32) + else: + embed_dim = 192 + depths = (2, 2, 18, 2) + num_heads = (6, 12, 24, 48) + + if "in22k" in swin_name: + num_classes = 21841 + else: + num_classes = 1000 + repo_id = "datasets/huggingface/label-files" + filename = "imagenet-1k-id2label.json" + id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename)), "r")) + id2label = {int(k): v for k, v in id2label.items()} + config.id2label = id2label + config.label2id = {v: k for k, v in id2label.items()} + + config.image_size = img_size + config.num_labels = num_classes + config.embed_dim = embed_dim + config.depths = depths + config.num_heads = num_heads + config.window_size = window_size + + return config + + +def rename_key(name): + if "patch_embed.proj" in name: + name = name.replace("patch_embed.proj", "embeddings.patch_embeddings.projection") + if "patch_embed.norm" in name: + name = name.replace("patch_embed.norm", "embeddings.norm") + if "layers" in name: + name = "encoder." 
+ name + if "attn.proj" in name: + name = name.replace("attn.proj", "attention.output.dense") + if "attn" in name: + name = name.replace("attn", "attention.self") + if "norm1" in name: + name = name.replace("norm1", "layernorm_before") + if "norm2" in name: + name = name.replace("norm2", "layernorm_after") + if "mlp.fc1" in name: + name = name.replace("mlp.fc1", "intermediate.dense") + if "mlp.fc2" in name: + name = name.replace("mlp.fc2", "output.dense") + + if name == "norm.weight": + name = "layernorm.weight" + if name == "norm.bias": + name = "layernorm.bias" + + if "head" in name: + name = name.replace("head", "classifier") + else: + name = "swin." + name + + return name + + +def convert_state_dict(orig_state_dict, model): + for key in orig_state_dict.copy().keys(): + val = orig_state_dict.pop(key) + + if "mask" in key: + continue + elif "qkv" in key: + key_split = key.split(".") + layer_num = int(key_split[1]) + block_num = int(key_split[3]) + dim = model.swin.encoder.layers[layer_num].blocks[block_num].attention.self.all_head_size + + if "weight" in key: + orig_state_dict[ + f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.query.weight" + ] = val[:dim, :] + orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.key.weight"] = val[ + dim : dim * 2, : + ] + orig_state_dict[ + f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.value.weight" + ] = val[-dim:, :] + else: + orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.query.bias"] = val[ + :dim + ] + orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.key.bias"] = val[ + dim : dim * 2 + ] + orig_state_dict[f"swin.encoder.layers.{layer_num}.blocks.{block_num}.attention.self.value.bias"] = val[ + -dim: + ] + else: + orig_state_dict[rename_key(key)] = val + + return orig_state_dict + + +def convert_swin_checkpoint(swin_name, pytorch_dump_folder_path): + timm_model = timm.create_model(swin_name, pretrained=True) + timm_model.eval() + + config = get_swin_config(swin_name) + model = SwinForImageClassification(config) + model.eval() + + new_state_dict = convert_state_dict(timm_model.state_dict(), model) + model.load_state_dict(new_state_dict) + + url = "http://images.cocodataset.org/val2017/000000039769.jpg" + + feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/{}".format(swin_name.replace("_", "-"))) + image = Image.open(requests.get(url, stream=True).raw) + inputs = feature_extractor(images=image, return_tensors="pt") + + timm_outs = timm_model(inputs["pixel_values"]) + hf_outs = model(**inputs).logits + + assert torch.allclose(timm_outs, hf_outs, atol=1e-3) + + print(f"Saving model {swin_name} to {pytorch_dump_folder_path}") + model.save_pretrained(pytorch_dump_folder_path) + + print(f"Saving feature extractor to {pytorch_dump_folder_path}") + feature_extractor.save_pretrained(pytorch_dump_folder_path) + + +if __name__ == "__main__": + parser = argparse.ArgumentParser() + # Required parameters + parser.add_argument( + "--swin_name", + default="swin_tiny_patch4_window7_224", + type=str, + help="Name of the Swin timm model you'd like to convert.", + ) + parser.add_argument( + "--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model directory." 
+ ) + + args = parser.parse_args() + convert_swin_checkpoint(args.swin_name, args.pytorch_dump_folder_path) diff --git a/src/transformers/models/swin/modeling_swin.py b/src/transformers/models/swin/modeling_swin.py new file mode 100644 --- /dev/null +++ b/src/transformers/models/swin/modeling_swin.py @@ -0,0 +1,862 @@ +# coding=utf-8 +# Copyright 2022 Microsoft Research and The HuggingFace Inc. team. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +""" PyTorch Swin Transformer model.""" + + +import collections.abc +import math + +import torch +import torch.utils.checkpoint +from torch import nn +from torch.nn import CrossEntropyLoss, MSELoss + +from ...activations import ACT2FN +from ...file_utils import add_start_docstrings, add_start_docstrings_to_model_forward, replace_return_docstrings +from ...modeling_outputs import BaseModelOutput, SequenceClassifierOutput +from ...modeling_utils import PreTrainedModel, find_pruneable_heads_and_indices, prune_linear_layer +from ...utils import logging +from .configuration_swin import SwinConfig + + +logger = logging.get_logger(__name__) + +_CHECKPOINT_FOR_DOC = "microsoft/swin-tiny-patch4-window7-224" +_CONFIG_FOR_DOC = "SwinConfig" + +SWIN_PRETRAINED_MODEL_ARCHIVE_LIST = [ + "microsoft/swin-tiny-patch4-window7-224", + # See all Swin models at https://huggingface.co/models?filter=swin +] + + +# to_2tuple, drop_path, SwinPatchEmbeddings, SwinPatchMerging and SwinDropPath are from the timm library. + + +# Copied from transformers.models.vit.modeling_vit.to_2tuple +def to_2tuple(x): + if isinstance(x, collections.abc.Iterable): + return x + return (x, x) + + +def window_partition(input_feature, window_size): + """ + Partitions the given input into windows. + """ + batch_size, height, width, num_channels = input_feature.shape + input_feature = input_feature.view( + batch_size, height // window_size, window_size, width // window_size, window_size, num_channels + ) + windows = input_feature.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, num_channels) + return windows + + +def window_reverse(windows, window_size, height, width): + """ + Merges windows to produce higher resolution features. + """ + batch_size = int(windows.shape[0] / (height * width / window_size / window_size)) + windows = windows.view(batch_size, height // window_size, width // window_size, window_size, window_size, -1) + windows = windows.permute(0, 1, 3, 2, 4, 5).contiguous().view(batch_size, height, width, -1) + return windows + + +def drop_path(input, drop_prob=0.0, training=False, scale_by_keep=True): + """ + Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). 
+ """ + if drop_prob == 0.0 or not training: + return input + keep_prob = 1 - drop_prob + shape = (input.shape[0],) + (1,) * (input.ndim - 1) # work with diff dim tensors, not just 2D ConvNets + random_tensor = input.new_empty(shape).bernoulli_(keep_prob) + if keep_prob > 0.0 and scale_by_keep: + random_tensor.div_(keep_prob) + return input * random_tensor + + +class SwinEmbeddings(nn.Module): + """ + Construct the patch and position embeddings. + """ + + def __init__(self, config): + super().__init__() + + self.patch_embeddings = SwinPatchEmbeddings( + image_size=config.image_size, + patch_size=config.patch_size, + num_channels=config.num_channels, + embed_dim=config.embed_dim, + ) + num_patches = self.patch_embeddings.num_patches + self.patch_grid = self.patch_embeddings.grid_size + + if config.use_absolute_embeddings: + self.position_embeddings = nn.Parameter(torch.zeros(1, num_patches + 1, config.embed_dim)) + else: + self.position_embeddings = None + + self.norm = nn.LayerNorm(config.embed_dim) + self.dropout = nn.Dropout(config.hidden_dropout_prob) + + def forward(self, pixel_values): + embeddings = self.patch_embeddings(pixel_values) + embeddings = self.norm(embeddings) + + if self.position_embeddings is not None: + embeddings = embeddings + self.position_embeddings + + embeddings = self.dropout(embeddings) + + return embeddings + + +class SwinPatchEmbeddings(nn.Module): + """ + Image to Patch Embedding. + """ + + def __init__(self, image_size=224, patch_size=16, num_channels=3, embed_dim=768): + super().__init__() + image_size = to_2tuple(image_size) + patch_size = to_2tuple(patch_size) + num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) + self.image_size = image_size + self.patch_size = patch_size + self.num_patches = num_patches + self.grid_size = (image_size[0] // patch_size[0], image_size[1] // patch_size[1]) + + self.projection = nn.Conv2d(num_channels, embed_dim, kernel_size=patch_size, stride=patch_size) + + def forward(self, pixel_values): + pixel_values = self.projection(pixel_values).flatten(2).transpose(1, 2) + return pixel_values + + +class SwinPatchMerging(nn.Module): + """ + Patch Merging Layer. + + Args: + input_resolution (`Tuple[int]`): + Resolution of input feature. + dim (`int`): + Number of input channels. + norm_layer (`nn.Module`, *optional*, defaults to `nn.LayerNorm`): + Normalization layer class. 
+ """ + + def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): + super().__init__() + self.input_resolution = input_resolution + self.dim = dim + self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) + self.norm = norm_layer(4 * dim) + + def forward(self, input_feature): + height, width = self.input_resolution + # `dim` is height * width + batch_size, dim, num_channels = input_feature.shape + + input_feature = input_feature.view(batch_size, height, width, num_channels) + + input_feature_0 = input_feature[:, 0::2, 0::2, :] # batch_size height/2 width/2 num_channels + input_feature_1 = input_feature[:, 1::2, 0::2, :] # batch_size height/2 width/2 num_channels + input_feature_2 = input_feature[:, 0::2, 1::2, :] # batch_size height/2 width/2 num_channels + input_feature_3 = input_feature[:, 1::2, 1::2, :] # batch_size height/2 width/2 num_channels + # batch_size height/2 width/2 4*num_channels + input_feature = torch.cat([input_feature_0, input_feature_1, input_feature_2, input_feature_3], -1) + input_feature = input_feature.view(batch_size, -1, 4 * num_channels) # batch_size height/2*width/2 4*C + + input_feature = self.norm(input_feature) + input_feature = self.reduction(input_feature) + + return input_feature + + +class SwinDropPath(nn.Module): + """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" + + def __init__(self, drop_prob=None, scale_by_keep=True): + super(SwinDropPath, self).__init__() + self.drop_prob = drop_prob + self.scale_by_keep = scale_by_keep + + def forward(self, input): + return drop_path(input, self.drop_prob, self.training, self.scale_by_keep) + + +class SwinSelfAttention(nn.Module): + def __init__(self, config, dim, num_heads): + super().__init__() + if dim % num_heads != 0: + raise ValueError( + f"The hidden size ({dim}) is not a multiple of the number of attention " f"heads ({num_heads})" + ) + + self.num_attention_heads = num_heads + self.attention_head_size = int(dim / num_heads) + self.all_head_size = self.num_attention_heads * self.attention_head_size + self.window_size = to_2tuple(config.window_size) + + self.relative_position_bias_table = nn.Parameter( + torch.zeros((2 * self.window_size[0] - 1) * (2 * self.window_size[1] - 1), num_heads) + ) + + # get pair-wise relative position index for each token inside the window + coords_h = torch.arange(self.window_size[0]) + coords_w = torch.arange(self.window_size[1]) + coords = torch.stack(torch.meshgrid([coords_h, coords_w])) + coords_flatten = torch.flatten(coords, 1) + relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] + relative_coords = relative_coords.permute(1, 2, 0).contiguous() + relative_coords[:, :, 0] += self.window_size[0] - 1 + relative_coords[:, :, 1] += self.window_size[1] - 1 + relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 + relative_position_index = relative_coords.sum(-1) + self.register_buffer("relative_position_index", relative_position_index) + + self.query = nn.Linear(self.all_head_size, self.all_head_size, bias=config.qkv_bias) + self.key = nn.Linear(self.all_head_size, self.all_head_size, bias=config.qkv_bias) + self.value = nn.Linear(self.all_head_size, self.all_head_size, bias=config.qkv_bias) + + self.dropout = nn.Dropout(config.attention_probs_dropout_prob) + + def transpose_for_scores(self, x): + new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) + x = x.view(*new_x_shape) + return x.permute(0, 2, 1, 3) + + def forward( + self, + hidden_states, + 
attention_mask=None, + head_mask=None, + output_attentions=False, + ): + batch_size, dim, num_channels = hidden_states.shape + mixed_query_layer = self.query(hidden_states) + + key_layer = self.transpose_for_scores(self.key(hidden_states)) + value_layer = self.transpose_for_scores(self.value(hidden_states)) + query_layer = self.transpose_for_scores(mixed_query_layer) + + # Take the dot product between "query" and "key" to get the raw attention scores. + attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) + + attention_scores = attention_scores / math.sqrt(self.attention_head_size) + + relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)] + relative_position_bias = relative_position_bias.view( + self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1 + ) + + relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() + attention_scores = attention_scores + relative_position_bias.unsqueeze(0) + + if attention_mask is not None: + # Apply the attention mask is (precomputed for all layers in SwinModel forward() function) + mask_shape = attention_mask.shape[0] + attention_scores = attention_scores.view( + batch_size // mask_shape, mask_shape, self.num_attention_heads, dim, dim + ) + attention_scores = attention_scores + attention_mask.unsqueeze(1).unsqueeze(0) + attention_scores = attention_scores.view(-1, self.num_attention_heads, dim, dim) + + # Normalize the attention scores to probabilities. + attention_probs = nn.functional.softmax(attention_scores, dim=-1) + + # This is actually dropping out entire tokens to attend to, which might + # seem a bit unusual, but is taken from the original Transformer paper. + attention_probs = self.dropout(attention_probs) + + # Mask heads if we want to + if head_mask is not None: + attention_probs = attention_probs * head_mask + + context_layer = torch.matmul(attention_probs, value_layer) + context_layer = context_layer.permute(0, 2, 1, 3).contiguous() + new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) + context_layer = context_layer.view(*new_context_layer_shape) + + outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) + + return outputs + + +class SwinSelfOutput(nn.Module): + def __init__(self, config, dim): + super().__init__() + self.dense = nn.Linear(dim, dim) + self.dropout = nn.Dropout(config.attention_probs_dropout_prob) + + def forward(self, hidden_states, input_tensor): + hidden_states = self.dense(hidden_states) + hidden_states = self.dropout(hidden_states) + + return hidden_states + + +class SwinAttention(nn.Module): + def __init__(self, config, dim, num_heads): + super().__init__() + self.self = SwinSelfAttention(config, dim, num_heads) + self.output = SwinSelfOutput(config, dim) + self.pruned_heads = set() + + def prune_heads(self, heads): + if len(heads) == 0: + return + heads, index = find_pruneable_heads_and_indices( + heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads + ) + + # Prune linear layers + self.self.query = prune_linear_layer(self.self.query, index) + self.self.key = prune_linear_layer(self.self.key, index) + self.self.value = prune_linear_layer(self.self.value, index) + self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) + + # Update hyper params and store pruned heads + self.self.num_attention_heads = self.self.num_attention_heads - len(heads) + self.self.all_head_size = self.self.attention_head_size 
* self.self.num_attention_heads + self.pruned_heads = self.pruned_heads.union(heads) + + def forward(self, hidden_states, attention_mask=None, head_mask=None, output_attentions=False): + self_outputs = self.self(hidden_states, attention_mask, head_mask, output_attentions) + attention_output = self.output(self_outputs[0], hidden_states) + outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them + return outputs + + +class SwinIntermediate(nn.Module): + def __init__(self, config, dim): + super().__init__() + self.dense = nn.Linear(dim, int(config.mlp_ratio * dim)) + if isinstance(config.hidden_act, str): + self.intermediate_act_fn = ACT2FN[config.hidden_act] + else: + self.intermediate_act_fn = config.hidden_act + + def forward(self, hidden_states): + hidden_states = self.dense(hidden_states) + hidden_states = self.intermediate_act_fn(hidden_states) + return hidden_states + + +class SwinOutput(nn.Module): + def __init__(self, config, dim): + super().__init__() + self.dense = nn.Linear(int(config.mlp_ratio * dim), dim) + self.dropout = nn.Dropout(config.hidden_dropout_prob) + + def forward(self, hidden_states): + hidden_states = self.dense(hidden_states) + hidden_states = self.dropout(hidden_states) + return hidden_states + + +class SwinBlock(nn.Module): + def __init__(self, config, dim, input_resolution, num_heads, shift_size=0): + super().__init__() + self.chunk_size_feed_forward = config.chunk_size_feed_forward + self.shift_size = shift_size + self.window_size = config.window_size + self.input_resolution = input_resolution + + if min(self.input_resolution) <= self.window_size: + # if window size is larger than input resolution, we don't partition windows + self.shift_size = 0 + self.window_size = min(self.input_resolution) + + self.layernorm_before = nn.LayerNorm(dim, eps=config.layer_norm_eps) + self.attention = SwinAttention(config, dim, num_heads) + self.drop_path = SwinDropPath(config.drop_path_rate) if config.drop_path_rate > 0.0 else nn.Identity() + self.layernorm_after = nn.LayerNorm(dim, eps=config.layer_norm_eps) + self.intermediate = SwinIntermediate(config, dim) + self.output = SwinOutput(config, dim) + + if self.shift_size > 0: + # calculate attention mask for SW-MSA + height, width = self.input_resolution + img_mask = torch.zeros((1, height, width, 1)) + height_slices = ( + slice(0, -self.window_size), + slice(-self.window_size, -self.shift_size), + slice(-self.shift_size, None), + ) + width_slices = ( + slice(0, -self.window_size), + slice(-self.window_size, -self.shift_size), + slice(-self.shift_size, None), + ) + count = 0 + for height_slice in height_slices: + for width_slice in width_slices: + img_mask[:, height_slice, width_slice, :] = count + count += 1 + + mask_windows = window_partition(img_mask, self.window_size) + mask_windows = mask_windows.view(-1, self.window_size * self.window_size) + attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) + attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) + else: + attn_mask = None + + self.attn_mask = attn_mask + + def forward(self, hidden_states, head_mask=None, output_attentions=False): + height, width = self.input_resolution + batch_size, dim, channels = hidden_states.size() + shortcut = hidden_states + + hidden_states = self.layernorm_before(hidden_states) + hidden_states = hidden_states.view(batch_size, height, width, channels) + + # cyclic shift + if self.shift_size > 0: + shifted_hidden_states = torch.roll(hidden_states, 
shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) + else: + shifted_hidden_states = hidden_states + + # partition windows + hidden_states_windows = window_partition(shifted_hidden_states, self.window_size) + hidden_states_windows = hidden_states_windows.view(-1, self.window_size * self.window_size, channels) + + self_attention_outputs = self.attention( + hidden_states_windows, + self.attn_mask, + head_mask, + output_attentions=output_attentions, + ) + + attention_output = self_attention_outputs[0] + + outputs = self_attention_outputs[1:] # add self attentions if we output attention weights + + attention_windows = attention_output.view(-1, self.window_size, self.window_size, channels) + shifted_windows = window_reverse(attention_windows, self.window_size, height, width) # B H' W' C + + # reverse cyclic shift + if self.shift_size > 0: + attention_windows = torch.roll(shifted_windows, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) + else: + attention_windows = shifted_windows + + attention_windows = attention_windows.view(batch_size, height * width, channels) + + hidden_states = shortcut + self.drop_path(attention_windows) + + layer_output = self.layernorm_after(hidden_states) + layer_output = self.intermediate(layer_output) + layer_output = hidden_states + self.output(layer_output) + + outputs = (layer_output,) + outputs + + return outputs + + +class SwinLayer(nn.Module): + def __init__(self, config, dim, input_resolution, depth, num_heads, drop_path, downsample): + super().__init__() + self.config = config + self.dim = dim + self.blocks = nn.ModuleList( + [ + SwinBlock( + config=config, + dim=dim, + input_resolution=input_resolution, + num_heads=num_heads, + shift_size=0 if (i % 2 == 0) else config.window_size // 2, + ) + for i in range(depth) + ] + ) + + # patch merging layer + if downsample is not None: + self.downsample = downsample(input_resolution, dim=dim, norm_layer=nn.LayerNorm) + else: + self.downsample = None + + self.pointing = False + + def forward(self, hidden_states, head_mask=None, output_attentions=False, output_hidden_states=False): + all_hidden_states = () if output_hidden_states else None + + for i, block_module in enumerate(self.blocks): + if output_hidden_states: + all_hidden_states = all_hidden_states + (hidden_states,) + + layer_head_mask = head_mask[i] if head_mask is not None else None + + layer_outputs = block_module( + hidden_states, + layer_head_mask, + output_attentions, + ) + + hidden_states = layer_outputs[0] + + if self.downsample is not None: + layer_outputs_list = list(layer_outputs) + layer_outputs_list[0] = self.downsample(layer_outputs[0]) + layer_outputs = tuple(layer_outputs_list) + + return layer_outputs + + +class SwinEncoder(nn.Module): + def __init__(self, config, grid_size): + super().__init__() + self.num_layers = len(config.depths) + self.config = config + dpr = [x.item() for x in torch.linspace(0, config.drop_path_rate, sum(config.depths))] + self.layers = nn.ModuleList( + [ + SwinLayer( + config=config, + dim=int(config.embed_dim * 2 ** i_layer), + input_resolution=(grid_size[0] // (2 ** i_layer), grid_size[1] // (2 ** i_layer)), + depth=config.depths[i_layer], + num_heads=config.num_heads[i_layer], + drop_path=dpr[sum(config.depths[:i_layer]) : sum(config.depths[: i_layer + 1])], + downsample=SwinPatchMerging if (i_layer < self.num_layers - 1) else None, + ) + for i_layer in range(self.num_layers) + ] + ) + + self.gradient_checkpointing = False + + def forward( + self, + hidden_states, + head_mask=None, + 
output_attentions=False, + output_hidden_states=False, + return_dict=True, + ): + all_hidden_states = () if output_hidden_states else None + all_self_attentions = () if output_attentions else None + + for i, layer_module in enumerate(self.layers): + if output_hidden_states: + all_hidden_states = all_hidden_states + (hidden_states,) + + layer_head_mask = head_mask[i] if head_mask is not None else None + + if self.gradient_checkpointing and self.training: + + def create_custom_forward(module): + def custom_forward(*inputs): + return module(*inputs, output_attentions) + + return custom_forward + + layer_outputs = torch.utils.checkpoint.checkpoint( + create_custom_forward(layer_module), hidden_states, layer_head_mask + ) + else: + layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions) + + hidden_states = layer_outputs[0] + if output_attentions: + all_self_attentions = all_self_attentions + (layer_outputs[1],) + + if output_hidden_states: + all_hidden_states = all_hidden_states + (hidden_states,) + + if not return_dict: + return tuple( + v + for v in [ + hidden_states, + all_hidden_states, + all_self_attentions, + ] + if v is not None + ) + + return BaseModelOutput( + last_hidden_state=hidden_states, + hidden_states=all_hidden_states, + attentions=all_self_attentions, + ) + + +class SwinPreTrainedModel(PreTrainedModel): + """ + An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained + models. + """ + + config_class = SwinConfig + base_model_prefix = "swin" + main_input_name = "pixel_values" + supports_gradient_checkpointing = True + + def _init_weights(self, module): + """Initialize the weights""" + if isinstance(module, nn.Linear): + # Slightly different from the TF version which uses truncated_normal for initialization + # cf https://github.com/pytorch/pytorch/pull/5617 + module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) + if module.bias is not None: + module.bias.data.zero_() + elif isinstance(module, nn.LayerNorm): + module.bias.data.zero_() + module.weight.data.fill_(1.0) + + def _set_gradient_checkpointing(self, module, value=False): + if isinstance(module, SwinEncoder): + module.gradient_checkpointing = value + + +SWIN_START_DOCSTRING = r""" + This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use + it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and + behavior. + + Parameters: + config ([`SwinConfig`]): Model configuration class with all the parameters of the model. + Initializing with a config file does not load the weights associated with the model, only the + configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. +""" + +SWIN_INPUTS_DOCSTRING = r""" + Args: + pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): + Pixel values. Pixel values can be obtained using [`AutoFeatureExtractor`]. See + [`AutoFeatureExtractor.__call__`] for details. + head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): + Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: + + - 1 indicates the head is **not masked**, + - 0 indicates the head is **masked**. + + output_attentions (`bool`, *optional*): + Whether or not to return the attentions tensors of all attention layers. 
See `attentions` under returned + tensors for more detail. + output_hidden_states (`bool`, *optional*): + Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for + more detail. + return_dict (`bool`, *optional*): + Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple. +""" + + +@add_start_docstrings( + "The bare Swin Model transformer outputting raw hidden-states without any specific head on top.", + SWIN_START_DOCSTRING, +) +class SwinModel(SwinPreTrainedModel): + def __init__(self, config): + super().__init__(config) + self.config = config + self.num_layers = len(config.depths) + self.num_features = int(config.embed_dim * 2 ** (self.num_layers - 1)) + + self.embeddings = SwinEmbeddings(config) + self.encoder = SwinEncoder(config, self.embeddings.patch_grid) + + self.layernorm = nn.LayerNorm(self.num_features, eps=config.layer_norm_eps) + self.pool = nn.AdaptiveAvgPool1d(1) + + # Initialize weights and apply final processing + self.post_init() + + def get_input_embeddings(self): + return self.embeddings.patch_embeddings + + def _prune_heads(self, heads_to_prune): + """ + Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base + class PreTrainedModel + """ + for layer, heads in heads_to_prune.items(): + self.encoder.layer[layer].attention.prune_heads(heads) + + @add_start_docstrings_to_model_forward(SWIN_INPUTS_DOCSTRING) + @replace_return_docstrings(output_type=BaseModelOutput, config_class=_CONFIG_FOR_DOC) + def forward( + self, + pixel_values=None, + head_mask=None, + output_attentions=None, + output_hidden_states=None, + return_dict=None, + ): + r""" + Returns: + + Examples: + + ```python + >>> from transformers import AutoFeatureExtractor, SwinModel + >>> from PIL import Image + >>> import requests + + >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" + >>> image = Image.open(requests.get(url, stream=True).raw) + + >>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-tiny-patch4-window7-224") + >>> model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224") + + >>> inputs = feature_extractor(images=image, return_tensors="pt") + >>> outputs = model(**inputs) + >>> last_hidden_states = outputs.last_hidden_state + ```""" + + output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions + output_hidden_states = ( + output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states + ) + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + if pixel_values is None: + raise ValueError("You have to specify pixel_values") + + # Prepare head mask if needed + # 1.0 in head_mask indicate we keep the head + # attention_probs has shape bsz x n_heads x N x N + # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] + # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] + head_mask = self.get_head_mask(head_mask, len(self.config.depths)) + + embedding_output = self.embeddings(pixel_values) + + encoder_outputs = self.encoder( + embedding_output, + head_mask=head_mask, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict, + ) + + sequence_output = encoder_outputs[0] + sequence_output = self.layernorm(sequence_output) + sequence_output = self.pool(sequence_output.transpose(1, 2)) + 
sequence_output = torch.flatten(sequence_output, 1) + + if not return_dict: + return (sequence_output,) + encoder_outputs[1:] + + return BaseModelOutput( + last_hidden_state=sequence_output, + hidden_states=encoder_outputs.hidden_states, + attentions=encoder_outputs.attentions, + ) + + +@add_start_docstrings( + """ + Swin Model transformer with an image classification head on top (a linear layer on top of the final hidden state of + the [CLS] token) e.g. for ImageNet. + """, + SWIN_START_DOCSTRING, +) +class SwinForImageClassification(SwinPreTrainedModel): + def __init__(self, config): + super().__init__(config) + + self.num_labels = config.num_labels + self.swin = SwinModel(config) + + # Classifier head + self.classifier = ( + nn.Linear(self.swin.num_features, config.num_labels) if config.num_labels > 0 else nn.Identity() + ) + + # Initialize weights and apply final processing + self.post_init() + + @add_start_docstrings_to_model_forward(SWIN_INPUTS_DOCSTRING) + @replace_return_docstrings(output_type=SequenceClassifierOutput, config_class=_CONFIG_FOR_DOC) + def forward( + self, + pixel_values=None, + head_mask=None, + labels=None, + output_attentions=None, + output_hidden_states=None, + return_dict=None, + ): + r""" + labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): + Labels for computing the image classification/regression loss. Indices should be in `[0, ..., + config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If + `config.num_labels > 1` a classification loss is computed (Cross-Entropy). + + Returns: + + Examples: + + ```python + >>> from transformers import AutoFeatureExtractor, SwinForImageClassification + >>> from PIL import Image + >>> import requests + + >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" + >>> image = Image.open(requests.get(url, stream=True).raw) + + >>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-tiny-patch4-window7-224") + >>> model = SwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224") + + >>> inputs = feature_extractor(images=image, return_tensors="pt") + >>> outputs = model(**inputs) + >>> logits = outputs.logits + >>> # model predicts one of the 1000 ImageNet classes + >>> predicted_class_idx = logits.argmax(-1).item() + >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) + ```""" + + return_dict = return_dict if return_dict is not None else self.config.use_return_dict + + outputs = self.swin( + pixel_values, + head_mask=head_mask, + output_attentions=output_attentions, + output_hidden_states=output_hidden_states, + return_dict=return_dict, + ) + + sequence_output = outputs[0] + + logits = self.classifier(sequence_output) + + loss = None + if labels is not None: + if self.num_labels == 1: + # We are doing regression + loss_fct = MSELoss() + loss = loss_fct(logits.view(-1), labels.view(-1)) + else: + loss_fct = CrossEntropyLoss() + loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) + + if not return_dict: + output = (logits,) + outputs[1:] + return ((loss,) + output) if loss is not None else output + + return SequenceClassifierOutput( + loss=loss, + logits=logits, + hidden_states=outputs.hidden_states, + attentions=outputs.attentions, + ) diff --git a/src/transformers/utils/dummy_pt_objects.py b/src/transformers/utils/dummy_pt_objects.py --- a/src/transformers/utils/dummy_pt_objects.py +++ b/src/transformers/utils/dummy_pt_objects.py @@ -3356,6 +3356,30 @@ def __init__(self, 
*args, **kwargs): requires_backends(self, ["torch"]) +SWIN_PRETRAINED_MODEL_ARCHIVE_LIST = None + + +class SwinForImageClassification(metaclass=DummyObject): + _backends = ["torch"] + + def __init__(self, *args, **kwargs): + requires_backends(self, ["torch"]) + + +class SwinModel(metaclass=DummyObject): + _backends = ["torch"] + + def __init__(self, *args, **kwargs): + requires_backends(self, ["torch"]) + + +class SwinPreTrainedModel(metaclass=DummyObject): + _backends = ["torch"] + + def __init__(self, *args, **kwargs): + requires_backends(self, ["torch"]) + + T5_PRETRAINED_MODEL_ARCHIVE_LIST = None </patch>
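For reference, the forward-usage pattern for the classes this patch introduces is already spelled out in the `modeling_swin.py` docstrings above; the sketch below simply restates it as a standalone runnable script. It assumes a transformers build that contains this patch and network access to the `microsoft/swin-tiny-patch4-window7-224` checkpoint referenced in the patch.

```python
# Minimal inference sketch for the classes added by the patch above; it mirrors
# the docstring example shipped in modeling_swin.py. Assumes a transformers
# build containing this patch and access to the referenced Hub checkpoint.
import requests
import torch
from PIL import Image

from transformers import AutoFeatureExtractor, SwinForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = SwinForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The head predicts one of the ImageNet classes stored in config.id2label.
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

Because the patch also registers `("swin", "SwinModel")` and `("swin", "SwinForImageClassification")` in the auto-model mappings, the same checkpoint should also be loadable through the corresponding `Auto*` classes.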
[]
[]
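As a companion illustration to the Swin patch above (this is not part of the dataset record, and the tiny hyper-parameters below are invented purely for the example), a randomly initialized `SwinModel` can be smoke-tested without downloading any checkpoint:

```python
# Hypothetical smoke test for the Swin model added by the patch above.
# The tiny hyper-parameters are invented so the forward pass runs quickly
# on CPU; they are not taken from any released checkpoint.
import torch

from transformers import SwinConfig, SwinModel

config = SwinConfig(
    image_size=32,     # 32x32 inputs keep the example cheap
    patch_size=2,      # gives a 16x16 patch grid
    embed_dim=8,
    depths=[1, 1],     # two stages, one block each
    num_heads=[2, 4],  # must divide the per-stage widths (8 and 16)
    window_size=4,     # divides the 16x16 and 8x8 feature maps evenly
)
model = SwinModel(config)
model.eval()

pixel_values = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    outputs = model(pixel_values)

# SwinModel pools the final feature map, so the output is
# (batch_size, embed_dim * 2 ** (num_stages - 1)) == (1, 16) here.
print(outputs.last_hidden_state.shape)
```

With `embed_dim=8` and two stages, the pooled `last_hidden_state` comes out as `(1, 16)`, matching `embed_dim * 2 ** (num_stages - 1)` from `SwinModel.__init__` in the patch.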
conda__conda-4175
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
conda env export failure
Under conda 4.3.1, `conda env export` returns the backtrace:
```Python
Traceback (most recent call last):
  File "/home/alan/anaconda/lib/python3.5/site-packages/conda/exceptions.py", line 515, in conda_exception_handler
    return_value = func(*args, **kwargs)
  File "/home/alan/anaconda/lib/python3.5/site-packages/conda_env/cli/main_export.py", line 94, in execute
    ignore_channels=args.ignore_channels)
  File "/home/alan/anaconda/lib/python3.5/site-packages/conda_env/env.py", line 62, in from_environment
    for dist in installed:
AttributeError: 'str' object has no attribute 'channel'
```
My current conda information:
```
Current conda install:

               platform : linux-64
          conda version : 4.3.1
       conda is private : False
      conda-env version : 4.3.1
    conda-build version : 2.0.12
         python version : 3.5.2.final.0
       requests version : 2.12.4
       root environment : /home/alan/anaconda  (writable)
    default environment : /home/alan/anaconda/envs/labs
       envs directories : /home/alan/anaconda/envs
          package cache : /home/alan/anaconda/pkgs
           channel URLs : https://conda.anaconda.org/conda-forge/linux-64
                          https://conda.anaconda.org/conda-forge/noarch
                          https://conda.anaconda.org/conda-canary/linux-64
                          https://conda.anaconda.org/conda-canary/noarch
                          https://repo.continuum.io/pkgs/free/linux-64
                          https://repo.continuum.io/pkgs/free/noarch
                          https://repo.continuum.io/pkgs/r/linux-64
                          https://repo.continuum.io/pkgs/r/noarch
                          https://repo.continuum.io/pkgs/pro/linux-64
                          https://repo.continuum.io/pkgs/pro/noarch
            config file : /home/alan/.condarc
           offline mode : False
             user-agent : conda/4.3.1 requests/2.12.4 CPython/3.5.2 Linux/4.4.0-57-generic debian/stretch/sid glibc/2.23
                UID:GID : 1000:1000
```
</issue>
<code>
[start of README.rst]
1 ..
NOTE: This file serves both as the README on GitHub and the index.html for 2 conda.pydata.org. If you update this file, be sure to cd to the web 3 directory and run ``make html; make live`` 4 5 .. image:: https://s3.amazonaws.com/conda-dev/conda_logo.svg 6 :alt: Conda Logo 7 8 ---------------------------------------- 9 10 .. image:: https://img.shields.io/travis/conda/conda/master.svg?maxAge=900&label=Linux%20%26%20MacOS 11 :target: https://travis-ci.org/conda/conda 12 :alt: Linux & MacOS tests (Travis) 13 14 .. image:: https://img.shields.io/appveyor/ci/ContinuumAnalyticsFOSS/conda/master.svg?maxAge=900&label=Windows 15 :target: https://ci.appveyor.com/project/ContinuumAnalyticsFOSS/conda 16 :alt: Windows tests (Appveyor) 17 18 .. image:: https://img.shields.io/codecov/c/github/conda/conda/master.svg?label=coverage 19 :alt: Codecov Status 20 :target: https://codecov.io/github/conda/conda?branch=master 21 22 .. image:: https://img.shields.io/github/release/conda/conda.svg 23 :alt: latest release version 24 :target: https://github.com/conda/conda/releases 25 26 | 27 28 .. image:: https://s3.amazonaws.com/conda-dev/conda-announce-signup-button.svg 29 :alt: Join the Conda Announcment List 30 :target: http://conda.pydata.org/docs/announcements.html 31 32 | 33 34 Conda is a cross-platform, Python-agnostic binary package manager. It is the 35 package manager used by `Anaconda 36 <http://docs.continuum.io/anaconda/index.html>`_ installations, but it may be 37 used for other systems as well. Conda makes environments first-class 38 citizens, making it easy to create independent environments even for C 39 libraries. Conda is written entirely in Python, and is BSD licensed open 40 source. 41 42 Conda is enhanced by organizations, tools, and repositories created and managed by the amazing members of the conda community. Some of them can be found `here <https://github.com/conda/conda/wiki/Conda-Community>`_. 43 44 45 Installation 46 ------------ 47 48 Conda is a part of the `Anaconda distribution <https://store.continuum.io/cshop/anaconda/>`_. You can also download a 49 minimal installation that only includes conda and its dependencies, called 50 `Miniconda <http://conda.pydata.org/miniconda.html>`_. 51 52 53 Getting Started 54 --------------- 55 56 If you install Anaconda, you will already have hundreds of packages 57 installed. You can see what packages are installed by running 58 59 .. code-block:: bash 60 61 $ conda list 62 63 to see all the packages that are available, use 64 65 .. code-block:: bash 66 67 $ conda search 68 69 and to install a package, use 70 71 .. code-block:: bash 72 73 $ conda install <package-name> 74 75 76 The real power of conda comes from its ability to manage environments. In 77 conda, an environment can be thought of as a completely separate installation. 78 Conda installs packages into environments efficiently using `hard links 79 <http://en.wikipedia.org/wiki/Hard_links>`_ by default when it is possible, so 80 environments are space efficient, and take seconds to create. 81 82 The default environment, which ``conda`` itself is installed into is called 83 ``root``. To create another environment, use the ``conda create`` 84 command. For instance, to create an environment with the IPython notebook and 85 NumPy 1.6, which is older than the version that comes with Anaconda by 86 default, you would run 87 88 .. 
code-block:: bash 89 90 $ conda create -n numpy16 ipython-notebook numpy=1.6 91 92 This creates an environment called ``numpy16`` with the latest version of 93 the IPython notebook, NumPy 1.6, and their dependencies. 94 95 We can now activate this environment, use 96 97 .. code-block:: bash 98 99 # On Linux and Mac OS X 100 $ source activate numpy16 101 102 # On Windows 103 > activate numpy16 104 105 This puts the bin directory of the ``numpy16`` environment in the front of the 106 ``PATH``, and sets it as the default environment for all subsequent conda commands. 107 108 To go back to the root environment, use 109 110 .. code-block:: bash 111 112 # On Linux and Mac OS X 113 $ source deactivate 114 115 # On Windows 116 > deactivate 117 118 119 Building Your Own Packages 120 -------------------------- 121 122 You can easily build your own packages for conda, and upload them 123 to `anaconda.org <https://anaconda.org>`_, a free service for hosting 124 packages for conda, as well as other package managers. 125 To build a package, create a recipe. 126 See http://github.com/conda/conda-recipes for many example recipes, and 127 http://docs.continuum.io/conda/build.html for documentation on how to build 128 recipes. 129 130 To upload to anaconda.org, create an account. Then, install the 131 anaconda-client and login 132 133 .. code-block:: bash 134 135 $ conda install anaconda-client 136 $ anaconda login 137 138 Then, after you build your recipe 139 140 .. code-block:: bash 141 142 $ conda build <recipe-dir> 143 144 you will be prompted to upload to anaconda.org. 145 146 To add your anaconda.org channel, or the channel of others to conda so 147 that ``conda install`` will find and install their packages, run 148 149 .. code-block:: bash 150 151 $ conda config --add channels https://conda.anaconda.org/username 152 153 (replacing ``username`` with the user name of the person whose channel you want 154 to add). 155 156 Getting Help 157 ------------ 158 159 The documentation for conda is at http://conda.pydata.org/docs/. You can 160 subscribe to the `conda mailing list 161 <https://groups.google.com/a/continuum.io/forum/#!forum/conda>`_. The source 162 code and issue tracker for conda are on `GitHub <https://github.com/conda/conda>`_. 163 164 Contributing 165 ------------ 166 167 Contributions to conda are welcome. Just fork the GitHub repository and send a 168 pull request. 169 170 To develop on conda, the easiest way is to use a development build. This can be 171 accomplished as follows: 172 173 * clone the conda git repository to a computer with conda already installed 174 * navigate to the root directory of the git clone 175 * run ``$CONDA/bin/python setup.py develop`` where ``$CONDA`` is the path to your 176 miniconda installation 177 178 Note building a development file requires git to be installed. 179 180 To undo this, run ``$CONDA/bin/python setup.py develop -u``. Note that if you 181 used a python other than ``$CONDA/bin/python`` to install, you may have to manually 182 delete the conda executable. For example, on OS X, if you use a homebrew python 183 located at ``/usr/local/bin/python``, then you'll need to ``rm /usr/local/bin/conda`` 184 so that ``which -a conda`` lists first your miniconda installation. 185 186 If you are worried about breaking your conda installation, you can install a 187 separate instance of `Miniconda <http://conda.pydata.org/miniconda.html>`_ and 188 work off it. 
This is also the only way to test conda in both Python 2 and 189 Python 3, as conda can only be installed into a root environment. 190 191 Run the conda tests by ``conda install pytest pytest-cov pytest-timeout mock responses`` and then running ``py.test`` 192 in the conda directory. The tests are also run by Travis CI when you make a 193 pull request. 194 [end of README.rst] [start of conda/base/constants.py] 1 # -*- coding: utf-8 -*- 2 """ 3 This file should hold most string literals and magic numbers used throughout the code base. 4 The exception is if a literal is specifically meant to be private to and isolated within a module. 5 Think of this as a "more static" source of configuration information. 6 7 Another important source of "static" configuration is conda/models/enums.py. 8 """ 9 from __future__ import absolute_import, division, print_function, unicode_literals 10 11 import sys 12 from os.path import join 13 14 from enum import Enum 15 16 on_win = bool(sys.platform == "win32") 17 PREFIX_PLACEHOLDER = ('/opt/anaconda1anaconda2' 18 # this is intentionally split into parts, such that running 19 # this program on itself will leave it unchanged 20 'anaconda3') 21 22 machine_bits = 8 * tuple.__itemsize__ 23 24 APP_NAME = 'conda' 25 26 SEARCH_PATH = ( 27 '/etc/conda/condarc', 28 '/etc/conda/condarc.d/', 29 '/var/lib/conda/condarc', 30 '/var/lib/conda/condarc.d/', 31 '$CONDA_ROOT/condarc', 32 '$CONDA_ROOT/.condarc', 33 '$CONDA_ROOT/condarc.d/', 34 '~/.conda/condarc', 35 '~/.conda/condarc.d/', 36 '~/.condarc', 37 '$CONDA_PREFIX/.condarc', 38 '$CONDA_PREFIX/condarc.d/', 39 '$CONDARC', 40 ) 41 42 DEFAULT_CHANNEL_ALIAS = 'https://conda.anaconda.org' 43 CONDA_HOMEPAGE_URL = 'https://conda.pydata.org' 44 DEFAULTS = 'defaults' 45 46 PLATFORM_DIRECTORIES = ("linux-64", 47 "linux-32", 48 "win-64", 49 "win-32", 50 "osx-64", 51 "linux-ppc64le", 52 "linux-armv6l", 53 "linux-armv7l", 54 "zos-z", 55 "noarch", 56 ) 57 58 RECOGNIZED_URL_SCHEMES = ('http', 'https', 'ftp', 's3', 'file') 59 60 DEFAULT_CHANNELS_UNIX = ('https://repo.continuum.io/pkgs/free', 61 'https://repo.continuum.io/pkgs/r', 62 'https://repo.continuum.io/pkgs/pro', 63 ) 64 65 DEFAULT_CHANNELS_WIN = ('https://repo.continuum.io/pkgs/free', 66 'https://repo.continuum.io/pkgs/r', 67 'https://repo.continuum.io/pkgs/pro', 68 'https://repo.continuum.io/pkgs/msys2', 69 ) 70 71 DEFAULT_CHANNELS = DEFAULT_CHANNELS_WIN if on_win else DEFAULT_CHANNELS_UNIX 72 73 ROOT_ENV_NAME = 'root' 74 75 ROOT_NO_RM = ( 76 'python', 77 'pycosat', 78 'ruamel_yaml', 79 'conda', 80 'openssl', 81 'requests', 82 ) 83 84 # Maximum priority, reserved for packages we really want to remove 85 MAX_CHANNEL_PRIORITY = 10000 86 87 CONDA_TARBALL_EXTENSION = '.tar.bz2' 88 89 PRIVATE_ENVS = join(sys.prefix, "conda-meta/private_envs") 90 91 UNKNOWN_CHANNEL = "<unknown>" 92 93 INTERRUPT_SIGNALS = ( 94 'SIGABRT', 95 'SIGINT', 96 'SIGTERM', 97 'SIGQUIT', 98 'SIGBREAK', 99 ) 100 101 102 class PathConflict(Enum): 103 clobber = 'clobber' 104 warn = 'warn' 105 prevent = 'prevent' 106 [end of conda/base/constants.py] [start of conda/cli/main_info.py] 1 # (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io 2 # All Rights Reserved 3 # 4 # conda is distributed under the terms of the BSD 3-clause license. 5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause. 
6 7 from __future__ import absolute_import, division, print_function, unicode_literals 8 9 from collections import OrderedDict 10 import json 11 import os 12 from os import listdir 13 from os.path import exists, expanduser, join 14 import re 15 import sys 16 17 from .common import add_parser_json, add_parser_offline, arg2spec, handle_envs_list, stdout_json 18 from ..common.compat import itervalues, on_win 19 from ..common.url import mask_anaconda_token 20 from ..config import rc_path, sys_rc_path, user_rc_path 21 from ..models.channel import prioritize_channels 22 23 help = "Display information about current conda install." 24 25 example = """ 26 27 Examples: 28 29 conda info -a 30 """ 31 32 def configure_parser(sub_parsers): 33 p = sub_parsers.add_parser( 34 'info', 35 description=help, 36 help=help, 37 epilog=example, 38 ) 39 add_parser_json(p) 40 add_parser_offline(p) 41 p.add_argument( 42 '-a', "--all", 43 action="store_true", 44 help="Show all information, (environments, license, and system " 45 "information.") 46 p.add_argument( 47 '-e', "--envs", 48 action="store_true", 49 help="List all known conda environments.", 50 ) 51 p.add_argument( 52 '-l', "--license", 53 action="store_true", 54 help="Display information about the local conda licenses list.", 55 ) 56 p.add_argument( 57 '-s', "--system", 58 action="store_true", 59 help="List environment variables.", 60 ) 61 p.add_argument( 62 'packages', 63 action="store", 64 nargs='*', 65 help="Display information about packages.", 66 ) 67 p.add_argument( 68 '--root', 69 action='store_true', 70 help='Display root environment path.', 71 ) 72 p.add_argument( 73 '--unsafe-channels', 74 action='store_true', 75 help='Display list of channels with tokens exposed.', 76 ) 77 p.set_defaults(func=execute) 78 79 80 python_re = re.compile('python\d\.\d') 81 def get_user_site(): 82 site_dirs = [] 83 if not on_win: 84 if exists(expanduser('~/.local/lib')): 85 for path in listdir(expanduser('~/.local/lib/')): 86 if python_re.match(path): 87 site_dirs.append("~/.local/lib/%s" % path) 88 else: 89 if 'APPDATA' not in os.environ: 90 return site_dirs 91 APPDATA = os.environ['APPDATA'] 92 if exists(join(APPDATA, 'Python')): 93 site_dirs = [join(APPDATA, 'Python', i) for i in 94 listdir(join(APPDATA, 'PYTHON'))] 95 return site_dirs 96 97 98 def pretty_package(pkg): 99 from conda.utils import human_bytes 100 from conda.models.channel import Channel 101 102 d = OrderedDict([ 103 ('file name', pkg.fn), 104 ('name', pkg.name), 105 ('version', pkg.version), 106 ('build number', pkg.build_number), 107 ('build string', pkg.build), 108 ('channel', Channel(pkg.channel).canonical_name), 109 ('size', human_bytes(pkg.info['size'])), 110 ]) 111 rest = pkg.info 112 for key in sorted(rest): 113 if key in {'build', 'depends', 'requires', 'channel', 'name', 114 'version', 'build_number', 'size'}: 115 continue 116 d[key] = rest[key] 117 118 print() 119 header = "%s %s %s" % (d['name'], d['version'], d['build string']) 120 print(header) 121 print('-'*len(header)) 122 for key in d: 123 print("%-12s: %s" % (key, d[key])) 124 print('dependencies:') 125 for dep in pkg.info['depends']: 126 print(' %s' % dep) 127 128 def execute(args, parser): 129 import os 130 from os.path import dirname 131 132 import conda 133 from conda.base.context import context 134 from conda.models.channel import offline_keep 135 from conda.resolve import Resolve 136 from conda.api import get_index 137 from conda.connection import user_agent 138 139 if args.root: 140 if context.json: 141 
stdout_json({'root_prefix': context.root_dir}) 142 else: 143 print(context.root_dir) 144 return 145 146 if args.packages: 147 index = get_index() 148 r = Resolve(index) 149 if context.json: 150 stdout_json({ 151 package: [p._asdict() 152 for p in sorted(r.get_pkgs(arg2spec(package)))] 153 for package in args.packages 154 }) 155 else: 156 for package in args.packages: 157 versions = r.get_pkgs(arg2spec(package)) 158 for pkg in sorted(versions): 159 pretty_package(pkg) 160 return 161 162 options = 'envs', 'system', 'license' 163 164 try: 165 from conda.install import linked_data 166 root_pkgs = linked_data(sys.prefix) 167 except: 168 root_pkgs = None 169 170 try: 171 import requests 172 requests_version = requests.__version__ 173 except ImportError: 174 requests_version = "could not import" 175 except Exception as e: 176 requests_version = "Error %s" % e 177 178 try: 179 import conda_env 180 conda_env_version = conda_env.__version__ 181 except: 182 try: 183 cenv = [p for p in itervalues(root_pkgs) if p['name'] == 'conda-env'] 184 conda_env_version = cenv[0]['version'] 185 except: 186 conda_env_version = "not installed" 187 188 try: 189 import conda_build 190 except ImportError: 191 conda_build_version = "not installed" 192 except Exception as e: 193 conda_build_version = "Error %s" % e 194 else: 195 conda_build_version = conda_build.__version__ 196 197 channels = context.channels 198 199 if args.unsafe_channels: 200 if not context.json: 201 print("\n".join(channels)) 202 else: 203 print(json.dumps({"channels": channels})) 204 return 0 205 206 channels = list(prioritize_channels(channels).keys()) 207 if not context.json: 208 channels = [c + ('' if offline_keep(c) else ' (offline)') 209 for c in channels] 210 channels = [mask_anaconda_token(c) for c in channels] 211 212 info_dict = dict( 213 platform=context.subdir, 214 conda_version=conda.__version__, 215 conda_env_version=conda_env_version, 216 conda_build_version=conda_build_version, 217 root_prefix=context.root_dir, 218 conda_prefix=context.conda_prefix, 219 conda_private=context.conda_private, 220 root_writable=context.root_writable, 221 pkgs_dirs=context.pkgs_dirs, 222 envs_dirs=context.envs_dirs, 223 default_prefix=context.default_prefix, 224 channels=channels, 225 rc_path=rc_path, 226 user_rc_path=user_rc_path, 227 sys_rc_path=sys_rc_path, 228 # is_foreign=bool(foreign), 229 offline=context.offline, 230 envs=[], 231 python_version='.'.join(map(str, sys.version_info)), 232 requests_version=requests_version, 233 user_agent=user_agent, 234 ) 235 if not on_win: 236 info_dict['UID'] = os.geteuid() 237 info_dict['GID'] = os.getegid() 238 239 if args.all or context.json: 240 for option in options: 241 setattr(args, option, True) 242 243 if args.all or all(not getattr(args, opt) for opt in options): 244 for key in 'pkgs_dirs', 'envs_dirs', 'channels': 245 info_dict['_' + key] = ('\n' + 26 * ' ').join(info_dict[key]) 246 info_dict['_rtwro'] = ('writable' if info_dict['root_writable'] else 247 'read only') 248 print("""\ 249 Current conda install: 250 251 platform : %(platform)s 252 conda version : %(conda_version)s 253 conda is private : %(conda_private)s 254 conda-env version : %(conda_env_version)s 255 conda-build version : %(conda_build_version)s 256 python version : %(python_version)s 257 requests version : %(requests_version)s 258 root environment : %(root_prefix)s (%(_rtwro)s) 259 default environment : %(default_prefix)s 260 envs directories : %(_envs_dirs)s 261 package cache : %(_pkgs_dirs)s 262 channel URLs : %(_channels)s 263 config 
file : %(rc_path)s 264 offline mode : %(offline)s 265 user-agent : %(user_agent)s\ 266 """ % info_dict) 267 268 if not on_win: 269 print("""\ 270 UID:GID : %(UID)s:%(GID)s 271 """ % info_dict) 272 else: 273 print() 274 275 if args.envs: 276 handle_envs_list(info_dict['envs'], not context.json) 277 278 if args.system: 279 from conda.cli.find_commands import find_commands, find_executable 280 281 site_dirs = get_user_site() 282 evars = ['PATH', 'PYTHONPATH', 'PYTHONHOME', 'CONDA_DEFAULT_ENV', 283 'CIO_TEST', 'CONDA_ENVS_PATH'] 284 285 if context.platform == 'linux': 286 evars.append('LD_LIBRARY_PATH') 287 elif context.platform == 'osx': 288 evars.append('DYLD_LIBRARY_PATH') 289 290 if context.json: 291 info_dict['sys.version'] = sys.version 292 info_dict['sys.prefix'] = sys.prefix 293 info_dict['sys.executable'] = sys.executable 294 info_dict['site_dirs'] = get_user_site() 295 info_dict['env_vars'] = {ev: os.getenv(ev, '<not set>') for ev in evars} 296 else: 297 print("sys.version: %s..." % (sys.version[:40])) 298 print("sys.prefix: %s" % sys.prefix) 299 print("sys.executable: %s" % sys.executable) 300 print("conda location: %s" % dirname(conda.__file__)) 301 for cmd in sorted(set(find_commands() + ['build'])): 302 print("conda-%s: %s" % (cmd, find_executable('conda-' + cmd))) 303 print("user site dirs: ", end='') 304 if site_dirs: 305 print(site_dirs[0]) 306 else: 307 print() 308 for site_dir in site_dirs[1:]: 309 print(' %s' % site_dir) 310 print() 311 312 for ev in sorted(evars): 313 print("%s: %s" % (ev, os.getenv(ev, '<not set>'))) 314 print() 315 316 if args.license and not context.json: 317 try: 318 from _license import show_info 319 show_info() 320 except ImportError: 321 print("""\ 322 WARNING: could not import _license.show_info 323 # try: 324 # $ conda install -n root _license""") 325 326 if context.json: 327 stdout_json(info_dict) 328 [end of conda/cli/main_info.py] [start of conda/cli/main_search.py] 1 # (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io 2 # All Rights Reserved 3 # 4 # conda is distributed under the terms of the BSD 3-clause license. 5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause. 6 7 from __future__ import absolute_import, division, print_function, unicode_literals 8 9 from .common import (Completer, Packages, add_parser_channels, add_parser_json, add_parser_known, 10 add_parser_offline, add_parser_prefix, add_parser_use_index_cache, 11 add_parser_use_local, disp_features, 12 ensure_override_channels_requires_channel, ensure_use_local, stdout_json) 13 from ..api import get_index 14 from ..base.context import context 15 from ..common.compat import text_type 16 from ..exceptions import CommandArgumentError, PackageNotFoundError 17 from ..misc import make_icon_url 18 from ..models.dist import Dist 19 from ..resolve import NoPackagesFoundError 20 from conda.models.package import Package 21 22 descr = """Search for packages and display their information. The input is a 23 Python regular expression. To perform a search with a search string that starts 24 with a -, separate the search from the options with --, like 'conda search -- -h'. 25 26 A * in the results means that package is installed in the current 27 environment. A . means that package is not installed but is cached in the pkgs 28 directory. 
29 """ 30 example = ''' 31 Examples: 32 33 Search for packages with 'scikit' in the name: 34 35 conda search scikit 36 37 Search for the 'python' package (but no other packages that have 'python' in 38 the name): 39 40 conda search -f python 41 42 Search for packages for 64-bit Linux (by default, packages for your current 43 platform are shown): 44 45 conda search --platform linux-64 46 ''' 47 48 class Platforms(Completer): 49 """ 50 Tab completion for platforms 51 52 There is no limitation on the platform string, except by what is in the 53 repo, but we want to tab complete the most common ones. 54 """ 55 def _get_items(self): 56 return ['win-32', 'win-64', 'osx-64', 'linux-32', 'linux-64'] 57 58 def configure_parser(sub_parsers): 59 p = sub_parsers.add_parser( 60 'search', 61 description=descr, 62 help=descr, 63 epilog=example, 64 ) 65 add_parser_prefix(p) 66 p.add_argument( 67 "--canonical", 68 action="store_true", 69 help="Output canonical names of packages only.", 70 ) 71 p.add_argument( 72 '-f', "--full-name", 73 action="store_true", 74 help="Only search for full name, ie. ^<regex>$.", 75 ) 76 p.add_argument( 77 "--names-only", 78 action="store_true", 79 help="Output only package names.", 80 ) 81 add_parser_known(p) 82 add_parser_use_index_cache(p) 83 p.add_argument( 84 '-o', "--outdated", 85 action="store_true", 86 help="Only display installed but outdated packages.", 87 ) 88 p.add_argument( 89 '--platform', 90 action='store', 91 dest='platform', 92 help="""Search the given platform. Should be formatted like 'osx-64', 'linux-32', 93 'win-64', and so on. The default is to search the current platform.""", 94 choices=Platforms(), 95 default=None, 96 ) 97 p.add_argument( 98 "--spec", 99 action="store_true", 100 help="""Treat the regex argument as a package specification instead 101 (package_name[=version[=build]]).""", 102 ) 103 p.add_argument( 104 "--reverse-dependency", 105 action="store_true", 106 help="""Perform a reverse dependency search. When using this flag, the --full-name 107 flag is recommended. 
Use 'conda info package' to see the dependencies of a 108 package.""", 109 ) 110 p.add_argument( 111 'regex', 112 metavar='regex', 113 action="store", 114 nargs="?", 115 help="""Package specification or Python regular expression to search for (default: display 116 all packages).""", 117 ).completer = Packages 118 add_parser_offline(p) 119 add_parser_channels(p) 120 add_parser_json(p) 121 add_parser_use_local(p) 122 p.set_defaults(func=execute) 123 124 def execute(args, parser): 125 try: 126 execute_search(args, parser) 127 except NoPackagesFoundError as e: 128 raise PackageNotFoundError('', text_type(e)) 129 130 def execute_search(args, parser): 131 import re 132 from conda.resolve import Resolve 133 134 if args.reverse_dependency: 135 if not args.regex: 136 parser.error("--reverse-dependency requires at least one package name") 137 if args.spec: 138 parser.error("--reverse-dependency does not work with --spec") 139 140 pat = None 141 ms = None 142 if args.regex: 143 if args.spec: 144 ms = ' '.join(args.regex.split('=')) 145 else: 146 regex = args.regex 147 if args.full_name: 148 regex = r'^%s$' % regex 149 try: 150 pat = re.compile(regex, re.I) 151 except re.error as e: 152 raise CommandArgumentError("Failed to compile regex pattern for " 153 "search: %(regex)s\n" 154 "regex error: %(regex_error)s", 155 regex=regex, regex_error=repr(e)) 156 157 prefix = context.prefix_w_legacy_search 158 159 from ..core.linked_data import linked as linked_data 160 from ..core.package_cache import PackageCache 161 162 linked = linked_data(prefix) 163 extracted = set(pc_entry.dist.name for pc_entry in PackageCache.get_all_extracted_entries()) 164 165 # XXX: Make this work with more than one platform 166 platform = args.platform or '' 167 if platform and platform != context.subdir: 168 args.unknown = False 169 ensure_use_local(args) 170 ensure_override_channels_requires_channel(args, dashc=False) 171 channel_urls = args.channel or () 172 index = get_index(channel_urls=channel_urls, prepend=not args.override_channels, 173 platform=args.platform, use_local=args.use_local, 174 use_cache=args.use_index_cache, prefix=None, 175 unknown=args.unknown) 176 177 r = Resolve(index) 178 179 if args.canonical: 180 json = [] 181 else: 182 json = {} 183 184 names = [] 185 for name in sorted(r.groups): 186 if '@' in name: 187 continue 188 if args.reverse_dependency: 189 ms_name = ms 190 for pkg in r.groups[name]: 191 for dep in r.ms_depends(pkg): 192 if pat.search(dep.name): 193 names.append((name, Package(pkg, r.index[pkg]))) 194 else: 195 if pat and pat.search(name) is None: 196 continue 197 if ms and name != ms.split()[0]: 198 continue 199 200 if ms: 201 ms_name = ms 202 else: 203 ms_name = name 204 205 pkgs = sorted(r.get_pkgs(ms_name)) 206 names.append((name, pkgs)) 207 208 if args.reverse_dependency: 209 new_names = [] 210 old = None 211 for name, pkg in sorted(names, key=lambda x: (x[0], x[1].name, x[1])): 212 if name == old: 213 new_names[-1][1].append(pkg) 214 else: 215 new_names.append((name, [pkg])) 216 old = name 217 names = new_names 218 219 for name, pkgs in names: 220 if args.reverse_dependency: 221 disp_name = pkgs[0].name 222 else: 223 disp_name = name 224 225 if args.names_only and not args.outdated: 226 print(name) 227 continue 228 229 if not args.canonical: 230 json[name] = [] 231 232 if args.outdated: 233 vers_inst = [dist.quad[1] for dist in linked if dist.quad[0] == name] 234 if not vers_inst: 235 continue 236 assert len(vers_inst) == 1, name 237 if not pkgs: 238 continue 239 latest = pkgs[-1] 240 if 
latest.version == vers_inst[0]: 241 continue 242 if args.names_only: 243 print(name) 244 continue 245 246 for pkg in pkgs: 247 dist = Dist(pkg) 248 if args.canonical: 249 if not context.json: 250 print(dist.dist_name) 251 else: 252 json.append(dist.dist_name) 253 continue 254 if platform and platform != context.subdir: 255 inst = ' ' 256 elif dist in linked: 257 inst = '*' 258 elif dist in extracted: 259 inst = '.' 260 else: 261 inst = ' ' 262 263 features = r.features(dist) 264 265 if not context.json: 266 print('%-25s %s %-15s %15s %-15s %s' % ( 267 disp_name, inst, 268 pkg.version, 269 pkg.build, 270 pkg.schannel, 271 disp_features(features), 272 )) 273 disp_name = '' 274 else: 275 data = {} 276 data.update(pkg.info.dump()) 277 data.update({ 278 'fn': pkg.fn, 279 'installed': inst == '*', 280 'extracted': inst in '*.', 281 'version': pkg.version, 282 'build': pkg.build, 283 'build_number': pkg.build_number, 284 'channel': pkg.schannel, 285 'full_channel': pkg.channel, 286 'features': list(features), 287 'license': pkg.info.get('license'), 288 'size': pkg.info.get('size'), 289 'depends': pkg.info.get('depends'), 290 'type': pkg.info.get('type') 291 }) 292 293 if data['type'] == 'app': 294 data['icon'] = make_icon_url(pkg.info) 295 json[name].append(data) 296 297 if context.json: 298 stdout_json(json) 299 [end of conda/cli/main_search.py] [start of conda/connection.py] 1 # (c) 2012-2015 Continuum Analytics, Inc. / http://continuum.io 2 # All Rights Reserved 3 # 4 # conda is distributed under the terms of the BSD 3-clause license. 5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause. 6 7 from __future__ import absolute_import, division, print_function, unicode_literals 8 9 from conda.gateways.adapters.localfs import LocalFSAdapter 10 from conda.gateways.adapters.s3 import S3Adapter 11 from logging import getLogger 12 import platform 13 from requests import Session, __version__ as REQUESTS_VERSION 14 from requests.adapters import HTTPAdapter, BaseAdapter 15 from requests.auth import AuthBase, _basic_auth_str 16 from requests.cookies import extract_cookies_to_jar 17 from requests.utils import get_auth_from_url, get_netrc_auth 18 19 from . 
import __version__ as VERSION 20 from ._vendor.auxlib.ish import dals 21 from .base.context import context 22 from .common.compat import iteritems 23 from .common.url import (add_username_and_password, get_proxy_username_and_pass, 24 split_anaconda_token, urlparse) 25 from .exceptions import ProxyError 26 from .gateways.adapters.ftp import FTPAdapter 27 from .gateways.anaconda_client import read_binstar_tokens 28 from .utils import gnu_get_libc_version 29 30 RETRIES = 3 31 32 log = getLogger(__name__) 33 34 # Collect relevant info from OS for reporting purposes (present in User-Agent) 35 _user_agent = ("conda/{conda_ver} " 36 "requests/{requests_ver} " 37 "{python}/{py_ver} " 38 "{system}/{kernel} {dist}/{ver}") 39 40 glibc_ver = gnu_get_libc_version() 41 if context.platform == 'linux': 42 distinfo = platform.linux_distribution() 43 dist, ver = distinfo[0], distinfo[1] 44 elif context.platform == 'osx': 45 dist = 'OSX' 46 ver = platform.mac_ver()[0] 47 else: 48 dist = platform.system() 49 ver = platform.version() 50 51 user_agent = _user_agent.format(conda_ver=VERSION, 52 requests_ver=REQUESTS_VERSION, 53 python=platform.python_implementation(), 54 py_ver=platform.python_version(), 55 system=platform.system(), kernel=platform.release(), 56 dist=dist, ver=ver) 57 if glibc_ver: 58 user_agent += " glibc/{}".format(glibc_ver) 59 60 61 class EnforceUnusedAdapter(BaseAdapter): 62 63 def send(self, request, *args, **kwargs): 64 message = dals(""" 65 EnforceUnusedAdapter called with url %s 66 This command is using a remote connection in offline mode. 67 """ % request.url) 68 raise RuntimeError(message) 69 70 def close(self): 71 raise NotImplementedError() 72 73 74 class CondaSession(Session): 75 76 def __init__(self, *args, **kwargs): 77 super(CondaSession, self).__init__(*args, **kwargs) 78 79 self.auth = CondaHttpAuth() # TODO: should this just be for certain protocol adapters? 
80 81 proxies = context.proxy_servers 82 if proxies: 83 self.proxies = proxies 84 85 if context.offline: 86 unused_adapter = EnforceUnusedAdapter() 87 self.mount("http://", unused_adapter) 88 self.mount("https://", unused_adapter) 89 self.mount("ftp://", unused_adapter) 90 self.mount("s3://", unused_adapter) 91 92 else: 93 # Configure retries 94 http_adapter = HTTPAdapter(max_retries=context.remote_max_retries) 95 self.mount("http://", http_adapter) 96 self.mount("https://", http_adapter) 97 self.mount("ftp://", FTPAdapter()) 98 self.mount("s3://", S3Adapter()) 99 100 self.mount("file://", LocalFSAdapter()) 101 102 self.headers['User-Agent'] = user_agent 103 104 self.verify = context.ssl_verify 105 106 if context.client_ssl_cert_key: 107 self.cert = (context.client_ssl_cert, context.client_ssl_cert_key) 108 elif context.client_ssl_cert: 109 self.cert = context.client_ssl_cert 110 111 112 class CondaHttpAuth(AuthBase): 113 # TODO: make this class thread-safe by adding some of the requests.auth.HTTPDigestAuth() code 114 115 def __call__(self, request): 116 request.url = CondaHttpAuth.add_binstar_token(request.url) 117 self._apply_basic_auth(request) 118 request.register_hook('response', self.handle_407) 119 return request 120 121 @staticmethod 122 def _apply_basic_auth(request): 123 # this logic duplicated from Session.prepare_request and PreparedRequest.prepare_auth 124 url_auth = get_auth_from_url(request.url) 125 auth = url_auth if any(url_auth) else None 126 127 if auth is None: 128 # look for auth information in a .netrc file 129 auth = get_netrc_auth(request.url) 130 131 if isinstance(auth, tuple) and len(auth) == 2: 132 request.headers['Authorization'] = _basic_auth_str(*auth) 133 134 return request 135 136 @staticmethod 137 def add_binstar_token(url): 138 clean_url, token = split_anaconda_token(url) 139 if not token: 140 for binstar_url, token in iteritems(read_binstar_tokens()): 141 if clean_url.startswith(binstar_url): 142 log.debug("Adding anaconda token for url <%s>", clean_url) 143 from conda.models.channel import Channel 144 channel = Channel(clean_url) 145 channel.token = token 146 return channel.url(with_credentials=True) 147 return url 148 149 @staticmethod 150 def handle_407(response, **kwargs): 151 """ 152 Prompts the user for the proxy username and password and modifies the 153 proxy in the session object to include it. 154 155 This method is modeled after 156 * requests.auth.HTTPDigestAuth.handle_401() 157 * requests.auth.HTTPProxyAuth 158 * the previous conda.fetch.handle_proxy_407() 159 160 It both adds 'username:password' to the proxy URL, as well as adding a 161 'Proxy-Authorization' header. If any of this is incorrect, please file an issue. 162 163 """ 164 # kwargs = {'verify': True, 'cert': None, 'proxies': OrderedDict(), 'stream': False, 165 # 'timeout': (3.05, 60)} 166 167 if response.status_code != 407: 168 return response 169 170 # Consume content and release the original connection 171 # to allow our new request to reuse the same one. 172 response.content 173 response.close() 174 175 proxies = kwargs.pop('proxies') 176 177 proxy_scheme = urlparse(response.url).scheme 178 if proxy_scheme not in proxies: 179 raise ProxyError(dals("""Could not find a proxy for %r. 
See 180 http://conda.pydata.org/docs/html#configure-conda-for-use-behind-a-proxy-server 181 for more information on how to configure proxies.""" % proxy_scheme)) 182 183 # fix-up proxy_url with username & password 184 proxy_url = proxies[proxy_scheme] 185 username, password = get_proxy_username_and_pass(proxy_scheme) 186 proxy_url = add_username_and_password(proxy_url, username, password) 187 proxy_authorization_header = _basic_auth_str(username, password) 188 proxies[proxy_scheme] = proxy_url 189 kwargs['proxies'] = proxies 190 191 prep = response.request.copy() 192 extract_cookies_to_jar(prep._cookies, response.request, response.raw) 193 prep.prepare_cookies(prep._cookies) 194 prep.headers['Proxy-Authorization'] = proxy_authorization_header 195 196 _response = response.connection.send(prep, **kwargs) 197 _response.history.append(response) 198 _response.request = prep 199 200 return _response 201 [end of conda/connection.py] [start of conda/install.py] 1 # (c) 2012-2014 Continuum Analytics, Inc. / http://continuum.io 2 # All Rights Reserved 3 # 4 # conda is distributed under the terms of the BSD 3-clause license. 5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause. 6 """ This module contains: 7 * all low-level code for extracting, linking and unlinking packages 8 * a very simple CLI 9 10 These API functions have argument names referring to: 11 12 dist: canonical package name (e.g. 'numpy-1.6.2-py26_0') 13 14 pkgs_dir: the "packages directory" (e.g. '/opt/anaconda/pkgs' or 15 '/home/joe/envs/.pkgs') 16 17 prefix: the prefix of a particular environment, which may also 18 be the "default" environment (i.e. sys.prefix), 19 but is otherwise something like '/opt/anaconda/envs/foo', 20 or even any prefix, e.g. '/home/joe/myenv' 21 """ 22 from __future__ import absolute_import, division, print_function, unicode_literals 23 24 import functools 25 import logging 26 import os 27 from errno import EACCES, EEXIST, EPERM, EROFS 28 from os import chmod, makedirs, stat 29 from os.path import dirname, isdir, isfile, join, normcase, normpath 30 31 from .base.constants import PREFIX_PLACEHOLDER 32 from .common.compat import on_win 33 from .gateways.disk.delete import delete_trash, move_path_to_trash, rm_rf 34 delete_trash, move_path_to_trash = delete_trash, move_path_to_trash 35 from .core.linked_data import is_linked, linked, linked_data # NOQA 36 is_linked, linked, linked_data = is_linked, linked, linked_data 37 from .core.package_cache import rm_fetched # NOQA 38 rm_fetched = rm_fetched 39 40 log = logging.getLogger(__name__) 41 stdoutlog = logging.getLogger('stdoutlog') 42 43 44 # backwards compatibility for conda-build 45 prefix_placeholder = PREFIX_PLACEHOLDER 46 47 48 # backwards compatibility for conda-build 49 def package_cache(): 50 log.warn('package_cache() is a no-op and deprecated') 51 return {} 52 53 54 if on_win: 55 def win_conda_bat_redirect(src, dst, shell): 56 """Special function for Windows XP where the `CreateSymbolicLink` 57 function is not available. 58 59 Simply creates a `.bat` file at `dst` which calls `src` together with 60 all command line arguments. 61 62 Works of course only with callable files, e.g. `.bat` or `.exe` files. 
63 """ 64 from conda.utils import shells 65 try: 66 makedirs(dirname(dst)) 67 except OSError as exc: # Python >2.5 68 if exc.errno == EEXIST and isdir(dirname(dst)): 69 pass 70 else: 71 raise 72 73 # bat file redirect 74 if not isfile(dst + '.bat'): 75 with open(dst + '.bat', 'w') as f: 76 f.write('@echo off\ncall "%s" %%*\n' % src) 77 78 # TODO: probably need one here for powershell at some point 79 80 # This one is for bash/cygwin/msys 81 # set default shell to bash.exe when not provided, as that's most common 82 if not shell: 83 shell = "bash.exe" 84 85 # technically these are "links" - but islink doesn't work on win 86 if not isfile(dst): 87 with open(dst, "w") as f: 88 f.write("#!/usr/bin/env bash \n") 89 if src.endswith("conda"): 90 f.write('%s "$@"' % shells[shell]['path_to'](src+".exe")) 91 else: 92 f.write('source %s "$@"' % shells[shell]['path_to'](src)) 93 # Make the new file executable 94 # http://stackoverflow.com/a/30463972/1170370 95 mode = stat(dst).st_mode 96 mode |= (mode & 292) >> 2 # copy R bits to X 97 chmod(dst, mode) 98 99 100 # Should this be an API function? 101 def symlink_conda(prefix, root_dir, shell=None): 102 # do not symlink root env - this clobbers activate incorrectly. 103 # prefix should always be longer than, or outside the root dir. 104 if normcase(normpath(prefix)) in normcase(normpath(root_dir)): 105 return 106 if on_win: 107 where = 'Scripts' 108 symlink_fn = functools.partial(win_conda_bat_redirect, shell=shell) 109 else: 110 where = 'bin' 111 symlink_fn = os.symlink 112 if not isdir(join(prefix, where)): 113 os.makedirs(join(prefix, where)) 114 symlink_conda_hlp(prefix, root_dir, where, symlink_fn) 115 116 117 def symlink_conda_hlp(prefix, root_dir, where, symlink_fn): 118 scripts = ["conda", "activate", "deactivate"] 119 prefix_where = join(prefix, where) 120 if not isdir(prefix_where): 121 os.makedirs(prefix_where) 122 for f in scripts: 123 root_file = join(root_dir, where, f) 124 prefix_file = join(prefix_where, f) 125 try: 126 # try to kill stale links if they exist 127 if os.path.lexists(prefix_file): 128 rm_rf(prefix_file) 129 # if they're in use, they won't be killed. Skip making new symlink. 130 if not os.path.lexists(prefix_file): 131 symlink_fn(root_file, prefix_file) 132 except (IOError, OSError) as e: 133 if (os.path.lexists(prefix_file) and 134 (e.errno in (EPERM, EACCES, EROFS, EEXIST))): 135 log.debug("Cannot symlink {0} to {1}. Ignoring since link already exists." 136 .format(root_file, prefix_file)) 137 else: 138 raise 139 [end of conda/install.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conda/conda
d01341e40f358753db6ce549977835ab84cfd454
conda env export failure Under conda 4.3.1, `conda env export` returns the backtrace: ```Python Traceback (most recent call last): File "/home/alan/anaconda/lib/python3.5/site-packages/conda/exceptions.py", line 515, in conda_exception_handler return_value = func(*args, **kwargs) File "/home/alan/anaconda/lib/python3.5/site-packages/conda_env/cli/main_export.py", line 94, in execute ignore_channels=args.ignore_channels) File "/home/alan/anaconda/lib/python3.5/site-packages/conda_env/env.py", line 62, in from_environment for dist in installed: AttributeError: 'str' object has no attribute 'channel' ``` My current conda information: ``` Current conda install: platform : linux-64 conda version : 4.3.1 conda is private : False conda-env version : 4.3.1 conda-build version : 2.0.12 python version : 3.5.2.final.0 requests version : 2.12.4 root environment : /home/alan/anaconda (writable) default environment : /home/alan/anaconda/envs/labs envs directories : /home/alan/anaconda/envs package cache : /home/alan/anaconda/pkgs channel URLs : https://conda.anaconda.org/conda-forge/linux-64 https://conda.anaconda.org/conda-forge/noarch https://conda.anaconda.org/conda-canary/linux-64 https://conda.anaconda.org/conda-canary/noarch https://repo.continuum.io/pkgs/free/linux-64 https://repo.continuum.io/pkgs/free/noarch https://repo.continuum.io/pkgs/r/linux-64 https://repo.continuum.io/pkgs/r/noarch https://repo.continuum.io/pkgs/pro/linux-64 https://repo.continuum.io/pkgs/pro/noarch config file : /home/alan/.condarc offline mode : False user-agent : conda/4.3.1 requests/2.12.4 CPython/3.5.2 Linux/4.4.0-57-generic debian/stretch/sid glibc/2.23 UID:GID : 1000:1000 ```
2017-01-04T16:42:50Z
<patch> diff --git a/conda_env/env.py b/conda_env/env.py --- a/conda_env/env.py +++ b/conda_env/env.py @@ -36,10 +36,9 @@ def from_environment(name, prefix, no_builds=False, ignore_channels=False): name: The name of environment prefix: The path of prefix no_builds: Whether has build requirement - ignore_channels: whether ingore_channels - - Returns: Environment obejct + ignore_channels: whether ignore_channels + Returns: Environment object """ installed = linked(prefix, ignore_channels=ignore_channels) conda_pkgs = copy(installed) @@ -58,7 +57,7 @@ def from_environment(name, prefix, no_builds=False, ignore_channels=False): # this doesn't dump correctly using pyyaml channels = list(context.channels) if not ignore_channels: - for dist in installed: + for dist in conda_pkgs: if dist.channel not in channels: channels.insert(0, dist.channel) return Environment(name=name, dependencies=dependencies, channels=channels, prefix=prefix) </patch>
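The patch above swaps the loop over `installed` for a loop over `conda_pkgs`, the copy taken before the installed set is post-processed. A minimal sketch of that failure mode follows; `FakeDist` and the pip-entry string are hypothetical stand-ins rather than conda's real objects, and the assumption that plain strings get added to `installed` for pip packages is inferred from the traceback in the issue, not confirmed here.

```python
# Minimal sketch of the bug fixed above, assuming later processing adds
# plain-string entries (e.g. pip packages) to ``installed``.
from copy import copy


class FakeDist:
    # Hypothetical stand-in for a conda Dist record carrying a channel.
    def __init__(self, name, channel):
        self.name = name
        self.channel = channel


installed = {FakeDist("numpy-1.11.2-py35_0", "defaults")}
conda_pkgs = copy(installed)         # snapshot taken before any mutation

installed.add("flask (1.0.0, pip)")  # plain str: has no .channel attribute

# Buggy loop (pre-patch): raises AttributeError on the string entry.
# for dist in installed:
#     dist.channel

# Patched loop: iterate the unmutated snapshot instead.
channels = []
for dist in conda_pkgs:
    if dist.channel not in channels:
        channels.insert(0, dist.channel)
print(channels)  # ['defaults']
```

Iterating the snapshot keeps the channel collection independent of whatever entries the later pip-handling step appends.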
[]
[]
pandas-dev__pandas-38057
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> REGR: Performance regression on RollingGroupby pd.DataFrame({'a': np.random.randn(10000000), 'b': 1}).groupby('b').rolling(3).mean() is approximately 10x slower between 1.0.5 and 1.1.x </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 [![PyPI Latest Release](https://img.shields.io/pypi/v/pandas.svg)](https://pypi.org/project/pandas/) 9 [![Conda Latest Release](https://anaconda.org/conda-forge/pandas/badges/version.svg)](https://anaconda.org/anaconda/pandas/) 10 [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3509134.svg)](https://doi.org/10.5281/zenodo.3509134) 11 [![Package Status](https://img.shields.io/pypi/status/pandas.svg)](https://pypi.org/project/pandas/) 12 [![License](https://img.shields.io/pypi/l/pandas.svg)](https://github.com/pandas-dev/pandas/blob/master/LICENSE) 13 [![Travis Build Status](https://travis-ci.org/pandas-dev/pandas.svg?branch=master)](https://travis-ci.org/pandas-dev/pandas) 14 [![Azure Build Status](https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master)](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master) 15 [![Coverage](https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master)](https://codecov.io/gh/pandas-dev/pandas) 16 [![Downloads](https://anaconda.org/conda-forge/pandas/badges/downloads.svg)](https://pandas.pydata.org) 17 [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas) 18 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org) 19 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) 20 21 ## What is it? 22 23 **pandas** is a Python package that provides fast, flexible, and expressive data 24 structures designed to make working with "relational" or "labeled" data both 25 easy and intuitive. It aims to be the fundamental high-level building block for 26 doing practical, **real world** data analysis in Python. Additionally, it has 27 the broader goal of becoming **the most powerful and flexible open source data 28 analysis / manipulation tool available in any language**. It is already well on 29 its way towards this goal. 30 31 ## Main Features 32 Here are just a few of the things that pandas does well: 33 34 - Easy handling of [**missing data**][missing-data] (represented as 35 `NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data 36 - Size mutability: columns can be [**inserted and 37 deleted**][insertion-deletion] from DataFrame and higher dimensional 38 objects 39 - Automatic and explicit [**data alignment**][alignment]: objects can 40 be explicitly aligned to a set of labels, or the user can simply 41 ignore the labels and let `Series`, `DataFrame`, etc. 
automatically 42 align the data for you in computations 43 - Powerful, flexible [**group by**][groupby] functionality to perform 44 split-apply-combine operations on data sets, for both aggregating 45 and transforming data 46 - Make it [**easy to convert**][conversion] ragged, 47 differently-indexed data in other Python and NumPy data structures 48 into DataFrame objects 49 - Intelligent label-based [**slicing**][slicing], [**fancy 50 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 51 large data sets 52 - Intuitive [**merging**][merging] and [**joining**][joining] data 53 sets 54 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 55 data sets 56 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 57 labels per tick) 58 - Robust IO tools for loading data from [**flat files**][flat-files] 59 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 60 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 61 - [**Time series**][timeseries]-specific functionality: date range 62 generation and frequency conversion, moving window statistics, 63 date shifting and lagging 64 65 66 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 67 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 68 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 69 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 70 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 71 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 72 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 73 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 74 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 75 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 76 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 77 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 78 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 79 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 80 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 81 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 82 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 83 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 84 85 ## Where to get it 86 The source code is currently hosted on GitHub at: 87 https://github.com/pandas-dev/pandas 88 89 Binary installers for the latest released version are available at the [Python 90 package index](https://pypi.org/project/pandas) and on conda. 
91 92 ```sh 93 # conda 94 conda install pandas 95 ``` 96 97 ```sh 98 # or PyPI 99 pip install pandas 100 ``` 101 102 ## Dependencies 103 - [NumPy](https://www.numpy.org) 104 - [python-dateutil](https://labix.org/python-dateutil) 105 - [pytz](https://pythonhosted.org/pytz) 106 107 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies. 108 109 ## Installation from sources 110 To install pandas from source you need Cython in addition to the normal 111 dependencies above. Cython can be installed from pypi: 112 113 ```sh 114 pip install cython 115 ``` 116 117 In the `pandas` directory (same one where you found this file after 118 cloning the git repo), execute: 119 120 ```sh 121 python setup.py install 122 ``` 123 124 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 125 126 127 ```sh 128 python -m pip install -e . --no-build-isolation --no-use-pep517 129 ``` 130 131 If you have `make`, you can also use `make develop` to run the same command. 132 133 or alternatively 134 135 ```sh 136 python setup.py develop 137 ``` 138 139 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 140 141 ## License 142 [BSD 3](LICENSE) 143 144 ## Documentation 145 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 146 147 ## Background 148 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 149 has been under active development since then. 150 151 ## Getting Help 152 153 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 154 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 155 156 ## Discussion and Development 157 Most development discussions take place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 158 159 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 160 161 All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome. 162 163 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub. 164 165 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 166 167 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. 
If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 168 169 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 170 171 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 172 173 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md) 174 [end of README.md] [start of asv_bench/benchmarks/rolling.py] 1 import numpy as np 2 3 import pandas as pd 4 5 6 class Methods: 7 8 params = ( 9 ["DataFrame", "Series"], 10 [10, 1000], 11 ["int", "float"], 12 ["median", "mean", "max", "min", "std", "count", "skew", "kurt", "sum"], 13 ) 14 param_names = ["constructor", "window", "dtype", "method"] 15 16 def setup(self, constructor, window, dtype, method): 17 N = 10 ** 5 18 arr = (100 * np.random.random(N)).astype(dtype) 19 self.roll = getattr(pd, constructor)(arr).rolling(window) 20 21 def time_rolling(self, constructor, window, dtype, method): 22 getattr(self.roll, method)() 23 24 def peakmem_rolling(self, constructor, window, dtype, method): 25 getattr(self.roll, method)() 26 27 28 class Apply: 29 params = ( 30 ["DataFrame", "Series"], 31 [3, 300], 32 ["int", "float"], 33 [sum, np.sum, lambda x: np.sum(x) + 5], 34 [True, False], 35 ) 36 param_names = ["constructor", "window", "dtype", "function", "raw"] 37 38 def setup(self, constructor, window, dtype, function, raw): 39 N = 10 ** 3 40 arr = (100 * np.random.random(N)).astype(dtype) 41 self.roll = getattr(pd, constructor)(arr).rolling(window) 42 43 def time_rolling(self, constructor, window, dtype, function, raw): 44 self.roll.apply(function, raw=raw) 45 46 47 class Engine: 48 params = ( 49 ["DataFrame", "Series"], 50 ["int", "float"], 51 [np.sum, lambda x: np.sum(x) + 5], 52 ["cython", "numba"], 53 ) 54 param_names = ["constructor", "dtype", "function", "engine"] 55 56 def setup(self, constructor, dtype, function, engine): 57 N = 10 ** 3 58 arr = (100 * np.random.random(N)).astype(dtype) 59 self.data = getattr(pd, constructor)(arr) 60 61 def time_rolling_apply(self, constructor, dtype, function, engine): 62 self.data.rolling(10).apply(function, raw=True, engine=engine) 63 64 def time_expanding_apply(self, constructor, dtype, function, engine): 65 self.data.expanding().apply(function, raw=True, engine=engine) 66 67 68 class ExpandingMethods: 69 70 params = ( 71 ["DataFrame", "Series"], 72 ["int", "float"], 73 ["median", "mean", "max", "min", "std", "count", "skew", "kurt", "sum"], 74 ) 75 param_names = ["constructor", "window", "dtype", "method"] 76 77 def setup(self, constructor, dtype, method): 78 N = 10 ** 5 79 N_groupby = 100 80 arr = (100 * np.random.random(N)).astype(dtype) 81 self.expanding = getattr(pd, constructor)(arr).expanding() 82 self.expanding_groupby = ( 83 pd.DataFrame({"A": arr[:N_groupby], "B": range(N_groupby)}) 84 .groupby("B") 85 .expanding() 86 ) 87 88 def time_expanding(self, constructor, dtype, method): 89 getattr(self.expanding, method)() 90 91 def time_expanding_groupby(self, constructor, dtype, method): 92 getattr(self.expanding_groupby, method)() 93 94 95 class EWMMethods: 96 97 params = (["DataFrame", 
"Series"], [10, 1000], ["int", "float"], ["mean", "std"]) 98 param_names = ["constructor", "window", "dtype", "method"] 99 100 def setup(self, constructor, window, dtype, method): 101 N = 10 ** 5 102 arr = (100 * np.random.random(N)).astype(dtype) 103 times = pd.date_range("1900", periods=N, freq="23s") 104 self.ewm = getattr(pd, constructor)(arr).ewm(halflife=window) 105 self.ewm_times = getattr(pd, constructor)(arr).ewm( 106 halflife="1 Day", times=times 107 ) 108 109 def time_ewm(self, constructor, window, dtype, method): 110 getattr(self.ewm, method)() 111 112 def time_ewm_times(self, constructor, window, dtype, method): 113 self.ewm.mean() 114 115 116 class VariableWindowMethods(Methods): 117 params = ( 118 ["DataFrame", "Series"], 119 ["50s", "1h", "1d"], 120 ["int", "float"], 121 ["median", "mean", "max", "min", "std", "count", "skew", "kurt", "sum"], 122 ) 123 param_names = ["constructor", "window", "dtype", "method"] 124 125 def setup(self, constructor, window, dtype, method): 126 N = 10 ** 5 127 arr = (100 * np.random.random(N)).astype(dtype) 128 index = pd.date_range("2017-01-01", periods=N, freq="5s") 129 self.roll = getattr(pd, constructor)(arr, index=index).rolling(window) 130 131 132 class Pairwise: 133 134 params = ([10, 1000, None], ["corr", "cov"], [True, False]) 135 param_names = ["window", "method", "pairwise"] 136 137 def setup(self, window, method, pairwise): 138 N = 10 ** 4 139 arr = np.random.random(N) 140 self.df = pd.DataFrame(arr) 141 142 def time_pairwise(self, window, method, pairwise): 143 if window is None: 144 r = self.df.expanding() 145 else: 146 r = self.df.rolling(window=window) 147 getattr(r, method)(self.df, pairwise=pairwise) 148 149 150 class Quantile: 151 params = ( 152 ["DataFrame", "Series"], 153 [10, 1000], 154 ["int", "float"], 155 [0, 0.5, 1], 156 ["linear", "nearest", "lower", "higher", "midpoint"], 157 ) 158 param_names = ["constructor", "window", "dtype", "percentile"] 159 160 def setup(self, constructor, window, dtype, percentile, interpolation): 161 N = 10 ** 5 162 arr = np.random.random(N).astype(dtype) 163 self.roll = getattr(pd, constructor)(arr).rolling(window) 164 165 def time_quantile(self, constructor, window, dtype, percentile, interpolation): 166 self.roll.quantile(percentile, interpolation=interpolation) 167 168 169 class PeakMemFixedWindowMinMax: 170 171 params = ["min", "max"] 172 173 def setup(self, operation): 174 N = int(1e6) 175 arr = np.random.random(N) 176 self.roll = pd.Series(arr).rolling(2) 177 178 def peakmem_fixed(self, operation): 179 for x in range(5): 180 getattr(self.roll, operation)() 181 182 183 class ForwardWindowMethods: 184 params = ( 185 ["DataFrame", "Series"], 186 [10, 1000], 187 ["int", "float"], 188 ["median", "mean", "max", "min", "kurt", "sum"], 189 ) 190 param_names = ["constructor", "window_size", "dtype", "method"] 191 192 def setup(self, constructor, window_size, dtype, method): 193 N = 10 ** 5 194 arr = np.random.random(N).astype(dtype) 195 indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=window_size) 196 self.roll = getattr(pd, constructor)(arr).rolling(window=indexer) 197 198 def time_rolling(self, constructor, window_size, dtype, method): 199 getattr(self.roll, method)() 200 201 def peakmem_rolling(self, constructor, window_size, dtype, method): 202 getattr(self.roll, method)() 203 204 205 class Groupby: 206 207 params = ["sum", "median", "mean", "max", "min", "kurt", "sum"] 208 209 def setup(self, method): 210 N = 1000 211 df = pd.DataFrame( 212 { 213 "A": [str(i) for i in 
range(N)] * 10, 214 "B": list(range(N)) * 10, 215 "C": pd.date_range(start="1900-01-01", freq="1min", periods=N * 10), 216 } 217 ) 218 self.groupby_roll_int = df.groupby("A").rolling(window=2) 219 self.groupby_roll_offset = df.groupby("A").rolling(window="30s", on="C") 220 221 def time_rolling_int(self, method): 222 getattr(self.groupby_roll_int, method)() 223 224 def time_rolling_offset(self, method): 225 getattr(self.groupby_roll_offset, method)() 226 227 228 class GroupbyEWM: 229 230 params = ["cython", "numba"] 231 param_names = ["engine"] 232 233 def setup(self, engine): 234 df = pd.DataFrame({"A": range(50), "B": range(50)}) 235 self.gb_ewm = df.groupby("A").ewm(com=1.0) 236 237 def time_groupby_mean(self, engine): 238 self.gb_ewm.mean(engine=engine) 239 240 241 from .pandas_vb_common import setup # noqa: F401 isort:skip 242 [end of asv_bench/benchmarks/rolling.py] [start of pandas/core/window/ewm.py] 1 import datetime 2 from functools import partial 3 from textwrap import dedent 4 from typing import TYPE_CHECKING, Optional, Union 5 6 import numpy as np 7 8 from pandas._libs.tslibs import Timedelta 9 import pandas._libs.window.aggregations as window_aggregations 10 from pandas._typing import FrameOrSeries, TimedeltaConvertibleTypes 11 from pandas.compat.numpy import function as nv 12 from pandas.util._decorators import Appender, Substitution, doc 13 14 from pandas.core.dtypes.common import is_datetime64_ns_dtype 15 16 import pandas.core.common as common 17 from pandas.core.util.numba_ import maybe_use_numba 18 from pandas.core.window.common import ( 19 _doc_template, 20 _shared_docs, 21 flex_binary_moment, 22 zsqrt, 23 ) 24 from pandas.core.window.indexers import ( 25 BaseIndexer, 26 ExponentialMovingWindowIndexer, 27 GroupbyIndexer, 28 ) 29 from pandas.core.window.numba_ import generate_numba_groupby_ewma_func 30 from pandas.core.window.rolling import BaseWindow, BaseWindowGroupby, dispatch 31 32 if TYPE_CHECKING: 33 from pandas import Series 34 35 36 _bias_template = """ 37 Parameters 38 ---------- 39 bias : bool, default False 40 Use a standard estimation bias correction. 41 *args, **kwargs 42 Arguments and keyword arguments to be passed into func. 43 """ 44 45 46 def get_center_of_mass( 47 comass: Optional[float], 48 span: Optional[float], 49 halflife: Optional[float], 50 alpha: Optional[float], 51 ) -> float: 52 valid_count = common.count_not_none(comass, span, halflife, alpha) 53 if valid_count > 1: 54 raise ValueError("comass, span, halflife, and alpha are mutually exclusive") 55 56 # Convert to center of mass; domain checks ensure 0 < alpha <= 1 57 if comass is not None: 58 if comass < 0: 59 raise ValueError("comass must satisfy: comass >= 0") 60 elif span is not None: 61 if span < 1: 62 raise ValueError("span must satisfy: span >= 1") 63 comass = (span - 1) / 2.0 64 elif halflife is not None: 65 if halflife <= 0: 66 raise ValueError("halflife must satisfy: halflife > 0") 67 decay = 1 - np.exp(np.log(0.5) / halflife) 68 comass = 1 / decay - 1 69 elif alpha is not None: 70 if alpha <= 0 or alpha > 1: 71 raise ValueError("alpha must satisfy: 0 < alpha <= 1") 72 comass = (1.0 - alpha) / alpha 73 else: 74 raise ValueError("Must pass one of comass, span, halflife, or alpha") 75 76 return float(comass) 77 78 79 def wrap_result(obj: "Series", result: np.ndarray) -> "Series": 80 """ 81 Wrap a single 1D result. 
82 """ 83 obj = obj._selected_obj 84 85 return obj._constructor(result, obj.index, name=obj.name) 86 87 88 class ExponentialMovingWindow(BaseWindow): 89 r""" 90 Provide exponential weighted (EW) functions. 91 92 Available EW functions: ``mean()``, ``var()``, ``std()``, ``corr()``, ``cov()``. 93 94 Exactly one parameter: ``com``, ``span``, ``halflife``, or ``alpha`` must be 95 provided. 96 97 Parameters 98 ---------- 99 com : float, optional 100 Specify decay in terms of center of mass, 101 :math:`\alpha = 1 / (1 + com)`, for :math:`com \geq 0`. 102 span : float, optional 103 Specify decay in terms of span, 104 :math:`\alpha = 2 / (span + 1)`, for :math:`span \geq 1`. 105 halflife : float, str, timedelta, optional 106 Specify decay in terms of half-life, 107 :math:`\alpha = 1 - \exp\left(-\ln(2) / halflife\right)`, for 108 :math:`halflife > 0`. 109 110 If ``times`` is specified, the time unit (str or timedelta) over which an 111 observation decays to half its value. Only applicable to ``mean()`` 112 and halflife value will not apply to the other functions. 113 114 .. versionadded:: 1.1.0 115 116 alpha : float, optional 117 Specify smoothing factor :math:`\alpha` directly, 118 :math:`0 < \alpha \leq 1`. 119 min_periods : int, default 0 120 Minimum number of observations in window required to have a value 121 (otherwise result is NA). 122 adjust : bool, default True 123 Divide by decaying adjustment factor in beginning periods to account 124 for imbalance in relative weightings (viewing EWMA as a moving average). 125 126 - When ``adjust=True`` (default), the EW function is calculated using weights 127 :math:`w_i = (1 - \alpha)^i`. For example, the EW moving average of the series 128 [:math:`x_0, x_1, ..., x_t`] would be: 129 130 .. math:: 131 y_t = \frac{x_t + (1 - \alpha)x_{t-1} + (1 - \alpha)^2 x_{t-2} + ... + (1 - 132 \alpha)^t x_0}{1 + (1 - \alpha) + (1 - \alpha)^2 + ... + (1 - \alpha)^t} 133 134 - When ``adjust=False``, the exponentially weighted function is calculated 135 recursively: 136 137 .. math:: 138 \begin{split} 139 y_0 &= x_0\\ 140 y_t &= (1 - \alpha) y_{t-1} + \alpha x_t, 141 \end{split} 142 ignore_na : bool, default False 143 Ignore missing values when calculating weights; specify ``True`` to reproduce 144 pre-0.15.0 behavior. 145 146 - When ``ignore_na=False`` (default), weights are based on absolute positions. 147 For example, the weights of :math:`x_0` and :math:`x_2` used in calculating 148 the final weighted average of [:math:`x_0`, None, :math:`x_2`] are 149 :math:`(1-\alpha)^2` and :math:`1` if ``adjust=True``, and 150 :math:`(1-\alpha)^2` and :math:`\alpha` if ``adjust=False``. 151 152 - When ``ignore_na=True`` (reproducing pre-0.15.0 behavior), weights are based 153 on relative positions. For example, the weights of :math:`x_0` and :math:`x_2` 154 used in calculating the final weighted average of 155 [:math:`x_0`, None, :math:`x_2`] are :math:`1-\alpha` and :math:`1` if 156 ``adjust=True``, and :math:`1-\alpha` and :math:`\alpha` if ``adjust=False``. 157 axis : {0, 1}, default 0 158 The axis to use. The value 0 identifies the rows, and 1 159 identifies the columns. 160 times : str, np.ndarray, Series, default None 161 162 .. versionadded:: 1.1.0 163 164 Times corresponding to the observations. Must be monotonically increasing and 165 ``datetime64[ns]`` dtype. 166 167 If str, the name of the column in the DataFrame representing the times. 168 169 If 1-D array like, a sequence with the same shape as the observations. 170 171 Only applicable to ``mean()``. 
172 173 Returns 174 ------- 175 DataFrame 176 A Window sub-classed for the particular operation. 177 178 See Also 179 -------- 180 rolling : Provides rolling window calculations. 181 expanding : Provides expanding transformations. 182 183 Notes 184 ----- 185 186 More details can be found at: 187 :ref:`Exponentially weighted windows <window.exponentially_weighted>`. 188 189 Examples 190 -------- 191 >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]}) 192 >>> df 193 B 194 0 0.0 195 1 1.0 196 2 2.0 197 3 NaN 198 4 4.0 199 200 >>> df.ewm(com=0.5).mean() 201 B 202 0 0.000000 203 1 0.750000 204 2 1.615385 205 3 1.615385 206 4 3.670213 207 208 Specifying ``times`` with a timedelta ``halflife`` when computing mean. 209 210 >>> times = ['2020-01-01', '2020-01-03', '2020-01-10', '2020-01-15', '2020-01-17'] 211 >>> df.ewm(halflife='4 days', times=pd.DatetimeIndex(times)).mean() 212 B 213 0 0.000000 214 1 0.585786 215 2 1.523889 216 3 1.523889 217 4 3.233686 218 """ 219 220 _attributes = ["com", "min_periods", "adjust", "ignore_na", "axis"] 221 222 def __init__( 223 self, 224 obj, 225 com: Optional[float] = None, 226 span: Optional[float] = None, 227 halflife: Optional[Union[float, TimedeltaConvertibleTypes]] = None, 228 alpha: Optional[float] = None, 229 min_periods: int = 0, 230 adjust: bool = True, 231 ignore_na: bool = False, 232 axis: int = 0, 233 times: Optional[Union[str, np.ndarray, FrameOrSeries]] = None, 234 **kwargs, 235 ): 236 self.obj = obj 237 self.min_periods = max(int(min_periods), 1) 238 self.adjust = adjust 239 self.ignore_na = ignore_na 240 self.axis = axis 241 self.on = None 242 self.center = False 243 self.closed = None 244 if times is not None: 245 if isinstance(times, str): 246 times = self._selected_obj[times] 247 if not is_datetime64_ns_dtype(times): 248 raise ValueError("times must be datetime64[ns] dtype.") 249 if len(times) != len(obj): 250 raise ValueError("times must be the same length as the object.") 251 if not isinstance(halflife, (str, datetime.timedelta)): 252 raise ValueError( 253 "halflife must be a string or datetime.timedelta object" 254 ) 255 self.times = np.asarray(times.astype(np.int64)) 256 self.halflife = Timedelta(halflife).value 257 # Halflife is no longer applicable when calculating COM 258 # But allow COM to still be calculated if the user passes other decay args 259 if common.count_not_none(com, span, alpha) > 0: 260 self.com = get_center_of_mass(com, span, None, alpha) 261 else: 262 self.com = 0.0 263 else: 264 if halflife is not None and isinstance(halflife, (str, datetime.timedelta)): 265 raise ValueError( 266 "halflife can only be a timedelta convertible argument if " 267 "times is not None." 
268 ) 269 self.times = None 270 self.halflife = None 271 self.com = get_center_of_mass(com, span, halflife, alpha) 272 273 @property 274 def _constructor(self): 275 return ExponentialMovingWindow 276 277 def _get_window_indexer(self) -> BaseIndexer: 278 """ 279 Return an indexer class that will compute the window start and end bounds 280 """ 281 return ExponentialMovingWindowIndexer() 282 283 _agg_see_also_doc = dedent( 284 """ 285 See Also 286 -------- 287 pandas.DataFrame.rolling.aggregate 288 """ 289 ) 290 291 _agg_examples_doc = dedent( 292 """ 293 Examples 294 -------- 295 >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}) 296 >>> df 297 A B C 298 0 1 4 7 299 1 2 5 8 300 2 3 6 9 301 302 >>> df.ewm(alpha=0.5).mean() 303 A B C 304 0 1.000000 4.000000 7.000000 305 1 1.666667 4.666667 7.666667 306 2 2.428571 5.428571 8.428571 307 """ 308 ) 309 310 @doc( 311 _shared_docs["aggregate"], 312 see_also=_agg_see_also_doc, 313 examples=_agg_examples_doc, 314 klass="Series/Dataframe", 315 axis="", 316 ) 317 def aggregate(self, func, *args, **kwargs): 318 return super().aggregate(func, *args, **kwargs) 319 320 agg = aggregate 321 322 @Substitution(name="ewm", func_name="mean") 323 @Appender(_doc_template) 324 def mean(self, *args, **kwargs): 325 """ 326 Exponential weighted moving average. 327 328 Parameters 329 ---------- 330 *args, **kwargs 331 Arguments and keyword arguments to be passed into func. 332 """ 333 nv.validate_window_func("mean", args, kwargs) 334 if self.times is not None: 335 window_func = self._get_roll_func("ewma_time") 336 window_func = partial( 337 window_func, 338 times=self.times, 339 halflife=self.halflife, 340 ) 341 else: 342 window_func = self._get_roll_func("ewma") 343 window_func = partial( 344 window_func, 345 com=self.com, 346 adjust=self.adjust, 347 ignore_na=self.ignore_na, 348 ) 349 return self._apply(window_func) 350 351 @Substitution(name="ewm", func_name="std") 352 @Appender(_doc_template) 353 @Appender(_bias_template) 354 def std(self, bias: bool = False, *args, **kwargs): 355 """ 356 Exponential weighted moving stddev. 357 """ 358 nv.validate_window_func("std", args, kwargs) 359 return zsqrt(self.var(bias=bias, **kwargs)) 360 361 vol = std 362 363 @Substitution(name="ewm", func_name="var") 364 @Appender(_doc_template) 365 @Appender(_bias_template) 366 def var(self, bias: bool = False, *args, **kwargs): 367 """ 368 Exponential weighted moving variance. 369 """ 370 nv.validate_window_func("var", args, kwargs) 371 window_func = self._get_roll_func("ewmcov") 372 window_func = partial( 373 window_func, 374 com=self.com, 375 adjust=self.adjust, 376 ignore_na=self.ignore_na, 377 bias=bias, 378 ) 379 380 def var_func(values, begin, end, min_periods): 381 return window_func(values, begin, end, min_periods, values) 382 383 return self._apply(var_func) 384 385 @Substitution(name="ewm", func_name="cov") 386 @Appender(_doc_template) 387 def cov( 388 self, 389 other: Optional[Union[np.ndarray, FrameOrSeries]] = None, 390 pairwise: Optional[bool] = None, 391 bias: bool = False, 392 **kwargs, 393 ): 394 """ 395 Exponential weighted sample covariance. 396 397 Parameters 398 ---------- 399 other : Series, DataFrame, or ndarray, optional 400 If not supplied then will default to self and produce pairwise 401 output. 402 pairwise : bool, default None 403 If False then only matching columns between self and other will be 404 used and the output will be a DataFrame. 
405 If True then all pairwise combinations will be calculated and the 406 output will be a MultiIndex DataFrame in the case of DataFrame 407 inputs. In the case of missing elements, only complete pairwise 408 observations will be used. 409 bias : bool, default False 410 Use a standard estimation bias correction. 411 **kwargs 412 Keyword arguments to be passed into func. 413 """ 414 if other is None: 415 other = self._selected_obj 416 # only default unset 417 pairwise = True if pairwise is None else pairwise 418 other = self._shallow_copy(other) 419 420 def _get_cov(X, Y): 421 X = self._shallow_copy(X) 422 Y = self._shallow_copy(Y) 423 cov = window_aggregations.ewmcov( 424 X._prep_values(), 425 np.array([0], dtype=np.int64), 426 np.array([0], dtype=np.int64), 427 self.min_periods, 428 Y._prep_values(), 429 self.com, 430 self.adjust, 431 self.ignore_na, 432 bias, 433 ) 434 return wrap_result(X, cov) 435 436 return flex_binary_moment( 437 self._selected_obj, other._selected_obj, _get_cov, pairwise=bool(pairwise) 438 ) 439 440 @Substitution(name="ewm", func_name="corr") 441 @Appender(_doc_template) 442 def corr( 443 self, 444 other: Optional[Union[np.ndarray, FrameOrSeries]] = None, 445 pairwise: Optional[bool] = None, 446 **kwargs, 447 ): 448 """ 449 Exponential weighted sample correlation. 450 451 Parameters 452 ---------- 453 other : Series, DataFrame, or ndarray, optional 454 If not supplied then will default to self and produce pairwise 455 output. 456 pairwise : bool, default None 457 If False then only matching columns between self and other will be 458 used and the output will be a DataFrame. 459 If True then all pairwise combinations will be calculated and the 460 output will be a MultiIndex DataFrame in the case of DataFrame 461 inputs. In the case of missing elements, only complete pairwise 462 observations will be used. 463 **kwargs 464 Keyword arguments to be passed into func. 465 """ 466 if other is None: 467 other = self._selected_obj 468 # only default unset 469 pairwise = True if pairwise is None else pairwise 470 other = self._shallow_copy(other) 471 472 def _get_corr(X, Y): 473 X = self._shallow_copy(X) 474 Y = self._shallow_copy(Y) 475 476 def _cov(x, y): 477 return window_aggregations.ewmcov( 478 x, 479 np.array([0], dtype=np.int64), 480 np.array([0], dtype=np.int64), 481 self.min_periods, 482 y, 483 self.com, 484 self.adjust, 485 self.ignore_na, 486 1, 487 ) 488 489 x_values = X._prep_values() 490 y_values = Y._prep_values() 491 with np.errstate(all="ignore"): 492 cov = _cov(x_values, y_values) 493 x_var = _cov(x_values, x_values) 494 y_var = _cov(y_values, y_values) 495 corr = cov / zsqrt(x_var * y_var) 496 return wrap_result(X, corr) 497 498 return flex_binary_moment( 499 self._selected_obj, other._selected_obj, _get_corr, pairwise=bool(pairwise) 500 ) 501 502 503 class ExponentialMovingWindowGroupby(BaseWindowGroupby, ExponentialMovingWindow): 504 """ 505 Provide an exponential moving window groupby implementation. 
506 """ 507 508 def _get_window_indexer(self) -> GroupbyIndexer: 509 """ 510 Return an indexer class that will compute the window start and end bounds 511 512 Returns 513 ------- 514 GroupbyIndexer 515 """ 516 window_indexer = GroupbyIndexer( 517 groupby_indicies=self._groupby.indices, 518 window_indexer=ExponentialMovingWindowIndexer, 519 ) 520 return window_indexer 521 522 var = dispatch("var", bias=False) 523 std = dispatch("std", bias=False) 524 cov = dispatch("cov", other=None, pairwise=None, bias=False) 525 corr = dispatch("corr", other=None, pairwise=None) 526 527 def mean(self, engine=None, engine_kwargs=None): 528 """ 529 Parameters 530 ---------- 531 engine : str, default None 532 * ``'cython'`` : Runs mean through C-extensions from cython. 533 * ``'numba'`` : Runs mean through JIT compiled code from numba. 534 Only available when ``raw`` is set to ``True``. 535 * ``None`` : Defaults to ``'cython'`` or globally setting 536 ``compute.use_numba`` 537 538 .. versionadded:: 1.2.0 539 540 engine_kwargs : dict, default None 541 * For ``'cython'`` engine, there are no accepted ``engine_kwargs`` 542 * For ``'numba'`` engine, the engine can accept ``nopython``, ``nogil`` 543 and ``parallel`` dictionary keys. The values must either be ``True`` or 544 ``False``. The default ``engine_kwargs`` for the ``'numba'`` engine is 545 ``{'nopython': True, 'nogil': False, 'parallel': False}``. 546 547 .. versionadded:: 1.2.0 548 549 Returns 550 ------- 551 Series or DataFrame 552 Return type is determined by the caller. 553 """ 554 if maybe_use_numba(engine): 555 groupby_ewma_func = generate_numba_groupby_ewma_func( 556 engine_kwargs, 557 self.com, 558 self.adjust, 559 self.ignore_na, 560 ) 561 return self._apply( 562 groupby_ewma_func, 563 numba_cache_key=(lambda x: x, "groupby_ewma"), 564 ) 565 elif engine in ("cython", None): 566 if engine_kwargs is not None: 567 raise ValueError("cython engine does not accept engine_kwargs") 568 569 def f(x): 570 x = self._shallow_copy(x, groupby=self._groupby) 571 return x.mean() 572 573 return self._groupby.apply(f) 574 else: 575 raise ValueError("engine must be either 'numba' or 'cython'") 576 [end of pandas/core/window/ewm.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
066a16c164fd83745888f8c67b3988c55d10ce87
REGR: Performance regression on RollingGroupby

`pd.DataFrame({'a': np.random.randn(10000000), 'b': 1}).groupby('b').rolling(3).mean()` is approximately 10x slower between 1.0.5 and 1.1.x
@cpmbailey thanks for the report! I can confirm this; for me, the snippet (without the dataframe creation) takes 2s on pandas 1.0.5 and 20s on pandas 1.1.4. @mroeschke do you know if this is something "known"? From looking at the profile, it seems now the `_apply` is taking a lot more time, and it is https://github.com/pandas-dev/pandas/pull/34052 that introduced a RollingGroupby specific `_apply` implementation (although the PR indicates it was for performance reasons) And it seems that on master it slowed down a bit further Looking at a profile, it seems that most of the time comes from constructing the MultiIndex. Illustrating this with the `line_profiler`: ``` In [8]: df = pd.DataFrame({'a': np.random.randn(10000000), 'b': 1}) In [9]: %lprun -f pd.core.window.rolling.RollingGroupby._apply df.groupby('b').rolling(3).mean() Total time: 91.4188 s File: /home/joris/scipy/pandas/pandas/core/window/rolling.py Function: _apply at line 758 Line # Hits Time Per Hit % Time Line Contents ============================================================== 758 def _apply( 759 self, 760 func: Callable[..., Any], 761 name: Optional[str] = None, 762 numba_cache_key: Optional[Tuple[Callable, str]] = None, 763 **kwargs, 764 ) -> FrameOrSeries: 765 1 6.0 6.0 0.0 result = super()._apply( 766 1 3.0 3.0 0.0 func, 767 1 2.0 2.0 0.0 name, 768 1 2.0 2.0 0.0 numba_cache_key, 769 1 1793069.0 1793069.0 2.0 **kwargs, 770 ) 771 # Reconstruct the resulting MultiIndex from tuples 772 # 1st set of levels = group by labels 773 # 2nd set of levels = original index 774 # Ignore 2nd set of levels if a group by label include an index level 775 result_index_names = [ 776 1 11.0 11.0 0.0 grouping.name for grouping in self._groupby.grouper._groupings 777 ] 778 1 1.0 1.0 0.0 grouped_object_index = None 779 780 column_keys = [ 781 1 2.0 2.0 0.0 key 782 1 14.0 14.0 0.0 for key in result_index_names 783 if key not in self.obj.index.names or key is None 784 ] 785 786 1 3.0 3.0 0.0 if len(column_keys) == len(result_index_names): 787 1 1.0 1.0 0.0 grouped_object_index = self.obj.index 788 1 3.0 3.0 0.0 grouped_index_name = [*grouped_object_index.names] 789 1 1.0 1.0 0.0 result_index_names += grouped_index_name 790 else: 791 # Our result will have still kept the column in the result 792 result = result.drop(columns=column_keys, errors="ignore") 793 794 1 1.0 1.0 0.0 result_index_data = [] 795 2 8.0 4.0 0.0 for key, values in self._groupby.grouper.indices.items(): 796 10000001 10311686.0 1.0 11.3 for value in values: 797 data = [ 798 10000000 15486335.0 1.5 16.9 *com.maybe_make_list(key), 799 10000000 8523837.0 0.9 9.3 *com.maybe_make_list( 800 grouped_object_index[value] 801 10000000 34071689.0 3.4 37.3 if grouped_object_index is not None 802 else [] 803 ), 804 ] 805 10000000 10521752.0 1.1 11.5 result_index_data.append(tuple(data)) 806 807 1 7.0 7.0 0.0 result_index = MultiIndex.from_tuples( 808 1 10705244.0 10705244.0 11.7 result_index_data, names=result_index_names 809 ) 810 1 5165.0 5165.0 0.0 result.index = result_index 811 1 4.0 4.0 0.0 return resul ``` (the profiler adds a lot of overhead (ca x3/4 in total time), so the relative numbers are not necessarily reliable, but the overall picture is certainly interesting) Edit: Messed something up. MultiIndex takes way longer ``` pd.DataFrame({'a': np.random.randn(10000000), 'b': 1, "c": 1, "d": 1}).set_index(['b', "c"]).groupby("d").rolling(3).mean() ``` takes 40 Seconds on my machine, the other example took 18-19. 
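To isolate the cost the profile points at, here is a small standalone comparison (not part of the original report) of building a `MultiIndex` from per-row tuples versus from whole arrays. The row count, level names and the use of a plain position array are illustrative assumptions; the point is simply that the tuple route pays a per-row Python cost that the array route avoids.

```python
import timeit

import numpy as np
import pandas as pd

n = 1_000_000  # the report uses 10_000_000 rows; a tenth is enough to see the gap

keys = np.ones(n, dtype=np.int64)         # the single group label, repeated per row
positions = np.arange(n, dtype=np.int64)  # stand-in for the original RangeIndex


def build_from_tuples():
    # Roughly what the regressed code path does: one Python tuple per result row.
    return pd.MultiIndex.from_tuples(list(zip(keys, positions)), names=["b", None])


def build_from_arrays():
    # The suggested alternative: hand over the two levels as whole arrays.
    return pd.MultiIndex.from_arrays([keys, positions], names=["b", None])


print("from_tuples:", timeit.timeit(build_from_tuples, number=1))
print("from_arrays:", timeit.timeit(build_from_arrays, number=1))
```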
My initial performance test case was only using 1000 points, unlike this issue's example where 10000000 points are used: https://github.com/pandas-dev/pandas/pull/34052#issuecomment-631220187 I didn't anticipate it, but it makes sense that at this scale the creation of the resulting MultiIndex is dominating the timings.

We could maybe avoid the inner loop, but that only reduces the time by about 50%, which does not get us anywhere near 2 seconds.

You can likely use `MultiIndex.from_arrays` here.

Indeed, my guess is that we should be able to reduce most of the time taken by the index creation by avoiding creating all the tuples.
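Following on from the `MultiIndex.from_arrays` suggestion, a rough sketch of a tuple-free construction for a grouped-rolling result index might look like the following. It uses only public pandas APIs and made-up variable names, so it is an illustration of the idea rather than the eventual implementation (which works on the grouper's internal codes and levels, as the patch below shows).

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.random.randn(10), "b": [1, 2] * 5})
grouped = df.groupby("b")

# Row positions ordered group by group - the order of the rolling result.
indexer = np.concatenate(list(grouped.indices.values()))

# First level: the group label for each row, in that same order.
group_labels = df["b"].to_numpy()[indexer]

# Second level: the original index, reordered the same way.
original_positions = df.index.to_numpy()[indexer]

result_index = pd.MultiIndex.from_arrays(
    [group_labels, original_positions], names=["b", None]
)
print(result_index[:4])
```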
2020-11-25T09:03:16Z
<patch> diff --git a/asv_bench/benchmarks/rolling.py b/asv_bench/benchmarks/rolling.py --- a/asv_bench/benchmarks/rolling.py +++ b/asv_bench/benchmarks/rolling.py @@ -225,6 +225,20 @@ def time_rolling_offset(self, method): getattr(self.groupby_roll_offset, method)() +class GroupbyLargeGroups: + # https://github.com/pandas-dev/pandas/issues/38038 + # specific example where the rolling operation on a larger dataframe + # is relatively cheap (few but large groups), but creation of + # MultiIndex of result can be expensive + + def setup(self): + N = 100000 + self.df = pd.DataFrame({"A": [1, 2] * int(N / 2), "B": np.random.randn(N)}) + + def time_rolling_multiindex_creation(self): + self.df.groupby("A").rolling(3).mean() + + class GroupbyEWM: params = ["cython", "numba"] diff --git a/doc/source/whatsnew/v1.1.5.rst b/doc/source/whatsnew/v1.1.5.rst --- a/doc/source/whatsnew/v1.1.5.rst +++ b/doc/source/whatsnew/v1.1.5.rst @@ -24,6 +24,7 @@ Fixed regressions - Fixed regression in ``df.groupby(..).rolling(..)`` with the resulting :class:`MultiIndex` when grouping by a label that is in the index (:issue:`37641`) - Fixed regression in :meth:`DataFrame.fillna` not filling ``NaN`` after other operations such as :meth:`DataFrame.pivot` (:issue:`36495`). - Fixed performance regression for :meth:`DataFrame.__setitem__` with list-like indexers (:issue:`37954`) +- Fixed performance regression in ``df.groupby(..).rolling(..)`` (:issue:`38038`) - Fixed regression in :meth:`MultiIndex.intersection` returning duplicates when at least one of the indexes had duplicates (:issue:`36915`) .. --------------------------------------------------------------------------- diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py --- a/pandas/core/window/rolling.py +++ b/pandas/core/window/rolling.py @@ -50,7 +50,6 @@ from pandas.core.aggregation import aggregate from pandas.core.base import DataError, SelectionMixin -import pandas.core.common as com from pandas.core.construction import extract_array from pandas.core.groupby.base import GotItemMixin, ShallowMixin from pandas.core.indexes.api import Index, MultiIndex @@ -791,22 +790,29 @@ def _apply( # Our result will have still kept the column in the result result = result.drop(columns=column_keys, errors="ignore") - result_index_data = [] - for key, values in self._groupby.grouper.indices.items(): - for value in values: - data = [ - *com.maybe_make_list(key), - *com.maybe_make_list( - grouped_object_index[value] - if grouped_object_index is not None - else [] - ), - ] - result_index_data.append(tuple(data)) - - result_index = MultiIndex.from_tuples( - result_index_data, names=result_index_names + codes = self._groupby.grouper.codes + levels = self._groupby.grouper.levels + + group_indices = self._groupby.grouper.indices.values() + if group_indices: + indexer = np.concatenate(list(group_indices)) + else: + indexer = np.array([], dtype=np.intp) + codes = [c.take(indexer) for c in codes] + + # if the index of the original dataframe needs to be preserved, append + # this index (but reordered) to the codes/levels from the groupby + if grouped_object_index is not None: + idx = grouped_object_index.take(indexer) + if not isinstance(idx, MultiIndex): + idx = MultiIndex.from_arrays([idx]) + codes.extend(list(idx.codes)) + levels.extend(list(idx.levels)) + + result_index = MultiIndex( + levels, codes, names=result_index_names, verify_integrity=False ) + result.index = result_index return result </patch>
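As a quick, informal check of the user-facing call from the report (separate from the `GroupbyLargeGroups` asv benchmark the patch adds), something like the following can be run once on an unpatched checkout and once with the patch applied; absolute timings will of course vary by machine.

```python
import timeit

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.random.randn(10_000_000), "b": 1})

# Time the exact call from the report once; compare before and after the patch.
print(timeit.timeit(lambda: df.groupby("b").rolling(3).mean(), number=1))
```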
[]
[]
wagtail__wagtail-9393
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Feature request: page editor minimap ### Is your proposal related to a problem? Long page editing UIs are tedious to navigate. ### Describe the solution you'd like The minimap is a proposed feature from the [page editor 2022](https://github.com/wagtail/wagtail/discussions/7739) project, to help with navigation of those pages. We’re currently [looking for sponsors or contributors](https://wagtail.org/blog/build-update-on-new-page-editor/) to help deliver the minimap. This new component serves different use cases: - For mouse users, the minimap presents all of the page’s sections (panels, potentially blocks, potentially fields) as a table of contents that can be reached at any point - For keyboard users, the minimap functions as a set of skip links – to jump straight to any part of the page - For sighted users, the minimap always displays a visual indicator of the user’s current position through the page structure - When opened, it also has a button to collapse all of the page’s sections at once. Here is what it looks like in practice: ![Image](https://user-images.githubusercontent.com/877585/173399135-edebeb20-88af-47a7-9ecd-7f6234c9433f.gif) [Figma design link](https://www.figma.com/file/h67EsVXdWsfu38WGGxWfpi/Wagtail-Design-System?node-id=4859%3A44694) ### Additional context - Expecting this to be built based on our upcoming refactor of field panels, to use a more semantic document outline, as well as having the ability to link to individual section headings - The collapsing is a feature of individual sections – "collapse all" just triggers the action for all of them at once. - The currently-active (visible?) sections of the page are shown in the minimap, using the IntersectionObserver API </issue> <code> [start of README.md] 1 <h1 align="center"> 2 <img width="343" src=".github/wagtail.svg#gh-light-mode-only" alt="Wagtail"> 3 <img width="343" src=".github/wagtail-inverse.svg#gh-dark-mode-only" alt="Wagtail"> 4 </h1> 5 <p align="center"> 6 <br> 7 <a href="https://github.com/wagtail/wagtail/actions"> 8 <img src="https://github.com/wagtail/wagtail/workflows/Wagtail%20CI/badge.svg" alt="Build Status" /> 9 </a> 10 <a href="https://opensource.org/licenses/BSD-3-Clause"> 11 <img src="https://img.shields.io/badge/license-BSD-blue.svg" alt="License" /> 12 </a> 13 <a href="https://pypi.python.org/pypi/wagtail/"> 14 <img src="https://img.shields.io/pypi/v/wagtail.svg" alt="Version" /> 15 </a> 16 <a href="https://lgtm.com/projects/g/wagtail/wagtail/alerts/"> 17 <img src="https://img.shields.io/lgtm/alerts/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Total alerts" /> 18 </a> 19 <a href="https://lgtm.com/projects/g/wagtail/wagtail/context:python"> 20 <img src="https://img.shields.io/lgtm/grade/python/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Language grade: Python" /> 21 </a> 22 <a href="https://lgtm.com/projects/g/wagtail/wagtail/context:javascript"> 23 <img src="https://img.shields.io/lgtm/grade/javascript/g/wagtail/wagtail.svg?logo=lgtm&logoWidth=18" alt="Language grade: JavaScript" /> 24 </a> 25 <a href="https://pypi.python.org/pypi/wagtail/"> 26 <img src="https://img.shields.io/pypi/dm/wagtail?logo=Downloads" alt="Monthly downloads" /> 27 </a> 28 <a href="https://twitter.com/WagtailCMS"> 29 <img src="https://img.shields.io/twitter/follow/WagtailCMS?style=social&logo=twitter" alt="follow on Twitter"> 30 </a> 31 </p> 32 33 Wagtail is an open source content management system 
built on Django, with a strong community and commercial support. It's focused on user experience, and offers precise control for designers and developers. 34 35 ![Wagtail screenshot](https://cdn.jsdelivr.net/gh/wagtail/wagtail@main/.github/wagtail-screenshot-with-browser.png) 36 37 ### 🔥 Features 38 39 - A fast, attractive interface for authors 40 - Complete control over front-end design and structure 41 - Scales to millions of pages and thousands of editors 42 - Fast out of the box, cache-friendly when you need it 43 - Content API for 'headless' sites with de-coupled front-end 44 - Runs on a Raspberry Pi or a multi-datacenter cloud platform 45 - StreamField encourages flexible content without compromising structure 46 - Powerful, integrated search, using Elasticsearch or PostgreSQL 47 - Excellent support for images and embedded content 48 - Multi-site and multi-language ready 49 - Embraces and extends Django 50 51 Find out more at [wagtail.org](https://wagtail.org/). 52 53 ### 👉 Getting started 54 55 Wagtail works with [Python 3](https://www.python.org/downloads/), on any platform. 56 57 To get started with using Wagtail, run the following in a virtual environment: 58 59 ![Installing Wagtail](.github/install-animation.gif) 60 61 ```sh 62 pip install wagtail 63 wagtail start mysite 64 cd mysite 65 pip install -r requirements.txt 66 python manage.py migrate 67 python manage.py createsuperuser 68 python manage.py runserver 69 ``` 70 71 For detailed installation and setup docs, see [the getting started tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html). 72 73 ### 👨‍👩‍👧‍👦 Who’s using it? 74 75 Wagtail is used by [NASA](https://www.nasa.gov/), [Google](https://www.google.com/), [Oxfam](https://www.oxfam.org/en), the [NHS](https://www.nhs.uk/), [Mozilla](https://www.mozilla.org/en-US/), [MIT](https://www.mit.edu/), the [Red Cross](https://www.icrc.org/en), [Salesforce](https://www.salesforce.com/), [NBC](https://www.nbc.com/), [BMW](https://www.bmw.com/en/index.html), and the US and UK governments. Add your own Wagtail site to [madewithwagtail.org](https://madewithwagtail.org). 76 77 ### 📖 Documentation 78 79 [docs.wagtail.org](https://docs.wagtail.org/) is the full reference for Wagtail, and includes guides for developers, designers and editors, alongside release notes and our roadmap. 80 81 For those who are **new to Wagtail**, the [Zen of Wagtail](https://docs.wagtail.org/en/stable/getting_started/the_zen_of_wagtail.html) will help you understand what Wagtail is, and what Wagtail is _not_. 82 83 **For developers** who are ready to jump in to their first Wagtail website the [Getting Started Tutorial](https://docs.wagtail.org/en/stable/getting_started/tutorial.html) will guide you through creating and editing your first page. 84 85 **Do you have an existing Django project?** The [Wagtail Integration documentation](https://docs.wagtail.org/en/stable/getting_started/integrating_into_django.html) is the best place to start. 
86 87 ### 📌 Compatibility 88 89 _(If you are reading this on GitHub, the details here may not be indicative of the current released version - please see [Compatible Django / Python versions](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) in the Wagtail documentation.)_ 90 91 Wagtail supports: 92 93 - Django 3.2.x, 4.0.x and 4.1.x 94 - Python 3.7, 3.8, 3.9 and 3.10 95 - PostgreSQL, MySQL and SQLite (with JSON1) as database backends 96 97 [Previous versions of Wagtail](https://docs.wagtail.org/en/stable/releases/upgrading.html#compatible-django-python-versions) additionally supported Python 2.7 and earlier Django versions. 98 99 --- 100 101 ### 📢 Community Support 102 103 There is an active community of Wagtail users and developers responding to questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/wagtail). When posting questions, please read Stack Overflow's advice on [how to ask questions](https://stackoverflow.com/help/how-to-ask) and remember to tag your question "wagtail". 104 105 For topics and discussions that do not fit Stack Overflow's question and answer format we have a [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack). Please respect the time and effort of volunteers by not asking the same question in multiple places. 106 107 [![Join slack community](.github/join-slack-community.png)](https://github.com/wagtail/wagtail/wiki/Slack) 108 109 Our [Github discussion boards](https://github.com/wagtail/wagtail/discussions) are open for sharing ideas and plans for the Wagtail project. 110 111 We maintain a curated list of third party packages, articles and other resources at [Awesome Wagtail](https://github.com/springload/awesome-wagtail). 112 113 ### 🧑‍💼 Commercial Support 114 115 Wagtail is sponsored by [Torchbox](https://torchbox.com/). If you need help implementing or hosting Wagtail, please contact us: [email protected]. See also [madewithwagtail.org/developers/](https://madewithwagtail.org/developers/) for expert Wagtail developers around the world. 116 117 ### 🔐 Security 118 119 We take the security of Wagtail, and related packages we maintain, seriously. If you have found a security issue with any of our projects please email us at [[email protected]](mailto:[email protected]) so we can work together to find and patch the issue. We appreciate responsible disclosure with any security related issues, so please contact us first before creating a Github issue. 120 121 If you want to send an encrypted email (optional), the public key ID for [email protected] is 0xbed227b4daf93ff9, and this public key is available from most commonly-used keyservers. 122 123 ### 🕒 Release schedule 124 125 Feature releases of Wagtail are released every three months. Selected releases are designated as Long Term Support (LTS) releases, and will receive maintenance updates for an extended period to address any security and data-loss related issues. For dates of past and upcoming releases and support periods, see [Release Schedule](https://github.com/wagtail/wagtail/wiki/Release-schedule). 126 127 #### 🕛 Nightly releases 128 129 To try out the latest features before a release, we also create builds from `main` every night. You can find instructions on how to install the latest nightly release at https://releases.wagtail.org/nightly/index.html 130 131 ### 🙋🏽 Contributing 132 133 If you're a Python or Django developer, fork the repo and get stuck in! 
We have several developer focused channels on the [Slack workspace](https://github.com/wagtail/wagtail/wiki/Slack). 134 135 You might like to start by reviewing the [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html) and checking issues with the [good first issue](https://github.com/wagtail/wagtail/labels/good%20first%20issue) label. 136 137 We also welcome translations for Wagtail's interface. Translation work should be submitted through [Transifex](https://explore.transifex.com/torchbox/wagtail/). 138 139 ### 🔓 License 140 141 [BSD](https://github.com/wagtail/wagtail/blob/main/LICENSE) - Free to use and modify for any purpose, including both open and closed-source code. 142 143 ### 👏 Thanks 144 145 We thank the following organisations for their services used in Wagtail's development: 146 147 [![Browserstack](https://cdn.jsdelivr.net/gh/wagtail/wagtail@main/.github/browserstack-logo.svg)](https://www.browserstack.com/)<br> 148 [BrowserStack](https://www.browserstack.com/) provides the project with free access to their live web-based browser testing tool, and automated Selenium cloud testing. 149 150 [![squash.io](https://cdn.jsdelivr.net/gh/wagtail/wagtail@main/.github/squash-logo.svg)](https://www.squash.io/)<br> 151 [Squash](https://www.squash.io/) provides the project with free test environments for reviewing pull requests. 152 153 [![Assistiv Labs](https://cdn.jsdelivr.net/gh/wagtail/wagtail@main/.github/assistivlabs-logo.png)](https://assistivlabs.com/)<br> 154 [Assistiv Labs](https://assistivlabs.com/) provides the project with unlimited access to their remote testing with assistive technologies. 155 [end of README.md] [start of docs/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # Wagtail documentation build configuration file, created by 4 # sphinx-quickstart on Tue Jan 14 17:38:55 2014. 5 # 6 # This file is execfile()d with the current directory set to its 7 # containing dir. 8 # 9 # Note that not all possible configuration values are present in this 10 # autogenerated file. 11 # 12 # All configuration values have a default; values that are commented out 13 # serve to show the default. 14 15 import os 16 import sys 17 from datetime import datetime 18 19 import django 20 import sphinx_wagtail_theme 21 22 from wagtail import VERSION, __version__ 23 24 # on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org 25 on_rtd = os.environ.get("READTHEDOCS", None) == "True" 26 27 html_theme = "sphinx_wagtail_theme" 28 html_theme_path = [sphinx_wagtail_theme.get_html_theme_path()] 29 30 html_theme_options = { 31 "project_name": "Wagtail Documentation", 32 "github_url": "https://github.com/wagtail/wagtail/blob/main/docs/", 33 } 34 35 # If extensions (or modules to document with autodoc) are in another directory, 36 # add these directories to sys.path here. If the directory is relative to the 37 # documentation root, use os.path.abspath to make it absolute, like shown here. 38 sys.path.insert(0, os.path.abspath("..")) 39 40 # Autodoc may need to import some models modules which require django settings 41 # be configured 42 os.environ["DJANGO_SETTINGS_MODULE"] = "wagtail.test.settings" 43 django.setup() 44 45 # Use SQLite3 database engine so it doesn't attempt to use psycopg2 on RTD 46 os.environ["DATABASE_ENGINE"] = "django.db.backends.sqlite3" 47 48 # -- General configuration ------------------------------------------------ 49 50 # If your documentation needs a minimal Sphinx version, state it here. 
51 # needs_sphinx = '1.0' 52 53 # Add any Sphinx extension module names here, as strings. They can be 54 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 55 # ones. 56 extensions = [ 57 "sphinx.ext.autodoc", 58 "sphinx.ext.intersphinx", 59 "sphinx_copybutton", 60 "myst_parser", 61 "sphinx_wagtail_theme", 62 ] 63 64 if not on_rtd: 65 extensions.append("sphinxcontrib.spelling") 66 67 # Add any paths that contain templates here, relative to this directory. 68 templates_path = ["_templates"] 69 70 # The suffix of source filenames. 71 source_suffix = ".rst" 72 73 # The encoding of source files. 74 # source_encoding = 'utf-8-sig' 75 76 # The master toctree document. 77 master_doc = "index" 78 79 # General information about the project. 80 project = "Wagtail Documentation" 81 copyright = f"{datetime.now().year}, Torchbox and contributors" 82 83 # The version info for the project you're documenting, acts as replacement for 84 # |version| and |release|, also used in various other places throughout the 85 # built documents. 86 87 # The short X.Y version. 88 version = "{}.{}".format(VERSION[0], VERSION[1]) 89 # The full version, including alpha/beta/rc tags. 90 release = __version__ 91 92 # The language for content autogenerated by Sphinx. Refer to documentation 93 # for a list of supported languages. 94 # language = None 95 96 # There are two options for replacing |today|: either, you set today to some 97 # non-false value, then it is used: 98 # today = '' 99 # Else, today_fmt is used as the format for a strftime call. 100 # today_fmt = '%B %d, %Y' 101 102 # List of patterns, relative to source directory, that match files and 103 # directories to ignore when looking for source files. 104 exclude_patterns = ["_build", "README.md"] 105 106 # The reST default role (used for this markup: `text`) to use for all 107 # documents. 108 # default_role = None 109 110 # If true, '()' will be appended to :func: etc. cross-reference text. 111 # add_function_parentheses = True 112 113 # If true, the current module name will be prepended to all description 114 # unit titles (such as .. function::). 115 # add_module_names = True 116 117 # If true, sectionauthor and moduleauthor directives will be shown in the 118 # output. They are ignored by default. 119 # show_authors = False 120 121 # The name of the Pygments (syntax highlighting) style to use. 122 pygments_style = None 123 124 # A list of ignored prefixes for module index sorting. 125 # modindex_common_prefix = [] 126 127 # If true, keep warnings as "system message" paragraphs in the built documents. 128 # keep_warnings = False 129 130 # splhinxcontrib.spelling settings 131 132 spelling_lang = "en_GB" 133 spelling_word_list_filename = "spelling_wordlist.txt" 134 135 # sphinx.ext.intersphinx settings 136 intersphinx_mapping = { 137 "django": ( 138 "https://docs.djangoproject.com/en/stable/", 139 "https://docs.djangoproject.com/en/stable/_objects/", 140 ) 141 } 142 143 # -- Options for HTML output ---------------------------------------------- 144 145 # Theme options are theme-specific and customise the look and feel of a theme 146 # further. For a list of options available for each theme, see the 147 # documentation. 148 # html_theme_options = {} 149 150 # The name for this set of Sphinx documents. If None, it defaults to 151 # "<project> v<release> documentation". 152 # html_title = None 153 154 # A shorter title for the navigation bar. Default is the same as html_title. 
155 # html_short_title = None 156 157 # The name of an image file (relative to this directory) to place at the top 158 # of the sidebar. 159 # html_logo = 'logo.png' 160 161 # The name of an image file (within the static path) to use as favicon of the 162 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 163 # pixels large. 164 html_favicon = "favicon.ico" 165 166 # Add any paths that contain custom static files (such as style sheets) here, 167 # relative to this directory. They are copied after the builtin static files, 168 # so a file named "default.css" will overwrite the builtin "default.css". 169 html_static_path = ["_static"] 170 171 # Add any extra paths that contain custom files (such as robots.txt or 172 # .htaccess) here, relative to this directory. These files are copied 173 # directly to the root of the documentation. 174 html_extra_path = ["public"] 175 176 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 177 # using the given strftime format. 178 # html_last_updated_fmt = '%b %d, %Y' 179 180 # If true, SmartyPants will be used to convert quotes and dashes to 181 # typographically correct entities. 182 # html_use_smartypants = True 183 184 # Custom sidebar templates, maps document names to template names. 185 # html_sidebars = {} 186 187 # Additional templates that should be rendered to pages, maps page names to 188 # template names. 189 # html_additional_pages = {} 190 191 # If false, no module index is generated. 192 # html_domain_indices = True 193 194 # If false, no index is generated. 195 # Since we are implementing search with Algolia DocSearch, we do not need Sphinx to 196 # generate its own index. It might not hurt to keep the Sphinx index, but it 197 # could potentially speed up the build process. 198 html_use_index = False 199 200 # If true, the index is split into individual pages for each letter. 201 # html_split_index = False 202 203 # If true, links to the reST sources are added to the pages. 204 # html_show_sourcelink = True 205 206 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 207 # html_show_sphinx = True 208 209 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 210 # html_show_copyright = True 211 212 # If true, an OpenSearch description file will be output, and all pages will 213 # contain a <link> tag referring to it. The value of this option must be the 214 # base URL from which the finished HTML is served. 215 # html_use_opensearch = '' 216 217 # This is the file name suffix for HTML files (for example ".xhtml"). 218 # html_file_suffix = None 219 220 # Output file base name for HTML help builder. 221 htmlhelp_basename = "Wagtaildoc" 222 223 # -- Options for LaTeX output --------------------------------------------- 224 225 latex_elements = { 226 # The paper size ('letterpaper' or 'a4paper'). 227 # 'papersize': 'letterpaper', 228 # The font size ('10pt', '11pt' or '12pt'). 229 # 'pointsize': '10pt', 230 # Additional stuff for the LaTeX preamble. 231 # 'preamble': '', 232 } 233 234 # Grouping the document tree into LaTeX files. List of tuples 235 # (source start file, target name, title, 236 # author, documentclass [howto, manual, or own class]). 237 latex_documents = [ 238 ("index", "Wagtail.tex", "Wagtail Documentation", "Torchbox", "manual"), 239 ] 240 241 # The name of an image file (relative to this directory) to place at the top of 242 # the title page. 
243 # latex_logo = None 244 245 # For "manual" documents, if this is true, then toplevel headings are parts, 246 # not chapters. 247 # latex_use_parts = False 248 249 # If true, show page references after internal links. 250 # latex_show_pagerefs = False 251 252 # If true, show URL addresses after external links. 253 # latex_show_urls = False 254 255 # Documents to append as an appendix to all manuals. 256 # latex_appendices = [] 257 258 # If false, no module index is generated. 259 # latex_domain_indices = True 260 261 # -- Options for manual page output --------------------------------------- 262 263 # One entry per manual page. List of tuples 264 # (source start file, name, description, authors, manual section). 265 man_pages = [("index", "wagtail", "Wagtail Documentation", ["Torchbox"], 1)] 266 267 # If true, show URL addresses after external links. 268 # man_show_urls = False 269 270 # -- Options for Texinfo output ------------------------------------------- 271 272 # Grouping the document tree into Texinfo files. List of tuples 273 # (source start file, target name, title, author, 274 # dir menu entry, description, category) 275 texinfo_documents = [ 276 ( 277 "index", 278 "Wagtail", 279 "Wagtail Documentation", 280 "Torchbox", 281 "Wagtail", 282 "One line description of project.", 283 "Miscellaneous", 284 ), 285 ] 286 287 # Documents to append as an appendix to all manuals. 288 # texinfo_appendices = [] 289 290 # If false, no module index is generated. 291 # texinfo_domain_indices = True 292 293 # How to display URL addresses: 'footnote', 'no', or 'inline'. 294 # texinfo_show_urls = 'footnote' 295 296 # If true, do not generate a @detailmenu in the "Top" node's menu. 297 # texinfo_no_detailmenu = False 298 299 300 def setup(app): 301 app.add_js_file("js/banner.js") 302 [end of docs/conf.py] [start of wagtail/query.py] 1 import posixpath 2 import warnings 3 from collections import defaultdict 4 from typing import Any, Dict, Iterable, Tuple 5 6 from django.apps import apps 7 from django.contrib.contenttypes.models import ContentType 8 from django.db.models import CharField, Prefetch, Q 9 from django.db.models.expressions import Exists, OuterRef 10 from django.db.models.functions import Cast, Length, Substr 11 from django.db.models.query import BaseIterable, ModelIterable 12 from treebeard.mp_tree import MP_NodeQuerySet 13 14 from wagtail.models.sites import Site 15 from wagtail.search.queryset import SearchableQuerySetMixin 16 17 18 class TreeQuerySet(MP_NodeQuerySet): 19 """ 20 Extends Treebeard's MP_NodeQuerySet with additional useful tree-related operations. 21 """ 22 23 def delete(self): 24 """Redefine the delete method unbound, so we can set the queryset_only parameter.""" 25 super().delete() 26 27 delete.queryset_only = True 28 29 def descendant_of_q(self, other, inclusive=False): 30 q = Q(path__startswith=other.path) & Q(depth__gte=other.depth) 31 32 if not inclusive: 33 q &= ~Q(pk=other.pk) 34 35 return q 36 37 def descendant_of(self, other, inclusive=False): 38 """ 39 This filters the QuerySet to only contain pages that descend from the specified page. 40 41 If inclusive is set to True, it will also contain the page itself (instead of just its descendants). 42 """ 43 return self.filter(self.descendant_of_q(other, inclusive)) 44 45 def not_descendant_of(self, other, inclusive=False): 46 """ 47 This filters the QuerySet to not contain any pages that descend from the specified page. 48 49 If inclusive is set to True, it will also exclude the specified page. 
50 """ 51 return self.exclude(self.descendant_of_q(other, inclusive)) 52 53 def child_of_q(self, other): 54 return self.descendant_of_q(other) & Q(depth=other.depth + 1) 55 56 def child_of(self, other): 57 """ 58 This filters the QuerySet to only contain pages that are direct children of the specified page. 59 """ 60 return self.filter(self.child_of_q(other)) 61 62 def not_child_of(self, other): 63 """ 64 This filters the QuerySet to not contain any pages that are direct children of the specified page. 65 """ 66 return self.exclude(self.child_of_q(other)) 67 68 def ancestor_of_q(self, other, inclusive=False): 69 paths = [ 70 other.path[0:pos] 71 for pos in range(0, len(other.path) + 1, other.steplen)[1:] 72 ] 73 q = Q(path__in=paths) 74 75 if not inclusive: 76 q &= ~Q(pk=other.pk) 77 78 return q 79 80 def ancestor_of(self, other, inclusive=False): 81 """ 82 This filters the QuerySet to only contain pages that are ancestors of the specified page. 83 84 If inclusive is set to True, it will also include the specified page. 85 """ 86 return self.filter(self.ancestor_of_q(other, inclusive)) 87 88 def not_ancestor_of(self, other, inclusive=False): 89 """ 90 This filters the QuerySet to not contain any pages that are ancestors of the specified page. 91 92 If inclusive is set to True, it will also exclude the specified page. 93 """ 94 return self.exclude(self.ancestor_of_q(other, inclusive)) 95 96 def parent_of_q(self, other): 97 return Q(path=self.model._get_parent_path_from_path(other.path)) 98 99 def parent_of(self, other): 100 """ 101 This filters the QuerySet to only contain the parent of the specified page. 102 """ 103 return self.filter(self.parent_of_q(other)) 104 105 def not_parent_of(self, other): 106 """ 107 This filters the QuerySet to exclude the parent of the specified page. 108 """ 109 return self.exclude(self.parent_of_q(other)) 110 111 def sibling_of_q(self, other, inclusive=True): 112 q = Q(path__startswith=self.model._get_parent_path_from_path(other.path)) & Q( 113 depth=other.depth 114 ) 115 116 if not inclusive: 117 q &= ~Q(pk=other.pk) 118 119 return q 120 121 def sibling_of(self, other, inclusive=True): 122 """ 123 This filters the QuerySet to only contain pages that are siblings of the specified page. 124 125 By default, inclusive is set to True so it will include the specified page in the results. 126 127 If inclusive is set to False, the page will be excluded from the results. 128 """ 129 return self.filter(self.sibling_of_q(other, inclusive)) 130 131 def not_sibling_of(self, other, inclusive=True): 132 """ 133 This filters the QuerySet to not contain any pages that are siblings of the specified page. 134 135 By default, inclusive is set to True so it will exclude the specified page from the results. 136 137 If inclusive is set to False, the page will be included in the results. 138 """ 139 return self.exclude(self.sibling_of_q(other, inclusive)) 140 141 142 class PageQuerySet(SearchableQuerySetMixin, TreeQuerySet): 143 def __init__(self, *args, **kwargs): 144 """Set custom instance attributes""" 145 super().__init__(*args, **kwargs) 146 # set by defer_streamfields() 147 self._defer_streamfields = False 148 149 def _clone(self): 150 """Ensure clones inherit custom attribute values.""" 151 clone = super()._clone() 152 clone._defer_streamfields = self._defer_streamfields 153 return clone 154 155 def live_q(self): 156 return Q(live=True) 157 158 def live(self): 159 """ 160 This filters the QuerySet to only contain published pages. 
161 """ 162 return self.filter(self.live_q()) 163 164 def not_live(self): 165 """ 166 This filters the QuerySet to only contain unpublished pages. 167 """ 168 return self.exclude(self.live_q()) 169 170 def in_menu_q(self): 171 return Q(show_in_menus=True) 172 173 def in_menu(self): 174 """ 175 This filters the QuerySet to only contain pages that are in the menus. 176 """ 177 return self.filter(self.in_menu_q()) 178 179 def not_in_menu(self): 180 """ 181 This filters the QuerySet to only contain pages that are not in the menus. 182 """ 183 return self.exclude(self.in_menu_q()) 184 185 def page_q(self, other): 186 return Q(id=other.id) 187 188 def page(self, other): 189 """ 190 This filters the QuerySet so it only contains the specified page. 191 """ 192 return self.filter(self.page_q(other)) 193 194 def not_page(self, other): 195 """ 196 This filters the QuerySet so it doesn't contain the specified page. 197 """ 198 return self.exclude(self.page_q(other)) 199 200 def type_q(self, *types): 201 all_subclasses = { 202 model for model in apps.get_models() if issubclass(model, types) 203 } 204 content_types = ContentType.objects.get_for_models(*all_subclasses) 205 return Q(content_type__in=list(content_types.values())) 206 207 def type(self, *types): 208 """ 209 This filters the QuerySet to only contain pages that are an instance 210 of the specified model(s) (including subclasses). 211 """ 212 return self.filter(self.type_q(*types)) 213 214 def not_type(self, *types): 215 """ 216 This filters the QuerySet to exclude any pages which are an instance of the specified model(s). 217 """ 218 return self.exclude(self.type_q(*types)) 219 220 def exact_type_q(self, *types): 221 content_types = ContentType.objects.get_for_models(*types) 222 return Q(content_type__in=list(content_types.values())) 223 224 def exact_type(self, *types): 225 """ 226 This filters the QuerySet to only contain pages that are an instance of the specified model(s) 227 (matching the model exactly, not subclasses). 228 """ 229 return self.filter(self.exact_type_q(*types)) 230 231 def not_exact_type(self, *types): 232 """ 233 This filters the QuerySet to exclude any pages which are an instance of the specified model(s) 234 (matching the model exactly, not subclasses). 235 """ 236 return self.exclude(self.exact_type_q(*types)) 237 238 def private_q(self): 239 from wagtail.models import PageViewRestriction 240 241 q = Q() 242 for restriction in PageViewRestriction.objects.select_related("page").all(): 243 q |= self.descendant_of_q(restriction.page, inclusive=True) 244 245 # do not match any page if no private section exists. 246 return q if q else Q(pk__in=[]) 247 248 def public(self): 249 """ 250 Filters the QuerySet to only contain pages that are not in a private 251 section and their descendants. 252 """ 253 return self.exclude(self.private_q()) 254 255 def not_public(self): 256 """ 257 Filters the QuerySet to only contain pages that are in a private 258 section and their descendants. 259 """ 260 return self.filter(self.private_q()) 261 262 def private(self): 263 """ 264 Filters the QuerySet to only contain pages that are in a private 265 section and their descendants. 266 """ 267 return self.filter(self.private_q()) 268 269 def first_common_ancestor(self, include_self=False, strict=False): 270 """ 271 Find the first ancestor that all pages in this queryset have in common. 
272 For example, consider a page hierarchy like:: 273 274 - Home/ 275 - Foo Event Index/ 276 - Foo Event Page 1/ 277 - Foo Event Page 2/ 278 - Bar Event Index/ 279 - Bar Event Page 1/ 280 - Bar Event Page 2/ 281 282 The common ancestors for some queries would be: 283 284 .. code-block:: python 285 286 >>> Page.objects\\ 287 ... .type(EventPage)\\ 288 ... .first_common_ancestor() 289 <Page: Home> 290 >>> Page.objects\\ 291 ... .type(EventPage)\\ 292 ... .filter(title__contains='Foo')\\ 293 ... .first_common_ancestor() 294 <Page: Foo Event Index> 295 296 This method tries to be efficient, but if you have millions of pages 297 scattered across your page tree, it will be slow. 298 299 If `include_self` is True, the ancestor can be one of the pages in the 300 queryset: 301 302 .. code-block:: python 303 304 >>> Page.objects\\ 305 ... .filter(title__contains='Foo')\\ 306 ... .first_common_ancestor() 307 <Page: Foo Event Index> 308 >>> Page.objects\\ 309 ... .filter(title__exact='Bar Event Index')\\ 310 ... .first_common_ancestor() 311 <Page: Bar Event Index> 312 313 A few invalid cases exist: when the queryset is empty, when the root 314 Page is in the queryset and ``include_self`` is False, and when there 315 are multiple page trees with no common root (a case Wagtail does not 316 support). If ``strict`` is False (the default), then the first root 317 node is returned in these cases. If ``strict`` is True, then a 318 ``ObjectDoesNotExist`` is raised. 319 """ 320 # An empty queryset has no ancestors. This is a problem 321 if not self.exists(): 322 if strict: 323 raise self.model.DoesNotExist("Can not find ancestor of empty queryset") 324 return self.model.get_first_root_node() 325 326 if include_self: 327 # Get all the paths of the matched pages. 328 paths = self.order_by().values_list("path", flat=True) 329 else: 330 # Find all the distinct parent paths of all matched pages. 331 # The empty `.order_by()` ensures that `Page.path` is not also 332 # selected to order the results, which makes `.distinct()` works. 333 paths = ( 334 self.order_by() 335 .annotate( 336 parent_path=Substr( 337 "path", 338 1, 339 Length("path") - self.model.steplen, 340 output_field=CharField(max_length=255), 341 ) 342 ) 343 .values_list("parent_path", flat=True) 344 .distinct() 345 ) 346 347 # This method works on anything, not just file system paths. 348 common_parent_path = posixpath.commonprefix(paths) 349 350 # That may have returned a path like (0001, 0002, 000), which is 351 # missing some chars off the end. Fix this by trimming the path to a 352 # multiple of `Page.steplen` 353 extra_chars = len(common_parent_path) % self.model.steplen 354 if extra_chars != 0: 355 common_parent_path = common_parent_path[:-extra_chars] 356 357 if common_parent_path == "": 358 # This should only happen when there are multiple trees, 359 # a situation that Wagtail does not support; 360 # or when the root node itself is part of the queryset. 361 if strict: 362 raise self.model.DoesNotExist("No common ancestor found!") 363 364 # Assuming the situation is the latter, just return the root node. 365 # The root node is not its own ancestor, so this is technically 366 # incorrect. If you want very correct operation, use `strict=True` 367 # and receive an error. 368 return self.model.get_first_root_node() 369 370 # Assuming the database is in a consistent state, this page should 371 # *always* exist. If your database is not in a consistent state, you've 372 # got bigger problems. 
373 return self.model.objects.get(path=common_parent_path) 374 375 def unpublish(self): 376 """ 377 This unpublishes all live pages in the QuerySet. 378 """ 379 for page in self.live(): 380 page.unpublish() 381 382 def defer_streamfields(self): 383 """ 384 Apply to a queryset to prevent fetching/decoding of StreamField values on 385 evaluation. Useful when working with potentially large numbers of results, 386 where StreamField values are unlikely to be needed. For example, when 387 generating a sitemap or a long list of page links. 388 """ 389 clone = self._clone() 390 clone._defer_streamfields = True # used by specific_iterator() 391 streamfield_names = self.model.get_streamfield_names() 392 if not streamfield_names: 393 return clone 394 return clone.defer(*streamfield_names) 395 396 def specific(self, defer=False): 397 """ 398 This efficiently gets all the specific pages for the queryset, using 399 the minimum number of queries. 400 401 When the "defer" keyword argument is set to True, only generic page 402 field values will be loaded and all specific fields will be deferred. 403 """ 404 clone = self._clone() 405 if defer: 406 clone._iterable_class = DeferredSpecificIterable 407 else: 408 clone._iterable_class = SpecificIterable 409 return clone 410 411 def in_site(self, site): 412 """ 413 This filters the QuerySet to only contain pages within the specified site. 414 """ 415 return self.descendant_of(site.root_page, inclusive=True) 416 417 def translation_of_q(self, page, inclusive): 418 q = Q(translation_key=page.translation_key) 419 420 if not inclusive: 421 q &= ~Q(pk=page.pk) 422 423 return q 424 425 def translation_of(self, page, inclusive=False): 426 """ 427 This filters the QuerySet to only contain pages that are translations of the specified page. 428 429 If inclusive is True, the page itself is returned. 430 """ 431 return self.filter(self.translation_of_q(page, inclusive)) 432 433 def not_translation_of(self, page, inclusive=False): 434 """ 435 This filters the QuerySet to only contain pages that are not translations of the specified page. 436 437 Note, this will include the page itself as the page is technically not a translation of itself. 438 If inclusive is True, we consider the page to be a translation of itself so this excludes the page 439 from the results. 440 """ 441 return self.exclude(self.translation_of_q(page, inclusive)) 442 443 def prefetch_workflow_states(self): 444 """ 445 Performance optimisation for listing pages. 446 Prefetches the active workflow states on each page in this queryset. 447 Used by `workflow_in_progress` and `current_workflow_progress` properties on 448 `wagtailcore.models.Page`. 449 """ 450 from .models import WorkflowState 451 452 workflow_states = WorkflowState.objects.active().select_related( 453 "current_task_state__task" 454 ) 455 456 return self.prefetch_related( 457 Prefetch( 458 "workflow_states", 459 queryset=workflow_states, 460 to_attr="_current_workflow_states", 461 ) 462 ) 463 464 def annotate_approved_schedule(self): 465 """ 466 Performance optimisation for listing pages. 467 Annotates each page with the existence of an approved go live time. 468 Used by `approved_schedule` property on `wagtailcore.models.Page`. 
469 """ 470 from .models import Revision 471 472 return self.annotate( 473 _approved_schedule=Exists( 474 Revision.page_revisions.exclude( 475 approved_go_live_at__isnull=True 476 ).filter(object_id=Cast(OuterRef("pk"), output_field=CharField())) 477 ) 478 ) 479 480 def annotate_site_root_state(self): 481 """ 482 Performance optimisation for listing pages. 483 Annotates each object with whether it is a root page of any site. 484 Used by `is_site_root` method on `wagtailcore.models.Page`. 485 """ 486 return self.annotate( 487 _is_site_root=Exists( 488 Site.objects.filter( 489 root_page__translation_key=OuterRef("translation_key") 490 ) 491 ) 492 ) 493 494 495 class SpecificIterable(BaseIterable): 496 def __iter__(self): 497 """ 498 Identify and return all specific pages in a queryset, and return them 499 in the same order, with any annotations intact. 500 """ 501 from wagtail.models import Page 502 503 qs = self.queryset 504 annotation_aliases = qs.query.annotations.keys() 505 values_qs = qs.values("pk", "content_type", *annotation_aliases) 506 507 # Gather pages in batches to reduce peak memory usage 508 for values in self._get_chunks(values_qs): 509 510 annotations_by_pk = defaultdict(list) 511 if annotation_aliases: 512 # Extract annotation results keyed by pk so we can reapply to fetched pages. 513 for data in values: 514 annotations_by_pk[data["pk"]] = { 515 k: v for k, v in data.items() if k in annotation_aliases 516 } 517 518 pks_and_types = [[v["pk"], v["content_type"]] for v in values] 519 pks_by_type = defaultdict(list) 520 for pk, content_type in pks_and_types: 521 pks_by_type[content_type].append(pk) 522 523 # Content types are cached by ID, so this will not run any queries. 524 content_types = { 525 pk: ContentType.objects.get_for_id(pk) for _, pk in pks_and_types 526 } 527 528 # Get the specific instances of all pages, one model class at a time. 529 pages_by_type = {} 530 missing_pks = [] 531 532 for content_type, pks in pks_by_type.items(): 533 # look up model class for this content type, falling back on the original 534 # model (i.e. Page) if the more specific one is missing 535 model = content_types[content_type].model_class() or qs.model 536 pages = model.objects.filter(pk__in=pks) 537 538 if qs._defer_streamfields: 539 pages = pages.defer_streamfields() 540 541 pages_for_type = {page.pk: page for page in pages} 542 pages_by_type[content_type] = pages_for_type 543 missing_pks.extend(pk for pk in pks if pk not in pages_for_type) 544 545 # Fetch generic pages to supplement missing items 546 if missing_pks: 547 generic_pages = ( 548 Page.objects.filter(pk__in=missing_pks) 549 .select_related("content_type") 550 .in_bulk() 551 ) 552 warnings.warn( 553 "Specific versions of the following pages could not be found. " 554 "This is most likely because a database migration has removed " 555 "the relevant table or record since the page was created:\n{}".format( 556 [ 557 {"id": p.id, "title": p.title, "type": p.content_type} 558 for p in generic_pages.values() 559 ] 560 ), 561 category=RuntimeWarning, 562 ) 563 else: 564 generic_pages = {} 565 566 # Yield all pages in the order they occurred in the original query. 
567 for pk, content_type in pks_and_types: 568 try: 569 page = pages_by_type[content_type][pk] 570 except KeyError: 571 page = generic_pages[pk] 572 if annotation_aliases: 573 # Reapply annotations before returning 574 for annotation, value in annotations_by_pk.get(page.pk, {}).items(): 575 setattr(page, annotation, value) 576 yield page 577 578 def _get_chunks(self, queryset) -> Iterable[Tuple[Dict[str, Any]]]: 579 if not self.chunked_fetch: 580 # The entire result will be stored in memory, so there is no 581 # benefit to splitting the result 582 yield tuple(queryset) 583 else: 584 # Iterate through the queryset, returning the rows in manageable 585 # chunks for self.__iter__() to fetch full pages for 586 current_chunk = [] 587 for r in queryset.iterator(self.chunk_size): 588 current_chunk.append(r) 589 if len(current_chunk) == self.chunk_size: 590 yield tuple(current_chunk) 591 current_chunk.clear() 592 # Return any left-overs 593 if current_chunk: 594 yield tuple(current_chunk) 595 596 597 class DeferredSpecificIterable(ModelIterable): 598 def __iter__(self): 599 for obj in super().__iter__(): 600 if obj.specific_class: 601 yield obj.specific_deferred 602 else: 603 warnings.warn( 604 "A specific version of the following page could not be returned " 605 "because the specific page model is not present on the active " 606 f"branch: <Page id='{obj.id}' title='{obj.title}' " 607 f"type='{obj.content_type}'>", 608 category=RuntimeWarning, 609 ) 610 yield obj 611 [end of wagtail/query.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
wagtail/wagtail
ddce86f46105166ebd20be898778ab27fd07ff51
Feature request: page editor minimap ### Is your proposal related to a problem? Long page editing UIs are tedious to navigate. ### Describe the solution you'd like The minimap is a proposed feature from the [page editor 2022](https://github.com/wagtail/wagtail/discussions/7739) project, to help with navigation of those pages. We’re currently [looking for sponsors or contributors](https://wagtail.org/blog/build-update-on-new-page-editor/) to help deliver the minimap. This new component serves different use cases: - For mouse users, the minimap presents all of the page’s sections (panels, potentially blocks, potentially fields) as a table of contents that can be reached at any point - For keyboard users, the minimap functions as a set of skip links – to jump straight to any part of the page - For sighted users, the minimap always displays a visual indicator of the user’s current position through the page structure - When opened, it also has a button to collapse all of the page’s sections at once. Here is what it looks like in practice: ![Image](https://user-images.githubusercontent.com/877585/173399135-edebeb20-88af-47a7-9ecd-7f6234c9433f.gif) [Figma design link](https://www.figma.com/file/h67EsVXdWsfu38WGGxWfpi/Wagtail-Design-System?node-id=4859%3A44694) ### Additional context - Expecting this to be built based on our upcoming refactor of field panels, to use a more semantic document outline, as well as having the ability to link to individual section headings - The collapsing is a feature of individual sections – "collapse all" just triggers the action for all of them at once. - The currently-active (visible?) sections of the page are shown in the minimap, using the IntersectionObserver API
I have a working POC for this here https://github.com/lb-/wagtail/commits/experiments/mini-map (see last commit). Note that this is implemented using Stimulus as it makes it easier to scope event listeners and register new behaviour to dynamically injected elements (e.g. blocks). Could be refactored to non-stimulus but would require more code. ## Recording https://user-images.githubusercontent.com/1396140/173516073-49083162-80b9-463d-89d8-0d4b6184dda4.mp4 ## Summary of POC * Each `h2` or `h3` we care about is a Stimulus target (which is used to build the mini-map list) * Each container, where relevant, around those headers gets an intersection observer attached (via a separate, generic intersection observer controller in Stimulus), which dispatches events based on scrolling / moving in or out of view * https://github.com/lb-/wagtail/blob/experiments/mini-map/client/src/components/StreamField/blocks/BaseSequenceBlock.js#L149 (blocks) * The minimap controller is scoped to one tab's object list, so that there is only one sidebar for each tab * https://github.com/lb-/wagtail/blob/experiments/mini-map/wagtail/admin/templates/wagtailadmin/panels/object_list.html -> add data attributes and base `aside` element for the sidebar * Mini map controller code https://github.com/lb-/wagtail/blob/experiments/mini-map/client/src/controllers/MiniMapController.ts * Intersection observer code https://github.com/lb-/wagtail/blob/experiments/mini-map/client/src/controllers/IntersectionObserverController.ts * React component used for the inner `ul` as it is much simpler to re-render content in a list through React, also a good example of how Stimulus & React can work together https://github.com/lb-/wagtail/blob/experiments/mini-map/client/src/components/MiniMapIndex/MiniMapIndex.tsx * interactions are heavily debounced, no smarts yet as to when/when not to dispatch (e.g. only when observer has changed), so can be up to 300ms delay in some interactions. * It will update the minimap when blocks are added/removed and has two levels of 'depth' but can handle more easily * basic hover to expand behaviour but really roughly styled * No smarts as to finding the 'icon' but I think that the current DOM structure does not make this easy * Added some code to auto-inject an id if one is not on a header (might be a better way) * Scroll into view is basic, does not consider sticky header offset * No unit tests and typing fails for some code * There may be a better way :) * Would be very easy to add other 'header' elements to be listened to or even non-headers as opt in via data attributes ## Potential issue with block headers I noticed that the `h3` inside blocks is always empty, I think this may be an accessibility bug but not sure, let me know if this should be raised. ``` <h3 data-block-title class="c-sf-block__header__title"></h3> ``` https://github.com/wagtail/wagtail/blob/cb43536f07f0a43412b3ff1c8c0d8ed61af5b3db/client/src/components/StreamField/blocks/BaseSequenceBlock.js#L149 Update - I pushed some changes to the branch mentioned above today - behaviour is much smoother and also was able to work around the smooth scroll and scrolling to content under the header easily with come CSS (instead of more JS).
2022-10-17T17:42:43Z
<patch> diff --git a/wagtail/admin/views/home.py b/wagtail/admin/views/home.py --- a/wagtail/admin/views/home.py +++ b/wagtail/admin/views/home.py @@ -329,7 +329,11 @@ def icons(): ) combined_icon_markup = "" for icon in all_icons: - combined_icon_markup += render_to_string(icon).replace("svg", "symbol") + combined_icon_markup += ( + render_to_string(icon) + .replace('xmlns="http://www.w3.org/2000/svg"', "") + .replace("svg", "symbol") + ) _full_sprite_html = render_to_string( "wagtailadmin/shared/icons.html", {"icons": combined_icon_markup} </patch>
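A note on why the patch strips the namespace before renaming the tag: a bare `str.replace("svg", "symbol")` also rewrites the `svg` substring inside the `xmlns="http://www.w3.org/2000/svg"` attribute, producing an invalid namespace URL. The snippet below is a standalone illustration of the difference; the sample markup is made up and is not Wagtail's actual icon template output.

```python
# Illustration of the string-replacement pitfall the patch above fixes.
# The sample markup is hypothetical; Wagtail's real icon templates differ.
icon_markup = '<svg xmlns="http://www.w3.org/2000/svg" id="icon-cog"></svg>'

# Naive approach: the namespace URL is mangled to .../2000/symbol.
naive = icon_markup.replace("svg", "symbol")
print(naive)
# <symbol xmlns="http://www.w3.org/2000/symbol" id="icon-cog"></symbol>

# Patched approach: drop the xmlns declaration first, then rename the tag.
# The leftover double space is harmless in the rendered sprite.
patched = (
    icon_markup
    .replace('xmlns="http://www.w3.org/2000/svg"', "")
    .replace("svg", "symbol")
)
print(patched)
# <symbol  id="icon-cog"></symbol>
```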
[]
[]
pandas-dev__pandas-25275
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Excel Module Cleanups As called out by @gfyoung in review there are some easy opportunities to clean up and simplify the existing Excel IO modules. These are mostly minor loop refactors and docstring updates https://github.com/pandas-dev/pandas/pull/25153#pullrequestreview-200888051 </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 30 </a> 31 </tr> 32 <tr> 33 <td>License</td> 34 <td> 35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 37 </a> 38 </td> 39 </tr> 40 <tr> 41 <td>Build Status</td> 42 <td> 43 <a href="https://travis-ci.org/pandas-dev/pandas"> 44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 45 </a> 46 </td> 47 </tr> 48 <tr> 49 <td></td> 50 <td> 51 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master"> 52 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" /> 53 </a> 54 </td> 55 </tr> 56 <tr> 57 <td>Coverage</td> 58  <td> 59 <a href="https://codecov.io/gh/pandas-dev/pandas"> 60 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 61 </a> 62 </td> 63 </tr> 64 <tr> 65 <td>Downloads</td> 66 <td> 67 <a href="https://pandas.pydata.org"> 68 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 69 </a> 70 </td> 71 </tr> 72 <tr> 73 <td>Gitter</td> 74 <td> 75 <a href="https://gitter.im/pydata/pandas"> 76 <img src="https://badges.gitter.im/Join%20Chat.svg" 77 </a> 78 </td> 79 </tr> 80 </table> 81 82 83 84 ## What is it? 85 86 **pandas** is a Python package providing fast, flexible, and expressive data 87 structures designed to make working with "relational" or "labeled" data both 88 easy and intuitive. It aims to be the fundamental high-level building block for 89 doing practical, **real world** data analysis in Python. Additionally, it has 90 the broader goal of becoming **the most powerful and flexible open source data 91 analysis / manipulation tool available in any language**. It is already well on 92 its way towards this goal. 
93 94 ## Main Features 95 Here are just a few of the things that pandas does well: 96 97 - Easy handling of [**missing data**][missing-data] (represented as 98 `NaN`) in floating point as well as non-floating point data 99 - Size mutability: columns can be [**inserted and 100 deleted**][insertion-deletion] from DataFrame and higher dimensional 101 objects 102 - Automatic and explicit [**data alignment**][alignment]: objects can 103 be explicitly aligned to a set of labels, or the user can simply 104 ignore the labels and let `Series`, `DataFrame`, etc. automatically 105 align the data for you in computations 106 - Powerful, flexible [**group by**][groupby] functionality to perform 107 split-apply-combine operations on data sets, for both aggregating 108 and transforming data 109 - Make it [**easy to convert**][conversion] ragged, 110 differently-indexed data in other Python and NumPy data structures 111 into DataFrame objects 112 - Intelligent label-based [**slicing**][slicing], [**fancy 113 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 114 large data sets 115 - Intuitive [**merging**][merging] and [**joining**][joining] data 116 sets 117 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 118 data sets 119 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 120 labels per tick) 121 - Robust IO tools for loading data from [**flat files**][flat-files] 122 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 123 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 124 - [**Time series**][timeseries]-specific functionality: date range 125 generation and frequency conversion, moving window statistics, 126 moving window linear regressions, date shifting and lagging, etc. 
127 128 129 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 130 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 131 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 132 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 133 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 134 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 135 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 136 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 137 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 138 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 139 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 140 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 141 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 142 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 143 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 144 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 145 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 146 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 147 148 ## Where to get it 149 The source code is currently hosted on GitHub at: 150 https://github.com/pandas-dev/pandas 151 152 Binary installers for the latest released version are available at the [Python 153 package index](https://pypi.org/project/pandas) and on conda. 154 155 ```sh 156 # conda 157 conda install pandas 158 ``` 159 160 ```sh 161 # or PyPI 162 pip install pandas 163 ``` 164 165 ## Dependencies 166 - [NumPy](https://www.numpy.org): 1.12.0 or higher 167 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 168 - [pytz](https://pythonhosted.org/pytz): 2011k or higher 169 170 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 171 for recommended and optional dependencies. 172 173 ## Installation from sources 174 To install pandas from source you need Cython in addition to the normal 175 dependencies above. Cython can be installed from pypi: 176 177 ```sh 178 pip install cython 179 ``` 180 181 In the `pandas` directory (same one where you found this file after 182 cloning the git repo), execute: 183 184 ```sh 185 python setup.py install 186 ``` 187 188 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 189 190 ```sh 191 python setup.py develop 192 ``` 193 194 Alternatively, you can use `pip` if you want all the dependencies pulled 195 in automatically (the `-e` option is for installing it in [development 196 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 197 198 ```sh 199 pip install -e . 200 ``` 201 202 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
203 204 ## License 205 [BSD 3](LICENSE) 206 207 ## Documentation 208 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 209 210 ## Background 211 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 212 has been under active development since then. 213 214 ## Getting Help 215 216 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 217 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 218 219 ## Discussion and Development 220 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 221 222 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 223 224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 225 226 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub. 227 228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 229 230 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 231 232 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 233 234 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 235 [end of README.md] [start of pandas/io/excel/_xlwt.py] 1 import pandas._libs.json as json 2 3 from pandas.io.excel._base import ExcelWriter 4 from pandas.io.excel._util import _validate_freeze_panes 5 6 7 class _XlwtWriter(ExcelWriter): 8 engine = 'xlwt' 9 supported_extensions = ('.xls',) 10 11 def __init__(self, path, engine=None, encoding=None, mode='w', 12 **engine_kwargs): 13 # Use the xlwt module as the Excel writer. 
14 import xlwt 15 engine_kwargs['engine'] = engine 16 17 if mode == 'a': 18 raise ValueError('Append mode is not supported with xlwt!') 19 20 super(_XlwtWriter, self).__init__(path, mode=mode, **engine_kwargs) 21 22 if encoding is None: 23 encoding = 'ascii' 24 self.book = xlwt.Workbook(encoding=encoding) 25 self.fm_datetime = xlwt.easyxf(num_format_str=self.datetime_format) 26 self.fm_date = xlwt.easyxf(num_format_str=self.date_format) 27 28 def save(self): 29 """ 30 Save workbook to disk. 31 """ 32 return self.book.save(self.path) 33 34 def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0, 35 freeze_panes=None): 36 # Write the frame cells using xlwt. 37 38 sheet_name = self._get_sheet_name(sheet_name) 39 40 if sheet_name in self.sheets: 41 wks = self.sheets[sheet_name] 42 else: 43 wks = self.book.add_sheet(sheet_name) 44 self.sheets[sheet_name] = wks 45 46 if _validate_freeze_panes(freeze_panes): 47 wks.set_panes_frozen(True) 48 wks.set_horz_split_pos(freeze_panes[0]) 49 wks.set_vert_split_pos(freeze_panes[1]) 50 51 style_dict = {} 52 53 for cell in cells: 54 val, fmt = self._value_with_fmt(cell.val) 55 56 stylekey = json.dumps(cell.style) 57 if fmt: 58 stylekey += fmt 59 60 if stylekey in style_dict: 61 style = style_dict[stylekey] 62 else: 63 style = self._convert_to_style(cell.style, fmt) 64 style_dict[stylekey] = style 65 66 if cell.mergestart is not None and cell.mergeend is not None: 67 wks.write_merge(startrow + cell.row, 68 startrow + cell.mergestart, 69 startcol + cell.col, 70 startcol + cell.mergeend, 71 val, style) 72 else: 73 wks.write(startrow + cell.row, 74 startcol + cell.col, 75 val, style) 76 77 @classmethod 78 def _style_to_xlwt(cls, item, firstlevel=True, field_sep=',', 79 line_sep=';'): 80 """helper which recursively generate an xlwt easy style string 81 for example: 82 83 hstyle = {"font": {"bold": True}, 84 "border": {"top": "thin", 85 "right": "thin", 86 "bottom": "thin", 87 "left": "thin"}, 88 "align": {"horiz": "center"}} 89 will be converted to 90 font: bold on; \ 91 border: top thin, right thin, bottom thin, left thin; \ 92 align: horiz center; 93 """ 94 if hasattr(item, 'items'): 95 if firstlevel: 96 it = ["{key}: {val}" 97 .format(key=key, val=cls._style_to_xlwt(value, False)) 98 for key, value in item.items()] 99 out = "{sep} ".format(sep=(line_sep).join(it)) 100 return out 101 else: 102 it = ["{key} {val}" 103 .format(key=key, val=cls._style_to_xlwt(value, False)) 104 for key, value in item.items()] 105 out = "{sep} ".format(sep=(field_sep).join(it)) 106 return out 107 else: 108 item = "{item}".format(item=item) 109 item = item.replace("True", "on") 110 item = item.replace("False", "off") 111 return item 112 113 @classmethod 114 def _convert_to_style(cls, style_dict, num_format_str=None): 115 """ 116 converts a style_dict to an xlwt style object 117 Parameters 118 ---------- 119 style_dict : style dictionary to convert 120 num_format_str : optional number format string 121 """ 122 import xlwt 123 124 if style_dict: 125 xlwt_stylestr = cls._style_to_xlwt(style_dict) 126 style = xlwt.easyxf(xlwt_stylestr, field_sep=',', line_sep=';') 127 else: 128 style = xlwt.XFStyle() 129 if num_format_str is not None: 130 style.num_format_str = num_format_str 131 132 return style 133 [end of pandas/io/excel/_xlwt.py] [start of setup.py] 1 #!/usr/bin/env python 2 3 """ 4 Parts of this file were taken from the pyzmq project 5 (https://github.com/zeromq/pyzmq) which have been permitted for use under the 6 BSD license. 
Parts are from lxml (https://github.com/lxml/lxml) 7 """ 8 9 import os 10 from os.path import join as pjoin 11 12 import pkg_resources 13 import platform 14 from distutils.sysconfig import get_config_var 15 import sys 16 import shutil 17 from distutils.version import LooseVersion 18 from setuptools import setup, Command, find_packages 19 20 # versioning 21 import versioneer 22 cmdclass = versioneer.get_cmdclass() 23 24 25 def is_platform_windows(): 26 return sys.platform == 'win32' or sys.platform == 'cygwin' 27 28 29 def is_platform_mac(): 30 return sys.platform == 'darwin' 31 32 33 min_numpy_ver = '1.12.0' 34 setuptools_kwargs = { 35 'install_requires': [ 36 'python-dateutil >= 2.5.0', 37 'pytz >= 2011k', 38 'numpy >= {numpy_ver}'.format(numpy_ver=min_numpy_ver), 39 ], 40 'setup_requires': ['numpy >= {numpy_ver}'.format(numpy_ver=min_numpy_ver)], 41 'zip_safe': False, 42 } 43 44 45 min_cython_ver = '0.28.2' 46 try: 47 import Cython 48 ver = Cython.__version__ 49 from Cython.Build import cythonize 50 _CYTHON_INSTALLED = ver >= LooseVersion(min_cython_ver) 51 except ImportError: 52 _CYTHON_INSTALLED = False 53 cythonize = lambda x, *args, **kwargs: x # dummy func 54 55 # The import of Extension must be after the import of Cython, otherwise 56 # we do not get the appropriately patched class. 57 # See https://cython.readthedocs.io/en/latest/src/reference/compilation.html 58 from distutils.extension import Extension # noqa:E402 59 from distutils.command.build import build # noqa:E402 60 61 try: 62 if not _CYTHON_INSTALLED: 63 raise ImportError('No supported version of Cython installed.') 64 from Cython.Distutils.old_build_ext import old_build_ext as _build_ext 65 cython = True 66 except ImportError: 67 from distutils.command.build_ext import build_ext as _build_ext 68 cython = False 69 else: 70 try: 71 try: 72 from Cython import Tempita as tempita 73 except ImportError: 74 import tempita 75 except ImportError: 76 raise ImportError('Building pandas requires Tempita: ' 77 'pip install Tempita') 78 79 80 _pxi_dep_template = { 81 'algos': ['_libs/algos_common_helper.pxi.in', 82 '_libs/algos_take_helper.pxi.in', 83 '_libs/algos_rank_helper.pxi.in'], 84 'groupby': ['_libs/groupby_helper.pxi.in'], 85 'hashtable': ['_libs/hashtable_class_helper.pxi.in', 86 '_libs/hashtable_func_helper.pxi.in'], 87 'index': ['_libs/index_class_helper.pxi.in'], 88 'sparse': ['_libs/sparse_op_helper.pxi.in'], 89 'interval': ['_libs/intervaltree.pxi.in']} 90 91 _pxifiles = [] 92 _pxi_dep = {} 93 for module, files in _pxi_dep_template.items(): 94 pxi_files = [pjoin('pandas', x) for x in files] 95 _pxifiles.extend(pxi_files) 96 _pxi_dep[module] = pxi_files 97 98 99 class build_ext(_build_ext): 100 @classmethod 101 def render_templates(cls, pxifiles): 102 for pxifile in pxifiles: 103 # build pxifiles first, template extension must be .pxi.in 104 assert pxifile.endswith('.pxi.in') 105 outfile = pxifile[:-3] 106 107 if (os.path.exists(outfile) and 108 os.stat(pxifile).st_mtime < os.stat(outfile).st_mtime): 109 # if .pxi.in is not updated, no need to output .pxi 110 continue 111 112 with open(pxifile, "r") as f: 113 tmpl = f.read() 114 pyxcontent = tempita.sub(tmpl) 115 116 with open(outfile, "w") as f: 117 f.write(pyxcontent) 118 119 def build_extensions(self): 120 # if building from c files, don't need to 121 # generate template output 122 if cython: 123 self.render_templates(_pxifiles) 124 125 numpy_incl = pkg_resources.resource_filename('numpy', 'core/include') 126 127 for ext in self.extensions: 128 if (hasattr(ext, 
'include_dirs') and 129 numpy_incl not in ext.include_dirs): 130 ext.include_dirs.append(numpy_incl) 131 _build_ext.build_extensions(self) 132 133 134 DESCRIPTION = ("Powerful data structures for data analysis, time series, " 135 "and statistics") 136 LONG_DESCRIPTION = """ 137 **pandas** is a Python package providing fast, flexible, and expressive data 138 structures designed to make working with structured (tabular, multidimensional, 139 potentially heterogeneous) and time series data both easy and intuitive. It 140 aims to be the fundamental high-level building block for doing practical, 141 **real world** data analysis in Python. Additionally, it has the broader goal 142 of becoming **the most powerful and flexible open source data analysis / 143 manipulation tool available in any language**. It is already well on its way 144 toward this goal. 145 146 pandas is well suited for many different kinds of data: 147 148 - Tabular data with heterogeneously-typed columns, as in an SQL table or 149 Excel spreadsheet 150 - Ordered and unordered (not necessarily fixed-frequency) time series data. 151 - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and 152 column labels 153 - Any other form of observational / statistical data sets. The data actually 154 need not be labeled at all to be placed into a pandas data structure 155 156 The two primary data structures of pandas, Series (1-dimensional) and DataFrame 157 (2-dimensional), handle the vast majority of typical use cases in finance, 158 statistics, social science, and many areas of engineering. For R users, 159 DataFrame provides everything that R's ``data.frame`` provides and much 160 more. pandas is built on top of `NumPy <http://www.numpy.org>`__ and is 161 intended to integrate well within a scientific computing environment with many 162 other 3rd party libraries. 163 164 Here are just a few of the things that pandas does well: 165 166 - Easy handling of **missing data** (represented as NaN) in floating point as 167 well as non-floating point data 168 - Size mutability: columns can be **inserted and deleted** from DataFrame and 169 higher dimensional objects 170 - Automatic and explicit **data alignment**: objects can be explicitly 171 aligned to a set of labels, or the user can simply ignore the labels and 172 let `Series`, `DataFrame`, etc. automatically align the data for you in 173 computations 174 - Powerful, flexible **group by** functionality to perform 175 split-apply-combine operations on data sets, for both aggregating and 176 transforming data 177 - Make it **easy to convert** ragged, differently-indexed data in other 178 Python and NumPy data structures into DataFrame objects 179 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting** 180 of large data sets 181 - Intuitive **merging** and **joining** data sets 182 - Flexible **reshaping** and pivoting of data sets 183 - **Hierarchical** labeling of axes (possible to have multiple labels per 184 tick) 185 - Robust IO tools for loading data from **flat files** (CSV and delimited), 186 Excel files, databases, and saving / loading data from the ultrafast **HDF5 187 format** 188 - **Time series**-specific functionality: date range generation and frequency 189 conversion, moving window statistics, moving window linear regressions, 190 date shifting and lagging, etc. 191 192 Many of these principles are here to address the shortcomings frequently 193 experienced using other languages / scientific research environments. 
For data 194 scientists, working with data is typically divided into multiple stages: 195 munging and cleaning data, analyzing / modeling it, then organizing the results 196 of the analysis into a form suitable for plotting or tabular display. pandas is 197 the ideal tool for all of these tasks. 198 """ 199 200 DISTNAME = 'pandas' 201 LICENSE = 'BSD' 202 AUTHOR = "The PyData Development Team" 203 EMAIL = "[email protected]" 204 URL = "http://pandas.pydata.org" 205 DOWNLOAD_URL = '' 206 CLASSIFIERS = [ 207 'Development Status :: 5 - Production/Stable', 208 'Environment :: Console', 209 'Operating System :: OS Independent', 210 'Intended Audience :: Science/Research', 211 'Programming Language :: Python', 212 'Programming Language :: Python :: 2', 213 'Programming Language :: Python :: 3', 214 'Programming Language :: Python :: 2.7', 215 'Programming Language :: Python :: 3.5', 216 'Programming Language :: Python :: 3.6', 217 'Programming Language :: Python :: 3.7', 218 'Programming Language :: Cython', 219 'Topic :: Scientific/Engineering'] 220 221 222 class CleanCommand(Command): 223 """Custom distutils command to clean the .so and .pyc files.""" 224 225 user_options = [("all", "a", "")] 226 227 def initialize_options(self): 228 self.all = True 229 self._clean_me = [] 230 self._clean_trees = [] 231 232 base = pjoin('pandas', '_libs', 'src') 233 tsbase = pjoin('pandas', '_libs', 'tslibs', 'src') 234 dt = pjoin(tsbase, 'datetime') 235 util = pjoin('pandas', 'util') 236 parser = pjoin(base, 'parser') 237 ujson_python = pjoin(base, 'ujson', 'python') 238 ujson_lib = pjoin(base, 'ujson', 'lib') 239 self._clean_exclude = [pjoin(dt, 'np_datetime.c'), 240 pjoin(dt, 'np_datetime_strings.c'), 241 pjoin(parser, 'tokenizer.c'), 242 pjoin(parser, 'io.c'), 243 pjoin(ujson_python, 'ujson.c'), 244 pjoin(ujson_python, 'objToJSON.c'), 245 pjoin(ujson_python, 'JSONtoObj.c'), 246 pjoin(ujson_lib, 'ultrajsonenc.c'), 247 pjoin(ujson_lib, 'ultrajsondec.c'), 248 pjoin(util, 'move.c'), 249 ] 250 251 for root, dirs, files in os.walk('pandas'): 252 for f in files: 253 filepath = pjoin(root, f) 254 if filepath in self._clean_exclude: 255 continue 256 257 if os.path.splitext(f)[-1] in ('.pyc', '.so', '.o', 258 '.pyo', 259 '.pyd', '.c', '.orig'): 260 self._clean_me.append(filepath) 261 for d in dirs: 262 if d == '__pycache__': 263 self._clean_trees.append(pjoin(root, d)) 264 265 # clean the generated pxi files 266 for pxifile in _pxifiles: 267 pxifile = pxifile.replace(".pxi.in", ".pxi") 268 self._clean_me.append(pxifile) 269 270 for d in ('build', 'dist'): 271 if os.path.exists(d): 272 self._clean_trees.append(d) 273 274 def finalize_options(self): 275 pass 276 277 def run(self): 278 for clean_me in self._clean_me: 279 try: 280 os.unlink(clean_me) 281 except Exception: 282 pass 283 for clean_tree in self._clean_trees: 284 try: 285 shutil.rmtree(clean_tree) 286 except Exception: 287 pass 288 289 290 # we need to inherit from the versioneer 291 # class as it encodes the version info 292 sdist_class = cmdclass['sdist'] 293 294 295 class CheckSDist(sdist_class): 296 """Custom sdist that ensures Cython has compiled all pyx files to c.""" 297 298 _pyxfiles = ['pandas/_libs/lib.pyx', 299 'pandas/_libs/hashtable.pyx', 300 'pandas/_libs/tslib.pyx', 301 'pandas/_libs/index.pyx', 302 'pandas/_libs/internals.pyx', 303 'pandas/_libs/algos.pyx', 304 'pandas/_libs/join.pyx', 305 'pandas/_libs/indexing.pyx', 306 'pandas/_libs/interval.pyx', 307 'pandas/_libs/hashing.pyx', 308 'pandas/_libs/missing.pyx', 309 
'pandas/_libs/reduction.pyx', 310 'pandas/_libs/testing.pyx', 311 'pandas/_libs/skiplist.pyx', 312 'pandas/_libs/sparse.pyx', 313 'pandas/_libs/ops.pyx', 314 'pandas/_libs/parsers.pyx', 315 'pandas/_libs/tslibs/ccalendar.pyx', 316 'pandas/_libs/tslibs/period.pyx', 317 'pandas/_libs/tslibs/strptime.pyx', 318 'pandas/_libs/tslibs/np_datetime.pyx', 319 'pandas/_libs/tslibs/timedeltas.pyx', 320 'pandas/_libs/tslibs/timestamps.pyx', 321 'pandas/_libs/tslibs/timezones.pyx', 322 'pandas/_libs/tslibs/conversion.pyx', 323 'pandas/_libs/tslibs/fields.pyx', 324 'pandas/_libs/tslibs/offsets.pyx', 325 'pandas/_libs/tslibs/frequencies.pyx', 326 'pandas/_libs/tslibs/resolution.pyx', 327 'pandas/_libs/tslibs/parsing.pyx', 328 'pandas/_libs/writers.pyx', 329 'pandas/io/sas/sas.pyx'] 330 331 _cpp_pyxfiles = ['pandas/_libs/window.pyx', 332 'pandas/io/msgpack/_packer.pyx', 333 'pandas/io/msgpack/_unpacker.pyx'] 334 335 def initialize_options(self): 336 sdist_class.initialize_options(self) 337 338 def run(self): 339 if 'cython' in cmdclass: 340 self.run_command('cython') 341 else: 342 # If we are not running cython then 343 # compile the extensions correctly 344 pyx_files = [(self._pyxfiles, 'c'), (self._cpp_pyxfiles, 'cpp')] 345 346 for pyxfiles, extension in pyx_files: 347 for pyxfile in pyxfiles: 348 sourcefile = pyxfile[:-3] + extension 349 msg = ("{extension}-source file '{source}' not found.\n" 350 "Run 'setup.py cython' before sdist.".format( 351 source=sourcefile, extension=extension)) 352 assert os.path.isfile(sourcefile), msg 353 sdist_class.run(self) 354 355 356 class CheckingBuildExt(build_ext): 357 """ 358 Subclass build_ext to get clearer report if Cython is necessary. 359 """ 360 361 def check_cython_extensions(self, extensions): 362 for ext in extensions: 363 for src in ext.sources: 364 if not os.path.exists(src): 365 print("{}: -> [{}]".format(ext.name, ext.sources)) 366 raise Exception("""Cython-generated file '{src}' not found. 367 Cython is required to compile pandas from a development branch. 368 Please install Cython or download a release package of pandas. 369 """.format(src=src)) 370 371 def build_extensions(self): 372 self.check_cython_extensions(self.extensions) 373 build_ext.build_extensions(self) 374 375 376 class CythonCommand(build_ext): 377 """ 378 Custom distutils command subclassed from Cython.Distutils.build_ext 379 to compile pyx->c, and stop there. All this does is override the 380 C-compile method build_extension() with a no-op. 381 """ 382 def build_extension(self, ext): 383 pass 384 385 386 class DummyBuildSrc(Command): 387 """ numpy's build_src command interferes with Cython's build_ext. 
388 """ 389 user_options = [] 390 391 def initialize_options(self): 392 self.py_modules_dict = {} 393 394 def finalize_options(self): 395 pass 396 397 def run(self): 398 pass 399 400 401 cmdclass.update({'clean': CleanCommand, 402 'build': build}) 403 404 if cython: 405 suffix = '.pyx' 406 cmdclass['build_ext'] = CheckingBuildExt 407 cmdclass['cython'] = CythonCommand 408 else: 409 suffix = '.c' 410 cmdclass['build_src'] = DummyBuildSrc 411 cmdclass['build_ext'] = CheckingBuildExt 412 413 # ---------------------------------------------------------------------- 414 # Preparation of compiler arguments 415 416 if sys.byteorder == 'big': 417 endian_macro = [('__BIG_ENDIAN__', '1')] 418 else: 419 endian_macro = [('__LITTLE_ENDIAN__', '1')] 420 421 422 if is_platform_windows(): 423 extra_compile_args = [] 424 else: 425 # args to ignore warnings 426 extra_compile_args = ['-Wno-unused-function'] 427 428 429 # For mac, ensure extensions are built for macos 10.9 when compiling on a 430 # 10.9 system or above, overriding distuitls behaviour which is to target 431 # the version that python was built for. This may be overridden by setting 432 # MACOSX_DEPLOYMENT_TARGET before calling setup.py 433 if is_platform_mac(): 434 if 'MACOSX_DEPLOYMENT_TARGET' not in os.environ: 435 current_system = LooseVersion(platform.mac_ver()[0]) 436 python_target = LooseVersion( 437 get_config_var('MACOSX_DEPLOYMENT_TARGET')) 438 if python_target < '10.9' and current_system >= '10.9': 439 os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.9' 440 441 442 # enable coverage by building cython files by setting the environment variable 443 # "PANDAS_CYTHON_COVERAGE" (with a Truthy value) or by running build_ext 444 # with `--with-cython-coverage`enabled 445 linetrace = os.environ.get('PANDAS_CYTHON_COVERAGE', False) 446 if '--with-cython-coverage' in sys.argv: 447 linetrace = True 448 sys.argv.remove('--with-cython-coverage') 449 450 # Note: if not using `cythonize`, coverage can be enabled by 451 # pinning `ext.cython_directives = directives` to each ext in extensions. 452 # github.com/cython/cython/wiki/enhancements-compilerdirectives#in-setuppy 453 directives = {'linetrace': False, 454 'language_level': 2} 455 macros = [] 456 if linetrace: 457 # https://pypkg.com/pypi/pytest-cython/f/tests/example-project/setup.py 458 directives['linetrace'] = True 459 macros = [('CYTHON_TRACE', '1'), ('CYTHON_TRACE_NOGIL', '1')] 460 461 # in numpy>=1.16.0, silence build warnings about deprecated API usage 462 # we can't do anything about these warnings because they stem from 463 # cython+numpy version mismatches. 464 macros.append(('NPY_NO_DEPRECATED_API', '0')) 465 466 467 # ---------------------------------------------------------------------- 468 # Specification of Dependencies 469 470 # TODO: Need to check to see if e.g. `linetrace` has changed and possibly 471 # re-compile. 472 def maybe_cythonize(extensions, *args, **kwargs): 473 """ 474 Render tempita templates before calling cythonize 475 """ 476 if len(sys.argv) > 1 and 'clean' in sys.argv: 477 # Avoid running cythonize on `python setup.py clean` 478 # See https://github.com/cython/cython/issues/1495 479 return extensions 480 481 numpy_incl = pkg_resources.resource_filename('numpy', 'core/include') 482 # TODO: Is this really necessary here? 
483 for ext in extensions: 484 if (hasattr(ext, 'include_dirs') and 485 numpy_incl not in ext.include_dirs): 486 ext.include_dirs.append(numpy_incl) 487 488 if cython: 489 build_ext.render_templates(_pxifiles) 490 return cythonize(extensions, *args, **kwargs) 491 else: 492 return extensions 493 494 495 def srcpath(name=None, suffix='.pyx', subdir='src'): 496 return pjoin('pandas', subdir, name + suffix) 497 498 499 common_include = ['pandas/_libs/src/klib', 'pandas/_libs/src'] 500 ts_include = ['pandas/_libs/tslibs/src', 'pandas/_libs/tslibs'] 501 502 503 lib_depends = ['pandas/_libs/src/parse_helper.h', 504 'pandas/_libs/src/compat_helper.h'] 505 506 np_datetime_headers = [ 507 'pandas/_libs/tslibs/src/datetime/np_datetime.h', 508 'pandas/_libs/tslibs/src/datetime/np_datetime_strings.h'] 509 np_datetime_sources = [ 510 'pandas/_libs/tslibs/src/datetime/np_datetime.c', 511 'pandas/_libs/tslibs/src/datetime/np_datetime_strings.c'] 512 513 tseries_depends = np_datetime_headers 514 515 516 ext_data = { 517 '_libs.algos': { 518 'pyxfile': '_libs/algos', 519 'depends': _pxi_dep['algos']}, 520 '_libs.groupby': { 521 'pyxfile': '_libs/groupby', 522 'depends': _pxi_dep['groupby']}, 523 '_libs.hashing': { 524 'pyxfile': '_libs/hashing', 525 'include': [], 526 'depends': []}, 527 '_libs.hashtable': { 528 'pyxfile': '_libs/hashtable', 529 'depends': (['pandas/_libs/src/klib/khash_python.h'] + 530 _pxi_dep['hashtable'])}, 531 '_libs.index': { 532 'pyxfile': '_libs/index', 533 'include': common_include + ts_include, 534 'depends': _pxi_dep['index'], 535 'sources': np_datetime_sources}, 536 '_libs.indexing': { 537 'pyxfile': '_libs/indexing'}, 538 '_libs.internals': { 539 'pyxfile': '_libs/internals'}, 540 '_libs.interval': { 541 'pyxfile': '_libs/interval', 542 'depends': _pxi_dep['interval']}, 543 '_libs.join': { 544 'pyxfile': '_libs/join'}, 545 '_libs.lib': { 546 'pyxfile': '_libs/lib', 547 'include': common_include + ts_include, 548 'depends': lib_depends + tseries_depends}, 549 '_libs.missing': { 550 'pyxfile': '_libs/missing', 551 'include': common_include + ts_include, 552 'depends': tseries_depends}, 553 '_libs.parsers': { 554 'pyxfile': '_libs/parsers', 555 'depends': ['pandas/_libs/src/parser/tokenizer.h', 556 'pandas/_libs/src/parser/io.h'], 557 'sources': ['pandas/_libs/src/parser/tokenizer.c', 558 'pandas/_libs/src/parser/io.c']}, 559 '_libs.reduction': { 560 'pyxfile': '_libs/reduction'}, 561 '_libs.ops': { 562 'pyxfile': '_libs/ops'}, 563 '_libs.properties': { 564 'pyxfile': '_libs/properties', 565 'include': []}, 566 '_libs.reshape': { 567 'pyxfile': '_libs/reshape', 568 'depends': []}, 569 '_libs.skiplist': { 570 'pyxfile': '_libs/skiplist', 571 'depends': ['pandas/_libs/src/skiplist.h']}, 572 '_libs.sparse': { 573 'pyxfile': '_libs/sparse', 574 'depends': _pxi_dep['sparse']}, 575 '_libs.tslib': { 576 'pyxfile': '_libs/tslib', 577 'include': ts_include, 578 'depends': tseries_depends, 579 'sources': np_datetime_sources}, 580 '_libs.tslibs.ccalendar': { 581 'pyxfile': '_libs/tslibs/ccalendar', 582 'include': []}, 583 '_libs.tslibs.conversion': { 584 'pyxfile': '_libs/tslibs/conversion', 585 'include': ts_include, 586 'depends': tseries_depends, 587 'sources': np_datetime_sources}, 588 '_libs.tslibs.fields': { 589 'pyxfile': '_libs/tslibs/fields', 590 'include': ts_include, 591 'depends': tseries_depends, 592 'sources': np_datetime_sources}, 593 '_libs.tslibs.frequencies': { 594 'pyxfile': '_libs/tslibs/frequencies', 595 'include': []}, 596 '_libs.tslibs.nattype': { 597 'pyxfile': 
'_libs/tslibs/nattype', 598 'include': []}, 599 '_libs.tslibs.np_datetime': { 600 'pyxfile': '_libs/tslibs/np_datetime', 601 'include': ts_include, 602 'depends': np_datetime_headers, 603 'sources': np_datetime_sources}, 604 '_libs.tslibs.offsets': { 605 'pyxfile': '_libs/tslibs/offsets', 606 'include': ts_include, 607 'depends': tseries_depends, 608 'sources': np_datetime_sources}, 609 '_libs.tslibs.parsing': { 610 'pyxfile': '_libs/tslibs/parsing', 611 'include': []}, 612 '_libs.tslibs.period': { 613 'pyxfile': '_libs/tslibs/period', 614 'include': ts_include, 615 'depends': tseries_depends, 616 'sources': np_datetime_sources}, 617 '_libs.tslibs.resolution': { 618 'pyxfile': '_libs/tslibs/resolution', 619 'include': ts_include, 620 'depends': tseries_depends, 621 'sources': np_datetime_sources}, 622 '_libs.tslibs.strptime': { 623 'pyxfile': '_libs/tslibs/strptime', 624 'include': ts_include, 625 'depends': tseries_depends, 626 'sources': np_datetime_sources}, 627 '_libs.tslibs.timedeltas': { 628 'pyxfile': '_libs/tslibs/timedeltas', 629 'include': ts_include, 630 'depends': np_datetime_headers, 631 'sources': np_datetime_sources}, 632 '_libs.tslibs.timestamps': { 633 'pyxfile': '_libs/tslibs/timestamps', 634 'include': ts_include, 635 'depends': tseries_depends, 636 'sources': np_datetime_sources}, 637 '_libs.tslibs.timezones': { 638 'pyxfile': '_libs/tslibs/timezones', 639 'include': []}, 640 '_libs.testing': { 641 'pyxfile': '_libs/testing'}, 642 '_libs.window': { 643 'pyxfile': '_libs/window', 644 'language': 'c++', 645 'suffix': '.cpp'}, 646 '_libs.writers': { 647 'pyxfile': '_libs/writers'}, 648 'io.sas._sas': { 649 'pyxfile': 'io/sas/sas'}, 650 'io.msgpack._packer': { 651 'macros': endian_macro + macros, 652 'depends': ['pandas/_libs/src/msgpack/pack.h', 653 'pandas/_libs/src/msgpack/pack_template.h'], 654 'include': ['pandas/_libs/src/msgpack'] + common_include, 655 'language': 'c++', 656 'suffix': '.cpp', 657 'pyxfile': 'io/msgpack/_packer', 658 'subdir': 'io/msgpack'}, 659 'io.msgpack._unpacker': { 660 'depends': ['pandas/_libs/src/msgpack/unpack.h', 661 'pandas/_libs/src/msgpack/unpack_define.h', 662 'pandas/_libs/src/msgpack/unpack_template.h'], 663 'macros': endian_macro + macros, 664 'include': ['pandas/_libs/src/msgpack'] + common_include, 665 'language': 'c++', 666 'suffix': '.cpp', 667 'pyxfile': 'io/msgpack/_unpacker', 668 'subdir': 'io/msgpack' 669 } 670 } 671 672 extensions = [] 673 674 for name, data in ext_data.items(): 675 source_suffix = suffix if suffix == '.pyx' else data.get('suffix', '.c') 676 677 sources = [srcpath(data['pyxfile'], suffix=source_suffix, subdir='')] 678 679 sources.extend(data.get('sources', [])) 680 681 include = data.get('include', common_include) 682 683 obj = Extension('pandas.{name}'.format(name=name), 684 sources=sources, 685 depends=data.get('depends', []), 686 include_dirs=include, 687 language=data.get('language', 'c'), 688 define_macros=data.get('macros', macros), 689 extra_compile_args=extra_compile_args) 690 691 extensions.append(obj) 692 693 # ---------------------------------------------------------------------- 694 # ujson 695 696 if suffix == '.pyx': 697 # undo dumb setuptools bug clobbering .pyx sources back to .c 698 for ext in extensions: 699 if ext.sources[0].endswith(('.c', '.cpp')): 700 root, _ = os.path.splitext(ext.sources[0]) 701 ext.sources[0] = root + suffix 702 703 ujson_ext = Extension('pandas._libs.json', 704 depends=['pandas/_libs/src/ujson/lib/ultrajson.h'], 705 sources=(['pandas/_libs/src/ujson/python/ujson.c', 
706 'pandas/_libs/src/ujson/python/objToJSON.c', 707 'pandas/_libs/src/ujson/python/JSONtoObj.c', 708 'pandas/_libs/src/ujson/lib/ultrajsonenc.c', 709 'pandas/_libs/src/ujson/lib/ultrajsondec.c'] + 710 np_datetime_sources), 711 include_dirs=['pandas/_libs/src/ujson/python', 712 'pandas/_libs/src/ujson/lib', 713 'pandas/_libs/src/datetime'], 714 extra_compile_args=(['-D_GNU_SOURCE'] + 715 extra_compile_args), 716 define_macros=macros) 717 718 719 extensions.append(ujson_ext) 720 721 # ---------------------------------------------------------------------- 722 # util 723 # extension for pseudo-safely moving bytes into mutable buffers 724 _move_ext = Extension('pandas.util._move', 725 depends=[], 726 sources=['pandas/util/move.c'], 727 define_macros=macros) 728 extensions.append(_move_ext) 729 730 # The build cache system does string matching below this point. 731 # if you change something, be careful. 732 733 setup(name=DISTNAME, 734 maintainer=AUTHOR, 735 version=versioneer.get_version(), 736 packages=find_packages(include=['pandas', 'pandas.*']), 737 package_data={'': ['templates/*', '_libs/*.dll']}, 738 ext_modules=maybe_cythonize(extensions, compiler_directives=directives), 739 maintainer_email=EMAIL, 740 description=DESCRIPTION, 741 license=LICENSE, 742 cmdclass=cmdclass, 743 url=URL, 744 download_url=DOWNLOAD_URL, 745 long_description=LONG_DESCRIPTION, 746 classifiers=CLASSIFIERS, 747 platforms='any', 748 python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*', 749 **setuptools_kwargs) 750 [end of setup.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
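For readers skimming the `_xlwt.py` listing above: `_style_to_xlwt` flattens a nested style dict into the semicolon/comma separated string consumed by `xlwt.easyxf`. The function below is a simplified standalone re-implementation written only to show the idea; it is not pandas' actual code, and its whitespace handling is tidier than the original's.

```python
# Simplified standalone sketch of the nested-dict flattening performed by
# _style_to_xlwt above; NOT pandas' implementation.
def style_to_xlwt(style, field_sep=", ", line_sep="; "):
    def flatten(value):
        # Nested dicts become space-joined "key value" pairs; booleans are
        # rewritten to the on/off keywords that easyxf expects.
        if hasattr(value, "items"):
            return field_sep.join(
                "{} {}".format(k, flatten(v)) for k, v in value.items()
            )
        return str(value).replace("True", "on").replace("False", "off")

    return line_sep.join(
        "{}: {}".format(k, flatten(v)) for k, v in style.items()
    ) + ";"


hstyle = {
    "font": {"bold": True},
    "border": {"top": "thin", "right": "thin", "bottom": "thin", "left": "thin"},
    "align": {"horiz": "center"},
}
print(style_to_xlwt(hstyle))
# font: bold on; border: top thin, right thin, bottom thin, left thin; align: horiz center;
```

The printed string matches the example given in the `_style_to_xlwt` docstring above.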
pandas-dev/pandas
659e0cae6be2d7ab3370cc7d8ab936bc3ee1b159
Excel Module Cleanups As called out by @gfyoung in review there are some easy opportunities to clean up and simplify the existing Excel IO modules. These are mostly minor loop refactors and docstring updates https://github.com/pandas-dev/pandas/pull/25153#pullrequestreview-200888051
2019-02-11T21:02:24Z
<patch> diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py --- a/pandas/io/excel/_base.py +++ b/pandas/io/excel/_base.py @@ -590,9 +590,8 @@ def __new__(cls, path, engine=None, **kwargs): if engine == 'auto': engine = _get_default_writer(ext) except KeyError: - error = ValueError("No engine for filetype: '{ext}'" - .format(ext=ext)) - raise error + raise ValueError("No engine for filetype: '{ext}'" + .format(ext=ext)) cls = get_writer(engine) return object.__new__(cls) diff --git a/pandas/io/excel/_util.py b/pandas/io/excel/_util.py --- a/pandas/io/excel/_util.py +++ b/pandas/io/excel/_util.py @@ -5,32 +5,39 @@ from pandas.core.dtypes.common import is_integer, is_list_like -from pandas.core import config - -_writer_extensions = ["xlsx", "xls", "xlsm"] - - _writers = {} def register_writer(klass): - """Adds engine to the excel writer registry. You must use this method to - integrate with ``to_excel``. Also adds config options for any new - ``supported_extensions`` defined on the writer.""" + """ + Add engine to the excel writer registry.io.excel. + + You must use this method to integrate with ``to_excel``. + + Parameters + ---------- + klass : ExcelWriter + """ if not callable(klass): raise ValueError("Can only register callables as engines") engine_name = klass.engine _writers[engine_name] = klass - for ext in klass.supported_extensions: - if ext.startswith('.'): - ext = ext[1:] - if ext not in _writer_extensions: - config.register_option("io.excel.{ext}.writer".format(ext=ext), - engine_name, validator=str) - _writer_extensions.append(ext) def _get_default_writer(ext): + """ + Return the default writer for the given extension. + + Parameters + ---------- + ext : str + The excel file extension for which to get the default engine. + + Returns + ------- + str + The default engine for the extension. + """ _default_writers = {'xlsx': 'openpyxl', 'xlsm': 'openpyxl', 'xls': 'xlwt'} try: import xlsxwriter # noqa @@ -230,8 +237,6 @@ def _fill_mi_header(row, control_row): return _maybe_convert_to_string(row), control_row -# fill blank if index_col not None - def _pop_header_name(row, index_col): """ </patch>
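Two cleanups stand out in the patch above: the `ValueError` in `ExcelWriter.__new__` is now raised directly instead of being bound to a temporary name, and `register_writer` no longer registers per-extension config options, it only records the engine class. A standalone before/after sketch of the first pattern follows; the lookup dict and function names are invented for illustration and are not pandas' real helpers.

```python
# Standalone before/after sketch of the "raise directly" cleanup from the
# patch above. _EXAMPLE_WRITERS and both functions are invented for
# illustration; they are not pandas' real helpers.
_EXAMPLE_WRITERS = {"xlsx": "openpyxl", "xlsm": "openpyxl", "xls": "xlwt"}


def engine_for_ext_before(ext):
    try:
        return _EXAMPLE_WRITERS[ext]
    except KeyError:
        # Binds the exception to a name only to raise it on the next line.
        error = ValueError("No engine for filetype: '{ext}'".format(ext=ext))
        raise error


def engine_for_ext_after(ext):
    try:
        return _EXAMPLE_WRITERS[ext]
    except KeyError:
        # Equivalent behaviour, one statement, no throwaway variable.
        raise ValueError("No engine for filetype: '{ext}'".format(ext=ext))
```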
[]
[]
pandas-dev__pandas-11006
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> pd.to_pickle() and pd.read_pickle() bug? I'm on windows 7, anaconda, python 2.7.9, 32bit pandas 0.16.2 try this: ``` python import pandas as pd df4 = pd.DataFrame(index=P.date_range('1750-1-1', '2050-1-1', freq='7D') pd.to_pickle(df4, '7d.test') pd.read_pickle('7d.test') In [84]: P.read_pickle('7d.test') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-84-0108eecfbea7> in <module>() ----> 1 P.read_pickle('7d.test') C:\Users\LV\Miniconda\lib\site-packages\pandas\io\pickle.pyc in read_pickle(path) 58 59 try: ---> 60 return try_read(path) 61 except: 62 if PY3: C:\Users\LV\Miniconda\lib\site-packages\pandas\io\pickle.pyc in try_read(path, encoding) 55 except: 56 with open(path, 'rb') as fh: ---> 57 return pc.load(fh, encoding=encoding, compat=True) 58 59 try: C:\Users\LV\Miniconda\lib\site-packages\pandas\compat\pickle_compat.pyc in load(fh, encoding, compat, is_verbose) 114 up.is_verbose = is_verbose 115 --> 116 return up.load() 117 except: 118 raise C:\Users\LV\Miniconda\lib\pickle.pyc in load(self) 856 while 1: 857 key = read(1) --> 858 dispatch[key](self) 859 except _Stop, stopinst: 860 return stopinst.value C:\Users\LV\Miniconda\lib\site-packages\pandas\compat\pickle_compat.pyc in load_reduce(self) 18 19 try: ---> 20 stack[-1] = func(*args) 21 return 22 except Exception as e: C:\Users\LV\Miniconda\lib\site-packages\pandas\tseries\index.pyc in _new_DatetimeIndex(cls, d) 113 # data are already in UTC 114 tz = d.pop('tz',None) --> 115 result = cls.__new__(cls, **d) 116 result.tz = tz 117 return result C:\Users\LV\Miniconda\lib\site-packages\pandas\util\decorators.pyc in wrapper(*args, **kwargs) 86 else: 87 kwargs[new_arg_name] = new_arg_value ---> 88 return func(*args, **kwargs) 89 return wrapper 90 return _deprecate_kwarg C:\Users\LV\Miniconda\lib\site-packages\pandas\tseries\index.pyc in __new__(cls, data, freq, start, end, periods, copy, name, tz, verify_integrity, normalize, closed, ambiguous, **kwargs) 334 if not np.array_equal(subarr.asi8, on_freq.asi8): 335 raise ValueError('Inferred frequency {0} from passed dates does not' --> 336 'conform to passed frequency {1}'.format(inferred, freq.freqstr)) 337 338 if freq_infer: ValueError: Inferred frequency W-THU from passed dates does notconform to passed frequency 7D ``` </issue> <code> [start of README.md] 1 # pandas: powerful Python data analysis toolkit 2 3 <table> 4 <tr> 5 <td>Latest Release</td> 6 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td> 7 </tr> 8 <tr> 9 <td>Package Status</td> 10 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 11 </tr> 12 <tr> 13 <td>License</td> 14 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td> 15 </tr> 16 <tr> 17 <td>Build Status</td> 18 <td> 19 <a href="https://travis-ci.org/pydata/pandas"> 20 <img src="https://travis-ci.org/pydata/pandas.svg?branch=master" alt="build status" /> 21 </a> 22 </td> 23 </tr> 24 <tr> 25 <td>Conda</td> 26 <td> 27 <a href="http://pandas.pydata.org"> 28 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda downloads" /> 29 </a> 30 </td> 31 </tr> 32 <tr> 33 <td>PyPI</td> 34 <td> 35 <a href="https://pypi.python.org/pypi/pandas/"> 36 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" /> 37 </a> 38 </td> 39 </tr> 40 </table> 
41 42 [![https://gitter.im/pydata/pandas](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 43 44 ## What is it 45 46 **pandas** is a Python package providing fast, flexible, and expressive data 47 structures designed to make working with "relational" or "labeled" data both 48 easy and intuitive. It aims to be the fundamental high-level building block for 49 doing practical, **real world** data analysis in Python. Additionally, it has 50 the broader goal of becoming **the most powerful and flexible open source data 51 analysis / manipulation tool available in any language**. It is already well on 52 its way toward this goal. 53 54 ## Main Features 55 Here are just a few of the things that pandas does well: 56 57 - Easy handling of [**missing data**][missing-data] (represented as 58 `NaN`) in floating point as well as non-floating point data 59 - Size mutability: columns can be [**inserted and 60 deleted**][insertion-deletion] from DataFrame and higher dimensional 61 objects 62 - Automatic and explicit [**data alignment**][alignment]: objects can 63 be explicitly aligned to a set of labels, or the user can simply 64 ignore the labels and let `Series`, `DataFrame`, etc. automatically 65 align the data for you in computations 66 - Powerful, flexible [**group by**][groupby] functionality to perform 67 split-apply-combine operations on data sets, for both aggregating 68 and transforming data 69 - Make it [**easy to convert**][conversion] ragged, 70 differently-indexed data in other Python and NumPy data structures 71 into DataFrame objects 72 - Intelligent label-based [**slicing**][slicing], [**fancy 73 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 74 large data sets 75 - Intuitive [**merging**][merging] and [**joining**][joining] data 76 sets 77 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 78 data sets 79 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 80 labels per tick) 81 - Robust IO tools for loading data from [**flat files**][flat-files] 82 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 83 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 84 - [**Time series**][timeseries]-specific functionality: date range 85 generation and frequency conversion, moving window statistics, 86 moving window linear regressions, date shifting and lagging, etc. 
87 88 89 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 90 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 91 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 92 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 93 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 94 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 95 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 96 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 97 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 98 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 99 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 100 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 101 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 102 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 103 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 104 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 105 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 106 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 107 108 ## Where to get it 109 The source code is currently hosted on GitHub at: 110 http://github.com/pydata/pandas 111 112 Binary installers for the latest released version are available at the Python 113 package index 114 115 http://pypi.python.org/pypi/pandas/ 116 117 And via `easy_install`: 118 119 ```sh 120 easy_install pandas 121 ``` 122 123 or `pip`: 124 125 ```sh 126 pip install pandas 127 ``` 128 129 or `conda`: 130 131 ```sh 132 conda install pandas 133 ``` 134 135 ## Dependencies 136 - [NumPy](http://www.numpy.org): 1.7.0 or higher 137 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 138 - [pytz](http://pytz.sourceforge.net) 139 - Needed for time zone support with ``pandas.date_range`` 140 141 ### Highly Recommended Dependencies 142 - [numexpr](https://github.com/pydata/numexpr) 143 - Needed to accelerate some expression evaluation operations 144 - Required by PyTables 145 - [bottleneck](http://berkeleyanalytics.com/bottleneck) 146 - Needed to accelerate certain numerical operations 147 148 ### Optional dependencies 149 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher. 150 - [SciPy](http://www.scipy.org): miscellaneous statistical functions 151 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage 152 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended. 
153 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting 154 - [statsmodels](http://statsmodels.sourceforge.net/) 155 - Needed for parts of `pandas.stats` 156 - For Excel I/O: 157 - [xlrd/xlwt](http://www.python-excel.org/) 158 - Excel reading (xlrd) and writing (xlwt) 159 - [openpyxl](http://packages.python.org/openpyxl/) 160 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for 161 writing .xlsx files 162 - xlrd >= 0.9.0 163 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter) 164 - Alternative Excel writer. 165 - [Google bq Command Line Tool](https://cloud.google.com/bigquery/bq-command-line-tool) 166 - Needed for `pandas.io.gbq` 167 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access. 168 - One of the following combinations of libraries is needed to use the 169 top-level [`pandas.read_html`][read-html-docs] function: 170 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any 171 recent version of [html5lib][html5lib] is okay.) 172 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml] 173 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml] 174 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas] 175 for reasons as to why you should probably **not** take this approach. 176 177 #### Notes about HTML parsing libraries 178 - If you install [BeautifulSoup4][BeautifulSoup4] you must install 179 either [lxml][lxml] or [html5lib][html5lib] or both. 180 `pandas.read_html` will **not** work with *only* `BeautifulSoup4` 181 installed. 182 - You are strongly encouraged to read [HTML reading 183 gotchas][html-gotchas]. It explains issues surrounding the 184 installation and usage of the above three libraries. 185 - You may need to install an older version of 186 [BeautifulSoup4][BeautifulSoup4]: 187 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 188 32-bit Ubuntu/Debian 189 - Additionally, if you're using [Anaconda][Anaconda] you should 190 definitely read [the gotchas about HTML parsing][html-gotchas] 191 libraries 192 - If you're on a system with `apt-get` you can do 193 194 ```sh 195 sudo apt-get build-dep python-lxml 196 ``` 197 198 to get the necessary dependencies for installation of [lxml][lxml]. 199 This will prevent further headaches down the line. 200 201 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib" 202 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4" 203 [lxml]: http://lxml.de 204 [Anaconda]: https://store.continuum.io/cshop/anaconda 205 [NumPy]: http://numpy.scipy.org/ 206 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing 207 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html 208 209 ## Installation from sources 210 To install pandas from source you need Cython in addition to the normal 211 dependencies above. 
Cython can be installed from pypi: 212 213 ```sh 214 pip install cython 215 ``` 216 217 In the `pandas` directory (same one where you found this file after 218 cloning the git repo), execute: 219 220 ```sh 221 python setup.py install 222 ``` 223 224 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 225 226 ```sh 227 python setup.py develop 228 ``` 229 230 Alternatively, you can use `pip` if you want all the dependencies pulled 231 in automatically (the `-e` option is for installing it in [development 232 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 233 234 ```sh 235 pip install -e . 236 ``` 237 238 On Windows, you will need to install MinGW and execute: 239 240 ```sh 241 python setup.py build --compiler=mingw32 242 python setup.py install 243 ``` 244 245 See http://pandas.pydata.org/ for more information. 246 247 ## License 248 BSD 249 250 ## Documentation 251 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 252 253 The Sphinx documentation should provide a good starting point for learning how 254 to use the library. Expect the docs to continue to expand as time goes on. 255 256 ## Background 257 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 258 has been under active development since then. 259 260 ## Discussion and Development 261 Since pandas development is related to a number of other scientific 262 Python projects, questions are welcome on the scipy-user mailing 263 list. Specialized discussions or design issues should take place on 264 the PyData mailing list / Google group: 265 266 https://groups.google.com/forum/#!forum/pydata 267 [end of README.md] [start of pandas/tseries/tdi.py] 1 """ implement the TimedeltaIndex """ 2 3 from datetime import timedelta 4 import numpy as np 5 from pandas.core.common import (ABCSeries, _TD_DTYPE, _INT64_DTYPE, 6 is_timedelta64_dtype, _maybe_box, 7 _values_from_object, isnull, is_integer, is_float) 8 from pandas.core.index import Index, Int64Index 9 import pandas.compat as compat 10 from pandas.compat import u 11 from pandas.util.decorators import cache_readonly 12 from pandas.tseries.frequencies import to_offset 13 import pandas.core.common as com 14 from pandas.tseries import timedeltas 15 from pandas.tseries.base import DatetimeIndexOpsMixin 16 from pandas.tseries.timedeltas import to_timedelta, _coerce_scalar_to_timedelta_type 17 import pandas.tseries.offsets as offsets 18 from pandas.tseries.offsets import Tick, DateOffset 19 20 import pandas.lib as lib 21 import pandas.tslib as tslib 22 import pandas.algos as _algos 23 import pandas.index as _index 24 25 Timedelta = tslib.Timedelta 26 27 _resolution_map = { 28 'ns' : offsets.Nano, 29 'us' : offsets.Micro, 30 'ms' : offsets.Milli, 31 's' : offsets.Second, 32 'm' : offsets.Minute, 33 'h' : offsets.Hour, 34 'D' : offsets.Day, 35 } 36 37 def _td_index_cmp(opname, nat_result=False): 38 """ 39 Wrap comparison operations to convert timedelta-like to timedelta64 40 """ 41 def wrapper(self, other): 42 func = getattr(super(TimedeltaIndex, self), opname) 43 if _is_convertible_to_td(other): 44 other = _to_m8(other) 45 result = func(other) 46 if com.isnull(other): 47 result.fill(nat_result) 48 else: 49 if not com.is_list_like(other): 50 raise TypeError("cannot compare a TimedeltaIndex with type {0}".format(type(other))) 51 52 other = TimedeltaIndex(other).values 53 result = func(other) 54 result = _values_from_object(result) 55 56 if 
isinstance(other, Index): 57 o_mask = other.values.view('i8') == tslib.iNaT 58 else: 59 o_mask = other.view('i8') == tslib.iNaT 60 61 if o_mask.any(): 62 result[o_mask] = nat_result 63 64 mask = self.asi8 == tslib.iNaT 65 if mask.any(): 66 result[mask] = nat_result 67 68 # support of bool dtype indexers 69 if com.is_bool_dtype(result): 70 return result 71 return Index(result) 72 73 return wrapper 74 75 76 class TimedeltaIndex(DatetimeIndexOpsMixin, Int64Index): 77 """ 78 Immutable ndarray of timedelta64 data, represented internally as int64, and 79 which can be boxed to timedelta objects 80 81 Parameters 82 ---------- 83 data : array-like (1-dimensional), optional 84 Optional timedelta-like data to construct index with 85 unit: unit of the arg (D,h,m,s,ms,us,ns) denote the unit, optional 86 which is an integer/float number 87 freq: a frequency for the index, optional 88 copy : bool 89 Make a copy of input ndarray 90 start : starting value, timedelta-like, optional 91 If data is None, start is used as the start point in generating regular 92 timedelta data. 93 periods : int, optional, > 0 94 Number of periods to generate, if generating index. Takes precedence 95 over end argument 96 end : end time, timedelta-like, optional 97 If periods is none, generated index will extend to first conforming 98 time on or just past end argument 99 closed : string or None, default None 100 Make the interval closed with respect to the given frequency to 101 the 'left', 'right', or both sides (None) 102 name : object 103 Name to be stored in the index 104 """ 105 106 _typ = 'timedeltaindex' 107 _join_precedence = 10 108 def _join_i8_wrapper(joinf, **kwargs): 109 return DatetimeIndexOpsMixin._join_i8_wrapper(joinf, dtype='m8[ns]', **kwargs) 110 111 _inner_indexer = _join_i8_wrapper(_algos.inner_join_indexer_int64) 112 _outer_indexer = _join_i8_wrapper(_algos.outer_join_indexer_int64) 113 _left_indexer = _join_i8_wrapper(_algos.left_join_indexer_int64) 114 _left_indexer_unique = _join_i8_wrapper( 115 _algos.left_join_indexer_unique_int64, with_indexers=False) 116 _arrmap = None 117 _datetimelike_ops = ['days','seconds','microseconds','nanoseconds', 118 'freq','components'] 119 120 __eq__ = _td_index_cmp('__eq__') 121 __ne__ = _td_index_cmp('__ne__', nat_result=True) 122 __lt__ = _td_index_cmp('__lt__') 123 __gt__ = _td_index_cmp('__gt__') 124 __le__ = _td_index_cmp('__le__') 125 __ge__ = _td_index_cmp('__ge__') 126 127 _engine_type = _index.TimedeltaEngine 128 129 _comparables = ['name', 'freq'] 130 _attributes = ['name', 'freq'] 131 _is_numeric_dtype = True 132 freq = None 133 134 def __new__(cls, data=None, unit=None, 135 freq=None, start=None, end=None, periods=None, 136 copy=False, name=None, 137 closed=None, verify_integrity=True, **kwargs): 138 139 if isinstance(data, TimedeltaIndex) and freq is None and name is None: 140 if copy: 141 data = data.copy() 142 return data 143 144 freq_infer = False 145 if not isinstance(freq, DateOffset): 146 147 # if a passed freq is None, don't infer automatically 148 if freq != 'infer': 149 freq = to_offset(freq) 150 else: 151 freq_infer = True 152 freq = None 153 154 if periods is not None: 155 if is_float(periods): 156 periods = int(periods) 157 elif not is_integer(periods): 158 raise ValueError('Periods must be a number, got %s' % 159 str(periods)) 160 161 if data is None and freq is None: 162 raise ValueError("Must provide freq argument if no data is " 163 "supplied") 164 165 if data is None: 166 return cls._generate(start, end, periods, name, freq, 167 closed=closed) 
168 169 if unit is not None: 170 data = to_timedelta(data, unit=unit, box=False) 171 172 if not isinstance(data, (np.ndarray, Index, ABCSeries)): 173 if np.isscalar(data): 174 raise ValueError('TimedeltaIndex() must be called with a ' 175 'collection of some kind, %s was passed' 176 % repr(data)) 177 178 # convert if not already 179 if getattr(data,'dtype',None) != _TD_DTYPE: 180 data = to_timedelta(data,unit=unit,box=False) 181 elif copy: 182 data = np.array(data,copy=True) 183 184 # check that we are matching freqs 185 if verify_integrity and len(data) > 0: 186 if freq is not None and not freq_infer: 187 index = cls._simple_new(data, name=name) 188 inferred = index.inferred_freq 189 if inferred != freq.freqstr: 190 on_freq = cls._generate(index[0], None, len(index), name, freq) 191 if not np.array_equal(index.asi8, on_freq.asi8): 192 raise ValueError('Inferred frequency {0} from passed timedeltas does not ' 193 'conform to passed frequency {1}'.format(inferred, freq.freqstr)) 194 index.freq = freq 195 return index 196 197 if freq_infer: 198 index = cls._simple_new(data, name=name) 199 inferred = index.inferred_freq 200 if inferred: 201 index.freq = to_offset(inferred) 202 return index 203 204 return cls._simple_new(data, name=name, freq=freq) 205 206 @classmethod 207 def _generate(cls, start, end, periods, name, offset, closed=None): 208 if com._count_not_none(start, end, periods) != 2: 209 raise ValueError('Must specify two of start, end, or periods') 210 211 if start is not None: 212 start = Timedelta(start) 213 214 if end is not None: 215 end = Timedelta(end) 216 217 left_closed = False 218 right_closed = False 219 220 if start is None and end is None: 221 if closed is not None: 222 raise ValueError("Closed has to be None if not both of start" 223 "and end are defined") 224 225 if closed is None: 226 left_closed = True 227 right_closed = True 228 elif closed == "left": 229 left_closed = True 230 elif closed == "right": 231 right_closed = True 232 else: 233 raise ValueError("Closed has to be either 'left', 'right' or None") 234 235 index = _generate_regular_range(start, end, periods, offset) 236 index = cls._simple_new(index, name=name, freq=offset) 237 238 if not left_closed: 239 index = index[1:] 240 if not right_closed: 241 index = index[:-1] 242 243 return index 244 245 @property 246 def _box_func(self): 247 return lambda x: Timedelta(x, unit='ns') 248 249 @classmethod 250 def _simple_new(cls, values, name=None, freq=None, **kwargs): 251 if not getattr(values,'dtype',None): 252 values = np.array(values,copy=False) 253 if values.dtype == np.object_: 254 values = tslib.array_to_timedelta64(values) 255 if values.dtype != _TD_DTYPE: 256 values = com._ensure_int64(values).view(_TD_DTYPE) 257 258 result = object.__new__(cls) 259 result._data = values 260 result.name = name 261 result.freq = freq 262 result._reset_identity() 263 return result 264 265 _na_value = tslib.NaT 266 """The expected NA value to use with this index.""" 267 268 @property 269 def _formatter_func(self): 270 from pandas.core.format import _get_format_timedelta64 271 return _get_format_timedelta64(self, box=True) 272 273 def __setstate__(self, state): 274 """Necessary for making this object picklable""" 275 if isinstance(state, dict): 276 super(TimedeltaIndex, self).__setstate__(state) 277 else: 278 raise Exception("invalid pickle state") 279 _unpickle_compat = __setstate__ 280 281 def _add_delta(self, delta): 282 if isinstance(delta, (Tick, timedelta, np.timedelta64)): 283 new_values = self._add_delta_td(delta) 284 
name = self.name 285 elif isinstance(delta, TimedeltaIndex): 286 new_values = self._add_delta_tdi(delta) 287 # update name when delta is index 288 name = com._maybe_match_name(self, delta) 289 else: 290 raise ValueError("cannot add the type {0} to a TimedeltaIndex".format(type(delta))) 291 292 result = TimedeltaIndex(new_values, freq='infer', name=name) 293 return result 294 295 def _evaluate_with_timedelta_like(self, other, op, opstr): 296 297 # allow division by a timedelta 298 if opstr in ['__div__','__truediv__']: 299 if _is_convertible_to_td(other): 300 other = Timedelta(other) 301 if isnull(other): 302 raise NotImplementedError("division by pd.NaT not implemented") 303 304 i8 = self.asi8 305 result = i8/float(other.value) 306 result = self._maybe_mask_results(result,convert='float64') 307 return Index(result,name=self.name,copy=False) 308 309 return NotImplemented 310 311 def _add_datelike(self, other): 312 313 # adding a timedeltaindex to a datetimelike 314 from pandas import Timestamp, DatetimeIndex 315 other = Timestamp(other) 316 i8 = self.asi8 317 result = i8 + other.value 318 result = self._maybe_mask_results(result,fill_value=tslib.iNaT) 319 return DatetimeIndex(result,name=self.name,copy=False) 320 321 def _sub_datelike(self, other): 322 raise TypeError("cannot subtract a datelike from a TimedeltaIndex") 323 324 def _format_native_types(self, na_rep=u('NaT'), 325 date_format=None, **kwargs): 326 from pandas.core.format import Timedelta64Formatter 327 return Timedelta64Formatter(values=self, 328 nat_rep=na_rep, 329 justify='all').get_result() 330 331 def _get_field(self, m): 332 333 values = self.asi8 334 hasnans = self.hasnans 335 if hasnans: 336 result = np.empty(len(self), dtype='float64') 337 mask = values == tslib.iNaT 338 imask = ~mask 339 result.flat[imask] = np.array([ getattr(Timedelta(val),m) for val in values[imask] ]) 340 result[mask] = np.nan 341 else: 342 result = np.array([ getattr(Timedelta(val),m) for val in values ],dtype='int64') 343 return result 344 345 @property 346 def days(self): 347 """ Number of days for each element. """ 348 return self._get_field('days') 349 350 @property 351 def seconds(self): 352 """ Number of seconds (>= 0 and less than 1 day) for each element. """ 353 return self._get_field('seconds') 354 355 @property 356 def microseconds(self): 357 """ Number of microseconds (>= 0 and less than 1 second) for each element. """ 358 return self._get_field('microseconds') 359 360 @property 361 def nanoseconds(self): 362 """ Number of nanoseconds (>= 0 and less than 1 microsecond) for each element. """ 363 return self._get_field('nanoseconds') 364 365 @property 366 def components(self): 367 """ 368 Return a dataframe of the components (days, hours, minutes, 369 seconds, milliseconds, microseconds, nanoseconds) of the Timedeltas. 370 371 Returns 372 ------- 373 a DataFrame 374 """ 375 from pandas import DataFrame 376 377 columns = ['days','hours','minutes','seconds','milliseconds','microseconds','nanoseconds'] 378 hasnans = self.hasnans 379 if hasnans: 380 def f(x): 381 if isnull(x): 382 return [np.nan]*len(columns) 383 return x.components 384 else: 385 def f(x): 386 return x.components 387 388 result = DataFrame([ f(x) for x in self ]) 389 result.columns = columns 390 if not hasnans: 391 result = result.astype('int64') 392 return result 393 394 def total_seconds(self): 395 """ Total duration of each element expressed in seconds. 
""" 396 return self._maybe_mask_results(1e-9*self.asi8) 397 398 def to_pytimedelta(self): 399 """ 400 Return TimedeltaIndex as object ndarray of datetime.timedelta objects 401 402 Returns 403 ------- 404 datetimes : ndarray 405 """ 406 return tslib.ints_to_pytimedelta(self.asi8) 407 408 def astype(self, dtype): 409 dtype = np.dtype(dtype) 410 411 if dtype == np.object_: 412 return self.asobject 413 elif dtype == _INT64_DTYPE: 414 return self.asi8.copy() 415 elif dtype == _TD_DTYPE: 416 return self 417 elif dtype.kind == 'm': 418 419 # return an index (essentially this is division) 420 result = self.values.astype(dtype) 421 if self.hasnans: 422 return Index(self._maybe_mask_results(result,convert='float64'),name=self.name) 423 424 return Index(result.astype('i8'),name=self.name) 425 426 else: # pragma: no cover 427 raise ValueError('Cannot cast TimedeltaIndex to dtype %s' % dtype) 428 429 def union(self, other): 430 """ 431 Specialized union for TimedeltaIndex objects. If combine 432 overlapping ranges with the same DateOffset, will be much 433 faster than Index.union 434 435 Parameters 436 ---------- 437 other : TimedeltaIndex or array-like 438 439 Returns 440 ------- 441 y : Index or TimedeltaIndex 442 """ 443 self._assert_can_do_setop(other) 444 if not isinstance(other, TimedeltaIndex): 445 try: 446 other = TimedeltaIndex(other) 447 except (TypeError, ValueError): 448 pass 449 this, other = self, other 450 451 if this._can_fast_union(other): 452 return this._fast_union(other) 453 else: 454 result = Index.union(this, other) 455 if isinstance(result, TimedeltaIndex): 456 if result.freq is None: 457 result.freq = to_offset(result.inferred_freq) 458 return result 459 460 def append(self, other): 461 """ 462 Append a collection of Index options together 463 464 Parameters 465 ---------- 466 other : Index or list/tuple of indices 467 468 Returns 469 ------- 470 appended : Index 471 """ 472 name = self.name 473 to_concat = [self] 474 475 if isinstance(other, (list, tuple)): 476 to_concat = to_concat + list(other) 477 else: 478 to_concat.append(other) 479 480 for obj in to_concat: 481 if isinstance(obj, Index) and obj.name != name: 482 name = None 483 break 484 485 to_concat = self._ensure_compat_concat(to_concat) 486 return Index(com._concat_compat(to_concat), name=name) 487 488 def join(self, other, how='left', level=None, return_indexers=False): 489 """ 490 See Index.join 491 """ 492 if _is_convertible_to_index(other): 493 try: 494 other = TimedeltaIndex(other) 495 except (TypeError, ValueError): 496 pass 497 498 return Index.join(self, other, how=how, level=level, 499 return_indexers=return_indexers) 500 501 def _wrap_joined_index(self, joined, other): 502 name = self.name if self.name == other.name else None 503 if (isinstance(other, TimedeltaIndex) and self.freq == other.freq 504 and self._can_fast_union(other)): 505 joined = self._shallow_copy(joined) 506 joined.name = name 507 return joined 508 else: 509 return self._simple_new(joined, name) 510 511 def _can_fast_union(self, other): 512 if not isinstance(other, TimedeltaIndex): 513 return False 514 515 freq = self.freq 516 517 if freq is None or freq != other.freq: 518 return False 519 520 if not self.is_monotonic or not other.is_monotonic: 521 return False 522 523 if len(self) == 0 or len(other) == 0: 524 return True 525 526 # to make our life easier, "sort" the two ranges 527 if self[0] <= other[0]: 528 left, right = self, other 529 else: 530 left, right = other, self 531 532 right_start = right[0] 533 left_end = left[-1] 534 535 # 
Only need to "adjoin", not overlap 536 return (right_start == left_end + freq) or right_start in left 537 538 def _fast_union(self, other): 539 if len(other) == 0: 540 return self.view(type(self)) 541 542 if len(self) == 0: 543 return other.view(type(self)) 544 545 # to make our life easier, "sort" the two ranges 546 if self[0] <= other[0]: 547 left, right = self, other 548 else: 549 left, right = other, self 550 551 left_start, left_end = left[0], left[-1] 552 right_end = right[-1] 553 554 # concatenate 555 if left_end < right_end: 556 loc = right.searchsorted(left_end, side='right') 557 right_chunk = right.values[loc:] 558 dates = com._concat_compat((left.values, right_chunk)) 559 return self._shallow_copy(dates) 560 else: 561 return left 562 563 def __array_finalize__(self, obj): 564 if self.ndim == 0: # pragma: no cover 565 return self.item() 566 567 self.name = getattr(obj, 'name', None) 568 self.freq = getattr(obj, 'freq', None) 569 self._reset_identity() 570 571 def _wrap_union_result(self, other, result): 572 name = self.name if self.name == other.name else None 573 return self._simple_new(result, name=name, freq=None) 574 575 def intersection(self, other): 576 """ 577 Specialized intersection for TimedeltaIndex objects. May be much faster 578 than Index.intersection 579 580 Parameters 581 ---------- 582 other : TimedeltaIndex or array-like 583 584 Returns 585 ------- 586 y : Index or TimedeltaIndex 587 """ 588 self._assert_can_do_setop(other) 589 if not isinstance(other, TimedeltaIndex): 590 try: 591 other = TimedeltaIndex(other) 592 except (TypeError, ValueError): 593 pass 594 result = Index.intersection(self, other) 595 return result 596 597 if len(self) == 0: 598 return self 599 if len(other) == 0: 600 return other 601 # to make our life easier, "sort" the two ranges 602 if self[0] <= other[0]: 603 left, right = self, other 604 else: 605 left, right = other, self 606 607 end = min(left[-1], right[-1]) 608 start = right[0] 609 610 if end < start: 611 return type(self)(data=[]) 612 else: 613 lslice = slice(*left.slice_locs(start, end)) 614 left_chunk = left.values[lslice] 615 return self._shallow_copy(left_chunk) 616 617 def _possibly_promote(self, other): 618 if other.inferred_type == 'timedelta': 619 other = TimedeltaIndex(other) 620 return self, other 621 622 def get_value(self, series, key): 623 """ 624 Fast lookup of value from 1-dimensional ndarray. 
Only use this if you 625 know what you're doing 626 """ 627 628 if _is_convertible_to_td(key): 629 key = Timedelta(key) 630 return self.get_value_maybe_box(series, key) 631 632 try: 633 return _maybe_box(self, Index.get_value(self, series, key), series, key) 634 except KeyError: 635 try: 636 loc = self._get_string_slice(key) 637 return series[loc] 638 except (TypeError, ValueError, KeyError): 639 pass 640 641 try: 642 return self.get_value_maybe_box(series, key) 643 except (TypeError, ValueError, KeyError): 644 raise KeyError(key) 645 646 def get_value_maybe_box(self, series, key): 647 if not isinstance(key, Timedelta): 648 key = Timedelta(key) 649 values = self._engine.get_value(_values_from_object(series), key) 650 return _maybe_box(self, values, series, key) 651 652 def get_loc(self, key, method=None, tolerance=None): 653 """ 654 Get integer location for requested label 655 656 Returns 657 ------- 658 loc : int 659 """ 660 if tolerance is not None: 661 # try converting tolerance now, so errors don't get swallowed by 662 # the try/except clauses below 663 tolerance = self._convert_tolerance(tolerance) 664 665 if _is_convertible_to_td(key): 666 key = Timedelta(key) 667 return Index.get_loc(self, key, method, tolerance) 668 669 try: 670 return Index.get_loc(self, key, method, tolerance) 671 except (KeyError, ValueError, TypeError): 672 try: 673 return self._get_string_slice(key) 674 except (TypeError, KeyError, ValueError): 675 pass 676 677 try: 678 stamp = Timedelta(key) 679 return Index.get_loc(self, stamp, method, tolerance) 680 except (KeyError, ValueError): 681 raise KeyError(key) 682 683 def _maybe_cast_slice_bound(self, label, side, kind): 684 """ 685 If label is a string, cast it to timedelta according to resolution. 686 687 688 Parameters 689 ---------- 690 label : object 691 side : {'left', 'right'} 692 kind : string / None 693 694 Returns 695 ------- 696 label : object 697 698 """ 699 if isinstance(label, compat.string_types): 700 parsed = _coerce_scalar_to_timedelta_type(label, box=True) 701 lbound = parsed.round(parsed.resolution) 702 if side == 'left': 703 return lbound 704 else: 705 return (lbound + _resolution_map[parsed.resolution]() - 706 Timedelta(1, 'ns')) 707 elif is_integer(label) or is_float(label): 708 self._invalid_indexer('slice',label) 709 710 return label 711 712 def _get_string_slice(self, key, use_lhs=True, use_rhs=True): 713 freq = getattr(self, 'freqstr', 714 getattr(self, 'inferred_freq', None)) 715 if is_integer(key) or is_float(key): 716 self._invalid_indexer('slice',key) 717 loc = self._partial_td_slice(key, freq, use_lhs=use_lhs, 718 use_rhs=use_rhs) 719 return loc 720 721 def _partial_td_slice(self, key, freq, use_lhs=True, use_rhs=True): 722 723 # given a key, try to figure out a location for a partial slice 724 if not isinstance(key, compat.string_types): 725 return key 726 727 parsed = _coerce_scalar_to_timedelta_type(key, box=True) 728 729 is_monotonic = self.is_monotonic 730 731 # figure out the resolution of the passed td 732 # and round to it 733 reso = parsed.resolution 734 t1 = parsed.round(reso) 735 t2 = t1 + _resolution_map[reso]() - Timedelta(1,'ns') 736 737 stamps = self.asi8 738 739 if is_monotonic: 740 741 # we are out of range 742 if len(stamps) and ( 743 (use_lhs and t1.value < stamps[0] and t2.value < stamps[0]) or ( 744 (use_rhs and t1.value > stamps[-1] and t2.value > stamps[-1]))): 745 raise KeyError 746 747 # a monotonic (sorted) series can be sliced 748 left = stamps.searchsorted(t1.value, side='left') if use_lhs else None 749 
right = stamps.searchsorted(t2.value, side='right') if use_rhs else None 750 751 return slice(left, right) 752 753 lhs_mask = (stamps >= t1.value) if use_lhs else True 754 rhs_mask = (stamps <= t2.value) if use_rhs else True 755 756 # try to find a the dates 757 return (lhs_mask & rhs_mask).nonzero()[0] 758 759 def searchsorted(self, key, side='left'): 760 if isinstance(key, (np.ndarray, Index)): 761 key = np.array(key, dtype=_TD_DTYPE, copy=False) 762 else: 763 key = _to_m8(key) 764 765 return self.values.searchsorted(key, side=side) 766 767 def is_type_compatible(self, typ): 768 return typ == self.inferred_type or typ == 'timedelta' 769 770 @property 771 def inferred_type(self): 772 return 'timedelta64' 773 774 @property 775 def dtype(self): 776 return _TD_DTYPE 777 778 @property 779 def is_all_dates(self): 780 return True 781 782 def equals(self, other): 783 """ 784 Determines if two Index objects contain the same elements. 785 """ 786 if self.is_(other): 787 return True 788 789 if (not hasattr(other, 'inferred_type') or 790 other.inferred_type != 'timedelta64'): 791 try: 792 other = TimedeltaIndex(other) 793 except: 794 return False 795 796 return np.array_equal(self.asi8, other.asi8) 797 798 def insert(self, loc, item): 799 """ 800 Make new Index inserting new item at location 801 802 Parameters 803 ---------- 804 loc : int 805 item : object 806 if not either a Python datetime or a numpy integer-like, returned 807 Index dtype will be object rather than datetime. 808 809 Returns 810 ------- 811 new_index : Index 812 """ 813 814 # try to convert if possible 815 if _is_convertible_to_td(item): 816 try: 817 item = Timedelta(item) 818 except: 819 pass 820 821 freq = None 822 if isinstance(item, (Timedelta, tslib.NaTType)): 823 824 # check freq can be preserved on edge cases 825 if self.freq is not None: 826 if (loc == 0 or loc == -len(self)) and item + self.freq == self[0]: 827 freq = self.freq 828 elif (loc == len(self)) and item - self.freq == self[-1]: 829 freq = self.freq 830 item = _to_m8(item) 831 832 try: 833 new_tds = np.concatenate((self[:loc].asi8, [item.view(np.int64)], 834 self[loc:].asi8)) 835 return TimedeltaIndex(new_tds, name=self.name, freq=freq) 836 837 except (AttributeError, TypeError): 838 839 # fall back to object index 840 if isinstance(item,compat.string_types): 841 return self.asobject.insert(loc, item) 842 raise TypeError("cannot insert TimedeltaIndex with incompatible label") 843 844 def delete(self, loc): 845 """ 846 Make a new DatetimeIndex with passed location(s) deleted. 847 848 Parameters 849 ---------- 850 loc: int, slice or array of ints 851 Indicate which sub-arrays to remove. 
852 853 Returns 854 ------- 855 new_index : TimedeltaIndex 856 """ 857 new_tds = np.delete(self.asi8, loc) 858 859 freq = 'infer' 860 if is_integer(loc): 861 if loc in (0, -len(self), -1, len(self) - 1): 862 freq = self.freq 863 else: 864 if com.is_list_like(loc): 865 loc = lib.maybe_indices_to_slice(com._ensure_int64(np.array(loc)), len(self)) 866 if isinstance(loc, slice) and loc.step in (1, None): 867 if (loc.start in (0, None) or loc.stop in (len(self), None)): 868 freq = self.freq 869 870 return TimedeltaIndex(new_tds, name=self.name, freq=freq) 871 872 873 TimedeltaIndex._add_numeric_methods() 874 TimedeltaIndex._add_logical_methods_disabled() 875 TimedeltaIndex._add_datetimelike_methods() 876 877 878 def _is_convertible_to_index(other): 879 """ return a boolean whether I can attempt conversion to a TimedeltaIndex """ 880 if isinstance(other, TimedeltaIndex): 881 return True 882 elif (len(other) > 0 and 883 other.inferred_type not in ('floating', 'mixed-integer','integer', 884 'mixed-integer-float', 'mixed')): 885 return True 886 return False 887 888 889 def _is_convertible_to_td(key): 890 return isinstance(key, (DateOffset, timedelta, Timedelta, np.timedelta64, compat.string_types)) 891 892 def _to_m8(key): 893 ''' 894 Timedelta-like => dt64 895 ''' 896 if not isinstance(key, Timedelta): 897 # this also converts strings 898 key = Timedelta(key) 899 900 # return an type that can be compared 901 return np.int64(key.value).view(_TD_DTYPE) 902 903 def _generate_regular_range(start, end, periods, offset): 904 stride = offset.nanos 905 if periods is None: 906 b = Timedelta(start).value 907 e = Timedelta(end).value 908 e += stride - e % stride 909 elif start is not None: 910 b = Timedelta(start).value 911 e = b + periods * stride 912 elif end is not None: 913 e = Timedelta(end).value + stride 914 b = e - periods * stride 915 else: 916 raise ValueError("at least 'start' or 'end' should be specified " 917 "if a 'period' is given.") 918 919 data = np.arange(b, e, stride, dtype=np.int64) 920 data = TimedeltaIndex._simple_new(data, None) 921 922 return data 923 924 925 def timedelta_range(start=None, end=None, periods=None, freq='D', 926 name=None, closed=None): 927 """ 928 Return a fixed frequency timedelta index, with day as the default 929 frequency 930 931 Parameters 932 ---------- 933 start : string or timedelta-like, default None 934 Left bound for generating dates 935 end : string or datetime-like, default None 936 Right bound for generating dates 937 periods : integer or None, default None 938 If None, must specify start and end 939 freq : string or DateOffset, default 'D' (calendar daily) 940 Frequency strings can have multiples, e.g. '5H' 941 name : str, default None 942 Name of the resulting index 943 closed : string or None, default None 944 Make the interval closed with respect to the given frequency to 945 the 'left', 'right', or both sides (None) 946 947 Notes 948 ----- 949 2 of start, end, or periods must be specified 950 951 Returns 952 ------- 953 rng : TimedeltaIndex 954 """ 955 return TimedeltaIndex(start=start, end=end, periods=periods, 956 freq=freq, name=name, 957 closed=closed) 958 [end of pandas/tseries/tdi.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
f5b85cab0420f77d200e4def27a6215950abce60
pd.to_pickle() and pd.read_pickle() bug?
I'm on windows 7, anaconda, python 2.7.9, 32bit
pandas 0.16.2

try this:

``` python
import pandas as pd
df4 = pd.DataFrame(index=pd.date_range('1750-1-1', '2050-1-1', freq='7D'))
pd.to_pickle(df4, '7d.test')
pd.read_pickle('7d.test')

In [84]: P.read_pickle('7d.test')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-84-0108eecfbea7> in <module>()
----> 1 P.read_pickle('7d.test')

C:\Users\LV\Miniconda\lib\site-packages\pandas\io\pickle.pyc in read_pickle(path)
     58
     59     try:
---> 60         return try_read(path)
     61     except:
     62         if PY3:

C:\Users\LV\Miniconda\lib\site-packages\pandas\io\pickle.pyc in try_read(path, encoding)
     55         except:
     56             with open(path, 'rb') as fh:
---> 57                 return pc.load(fh, encoding=encoding, compat=True)
     58
     59     try:

C:\Users\LV\Miniconda\lib\site-packages\pandas\compat\pickle_compat.pyc in load(fh, encoding, compat, is_verbose)
    114         up.is_verbose = is_verbose
    115
--> 116         return up.load()
    117     except:
    118         raise

C:\Users\LV\Miniconda\lib\pickle.pyc in load(self)
    856             while 1:
    857                 key = read(1)
--> 858                 dispatch[key](self)
    859         except _Stop, stopinst:
    860             return stopinst.value

C:\Users\LV\Miniconda\lib\site-packages\pandas\compat\pickle_compat.pyc in load_reduce(self)
     18
     19     try:
---> 20         stack[-1] = func(*args)
     21         return
     22     except Exception as e:

C:\Users\LV\Miniconda\lib\site-packages\pandas\tseries\index.pyc in _new_DatetimeIndex(cls, d)
    113     # data are already in UTC
    114     tz = d.pop('tz',None)
--> 115     result = cls.__new__(cls, **d)
    116     result.tz = tz
    117     return result

C:\Users\LV\Miniconda\lib\site-packages\pandas\util\decorators.pyc in wrapper(*args, **kwargs)
     86             else:
     87                 kwargs[new_arg_name] = new_arg_value
---> 88             return func(*args, **kwargs)
     89         return wrapper
     90     return _deprecate_kwarg

C:\Users\LV\Miniconda\lib\site-packages\pandas\tseries\index.pyc in __new__(cls, data, freq, start, end, periods, copy, name, tz, verify_integrity, normalize, closed, ambiguous, **kwargs)
    334             if not np.array_equal(subarr.asi8, on_freq.asi8):
    335                 raise ValueError('Inferred frequency {0} from passed dates does not'
--> 336                                  'conform to passed frequency {1}'.format(inferred, freq.freqstr))
    337
    338         if freq_infer:

ValueError: Inferred frequency W-THU from passed dates does notconform to passed frequency 7D
```
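The mismatch that trips the loader is easier to see by comparing the frequency the index was built with against the one pandas re-infers from its values. Below is a minimal sketch using only standard pandas calls; the exact weekly anchor printed depends on the start date ('W-THU' in the traceback above).

```python
import pandas as pd

# The index is constructed with an explicit 7-day frequency...
idx = pd.date_range('1750-1-1', '2050-1-1', freq='7D')
print(idx.freqstr)         # '7D'

# ...but every element falls on the same weekday, so frequency inference
# reports a weekly anchor rather than '7D'.
print(pd.infer_freq(idx))  # e.g. 'W-THU', as in the traceback above
```

When the pickle is loaded, `_new_DatetimeIndex` rebuilds the index through `DatetimeIndex.__new__` with `verify_integrity` left at its default of `True`, which re-checks the stored `'7D'` frequency against the data and raises the `ValueError` shown above; the patch below simply skips that re-check.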
2015-09-05T18:27:44Z
<patch> diff --git a/doc/source/whatsnew/v0.17.0.txt b/doc/source/whatsnew/v0.17.0.txt --- a/doc/source/whatsnew/v0.17.0.txt +++ b/doc/source/whatsnew/v0.17.0.txt @@ -894,7 +894,7 @@ Bug Fixes - Bug in clearing the cache on ``DataFrame.pop`` and a subsequent inplace op (:issue:`10912`) - Bug in indexing with a mixed-integer ``Index`` causing an ``ImportError`` (:issue:`10610`) - Bug in ``Series.count`` when index has nulls (:issue:`10946`) - +- Bug in pickling of a non-regular freq ``DatetimeIndex`` (:issue:`11002`) - Bug causing ``DataFrame.where`` to not respect the ``axis`` parameter when the frame has a symmetric shape. (:issue:`9736`) - Bug in ``Table.select_column`` where name is not preserved (:issue:`10392`) diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -120,7 +120,8 @@ def _new_DatetimeIndex(cls, d): # data are already in UTC # so need to localize tz = d.pop('tz',None) - result = cls.__new__(cls, **d) + + result = cls.__new__(cls, verify_integrity=False, **d) if tz is not None: result = result.tz_localize('UTC').tz_convert(tz) return result </patch>
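As a sanity check, here is a small round-trip sketch of the behaviour the patch restores; the `'7d.test'` file name is just the illustrative one from the report, and the assumption is that a build including this patch preserves the stored frequency as-is.

```python
import pandas as pd

# Round-trip a frame whose index was built with a non-anchored 7-day freq.
df = pd.DataFrame(index=pd.date_range('1750-1-1', '2050-1-1', freq='7D'))
pd.to_pickle(df, '7d.test')
restored = pd.read_pickle('7d.test')   # raised ValueError before this patch

# With verify_integrity=False during reconstruction, the stored values and
# frequency are taken as-is, so the round trip preserves the index exactly.
assert restored.index.equals(df.index)
print(restored.index.freqstr)          # '7D'
```

The one-line fix is reasonable because anything coming out of a pickle was already validated when the index was first constructed, so re-inferring and re-checking the frequency on reconstruction only risks false positives like the `7D` vs `W-THU` report above.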
[]
[]
pandas-dev__pandas-21216
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: Index with integer data and datetime64[ns, tz] dtype does not localize correctly ``` In [1]: pd.__version__ Out[1]: '0.23.0rc2+16.gccf4b96.dirty' In [2]: val = [pd.Timestamp('2018-01-01', tz='US/Pacific').value] In [3]: pd.Index(val, dtype='datetime64[ns, US/Pacific]') Out[3]: DatetimeIndex(['2018-01-01 08:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None) ``` The localization appears to localize directly to the timezone instead of localizing first to UTC and therefore does not roundtrip correctly from the timestamp value. #20956 can be simplified once this is fixed. Expected: ``` Out[3]: DatetimeIndex(['2018-01-01 00:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None) ``` </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 30 </a> 31 </tr> 32 <tr> 33 <td>License</td> 34 <td> 35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 37 </a> 38 </td> 39 </tr> 40 <tr> 41 <td>Build Status</td> 42 <td> 43 <a href="https://travis-ci.org/pandas-dev/pandas"> 44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 45 </a> 46 </td> 47 </tr> 48 <tr> 49 <td></td> 50 <td> 51 <a href="https://circleci.com/gh/pandas-dev/pandas"> 52 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" /> 53 </a> 54 </td> 55 </tr> 56 <tr> 57 <td></td> 58 <td> 59 <a href="https://ci.appveyor.com/project/pandas-dev/pandas"> 60 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" /> 61 </a> 62 </td> 63 </tr> 64 <tr> 65 <td>Coverage</td> 66  <td> 67 <a href="https://codecov.io/gh/pandas-dev/pandas"> 68 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 69 </a> 70 </td> 71 </tr> 72 <tr> 73 <td>Downloads</td> 74 <td> 75 <a href="https://pandas.pydata.org"> 76 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 77 </a> 78 </td> 79 </tr> 80 <tr> 81 <td>Gitter</td> 82 <td> 83 <a href="https://gitter.im/pydata/pandas"> 84 <img src="https://badges.gitter.im/Join%20Chat.svg" 85 </a> 86 </td> 87 </tr> 88 </table> 89 90 91 92 ## What is it 93 94 **pandas** is a Python package providing fast, flexible, and expressive data 95 structures designed to make working with "relational" or "labeled" data both 96 easy and intuitive. It aims to be the fundamental high-level building block for 97 doing practical, **real world** data analysis in Python. 
Additionally, it has 98 the broader goal of becoming **the most powerful and flexible open source data 99 analysis / manipulation tool available in any language**. It is already well on 100 its way toward this goal. 101 102 ## Main Features 103 Here are just a few of the things that pandas does well: 104 105 - Easy handling of [**missing data**][missing-data] (represented as 106 `NaN`) in floating point as well as non-floating point data 107 - Size mutability: columns can be [**inserted and 108 deleted**][insertion-deletion] from DataFrame and higher dimensional 109 objects 110 - Automatic and explicit [**data alignment**][alignment]: objects can 111 be explicitly aligned to a set of labels, or the user can simply 112 ignore the labels and let `Series`, `DataFrame`, etc. automatically 113 align the data for you in computations 114 - Powerful, flexible [**group by**][groupby] functionality to perform 115 split-apply-combine operations on data sets, for both aggregating 116 and transforming data 117 - Make it [**easy to convert**][conversion] ragged, 118 differently-indexed data in other Python and NumPy data structures 119 into DataFrame objects 120 - Intelligent label-based [**slicing**][slicing], [**fancy 121 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 122 large data sets 123 - Intuitive [**merging**][merging] and [**joining**][joining] data 124 sets 125 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 126 data sets 127 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 128 labels per tick) 129 - Robust IO tools for loading data from [**flat files**][flat-files] 130 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 131 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 132 - [**Time series**][timeseries]-specific functionality: date range 133 generation and frequency conversion, moving window statistics, 134 moving window linear regressions, date shifting and lagging, etc. 
135 136 137 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 138 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 139 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 140 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 141 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 142 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 143 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 144 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 145 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 146 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 147 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 148 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 149 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 150 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 151 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 152 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 153 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 154 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 155 156 ## Where to get it 157 The source code is currently hosted on GitHub at: 158 https://github.com/pandas-dev/pandas 159 160 Binary installers for the latest released version are available at the [Python 161 package index](https://pypi.org/project/pandas) and on conda. 162 163 ```sh 164 # conda 165 conda install pandas 166 ``` 167 168 ```sh 169 # or PyPI 170 pip install pandas 171 ``` 172 173 ## Dependencies 174 - [NumPy](https://www.numpy.org): 1.9.0 or higher 175 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 176 - [pytz](https://pythonhosted.org/pytz): 2011k or higher 177 178 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 179 for recommended and optional dependencies. 180 181 ## Installation from sources 182 To install pandas from source you need Cython in addition to the normal 183 dependencies above. Cython can be installed from pypi: 184 185 ```sh 186 pip install cython 187 ``` 188 189 In the `pandas` directory (same one where you found this file after 190 cloning the git repo), execute: 191 192 ```sh 193 python setup.py install 194 ``` 195 196 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 197 198 ```sh 199 python setup.py develop 200 ``` 201 202 Alternatively, you can use `pip` if you want all the dependencies pulled 203 in automatically (the `-e` option is for installing it in [development 204 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 205 206 ```sh 207 pip install -e . 208 ``` 209 210 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
211 212 ## License 213 [BSD 3](LICENSE) 214 215 ## Documentation 216 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 217 218 ## Background 219 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 220 has been under active development since then. 221 222 ## Getting Help 223 224 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 225 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 226 227 ## Discussion and Development 228 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 229 230 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 231 232 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 233 234 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)** 235 236 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 237 238 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 239 240 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 241 242 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 
243 [end of README.md] [start of pandas/core/tools/datetimes.py] 1 from datetime import datetime, timedelta, time 2 from collections import MutableMapping 3 4 import numpy as np 5 6 from pandas._libs import tslib 7 from pandas._libs.tslibs.strptime import array_strptime 8 from pandas._libs.tslibs import parsing, conversion 9 from pandas._libs.tslibs.parsing import ( # noqa 10 parse_time_string, 11 DateParseError, 12 _format_is_iso, 13 _guess_datetime_format) 14 15 from pandas.core.dtypes.common import ( 16 _ensure_object, 17 is_datetime64_ns_dtype, 18 is_datetime64_dtype, 19 is_datetime64tz_dtype, 20 is_integer_dtype, 21 is_integer, 22 is_float, 23 is_list_like, 24 is_scalar, 25 is_numeric_dtype) 26 from pandas.core.dtypes.generic import ( 27 ABCIndexClass, ABCSeries, 28 ABCDataFrame) 29 from pandas.core.dtypes.missing import notna 30 from pandas.core import algorithms 31 from pandas.compat import zip 32 33 34 def _guess_datetime_format_for_array(arr, **kwargs): 35 # Try to guess the format based on the first non-NaN element 36 non_nan_elements = notna(arr).nonzero()[0] 37 if len(non_nan_elements): 38 return _guess_datetime_format(arr[non_nan_elements[0]], **kwargs) 39 40 41 def _maybe_cache(arg, format, cache, tz, convert_listlike): 42 """ 43 Create a cache of unique dates from an array of dates 44 45 Parameters 46 ---------- 47 arg : integer, float, string, datetime, list, tuple, 1-d array, Series 48 format : string 49 Strftime format to parse time 50 cache : boolean 51 True attempts to create a cache of converted values 52 tz : string 53 Timezone of the dates 54 convert_listlike : function 55 Conversion function to apply on dates 56 57 Returns 58 ------- 59 cache_array : Series 60 Cache of converted, unique dates. Can be empty 61 """ 62 from pandas import Series 63 cache_array = Series() 64 if cache: 65 # Perform a quicker unique check 66 from pandas import Index 67 if not Index(arg).is_unique: 68 unique_dates = algorithms.unique(arg) 69 cache_dates = convert_listlike(unique_dates, True, format, tz=tz) 70 cache_array = Series(cache_dates, index=unique_dates) 71 return cache_array 72 73 74 def _convert_and_box_cache(arg, cache_array, box, errors, name=None): 75 """ 76 Convert array of dates with a cache and box the result 77 78 Parameters 79 ---------- 80 arg : integer, float, string, datetime, list, tuple, 1-d array, Series 81 cache_array : Series 82 Cache of converted, unique dates 83 box : boolean 84 True boxes result as an Index-like, False returns an ndarray 85 errors : string 86 'ignore' plus box=True will convert result to Index 87 name : string, default None 88 Name for a DatetimeIndex 89 90 Returns 91 ------- 92 result : datetime of converted dates 93 Returns: 94 95 - Index-like if box=True 96 - ndarray if box=False 97 """ 98 from pandas import Series, DatetimeIndex, Index 99 result = Series(arg).map(cache_array) 100 if box: 101 if errors == 'ignore': 102 return Index(result) 103 else: 104 return DatetimeIndex(result, name=name) 105 return result.values 106 107 108 def _return_parsed_timezone_results(result, timezones, box, tz): 109 """ 110 Return results from array_strptime if a %z or %Z directive was passed. 
111 112 Parameters 113 ---------- 114 result : ndarray 115 int64 date representations of the dates 116 timezones : ndarray 117 pytz timezone objects 118 box : boolean 119 True boxes result as an Index-like, False returns an ndarray 120 tz : object 121 None or pytz timezone object 122 Returns 123 ------- 124 tz_result : ndarray of parsed dates with timezone 125 Returns: 126 127 - Index-like if box=True 128 - ndarray of Timestamps if box=False 129 130 """ 131 if tz is not None: 132 raise ValueError("Cannot pass a tz argument when " 133 "parsing strings with timezone " 134 "information.") 135 tz_results = np.array([tslib.Timestamp(res).tz_localize(zone) for res, zone 136 in zip(result, timezones)]) 137 if box: 138 from pandas import Index 139 return Index(tz_results) 140 return tz_results 141 142 143 def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, 144 utc=None, box=True, format=None, exact=True, 145 unit=None, infer_datetime_format=False, origin='unix', 146 cache=False): 147 """ 148 Convert argument to datetime. 149 150 Parameters 151 ---------- 152 arg : integer, float, string, datetime, list, tuple, 1-d array, Series 153 154 .. versionadded:: 0.18.1 155 156 or DataFrame/dict-like 157 158 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 159 160 - If 'raise', then invalid parsing will raise an exception 161 - If 'coerce', then invalid parsing will be set as NaT 162 - If 'ignore', then invalid parsing will return the input 163 dayfirst : boolean, default False 164 Specify a date parse order if `arg` is str or its list-likes. 165 If True, parses dates with the day first, eg 10/11/12 is parsed as 166 2012-11-10. 167 Warning: dayfirst=True is not strict, but will prefer to parse 168 with day first (this is a known bug, based on dateutil behavior). 169 yearfirst : boolean, default False 170 Specify a date parse order if `arg` is str or its list-likes. 171 172 - If True parses dates with the year first, eg 10/11/12 is parsed as 173 2010-11-12. 174 - If both dayfirst and yearfirst are True, yearfirst is preceded (same 175 as dateutil). 176 177 Warning: yearfirst=True is not strict, but will prefer to parse 178 with year first (this is a known bug, based on dateutil behavior). 179 180 .. versionadded:: 0.16.1 181 182 utc : boolean, default None 183 Return UTC DatetimeIndex if True (converting any tz-aware 184 datetime.datetime objects as well). 185 box : boolean, default True 186 187 - If True returns a DatetimeIndex 188 - If False returns ndarray of values. 189 format : string, default None 190 strftime to parse time, eg "%d/%m/%Y", note that "%f" will parse 191 all the way up to nanoseconds. 192 exact : boolean, True by default 193 194 - If True, require an exact format match. 195 - If False, allow the format to match anywhere in the target string. 196 197 unit : string, default 'ns' 198 unit of the arg (D,s,ms,us,ns) denote the unit, which is an 199 integer or float number. This will be based off the origin. 200 Example, with unit='ms' and origin='unix' (the default), this 201 would calculate the number of milliseconds to the unix epoch start. 202 infer_datetime_format : boolean, default False 203 If True and no `format` is given, attempt to infer the format of the 204 datetime strings, and if it can be inferred, switch to a faster 205 method of parsing them. In some cases this can increase the parsing 206 speed by ~5-10x. 207 origin : scalar, default is 'unix' 208 Define the reference date. 
The numeric values would be parsed as number 209 of units (defined by `unit`) since this reference date. 210 211 - If 'unix' (or POSIX) time; origin is set to 1970-01-01. 212 - If 'julian', unit must be 'D', and origin is set to beginning of 213 Julian Calendar. Julian day number 0 is assigned to the day starting 214 at noon on January 1, 4713 BC. 215 - If Timestamp convertible, origin is set to Timestamp identified by 216 origin. 217 218 .. versionadded:: 0.20.0 219 cache : boolean, default False 220 If True, use a cache of unique, converted dates to apply the datetime 221 conversion. May produce significant speed-up when parsing duplicate 222 date strings, especially ones with timezone offsets. 223 224 .. versionadded:: 0.23.0 225 226 Returns 227 ------- 228 ret : datetime if parsing succeeded. 229 Return type depends on input: 230 231 - list-like: DatetimeIndex 232 - Series: Series of datetime64 dtype 233 - scalar: Timestamp 234 235 In case when it is not possible to return designated types (e.g. when 236 any element of input is before Timestamp.min or after Timestamp.max) 237 return will have datetime.datetime type (or corresponding 238 array/Series). 239 240 Examples 241 -------- 242 Assembling a datetime from multiple columns of a DataFrame. The keys can be 243 common abbreviations like ['year', 'month', 'day', 'minute', 'second', 244 'ms', 'us', 'ns']) or plurals of the same 245 246 >>> df = pd.DataFrame({'year': [2015, 2016], 247 'month': [2, 3], 248 'day': [4, 5]}) 249 >>> pd.to_datetime(df) 250 0 2015-02-04 251 1 2016-03-05 252 dtype: datetime64[ns] 253 254 If a date does not meet the `timestamp limitations 255 <http://pandas.pydata.org/pandas-docs/stable/timeseries.html 256 #timeseries-timestamp-limits>`_, passing errors='ignore' 257 will return the original input instead of raising any exception. 258 259 Passing errors='coerce' will force an out-of-bounds date to NaT, 260 in addition to forcing non-dates (or non-parseable dates) to NaT. 261 262 >>> pd.to_datetime('13000101', format='%Y%m%d', errors='ignore') 263 datetime.datetime(1300, 1, 1, 0, 0) 264 >>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce') 265 NaT 266 267 Passing infer_datetime_format=True can often-times speedup a parsing 268 if its not an ISO8601 format exactly, but in a regular format. 269 270 >>> s = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000']*1000) 271 272 >>> s.head() 273 0 3/11/2000 274 1 3/12/2000 275 2 3/13/2000 276 3 3/11/2000 277 4 3/12/2000 278 dtype: object 279 280 >>> %timeit pd.to_datetime(s,infer_datetime_format=True) 281 100 loops, best of 3: 10.4 ms per loop 282 283 >>> %timeit pd.to_datetime(s,infer_datetime_format=False) 284 1 loop, best of 3: 471 ms per loop 285 286 Using a unix epoch time 287 288 >>> pd.to_datetime(1490195805, unit='s') 289 Timestamp('2017-03-22 15:16:45') 290 >>> pd.to_datetime(1490195805433502912, unit='ns') 291 Timestamp('2017-03-22 15:16:45.433502912') 292 293 .. warning:: For float arg, precision rounding might happen. To prevent 294 unexpected behavior use a fixed-width exact type. 295 296 Using a non-unix epoch origin 297 298 >>> pd.to_datetime([1, 2, 3], unit='D', 299 origin=pd.Timestamp('1960-01-01')) 300 0 1960-01-02 301 1 1960-01-03 302 2 1960-01-04 303 304 See also 305 -------- 306 pandas.DataFrame.astype : Cast argument to a specified dtype. 307 pandas.to_timedelta : Convert argument to timedelta. 
308 """ 309 from pandas.core.indexes.datetimes import DatetimeIndex 310 311 tz = 'utc' if utc else None 312 313 def _convert_listlike(arg, box, format, name=None, tz=tz): 314 315 if isinstance(arg, (list, tuple)): 316 arg = np.array(arg, dtype='O') 317 318 # these are shortcutable 319 if is_datetime64tz_dtype(arg): 320 if not isinstance(arg, DatetimeIndex): 321 return DatetimeIndex(arg, tz=tz, name=name) 322 if utc: 323 arg = arg.tz_convert(None).tz_localize('UTC') 324 return arg 325 326 elif is_datetime64_ns_dtype(arg): 327 if box and not isinstance(arg, DatetimeIndex): 328 try: 329 return DatetimeIndex(arg, tz=tz, name=name) 330 except ValueError: 331 pass 332 333 return arg 334 335 elif unit is not None: 336 if format is not None: 337 raise ValueError("cannot specify both format and unit") 338 arg = getattr(arg, 'values', arg) 339 result = tslib.array_with_unit_to_datetime(arg, unit, 340 errors=errors) 341 if box: 342 if errors == 'ignore': 343 from pandas import Index 344 return Index(result) 345 346 return DatetimeIndex(result, tz=tz, name=name) 347 return result 348 elif getattr(arg, 'ndim', 1) > 1: 349 raise TypeError('arg must be a string, datetime, list, tuple, ' 350 '1-d array, or Series') 351 352 arg = _ensure_object(arg) 353 require_iso8601 = False 354 355 if infer_datetime_format and format is None: 356 format = _guess_datetime_format_for_array(arg, dayfirst=dayfirst) 357 358 if format is not None: 359 # There is a special fast-path for iso8601 formatted 360 # datetime strings, so in those cases don't use the inferred 361 # format because this path makes process slower in this 362 # special case 363 format_is_iso8601 = _format_is_iso(format) 364 if format_is_iso8601: 365 require_iso8601 = not infer_datetime_format 366 format = None 367 368 try: 369 result = None 370 371 if format is not None: 372 # shortcut formatting here 373 if format == '%Y%m%d': 374 try: 375 result = _attempt_YYYYMMDD(arg, errors=errors) 376 except: 377 raise ValueError("cannot convert the input to " 378 "'%Y%m%d' date format") 379 380 # fallback 381 if result is None: 382 try: 383 result, timezones = array_strptime( 384 arg, format, exact=exact, errors=errors) 385 if '%Z' in format or '%z' in format: 386 return _return_parsed_timezone_results( 387 result, timezones, box, tz) 388 except tslib.OutOfBoundsDatetime: 389 if errors == 'raise': 390 raise 391 result = arg 392 except ValueError: 393 # if format was inferred, try falling back 394 # to array_to_datetime - terminate here 395 # for specified formats 396 if not infer_datetime_format: 397 if errors == 'raise': 398 raise 399 result = arg 400 401 if result is None and (format is None or infer_datetime_format): 402 result = tslib.array_to_datetime( 403 arg, 404 errors=errors, 405 utc=utc, 406 dayfirst=dayfirst, 407 yearfirst=yearfirst, 408 require_iso8601=require_iso8601 409 ) 410 411 if is_datetime64_dtype(result) and box: 412 result = DatetimeIndex(result, tz=tz, name=name) 413 return result 414 415 except ValueError as e: 416 try: 417 values, tz = conversion.datetime_to_datetime64(arg) 418 return DatetimeIndex._simple_new(values, name=name, tz=tz) 419 except (ValueError, TypeError): 420 raise e 421 422 if arg is None: 423 return None 424 425 # handle origin 426 if origin == 'julian': 427 428 original = arg 429 j0 = tslib.Timestamp(0).to_julian_date() 430 if unit != 'D': 431 raise ValueError("unit must be 'D' for origin='julian'") 432 try: 433 arg = arg - j0 434 except: 435 raise ValueError("incompatible 'arg' type for given " 436 "'origin'='julian'") 
437 438 # premptively check this for a nice range 439 j_max = tslib.Timestamp.max.to_julian_date() - j0 440 j_min = tslib.Timestamp.min.to_julian_date() - j0 441 if np.any(arg > j_max) or np.any(arg < j_min): 442 raise tslib.OutOfBoundsDatetime( 443 "{original} is Out of Bounds for " 444 "origin='julian'".format(original=original)) 445 446 elif origin not in ['unix', 'julian']: 447 448 # arg must be a numeric 449 original = arg 450 if not ((is_scalar(arg) and (is_integer(arg) or is_float(arg))) or 451 is_numeric_dtype(np.asarray(arg))): 452 raise ValueError( 453 "'{arg}' is not compatible with origin='{origin}'; " 454 "it must be numeric with a unit specified ".format( 455 arg=arg, 456 origin=origin)) 457 458 # we are going to offset back to unix / epoch time 459 try: 460 offset = tslib.Timestamp(origin) 461 except tslib.OutOfBoundsDatetime: 462 raise tslib.OutOfBoundsDatetime( 463 "origin {origin} is Out of Bounds".format(origin=origin)) 464 except ValueError: 465 raise ValueError("origin {origin} cannot be converted " 466 "to a Timestamp".format(origin=origin)) 467 468 if offset.tz is not None: 469 raise ValueError( 470 "origin offset {} must be tz-naive".format(offset)) 471 offset -= tslib.Timestamp(0) 472 473 # convert the offset to the unit of the arg 474 # this should be lossless in terms of precision 475 offset = offset // tslib.Timedelta(1, unit=unit) 476 477 # scalars & ndarray-like can handle the addition 478 if is_list_like(arg) and not isinstance( 479 arg, (ABCSeries, ABCIndexClass, np.ndarray)): 480 arg = np.asarray(arg) 481 arg = arg + offset 482 483 if isinstance(arg, tslib.Timestamp): 484 result = arg 485 elif isinstance(arg, ABCSeries): 486 cache_array = _maybe_cache(arg, format, cache, tz, _convert_listlike) 487 if not cache_array.empty: 488 result = arg.map(cache_array) 489 else: 490 from pandas import Series 491 values = _convert_listlike(arg._values, True, format) 492 result = Series(values, index=arg.index, name=arg.name) 493 elif isinstance(arg, (ABCDataFrame, MutableMapping)): 494 result = _assemble_from_unit_mappings(arg, errors=errors) 495 elif isinstance(arg, ABCIndexClass): 496 cache_array = _maybe_cache(arg, format, cache, tz, _convert_listlike) 497 if not cache_array.empty: 498 result = _convert_and_box_cache(arg, cache_array, box, errors, 499 name=arg.name) 500 else: 501 result = _convert_listlike(arg, box, format, name=arg.name) 502 elif is_list_like(arg): 503 cache_array = _maybe_cache(arg, format, cache, tz, _convert_listlike) 504 if not cache_array.empty: 505 result = _convert_and_box_cache(arg, cache_array, box, errors) 506 else: 507 result = _convert_listlike(arg, box, format) 508 else: 509 result = _convert_listlike(np.array([arg]), box, format)[0] 510 511 return result 512 513 514 # mappings for assembling units 515 _unit_map = {'year': 'year', 516 'years': 'year', 517 'month': 'month', 518 'months': 'month', 519 'day': 'day', 520 'days': 'day', 521 'hour': 'h', 522 'hours': 'h', 523 'minute': 'm', 524 'minutes': 'm', 525 'second': 's', 526 'seconds': 's', 527 'ms': 'ms', 528 'millisecond': 'ms', 529 'milliseconds': 'ms', 530 'us': 'us', 531 'microsecond': 'us', 532 'microseconds': 'us', 533 'ns': 'ns', 534 'nanosecond': 'ns', 535 'nanoseconds': 'ns' 536 } 537 538 539 def _assemble_from_unit_mappings(arg, errors): 540 """ 541 assemble the unit specified fields from the arg (DataFrame) 542 Return a Series for actual parsing 543 544 Parameters 545 ---------- 546 arg : DataFrame 547 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 548 549 - If 
'raise', then invalid parsing will raise an exception 550 - If 'coerce', then invalid parsing will be set as NaT 551 - If 'ignore', then invalid parsing will return the input 552 553 Returns 554 ------- 555 Series 556 """ 557 from pandas import to_timedelta, to_numeric, DataFrame 558 arg = DataFrame(arg) 559 if not arg.columns.is_unique: 560 raise ValueError("cannot assemble with duplicate keys") 561 562 # replace passed unit with _unit_map 563 def f(value): 564 if value in _unit_map: 565 return _unit_map[value] 566 567 # m is case significant 568 if value.lower() in _unit_map: 569 return _unit_map[value.lower()] 570 571 return value 572 573 unit = {k: f(k) for k in arg.keys()} 574 unit_rev = {v: k for k, v in unit.items()} 575 576 # we require at least Ymd 577 required = ['year', 'month', 'day'] 578 req = sorted(list(set(required) - set(unit_rev.keys()))) 579 if len(req): 580 raise ValueError("to assemble mappings requires at least that " 581 "[year, month, day] be specified: [{required}] " 582 "is missing".format(required=','.join(req))) 583 584 # keys we don't recognize 585 excess = sorted(list(set(unit_rev.keys()) - set(_unit_map.values()))) 586 if len(excess): 587 raise ValueError("extra keys have been passed " 588 "to the datetime assemblage: " 589 "[{excess}]".format(excess=','.join(excess))) 590 591 def coerce(values): 592 # we allow coercion to if errors allows 593 values = to_numeric(values, errors=errors) 594 595 # prevent overflow in case of int8 or int16 596 if is_integer_dtype(values): 597 values = values.astype('int64', copy=False) 598 return values 599 600 values = (coerce(arg[unit_rev['year']]) * 10000 + 601 coerce(arg[unit_rev['month']]) * 100 + 602 coerce(arg[unit_rev['day']])) 603 try: 604 values = to_datetime(values, format='%Y%m%d', errors=errors) 605 except (TypeError, ValueError) as e: 606 raise ValueError("cannot assemble the " 607 "datetimes: {error}".format(error=e)) 608 609 for u in ['h', 'm', 's', 'ms', 'us', 'ns']: 610 value = unit_rev.get(u) 611 if value is not None and value in arg: 612 try: 613 values += to_timedelta(coerce(arg[value]), 614 unit=u, 615 errors=errors) 616 except (TypeError, ValueError) as e: 617 raise ValueError("cannot assemble the datetimes [{value}]: " 618 "{error}".format(value=value, error=e)) 619 620 return values 621 622 623 def _attempt_YYYYMMDD(arg, errors): 624 """ try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like, 625 arg is a passed in as an object dtype, but could really be ints/strings 626 with nan-like/or floats (e.g. with nan) 627 628 Parameters 629 ---------- 630 arg : passed value 631 errors : 'raise','ignore','coerce' 632 """ 633 634 def calc(carg): 635 # calculate the actual result 636 carg = carg.astype(object) 637 parsed = parsing.try_parse_year_month_day(carg / 10000, 638 carg / 100 % 100, 639 carg % 100) 640 return tslib.array_to_datetime(parsed, errors=errors) 641 642 def calc_with_mask(carg, mask): 643 result = np.empty(carg.shape, dtype='M8[ns]') 644 iresult = result.view('i8') 645 iresult[~mask] = tslib.iNaT 646 result[mask] = calc(carg[mask].astype(np.float64).astype(np.int64)). 
\ 647 astype('M8[ns]') 648 return result 649 650 # try intlike / strings that are ints 651 try: 652 return calc(arg.astype(np.int64)) 653 except: 654 pass 655 656 # a float with actual np.nan 657 try: 658 carg = arg.astype(np.float64) 659 return calc_with_mask(carg, notna(carg)) 660 except: 661 pass 662 663 # string with NaN-like 664 try: 665 mask = ~algorithms.isin(arg, list(tslib.nat_strings)) 666 return calc_with_mask(arg, mask) 667 except: 668 pass 669 670 return None 671 672 673 # Fixed time formats for time parsing 674 _time_formats = ["%H:%M", "%H%M", "%I:%M%p", "%I%M%p", 675 "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", "%I%M%S%p"] 676 677 678 def _guess_time_format_for_array(arr): 679 # Try to guess the format based on the first non-NaN element 680 non_nan_elements = notna(arr).nonzero()[0] 681 if len(non_nan_elements): 682 element = arr[non_nan_elements[0]] 683 for time_format in _time_formats: 684 try: 685 datetime.strptime(element, time_format) 686 return time_format 687 except ValueError: 688 pass 689 690 return None 691 692 693 def to_time(arg, format=None, infer_time_format=False, errors='raise'): 694 """ 695 Parse time strings to time objects using fixed strptime formats ("%H:%M", 696 "%H%M", "%I:%M%p", "%I%M%p", "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", 697 "%I%M%S%p") 698 699 Use infer_time_format if all the strings are in the same format to speed 700 up conversion. 701 702 Parameters 703 ---------- 704 arg : string in time format, datetime.time, list, tuple, 1-d array, Series 705 format : str, default None 706 Format used to convert arg into a time object. If None, fixed formats 707 are used. 708 infer_time_format: bool, default False 709 Infer the time format based on the first non-NaN element. If all 710 strings are in the same format, this will speed up conversion. 
711 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 712 - If 'raise', then invalid parsing will raise an exception 713 - If 'coerce', then invalid parsing will be set as None 714 - If 'ignore', then invalid parsing will return the input 715 716 Returns 717 ------- 718 datetime.time 719 """ 720 from pandas.core.series import Series 721 722 def _convert_listlike(arg, format): 723 724 if isinstance(arg, (list, tuple)): 725 arg = np.array(arg, dtype='O') 726 727 elif getattr(arg, 'ndim', 1) > 1: 728 raise TypeError('arg must be a string, datetime, list, tuple, ' 729 '1-d array, or Series') 730 731 arg = _ensure_object(arg) 732 733 if infer_time_format and format is None: 734 format = _guess_time_format_for_array(arg) 735 736 times = [] 737 if format is not None: 738 for element in arg: 739 try: 740 times.append(datetime.strptime(element, format).time()) 741 except (ValueError, TypeError): 742 if errors == 'raise': 743 msg = ("Cannot convert {element} to a time with given " 744 "format {format}").format(element=element, 745 format=format) 746 raise ValueError(msg) 747 elif errors == 'ignore': 748 return arg 749 else: 750 times.append(None) 751 else: 752 formats = _time_formats[:] 753 format_found = False 754 for element in arg: 755 time_object = None 756 for time_format in formats: 757 try: 758 time_object = datetime.strptime(element, 759 time_format).time() 760 if not format_found: 761 # Put the found format in front 762 fmt = formats.pop(formats.index(time_format)) 763 formats.insert(0, fmt) 764 format_found = True 765 break 766 except (ValueError, TypeError): 767 continue 768 769 if time_object is not None: 770 times.append(time_object) 771 elif errors == 'raise': 772 raise ValueError("Cannot convert arg {arg} to " 773 "a time".format(arg=arg)) 774 elif errors == 'ignore': 775 return arg 776 else: 777 times.append(None) 778 779 return times 780 781 if arg is None: 782 return arg 783 elif isinstance(arg, time): 784 return arg 785 elif isinstance(arg, Series): 786 values = _convert_listlike(arg._values, format) 787 return Series(values, index=arg.index, name=arg.name) 788 elif isinstance(arg, ABCIndexClass): 789 return _convert_listlike(arg, format) 790 elif is_list_like(arg): 791 return _convert_listlike(arg, format) 792 793 return _convert_listlike(np.array([arg]), format)[0] 794 795 796 def format(dt): 797 """Returns date in YYYYMMDD format.""" 798 return dt.strftime('%Y%m%d') 799 800 801 OLE_TIME_ZERO = datetime(1899, 12, 30, 0, 0, 0) 802 803 804 def ole2datetime(oledt): 805 """function for converting excel date to normal date format""" 806 val = float(oledt) 807 808 # Excel has a bug where it thinks the date 2/29/1900 exists 809 # we just reject any date before 3/1/1900. 810 if val < 61: 811 msg = "Value is outside of acceptable range: {value}".format(value=val) 812 raise ValueError(msg) 813 814 return OLE_TIME_ZERO + timedelta(days=val) 815 [end of pandas/core/tools/datetimes.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
b5fc76954963e30b7d7dd4c30ed7363568fd01ee
BUG: Index with integer data and datetime64[ns, tz] dtype does not localize correctly ``` In [1]: pd.__version__ Out[1]: '0.23.0rc2+16.gccf4b96.dirty' In [2]: val = [pd.Timestamp('2018-01-01', tz='US/Pacific').value] In [3]: pd.Index(val, dtype='datetime64[ns, US/Pacific]') Out[3]: DatetimeIndex(['2018-01-01 08:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None) ``` The localization appears to localize directly to the timezone instead of localizing first to UTC and therefore does not roundtrip correctly from the timestamp value. #20956 can be simplified once this is fixed. Expected: ``` Out[3]: DatetimeIndex(['2018-01-01 00:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None) ```
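A minimal round-trip sketch to make the expectation above concrete (illustrative only, written against public pandas API; it is not part of the original report). The point is that the integer is the timestamp's UTC epoch value, so rebuilding from it should convert into the target zone rather than re-localize the wall-clock digits:
```
import pandas as pd

# The integer in the report is the timestamp's UTC epoch value in nanoseconds.
ts = pd.Timestamp('2018-01-01', tz='US/Pacific')
val = ts.value  # 1514793600000000000, i.e. 2018-01-01 08:00 UTC

# Treating that integer as UTC and then converting recovers the original
# wall time, which is the round trip the expected output describes.
roundtrip = pd.DatetimeIndex([val]).tz_localize('UTC').tz_convert('US/Pacific')
assert roundtrip[0] == ts  # Timestamp('2018-01-01 00:00:00-0800', tz='US/Pacific')
```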
hmm I think there is an open issue about this already, if you'd have a look
2018-05-26T05:39:36Z
<patch> diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt --- a/doc/source/whatsnew/v0.24.0.txt +++ b/doc/source/whatsnew/v0.24.0.txt @@ -36,7 +36,7 @@ Datetimelike API Changes Other API Changes ^^^^^^^^^^^^^^^^^ -- +- :class:`DatetimeIndex` now accepts :class:`Int64Index` arguments as epoch timestamps (:issue:`20997`) - - @@ -92,7 +92,7 @@ Datetimelike ^^^^^^^^^^^^ - Fixed bug where two :class:`DateOffset` objects with different ``normalize`` attributes could evaluate as equal (:issue:`21404`) -- +- Bug in :class:`Index` with ``datetime64[ns, tz]`` dtype that did not localize integer data correctly (:issue:`20964`) - Timedelta diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -1175,6 +1175,10 @@ def astype(self, dtype, copy=True): return CategoricalIndex(self.values, name=self.name, dtype=dtype, copy=copy) try: + if is_datetime64tz_dtype(dtype): + from pandas.core.indexes.datetimes import DatetimeIndex + return DatetimeIndex(self.values, name=self.name, dtype=dtype, + copy=copy) return Index(self.values.astype(dtype, copy=copy), name=self.name, dtype=dtype) except (TypeError, ValueError): diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py --- a/pandas/core/indexes/datetimes.py +++ b/pandas/core/indexes/datetimes.py @@ -395,57 +395,43 @@ def __new__(cls, data=None, # data must be Index or np.ndarray here if not (is_datetime64_dtype(data) or is_datetimetz(data) or - is_integer_dtype(data)): + is_integer_dtype(data) or lib.infer_dtype(data) == 'integer'): data = tools.to_datetime(data, dayfirst=dayfirst, yearfirst=yearfirst) - if issubclass(data.dtype.type, np.datetime64) or is_datetimetz(data): - - if isinstance(data, DatetimeIndex): - if tz is None: - tz = data.tz - elif data.tz is None: - data = data.tz_localize(tz, ambiguous=ambiguous) - else: - # the tz's must match - if str(tz) != str(data.tz): - msg = ('data is already tz-aware {0}, unable to ' - 'set specified tz: {1}') - raise TypeError(msg.format(data.tz, tz)) + if isinstance(data, DatetimeIndex): + if tz is None: + tz = data.tz + elif data.tz is None: + data = data.tz_localize(tz, ambiguous=ambiguous) + else: + # the tz's must match + if str(tz) != str(data.tz): + msg = ('data is already tz-aware {0}, unable to ' + 'set specified tz: {1}') + raise TypeError(msg.format(data.tz, tz)) - subarr = data.values + subarr = data.values - if freq is None: - freq = data.freq - verify_integrity = False - else: - if data.dtype != _NS_DTYPE: - subarr = conversion.ensure_datetime64ns(data) - else: - subarr = data + if freq is None: + freq = data.freq + verify_integrity = False + elif issubclass(data.dtype.type, np.datetime64): + if data.dtype != _NS_DTYPE: + data = conversion.ensure_datetime64ns(data) + if tz is not None: + # Convert tz-naive to UTC + tz = timezones.maybe_get_tz(tz) + data = conversion.tz_localize_to_utc(data.view('i8'), tz, + ambiguous=ambiguous) + subarr = data.view(_NS_DTYPE) else: # must be integer dtype otherwise - if isinstance(data, Int64Index): - raise TypeError('cannot convert Int64Index->DatetimeIndex') + # assume this data are epoch timestamps if data.dtype != _INT64_DTYPE: - data = data.astype(np.int64) + data = data.astype(np.int64, copy=False) subarr = data.view(_NS_DTYPE) - if isinstance(subarr, DatetimeIndex): - if tz is None: - tz = subarr.tz - else: - if tz is not None: - tz = timezones.maybe_get_tz(tz) - - if (not isinstance(data, DatetimeIndex) or - 
getattr(data, 'tz', None) is None): - # Convert tz-naive to UTC - ints = subarr.view('i8') - subarr = conversion.tz_localize_to_utc(ints, tz, - ambiguous=ambiguous) - subarr = subarr.view(_NS_DTYPE) - subarr = cls._simple_new(subarr, name=name, freq=freq, tz=tz) if dtype is not None: if not is_dtype_equal(subarr.dtype, dtype): @@ -807,8 +793,9 @@ def _mpl_repr(self): @cache_readonly def _is_dates_only(self): + """Return a boolean if we are only dates (and don't have a timezone)""" from pandas.io.formats.format import _is_dates_only - return _is_dates_only(self.values) + return _is_dates_only(self.values) and self.tz is None @property def _formatter_func(self): @@ -1244,7 +1231,7 @@ def join(self, other, how='left', level=None, return_indexers=False, See Index.join """ if (not isinstance(other, DatetimeIndex) and len(other) > 0 and - other.inferred_type not in ('floating', 'mixed-integer', + other.inferred_type not in ('floating', 'integer', 'mixed-integer', 'mixed-integer-float', 'mixed')): try: other = DatetimeIndex(other) @@ -2100,8 +2087,9 @@ def normalize(self): dtype='datetime64[ns, Asia/Calcutta]', freq=None) """ new_values = conversion.date_normalize(self.asi8, self.tz) - return DatetimeIndex(new_values, freq='infer', name=self.name, - tz=self.tz) + return DatetimeIndex(new_values, + freq='infer', + name=self.name).tz_localize(self.tz) @Substitution(klass='DatetimeIndex') @Appender(_shared_docs['searchsorted']) @@ -2182,8 +2170,6 @@ def insert(self, loc, item): try: new_dates = np.concatenate((self[:loc].asi8, [item.view(np.int64)], self[loc:].asi8)) - if self.tz is not None: - new_dates = conversion.tz_convert(new_dates, 'UTC', self.tz) return DatetimeIndex(new_dates, name=self.name, freq=freq, tz=self.tz) except (AttributeError, TypeError): @@ -2221,8 +2207,6 @@ def delete(self, loc): if (loc.start in (0, None) or loc.stop in (len(self), None)): freq = self.freq - if self.tz is not None: - new_dates = conversion.tz_convert(new_dates, 'UTC', self.tz) return DatetimeIndex(new_dates, name=self.name, freq=freq, tz=self.tz) def tz_convert(self, tz): </patch>
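A short usage sketch of what the diff above aims at (hedged: the names and expected reprs are taken from the problem statement, not re-verified against this exact commit):
```
import pandas as pd

val = pd.Timestamp('2018-01-01', tz='US/Pacific').value  # UTC epoch nanoseconds

# Integer data is now read as UTC epoch values, so the constructor matches the
# expected output from the problem statement:
pd.Index([val], dtype='datetime64[ns, US/Pacific]')
# DatetimeIndex(['2018-01-01 00:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None)

# Index.astype with a tz-aware target dtype is routed through DatetimeIndex by
# the same patch, so it should agree with the constructor:
pd.Index([val]).astype('datetime64[ns, US/Pacific]')
```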
[]
[]
pandas-dev__pandas-11957
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> ERR: Maximum recursion depth exceeded in comparision when comparing a TimeDelta to numpy object array of TimeDelta See http://stackoverflow.com/questions/34251068/runtimeerror-from-scipy-stats-mode-on-array-of-timedelta-maximum-recursion-dept Python 3.5.1, pandas 0.17.1, numpy 0.10.1: ``` Python 3.5.1 |Continuum Analytics, Inc.| (default, Dec 7 2015, 11:24:55) Type "copyright", "credits" or "license" for more information. IPython 4.0.1 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object', use 'object??' for extra details. In [1]: import numpy as np In [2]: np.__version__ Out[2]: '1.10.1' In [3]: import pandas as pd In [4]: pd.__version__ Out[4]: '0.17.1' ``` Create a numpy array of `TimeDelta` objects, and do a comparison of the array to a `TimeDelta` instance: ``` In [5]: from pandas import Timedelta In [6]: periods = [Timedelta('0 days 01:00:00'), Timedelta('0 days 01:00:00')] In [7]: p = np.array(periods) In [8]: periods[0] > p --------------------------------------------------------------------------- RecursionError Traceback (most recent call last) <ipython-input-8-1c05a376ecc2> in <module>() ----> 1 periods[0] > p pandas/tslib.pyx in pandas.tslib._Timedelta.__richcmp__ (pandas/tslib.c:38155)() <SNIP> pandas/tslib.pyx in pandas.tslib._Timedelta.__richcmp__ (pandas/tslib.c:38155)() pandas/tslib.pyx in pandas.tslib._Timedelta.__richcmp__ (pandas/tslib.c:38155)() RecursionError: maximum recursion depth exceeded in comparison In [9]: ``` </issue> <code> [start of README.md] 1 # pandas: powerful Python data analysis toolkit 2 3 <table> 4 <tr> 5 <td>Latest Release</td> 6 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td> 7 </tr> 8 <td></td> 9 <td><img src="https://anaconda.org/pandas/pandas/badges/version.svg" alt="latest release" /></td> 10 </tr> 11 <tr> 12 <td>Package Status</td> 13 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 14 </tr> 15 <tr> 16 <td>License</td> 17 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td> 18 </tr> 19 <tr> 20 <td>Build Status</td> 21 <td> 22 <a href="https://travis-ci.org/pydata/pandas"> 23 <img src="https://travis-ci.org/pydata/pandas.svg?branch=master" alt="travis build status" /> 24 </a> 25 </td> 26 </tr> 27 <td></td> 28 <td> 29 <a href="https://ci.appveyor.com/project/jreback/pandas-465"> 30 <img src="https://ci.appveyor.com/api/projects/status/iblk29s98quexwxi/branch/master?svg=true" alt="appveyor build status" /> 31 </a> 32 </td> 33 </tr> 34 <tr> 35 <td>Conda</td> 36 <td> 37 <a href="http://pandas.pydata.org"> 38 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda downloads" /> 39 </a> 40 </td> 41 </tr> 42 <tr> 43 <td>PyPI</td> 44 <td> 45 <a href="https://pypi.python.org/pypi/pandas/"> 46 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" /> 47 </a> 48 </td> 49 </tr> 50 </table> 51 52 [![https://gitter.im/pydata/pandas](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 53 54 ## What is it 55 56 **pandas** is a Python package providing fast, flexible, and expressive data 57 structures designed to make working with "relational" or "labeled" data both 58 
easy and intuitive. It aims to be the fundamental high-level building block for 59 doing practical, **real world** data analysis in Python. Additionally, it has 60 the broader goal of becoming **the most powerful and flexible open source data 61 analysis / manipulation tool available in any language**. It is already well on 62 its way toward this goal. 63 64 ## Main Features 65 Here are just a few of the things that pandas does well: 66 67 - Easy handling of [**missing data**][missing-data] (represented as 68 `NaN`) in floating point as well as non-floating point data 69 - Size mutability: columns can be [**inserted and 70 deleted**][insertion-deletion] from DataFrame and higher dimensional 71 objects 72 - Automatic and explicit [**data alignment**][alignment]: objects can 73 be explicitly aligned to a set of labels, or the user can simply 74 ignore the labels and let `Series`, `DataFrame`, etc. automatically 75 align the data for you in computations 76 - Powerful, flexible [**group by**][groupby] functionality to perform 77 split-apply-combine operations on data sets, for both aggregating 78 and transforming data 79 - Make it [**easy to convert**][conversion] ragged, 80 differently-indexed data in other Python and NumPy data structures 81 into DataFrame objects 82 - Intelligent label-based [**slicing**][slicing], [**fancy 83 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 84 large data sets 85 - Intuitive [**merging**][merging] and [**joining**][joining] data 86 sets 87 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 88 data sets 89 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 90 labels per tick) 91 - Robust IO tools for loading data from [**flat files**][flat-files] 92 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 93 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 94 - [**Time series**][timeseries]-specific functionality: date range 95 generation and frequency conversion, moving window statistics, 96 moving window linear regressions, date shifting and lagging, etc. 
97 98 99 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 100 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 101 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 102 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 103 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 104 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 105 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 106 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 107 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 108 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 109 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 110 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 111 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 112 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 113 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 114 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 115 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 116 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 117 118 ## Where to get it 119 The source code is currently hosted on GitHub at: 120 http://github.com/pydata/pandas 121 122 Binary installers for the latest released version are available at the Python 123 package index 124 125 http://pypi.python.org/pypi/pandas/ 126 127 And via `easy_install`: 128 129 ```sh 130 easy_install pandas 131 ``` 132 133 or `pip`: 134 135 ```sh 136 pip install pandas 137 ``` 138 139 or `conda`: 140 141 ```sh 142 conda install pandas 143 ``` 144 145 ## Dependencies 146 - [NumPy](http://www.numpy.org): 1.7.0 or higher 147 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 148 - [pytz](http://pytz.sourceforge.net) 149 - Needed for time zone support with ``pandas.date_range`` 150 151 ### Highly Recommended Dependencies 152 - [numexpr](https://github.com/pydata/numexpr) 153 - Needed to accelerate some expression evaluation operations 154 - Required by PyTables 155 - [bottleneck](http://berkeleyanalytics.com/bottleneck) 156 - Needed to accelerate certain numerical operations 157 158 ### Optional dependencies 159 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher. 160 - [SciPy](http://www.scipy.org): miscellaneous statistical functions 161 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage 162 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended. 
163 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting 164 - [statsmodels](http://statsmodels.sourceforge.net/) 165 - Needed for parts of `pandas.stats` 166 - For Excel I/O: 167 - [xlrd/xlwt](http://www.python-excel.org/) 168 - Excel reading (xlrd) and writing (xlwt) 169 - [openpyxl](http://packages.python.org/openpyxl/) 170 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for 171 writing .xlsx files 172 - xlrd >= 0.9.0 173 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter) 174 - Alternative Excel writer. 175 - [Google bq Command Line Tool](https://cloud.google.com/bigquery/bq-command-line-tool) 176 - Needed for `pandas.io.gbq` 177 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access. 178 - One of the following combinations of libraries is needed to use the 179 top-level [`pandas.read_html`][read-html-docs] function: 180 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any 181 recent version of [html5lib][html5lib] is okay.) 182 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml] 183 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml] 184 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas] 185 for reasons as to why you should probably **not** take this approach. 186 187 #### Notes about HTML parsing libraries 188 - If you install [BeautifulSoup4][BeautifulSoup4] you must install 189 either [lxml][lxml] or [html5lib][html5lib] or both. 190 `pandas.read_html` will **not** work with *only* `BeautifulSoup4` 191 installed. 192 - You are strongly encouraged to read [HTML reading 193 gotchas][html-gotchas]. It explains issues surrounding the 194 installation and usage of the above three libraries. 195 - You may need to install an older version of 196 [BeautifulSoup4][BeautifulSoup4]: 197 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 198 32-bit Ubuntu/Debian 199 - Additionally, if you're using [Anaconda][Anaconda] you should 200 definitely read [the gotchas about HTML parsing][html-gotchas] 201 libraries 202 - If you're on a system with `apt-get` you can do 203 204 ```sh 205 sudo apt-get build-dep python-lxml 206 ``` 207 208 to get the necessary dependencies for installation of [lxml][lxml]. 209 This will prevent further headaches down the line. 210 211 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib" 212 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4" 213 [lxml]: http://lxml.de 214 [Anaconda]: https://store.continuum.io/cshop/anaconda 215 [NumPy]: http://numpy.scipy.org/ 216 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing 217 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html 218 219 ## Installation from sources 220 To install pandas from source you need Cython in addition to the normal 221 dependencies above. 
Cython can be installed from pypi: 222 223 ```sh 224 pip install cython 225 ``` 226 227 In the `pandas` directory (same one where you found this file after 228 cloning the git repo), execute: 229 230 ```sh 231 python setup.py install 232 ``` 233 234 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 235 236 ```sh 237 python setup.py develop 238 ``` 239 240 Alternatively, you can use `pip` if you want all the dependencies pulled 241 in automatically (the `-e` option is for installing it in [development 242 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 243 244 ```sh 245 pip install -e . 246 ``` 247 248 On Windows, you will need to install MinGW and execute: 249 250 ```sh 251 python setup.py build --compiler=mingw32 252 python setup.py install 253 ``` 254 255 See http://pandas.pydata.org/ for more information. 256 257 ## License 258 BSD 259 260 ## Documentation 261 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 262 263 The Sphinx documentation should provide a good starting point for learning how 264 to use the library. Expect the docs to continue to expand as time goes on. 265 266 ## Background 267 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 268 has been under active development since then. 269 270 ## Discussion and Development 271 Since pandas development is related to a number of other scientific 272 Python projects, questions are welcome on the scipy-user mailing 273 list. Specialized discussions or design issues should take place on 274 the PyData mailing list / Google group: 275 276 https://groups.google.com/forum/#!forum/pydata 277 [end of README.md] [start of pandas/tseries/timedeltas.py] 1 """ 2 timedelta support tools 3 """ 4 5 import re 6 import numpy as np 7 import pandas.tslib as tslib 8 from pandas import compat 9 from pandas.core.common import (ABCSeries, is_integer_dtype, 10 is_timedelta64_dtype, is_list_like, 11 isnull, _ensure_object, ABCIndexClass) 12 from pandas.util.decorators import deprecate_kwarg 13 14 @deprecate_kwarg(old_arg_name='coerce', new_arg_name='errors', 15 mapping={True: 'coerce', False: 'raise'}) 16 def to_timedelta(arg, unit='ns', box=True, errors='raise', coerce=None): 17 """ 18 Convert argument to timedelta 19 20 Parameters 21 ---------- 22 arg : string, timedelta, list, tuple, 1-d array, or Series 23 unit : unit of the arg (D,h,m,s,ms,us,ns) denote the unit, which is an integer/float number 24 box : boolean, default True 25 - If True returns a Timedelta/TimedeltaIndex of the results 26 - if False returns a np.timedelta64 or ndarray of values of dtype timedelta64[ns] 27 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 28 - If 'raise', then invalid parsing will raise an exception 29 - If 'coerce', then invalid parsing will be set as NaT 30 - If 'ignore', then invalid parsing will return the input 31 32 Returns 33 ------- 34 ret : timedelta64/arrays of timedelta64 if parsing succeeded 35 36 Examples 37 -------- 38 39 Parsing a single string to a Timedelta: 40 41 >>> pd.to_timedelta('1 days 06:05:01.00003') 42 Timedelta('1 days 06:05:01.000030') 43 >>> pd.to_timedelta('15.5us') 44 Timedelta('0 days 00:00:00.000015') 45 46 Parsing a list or array of strings: 47 48 >>> pd.to_timedelta(['1 days 06:05:01.00003', '15.5us', 'nan']) 49 TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015', NaT], dtype='timedelta64[ns]', freq=None) 50 51 Converting numbers by specifying the `unit` 
keyword argument: 52 53 >>> pd.to_timedelta(np.arange(5), unit='s') 54 TimedeltaIndex(['00:00:00', '00:00:01', '00:00:02', '00:00:03', '00:00:04'], dtype='timedelta64[ns]', freq=None) 55 >>> pd.to_timedelta(np.arange(5), unit='d') 56 TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'], dtype='timedelta64[ns]', freq=None) 57 """ 58 unit = _validate_timedelta_unit(unit) 59 60 def _convert_listlike(arg, box, unit, name=None): 61 62 if isinstance(arg, (list, tuple)) or not hasattr(arg, 'dtype'): 63 arg = np.array(list(arg), dtype='O') 64 65 # these are shortcutable 66 if is_timedelta64_dtype(arg): 67 value = arg.astype('timedelta64[ns]') 68 elif is_integer_dtype(arg): 69 value = arg.astype('timedelta64[{0}]'.format(unit)).astype('timedelta64[ns]', copy=False) 70 else: 71 value = tslib.array_to_timedelta64(_ensure_object(arg), unit=unit, errors=errors) 72 value = value.astype('timedelta64[ns]', copy=False) 73 74 if box: 75 from pandas import TimedeltaIndex 76 value = TimedeltaIndex(value,unit='ns', name=name) 77 return value 78 79 if arg is None: 80 return arg 81 elif isinstance(arg, ABCSeries): 82 from pandas import Series 83 values = _convert_listlike(arg._values, box=False, unit=unit) 84 return Series(values, index=arg.index, name=arg.name, dtype='m8[ns]') 85 elif isinstance(arg, ABCIndexClass): 86 return _convert_listlike(arg, box=box, unit=unit, name=arg.name) 87 elif is_list_like(arg) and getattr(arg, 'ndim', 1) == 1: 88 return _convert_listlike(arg, box=box, unit=unit) 89 elif getattr(arg, 'ndim', 1) > 1: 90 raise TypeError('arg must be a string, timedelta, list, tuple, 1-d array, or Series') 91 92 # ...so it must be a scalar value. Return scalar. 93 return _coerce_scalar_to_timedelta_type(arg, unit=unit, box=box, errors=errors) 94 95 _unit_map = { 96 'Y' : 'Y', 97 'y' : 'Y', 98 'W' : 'W', 99 'w' : 'W', 100 'D' : 'D', 101 'd' : 'D', 102 'days' : 'D', 103 'Days' : 'D', 104 'day' : 'D', 105 'Day' : 'D', 106 'M' : 'M', 107 'H' : 'h', 108 'h' : 'h', 109 'm' : 'm', 110 'T' : 'm', 111 'S' : 's', 112 's' : 's', 113 'L' : 'ms', 114 'MS' : 'ms', 115 'ms' : 'ms', 116 'US' : 'us', 117 'us' : 'us', 118 'NS' : 'ns', 119 'ns' : 'ns', 120 } 121 122 def _validate_timedelta_unit(arg): 123 """ provide validation / translation for timedelta short units """ 124 try: 125 return _unit_map[arg] 126 except: 127 if arg is None: 128 return 'ns' 129 raise ValueError("invalid timedelta unit {0} provided".format(arg)) 130 131 def _coerce_scalar_to_timedelta_type(r, unit='ns', box=True, errors='raise'): 132 """ convert strings to timedelta; coerce to Timedelta (if box), else np.timedelta64""" 133 134 result = tslib.convert_to_timedelta(r,unit,errors) 135 if box: 136 result = tslib.Timedelta(result) 137 138 return result 139 [end of pandas/tseries/timedeltas.py] [start of pandas/tseries/tools.py] 1 from datetime import datetime, timedelta, time 2 import sys 3 4 import numpy as np 5 6 import pandas.lib as lib 7 import pandas.tslib as tslib 8 import pandas.core.common as com 9 from pandas.core.common import ABCIndexClass 10 import pandas.compat as compat 11 from pandas.util.decorators import deprecate_kwarg 12 13 try: 14 import dateutil 15 # raise exception if dateutil 2.0 install on 2.x platform 16 if (sys.version_info[0] == 2 and 17 dateutil.__version__ == '2.0'): # pragma: no cover 18 raise Exception('dateutil 2.0 incompatible with Python 2.x, you must ' 19 'install version 1.5 or 2.1+!') 20 except ImportError: # pragma: no cover 21 print('Please install python-dateutil via easy_install or some 
method!') 22 raise # otherwise a 2nd import won't show the message 23 24 _DATEUTIL_LEXER_SPLIT = None 25 try: 26 # Since these are private methods from dateutil, it is safely imported 27 # here so in case this interface changes, pandas will just fallback 28 # to not using the functionality 29 from dateutil.parser import _timelex 30 31 if hasattr(_timelex, 'split'): 32 def _lexer_split_from_str(dt_str): 33 # The StringIO(str(_)) is for dateutil 2.2 compatibility 34 return _timelex.split(compat.StringIO(str(dt_str))) 35 36 _DATEUTIL_LEXER_SPLIT = _lexer_split_from_str 37 except (ImportError, AttributeError): 38 pass 39 40 def _infer_tzinfo(start, end): 41 def _infer(a, b): 42 tz = a.tzinfo 43 if b and b.tzinfo: 44 if not (tslib.get_timezone(tz) == tslib.get_timezone(b.tzinfo)): 45 raise AssertionError('Inputs must both have the same timezone,' 46 ' {0} != {1}'.format(tz, b.tzinfo)) 47 return tz 48 tz = None 49 if start is not None: 50 tz = _infer(start, end) 51 elif end is not None: 52 tz = _infer(end, start) 53 return tz 54 55 56 def _guess_datetime_format(dt_str, dayfirst=False, 57 dt_str_parse=compat.parse_date, 58 dt_str_split=_DATEUTIL_LEXER_SPLIT): 59 """ 60 Guess the datetime format of a given datetime string. 61 62 Parameters 63 ---------- 64 dt_str : string, datetime string to guess the format of 65 dayfirst : boolean, default False 66 If True parses dates with the day first, eg 20/01/2005 67 Warning: dayfirst=True is not strict, but will prefer to parse 68 with day first (this is a known bug). 69 dt_str_parse : function, defaults to `compat.parse_date` (dateutil) 70 This function should take in a datetime string and return 71 a `datetime.datetime` guess that the datetime string represents 72 dt_str_split : function, defaults to `_DATEUTIL_LEXER_SPLIT` (dateutil) 73 This function should take in a datetime string and return 74 a list of strings, the guess of the various specific parts 75 e.g. 
'2011/12/30' -> ['2011', '/', '12', '/', '30'] 76 77 Returns 78 ------- 79 ret : datetime format string (for `strftime` or `strptime`) 80 """ 81 if dt_str_parse is None or dt_str_split is None: 82 return None 83 84 if not isinstance(dt_str, compat.string_types): 85 return None 86 87 day_attribute_and_format = (('day',), '%d', 2) 88 89 # attr name, format, padding (if any) 90 datetime_attrs_to_format = [ 91 (('year', 'month', 'day'), '%Y%m%d', 0), 92 (('year',), '%Y', 0), 93 (('month',), '%B', 0), 94 (('month',), '%b', 0), 95 (('month',), '%m', 2), 96 day_attribute_and_format, 97 (('hour',), '%H', 2), 98 (('minute',), '%M', 2), 99 (('second',), '%S', 2), 100 (('microsecond',), '%f', 6), 101 (('second', 'microsecond'), '%S.%f', 0), 102 ] 103 104 if dayfirst: 105 datetime_attrs_to_format.remove(day_attribute_and_format) 106 datetime_attrs_to_format.insert(0, day_attribute_and_format) 107 108 try: 109 parsed_datetime = dt_str_parse(dt_str, dayfirst=dayfirst) 110 except: 111 # In case the datetime can't be parsed, its format cannot be guessed 112 return None 113 114 if parsed_datetime is None: 115 return None 116 117 try: 118 tokens = dt_str_split(dt_str) 119 except: 120 # In case the datetime string can't be split, its format cannot 121 # be guessed 122 return None 123 124 format_guess = [None] * len(tokens) 125 found_attrs = set() 126 127 for attrs, attr_format, padding in datetime_attrs_to_format: 128 # If a given attribute has been placed in the format string, skip 129 # over other formats for that same underlying attribute (IE, month 130 # can be represented in multiple different ways) 131 if set(attrs) & found_attrs: 132 continue 133 134 if all(getattr(parsed_datetime, attr) is not None for attr in attrs): 135 for i, token_format in enumerate(format_guess): 136 token_filled = tokens[i].zfill(padding) 137 if (token_format is None and 138 token_filled == parsed_datetime.strftime(attr_format)): 139 format_guess[i] = attr_format 140 tokens[i] = token_filled 141 found_attrs.update(attrs) 142 break 143 144 # Only consider it a valid guess if we have a year, month and day 145 if len(set(['year', 'month', 'day']) & found_attrs) != 3: 146 return None 147 148 output_format = [] 149 for i, guess in enumerate(format_guess): 150 if guess is not None: 151 # Either fill in the format placeholder (like %Y) 152 output_format.append(guess) 153 else: 154 # Or just the token separate (IE, the dashes in "01-01-2013") 155 try: 156 # If the token is numeric, then we likely didn't parse it 157 # properly, so our guess is wrong 158 float(tokens[i]) 159 return None 160 except ValueError: 161 pass 162 163 output_format.append(tokens[i]) 164 165 guessed_format = ''.join(output_format) 166 167 # rebuild string, capturing any inferred padding 168 dt_str = ''.join(tokens) 169 if parsed_datetime.strftime(guessed_format) == dt_str: 170 return guessed_format 171 172 def _guess_datetime_format_for_array(arr, **kwargs): 173 # Try to guess the format based on the first non-NaN element 174 non_nan_elements = com.notnull(arr).nonzero()[0] 175 if len(non_nan_elements): 176 return _guess_datetime_format(arr[non_nan_elements[0]], **kwargs) 177 178 179 @deprecate_kwarg(old_arg_name='coerce', new_arg_name='errors', 180 mapping={True: 'coerce', False: 'raise'}) 181 def to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, 182 utc=None, box=True, format=None, exact=True, coerce=None, 183 unit='ns', infer_datetime_format=False): 184 """ 185 Convert argument to datetime. 
186 187 Parameters 188 ---------- 189 arg : string, datetime, list, tuple, 1-d array, or Series 190 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 191 - If 'raise', then invalid parsing will raise an exception 192 - If 'coerce', then invalid parsing will be set as NaT 193 - If 'ignore', then invalid parsing will return the input 194 dayfirst : boolean, default False 195 Specify a date parse order if `arg` is str or its list-likes. 196 If True, parses dates with the day first, eg 10/11/12 is parsed as 2012-11-10. 197 Warning: dayfirst=True is not strict, but will prefer to parse 198 with day first (this is a known bug, based on dateutil behavior). 199 yearfirst : boolean, default False 200 Specify a date parse order if `arg` is str or its list-likes. 201 - If True parses dates with the year first, eg 10/11/12 is parsed as 2010-11-12. 202 - If both dayfirst and yearfirst are True, yearfirst is preceded (same as dateutil). 203 Warning: yearfirst=True is not strict, but will prefer to parse 204 with year first (this is a known bug, based on dateutil beahavior). 205 206 .. versionadded: 0.16.1 207 208 utc : boolean, default None 209 Return UTC DatetimeIndex if True (converting any tz-aware 210 datetime.datetime objects as well). 211 box : boolean, default True 212 - If True returns a DatetimeIndex 213 - If False returns ndarray of values. 214 format : string, default None 215 strftime to parse time, eg "%d/%m/%Y", note that "%f" will parse 216 all the way up to nanoseconds. 217 exact : boolean, True by default 218 - If True, require an exact format match. 219 - If False, allow the format to match anywhere in the target string. 220 unit : unit of the arg (D,s,ms,us,ns) denote the unit in epoch 221 (e.g. a unix timestamp), which is an integer/float number. 222 infer_datetime_format : boolean, default False 223 If no `format` is given, try to infer the format based on the first 224 datetime string. Provides a large speed-up in many cases. 225 226 Returns 227 ------- 228 ret : datetime if parsing succeeded. 229 Return type depends on input: 230 231 - list-like: DatetimeIndex 232 - Series: Series of datetime64 dtype 233 - scalar: Timestamp 234 235 In case when it is not possible to return designated types (e.g. when 236 any element of input is before Timestamp.min or after Timestamp.max) 237 return will have datetime.datetime type (or correspoding array/Series). 238 239 Examples 240 -------- 241 Take separate series and convert to datetime 242 243 >>> import pandas as pd 244 >>> i = pd.date_range('20000101',periods=100) 245 >>> df = pd.DataFrame(dict(year = i.year, month = i.month, day = i.day)) 246 >>> pd.to_datetime(df.year*10000 + df.month*100 + df.day, format='%Y%m%d') 247 0 2000-01-01 248 1 2000-01-02 249 ... 250 98 2000-04-08 251 99 2000-04-09 252 Length: 100, dtype: datetime64[ns] 253 254 Or from strings 255 256 >>> df = df.astype(str) 257 >>> pd.to_datetime(df.day + df.month + df.year, format="%d%m%Y") 258 0 2000-01-01 259 1 2000-01-02 260 ... 
261 98 2000-04-08 262 99 2000-04-09 263 Length: 100, dtype: datetime64[ns] 264 265 Date that does not meet timestamp limitations: 266 267 >>> pd.to_datetime('13000101', format='%Y%m%d') 268 datetime.datetime(1300, 1, 1, 0, 0) 269 >>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce') 270 NaT 271 """ 272 return _to_datetime(arg, errors=errors, dayfirst=dayfirst, yearfirst=yearfirst, 273 utc=utc, box=box, format=format, exact=exact, 274 unit=unit, infer_datetime_format=infer_datetime_format) 275 276 277 def _to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, 278 utc=None, box=True, format=None, exact=True, 279 unit='ns', freq=None, infer_datetime_format=False): 280 """ 281 Same as to_datetime, but accept freq for 282 DatetimeIndex internal construction 283 """ 284 from pandas.core.series import Series 285 from pandas.tseries.index import DatetimeIndex 286 287 def _convert_listlike(arg, box, format, name=None): 288 289 if isinstance(arg, (list, tuple)): 290 arg = np.array(arg, dtype='O') 291 292 # these are shortcutable 293 if com.is_datetime64_ns_dtype(arg): 294 if box and not isinstance(arg, DatetimeIndex): 295 try: 296 return DatetimeIndex(arg, tz='utc' if utc else None, name=name) 297 except ValueError: 298 pass 299 300 return arg 301 302 elif com.is_datetime64tz_dtype(arg): 303 if not isinstance(arg, DatetimeIndex): 304 return DatetimeIndex(arg, tz='utc' if utc else None) 305 if utc: 306 arg = arg.tz_convert(None) 307 return arg 308 309 elif format is None and com.is_integer_dtype(arg) and unit=='ns': 310 result = arg.astype('datetime64[ns]') 311 if box: 312 return DatetimeIndex(result, tz='utc' if utc else None, name=name) 313 return result 314 elif getattr(arg, 'ndim', 1) > 1: 315 raise TypeError('arg must be a string, datetime, list, tuple, 1-d array, or Series') 316 317 arg = com._ensure_object(arg) 318 require_iso8601 = False 319 320 if infer_datetime_format and format is None: 321 format = _guess_datetime_format_for_array(arg, dayfirst=dayfirst) 322 323 if format is not None: 324 # There is a special fast-path for iso8601 formatted 325 # datetime strings, so in those cases don't use the inferred 326 # format because this path makes process slower in this 327 # special case 328 format_is_iso8601 = ( 329 ('%Y-%m-%dT%H:%M:%S.%f'.startswith(format) or 330 '%Y-%m-%d %H:%M:%S.%f'.startswith(format)) and 331 format != '%Y') 332 if format_is_iso8601: 333 require_iso8601 = not infer_datetime_format 334 format = None 335 336 try: 337 result = None 338 339 if format is not None: 340 # shortcut formatting here 341 if format == '%Y%m%d': 342 try: 343 result = _attempt_YYYYMMDD(arg, errors=errors) 344 except: 345 raise ValueError("cannot convert the input to " 346 "'%Y%m%d' date format") 347 348 # fallback 349 if result is None: 350 try: 351 result = tslib.array_strptime( 352 arg, format, exact=exact, errors=errors) 353 except tslib.OutOfBoundsDatetime: 354 if errors == 'raise': 355 raise 356 result = arg 357 except ValueError: 358 # if format was inferred, try falling back 359 # to array_to_datetime - terminate here 360 # for specified formats 361 if not infer_datetime_format: 362 if errors == 'raise': 363 raise 364 result = arg 365 366 if result is None and (format is None or infer_datetime_format): 367 result = tslib.array_to_datetime( 368 arg, 369 errors=errors, 370 utc=utc, 371 dayfirst=dayfirst, 372 yearfirst=yearfirst, 373 freq=freq, 374 unit=unit, 375 require_iso8601=require_iso8601 376 ) 377 378 if com.is_datetime64_dtype(result) and box: 379 result = 
DatetimeIndex(result, 380 tz='utc' if utc else None, 381 name=name) 382 return result 383 384 except ValueError as e: 385 try: 386 values, tz = tslib.datetime_to_datetime64(arg) 387 return DatetimeIndex._simple_new(values, name=name, tz=tz) 388 except (ValueError, TypeError): 389 raise e 390 391 if arg is None: 392 return arg 393 elif isinstance(arg, tslib.Timestamp): 394 return arg 395 elif isinstance(arg, Series): 396 values = _convert_listlike(arg._values, False, format) 397 return Series(values, index=arg.index, name=arg.name) 398 elif isinstance(arg, ABCIndexClass): 399 return _convert_listlike(arg, box, format, name=arg.name) 400 elif com.is_list_like(arg): 401 return _convert_listlike(arg, box, format) 402 403 return _convert_listlike(np.array([ arg ]), box, format)[0] 404 405 406 def _attempt_YYYYMMDD(arg, errors): 407 """ try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like, 408 arg is a passed in as an object dtype, but could really be ints/strings 409 with nan-like/or floats (e.g. with nan) 410 411 Parameters 412 ---------- 413 arg : passed value 414 errors : 'raise','ignore','coerce' 415 """ 416 417 def calc(carg): 418 # calculate the actual result 419 carg = carg.astype(object) 420 parsed = lib.try_parse_year_month_day(carg/10000, 421 carg/100 % 100, 422 carg % 100) 423 return tslib.array_to_datetime(parsed, errors=errors) 424 425 def calc_with_mask(carg, mask): 426 result = np.empty(carg.shape, dtype='M8[ns]') 427 iresult = result.view('i8') 428 iresult[~mask] = tslib.iNaT 429 result[mask] = calc(carg[mask].astype(np.float64).astype(np.int64)).\ 430 astype('M8[ns]') 431 return result 432 433 # try intlike / strings that are ints 434 try: 435 return calc(arg.astype(np.int64)) 436 except: 437 pass 438 439 # a float with actual np.nan 440 try: 441 carg = arg.astype(np.float64) 442 return calc_with_mask(carg,com.notnull(carg)) 443 except: 444 pass 445 446 # string with NaN-like 447 try: 448 mask = ~lib.ismember(arg, tslib._nat_strings) 449 return calc_with_mask(arg,mask) 450 except: 451 pass 452 453 return None 454 455 456 def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None): 457 """ 458 Try hard to parse datetime string, leveraging dateutil plus some extra 459 goodies like quarter recognition. 
460 461 Parameters 462 ---------- 463 arg : compat.string_types 464 freq : str or DateOffset, default None 465 Helps with interpreting time string if supplied 466 dayfirst : bool, default None 467 If None uses default from print_config 468 yearfirst : bool, default None 469 If None uses default from print_config 470 471 Returns 472 ------- 473 datetime, datetime/dateutil.parser._result, str 474 """ 475 from pandas.core.config import get_option 476 if not isinstance(arg, compat.string_types): 477 return arg 478 479 from pandas.tseries.offsets import DateOffset 480 if isinstance(freq, DateOffset): 481 freq = freq.rule_code 482 483 if dayfirst is None: 484 dayfirst = get_option("display.date_dayfirst") 485 if yearfirst is None: 486 yearfirst = get_option("display.date_yearfirst") 487 488 return tslib.parse_datetime_string_with_reso(arg, freq=freq, 489 dayfirst=dayfirst, 490 yearfirst=yearfirst) 491 492 493 DateParseError = tslib.DateParseError 494 normalize_date = tslib.normalize_date 495 496 497 # Fixed time formats for time parsing 498 _time_formats = ["%H:%M", "%H%M", "%I:%M%p", "%I%M%p", 499 "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", "%I%M%S%p"] 500 501 502 def _guess_time_format_for_array(arr): 503 # Try to guess the format based on the first non-NaN element 504 non_nan_elements = com.notnull(arr).nonzero()[0] 505 if len(non_nan_elements): 506 element = arr[non_nan_elements[0]] 507 for time_format in _time_formats: 508 try: 509 datetime.strptime(element, time_format) 510 return time_format 511 except ValueError: 512 pass 513 514 return None 515 516 517 def to_time(arg, format=None, infer_time_format=False, errors='raise'): 518 """ 519 Parse time strings to time objects using fixed strptime formats ("%H:%M", 520 "%H%M", "%I:%M%p", "%I%M%p", "%H:%M:%S", "%H%M%S", "%I:%M:%S%p", 521 "%I%M%S%p") 522 523 Use infer_time_format if all the strings are in the same format to speed 524 up conversion. 525 526 Parameters 527 ---------- 528 arg : string in time format, datetime.time, list, tuple, 1-d array, Series 529 format : str, default None 530 Format used to convert arg into a time object. If None, fixed formats 531 are used. 532 infer_time_format: bool, default False 533 Infer the time format based on the first non-NaN element. If all 534 strings are in the same format, this will speed up conversion. 
535 errors : {'ignore', 'raise', 'coerce'}, default 'raise' 536 - If 'raise', then invalid parsing will raise an exception 537 - If 'coerce', then invalid parsing will be set as None 538 - If 'ignore', then invalid parsing will return the input 539 540 Returns 541 ------- 542 datetime.time 543 """ 544 from pandas.core.series import Series 545 546 def _convert_listlike(arg, format): 547 548 if isinstance(arg, (list, tuple)): 549 arg = np.array(arg, dtype='O') 550 551 elif getattr(arg, 'ndim', 1) > 1: 552 raise TypeError('arg must be a string, datetime, list, tuple, ' 553 '1-d array, or Series') 554 555 arg = com._ensure_object(arg) 556 557 if infer_time_format and format is None: 558 format = _guess_time_format_for_array(arg) 559 560 times = [] 561 if format is not None: 562 for element in arg: 563 try: 564 times.append(datetime.strptime(element, format).time()) 565 except (ValueError, TypeError): 566 if errors == 'raise': 567 raise ValueError("Cannot convert %s to a time with " 568 "given format %s" % (element, format)) 569 elif errors == 'ignore': 570 return arg 571 else: 572 times.append(None) 573 else: 574 formats = _time_formats[:] 575 format_found = False 576 for element in arg: 577 time_object = None 578 for time_format in formats: 579 try: 580 time_object = datetime.strptime(element, 581 time_format).time() 582 if not format_found: 583 # Put the found format in front 584 fmt = formats.pop(formats.index(time_format)) 585 formats.insert(0, fmt) 586 format_found = True 587 break 588 except (ValueError, TypeError): 589 continue 590 591 if time_object is not None: 592 times.append(time_object) 593 elif errors == 'raise': 594 raise ValueError("Cannot convert arg {arg} to " 595 "a time".format(arg=arg)) 596 elif errors == 'ignore': 597 return arg 598 else: 599 times.append(None) 600 601 return times 602 603 if arg is None: 604 return arg 605 elif isinstance(arg, time): 606 return arg 607 elif isinstance(arg, Series): 608 values = _convert_listlike(arg._values, format) 609 return Series(values, index=arg.index, name=arg.name) 610 elif isinstance(arg, ABCIndexClass): 611 return _convert_listlike(arg, format) 612 elif com.is_list_like(arg): 613 return _convert_listlike(arg, format) 614 615 return _convert_listlike(np.array([arg]), format)[0] 616 617 618 def format(dt): 619 """Returns date in YYYYMMDD format.""" 620 return dt.strftime('%Y%m%d') 621 622 OLE_TIME_ZERO = datetime(1899, 12, 30, 0, 0, 0) 623 624 625 def ole2datetime(oledt): 626 """function for converting excel date to normal date format""" 627 val = float(oledt) 628 629 # Excel has a bug where it thinks the date 2/29/1900 exists 630 # we just reject any date before 3/1/1900. 631 if val < 61: 632 raise ValueError("Value is outside of acceptable range: %s " % val) 633 634 return OLE_TIME_ZERO + timedelta(days=val) 635 [end of pandas/tseries/tools.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
cbd7f1892f097242d3631b791c78a6518e154906
ERR: Maximum recursion depth exceeded in comparision when comparing a TimeDelta to numpy object array of TimeDelta See http://stackoverflow.com/questions/34251068/runtimeerror-from-scipy-stats-mode-on-array-of-timedelta-maximum-recursion-dept Python 3.5.1, pandas 0.17.1, numpy 0.10.1: ``` Python 3.5.1 |Continuum Analytics, Inc.| (default, Dec 7 2015, 11:24:55) Type "copyright", "credits" or "license" for more information. IPython 4.0.1 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object', use 'object??' for extra details. In [1]: import numpy as np In [2]: np.__version__ Out[2]: '1.10.1' In [3]: import pandas as pd In [4]: pd.__version__ Out[4]: '0.17.1' ``` Create a numpy array of `TimeDelta` objects, and do a comparison of the array to a `TimeDelta` instance: ``` In [5]: from pandas import Timedelta In [6]: periods = [Timedelta('0 days 01:00:00'), Timedelta('0 days 01:00:00')] In [7]: p = np.array(periods) In [8]: periods[0] > p --------------------------------------------------------------------------- RecursionError Traceback (most recent call last) <ipython-input-8-1c05a376ecc2> in <module>() ----> 1 periods[0] > p pandas/tslib.pyx in pandas.tslib._Timedelta.__richcmp__ (pandas/tslib.c:38155)() <SNIP> pandas/tslib.pyx in pandas.tslib._Timedelta.__richcmp__ (pandas/tslib.c:38155)() pandas/tslib.pyx in pandas.tslib._Timedelta.__richcmp__ (pandas/tslib.c:38155)() RecursionError: maximum recursion depth exceeded in comparison In [9]: ```
ok. you should use a `TimedeltaIndex`; a numpy array of `Timedeltas` is not very useful. In any event it looks like a bug in the comparison ops. pull requests welcome.
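A minimal sketch of the workaround suggested above (assuming only `pandas` and `numpy` are available; the `periods` values mirror the report): holding the values in a `TimedeltaIndex` gives a vectorized, well-defined comparison instead of the object-array round trip.

```python
import pandas as pd

periods = [pd.Timedelta('0 days 01:00:00'), pd.Timedelta('0 days 02:00:00')]

# Keep the values in a TimedeltaIndex rather than an object-dtype ndarray;
# scalar-vs-index comparisons are vectorized and do not recurse.
tdi = pd.TimedeltaIndex(periods)
print(periods[0] > tdi)  # -> [False False], elementwise, no RecursionError
```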
2016-01-04T23:09:14Z
<patch> diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -476,6 +476,7 @@ Bug Fixes - Bug in ``.style.bar`` may not rendered properly using specific browser (:issue:`11678`) +- Bug in rich comparison of ``Timedelta`` with a ``numpy.array`` of ``Timedelta``s that caused an infinite recursion (:issue:`11835`) - Bug in ``df.replace`` while replacing value in mixed dtype ``Dataframe`` (:issue:`11698`) diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx --- a/pandas/tslib.pyx +++ b/pandas/tslib.pyx @@ -2184,6 +2184,8 @@ cdef class _Timedelta(timedelta): raise TypeError('Cannot compare type %r with type %r' % (type(self).__name__, type(other).__name__)) + if isinstance(other, np.ndarray): + return PyObject_RichCompare(np.array([self]), other, op) return PyObject_RichCompare(other, self, _reverse_ops[op]) else: if op == Py_EQ: </patch>
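Conceptually, the added `isinstance(other, np.ndarray)` guard turns the scalar-versus-array case into an array-versus-array comparison that numpy can broadcast. A rough sketch of the behaviour this restores, reusing the `periods`/`p` names from the report:

```python
import numpy as np
import pandas as pd

periods = [pd.Timedelta('0 days 01:00:00'), pd.Timedelta('0 days 01:00:00')]
p = np.array(periods)  # object-dtype array of Timedelta

# With the guard, Timedelta wraps itself as np.array([self]) and lets numpy
# broadcast the comparison, so this returns elementwise booleans instead of
# recursing through __richcmp__.
print(periods[0] > p)  # -> [False False]
```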
[]
[]
apache__airflow-19418
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> A dag's schedule interval can no longer be an instance of dateutils.relativedelta ### Apache Airflow version 2.2.1 (latest released) ### Operating System debian ### Versions of Apache Airflow Providers apache-airflow==2.2.1 apache-airflow-providers-amazon==2.3.0 apache-airflow-providers-ftp==2.0.1 apache-airflow-providers-google==6.0.0 apache-airflow-providers-http==2.0.1 apache-airflow-providers-imap==2.0.1 apache-airflow-providers-jira==2.0.1 apache-airflow-providers-mysql==2.1.1 apache-airflow-providers-postgres==2.3.0 apache-airflow-providers-redis==2.0.1 apache-airflow-providers-sqlite==2.0.1 apache-airflow-providers-ssh==2.2.0 ### Deployment Other Docker-based deployment ### Deployment details Dask executor, custom-built Docker images, postgres 12.7 backend ### What happened I upgraded Airflow from 2.0.2 to 2.2.1, and some DAGs I have that used dateutils.relativedelta objects as schedule intervals stopped running ### What you expected to happen The [code](https://github.com/apache/airflow/blob/2.2.1/airflow/models/dag.py#L101) for the schedule_interval parameter of the DAG constructor indicates that a relativedelta object is allowed, so I expected the DAG to be correctly parsed and scheduled. ### How to reproduce Create a DAG that has a relativedelta object as its schedule interval, and it will not appear in the UI or be scheduled. ### Anything else Here is the code that causes the failure within the PR where it was introduced: [link](https://github.com/apache/airflow/pull/17414/files#diff-ed37fe966e8247e0bfd8aa28bc2698febeec3807df5f5a00545ca80744f8aff6R267) Here are the logs for the exception, found in the scheduler logs for the file that contains the offending DAG <details><pre> ERROR | {dagbag.py:528} - 'relativedelta' object has no attribute 'total_seconds' Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 515, in collect_dags found_dags = self.process_file(filepath, only_if_updated=only_if_updated, safe_mode=safe_mode) File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 298, in process_file found_dags = self._process_modules(filepath, mods, file_last_changed_on_disk) File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 401, in _process_modules dag.timetable.validate() File "/usr/local/lib/python3.9/site-packages/airflow/timetables/interval.py", line 274, in validate if self._delta.total_seconds() <= 0: AttributeError: 'relativedelta' object has no attribute 'total_seconds' </pre></details> ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md) </issue> <code> [start of README.md] 1 <!-- 2 Licensed to the Apache Software Foundation (ASF) under one 3 or more contributor license agreements. See the NOTICE file 4 distributed with this work for additional information 5 regarding copyright ownership. The ASF licenses this file 6 to you under the Apache License, Version 2.0 (the 7 "License"); you may not use this file except in compliance 8 with the License. 
You may obtain a copy of the License at 9 10 http://www.apache.org/licenses/LICENSE-2.0 11 12 Unless required by applicable law or agreed to in writing, 13 software distributed under the License is distributed on an 14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 KIND, either express or implied. See the License for the 16 specific language governing permissions and limitations 17 under the License. 18 --> 19 20 # Apache Airflow 21 22 [![PyPI version](https://badge.fury.io/py/apache-airflow.svg)](https://badge.fury.io/py/apache-airflow) 23 [![GitHub Build](https://github.com/apache/airflow/workflows/CI%20Build/badge.svg)](https://github.com/apache/airflow/actions) 24 [![Coverage Status](https://img.shields.io/codecov/c/github/apache/airflow/main.svg)](https://codecov.io/github/apache/airflow?branch=main) 25 [![License](https://img.shields.io/:license-Apache%202-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0.txt) 26 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/apache-airflow.svg)](https://pypi.org/project/apache-airflow/) 27 [![Docker Pulls](https://img.shields.io/docker/pulls/apache/airflow.svg)](https://hub.docker.com/r/apache/airflow) 28 [![Docker Stars](https://img.shields.io/docker/stars/apache/airflow.svg)](https://hub.docker.com/r/apache/airflow) 29 [![PyPI - Downloads](https://img.shields.io/pypi/dm/apache-airflow)](https://pypi.org/project/apache-airflow/) 30 [![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/apache-airflow)](https://artifacthub.io/packages/search?repo=apache-airflow) 31 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) 32 [![Twitter Follow](https://img.shields.io/twitter/follow/ApacheAirflow.svg?style=social&label=Follow)](https://twitter.com/ApacheAirflow) 33 [![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://s.apache.org/airflow-slack) 34 35 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. 36 37 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. 38 39 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. 
40 41 <!-- START doctoc generated TOC please keep comment here to allow auto update --> 42 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> 43 **Table of contents** 44 45 - [Project Focus](#project-focus) 46 - [Principles](#principles) 47 - [Requirements](#requirements) 48 - [Getting started](#getting-started) 49 - [Installing from PyPI](#installing-from-pypi) 50 - [Official source code](#official-source-code) 51 - [Convenience packages](#convenience-packages) 52 - [User Interface](#user-interface) 53 - [Semantic versioning](#semantic-versioning) 54 - [Version Life Cycle](#version-life-cycle) 55 - [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions) 56 - [Contributing](#contributing) 57 - [Who uses Apache Airflow?](#who-uses-apache-airflow) 58 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow) 59 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation) 60 - [Airflow merchandise](#airflow-merchandise) 61 - [Links](#links) 62 - [Sponsors](#sponsors) 63 64 <!-- END doctoc generated TOC please keep comment here to allow auto update --> 65 66 ## Project Focus 67 68 Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/). 69 70 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [Xcom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#xcoms)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work. 71 72 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches. 73 74 ## Principles 75 76 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically. 77 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment. 78 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine. 79 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. 
80 81 ## Requirements 82 83 Apache Airflow is tested with: 84 85 | | Main version (dev) | Stable version (2.2.1) | 86 | -------------------- | ------------------------- | ------------------------ | 87 | Python | 3.6, 3.7, 3.8, 3.9 | 3.6, 3.7, 3.8, 3.9 | 88 | Kubernetes | 1.18, 1.19, 1.20 | 1.18, 1.19, 1.20 | 89 | PostgreSQL | 9.6, 10, 11, 12, 13 | 9.6, 10, 11, 12, 13 | 90 | MySQL | 5.7, 8 | 5.7, 8 | 91 | SQLite | 3.15.0+ | 3.15.0+ | 92 | MSSQL(Experimental) | 2017, 2019 | | 93 94 **Note**: MySQL 5.x versions are unable to or have limitations with 95 running multiple schedulers -- please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html). 96 MariaDB is not tested/recommended. 97 98 **Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend 99 using the latest stable version of SQLite for local development. 100 101 **Note**: Python v3.10 is not supported yet. For details, see [#19059](https://github.com/apache/airflow/issues/19059). 102 103 ## Getting started 104 105 Visit the official Airflow website documentation (latest **stable** release) for help with 106 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html), 107 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start/index.html), or walking 108 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html). 109 110 > Note: If you're looking for documentation for the main branch (latest development branch): you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/). 111 112 For more information on Airflow Improvement Proposals (AIPs), visit 113 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals). 114 115 Documentation for dependent projects like provider packages, Docker image, Helm Chart, you'll find it in [the documentation index](https://airflow.apache.org/docs/). 116 117 ## Installing from PyPI 118 119 We publish Apache Airflow as `apache-airflow` package in PyPI. Installing it however might be sometimes tricky 120 because Airflow is a bit of both a library and application. Libraries usually keep their dependencies open, and 121 applications usually pin them, but we should do neither and both simultaneously. We decided to keep 122 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries 123 if needed. This means that `pip install apache-airflow` will not work from time to time or will 124 produce unusable Airflow installation. 125 126 To have repeatable installation, however, we keep a set of "known-to-be-working" constraint 127 files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working" 128 constraints files separately per major/minor Python version. 129 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify 130 correct Airflow tag/version/branch and Python versions in the URL. 131 132 133 1. Installing just Airflow: 134 135 > Note: Only `pip` installation is currently officially supported. 136 137 While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or 138 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as 139 `pip` - especially when it comes to constraint vs. requirements management. 140 Installing via `Poetry` or `pip-tools` is not currently supported. 
141 142 If you wish to install Airflow using those tools, you should use the constraint files and convert 143 them to the appropriate format and workflow that your tool requires. 144 145 146 ```bash 147 pip install 'apache-airflow==2.2.1' \ 148 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.1/constraints-3.7.txt" 149 ``` 150 151 2. Installing with extras (i.e., postgres, google) 152 153 ```bash 154 pip install 'apache-airflow[postgres,google]==2.2.1' \ 155 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.1/constraints-3.7.txt" 156 ``` 157 158 For information on installing provider packages, check 159 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html). 160 161 ## Official source code 162 163 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project, 164 and our official source code releases: 165 166 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html) 167 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow) 168 - Are cryptographically signed by the release manager 169 - Are officially voted on by the PMC members during the 170 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval) 171 172 Following the ASF rules, the source packages released must be sufficient for a user to build and test the 173 release provided they have access to the appropriate platform and tools. 174 175 ## Convenience packages 176 177 There are other ways of installing and using Airflow. Those are "convenience" methods - they are 178 not "official releases" as stated by the `ASF Release Policy`, but they can be used by the users 179 who do not want to build the software themselves. 180 181 Those are - in the order of most common ways people install Airflow: 182 183 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool 184 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via 185 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can 186 read more about using, customising, and extending the images in the 187 [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and 188 learn details on the internals in the [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst) document. 189 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that 190 were used to generate official source packages via git 191 192 All those artifacts are not official releases, but they are prepared using officially released sources. 193 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such 194 following the ASF Policy. 195 196 ## User Interface 197 198 - **DAGs**: Overview of all DAGs in your environment. 199 200 ![DAGs](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/dags.png) 201 202 - **Tree**: Tree representation of a DAG that spans across time. 203 204 ![Tree](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/tree.png) 205 206 - **Graph**: Visualization of a DAG's dependencies and their current status for a specific run. 207 208 ![Graph](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/graph.png) 209 210 - **Task Duration**: Total time spent on different tasks over time. 
211 212 ![Task Duration](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/duration.png) 213 214 - **Gantt**: Duration and overlap of a DAG. 215 216 ![Gantt](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/gantt.png) 217 218 - **Code**: Quick way to view source code of a DAG. 219 220 ![Code](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/code.png) 221 222 ## Semantic versioning 223 224 As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released. 225 226 There are few specific rules that we agreed to that define details of versioning of the different 227 packages: 228 229 * **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers). 230 Changing limits for versions of Airflow dependencies is not a breaking change on its own. 231 * **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only. 232 SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version. 233 For example, `google 4.1.0` and `amazon 3.0.3` providers can happily be installed 234 with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages, 235 they are present in providers as `install_requires` limitations. We aim to keep backwards 236 compatibility of providers with all previously released Airflow 2 versions but 237 there will sometimes be breaking changes that might make some, or all 238 providers, have minimum Airflow version specified. Change of that minimum supported Airflow version 239 is a breaking change for provider because installing the new provider might automatically 240 upgrade Airflow (which might be an undesired side effect of upgrading provider). 241 * **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR 242 versions for the chart are independent from the Airflow version. We aim to keep backwards 243 compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might 244 only work starting from specific Airflow releases. We might however limit the Helm 245 Chart to depend on minimal Airflow version. 246 * **Airflow API clients**: SemVer MAJOR and MINOR versions follow MAJOR and MINOR versions of Airflow. 247 The first MAJOR or MINOR X.Y.0 release of Airflow should always be followed by X.Y.0 release of 248 all clients. The clients then can release their own PATCH releases with bugfixes, 249 independently of Airflow PATCH releases. 250 251 ## Version Life Cycle 252 253 Apache Airflow version life cycle: 254 255 | Version | Current Patch/Minor | State | First Release | Limited Support | EOL/Terminated | 256 |---------|---------------------|-----------|---------------|-----------------|----------------| 257 | 2 | 2.2.1 | Supported | Dec 17, 2020 | TBD | TBD | 258 | 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 | 259 | 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 | 260 | 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 | 261 | 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 | 262 263 Limited support versions will be supported with security and critical bug fix only. 264 EOL versions will not get any fixes nor support. 265 We always recommend that all users run the latest available minor release for whatever major version is in use. 
266 We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date. 267 268 ## Support for Python and Kubernetes versions 269 270 As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support. 271 They are based on the official release schedule of Python and Kubernetes, nicely summarized in the 272 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and 273 [Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/). 274 275 1. We drop support for Python and Kubernetes versions when they reach EOL. We drop support for those 276 EOL versions in main right after EOL date, and it is effectively removed when we release the 277 first new MINOR (Or MAJOR if there is no new MINOR version) of Airflow 278 For example, for Python 3.6 it means that we drop support in main right after 23.12.2021, and the first 279 MAJOR or MINOR version of Airflow released after will not have it. 280 281 2. The "oldest" supported version of Python/Kubernetes is the default one until we decide to switch to 282 later version. "Default" is only meaningful in terms of "smoke tests" in CI PRs, which are run using this 283 default version and the default reference image available. Currently `apache/airflow:latest` 284 and `apache/airflow:2.2.1` images are Python 3.7 images as we are preparing for 23.12.2021 when will 285 Python 3.6 reaches end of life. 286 287 3. We support a new version of Python/Kubernetes in main after they are officially released, as soon as we 288 make them work in our CI pipeline (which might not be immediate due to dependencies catching up with 289 new versions of Python mostly) we release new images/support in Airflow based on the working CI setup. 290 291 ### Additional notes on Python version requirements 292 293 * Previous versions [require](https://github.com/apache/airflow/issues/8162) at least Python 3.5.3 294 when using Python 3. 295 296 ## Contributing 297 298 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst). 299 300 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst). 301 302 ## Who uses Apache Airflow? 303 304 More than 400 organizations are using Apache Airflow 305 [in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md). 306 307 ## Who Maintains Apache Airflow? 308 309 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors), 310 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow) 311 are responsible for reviewing and merging PRs as well as steering conversations around new feature requests. 312 If you would like to become a maintainer, please review the Apache Airflow 313 [committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer). 314 315 ## Can I use the Apache Airflow logo in my presentation? 316 317 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html). 
318 319 ## Airflow merchandise 320 321 If you would love to have Apache Airflow stickers, t-shirt, etc. then check out 322 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5). 323 324 ## Links 325 326 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/) 327 - [Chat](https://s.apache.org/airflow-slack) 328 329 ## Sponsors 330 331 The CI infrastructure for Apache Airflow has been sponsored by: 332 333 <!-- Ordered by most recently "funded" --> 334 335 <a href="https://astronomer.io"><img src="https://assets2.astronomer.io/logos/logoForLIGHTbackground.png" alt="astronomer.io" width="250px"></a> 336 <a href="https://aws.amazon.com/opensource/"><img src="docs/integration-logos/aws/[email protected]" alt="AWS OpenSource" width="130px"></a> 337 [end of README.md] [start of airflow/settings.py] 1 # 2 # Licensed to the Apache Software Foundation (ASF) under one 3 # or more contributor license agreements. See the NOTICE file 4 # distributed with this work for additional information 5 # regarding copyright ownership. The ASF licenses this file 6 # to you under the Apache License, Version 2.0 (the 7 # "License"); you may not use this file except in compliance 8 # with the License. You may obtain a copy of the License at 9 # 10 # http://www.apache.org/licenses/LICENSE-2.0 11 # 12 # Unless required by applicable law or agreed to in writing, 13 # software distributed under the License is distributed on an 14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 # KIND, either express or implied. See the License for the 16 # specific language governing permissions and limitations 17 # under the License. 18 import atexit 19 import functools 20 import json 21 import logging 22 import os 23 import sys 24 import warnings 25 from typing import Optional 26 27 import pendulum 28 import sqlalchemy 29 from sqlalchemy import create_engine, exc 30 from sqlalchemy.engine import Engine 31 from sqlalchemy.orm import scoped_session, sessionmaker 32 from sqlalchemy.orm.session import Session as SASession 33 from sqlalchemy.pool import NullPool 34 35 from airflow.configuration import AIRFLOW_HOME, WEBSERVER_CONFIG, conf # NOQA F401 36 from airflow.executors import executor_constants 37 from airflow.logging_config import configure_logging 38 from airflow.utils.orm_event_handlers import setup_event_handlers 39 40 log = logging.getLogger(__name__) 41 42 43 TIMEZONE = pendulum.tz.timezone('UTC') 44 try: 45 tz = conf.get("core", "default_timezone") 46 if tz == "system": 47 TIMEZONE = pendulum.tz.local_timezone() 48 else: 49 TIMEZONE = pendulum.tz.timezone(tz) 50 except Exception: 51 pass 52 log.info("Configured default timezone %s", TIMEZONE) 53 54 55 HEADER = '\n'.join( 56 [ 57 r' ____________ _____________', 58 r' ____ |__( )_________ __/__ /________ __', 59 r'____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /', 60 r'___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /', 61 r' _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/', 62 ] 63 ) 64 65 LOGGING_LEVEL = logging.INFO 66 67 # the prefix to append to gunicorn worker processes after init 68 GUNICORN_WORKER_READY_PREFIX = "[ready] " 69 70 LOG_FORMAT = conf.get('logging', 'log_format') 71 SIMPLE_LOG_FORMAT = conf.get('logging', 'simple_log_format') 72 73 SQL_ALCHEMY_CONN: Optional[str] = None 74 PLUGINS_FOLDER: Optional[str] = None 75 LOGGING_CLASS_PATH: Optional[str] = None 76 DAGS_FOLDER: str = os.path.expanduser(conf.get('core', 'DAGS_FOLDER')) 77 78 engine: Optional[Engine] = None 79 Session: Optional[SASession] = None 80 81 # 
The JSON library to use for DAG Serialization and De-Serialization 82 json = json 83 84 # Dictionary containing State and colors associated to each state to 85 # display on the Webserver 86 STATE_COLORS = { 87 "queued": "gray", 88 "running": "lime", 89 "success": "green", 90 "failed": "red", 91 "up_for_retry": "gold", 92 "up_for_reschedule": "turquoise", 93 "upstream_failed": "orange", 94 "skipped": "pink", 95 "scheduled": "tan", 96 "deferred": "mediumpurple", 97 } 98 99 100 @functools.lru_cache(maxsize=None) 101 def _get_rich_console(file): 102 # Delay imports until we need it 103 import rich.console 104 105 return rich.console.Console(file=file) 106 107 108 def custom_show_warning(message, category, filename, lineno, file=None, line=None): 109 """Custom function to print rich and visible warnings""" 110 # Delay imports until we need it 111 from rich.markup import escape 112 113 msg = f"[bold]{line}" if line else f"[bold][yellow]{filename}:{lineno}" 114 msg += f" {category.__name__}[/bold]: {escape(str(message))}[/yellow]" 115 write_console = _get_rich_console(file or sys.stderr) 116 write_console.print(msg, soft_wrap=True) 117 118 119 warnings.showwarning = custom_show_warning 120 121 122 def task_policy(task) -> None: 123 """ 124 This policy setting allows altering tasks after they are loaded in 125 the DagBag. It allows administrator to rewire some task's parameters. 126 Alternatively you can raise ``AirflowClusterPolicyViolation`` exception 127 to stop DAG from being executed. 128 129 To define policy, add a ``airflow_local_settings`` module 130 to your PYTHONPATH that defines this ``task_policy`` function. 131 132 Here are a few examples of how this can be useful: 133 134 * You could enforce a specific queue (say the ``spark`` queue) 135 for tasks using the ``SparkOperator`` to make sure that these 136 tasks get wired to the right workers 137 * You could enforce a task timeout policy, making sure that no tasks run 138 for more than 48 hours 139 140 :param task: task to be mutated 141 :type task: airflow.models.baseoperator.BaseOperator 142 """ 143 144 145 def dag_policy(dag) -> None: 146 """ 147 This policy setting allows altering DAGs after they are loaded in 148 the DagBag. It allows administrator to rewire some DAG's parameters. 149 Alternatively you can raise ``AirflowClusterPolicyViolation`` exception 150 to stop DAG from being executed. 151 152 To define policy, add a ``airflow_local_settings`` module 153 to your PYTHONPATH that defines this ``dag_policy`` function. 154 155 Here are a few examples of how this can be useful: 156 157 * You could enforce default user for DAGs 158 * Check if every DAG has configured tags 159 160 :param dag: dag to be mutated 161 :type dag: airflow.models.dag.DAG 162 """ 163 164 165 def task_instance_mutation_hook(task_instance): 166 """ 167 This setting allows altering task instances before they are queued by 168 the Airflow scheduler. 169 170 To define task_instance_mutation_hook, add a ``airflow_local_settings`` module 171 to your PYTHONPATH that defines this ``task_instance_mutation_hook`` function. 172 173 This could be used, for instance, to modify the task instance during retries. 174 175 :param task_instance: task instance to be mutated 176 :type task_instance: airflow.models.taskinstance.TaskInstance 177 """ 178 179 180 def pod_mutation_hook(pod): 181 """ 182 This setting allows altering ``kubernetes.client.models.V1Pod`` object 183 before they are passed to the Kubernetes client by the ``PodLauncher`` 184 for scheduling. 
185 186 To define a pod mutation hook, add a ``airflow_local_settings`` module 187 to your PYTHONPATH that defines this ``pod_mutation_hook`` function. 188 It receives a ``Pod`` object and can alter it where needed. 189 190 This could be used, for instance, to add sidecar or init containers 191 to every worker pod launched by KubernetesExecutor or KubernetesPodOperator. 192 """ 193 194 195 def configure_vars(): 196 """Configure Global Variables from airflow.cfg""" 197 global SQL_ALCHEMY_CONN 198 global DAGS_FOLDER 199 global PLUGINS_FOLDER 200 SQL_ALCHEMY_CONN = conf.get('core', 'SQL_ALCHEMY_CONN') 201 DAGS_FOLDER = os.path.expanduser(conf.get('core', 'DAGS_FOLDER')) 202 203 PLUGINS_FOLDER = conf.get('core', 'plugins_folder', fallback=os.path.join(AIRFLOW_HOME, 'plugins')) 204 205 206 def configure_orm(disable_connection_pool=False): 207 """Configure ORM using SQLAlchemy""" 208 from airflow.utils.log.secrets_masker import mask_secret 209 210 log.debug("Setting up DB connection pool (PID %s)", os.getpid()) 211 global engine 212 global Session 213 engine_args = prepare_engine_args(disable_connection_pool) 214 215 # Allow the user to specify an encoding for their DB otherwise default 216 # to utf-8 so jobs & users with non-latin1 characters can still use us. 217 engine_args['encoding'] = conf.get('core', 'SQL_ENGINE_ENCODING', fallback='utf-8') 218 219 if conf.has_option('core', 'sql_alchemy_connect_args'): 220 connect_args = conf.getimport('core', 'sql_alchemy_connect_args') 221 else: 222 connect_args = {} 223 224 engine = create_engine(SQL_ALCHEMY_CONN, connect_args=connect_args, **engine_args) 225 226 mask_secret(engine.url.password) 227 228 setup_event_handlers(engine) 229 230 Session = scoped_session( 231 sessionmaker( 232 autocommit=False, 233 autoflush=False, 234 bind=engine, 235 expire_on_commit=False, 236 ) 237 ) 238 if engine.dialect.name == 'mssql': 239 session = Session() 240 try: 241 result = session.execute( 242 sqlalchemy.text( 243 'SELECT is_read_committed_snapshot_on FROM sys.databases WHERE name=:database_name' 244 ), 245 params={"database_name": engine.url.database}, 246 ) 247 data = result.fetchone()[0] 248 if data != 1: 249 log.critical("MSSQL database MUST have READ_COMMITTED_SNAPSHOT enabled.") 250 log.critical(f"The database {engine.url.database} has it disabled.") 251 log.critical("This will cause random deadlocks, Refusing to start.") 252 log.critical( 253 "See https://airflow.apache.org/docs/apache-airflow/stable/howto/" 254 "set-up-database.html#setting-up-a-mssql-database" 255 ) 256 raise Exception("MSSQL database MUST have READ_COMMITTED_SNAPSHOT enabled.") 257 finally: 258 session.close() 259 260 261 def prepare_engine_args(disable_connection_pool=False): 262 """Prepare SQLAlchemy engine args""" 263 engine_args = {} 264 pool_connections = conf.getboolean('core', 'SQL_ALCHEMY_POOL_ENABLED') 265 if disable_connection_pool or not pool_connections: 266 engine_args['poolclass'] = NullPool 267 log.debug("settings.prepare_engine_args(): Using NullPool") 268 elif not SQL_ALCHEMY_CONN.startswith('sqlite'): 269 # Pool size engine args not supported by sqlite. 270 # If no config value is defined for the pool size, select a reasonable value. 271 # 0 means no limit, which could lead to exceeding the Database connection limit. 272 pool_size = conf.getint('core', 'SQL_ALCHEMY_POOL_SIZE', fallback=5) 273 274 # The maximum overflow size of the pool. 
275 # When the number of checked-out connections reaches the size set in pool_size, 276 # additional connections will be returned up to this limit. 277 # When those additional connections are returned to the pool, they are disconnected and discarded. 278 # It follows then that the total number of simultaneous connections 279 # the pool will allow is pool_size + max_overflow, 280 # and the total number of “sleeping” connections the pool will allow is pool_size. 281 # max_overflow can be set to -1 to indicate no overflow limit; 282 # no limit will be placed on the total number 283 # of concurrent connections. Defaults to 10. 284 max_overflow = conf.getint('core', 'SQL_ALCHEMY_MAX_OVERFLOW', fallback=10) 285 286 # The DB server already has a value for wait_timeout (number of seconds after 287 # which an idle sleeping connection should be killed). Since other DBs may 288 # co-exist on the same server, SQLAlchemy should set its 289 # pool_recycle to an equal or smaller value. 290 pool_recycle = conf.getint('core', 'SQL_ALCHEMY_POOL_RECYCLE', fallback=1800) 291 292 # Check connection at the start of each connection pool checkout. 293 # Typically, this is a simple statement like “SELECT 1”, but may also make use 294 # of some DBAPI-specific method to test the connection for liveness. 295 # More information here: 296 # https://docs.sqlalchemy.org/en/13/core/pooling.html#disconnect-handling-pessimistic 297 pool_pre_ping = conf.getboolean('core', 'SQL_ALCHEMY_POOL_PRE_PING', fallback=True) 298 299 log.debug( 300 "settings.prepare_engine_args(): Using pool settings. pool_size=%d, max_overflow=%d, " 301 "pool_recycle=%d, pid=%d", 302 pool_size, 303 max_overflow, 304 pool_recycle, 305 os.getpid(), 306 ) 307 engine_args['pool_size'] = pool_size 308 engine_args['pool_recycle'] = pool_recycle 309 engine_args['pool_pre_ping'] = pool_pre_ping 310 engine_args['max_overflow'] = max_overflow 311 312 # The default isolation level for MySQL (REPEATABLE READ) can introduce inconsistencies when 313 # running multiple schedulers, as repeated queries on the same session may read from stale snapshots. 314 # 'READ COMMITTED' is the default value for PostgreSQL. 315 # More information here: 316 # https://dev.mysql.com/doc/refman/8.0/en/innodb-transaction-isolation-levels.html" 317 318 # Similarly MSSQL default isolation level should be set to READ COMMITTED. 319 # We also make sure that READ_COMMITTED_SNAPSHOT option is on, in order to avoid deadlocks when 320 # Select queries are running. This is by default enforced during init/upgrade. 
More information: 321 # https://docs.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql 322 323 if SQL_ALCHEMY_CONN.startswith(('mysql', 'mssql')): 324 engine_args['isolation_level'] = 'READ COMMITTED' 325 326 return engine_args 327 328 329 def dispose_orm(): 330 """Properly close pooled database connections""" 331 log.debug("Disposing DB connection pool (PID %s)", os.getpid()) 332 global engine 333 global Session 334 335 if Session: 336 Session.remove() 337 Session = None 338 if engine: 339 engine.dispose() 340 engine = None 341 342 343 def configure_adapters(): 344 """Register Adapters and DB Converters""" 345 from pendulum import DateTime as Pendulum 346 347 if SQL_ALCHEMY_CONN.startswith('sqlite'): 348 from sqlite3 import register_adapter 349 350 register_adapter(Pendulum, lambda val: val.isoformat(' ')) 351 352 if SQL_ALCHEMY_CONN.startswith('mysql'): 353 try: 354 import MySQLdb.converters 355 356 MySQLdb.converters.conversions[Pendulum] = MySQLdb.converters.DateTime2literal 357 except ImportError: 358 pass 359 try: 360 import pymysql.converters 361 362 pymysql.converters.conversions[Pendulum] = pymysql.converters.escape_datetime 363 except ImportError: 364 pass 365 366 367 def validate_session(): 368 """Validate ORM Session""" 369 worker_precheck = conf.getboolean('celery', 'worker_precheck', fallback=False) 370 if not worker_precheck: 371 return True 372 else: 373 check_session = sessionmaker(bind=engine) 374 session = check_session() 375 try: 376 session.execute("select 1") 377 conn_status = True 378 except exc.DBAPIError as err: 379 log.error(err) 380 conn_status = False 381 session.close() 382 return conn_status 383 384 385 def configure_action_logging(): 386 """ 387 Any additional configuration (register callback) for airflow.utils.action_loggers 388 module 389 :rtype: None 390 """ 391 392 393 def prepare_syspath(): 394 """Ensures that certain subfolders of AIRFLOW_HOME are on the classpath""" 395 if DAGS_FOLDER not in sys.path: 396 sys.path.append(DAGS_FOLDER) 397 398 # Add ./config/ for loading custom log parsers etc, or 399 # airflow_local_settings etc. 400 config_path = os.path.join(AIRFLOW_HOME, 'config') 401 if config_path not in sys.path: 402 sys.path.append(config_path) 403 404 if PLUGINS_FOLDER not in sys.path: 405 sys.path.append(PLUGINS_FOLDER) 406 407 408 def get_session_lifetime_config(): 409 """Gets session timeout configs and handles outdated configs gracefully.""" 410 session_lifetime_minutes = conf.get('webserver', 'session_lifetime_minutes', fallback=None) 411 session_lifetime_days = conf.get('webserver', 'session_lifetime_days', fallback=None) 412 uses_deprecated_lifetime_configs = session_lifetime_days or conf.get( 413 'webserver', 'force_log_out_after', fallback=None 414 ) 415 416 minutes_per_day = 24 * 60 417 default_lifetime_minutes = '43200' 418 if uses_deprecated_lifetime_configs and session_lifetime_minutes == default_lifetime_minutes: 419 warnings.warn( 420 '`session_lifetime_days` option from `[webserver]` section has been ' 421 'renamed to `session_lifetime_minutes`. The new option allows to configure ' 422 'session lifetime in minutes. The `force_log_out_after` option has been removed ' 423 'from `[webserver]` section. 
Please update your configuration.', 424 category=DeprecationWarning, 425 ) 426 if session_lifetime_days: 427 session_lifetime_minutes = minutes_per_day * int(session_lifetime_days) 428 429 if not session_lifetime_minutes: 430 session_lifetime_days = 30 431 session_lifetime_minutes = minutes_per_day * session_lifetime_days 432 433 logging.debug('User session lifetime is set to %s minutes.', session_lifetime_minutes) 434 435 return int(session_lifetime_minutes) 436 437 438 def import_local_settings(): 439 """Import airflow_local_settings.py files to allow overriding any configs in settings.py file""" 440 try: 441 import airflow_local_settings 442 443 if hasattr(airflow_local_settings, "__all__"): 444 for i in airflow_local_settings.__all__: 445 globals()[i] = getattr(airflow_local_settings, i) 446 else: 447 for k, v in airflow_local_settings.__dict__.items(): 448 if not k.startswith("__"): 449 globals()[k] = v 450 451 # TODO: Remove once deprecated 452 if "policy" in globals() and "task_policy" not in globals(): 453 warnings.warn( 454 "Using `policy` in airflow_local_settings.py is deprecated. " 455 "Please rename your `policy` to `task_policy`.", 456 DeprecationWarning, 457 stacklevel=2, 458 ) 459 globals()["task_policy"] = globals()["policy"] 460 del globals()["policy"] 461 462 log.info("Loaded airflow_local_settings from %s .", airflow_local_settings.__file__) 463 except ModuleNotFoundError as e: 464 if e.name == "airflow_local_settings": 465 log.debug("No airflow_local_settings to import.", exc_info=True) 466 else: 467 log.critical( 468 "Failed to import airflow_local_settings due to a transitive module not found error.", 469 exc_info=True, 470 ) 471 raise 472 except ImportError: 473 log.critical("Failed to import airflow_local_settings.", exc_info=True) 474 raise 475 476 477 def initialize(): 478 """Initialize Airflow with all the settings from this file""" 479 configure_vars() 480 prepare_syspath() 481 import_local_settings() 482 global LOGGING_CLASS_PATH 483 LOGGING_CLASS_PATH = configure_logging() 484 configure_adapters() 485 # The webservers import this file from models.py with the default settings. 486 configure_orm() 487 configure_action_logging() 488 489 # Ensure we close DB connections at scheduler and gunicorn worker terminations 490 atexit.register(dispose_orm) 491 492 493 # Const stuff 494 495 KILOBYTE = 1024 496 MEGABYTE = KILOBYTE * KILOBYTE 497 WEB_COLORS = {'LIGHTBLUE': '#4d9de0', 'LIGHTORANGE': '#FF9933'} 498 499 500 # Updating serialized DAG can not be faster than a minimum interval to reduce database 501 # write rate. 502 MIN_SERIALIZED_DAG_UPDATE_INTERVAL = conf.getint('core', 'min_serialized_dag_update_interval', fallback=30) 503 504 # Fetching serialized DAG can not be faster than a minimum interval to reduce database 505 # read rate. This config controls when your DAGs are updated in the Webserver 506 MIN_SERIALIZED_DAG_FETCH_INTERVAL = conf.getint('core', 'min_serialized_dag_fetch_interval', fallback=10) 507 508 # If donot_modify_handlers=True, we do not modify logging handlers in task_run command 509 # If the flag is set to False, we remove all handlers from the root logger 510 # and add all handlers from 'airflow.task' logger to the root Logger. This is done 511 # to get all the logs from the print & log statements in the DAG files before a task is run 512 # The handlers are restored after the task completes execution. 
513 DONOT_MODIFY_HANDLERS = conf.getboolean('logging', 'donot_modify_handlers', fallback=False) 514 515 CAN_FORK = hasattr(os, "fork") 516 517 EXECUTE_TASKS_NEW_PYTHON_INTERPRETER = not CAN_FORK or conf.getboolean( 518 'core', 519 'execute_tasks_new_python_interpreter', 520 fallback=False, 521 ) 522 523 ALLOW_FUTURE_EXEC_DATES = conf.getboolean('scheduler', 'allow_trigger_in_future', fallback=False) 524 525 # Whether or not to check each dagrun against defined SLAs 526 CHECK_SLAS = conf.getboolean('core', 'check_slas', fallback=True) 527 528 USE_JOB_SCHEDULE = conf.getboolean('scheduler', 'use_job_schedule', fallback=True) 529 530 # By default Airflow plugins are lazily-loaded (only loaded when required). Set it to False, 531 # if you want to load plugins whenever 'airflow' is invoked via cli or loaded from module. 532 LAZY_LOAD_PLUGINS = conf.getboolean('core', 'lazy_load_plugins', fallback=True) 533 534 # By default Airflow providers are lazily-discovered (discovery and imports happen only when required). 535 # Set it to False, if you want to discover providers whenever 'airflow' is invoked via cli or 536 # loaded from module. 537 LAZY_LOAD_PROVIDERS = conf.getboolean('core', 'lazy_discover_providers', fallback=True) 538 539 # Determines if the executor utilizes Kubernetes 540 IS_K8S_OR_K8SCELERY_EXECUTOR = conf.get('core', 'EXECUTOR') in { 541 executor_constants.KUBERNETES_EXECUTOR, 542 executor_constants.CELERY_KUBERNETES_EXECUTOR, 543 } 544 545 HIDE_SENSITIVE_VAR_CONN_FIELDS = conf.getboolean('core', 'hide_sensitive_var_conn_fields') 546 547 # By default this is off, but is automatically configured on when running task 548 # instances 549 MASK_SECRETS_IN_LOGS = False 550 551 # Display alerts on the dashboard 552 # Useful for warning about setup issues or announcing changes to end users 553 # List of UIAlerts, which allows for specifying the message, category, and roles the 554 # message should be shown to. For example: 555 # from airflow.www.utils import UIAlert 556 # 557 # DASHBOARD_UIALERTS = [ 558 # UIAlert("Welcome to Airflow"), # All users 559 # UIAlert("Airflow update happening next week", roles=["User"]), # Only users with the User role 560 # # A flash message with html: 561 # UIAlert('Visit <a href="http://airflow.apache.org">airflow.apache.org</a>', html=True), 562 # ] 563 # 564 # DASHBOARD_UIALERTS: List["UIAlert"] 565 DASHBOARD_UIALERTS = [] 566 567 # Prefix used to identify tables holding data moved during migration. 568 AIRFLOW_MOVED_TABLE_PREFIX = "_airflow_moved" 569 [end of airflow/settings.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
apache/airflow
5a113f302769f0ecad3a54bad3027d459cb276a4
A dag's schedule interval can no longer be an instance of dateutils.relativedelta ### Apache Airflow version 2.2.1 (latest released) ### Operating System debian ### Versions of Apache Airflow Providers apache-airflow==2.2.1 apache-airflow-providers-amazon==2.3.0 apache-airflow-providers-ftp==2.0.1 apache-airflow-providers-google==6.0.0 apache-airflow-providers-http==2.0.1 apache-airflow-providers-imap==2.0.1 apache-airflow-providers-jira==2.0.1 apache-airflow-providers-mysql==2.1.1 apache-airflow-providers-postgres==2.3.0 apache-airflow-providers-redis==2.0.1 apache-airflow-providers-sqlite==2.0.1 apache-airflow-providers-ssh==2.2.0 ### Deployment Other Docker-based deployment ### Deployment details Dask executor, custom-built Docker images, postgres 12.7 backend ### What happened I upgraded Airflow from 2.0.2 to 2.2.1, and some DAGs I have that used dateutils.relativedelta objects as schedule intervals stopped running ### What you expected to happen The [code](https://github.com/apache/airflow/blob/2.2.1/airflow/models/dag.py#L101) for the schedule_interval parameter of the DAG constructor indicates that a relativedelta object is allowed, so I expected the DAG to be correctly parsed and scheduled. ### How to reproduce Create a DAG that has a relativedelta object as its schedule interval, and it will not appear in the UI or be scheduled. ### Anything else Here is the code that causes the failure within the PR where it was introduced: [link](https://github.com/apache/airflow/pull/17414/files#diff-ed37fe966e8247e0bfd8aa28bc2698febeec3807df5f5a00545ca80744f8aff6R267) Here are the logs for the exception, found in the scheduler logs for the file that contains the offending DAG <details><pre> ERROR | {dagbag.py:528} - 'relativedelta' object has no attribute 'total_seconds' Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 515, in collect_dags found_dags = self.process_file(filepath, only_if_updated=only_if_updated, safe_mode=safe_mode) File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 298, in process_file found_dags = self._process_modules(filepath, mods, file_last_changed_on_disk) File "/usr/local/lib/python3.9/site-packages/airflow/models/dagbag.py", line 401, in _process_modules dag.timetable.validate() File "/usr/local/lib/python3.9/site-packages/airflow/timetables/interval.py", line 274, in validate if self._delta.total_seconds() <= 0: AttributeError: 'relativedelta' object has no attribute 'total_seconds' </pre></details> ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
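A minimal sketch of the kind of DAG described under "How to reproduce" (the dag id, start date and operator are placeholders, not taken from the report): on 2.2.1 a file like this fails DagBag import with the `AttributeError` shown above, while on 2.0.x it parses and schedules normally.

```python
from datetime import datetime
from dateutil.relativedelta import relativedelta

from airflow import DAG
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id="monthly_relativedelta_example",      # placeholder name
    start_date=datetime(2021, 1, 1),             # placeholder date
    schedule_interval=relativedelta(months=1),   # the interval type that stopped working
    catchup=False,
) as dag:
    DummyOperator(task_id="noop")
```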
2021-11-05T02:37:03Z
<patch> diff --git a/airflow/timetables/interval.py b/airflow/timetables/interval.py --- a/airflow/timetables/interval.py +++ b/airflow/timetables/interval.py @@ -271,8 +271,9 @@ def serialize(self) -> Dict[str, Any]: return {"delta": delta} def validate(self) -> None: - if self._delta.total_seconds() <= 0: - raise AirflowTimetableInvalid("schedule interval must be positive") + now = datetime.datetime.now() + if (now + self._delta) <= now: + raise AirflowTimetableInvalid(f"schedule interval must be positive, not {self._delta!r}") def _get_next(self, current: DateTime) -> DateTime: return convert_to_utc(current + self._delta) </patch>
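The root cause is simply that `dateutil.relativedelta` is not a `timedelta` and has no `total_seconds()`; the replacement check relies only on datetime arithmetic, which both types support. A small standalone sketch of the two checks outside Airflow:

```python
import datetime
from dateutil.relativedelta import relativedelta

delta = relativedelta(months=1)

# The old check needed timedelta.total_seconds(), which relativedelta lacks:
print(hasattr(delta, "total_seconds"))  # False -> AttributeError in validate()

# The patched check only needs "now + delta" to land strictly after now,
# which works for timedelta and relativedelta alike:
now = datetime.datetime.now()
print((now + delta) <= now)  # False -> the interval is accepted as positive
```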
[]
[]
google__jax-1972
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Nit: lax.conv_general_dilated does not support 0D-convolution AFAIK 0-dimensional convolution should reduce to a fully-connected layer, but fails with the following example and error. While this is not a practically useful scenario, degenerate cases like these could be useful for testing. ``` from jax import lax import jax.numpy as np lax.conv_general_dilated(lhs=np.ones((10, 5)), rhs=np.ones((5, 7)), strides=(), paddign='SAME', dimension_numbers=('NC', 'IO', 'NC')) ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-17-573e7695a148> in <module>() ----> 1 lax.conv_general_dilated(np.ones((10, 5)), np.ones((5, 7)), (), 'SAME', dimension_numbers=('NC', 'IO', 'NC')) 6 frames google3/third_party/py/jax/lax/lax.py in conv_general_dilated(lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation, dimension_numbers, feature_group_count, precision) 516 feature_group_count=feature_group_count, 517 lhs_shape=lhs.shape, rhs_shape=rhs.shape, --> 518 precision=_canonicalize_precision(precision)) 519 520 def dot(lhs, rhs, precision=None): google3/third_party/py/jax/core.py in bind(self, *args, **kwargs) 148 top_trace = find_top_trace(args) 149 if top_trace is None: --> 150 return self.impl(*args, **kwargs) 151 152 tracers = map(top_trace.full_raise, args) google3/third_party/py/jax/interpreters/xla.py in apply_primitive(prim, *args, **params) 150 def apply_primitive(prim, *args, **params): 151 """Impl rule that compiles and runs a single primitive 'prim' using XLA.""" --> 152 compiled_fun = xla_primitive_callable(prim, *map(arg_spec, args), **params) 153 return compiled_fun(*args) 154 google3/third_party/py/jax/interpreters/xla.py in xla_primitive_callable(prim, *arg_specs, **params) 174 device = device and next(d for d in all_devices if (type(d), d.id) == device) 175 backend = xb.get_device_backend(device) --> 176 aval_out = prim.abstract_eval(*avals, **params) 177 if prim.multiple_results: 178 handlers = tuple(map(aval_to_result_handler, aval_out)) google3/third_party/py/jax/lax/lax.py in standard_abstract_eval(prim, shape_rule, dtype_rule, *args, **kwargs) 1500 return ConcreteArray(prim.impl(*[x.val for x in args], **kwargs)) 1501 elif least_specialized is ShapedArray: -> 1502 return ShapedArray(shape_rule(*args, **kwargs), dtype_rule(*args, **kwargs)) 1503 elif least_specialized is UnshapedArray: 1504 return UnshapedArray(dtype_rule(*args, **kwargs)) google3/third_party/py/jax/lax/lax.py in _conv_general_dilated_shape_rule(lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation, dimension_numbers, feature_group_count, **unused_kwargs) 1943 lhs_trans = _dilate_shape(onp.take(lhs.shape, lhs_perm), lhs_dilation) 1944 rhs_trans = _dilate_shape(onp.take(rhs.shape, rhs_perm), rhs_dilation) -> 1945 out_trans = conv_shape_tuple(lhs_trans, rhs_trans, window_strides, padding) 1946 return tuple(onp.take(out_trans, onp.argsort(out_perm))) 1947 google3/third_party/py/jax/lax/lax.py in conv_shape_tuple(lhs_shape, rhs_shape, strides, pads) 4334 raise TypeError(msg.format(len(lhs_shape) - 2, len(pads))) 4335 -> 4336 lhs_padded = onp.add(lhs_shape[2:], onp.add(*zip(*pads))) 4337 out_space = onp.floor_divide( 4338 onp.subtract(lhs_padded, rhs_shape[2:]), strides) + 1 ValueError: invalid number of arguments ``` </issue> <code> [start of README.md] 1 <div align="center"> 2 <img 
src="https://raw.githubusercontent.com/google/jax/master/images/jax_logo_250px.png" alt="logo"></img> 3 </div> 4 5 # JAX: Autograd and XLA [![Test status](https://travis-ci.org/google/jax.svg?branch=master)](https://travis-ci.org/google/jax) 6 7 [**Quickstart**](#quickstart-colab-in-the-cloud) 8 | [**Transformations**](#transformations) 9 | [**Install guide**](#installation) 10 | [**Reference docs**](https://jax.readthedocs.io/en/latest/) 11 12 JAX is [Autograd](https://github.com/hips/autograd) and 13 [XLA](https://www.tensorflow.org/xla), 14 brought together for high-performance machine learning research. 15 16 With its updated version of [Autograd](https://github.com/hips/autograd), 17 JAX can automatically differentiate native 18 Python and NumPy functions. It can differentiate through loops, branches, 19 recursion, and closures, and it can take derivatives of derivatives of 20 derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) 21 via [`grad`](#automatic-differentiation-with-grad) as well as forward-mode differentiation, 22 and the two can be composed arbitrarily to any order. 23 24 What’s new is that JAX uses 25 [XLA](https://www.tensorflow.org/xla) 26 to compile and run your NumPy programs on GPUs and TPUs. Compilation happens 27 under the hood by default, with library calls getting just-in-time compiled and 28 executed. But JAX also lets you just-in-time compile your own Python functions 29 into XLA-optimized kernels using a one-function API, 30 [`jit`](#compilation-with-jit). Compilation and automatic differentiation can be 31 composed arbitrarily, so you can express sophisticated algorithms and get 32 maximal performance without leaving Python. You can even program multiple GPUs 33 or TPU cores at once using [`pmap`](#spmd-programming-with-pmap), and 34 differentiate through the whole thing. 35 36 Dig a little deeper, and you'll see that JAX is really an extensible system for 37 [composable function transformations](#transformations). Both 38 [`grad`](#automatic-differentiation-with-grad) and [`jit`](#compilation-with-jit) 39 are instances of such transformations. Others are 40 [`vmap`](#auto-vectorization-with-vmap) for automatic vectorization and 41 [`pmap`](#spmd-programming-with-pmap) for single-program multiple-data (SPMD) 42 parallel programming of multiple accelerators, with more to come. 43 44 This is a research project, not an official Google product. Expect bugs and 45 [sharp edges](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html). 46 Please help by trying it out, [reporting 47 bugs](https://github.com/google/jax/issues), and letting us know what you 48 think! 
49 50 ```python 51 import jax.numpy as np 52 from jax import grad, jit, vmap 53 54 def predict(params, inputs): 55 for W, b in params: 56 outputs = np.dot(inputs, W) + b 57 inputs = np.tanh(outputs) 58 return outputs 59 60 def logprob_fun(params, inputs, targets): 61 preds = predict(params, inputs) 62 return np.sum((preds - targets)**2) 63 64 grad_fun = jit(grad(logprob_fun)) # compiled gradient evaluation function 65 perex_grads = jit(vmap(grad_fun, in_axes=(None, 0, 0))) # fast per-example grads 66 ``` 67 68 ### Contents 69 * [Quickstart: Colab in the Cloud](#quickstart-colab-in-the-cloud) 70 * [Transformations](#transformations) 71 * [Current gotchas](#current-gotchas) 72 * [Installation](#installation) 73 * [Citing JAX](#citing-jax) 74 * [Reference documentation](#reference-documentation) 75 76 ## Quickstart: Colab in the Cloud 77 Jump right in using a notebook in your browser, connected to a Google Cloud GPU. 78 Here are some starter notebooks: 79 - [The basics: NumPy on accelerators, `grad` for differentiation, `jit` for compilation, and `vmap` for vectorization](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html) 80 - [Training a Simple Neural Network, with TensorFlow Dataset Data Loading](https://colab.research.google.com/github/google/jax/blob/master/docs/notebooks/neural_network_with_tfds_data.ipynb) 81 82 **JAX now runs on Cloud TPUs.** To try out the preview, see the [Cloud TPU 83 Colabs](https://github.com/google/jax/tree/master/cloud_tpu_colabs). 84 85 For a deeper dive into JAX: 86 - [The Autodiff Cookbook, Part 1: easy and powerful automatic differentiation in JAX](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html) 87 - [Common gotchas and sharp edges](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html) 88 - See the [full list of 89 notebooks](https://github.com/google/jax/tree/master/docs/notebooks). 90 91 ## Transformations 92 93 At its core, JAX is an extensible system for transforming numerical functions. 94 Here are four of primary interest: `grad`, `jit`, `vmap`, and `pmap`. 95 96 ### Automatic differentiation with `grad` 97 98 JAX has roughly the same API as [Autograd](https://github.com/hips/autograd). 99 The most popular function is 100 [`grad`](https://jax.readthedocs.io/en/latest/jax.html#jax.grad) 101 for reverse-mode gradients: 102 103 ```python 104 from jax import grad 105 import jax.numpy as np 106 107 def tanh(x): # Define a function 108 y = np.exp(-2.0 * x) 109 return (1.0 - y) / (1.0 + y) 110 111 grad_tanh = grad(tanh) # Obtain its gradient function 112 print(grad_tanh(1.0)) # Evaluate it at x = 1.0 113 # prints 0.4199743 114 ``` 115 116 You can differentiate to any order with `grad`. 117 118 ```python 119 print(grad(grad(grad(tanh)))(1.0)) 120 # prints 0.62162673 121 ``` 122 123 For more advanced autodiff, you can use 124 [`jax.vjp`](https://jax.readthedocs.io/en/latest/jax.html#jax.vjp) for 125 reverse-mode vector-Jacobian products and 126 [`jax.jvp`](https://jax.readthedocs.io/en/latest/jax.html#jax.defjvp) for 127 forward-mode Jacobian-vector products. The two can be composed arbitrarily with 128 one another, and with other JAX transformations. 
Here's one way to compose those 129 to make a function that efficiently computes [full Hessian 130 matrices](https://jax.readthedocs.io/en/latest/jax.html#jax.hessian): 131 132 ```python 133 from jax import jit, jacfwd, jacrev 134 135 def hessian(fun): 136 return jit(jacfwd(jacrev(fun))) 137 ``` 138 139 As with [Autograd](https://github.com/hips/autograd), you're free to use 140 differentiation with Python control structures: 141 142 ```python 143 def abs_val(x): 144 if x > 0: 145 return x 146 else: 147 return -x 148 149 abs_val_grad = grad(abs_val) 150 print(abs_val_grad(1.0)) # prints 1.0 151 print(abs_val_grad(-1.0)) # prints -1.0 (abs_val is re-evaluated) 152 ``` 153 154 See the [reference docs on automatic 155 differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) 156 and the [JAX Autodiff 157 Cookbook](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html) 158 for more. 159 160 ### Compilation with `jit` 161 162 You can use XLA to compile your functions end-to-end with 163 [`jit`](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit), 164 used either as an `@jit` decorator or as a higher-order function. 165 166 ```python 167 import jax.numpy as np 168 from jax import jit 169 170 def slow_f(x): 171 # Element-wise ops see a large benefit from fusion 172 return x * x + x * 2.0 173 174 x = np.ones((5000, 5000)) 175 fast_f = jit(slow_f) 176 %timeit -n10 -r3 fast_f(x) # ~ 4.5 ms / loop on Titan X 177 %timeit -n10 -r3 slow_f(x) # ~ 14.5 ms / loop (also on GPU via JAX) 178 ``` 179 180 You can mix `jit` and `grad` and any other JAX transformation however you like. 181 182 Using `jit` puts constraints on the kind of Python control flow 183 the function can use; see 184 the [Gotchas 185 Notebook](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#python-control-flow-+-JIT) 186 for more. 187 188 ### Auto-vectorization with `vmap` 189 190 [`vmap`](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) is 191 the vectorizing map. 192 It has the familiar semantics of mapping a function along array axes, but 193 instead of keeping the loop on the outside, it pushes the loop down into a 194 function’s primitive operations for better performance. 195 196 Using `vmap` can save you from having to carry around batch dimensions in your 197 code. For example, consider this simple *unbatched* neural network prediction 198 function: 199 200 ```python 201 def predict(params, input_vec): 202 assert input_vec.ndim == 1 203 for W, b in params: 204 output_vec = np.dot(W, input_vec) + b # `input_vec` on the right-hand side! 205 input_vec = np.tanh(output_vec) 206 return output_vec 207 ``` 208 209 We often instead write `np.dot(inputs, W)` to allow for a batch dimension on the 210 left side of `inputs`, but we’ve written this particular prediction function to 211 apply only to single input vectors. If we wanted to apply this function to a 212 batch of inputs at once, semantically we could just write 213 214 ```python 215 from functools import partial 216 predictions = np.stack(list(map(partial(predict, params), input_batch))) 217 ``` 218 219 But pushing one example through the network at a time would be slow! It’s better 220 to vectorize the computation, so that at every layer we’re doing matrix-matrix 221 multiplies rather than matrix-vector multiplies. 222 223 The `vmap` function does that transformation for us. 
That is, if we write 224 225 ```python 226 from jax import vmap 227 predictions = vmap(partial(predict, params))(input_batch) 228 # or, alternatively 229 predictions = vmap(predict, in_axes=(None, 0))(params, input_batch) 230 ``` 231 232 then the `vmap` function will push the outer loop inside the function, and our 233 machine will end up executing matrix-matrix multiplications exactly as if we’d 234 done the batching by hand. 235 236 It’s easy enough to manually batch a simple neural network without `vmap`, but 237 in other cases manual vectorization can be impractical or impossible. Take the 238 problem of efficiently computing per-example gradients: that is, for a fixed set 239 of parameters, we want to compute the gradient of our loss function evaluated 240 separately at each example in a batch. With `vmap`, it’s easy: 241 242 ```python 243 per_example_gradients = vmap(partial(grad(loss), params))(inputs, targets) 244 ``` 245 246 Of course, `vmap` can be arbitrarily composed with `jit`, `grad`, and any other 247 JAX transformation! We use `vmap` with both forward- and reverse-mode automatic 248 differentiation for fast Jacobian and Hessian matrix calculations in 249 `jax.jacfwd`, `jax.jacrev`, and `jax.hessian`. 250 251 ### SPMD programming with `pmap` 252 253 For parallel programming of multiple accelerators, like multiple GPUs, use 254 [`pmap`](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap). 255 With `pmap` you write single-program multiple-data (SPMD) programs, including 256 fast parallel collective communication operations. 257 258 Here's an example on an 8-GPU machine: 259 260 ```python 261 from jax import random 262 263 # Create 8 random 5000 x 6000 matrices, one per GPU 264 keys = random.split(random.PRNGKey(0), 8) 265 mats = pmap(lambda key: random.normal(key, (5000, 6000)))(keys) 266 267 # Run a local matmul on each device in parallel (no data transfer) 268 result = pmap(lambda x: np.dot(x, x.T))(mats) # result.shape is (8, 5000, 5000) 269 270 # Compute the mean on each device in parallel and print the result 271 print(pmap(np.mean)(result)) 272 # prints [1.1566595 1.1805978 ... 1.2321935 1.2015157] 273 ``` 274 275 In addition to expressing pure maps, you can fast use [collective communication 276 operations](https://jax.readthedocs.io/en/latest/jax.lax.html#parallel-operators) 277 between devices: 278 279 ```python 280 from functools import partial 281 from jax import lax 282 283 @partial(pmap, axis_name='i') 284 def normalize(x): 285 return x / lax.psum(x, 'i') 286 287 print(normalize(np.arange(4.))) 288 # prints [0. 0.16666667 0.33333334 0.5 ] 289 ``` 290 291 You can even [nest `pmap` functions](https://colab.sandbox.google.com/github/google/jax/blob/master/cloud_tpu_colabs/Pmap_Cookbook.ipynb#scrollTo=MdRscR5MONuN) for more 292 sophisticated communication patterns. 293 294 It all composes, so you're free to differentiate through parallel computations: 295 296 ```python 297 from jax import grad 298 299 @pmap 300 def f(x): 301 y = np.sin(x) 302 @pmap 303 def g(z): 304 return np.cos(z) * np.tan(y.sum()) * np.tanh(x).sum() 305 return grad(lambda w: np.sum(g(w)))(x) 306 307 print(f(x)) 308 # [[ 0. 
, -0.7170853 ], 309 # [-3.1085174 , -0.4824318 ], 310 # [10.366636 , 13.135289 ], 311 # [ 0.22163185, -0.52112055]] 312 313 print(grad(lambda x: np.sum(f(x)))(x)) 314 # [[ -3.2369726, -1.6356447], 315 # [ 4.7572474, 11.606951 ], 316 # [-98.524414 , 42.76499 ], 317 # [ -1.6007166, -1.2568436]] 318 ``` 319 320 When reverse-mode differentiating a `pmap` function (e.g. with `grad`), the 321 backward pass of the computation is parallelized just like the forward pass. 322 323 See the [SPMD 324 Cookbook](https://colab.sandbox.google.com/github/google/jax/blob/master/cloud_tpu_colabs/Pmap_Cookbook.ipynb) 325 and the [SPMD MNIST classifier from scratch 326 example](https://github.com/google/jax/blob/master/examples/spmd_mnist_classifier_fromscratch.py) 327 for more. 328 329 ## Current gotchas 330 331 For a more thorough survey of current gotchas, with examples and explanations, 332 we highly recommend reading the [Gotchas 333 Notebook](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html). 334 Some standouts: 335 336 1. [In-place mutating updates of 337 arrays](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#%F0%9F%94%AA-In-Place-Updates), like `x[i] += y`, aren't supported, but [there are functional alternatives](https://jax.readthedocs.io/en/latest/jax.ops.html). Under a `jit`, those functional alternatives will reuse buffers in-place automatically. 338 2. [Random numbers are 339 different](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#%F0%9F%94%AA-Random-Numbers), but for [good reasons](https://github.com/google/jax/blob/master/design_notes/prng.md). 340 3. If you're looking for [convolution 341 operators](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#%F0%9F%94%AA-Convolutions), 342 they're in the `jax.lax` package. 343 4. JAX enforces single-precision (32-bit, e.g. `float32`) values by default, and 344 [to enable 345 double-precision](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#Double-(64bit)-precision) 346 (64-bit, e.g. `float64`) one needs to set the `jax_enable_x64` variable at 347 startup (or set the environment variable `JAX_ENABLE_X64=True`). 348 5. Some of NumPy's dtype promotion semantics involving a mix of Python scalars 349 and NumPy types aren't preserved, namely `np.add(1, np.array([2], 350 np.float32)).dtype` is `float64` rather than `float32`. 351 6. Some transformations, like `jit`, [constrain how you can use Python control 352 flow](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#%F0%9F%94%AA-Control-Flow). 353 You'll always get loud errors if something goes wrong. You might have to use 354 [`jit`'s `static_argnums` 355 parameter](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit), 356 [structured control flow 357 primitives](https://jax.readthedocs.io/en/latest/jax.lax.html#control-flow-operators) 358 like 359 [`lax.scan`](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html#jax.lax.scan), 360 or just use `jit` on smaller subfunctions. 361 362 ## Installation 363 364 JAX is written in pure Python, but it depends on XLA, which needs to be 365 installed as the `jaxlib` package. Use the following instructions to install a 366 binary package with `pip`, or to build JAX from source. 367 368 We support installing or building `jaxlib` on Linux (Ubuntu 16.04 or later) and 369 macOS (10.12 or later) platforms, but not yet Windows. 
We're not currently 370 working on Windows support, but contributions are welcome 371 (see [#438](https://github.com/google/jax/issues/438)). Some users have reported 372 success with building a CPU-only `jaxlib` from source using the Windows Subsytem 373 for Linux. 374 375 ### pip installation 376 377 To install a CPU-only version, which might be useful for doing local 378 development on a laptop, you can run 379 380 ```bash 381 pip install --upgrade pip 382 pip install --upgrade jax jaxlib # CPU-only version 383 ``` 384 385 On Linux, it is often necessary to first update `pip` to a version that supports 386 `manylinux2010` wheels. 387 388 If you want to install JAX with both CPU and GPU support, using existing CUDA 389 and CUDNN7 installations on your machine (for example, preinstalled on your 390 cloud VM), you can run 391 392 ```bash 393 # install jaxlib 394 PYTHON_VERSION=cp37 # alternatives: cp35, cp36, cp37, cp38 395 CUDA_VERSION=cuda92 # alternatives: cuda90, cuda92, cuda100, cuda101 396 PLATFORM=linux_x86_64 # alternatives: linux_x86_64 397 BASE_URL='https://storage.googleapis.com/jax-releases' 398 pip install --upgrade $BASE_URL/$CUDA_VERSION/jaxlib-0.1.37-$PYTHON_VERSION-none-$PLATFORM.whl 399 400 pip install --upgrade jax # install jax 401 ``` 402 403 The library package name must correspond to the version of the existing CUDA 404 installation you want to use, with `cuda101` for CUDA 10.1, `cuda100` for CUDA 405 10.0, `cuda92` for CUDA 9.2, and `cuda90` for CUDA 9.0. To find your CUDA and 406 CUDNN versions, you can run commands like these, depending on your CUDNN install 407 path: 408 409 ```bash 410 nvcc --version 411 grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h # might need different path 412 ``` 413 414 The Python version must match your Python interpreter. There are prebuilt wheels 415 for Python 3.5, 3.6, 3.7, and 3.8; for anything else, you must build from 416 source. Jax requires Python 3.5 or above. Jax does not support Python 2 any 417 more. 418 419 Please let us know on [the issue tracker](https://github.com/google/jax/issues) 420 if you run into any errors or problems with the prebuilt wheels. 421 422 ### Building JAX from source 423 See [Building JAX from 424 source](https://jax.readthedocs.io/en/latest/developer.html#building-from-source). 425 426 427 ## Citing JAX 428 429 To cite this repository: 430 431 ``` 432 @software{jax2018github, 433 author = {James Bradbury and Roy Frostig and Peter Hawkins and Matthew James Johnson and Chris Leary and Dougal Maclaurin and Skye Wanderman-Milne}, 434 title = {{JAX}: composable transformations of {P}ython+{N}um{P}y programs}, 435 url = {http://github.com/google/jax}, 436 version = {0.1.55}, 437 year = {2018}, 438 } 439 ``` 440 441 In the above bibtex entry, names are in alphabetical order, the version number 442 is intended to be that from [jax/version.py](../blob/master/jax/version.py), and 443 the year corresponds to the project's open-source release. 444 445 A nascent version of JAX, supporting only automatic differentiation and 446 compilation to XLA, was described in a [paper that appeared at SysML 447 2018](https://www.sysml.cc/doc/2018/146.pdf). We're currently working on 448 covering JAX's ideas and capabilities in a more comprehensive and up-to-date 449 paper. 450 451 ## Reference documentation 452 453 For details about the JAX API, see the 454 [reference documentation](https://jax.readthedocs.io/). 
455 456 For getting started as a JAX developer, see the 457 [developer documentation](https://jax.readthedocs.io/en/latest/developer.html). 458 [end of README.md] [start of examples/onnx2xla.py] 1 # Copyright 2018 Google LLC 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # https://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 """An ONNX to XLA compiler by JAX-tracing a Numpy-backed ONNX interpreter.""" 16 from __future__ import absolute_import 17 from __future__ import division 18 from __future__ import print_function 19 20 from cStringIO import StringIO 21 from functools import partial 22 import hashlib 23 import sys 24 25 import onnx 26 from onnx import numpy_helper 27 from onnx import onnx_pb2 28 import urllib 29 30 import jax.numpy as np 31 from jax import jit, grad 32 from jax import lax 33 34 35 def _asarray(proto): 36 return numpy_helper.to_array(proto).reshape(tuple(proto.dims)) 37 38 39 attr_types = dict(onnx_pb2.AttributeProto.AttributeType.items()) 40 attribute_handlers = { 41 attr_types['FLOAT']: lambda a: a.f, 42 attr_types['INT']: lambda a: a.i, 43 attr_types['STRING']: lambda a: a.s, 44 attr_types['TENSOR']: lambda a: _asarray(a.t), 45 attr_types['FLOATS']: lambda a: a.floats, 46 attr_types['INTS']: lambda a: a.ints, 47 attr_types['STRINGS']: lambda a: a.strings, 48 attr_types['TENSORS']: lambda a: [_asarray(x) for x in a.tensors], 49 } 50 51 52 def onnx_maxpool(x, kernel_shape, pads=None, strides=None): 53 """Numpy-backed implementation of ONNX MaxPool op.""" 54 prefix = (1,) * (x.ndim - len(kernel_shape)) 55 dims = prefix + tuple(kernel_shape) 56 pads = tuple(pads) if pads else [0] * len(kernel_shape) 57 strides = (prefix + tuple(strides)) if strides else [1] * len(kernel_shape) 58 return [lax.reduce_window(x, -np.inf, lax.max, dims, strides, 'VALID')] 59 60 61 def onnx_conv(x, w, b=0, group=1, kernel_shape=None, pads=None, strides=None, 62 dilations=None, auto_pad=None): 63 """Numpy-backed implementation of ONNX Conv op.""" 64 assert group == 1 65 kernel_shape = kernel_shape or w.shape 66 strides = strides or [1] * (w.ndim - 2) 67 if auto_pad: 68 auto_pad = 'SAME' if auto_pad.startswith('SAME') else 'VALID' 69 pads = lax.padtype_to_pads(x.shape[2:], w.shape[2:], strides, auto_pad) 70 else: 71 pads = pads or [0] * (w.ndim - 2) 72 lhs_dilation = [1] * (w.ndim - 2) 73 rhs_dilation = dilations or [1] * (w.ndim - 2) 74 return [lax.conv_with_general_padding(x, w, strides, pads, 75 lhs_dilation, rhs_dilation) + b] 76 77 78 def onnx_add(a, b, axis=None, broadcast=True): 79 """Numpy-backed implementation of ONNX Add op.""" 80 if broadcast: 81 axis = (a.dim - b.ndim) if axis is None else axis % a.ndim 82 assert a.shape[axis:][:b.ndim] == b.shape 83 b_shape = np.ones(a.ndim, dtype='int64').copy() 84 b_shape[axis:axis + b.ndim] = b.shape 85 b = np.reshape(b, b_shape) 86 return [a + b] 87 88 89 onnx_ops = { 90 'Add': onnx_add, 91 'Constant': lambda value: [value], 92 'Conv': onnx_conv, 93 'MatMul': lambda x, y: [np.matmul(x, y)], 94 'MaxPool': onnx_maxpool, 95 'Relu': lambda x: [np.maximum(x, 0)], 96 'Reshape': lambda 
x, shape: [np.reshape(x, shape)], 97 } 98 99 100 def interpret_onnx(graph, *args): 101 vals = dict({n.name: a for n, a in zip(graph.input, args)}, 102 **{n.name: _asarray(n) for n in graph.initializer}) 103 for node in graph.node: 104 args = (vals[name] for name in node.input) 105 attrs = {a.name: attribute_handlers[a.type](a) for a in node.attribute} 106 outputs = onnx_ops[node.op_type](*args, **attrs) 107 for name, output in zip(node.output, outputs): 108 vals[name] = output 109 return [vals[n.name] for n in graph.output] 110 111 112 if __name__ == "__main__": 113 # It seems that there are several ONNX proto versions (you had one job!) but 114 # this implementation works with at least this one mnist example file. 115 url = ('https://github.com/onnx/models/blob/' 116 '81c4779096d1205edd0b809e191a924c58c38fef/' 117 'mnist/model.onnx?raw=true') 118 download = urllib.request.urlopen(url).read() 119 if hashlib.md5(download).hexdigest() != 'bc8ad9bd19c5a058055dc18d0f089dad': 120 print("onnx file checksum mismatch") 121 sys.exit(1) 122 model = onnx.load(StringIO(download)) 123 124 predict = lambda inputs: interpret_onnx(model.graph, inputs)[0] 125 126 # Run inference in Numpy-backed interpreter 127 print("interpreted:") 128 print(predict(np.ones((1, 1, 28, 28)))) 129 130 # JIT compile to XLA device, run inference on device 131 compiled_predict = jit(predict) 132 print("compiled:") 133 print(compiled_predict(np.ones((1, 1, 28, 28)))) 134 135 # The interpreter is differentiable too! Even the compiled one: 136 fun = lambda inputs: np.sum(compiled_predict(inputs)) 137 print("a derivative with respect to inputs:") 138 print(grad(fun)(np.ones((1, 1, 28, 28)))[..., :3, :3]) 139 140 [end of examples/onnx2xla.py] [start of jax/lax_reference.py] 1 # Copyright 2018 Google LLC 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # https://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 from __future__ import absolute_import 16 from __future__ import division 17 from __future__ import print_function 18 19 import builtins 20 import collections 21 import itertools 22 23 import numpy as onp 24 import opt_einsum 25 import scipy.special 26 27 from . import dtypes 28 29 _slice = builtins.slice 30 _max = builtins.max 31 _min = builtins.min 32 _map = builtins.map 33 34 neg = onp.negative 35 sign = onp.sign 36 floor = onp.floor 37 ceil = onp.ceil 38 round = onp.round 39 nextafter = onp.nextafter 40 41 is_finite = onp.isfinite 42 43 exp = onp.exp 44 expm1 = onp.expm1 45 log = onp.log 46 log1p = onp.log1p 47 tanh = onp.tanh 48 sin = onp.sin 49 cos = onp.cos 50 atan2 = onp.arctan2 51 52 sqrt = onp.sqrt 53 rsqrt = lambda x: 1. 
/ onp.sqrt(x) 54 square = onp.square 55 reciprocal = onp.reciprocal 56 tan = onp.tan 57 asin = onp.arcsin 58 acos = onp.arccos 59 atan = onp.arctan 60 sinh = onp.sinh 61 cosh = onp.cosh 62 63 lgamma = scipy.special.gammaln 64 digamma = scipy.special.digamma 65 erf = scipy.special.erf 66 erfc = scipy.special.erfc 67 erf_inv = scipy.special.erfinv 68 bessel_i0e = scipy.special.i0e 69 bessel_i1e = scipy.special.i1e 70 71 real = onp.real 72 imag = onp.imag 73 74 def conj(x): 75 return onp.conj(x) + onp.complex64(0) 76 77 def complex(x, y): 78 return x + onp.complex64(1j) * y 79 80 abs = onp.absolute 81 pow = onp.power 82 83 bitwise_not = onp.bitwise_not 84 bitwise_and = onp.bitwise_and 85 bitwise_or = onp.bitwise_or 86 bitwise_xor = onp.bitwise_xor 87 88 add = onp.add 89 sub = onp.subtract 90 mul = onp.multiply 91 92 def div(lhs, rhs): 93 if dtypes.issubdtype(dtypes.result_type(lhs), onp.integer): 94 quotient = onp.floor_divide(lhs, rhs) 95 select = onp.logical_and(onp.sign(lhs) != onp.sign(rhs), 96 onp.remainder(lhs, rhs) != 0) 97 return onp.where(select, quotient + 1, quotient) 98 else: 99 return onp.divide(lhs, rhs) 100 101 def rem(lhs, rhs): 102 return onp.sign(lhs) * onp.remainder(onp.abs(lhs), onp.abs(rhs)) 103 104 max = onp.maximum 105 min = onp.minimum 106 107 shift_left = onp.left_shift 108 shift_right_arithmetic = onp.right_shift 109 # TODO shift_right_logical 110 111 eq = onp.equal 112 ne = onp.not_equal 113 ge = onp.greater_equal 114 gt = onp.greater 115 le = onp.less_equal 116 lt = onp.less 117 118 def convert_element_type(operand, dtype): 119 return onp.asarray(operand, dtype=dtype) 120 121 def bitcast_convert_type(operand, dtype): 122 return onp.asarray(operand).view(dtype) 123 124 def clamp(min, operand, max): 125 return onp.clip(operand, onp.clip(min, None, max), max) 126 127 def concatenate(operands, dimension): 128 return onp.concatenate(operands, axis=dimension) 129 130 def conv(lhs, rhs, window_strides, padding): 131 pads = padtype_to_pads(lhs.shape[2:], rhs.shape[2:], window_strides, padding) 132 return _conv(lhs, rhs, window_strides, pads) 133 134 def conv_with_general_padding( 135 lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation): 136 return _conv(_dilate(lhs, lhs_dilation), _dilate(rhs, rhs_dilation), 137 window_strides, padding) 138 139 def conv_general_dilated(lhs, rhs, window_strides, padding, lhs_dilation, 140 rhs_dilation, dimension_numbers): 141 lhs_perm, rhs_perm, out_perm = _conv_general_permutations(dimension_numbers) 142 if isinstance(padding, str): 143 padding = padtype_to_pads(onp.take(lhs.shape, lhs_perm)[2:], 144 onp.take(rhs.shape, rhs_perm)[2:], 145 window_strides, padding) 146 trans_lhs = transpose(lhs, lhs_perm) 147 trans_rhs = transpose(rhs, rhs_perm) 148 out = conv_with_general_padding(trans_lhs, trans_rhs, window_strides, padding, 149 lhs_dilation, rhs_dilation) 150 return transpose(out, onp.argsort(out_perm)) 151 152 dot = onp.dot 153 154 def dot_general(lhs, rhs, dimension_numbers): 155 (lhs_contracting, rhs_contracting), (lhs_batch, rhs_batch) = dimension_numbers 156 new_id = itertools.count() 157 lhs_axis_ids = [next(new_id) for _ in lhs.shape] 158 rhs_axis_ids = [next(new_id) for _ in rhs.shape] 159 lhs_out_axis_ids = lhs_axis_ids[:] 160 rhs_out_axis_ids = rhs_axis_ids[:] 161 162 for lhs_axis, rhs_axis in zip(lhs_contracting, rhs_contracting): 163 shared_id = next(new_id) 164 lhs_axis_ids[lhs_axis] = shared_id 165 rhs_axis_ids[rhs_axis] = shared_id 166 lhs_out_axis_ids[lhs_axis] = None 167 rhs_out_axis_ids[rhs_axis] = None 168 169 
batch_ids = [] 170 for lhs_axis, rhs_axis in zip(lhs_batch, rhs_batch): 171 shared_id = next(new_id) 172 lhs_axis_ids[lhs_axis] = shared_id 173 rhs_axis_ids[rhs_axis] = shared_id 174 lhs_out_axis_ids[lhs_axis] = None 175 rhs_out_axis_ids[rhs_axis] = None 176 batch_ids.append(shared_id) 177 178 not_none = lambda x: x is not None 179 out_axis_ids = filter(not_none, 180 batch_ids + lhs_out_axis_ids + rhs_out_axis_ids) 181 assert lhs.dtype == rhs.dtype 182 dtype = onp.float32 if lhs.dtype == dtypes.bfloat16 else None 183 out = onp.einsum(lhs, lhs_axis_ids, rhs, rhs_axis_ids, out_axis_ids, 184 dtype=dtype) 185 return out.astype(dtypes.bfloat16) if lhs.dtype == dtypes.bfloat16 else out 186 187 def broadcast(operand, sizes): 188 return onp.broadcast_to(operand, sizes + onp.shape(operand)) 189 190 def broadcast_in_dim(operand, shape, broadcast_dimensions): 191 inshape = tuple(1 if i not in broadcast_dimensions else d 192 for i, d in enumerate(shape)) 193 return onp.broadcast_to(onp.reshape(operand, inshape), shape) 194 195 sum = onp.sum 196 197 def reshape(operand, new_sizes, dimensions=None): 198 if dimensions is None: 199 dimensions = range(len(onp.shape(operand))) 200 return onp.reshape(onp.transpose(operand, dimensions), new_sizes) 201 202 def pad(operand, padding_value, padding_config): 203 lo, hi, interior = zip(*padding_config) 204 outshape = onp.add(onp.add(onp.add(lo, hi), operand.shape), 205 onp.multiply(interior, onp.subtract(operand.shape, 1))) 206 out = onp.full(outshape, padding_value, operand.dtype) 207 lhs_slices = tuple(_slice(l if l > 0 else 0, -h if h > 0 else None, step) 208 for l, h, step in zip(lo, hi, onp.add(1, interior))) 209 rhs_slices = tuple(_slice(l if l < 0 else 0, -h if h < 0 else None) 210 for l, h in zip(lo, hi)) 211 out[lhs_slices] = operand[rhs_slices] 212 return out 213 214 def rev(operand, dimensions): 215 dimensions = frozenset(dimensions) 216 indexer = (_slice(None, None, -1) if d in dimensions else _slice(None) 217 for d in range(onp.ndim(operand))) 218 return operand[tuple(indexer)] 219 220 select = onp.where 221 222 def slice(operand, start_indices, limit_indices, strides=None): # pylint: disable=redefined-builtin 223 if strides is None: 224 strides = onp.ones(len(start_indices)).astype(int) 225 slices = tuple(_map(_slice, start_indices, limit_indices, strides)) 226 return operand[slices] 227 228 def dynamic_slice(operand, start_indices, slice_sizes): 229 out = onp.zeros(slice_sizes, dtype=operand.dtype) 230 idx = tuple(_slice(start, start+size) 231 for start, size in zip(start_indices, slice_sizes)) 232 section = operand[idx] 233 out[tuple(_slice(None, stop) for stop in section.shape)] = section 234 return out 235 236 def dynamic_update_slice(operand, update, start_indices): 237 slices = tuple(_map(_slice, start_indices, onp.add(start_indices, update.shape))) 238 updated_operand = onp.copy(operand) 239 updated_operand[slices] = update 240 return updated_operand 241 242 transpose = onp.transpose 243 244 def reduce(operand, init_value, computation, dimensions): # pylint: disable=redefined-builtin 245 reducer = _make_reducer(computation, init_value) 246 return reducer(operand, tuple(dimensions)).astype(onp.asarray(operand).dtype) 247 248 def reduce_window(operand, init_value, computation, window_dimensions, 249 window_strides, padding): 250 op, dims, strides = operand, window_dimensions, window_strides 251 pads = padtype_to_pads(op.shape, dims, strides, padding) 252 view = _conv_view(op.reshape((1, 1) + op.shape), (1, 1) + dims, strides, pads, 253 
pad_value=init_value)[0] 254 view = view.reshape(view.shape[1:1+len(dims)] + (-1,)) 255 reducer = _make_reducer(computation, init_value) 256 return reducer(view, axis=-1) 257 258 # TODO(mattjj): select_and_scatter 259 260 sort = onp.sort 261 262 def sort_key_val(keys, values, dimension=-1): 263 idxs = list(onp.ix_(*[onp.arange(d) for d in keys.shape])) 264 idxs[dimension] = onp.argsort(keys, axis=dimension) 265 return keys[idxs], values[idxs] 266 267 # TODO untake 268 269 ### conv util 270 271 def _conv(lhs, rhs, window_strides, pads): 272 view, view_axes, rhs_axes, out_axes = _conv_view( 273 lhs, rhs.shape, window_strides, pads, 0.) 274 return opt_einsum.contract( 275 view, view_axes, rhs, rhs_axes, out_axes, use_blas=True) 276 277 def padtype_to_pads(in_shape, filter_shape, window_strides, padding): 278 if padding.upper() == 'SAME': 279 out_shape = onp.ceil(onp.true_divide(in_shape, window_strides)).astype(int) 280 pad_sizes = [_max((out_size - 1) * stride + filter_size - in_size, 0) 281 for out_size, stride, filter_size, in_size 282 in zip(out_shape, window_strides, filter_shape, in_shape)] 283 return [(pad_size // 2, pad_size - pad_size // 2) for pad_size in pad_sizes] 284 else: 285 return [(0, 0)] * len(in_shape) 286 287 def _conv_view(lhs, rhs_shape, window_strides, pads, pad_value): 288 """Compute the view (and its axes) of a convolution or window reduction.""" 289 if (_min(lhs.ndim, len(rhs_shape)) < 2 or lhs.ndim != len(rhs_shape) 290 or lhs.shape[1] != rhs_shape[1]): 291 raise ValueError('Dimension mismatch') 292 if len(window_strides) != len(rhs_shape) - 2: 293 raise ValueError('Wrong number of strides for spatial dimensions') 294 if len(pads) != len(rhs_shape) - 2: 295 raise ValueError('Wrong number of pads for spatial dimensions') 296 297 lhs = _pad(lhs, [(0, 0)] * 2 + list(pads), pad_value) 298 in_shape = lhs.shape[2:] 299 filter_shape = rhs_shape[2:] 300 dim = len(filter_shape) # number of 'spatial' dimensions in convolution 301 302 out_strides = onp.multiply(window_strides, lhs.strides[2:]) 303 view_strides = lhs.strides[:1] + tuple(out_strides) + lhs.strides[1:] 304 305 out_shape = onp.floor_divide( 306 onp.subtract(in_shape, filter_shape), window_strides) + 1 307 view_shape = lhs.shape[:1] + tuple(out_shape) + rhs_shape[1:] 308 309 view = onp.lib.stride_tricks.as_strided(lhs, view_shape, view_strides) 310 311 view_axes = list(range(view.ndim)) 312 sum_axes = view_axes[-dim-1:] 313 rhs_axes = [view.ndim] + sum_axes 314 out_axes = [0, view.ndim] + list(range(1, dim+1)) 315 316 return view, view_axes, rhs_axes, out_axes 317 318 def _pad(arr, pads, pad_value): 319 out = onp.pad(arr, onp.maximum(0, pads), mode='constant', 320 constant_values=pad_value).astype(arr.dtype) 321 slices = tuple(_slice(abs(lo) if lo < 0 else 0, hi % dim if hi < 0 else None) 322 for (lo, hi), dim in zip(pads, onp.shape(arr))) 323 return out[slices] 324 325 def _dilate(operand, factors): 326 # this logic is like lax.pad, but with two leading dimensions, no edge 327 # padding, and factors are at least 1 (interior padding is at least 0) 328 outspace = onp.add(operand.shape[2:], 329 onp.multiply(onp.subtract(factors, 1), 330 onp.subtract(operand.shape[2:], 1))) 331 out = onp.zeros(operand.shape[:2] + tuple(outspace), operand.dtype) 332 lhs_slices = tuple(_slice(None, None, step) for step in factors) 333 out[(_slice(None),) * 2 + lhs_slices] = operand 334 return out 335 336 def _conv_general_permutations(dimension_numbers): 337 lhs_spec, rhs_spec, out_spec = dimension_numbers 338 rhs_perm = 
((rhs_spec.index('O'), rhs_spec.index('I')) 339 + tuple(i for i, c in enumerate(rhs_spec) if c not in {'O', 'I'})) 340 lhs_perm = ((lhs_spec.index('N'), lhs_spec.index('C')) 341 + tuple(sorted((i for i, c in enumerate(lhs_spec) 342 if c not in {'N', 'C'}), 343 key=lambda i: rhs_spec.index(lhs_spec[i])))) 344 out_perm = ((out_spec.index('N'), out_spec.index('C')) 345 + tuple(sorted((i for i, c in enumerate(out_spec) 346 if c not in {'N', 'C'}), 347 key=lambda i: rhs_spec.index(out_spec[i])))) 348 return lhs_perm, rhs_perm, out_perm 349 350 ### reduce util 351 352 def _make_reducer(py_binop, init_val): 353 """Make a reducer function given a Python binop and an initial value.""" 354 # It's tempting to use onp.ufunc.reduce (even with a ufunc generated by 355 # onp.frompyfunc(py_binop)), but this may not agree with custom init_val. 356 # We make an attempt to uncover an underlying numpy ufunc (which might be 357 # wrapped by autograd or lax) and check its identity against init_val. 358 monoid_record = _monoids.get(getattr(py_binop, '__name__')) 359 if monoid_record: 360 reducer, monoid_identity = monoid_record 361 if init_val == monoid_identity(dtypes.result_type(init_val)): 362 return reducer 363 return _reducer_from_pyfunc(py_binop, init_val) 364 365 def _get_max_identity(dt): 366 return -onp.inf if dtypes.issubdtype(dt, onp.floating) else onp.iinfo(dt).min 367 368 def _get_min_identity(dt): 369 return onp.inf if dtypes.issubdtype(dt, onp.floating) else onp.iinfo(dt).max 370 371 def _identity_getter(op): 372 return lambda dtype: onp.asarray(op.identity, dtype=dtype) 373 374 MonoidRecord = collections.namedtuple('MonoidRecord', ['reducer', 'identity']) 375 _monoids = { 376 'max': MonoidRecord(onp.maximum.reduce, _get_max_identity), 377 'min': MonoidRecord(onp.minimum.reduce, _get_min_identity), 378 'add': MonoidRecord(onp.add.reduce, _identity_getter(onp.add)), 379 'mul': MonoidRecord(onp.multiply.reduce, _identity_getter(onp.multiply)), 380 'multiply': MonoidRecord(onp.multiply.reduce, 381 _identity_getter(onp.multiply)), 382 'logical_and': MonoidRecord(onp.logical_and.reduce, 383 _identity_getter(onp.logical_and)), 384 'logical_or': MonoidRecord(onp.logical_or.reduce, 385 _identity_getter(onp.logical_or)), 386 } 387 388 def _reducer_from_pyfunc(py_binop, init_val): 389 def reducer(operand, axis=0): 390 axis = range(onp.ndim(operand)) if axis is None else axis 391 result = onp.full(onp.delete(onp.shape(operand), axis), init_val, 392 dtype=onp.asarray(operand).dtype) 393 for idx, _ in onp.ndenumerate(operand): 394 out_idx = tuple(onp.delete(idx, axis)) 395 result[out_idx] = py_binop(result[out_idx], operand[idx]) 396 return result 397 return reducer 398 [end of jax/lax_reference.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
google/jax
ab2582585e0000e6a148a180ff419253c02390db
Nit: lax.conv_general_dilated does not support 0D-convolution AFAIK 0-dimensional convolution should reduce to a fully-connected layer, but fails with the following example and error. While this is not a practically useful scenario, degenerate cases like these could be useful for testing. ``` from jax import lax import jax.numpy as np lax.conv_general_dilated(lhs=np.ones((10, 5)), rhs=np.ones((5, 7)), strides=(), paddign='SAME', dimension_numbers=('NC', 'IO', 'NC')) ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-17-573e7695a148> in <module>() ----> 1 lax.conv_general_dilated(np.ones((10, 5)), np.ones((5, 7)), (), 'SAME', dimension_numbers=('NC', 'IO', 'NC')) 6 frames google3/third_party/py/jax/lax/lax.py in conv_general_dilated(lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation, dimension_numbers, feature_group_count, precision) 516 feature_group_count=feature_group_count, 517 lhs_shape=lhs.shape, rhs_shape=rhs.shape, --> 518 precision=_canonicalize_precision(precision)) 519 520 def dot(lhs, rhs, precision=None): google3/third_party/py/jax/core.py in bind(self, *args, **kwargs) 148 top_trace = find_top_trace(args) 149 if top_trace is None: --> 150 return self.impl(*args, **kwargs) 151 152 tracers = map(top_trace.full_raise, args) google3/third_party/py/jax/interpreters/xla.py in apply_primitive(prim, *args, **params) 150 def apply_primitive(prim, *args, **params): 151 """Impl rule that compiles and runs a single primitive 'prim' using XLA.""" --> 152 compiled_fun = xla_primitive_callable(prim, *map(arg_spec, args), **params) 153 return compiled_fun(*args) 154 google3/third_party/py/jax/interpreters/xla.py in xla_primitive_callable(prim, *arg_specs, **params) 174 device = device and next(d for d in all_devices if (type(d), d.id) == device) 175 backend = xb.get_device_backend(device) --> 176 aval_out = prim.abstract_eval(*avals, **params) 177 if prim.multiple_results: 178 handlers = tuple(map(aval_to_result_handler, aval_out)) google3/third_party/py/jax/lax/lax.py in standard_abstract_eval(prim, shape_rule, dtype_rule, *args, **kwargs) 1500 return ConcreteArray(prim.impl(*[x.val for x in args], **kwargs)) 1501 elif least_specialized is ShapedArray: -> 1502 return ShapedArray(shape_rule(*args, **kwargs), dtype_rule(*args, **kwargs)) 1503 elif least_specialized is UnshapedArray: 1504 return UnshapedArray(dtype_rule(*args, **kwargs)) google3/third_party/py/jax/lax/lax.py in _conv_general_dilated_shape_rule(lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation, dimension_numbers, feature_group_count, **unused_kwargs) 1943 lhs_trans = _dilate_shape(onp.take(lhs.shape, lhs_perm), lhs_dilation) 1944 rhs_trans = _dilate_shape(onp.take(rhs.shape, rhs_perm), rhs_dilation) -> 1945 out_trans = conv_shape_tuple(lhs_trans, rhs_trans, window_strides, padding) 1946 return tuple(onp.take(out_trans, onp.argsort(out_perm))) 1947 google3/third_party/py/jax/lax/lax.py in conv_shape_tuple(lhs_shape, rhs_shape, strides, pads) 4334 raise TypeError(msg.format(len(lhs_shape) - 2, len(pads))) 4335 -> 4336 lhs_padded = onp.add(lhs_shape[2:], onp.add(*zip(*pads))) 4337 out_space = onp.floor_divide( 4338 onp.subtract(lhs_padded, rhs_shape[2:]), strides) + 1 ValueError: invalid number of arguments ```
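The last frame of the traceback pinpoints the failure: with zero spatial dimensions, `pads` is an empty list, so `zip(*pads)` yields nothing and `onp.add(*...)` is called with no arguments at all. A small sketch that reproduces just that step in plain NumPy; it is illustrative only, and the sum-based rewrite at the end mirrors the expression used in the reference patch further down in this record:

```python
import numpy as onp

pads = []  # zero spatial dimensions -> no (low, high) padding pairs
try:
    onp.add(*zip(*pads))  # zip(*[]) is empty, so onp.add() receives zero arguments
except ValueError as exc:
    print(exc)  # invalid number of arguments

# An equivalent form that also handles the empty case gracefully:
print(onp.sum(onp.array(pads).reshape(-1, 2), axis=1))              # prints [] instead of raising
print(onp.sum(onp.array([(1, 2), (0, 3)]).reshape(-1, 2), axis=1))  # prints [3 3], matching onp.add(*zip(*pads))
```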
2020-01-09T18:16:17Z
<patch> diff --git a/jax/lax/lax.py b/jax/lax/lax.py --- a/jax/lax/lax.py +++ b/jax/lax/lax.py @@ -4320,7 +4320,8 @@ def conv_shape_tuple(lhs_shape, rhs_shape, strides, pads): msg = "Wrong number of explicit pads for convolution: expected {}, got {}." raise TypeError(msg.format(len(lhs_shape) - 2, len(pads))) - lhs_padded = onp.add(lhs_shape[2:], onp.add(*zip(*pads))) + lhs_padded = onp.add(lhs_shape[2:], onp.sum(onp.array(pads).reshape(-1, 2), + axis=1)) out_space = onp.floor_divide( onp.subtract(lhs_padded, rhs_shape[2:]), strides) + 1 out_space = onp.maximum(0, out_space) </patch>
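For completeness, a hypothetical check of the corrected expression on the 0-D shapes from the report (with the batch and feature dimensions already moved to the front, as the shape rule's permutation step does). Only the two lines visible in the hunk above are reproduced; the rest of `conv_shape_tuple` is not shown in this record:

```python
import numpy as onp

lhs_shape, rhs_shape = (10, 5), (7, 5)  # transposed lhs ('NC') and rhs ('IO', output features first); no spatial dims
strides, pads = (), []

lhs_padded = onp.add(lhs_shape[2:], onp.sum(onp.array(pads).reshape(-1, 2), axis=1))
out_space = onp.floor_divide(onp.subtract(lhs_padded, rhs_shape[2:]), strides) + 1
print(out_space.shape)  # (0,): an empty spatial shape instead of a ValueError
```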
[]
[]
Qiskit__qiskit-892
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Loading Qiskit with no internet results in a ConnectionError <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues --> ### What is the current behavior? Loading Qiskit with no Internet results in a `ConnectionError`. ### Steps to reproduce the problem Run Qiskit without Internet connection. ### What is the expected behavior? Qiskit prompt no error at all. </issue> <code> [start of README.md] 1 # Quantum Information Science Kit (Qiskit) 2 3 [![PyPI](https://img.shields.io/pypi/v/qiskit.svg)](https://pypi.python.org/pypi/qiskit) 4 [![Build Status](https://travis-ci.org/Qiskit/qiskit-terra.svg?branch=master)](https://travis-ci.org/Qiskit/qiskit-terra) 5 [![Build Status IBM Q](https://travis-matrix-badges.herokuapp.com/repos/Qiskit/qiskit-terra/branches/master/8)](https://travis-ci.org/Qiskit/qiskit-terra) 6 7 The Quantum Information Science Kit (**Qiskit** for short) is a software development kit (SDK) for 8 working with [OpenQASM](https://github.com/Qiskit/qiskit-openqasm) and the 9 [IBM Q Experience (QX)](https://quantumexperience.ng.bluemix.net/). 10 11 Use **Qiskit** to create quantum computing programs, compile them, and execute them on one of 12 several backends (online Real quantum processors, online simulators, and local simulators). For 13 the online backends, Qiskit uses our [python API client](https://github.com/Qiskit/qiskit-api-py) 14 to connect to the IBM Q Experience. 15 16 **We use GitHub issues for tracking requests and bugs. Please see the** 17 [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) **for 18 questions and discussion.** 19 20 **If you'd like to contribute to Qiskit, please take a look at our** 21 [contribution guidelines](.github/CONTRIBUTING.rst). 22 23 Links to Sections: 24 25 * [Installation](#installation) 26 * [Creating your first Quantum Program](#creating-your-first-quantum-program) 27 * [More Information](#more-information) 28 * [Authors](#authors-alphabetical) 29 30 ## Installation 31 32 ### Dependencies 33 34 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using Qiskit. In 35 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended 36 for interacting with the tutorials. 37 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads) 38 python distribution, as it comes with all of these dependencies pre-installed. 39 40 In addition, a basic understanding of quantum information is very helpful when interacting with 41 Qiskit. If you're new to quantum, start with our 42 [User Guides](https://github.com/Qiskit/ibmqx-user-guides)! 43 44 ### Instructions 45 46 We encourage to install Qiskit via the PIP tool (a python package manager): 47 48 ```bash 49 pip install qiskit 50 ``` 51 52 PIP will handle all dependencies automatically for us and you will always install the latest (and well-tested) version. 53 54 PIP package comes with prebuilt binaries for these platforms: 55 56 * Linux x86_64 57 * Darwin 58 * Win64 59 60 If your platform is not in the list, PIP will try to build from the sources at installation time. It will require to have CMake 3.5 or higher pre-installed and at least one of the [build environments supported by CMake](https://cmake.org/cmake/help/v3.5/manual/cmake-generators.7.html). 
61 62 If during the installation PIP doesn't succeed to build, don't worry, you will have Qiskit installed at the end but you probably couldn't take advantage of some of the high-performance components. Anyway, we always provide a python, not-so-fast alternative as a fallback. 63 64 #### Setup your environment 65 66 We recommend using python virtual environments to improve your experience. Refer to our 67 [Environment Setup documentation](doc/install.rst#3.1-Setup-the-environment) for more information. 68 69 ## Creating your first Quantum Program 70 71 Now that the SDK is installed, it's time to begin working with Qiskit. 72 73 We are ready to try out a quantum circuit example, which runs via the local simulator. 74 75 This is a simple example that makes an entangled state. 76 77 ```python 78 # Import the Qiskit SDK 79 from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister 80 from qiskit import available_backends, execute 81 82 # Create a Quantum Register with 2 qubits. 83 q = QuantumRegister(2) 84 # Create a Classical Register with 2 bits. 85 c = ClassicalRegister(2) 86 # Create a Quantum Circuit 87 qc = QuantumCircuit(q, c) 88 89 # Add a H gate on qubit 0, putting this qubit in superposition. 90 qc.h(q[0]) 91 # Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting 92 # the qubits in a Bell state. 93 qc.cx(q[0], q[1]) 94 # Add a Measure gate to see the state. 95 qc.measure(q, c) 96 97 # See a list of available local simulators 98 print("Local backends: ", available_backends({'local': True})) 99 100 # Compile and run the Quantum circuit on a simulator backend 101 job_sim = execute(qc, "local_qasm_simulator") 102 sim_result = job_sim.result() 103 104 # Show the results 105 print("simulation: ", sim_result) 106 print(sim_result.get_counts(qc)) 107 ``` 108 109 In this case, the output will be: 110 111 ```python 112 COMPLETED 113 {'counts': {'00': 512, '11': 512}} 114 ``` 115 116 This script is available [here](examples/python/hello_quantum.py), where we also show how to 117 run the same program on a real quantum computer. 118 119 ### Executing your code on a real Quantum chip 120 121 You can also use Qiskit to execute your code on a 122 [real quantum chip](https://github.com/Qiskit/ibmqx-backend-information). 123 In order to do so, you need to configure the SDK for using the credentials in 124 your IBM Q Experience account: 125 126 #### Configure your API token and QX credentials 127 128 1. Create an _[IBM Q Experience](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so. 129 130 2. Get an API token from the IBM Q Experience website under _My Account > Advanced > API Token_. This API token allows you to execute your programs with the IBM Q Experience backends. See: [Example](doc/example_real_backend.rst). 131 132 3. We are now going to add the necessary credentials to QISKit. Take your token 133 from step 2, here called `MY_API_TOKEN`, and pass it to the 134 `store_credentials` function: 135 136 ```python 137 from qiskit import store_credentials 138 139 store_credentials('MY_API_TOKEN') 140 ``` 141 142 4. If you have access to the IBM Q Network features, you also need to pass the 143 url listed on your IBM Q account page to `store_credentials`. 144 145 After calling `store_credentials()`, your credentials will be stored into disk. 
146 Once they are stored, Qiskit will automatically load and use them in your program 147 via: 148 149 ```python 150 from qiskit import register 151 152 register() 153 ``` 154 155 For more details on installing Qiskit and for alternative methods for passing 156 the IBM QX credentials, such as using environment variables, sending them 157 explicitly and support for the `Qconfig.py` method available in previous 158 versions, please check 159 [our Qiskit documentation](https://www.qiskit.org/documentation/). 160 161 ### Next Steps 162 163 Now you're set up and ready to check out some of the other examples from our 164 [Tutorial](https://github.com/Qiskit/qiskit-tutorial) repository. Start with the 165 [index tutorial](https://github.com/Qiskit/qiskit-tutorial/blob/master/index.ipynb) and then go to 166 the [‘Getting Started’ example](https://github.com/Qiskit/qiskit-tutorial/blob/master/reference/tools/getting_started.ipynb). 167 If you already have [Jupyter Notebooks installed](https://jupyter.readthedocs.io/en/latest/install.html), 168 you can copy and modify the notebooks to create your own experiments. 169 170 To install the tutorials as part of the Qiskit SDK, see the following 171 [installation details](doc/install.rst#Install-Jupyter-based-tutorials). Complete SDK 172 documentation can be found in the [*doc* directory](doc/qiskit.rst) and in 173 [the official Qiskit site](https://www.qiskit.org/documentation). 174 175 ## More Information 176 177 For more information on how to use Qiskit, tutorial examples, and other helpful links, take a look 178 at these resources: 179 180 * **[User Guides](https://github.com/Qiskit/ibmqx-user-guides)**, 181 a good starting place for learning about quantum information and computing 182 * **[Tutorials](https://github.com/Qiskit/qiskit-tutorial)**, 183 for example notebooks, start with the [index](https://github.com/Qiskit/qiskit-tutorial/blob/master/index.ipynb) and [‘Getting Started’ Jupyter notebook](https://github.com/Qiskit/qiskit-tutorial/blob/002d054c72fc59fc5009bb9fa0ee393e15a69d07/1_introduction/getting_started.ipynb) 184 * **[OpenQASM](https://github.com/Qiskit/openqasm)**, 185 for additional information and examples of QASM code 186 * **[IBM Quantum Experience Composer](https://quantumexperience.ng.bluemix.net/qx/editor)**, 187 a GUI for interacting with real and simulated quantum computers 188 * **[QISkit Python API](https://github.com/Qiskit/qiskit-api-py)**, an API to use the IBM Quantum 189 Experience in Python 190 191 Qiskit was originally developed by researchers and developers on the 192 [IBM-Q](http://www.research.ibm.com/ibm-q/) Team at [IBM Research](http://www.research.ibm.com/), 193 with the aim of offering a high level development kit to work with quantum computers. 194 195 Visit the [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) for 196 questions and discussions on Qiskit and quantum computing more broadly. If you'd like to 197 contribute to Qiskit, please take a look at our [contribution guidelines](.github/CONTRIBUTING.rst). 198 199 ## Multilanguage guide 200 201 * **[Korean Translation](doc/ko/README.md)** - basic guide line written in Korean. 202 * **[Chinese Translation](doc/zh/README.md)** - basic guide line written in Chinese. 203 204 ## Authors (alphabetical) 205 206 Qiskit was originally authored by 207 Luciano Bello, Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Juan Gomez, 208 Ali Javadi-Abhari, Paco Martin, Diego Moreda, Jesus Perez, Erick Winston and Chris Wood. 
209 210 And continues to grow with the help and work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 211 to the project at different levels. 212 [end of README.md] [start of doc/conf.py] 1 #!/usr/bin/env python3 2 # -*- coding: utf-8 -*- 3 # 4 # Qiskit documentation build configuration file, created by 5 # sphinx-quickstart on Tue Jul 25 18:13:28 2017. 6 # 7 # This file is execfile()d with the current directory set to its 8 # containing dir. 9 # 10 # Note that not all possible configuration values are present in this 11 # autogenerated file. 12 # 13 # All configuration values have a default; values that are commented out 14 # serve to show the default. 15 16 # If extensions (or modules to document with autodoc) are in another directory, 17 # add these directories to sys.path here. If the directory is relative to the 18 # documentation root, use os.path.abspath to make it absolute, like shown here. 19 # 20 import os 21 import sys 22 from qiskit import __version__ 23 sys.path.insert(0, os.path.abspath('.')) 24 25 # Imported manually, as otherwise it will not be fully imported. 26 import qiskit.extensions.simulator 27 28 # -- General configuration ------------------------------------------------ 29 30 # If your documentation needs a minimal Sphinx version, state it here. 31 # 32 # needs_sphinx = '1.0' 33 34 # Add any Sphinx extension module names here, as strings. They can be 35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 36 # ones. 37 extensions = ['sphinx.ext.autodoc', 38 'sphinx.ext.autosummary', 39 'sphinx.ext.napoleon', 40 'sphinx.ext.doctest', 41 'sphinx.ext.coverage', 42 'sphinx.ext.mathjax', 43 'sphinx.ext.viewcode', 44 'sphinx.ext.githubpages', 45 'sphinxcontrib.fulltoc'] 46 47 # Napoleon settings 48 napoleon_google_docstring = True 49 napoleon_numpy_docstring = False 50 napoleon_include_init_with_doc = True 51 napoleon_include_private_with_doc = False 52 napoleon_include_special_with_doc = False 53 napoleon_use_admonition_for_examples = False 54 napoleon_use_admonition_for_notes = False 55 napoleon_use_admonition_for_references = False 56 napoleon_use_ivar = False 57 napoleon_use_param = True 58 napoleon_use_rtype = True 59 60 autoclass_content = 'both' 61 62 # Add any paths that contain templates here, relative to this directory. 63 templates_path = ['_templates'] 64 65 # The suffix(es) of source filenames. 66 # You can specify multiple suffix as a list of string: 67 # 68 # source_suffix = ['.rst', '.md'] 69 source_suffix = '.rst' 70 71 # The master toctree document. 72 master_doc = 'index' 73 74 # General information about the project. 75 project = 'Qiskit SDK' 76 copyright = '2017-2018 IBM Research' 77 author = 'IBM Research' 78 79 # Add description 80 html_context = { 81 'description': 'Quantum Information Science Kit' 82 } 83 84 # The version info for the project you're documenting, acts as replacement for 85 # |version| and |release|, also used in various other places throughout the 86 # built documents. 87 # 88 # The short X.Y version. 89 version = __version__ 90 # The full version, including alpha/beta/rc tags. 91 release = version 92 93 # The language for content autogenerated by Sphinx. Refer to documentation 94 # for a list of supported languages. 95 # 96 # This is also used if you do content translation via gettext catalogs. 97 # Usually you set "language" from the command line for these cases. 
98 language = None 99 100 # List of patterns, relative to source directory, that match files and 101 # directories to ignore when looking for source files. 102 # This patterns also effect to html_static_path and html_extra_path 103 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 104 '_autodoc/modules.rst', 'de', 'ja'] 105 106 # The name of the Pygments (syntax highlighting) style to use. 107 pygments_style = 'sphinx' 108 109 # If true, `todo` and `todoList` produce output, else they produce nothing. 110 todo_include_todos = False 111 112 113 # -- Options for HTML output ---------------------------------------------- 114 115 # The theme to use for HTML and HTML Help pages. See the documentation for 116 # a list of builtin themes. 117 # 118 # html_theme = 'alabaster' 119 # html_theme = 'bizstyle' 120 # html_theme = agogo 121 122 html_theme = 'theme' # use the theme in subdir 'theme' 123 html_theme_path = ['./'] # make sphinx search for themes in current dir 124 125 126 # Theme options are theme-specific and customize the look and feel of a theme 127 # further. For a list of options available for each theme, see the 128 # documentation. 129 # 130 html_theme_options = {} 131 132 # Add any paths that contain custom static files (such as style sheets) here, 133 # relative to this directory. They are copied after the builtin static files, 134 # so a file named "default.css" will overwrite the builtin "default.css". 135 html_static_path = [] 136 137 # The name of an image file (relative to this directory) to place at the top 138 # of the sidebar. 139 html_logo = 'theme/static/qiskit-logo-white-no-margin.gif' 140 141 html_favicon = 'theme/static/favicon.ico' 142 143 html_last_updated_fmt = '%Y/%m/%d' 144 145 # -- Options for HTMLHelp output ------------------------------------------ 146 147 # Output file base name for HTML help builder. 148 htmlhelp_basename = 'QISKitdoc' 149 150 151 # -- Options for LaTeX output --------------------------------------------- 152 153 latex_elements = { 154 # The paper size ('letterpaper' or 'a4paper'). 155 # 156 # 'papersize': 'letterpaper', 157 158 # The font size ('10pt', '11pt' or '12pt'). 159 # 160 # 'pointsize': '10pt', 161 162 # Additional stuff for the LaTeX preamble. 163 # 164 # 'preamble': '', 165 166 # Latex figure (float) alignment 167 # 168 # 'figure_align': 'htbp', 169 } 170 171 # Grouping the document tree into LaTeX files. List of tuples 172 # (source start file, target name, title, 173 # author, documentclass [howto, manual, or own class]). 174 latex_documents = [ 175 (master_doc, 'QISKit.tex', 'Qiskit Documentation', 176 '''Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Jesus Perez, 177 and John Smolin''', 'manual'), 178 ] 179 180 181 # -- Options for manual page output --------------------------------------- 182 183 # One entry per manual page. List of tuples 184 # (source start file, name, description, authors, manual section). 185 man_pages = [ 186 (master_doc, 'qiskit', 'Qiskit Documentation', 187 [author], 1) 188 ] 189 190 191 # -- Options for Texinfo output ------------------------------------------- 192 193 # Grouping the document tree into Texinfo files. 
List of tuples 194 # (source start file, target name, title, author, 195 # dir menu entry, description, category) 196 texinfo_documents = [ 197 (master_doc, 'Qiskit', 'Qiskit Documentation', 198 author, 'Qiskit', 'One line description of project.', 199 'Miscellaneous'), 200 ] 201 202 203 # Avoid a warning and treat the docstrings of the QasmLexer tokens as verbatim, 204 # as PLY uses docstring as a way to define the patterns the token matches. 205 def remove_module_docstring(app, what, name, obj, options, lines): 206 if name.startswith('qiskit.qasm._qasmlexer.QasmLexer.t_') and lines: 207 lines[0] = u'Token matching: ``%s``' % lines[0] 208 209 210 def setup(app): 211 app.connect('autodoc-process-docstring', remove_module_docstring) 212 [end of doc/conf.py] [start of qiskit/backends/ibmq/ibmqjob.py] 1 # -*- coding: utf-8 -*- 2 3 # Copyright 2017, IBM. 4 # 5 # This source code is licensed under the Apache License, Version 2.0 found in 6 # the LICENSE.txt file in the root directory of this source tree. 7 8 """IBMQJob module 9 10 This module is used for creating asynchronous job objects for the 11 IBM Q Experience. 12 """ 13 14 from concurrent import futures 15 import time 16 import logging 17 import pprint 18 import contextlib 19 import json 20 import datetime 21 import numpy 22 23 from IBMQuantumExperience import ApiError 24 25 from qiskit.qobj import qobj_to_dict 26 from qiskit.transpiler import transpile 27 from qiskit.backends import BaseJob, JobError, JobTimeoutError 28 from qiskit.backends.jobstatus import JobStatus, JOB_FINAL_STATES 29 from qiskit.result._utils import result_from_old_style_dict 30 from qiskit.qobj import validate_qobj_against_schema 31 32 logger = logging.getLogger(__name__) 33 34 35 API_FINAL_STATES = ( 36 'COMPLETED', 37 'CANCELLED', 38 'ERROR_CREATING_JOB', 39 'ERROR_VALIDATING_JOB', 40 'ERROR_RUNNING_JOB' 41 ) 42 43 44 class IBMQJob(BaseJob): 45 """Represent the jobs that will be executed on IBM-Q simulators and real 46 devices. Jobs are intended to be created calling ``run()`` on a particular 47 backend. 48 49 Creating a ``Job`` instance does not imply running it. You need to do it in 50 separate steps:: 51 52 job = IBMQJob(...) 53 job.submit() # It won't block. 54 55 An error while submitting a job will cause the next call to ``status()`` to 56 raise. If submitting the job successes, you can inspect the job's status by 57 using ``status()``. Status can be one of ``JobStatus`` members:: 58 59 from qiskit.backends.jobstatus import JobStatus 60 61 job = IBMQJob(...) 62 job.submit() 63 64 try: 65 job_status = job.status() # It won't block. It will query the backend API. 66 if job_status is JobStatus.RUNNING: 67 print('The job is still running') 68 69 except JobError as ex: 70 print("Something wrong happened!: {}".format(ex)) 71 72 A call to ``status()`` can raise if something happens at the API level that 73 prevents Qiskit from determining the status of the job. An example of this 74 is a temporary connection lose or a network failure. 75 76 The ``submit()`` and ``status()`` methods are examples of non-blocking API. 77 ``Job`` instances also have `id()` and ``result()`` methods which will 78 block:: 79 80 job = IBMQJob(...) 81 job.submit() 82 83 try: 84 job_id = job.id() # It will block until completing submission. 85 print('The job {} was successfully submitted'.format(job_id)) 86 87 job_result = job.result() # It will block until finishing. 
88 print('The job finished with result {}'.format(job_result)) 89 90 except JobError as ex: 91 print("Something wrong happened!: {}".format(ex)) 92 93 94 Both methods can raise if something ath the API level happens that prevent 95 Qiskit from determining the status of the job. 96 97 .. NOTE:: 98 When querying the API for getting the status, two kinds of errors are 99 possible. The most severe is the one preventing Qiskit from getting a 100 response from the backend. This can be caused by a network failure or a 101 temporary system break. In these cases, calling ``status()`` will raise. 102 103 If Qiskit successfully retrieves the status of a job, it could be it 104 finished with errors. In that case, ``status()`` will simply return 105 ``JobStatus.ERROR`` and you can call ``error_message()`` to get more 106 info. 107 108 Attributes: 109 _executor (futures.Executor): executor to handle asynchronous jobs 110 """ 111 _executor = futures.ThreadPoolExecutor() 112 113 def __init__(self, api, is_device, qobj=None, job_id=None, backend_name=None, 114 creation_date=None): 115 """IBMQJob init function. 116 We can instantiate jobs from two sources: A QObj, and an already submitted job returned by 117 the API servers. 118 119 Args: 120 api (IBMQuantumExperience): IBM Q API 121 is_device (bool): whether backend is a real device # TODO: remove this after Qobj 122 qobj (Qobj): The Quantum Object. See notes below 123 job_id (String): The job ID of an already submitted job. 124 backend_name(String): The name of the backend that run the job. 125 creation_date(String): When the job was run. 126 127 Notes: 128 It is mandatory to pass either ``qobj`` or ``job_id``. Passing a ``qobj`` 129 will ignore ``job_id`` and will create an instance representing 130 an already-created job retrieved from the API server. 131 """ 132 super().__init__() 133 self._job_data = None 134 135 if qobj is not None: 136 validate_qobj_against_schema(qobj) 137 138 # TODO: No need for this conversion, just use the new equivalent members above 139 old_qobj = qobj_to_dict(qobj, version='0.0.1') 140 self._job_data = { 141 'circuits': old_qobj['circuits'], 142 'hpc': old_qobj['config'].get('hpc'), 143 'seed': old_qobj['circuits'][0]['config']['seed'], 144 'shots': old_qobj['config']['shots'], 145 'max_credits': old_qobj['config']['max_credits'] 146 } 147 148 self._future_captured_exception = None 149 self._api = api 150 self._id = job_id 151 self._backend_name = qobj.header.backend_name if qobj is not None else backend_name 152 self._status = JobStatus.INITIALIZING 153 # In case of not providing a qobj, it assumes job_id has been provided 154 # and query the API for updating the status. 155 if qobj is None: 156 self.status() 157 self._queue_position = None 158 self._cancelled = False 159 self._is_device = is_device 160 161 def current_utc_time(): 162 """Gets the current time in UTC format""" 163 datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat() 164 165 self._creation_date = creation_date or current_utc_time() 166 self._future = None 167 self._api_error_msg = None 168 169 # pylint: disable=arguments-differ 170 def result(self, timeout=None, wait=5): 171 """Return the result from the job. 
172 173 Args: 174 timeout (int): number of seconds to wait for job 175 wait (int): time between queries to IBM Q server 176 177 Returns: 178 qiskit.Result: Result object 179 180 Raises: 181 JobError: exception raised during job initialization 182 """ 183 self._wait_for_submission() 184 try: 185 job_data = self._wait_for_job(timeout=timeout, wait=wait) 186 except ApiError as api_err: 187 raise JobError(str(api_err)) 188 189 if self._is_device and self.status() == JobStatus.DONE: 190 _reorder_bits(job_data) 191 192 # Build the Result. 193 job_result_list = [] 194 for circuit_result in job_data['qasms']: 195 this_result = {'data': circuit_result['data'], 196 'name': circuit_result.get('name'), 197 'compiled_circuit_qasm': circuit_result.get('qasm'), 198 'status': circuit_result['status'], 199 'success': circuit_result['status'] == 'DONE', 200 'shots': job_data['shots']} 201 if 'metadata' in circuit_result: 202 this_result['metadata'] = circuit_result['metadata'] 203 204 job_result_list.append(this_result) 205 206 return result_from_old_style_dict({ 207 'id': self._id, 208 'status': job_data['status'], 209 'used_credits': job_data.get('usedCredits'), 210 'result': job_result_list, 211 'backend_name': self.backend_name(), 212 'success': job_data['status'] == 'DONE' 213 }, [circuit_result['name'] for circuit_result in job_data['qasms']]) 214 215 def cancel(self): 216 """Attempt to cancel a job. 217 218 Returns: 219 bool: True if job can be cancelled, else False. Currently this is 220 only possible on commercial systems. 221 222 Raises: 223 JobError: if there was some unexpected failure in the server 224 """ 225 hub = self._api.config.get('hub', None) 226 group = self._api.config.get('group', None) 227 project = self._api.config.get('project', None) 228 229 try: 230 response = self._api.cancel_job(self._id, hub, group, project) 231 self._cancelled = 'error' not in response 232 return self._cancelled 233 except ApiError as error: 234 self._cancelled = False 235 raise JobError('Error cancelling job: %s' % error.usr_msg) 236 237 def status(self): 238 """Query the API to update the status. 239 240 Returns: 241 JobStatus: The status of the job, once updated. 242 243 Raises: 244 JobError: if there was an exception in the future being executed 245 or the server sent an unknown answer. 246 """ 247 248 # Implies self._id is None 249 if self._future_captured_exception is not None: 250 raise JobError(str(self._future_captured_exception)) 251 252 if self._id is None or self._status in JOB_FINAL_STATES: 253 return self._status 254 255 try: 256 # TODO: See result values 257 api_job = self._api.get_status_job(self._id) 258 if 'status' not in api_job: 259 raise JobError('get_job didn\'t return status: %s' % 260 pprint.pformat(api_job)) 261 # pylint: disable=broad-except 262 except Exception as err: 263 raise JobError(str(err)) 264 265 if api_job['status'] == 'VALIDATING': 266 self._status = JobStatus.VALIDATING 267 268 elif api_job['status'] == 'RUNNING': 269 self._status = JobStatus.RUNNING 270 queued, self._queue_position = _is_job_queued(api_job) 271 if queued: 272 self._status = JobStatus.QUEUED 273 274 elif api_job['status'] == 'COMPLETED': 275 self._status = JobStatus.DONE 276 277 elif api_job['status'] == 'CANCELLED': 278 self._status = JobStatus.CANCELLED 279 self._cancelled = True 280 281 elif 'ERROR' in api_job['status']: 282 # Error status are of the form "ERROR_*_JOB" 283 self._status = JobStatus.ERROR 284 # TODO: This seems to be an inconsistency in the API package. 
285 self._api_error_msg = api_job.get('error') or api_job.get('Error') 286 287 else: 288 raise JobError('Unrecognized answer from server: \n{}' 289 .format(pprint.pformat(api_job))) 290 291 return self._status 292 293 def error_message(self): 294 """Return the error message returned from the API server response.""" 295 return self._api_error_msg 296 297 def queue_position(self): 298 """Return the position in the server queue. 299 300 Returns: 301 Number: Position in the queue. 302 """ 303 return self._queue_position 304 305 def creation_date(self): 306 """ 307 Return creation date. 308 """ 309 return self._creation_date 310 311 # pylint: disable=invalid-name 312 def id(self): 313 """Return backend determined id. 314 315 If the Id is not set because the job is already initializing, this call 316 will block until we have an Id. 317 """ 318 self._wait_for_submission() 319 return self._id 320 321 def backend_name(self): 322 """Return backend name used for this job.""" 323 return self._backend_name 324 325 def submit(self): 326 """Submit job to IBM-Q. 327 328 Raises: 329 JobError: If we have already submitted the job. 330 """ 331 # TODO: Validation against the schema should be done here and not 332 # during initiliazation. Once done, we should document that the method 333 # can raise QobjValidationError. 334 if self._future is not None or self._id is not None: 335 raise JobError("We have already submitted the job!") 336 337 api_jobs = [] 338 circuits = self._job_data['circuits'] 339 for circuit in circuits: 340 job = _create_api_job_from_circuit(circuit) 341 api_jobs.append(job) 342 343 hpc = self._job_data['hpc'] 344 seed = self._job_data['seed'] 345 shots = self._job_data['shots'] 346 max_credits = self._job_data['max_credits'] 347 348 hpc_camel_cased = _format_hpc_parameters(hpc) 349 350 self._future = self._executor.submit(self._submit_callback, api_jobs, 351 self._backend_name, hpc_camel_cased, 352 seed, shots, max_credits) 353 354 def _submit_callback(self, api_jobs, backend_name, hpc, seed, shots, max_credits): 355 """Submit job to IBM-Q. 356 357 Args: 358 api_jobs (list): List of API job dictionaries to submit. One per circuit. 359 backend_name (string): The name of the backend 360 hpc (dict): HPC specific configuration 361 seed (integer): The seed for the circuits 362 shots (integer): Number of shots the circuits should run 363 max_credits (integer): Maximum number of credits 364 365 Returns: 366 dict: A dictionary with the response of the submitted job 367 """ 368 try: 369 submit_info = self._api.run_job(api_jobs, backend=backend_name, 370 shots=shots, max_credits=max_credits, 371 seed=seed, hpc=hpc) 372 # pylint: disable=broad-except 373 except Exception as err: 374 # Undefined error during submission: 375 # Capture and keep it for raising it when calling status(). 376 self._future_captured_exception = err 377 return None 378 379 # Error in the job after submission: 380 # Transition to the `ERROR` final state. 381 if 'error' in submit_info: 382 self._status = JobStatus.ERROR 383 self._api_error_msg = str(submit_info['error']) 384 return submit_info 385 386 # Submisssion success. 387 self._creation_date = submit_info.get('creationDate') 388 self._status = JobStatus.QUEUED 389 self._id = submit_info.get('id') 390 return submit_info 391 392 def _wait_for_job(self, timeout=60, wait=5): 393 """Wait until all online ran circuits of a qobj are 'COMPLETED'. 394 395 Args: 396 timeout (float or None): seconds to wait for job. If None, wait 397 indefinitely. 
398 wait (float): seconds between queries 399 400 Returns: 401 dict: A dict with the contents of the API request. 402 403 Raises: 404 JobTimeoutError: if the job does not return results before a specified timeout. 405 JobError: if something wrong happened in some of the server API calls 406 """ 407 start_time = time.time() 408 while self.status() not in JOB_FINAL_STATES: 409 elapsed_time = time.time() - start_time 410 if timeout is not None and elapsed_time >= timeout: 411 raise JobTimeoutError( 412 'Timeout while waiting for the job: {}'.format(self._id) 413 ) 414 415 logger.info('status = %s (%d seconds)', self._status, elapsed_time) 416 time.sleep(wait) 417 418 if self._cancelled: 419 raise JobError( 420 'Job result impossible to retrieve. The job was cancelled.') 421 422 return self._api.get_job(self._id) 423 424 def _wait_for_submission(self, timeout=60): 425 """Waits for the request to return a job ID""" 426 if self._id is None: 427 if self._future is None: 428 raise JobError("You have to submit before asking for status or results!") 429 try: 430 submit_info = self._future.result(timeout=timeout) 431 if self._future_captured_exception is not None: 432 # pylint can't see if catch of None type 433 # pylint: disable=raising-bad-type 434 raise self._future_captured_exception 435 except TimeoutError as ex: 436 raise JobTimeoutError( 437 "Timeout waiting for the job being submitted: {}".format(ex) 438 ) 439 if 'error' in submit_info: 440 self._status = JobStatus.ERROR 441 self._api_error_msg = str(submit_info['error']) 442 raise JobError(str(submit_info['error'])) 443 444 445 def _reorder_bits(job_data): 446 """Temporary fix for ibmq backends. 447 448 For every ran circuit, get reordering information from qobj 449 and apply reordering on result. 450 451 Args: 452 job_data (dict): dict with the bare contents of the API.get_job request. 453 454 Raises: 455 JobError: raised if the creg sizes don't add up in result header. 
456 """ 457 for circuit_result in job_data['qasms']: 458 if 'metadata' in circuit_result: 459 circ = circuit_result['metadata'].get('compiled_circuit') 460 else: 461 logger.warning('result object missing metadata for reordering' 462 ' bits: bits may be out of order') 463 return 464 # device_qubit -> device_clbit (how it should have been) 465 measure_dict = {op['qubits'][0]: op['clbits'][0] 466 for op in circ['operations'] 467 if op['name'] == 'measure'} 468 counts_dict_new = {} 469 for item in circuit_result['data']['counts'].items(): 470 # fix clbit ordering to what it should have been 471 bits = list(item[0]) 472 bits.reverse() # lsb in 0th position 473 count = item[1] 474 reordered_bits = list('x' * len(bits)) 475 for device_clbit, bit in enumerate(bits): 476 if device_clbit in measure_dict: 477 correct_device_clbit = measure_dict[device_clbit] 478 reordered_bits[correct_device_clbit] = bit 479 reordered_bits.reverse() 480 481 # only keep the clbits specified by circuit, not everything on device 482 num_clbits = circ['header']['number_of_clbits'] 483 compact_key = reordered_bits[-num_clbits:] 484 compact_key = "".join([b if b != 'x' else '0' 485 for b in compact_key]) 486 487 # insert spaces to signify different classical registers 488 cregs = circ['header']['clbit_labels'] 489 if sum([creg[1] for creg in cregs]) != num_clbits: 490 raise JobError("creg sizes don't add up in result header.") 491 creg_begin_pos = [] 492 creg_end_pos = [] 493 acc = 0 494 for creg in reversed(cregs): 495 creg_size = creg[1] 496 creg_begin_pos.append(acc) 497 creg_end_pos.append(acc + creg_size) 498 acc += creg_size 499 compact_key = " ".join([compact_key[creg_begin_pos[i]:creg_end_pos[i]] 500 for i in range(len(cregs))]) 501 502 # marginalize over unwanted measured qubits 503 if compact_key not in counts_dict_new: 504 counts_dict_new[compact_key] = count 505 else: 506 counts_dict_new[compact_key] += count 507 508 circuit_result['data']['counts'] = counts_dict_new 509 510 511 def _numpy_type_converter(obj): 512 ret = obj 513 if isinstance(obj, numpy.integer): 514 ret = int(obj) 515 elif isinstance(obj, numpy.floating): # pylint: disable=no-member 516 ret = float(obj) 517 elif isinstance(obj, numpy.ndarray): 518 ret = obj.tolist() 519 return ret 520 521 522 def _create_api_job_from_circuit(circuit): 523 """Helper function that creates a special job required by the API, from a circuit.""" 524 api_job = {} 525 if not circuit.get('compiled_circuit_qasm'): 526 compiled_circuit = transpile(circuit['circuit']) 527 circuit['compiled_circuit_qasm'] = compiled_circuit.qasm(qeflag=True) 528 529 if isinstance(circuit['compiled_circuit_qasm'], bytes): 530 api_job['qasm'] = circuit['compiled_circuit_qasm'].decode() 531 else: 532 api_job['qasm'] = circuit['compiled_circuit_qasm'] 533 534 if circuit.get('name'): 535 api_job['name'] = circuit['name'] 536 537 # convert numpy types for json serialization 538 compiled_circuit = json.loads(json.dumps(circuit['compiled_circuit'], 539 default=_numpy_type_converter)) 540 541 api_job['metadata'] = {'compiled_circuit': compiled_circuit} 542 return api_job 543 544 545 def _is_job_queued(api_job_response): 546 """Checks whether a job has been queued or not.""" 547 is_queued, position = False, 0 548 if 'infoQueue' in api_job_response: 549 if 'status' in api_job_response['infoQueue']: 550 queue_status = api_job_response['infoQueue']['status'] 551 is_queued = queue_status == 'PENDING_IN_QUEUE' 552 if 'position' in api_job_response['infoQueue']: 553 position = 
api_job_response['infoQueue']['position'] 554 return is_queued, position 555 556 557 def _format_hpc_parameters(hpc): 558 """Helper function to get HPC parameters with the correct format""" 559 if hpc is None: 560 return None 561 562 hpc_camel_cased = None 563 with contextlib.suppress(KeyError, TypeError): 564 # Use CamelCase when passing the hpc parameters to the API. 565 hpc_camel_cased = { 566 'multiShotOptimization': hpc['multi_shot_optimization'], 567 'ompNumThreads': hpc['omp_num_threads'] 568 } 569 570 return hpc_camel_cased 571 [end of qiskit/backends/ibmq/ibmqjob.py] [start of qiskit/tools/file_io.py] 1 # -*- coding: utf-8 -*- 2 3 # Copyright 2017, IBM. 4 # 5 # This source code is licensed under the Apache License, Version 2.0 found in 6 # the LICENSE.txt file in the root directory of this source tree. 7 8 """Utilities for File Input/Output.""" 9 10 import copy 11 import datetime 12 import json 13 import os 14 15 import numpy 16 from sympy import Basic 17 18 from qiskit.result._utils import result_from_old_style_dict 19 from qiskit._qiskiterror import QISKitError 20 from qiskit.backends import BaseBackend 21 22 23 def convert_qobj_to_json(in_item): 24 """ 25 Combs recursively through a list/dictionary and finds any non-json 26 compatible elements and converts them. E.g. complex ndarray's are 27 converted to lists of strings. Assume that all such elements are 28 stored in dictionaries! 29 30 Arg: 31 in_item (dict or list): the input dict/list 32 """ 33 34 key_list = [] 35 for (item_index, item_iter) in enumerate(in_item): 36 if isinstance(in_item, list): 37 curkey = item_index 38 else: 39 curkey = item_iter 40 41 if isinstance(in_item[curkey], (list, dict)): 42 # go recursively through nested list/dictionaries 43 convert_qobj_to_json(in_item[curkey]) 44 elif isinstance(in_item[curkey], numpy.ndarray): 45 # ndarray's are not json compatible. Save the key. 
46 key_list.append(curkey) 47 48 # convert ndarray's to lists 49 # split complex arrays into two lists because complex values are not 50 # json compatible 51 for curkey in key_list: 52 if in_item[curkey].dtype == 'complex': 53 in_item[curkey + '_ndarray_imag'] = numpy.imag( 54 in_item[curkey]).tolist() 55 in_item[curkey + '_ndarray_real'] = numpy.real( 56 in_item[curkey]).tolist() 57 in_item.pop(curkey) 58 else: 59 in_item[curkey] = in_item[curkey].tolist() 60 61 62 def convert_json_to_qobj(in_item): 63 """Combs recursively through a list/dictionary that was loaded from json 64 and finds any lists that were converted from ndarray and converts them back 65 66 Arg: 67 in_item (dict or list): the input dict/list 68 """ 69 70 key_list = [] 71 for (item_index, item_iter) in enumerate(in_item): 72 if isinstance(in_item, list): 73 curkey = item_index 74 else: 75 curkey = item_iter 76 77 # flat these lists so that we can recombine back into a complex 78 # number 79 if '_ndarray_real' in curkey: 80 key_list.append(curkey) 81 continue 82 83 if isinstance(in_item[curkey], (list, dict)): 84 convert_json_to_qobj(in_item[curkey]) 85 86 for curkey in key_list: 87 curkey_root = curkey[0:-13] 88 in_item[curkey_root] = numpy.array(in_item[curkey]) 89 in_item.pop(curkey) 90 if curkey_root + '_ndarray_imag' in in_item: 91 in_item[curkey_root] = in_item[curkey_root] + 1j * numpy.array( 92 in_item[curkey_root + '_ndarray_imag']) 93 in_item.pop(curkey_root + '_ndarray_imag') 94 95 96 def file_datestr(folder, fileroot): 97 """Constructs a filename using the current date-time 98 99 Args: 100 folder (str): path to the save folder 101 fileroot (str): root string for the file 102 103 Returns: 104 String: full file path of the form 105 'folder/YYYY_MM_DD_HH_MM_fileroot.json' 106 """ 107 108 # if the fileroot has .json appended strip it off 109 if len(fileroot) > 4 and fileroot[-5:].lower() == '.json': 110 fileroot = fileroot[0:-5] 111 112 return os.path.join( 113 folder, ('{:%Y_%m_%d_%H_%M_}'.format(datetime.datetime.now()) + 114 fileroot + '.json')) 115 116 117 def load_result_from_file(filename): 118 """Load a results dictionary file (.json) to a Result object. 119 Note: The json file may not load properly if it was saved with a previous 120 version of the SDK. 121 122 Args: 123 filename (str): filename of the dictionary 124 125 Returns: 126 tuple(Result, dict): 127 The new Results object 128 if the metadata exists it will get returned 129 Raises: 130 QISKitError: if the file does not exist or does not have the proper 131 dictionary structure. 132 """ 133 134 if not os.path.exists(filename): 135 raise QISKitError('File %s does not exist' % filename) 136 137 with open(filename, 'r') as load_file: 138 master_dict = json.load(load_file) 139 140 try: 141 qresult_dict = master_dict['result'] 142 convert_json_to_qobj(qresult_dict) 143 metadata = master_dict['metadata'] 144 except KeyError: 145 raise QISKitError('File %s does not have the proper dictionary ' 146 'structure') 147 148 # TODO: To keep backwards compatibility with previous saved versions, 149 # the method adapts the recovered JSON to match the new format. Since 150 # the save function takes a Result, not all the fields required by 151 # the new Qobj are saved so they are marked with 'TODO'. 
152 qresult_dict['id'] = qresult_dict.get('id', 'TODO') 153 for experiment in qresult_dict['result']: 154 is_done = experiment['status'] == 'DONE' 155 experiment['success'] = experiment.get('success', is_done) 156 experiment['shots'] = experiment.get('shots', 'TODO') 157 158 qresult = result_from_old_style_dict( 159 qresult_dict, 160 [circuit_data['name'] for circuit_data in qresult_dict['result']] 161 ) 162 163 return qresult, metadata 164 165 166 class ResultEncoder(json.JSONEncoder): 167 """ 168 Custom JSON encoder for sympy types. 169 """ 170 def default(self, o): 171 # pylint: disable=method-hidden 172 if isinstance(o, Basic): # The element to serialize is a Symbolic type 173 if o.is_Integer: 174 return int(o) 175 if o.is_Float: 176 return float(o) 177 return str(o) 178 elif isinstance(o, BaseBackend): 179 # TODO: replace when the deprecation is completed (see also note in 180 # Result.__iadd__). 181 return o.configuration()['name'] 182 183 return json.JSONEncoder.default(self, o) 184 185 186 def save_result_to_file(resultobj, filename, metadata=None): 187 """Save a result and optional metatdata to a single dictionary file. 188 189 Args: 190 resultobj (Result): Result to save 191 filename (str): save path (with or without the json extension). If the 192 file already exists then numbers will be appended to the root to 193 generate a unique filename. 194 E.g. if filename=test.json and that file exists then the file will 195 be changed to test_1.json 196 metadata (dict): Add another dictionary with custom data for the 197 result (eg fit results) 198 199 Return: 200 String: full file path 201 """ 202 master_dict = { 203 'result': _old_style_dict_from_result(resultobj) 204 } 205 if metadata is None: 206 master_dict['metadata'] = {} 207 else: 208 master_dict['metadata'] = copy.deepcopy(metadata) 209 210 # need to convert any ndarray variables to lists so that they can be 211 # exported to the json file 212 convert_qobj_to_json(master_dict['result']) 213 214 # if the filename has .json appended strip it off 215 if filename[-5:].lower() == '.json': 216 filename = filename[0:-5] 217 218 append_str = '' 219 append_num = 0 220 221 while os.path.exists(filename + append_str + '.json'): 222 append_num += 1 223 append_str = '_%d' % append_num 224 225 with open(filename + append_str + '.json', 'w') as save_file: 226 json.dump(master_dict, save_file, indent=1, cls=ResultEncoder) 227 228 return filename + append_str + '.json' 229 230 231 def _old_style_dict_from_result(result): 232 """Convert a ``qiskit.Result`` instance into the old style dict format 233 expected by ``save_result_to_file``. 234 235 Args: 236 result (qiskit.Result): a ``qiskit.Result`` instance. 237 238 Returns: 239 dict: a dictionary with the format previous to Qobj's Result. 240 """ 241 return { 242 'job_id': result.job_id, 243 'status': result.status, 244 'backend': result.backend_name, 245 'result': [{ 246 'name': name, 247 'compiled_circuit_qasm': experiment.compiled_circuit_qasm, 248 'status': experiment.status, 249 'data': experiment.data 250 } for name, experiment in result.results.items()] 251 } 252 [end of qiskit/tools/file_io.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Qiskit/qiskit
485c707a51351c1101e4287485037e6fa3949140
Loading Qiskit with no internet results in a ConnectionError

### What is the current behavior?
Loading Qiskit with no Internet connection results in a `ConnectionError`.

### Steps to reproduce the problem
Run Qiskit without an Internet connection.

### What is the expected behavior?
Qiskit should not raise any error at all.
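For context, the error comes from an import-time reachability check in `qiskit/tools/visualization/__init__.py` (visible as the removed lines in the patch below): when the package is imported inside a Jupyter kernel it unconditionally calls `requests.get(...)`, and with no network that call raises `requests.exceptions.ConnectionError`. A minimal sketch of that failure mode, with an illustrative helper name that is not the repository's code:

```python
# Rough sketch of the pre-fix behaviour (see the removed lines in the patch
# below): an unconditional HTTP request at import time. Offline, the call
# raises requests.exceptions.ConnectionError, which propagates out of the
# module being imported. The helper name is illustrative only.
import requests

def _interactive_plots_available(url="https://qvisualization.mybluemix.net/"):
    return requests.get(url).status_code == 200
```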
2018-09-13T01:29:48Z
<patch> diff --git a/qiskit/_util.py b/qiskit/_util.py --- a/qiskit/_util.py +++ b/qiskit/_util.py @@ -13,6 +13,7 @@ import re import sys import warnings +import socket from collections import UserDict API_NAME = 'IBMQuantumExperience' @@ -157,3 +158,26 @@ def _parse_ibmq_credentials(url, hub=None, group=None, project=None): "0.6+. Please use the new URL format provided in the q-console.", DeprecationWarning) return url + + +def _has_connection(hostname, port): + """Checks to see if internet connection exists to host + via specified port + + Args: + hostname (str): Hostname to connect to. + port (int): Port to connect to + + Returns: + bool: Has connection or not + + Raises: + gaierror: No connection established. + """ + try: + host = socket.gethostbyname(hostname) + socket.create_connection((host, port), 2) + return True + except socket.gaierror: + pass + return False diff --git a/qiskit/tools/visualization/__init__.py b/qiskit/tools/visualization/__init__.py --- a/qiskit/tools/visualization/__init__.py +++ b/qiskit/tools/visualization/__init__.py @@ -8,7 +8,7 @@ """Main QISKit visualization methods.""" import sys - +from qiskit._util import _has_connection from ._circuit_visualization import circuit_drawer, plot_circuit, generate_latex_source,\ latex_circuit_drawer, matplotlib_circuit_drawer, qx_color_scheme from ._error import VisualizationError @@ -16,9 +16,7 @@ if ('ipykernel' in sys.modules) and ('spyder' not in sys.modules): - import requests - if requests.get( - 'https://qvisualization.mybluemix.net/').status_code == 200: + if _has_connection('https://qvisualization.mybluemix.net/', 443): from .interactive._iplot_state import iplot_state as plot_state from .interactive._iplot_histogram import iplot_histogram as \ plot_histogram </patch>
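The replacement helper in the patch probes reachability with the standard library's `socket` module and swallows resolution failures instead of letting an exception escape at import time. Below is a standalone sketch of the same pattern, under the assumption that a plain TCP connect is an acceptable availability test; the `reachable` name and the `example.org` hostname are only examples, not the repository's code:

```python
import socket

def reachable(hostname, port, timeout=2):
    """Return True if a TCP connection to hostname:port can be opened."""
    try:
        # DNS lookup; raises socket.gaierror when offline or unresolvable.
        addr = socket.gethostbyname(hostname)
        # Open and immediately close a TCP connection to the resolved address.
        with socket.create_connection((addr, port), timeout):
            return True
    except OSError:  # covers socket.gaierror, timeouts, refused connections
        return False

# Example: only enable a network-dependent feature when the host is reachable.
if reachable("example.org", 443):
    pass  # import / register the optional feature here
```

Catching `OSError` in this sketch is slightly broader than the patch's `socket.gaierror` handler; it also covers timeouts and refused connections, which is one way to keep the probe itself from ever raising.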
[]
[]
pandas-dev__pandas-14344
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Conflicting documentation about index uniqueness The page http://pandas.pydata.org/pandas-docs/dev/generated/pandas.Series.html says: > index : array-like or Index (1d) > Values must be unique and hashable, same length as data. A little bit earlier the same page says: > Labels need not be unique but must be any hashable type. Other pages also mention non-unique index support (e.g. http://pandas.pydata.org/pandas-docs/dev/dsintro.html) It looks like `index` description should read: > index : array-like or Index (1d) > Values must be hashable, same length as data. Non-unique index values are allowed. </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pydata/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td> 13 </tr> 14 <td></td> 15 <td><img src="https://anaconda.org/pandas/pandas/badges/version.svg" alt="latest release" /></td> 16 </tr> 17 <tr> 18 <td>Package Status</td> 19 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 20 </tr> 21 <tr> 22 <td>License</td> 23 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td> 24 </tr> 25 <tr> 26 <td>Build Status</td> 27 <td> 28 <a href="https://travis-ci.org/pydata/pandas"> 29 <img src="https://travis-ci.org/pydata/pandas.svg?branch=master" alt="travis build status" /> 30 </a> 31 </td> 32 </tr> 33 <td></td> 34 <td> 35 <a href="https://ci.appveyor.com/project/jreback/pandas-465"> 36 <img src="https://ci.appveyor.com/api/projects/status/iblk29s98quexwxi/branch/master?svg=true" alt="appveyor build status" /> 37 </a> 38 </td> 39 </tr> 40 <tr> 41 <td>Coverage</td> 42 <td><img src="https://codecov.io/github/pydata/pandas/coverage.svg?branch=master" alt="coverage" /></td> 43 </tr> 44 <tr> 45 <td>Conda</td> 46 <td> 47 <a href="http://pandas.pydata.org"> 48 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda downloads" /> 49 </a> 50 </td> 51 </tr> 52 <tr> 53 <td>PyPI</td> 54 <td> 55 <a href="https://pypi.python.org/pypi/pandas/"> 56 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" /> 57 </a> 58 </td> 59 </tr> 60 </table> 61 62 [![https://gitter.im/pydata/pandas](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 63 64 ## What is it 65 66 **pandas** is a Python package providing fast, flexible, and expressive data 67 structures designed to make working with "relational" or "labeled" data both 68 easy and intuitive. It aims to be the fundamental high-level building block for 69 doing practical, **real world** data analysis in Python. Additionally, it has 70 the broader goal of becoming **the most powerful and flexible open source data 71 analysis / manipulation tool available in any language**. It is already well on 72 its way toward this goal. 
73 74 ## Main Features 75 Here are just a few of the things that pandas does well: 76 77 - Easy handling of [**missing data**][missing-data] (represented as 78 `NaN`) in floating point as well as non-floating point data 79 - Size mutability: columns can be [**inserted and 80 deleted**][insertion-deletion] from DataFrame and higher dimensional 81 objects 82 - Automatic and explicit [**data alignment**][alignment]: objects can 83 be explicitly aligned to a set of labels, or the user can simply 84 ignore the labels and let `Series`, `DataFrame`, etc. automatically 85 align the data for you in computations 86 - Powerful, flexible [**group by**][groupby] functionality to perform 87 split-apply-combine operations on data sets, for both aggregating 88 and transforming data 89 - Make it [**easy to convert**][conversion] ragged, 90 differently-indexed data in other Python and NumPy data structures 91 into DataFrame objects 92 - Intelligent label-based [**slicing**][slicing], [**fancy 93 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 94 large data sets 95 - Intuitive [**merging**][merging] and [**joining**][joining] data 96 sets 97 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 98 data sets 99 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 100 labels per tick) 101 - Robust IO tools for loading data from [**flat files**][flat-files] 102 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 103 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 104 - [**Time series**][timeseries]-specific functionality: date range 105 generation and frequency conversion, moving window statistics, 106 moving window linear regressions, date shifting and lagging, etc. 107 108 109 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 110 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 111 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 112 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 113 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 114 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 115 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 116 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 117 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 118 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 119 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 120 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 121 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 122 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 123 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 124 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 125 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 126 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 127 128 ## Where to 
get it 129 The source code is currently hosted on GitHub at: 130 http://github.com/pydata/pandas 131 132 Binary installers for the latest released version are available at the [Python 133 package index](http://pypi.python.org/pypi/pandas/) and on conda. 134 135 ```sh 136 # conda 137 conda install pandas 138 ``` 139 140 ```sh 141 # or PyPI 142 pip install pandas 143 ``` 144 145 ## Dependencies 146 - [NumPy](http://www.numpy.org): 1.7.0 or higher 147 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 148 - [pytz](http://pytz.sourceforge.net) 149 - Needed for time zone support with ``pandas.date_range`` 150 151 See the [full installation instructions](http://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 152 for recommended and optional dependencies. 153 154 ## Installation from sources 155 To install pandas from source you need Cython in addition to the normal 156 dependencies above. Cython can be installed from pypi: 157 158 ```sh 159 pip install cython 160 ``` 161 162 In the `pandas` directory (same one where you found this file after 163 cloning the git repo), execute: 164 165 ```sh 166 python setup.py install 167 ``` 168 169 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 170 171 ```sh 172 python setup.py develop 173 ``` 174 175 Alternatively, you can use `pip` if you want all the dependencies pulled 176 in automatically (the `-e` option is for installing it in [development 177 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 178 179 ```sh 180 pip install -e . 181 ``` 182 183 On Windows, you will need to install MinGW and execute: 184 185 ```sh 186 python setup.py build --compiler=mingw32 187 python setup.py install 188 ``` 189 190 See http://pandas.pydata.org/ for more information. 191 192 ## License 193 BSD 194 195 ## Documentation 196 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 197 198 The Sphinx documentation should provide a good starting point for learning how 199 to use the library. Expect the docs to continue to expand as time goes on. 200 201 ## Background 202 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 203 has been under active development since then. 204 205 ## Discussion and Development 206 Since pandas development is related to a number of other scientific 207 Python projects, questions are welcome on the scipy-user mailing 208 list. Specialized discussions or design issues should take place on 209 the PyData mailing list / Google group: 210 211 https://groups.google.com/forum/#!forum/pydata 212 [end of README.md] [start of doc/source/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # pandas documentation build configuration file, created by 4 # 5 # This file is execfile()d with the current directory set to its containing dir. 6 # 7 # Note that not all possible configuration values are present in this 8 # autogenerated file. 9 # 10 # All configuration values have a default; values that are commented out 11 # serve to show the default. 12 13 import sys 14 import os 15 import re 16 import inspect 17 from pandas.compat import u, PY3 18 19 # If extensions (or modules to document with autodoc) are in another directory, 20 # add these directories to sys.path here. If the directory is relative to the 21 # documentation root, use os.path.abspath to make it absolute, like shown here. 
22 # sys.path.append(os.path.abspath('.')) 23 sys.path.insert(0, os.path.abspath('../sphinxext')) 24 25 sys.path.extend([ 26 27 # numpy standard doc extensions 28 os.path.join(os.path.dirname(__file__), 29 '..', '../..', 30 'sphinxext') 31 32 ]) 33 34 # -- General configuration ----------------------------------------------- 35 36 # Add any Sphinx extension module names here, as strings. They can be extensions 37 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. sphinxext. 38 39 extensions = ['sphinx.ext.autodoc', 40 'sphinx.ext.autosummary', 41 'sphinx.ext.doctest', 42 'sphinx.ext.extlinks', 43 'sphinx.ext.todo', 44 'numpydoc', # used to parse numpy-style docstrings for autodoc 45 'ipython_sphinxext.ipython_directive', 46 'ipython_sphinxext.ipython_console_highlighting', 47 'sphinx.ext.intersphinx', 48 'sphinx.ext.coverage', 49 'sphinx.ext.pngmath', 50 'sphinx.ext.ifconfig', 51 'sphinx.ext.linkcode', 52 ] 53 54 55 56 with open("index.rst") as f: 57 index_rst_lines = f.readlines() 58 59 # only include the slow autosummary feature if we're building the API section 60 # of the docs 61 62 # JP: added from sphinxdocs 63 autosummary_generate = False 64 65 if any([re.match("\s*api\s*",l) for l in index_rst_lines]): 66 autosummary_generate = True 67 68 files_to_delete = [] 69 for f in os.listdir(os.path.dirname(__file__)): 70 if not f.endswith('.rst') or f.startswith('.') or os.path.basename(f) == 'index.rst': 71 continue 72 73 _file_basename = f.split('.rst')[0] 74 _regex_to_match = "\s*{}\s*$".format(_file_basename) 75 if not any([re.match(_regex_to_match, line) for line in index_rst_lines]): 76 files_to_delete.append(f) 77 78 if files_to_delete: 79 print("I'm about to DELETE the following:\n%s\n" % list(sorted(files_to_delete))) 80 sys.stdout.write("WARNING: I'd like to delete those to speed up processing (yes/no)? ") 81 if PY3: 82 answer = input() 83 else: 84 answer = raw_input() 85 86 if answer.lower().strip() in ('y','yes'): 87 for f in files_to_delete: 88 f = os.path.join(os.path.join(os.path.dirname(__file__),f)) 89 f= os.path.abspath(f) 90 try: 91 print("Deleting %s" % f) 92 os.unlink(f) 93 except: 94 print("Error deleting %s" % f) 95 pass 96 97 # Add any paths that contain templates here, relative to this directory. 98 templates_path = ['../_templates'] 99 100 # The suffix of source filenames. 101 source_suffix = '.rst' 102 103 # The encoding of source files. 104 source_encoding = 'utf-8' 105 106 # The master toctree document. 107 master_doc = 'index' 108 109 # General information about the project. 110 project = u('pandas') 111 copyright = u('2008-2014, the pandas development team') 112 113 # The version info for the project you're documenting, acts as replacement for 114 # |version| and |release|, also used in various other places throughout the 115 # built documents. 116 # 117 # The short X.Y version. 118 import pandas 119 120 # version = '%s r%s' % (pandas.__version__, svn_version()) 121 version = '%s' % (pandas.__version__) 122 123 # The full version, including alpha/beta/rc tags. 124 release = version 125 126 # The language for content autogenerated by Sphinx. Refer to documentation 127 # for a list of supported languages. 128 # language = None 129 130 # There are two options for replacing |today|: either, you set today to some 131 # non-false value, then it is used: 132 # today = '' 133 # Else, today_fmt is used as the format for a strftime call. 134 # today_fmt = '%B %d, %Y' 135 136 # List of documents that shouldn't be included in the build. 
137 # unused_docs = [] 138 139 # List of directories, relative to source directory, that shouldn't be searched 140 # for source files. 141 exclude_trees = [] 142 143 # The reST default role (used for this markup: `text`) to use for all documents. 144 # default_role = None 145 146 # If true, '()' will be appended to :func: etc. cross-reference text. 147 # add_function_parentheses = True 148 149 # If true, the current module name will be prepended to all description 150 # unit titles (such as .. function::). 151 # add_module_names = True 152 153 # If true, sectionauthor and moduleauthor directives will be shown in the 154 # output. They are ignored by default. 155 # show_authors = False 156 157 # The name of the Pygments (syntax highlighting) style to use. 158 pygments_style = 'sphinx' 159 160 # A list of ignored prefixes for module index sorting. 161 # modindex_common_prefix = [] 162 163 164 # -- Options for HTML output --------------------------------------------- 165 166 # The theme to use for HTML and HTML Help pages. Major themes that come with 167 # Sphinx are currently 'default' and 'sphinxdoc'. 168 html_theme = 'nature_with_gtoc' 169 170 # The style sheet to use for HTML and HTML Help pages. A file of that name 171 # must exist either in Sphinx' static/ path, or in one of the custom paths 172 # given in html_static_path. 173 # html_style = 'statsmodels.css' 174 175 # Theme options are theme-specific and customize the look and feel of a theme 176 # further. For a list of options available for each theme, see the 177 # documentation. 178 # html_theme_options = {} 179 180 # Add any paths that contain custom themes here, relative to this directory. 181 html_theme_path = ['themes'] 182 183 # The name for this set of Sphinx documents. If None, it defaults to 184 # "<project> v<release> documentation". 185 # html_title = None 186 187 # A shorter title for the navigation bar. Default is the same as html_title. 188 # html_short_title = None 189 190 # The name of an image file (relative to this directory) to place at the top 191 # of the sidebar. 192 # html_logo = None 193 194 # The name of an image file (within the static path) to use as favicon of the 195 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 196 # pixels large. 197 # html_favicon = None 198 199 # Add any paths that contain custom static files (such as style sheets) here, 200 # relative to this directory. They are copied after the builtin static files, 201 # so a file named "default.css" will overwrite the builtin "default.css". 202 html_static_path = ['_static'] 203 204 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 205 # using the given strftime format. 206 # html_last_updated_fmt = '%b %d, %Y' 207 208 # If true, SmartyPants will be used to convert quotes and dashes to 209 # typographically correct entities. 210 # html_use_smartypants = True 211 212 # Custom sidebar templates, maps document names to template names. 213 # html_sidebars = {} 214 215 # Additional templates that should be rendered to pages, maps page names to 216 # template names. 
217 218 # Add redirect for previously existing API pages (which are now included in 219 # the API pages as top-level functions) based on a template (GH9911) 220 moved_api_pages = [ 221 'pandas.core.common.isnull', 'pandas.core.common.notnull', 'pandas.core.reshape.get_dummies', 222 'pandas.tools.merge.concat', 'pandas.tools.merge.merge', 'pandas.tools.pivot.pivot_table', 223 'pandas.tseries.tools.to_datetime', 'pandas.io.clipboard.read_clipboard', 'pandas.io.excel.ExcelFile.parse', 224 'pandas.io.excel.read_excel', 'pandas.io.html.read_html', 'pandas.io.json.read_json', 225 'pandas.io.parsers.read_csv', 'pandas.io.parsers.read_fwf', 'pandas.io.parsers.read_table', 226 'pandas.io.pickle.read_pickle', 'pandas.io.pytables.HDFStore.append', 'pandas.io.pytables.HDFStore.get', 227 'pandas.io.pytables.HDFStore.put', 'pandas.io.pytables.HDFStore.select', 'pandas.io.pytables.read_hdf', 228 'pandas.io.sql.read_sql', 'pandas.io.sql.read_frame', 'pandas.io.sql.write_frame', 229 'pandas.io.stata.read_stata'] 230 231 html_additional_pages = {'generated/' + page: 'api_redirect.html' for page in moved_api_pages} 232 233 # If false, no module index is generated. 234 html_use_modindex = True 235 236 # If false, no index is generated. 237 # html_use_index = True 238 239 # If true, the index is split into individual pages for each letter. 240 # html_split_index = False 241 242 # If true, links to the reST sources are added to the pages. 243 # html_show_sourcelink = True 244 245 # If true, an OpenSearch description file will be output, and all pages will 246 # contain a <link> tag referring to it. The value of this option must be the 247 # base URL from which the finished HTML is served. 248 # html_use_opensearch = '' 249 250 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). 251 # html_file_suffix = '' 252 253 # Output file base name for HTML help builder. 254 htmlhelp_basename = 'pandas' 255 256 257 # -- Options for LaTeX output -------------------------------------------- 258 259 # The paper size ('letter' or 'a4'). 260 # latex_paper_size = 'letter' 261 262 # The font size ('10pt', '11pt' or '12pt'). 263 # latex_font_size = '10pt' 264 265 # Grouping the document tree into LaTeX files. List of tuples 266 # (source start file, target name, title, author, documentclass [howto/manual]). 267 latex_documents = [ 268 ('index', 'pandas.tex', 269 u('pandas: powerful Python data analysis toolkit'), 270 u('Wes McKinney\n\& PyData Development Team'), 'manual'), 271 ] 272 273 # The name of an image file (relative to this directory) to place at the top of 274 # the title page. 275 # latex_logo = None 276 277 # For "manual" documents, if this is true, then toplevel headings are parts, 278 # not chapters. 279 # latex_use_parts = False 280 281 # Additional stuff for the LaTeX preamble. 282 # latex_preamble = '' 283 284 # Documents to append as an appendix to all manuals. 285 # latex_appendices = [] 286 287 # If false, no module index is generated. 288 # latex_use_modindex = True 289 290 291 # Example configuration for intersphinx: refer to the Python standard library. 
292 intersphinx_mapping = { 293 'statsmodels': ('http://www.statsmodels.org/devel/', None), 294 'matplotlib': ('http://matplotlib.org/', None), 295 'python': ('http://docs.python.org/3', None), 296 'numpy': ('http://docs.scipy.org/doc/numpy', None), 297 'scipy': ('http://docs.scipy.org/doc/scipy/reference', None), 298 'py': ('http://pylib.readthedocs.org/en/latest/', None) 299 } 300 import glob 301 autosummary_generate = glob.glob("*.rst") 302 303 # extlinks alias 304 extlinks = {'issue': ('https://github.com/pydata/pandas/issues/%s', 305 'GH'), 306 'wiki': ('https://github.com/pydata/pandas/wiki/%s', 307 'wiki ')} 308 309 ipython_exec_lines = [ 310 'import numpy as np', 311 'import pandas as pd', 312 # This ensures correct rendering on system with console encoding != utf8 313 # (windows). It forces pandas to encode its output reprs using utf8 314 # whereever the docs are built. The docs' target is the browser, not 315 # the console, so this is fine. 316 'pd.options.display.encoding="utf8"' 317 ] 318 319 320 # Add custom Documenter to handle attributes/methods of an AccessorProperty 321 # eg pandas.Series.str and pandas.Series.dt (see GH9322) 322 323 import sphinx 324 from sphinx.util import rpartition 325 from sphinx.ext.autodoc import Documenter, MethodDocumenter, AttributeDocumenter 326 from sphinx.ext.autosummary import Autosummary 327 328 329 class AccessorLevelDocumenter(Documenter): 330 """ 331 Specialized Documenter subclass for objects on accessor level (methods, 332 attributes). 333 """ 334 335 # This is the simple straightforward version 336 # modname is None, base the last elements (eg 'hour') 337 # and path the part before (eg 'Series.dt') 338 # def resolve_name(self, modname, parents, path, base): 339 # modname = 'pandas' 340 # mod_cls = path.rstrip('.') 341 # mod_cls = mod_cls.split('.') 342 # 343 # return modname, mod_cls + [base] 344 345 def resolve_name(self, modname, parents, path, base): 346 if modname is None: 347 if path: 348 mod_cls = path.rstrip('.') 349 else: 350 mod_cls = None 351 # if documenting a class-level object without path, 352 # there must be a current class, either from a parent 353 # auto directive ... 354 mod_cls = self.env.temp_data.get('autodoc:class') 355 # ... or from a class directive 356 if mod_cls is None: 357 mod_cls = self.env.temp_data.get('py:class') 358 # ... if still None, there's no way to know 359 if mod_cls is None: 360 return None, [] 361 # HACK: this is added in comparison to ClassLevelDocumenter 362 # mod_cls still exists of class.accessor, so an extra 363 # rpartition is needed 364 modname, accessor = rpartition(mod_cls, '.') 365 modname, cls = rpartition(modname, '.') 366 parents = [cls, accessor] 367 # if the module name is still missing, get it like above 368 if not modname: 369 modname = self.env.temp_data.get('autodoc:module') 370 if not modname: 371 if sphinx.__version__ > '1.3': 372 modname = self.env.ref_context.get('py:module') 373 else: 374 modname = self.env.temp_data.get('py:module') 375 # ... 
else, it stays None, which means invalid 376 return modname, parents + [base] 377 378 379 class AccessorAttributeDocumenter(AccessorLevelDocumenter, AttributeDocumenter): 380 381 objtype = 'accessorattribute' 382 directivetype = 'attribute' 383 384 385 class AccessorMethodDocumenter(AccessorLevelDocumenter, MethodDocumenter): 386 387 objtype = 'accessormethod' 388 directivetype = 'method' 389 390 391 class AccessorCallableDocumenter(AccessorLevelDocumenter, MethodDocumenter): 392 """ 393 This documenter lets us removes .__call__ from the method signature for 394 callable accessors like Series.plot 395 """ 396 objtype = 'accessorcallable' 397 directivetype = 'method' 398 399 # lower than MethodDocumenter; otherwise the doc build prints warnings 400 priority = 0.5 401 402 def format_name(self): 403 return MethodDocumenter.format_name(self).rstrip('.__call__') 404 405 406 class PandasAutosummary(Autosummary): 407 """ 408 This alternative autosummary class lets us override the table summary for 409 Series.plot and DataFrame.plot in the API docs. 410 """ 411 412 def _replace_pandas_items(self, display_name, sig, summary, real_name): 413 # this a hack: ideally we should extract the signature from the 414 # .__call__ method instead of hard coding this 415 if display_name == 'DataFrame.plot': 416 sig = '([x, y, kind, ax, ....])' 417 summary = 'DataFrame plotting accessor and method' 418 elif display_name == 'Series.plot': 419 sig = '([kind, ax, figsize, ....])' 420 summary = 'Series plotting accessor and method' 421 return (display_name, sig, summary, real_name) 422 423 def get_items(self, names): 424 items = Autosummary.get_items(self, names) 425 items = [self._replace_pandas_items(*item) for item in items] 426 return items 427 428 429 # based on numpy doc/source/conf.py 430 def linkcode_resolve(domain, info): 431 """ 432 Determine the URL corresponding to Python object 433 """ 434 if domain != 'py': 435 return None 436 437 modname = info['module'] 438 fullname = info['fullname'] 439 440 submod = sys.modules.get(modname) 441 if submod is None: 442 return None 443 444 obj = submod 445 for part in fullname.split('.'): 446 try: 447 obj = getattr(obj, part) 448 except: 449 return None 450 451 try: 452 fn = inspect.getsourcefile(obj) 453 except: 454 fn = None 455 if not fn: 456 return None 457 458 try: 459 source, lineno = inspect.getsourcelines(obj) 460 except: 461 lineno = None 462 463 if lineno: 464 linespec = "#L%d-L%d" % (lineno, lineno + len(source) - 1) 465 else: 466 linespec = "" 467 468 fn = os.path.relpath(fn, start=os.path.dirname(pandas.__file__)) 469 470 if '+' in pandas.__version__: 471 return "http://github.com/pydata/pandas/blob/master/pandas/%s%s" % ( 472 fn, linespec) 473 else: 474 return "http://github.com/pydata/pandas/blob/v%s/pandas/%s%s" % ( 475 pandas.__version__, fn, linespec) 476 477 478 # remove the docstring of the flags attribute (inherited from numpy ndarray) 479 # because these give doc build errors (see GH issue 5331) 480 def remove_flags_docstring(app, what, name, obj, options, lines): 481 if what == "attribute" and name.endswith(".flags"): 482 del lines[:] 483 484 def setup(app): 485 app.connect("autodoc-process-docstring", remove_flags_docstring) 486 app.add_autodocumenter(AccessorAttributeDocumenter) 487 app.add_autodocumenter(AccessorMethodDocumenter) 488 app.add_autodocumenter(AccessorCallableDocumenter) 489 app.add_directive('autosummary', PandasAutosummary) 490 [end of doc/source/conf.py] [start of vb_suite/source/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # 
pandas documentation build configuration file, created by 4 # 5 # This file is execfile()d with the current directory set to its containing dir. 6 # 7 # Note that not all possible configuration values are present in this 8 # autogenerated file. 9 # 10 # All configuration values have a default; values that are commented out 11 # serve to show the default. 12 13 import sys 14 import os 15 16 # If extensions (or modules to document with autodoc) are in another directory, 17 # add these directories to sys.path here. If the directory is relative to the 18 # documentation root, use os.path.abspath to make it absolute, like shown here. 19 # sys.path.append(os.path.abspath('.')) 20 sys.path.insert(0, os.path.abspath('../sphinxext')) 21 22 sys.path.extend([ 23 24 # numpy standard doc extensions 25 os.path.join(os.path.dirname(__file__), 26 '..', '../..', 27 'sphinxext') 28 29 ]) 30 31 # -- General configuration ----------------------------------------------- 32 33 # Add any Sphinx extension module names here, as strings. They can be extensions 34 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. sphinxext. 35 36 extensions = ['sphinx.ext.autodoc', 37 'sphinx.ext.doctest'] 38 39 # Add any paths that contain templates here, relative to this directory. 40 templates_path = ['_templates', '_templates/autosummary'] 41 42 # The suffix of source filenames. 43 source_suffix = '.rst' 44 45 # The encoding of source files. 46 # source_encoding = 'utf-8' 47 48 # The master toctree document. 49 master_doc = 'index' 50 51 # General information about the project. 52 project = u'pandas' 53 copyright = u'2008-2011, the pandas development team' 54 55 # The version info for the project you're documenting, acts as replacement for 56 # |version| and |release|, also used in various other places throughout the 57 # built documents. 58 # 59 # The short X.Y version. 60 import pandas 61 62 # version = '%s r%s' % (pandas.__version__, svn_version()) 63 version = '%s' % (pandas.__version__) 64 65 # The full version, including alpha/beta/rc tags. 66 release = version 67 68 # JP: added from sphinxdocs 69 autosummary_generate = True 70 71 # The language for content autogenerated by Sphinx. Refer to documentation 72 # for a list of supported languages. 73 # language = None 74 75 # There are two options for replacing |today|: either, you set today to some 76 # non-false value, then it is used: 77 # today = '' 78 # Else, today_fmt is used as the format for a strftime call. 79 # today_fmt = '%B %d, %Y' 80 81 # List of documents that shouldn't be included in the build. 82 # unused_docs = [] 83 84 # List of directories, relative to source directory, that shouldn't be searched 85 # for source files. 86 exclude_trees = [] 87 88 # The reST default role (used for this markup: `text`) to use for all documents. 89 # default_role = None 90 91 # If true, '()' will be appended to :func: etc. cross-reference text. 92 # add_function_parentheses = True 93 94 # If true, the current module name will be prepended to all description 95 # unit titles (such as .. function::). 96 # add_module_names = True 97 98 # If true, sectionauthor and moduleauthor directives will be shown in the 99 # output. They are ignored by default. 100 # show_authors = False 101 102 # The name of the Pygments (syntax highlighting) style to use. 103 pygments_style = 'sphinx' 104 105 # A list of ignored prefixes for module index sorting. 
106 # modindex_common_prefix = [] 107 108 109 # -- Options for HTML output --------------------------------------------- 110 111 # The theme to use for HTML and HTML Help pages. Major themes that come with 112 # Sphinx are currently 'default' and 'sphinxdoc'. 113 html_theme = 'agogo' 114 115 # The style sheet to use for HTML and HTML Help pages. A file of that name 116 # must exist either in Sphinx' static/ path, or in one of the custom paths 117 # given in html_static_path. 118 # html_style = 'statsmodels.css' 119 120 # Theme options are theme-specific and customize the look and feel of a theme 121 # further. For a list of options available for each theme, see the 122 # documentation. 123 # html_theme_options = {} 124 125 # Add any paths that contain custom themes here, relative to this directory. 126 html_theme_path = ['themes'] 127 128 # The name for this set of Sphinx documents. If None, it defaults to 129 # "<project> v<release> documentation". 130 html_title = 'Vbench performance benchmarks for pandas' 131 132 # A shorter title for the navigation bar. Default is the same as html_title. 133 # html_short_title = None 134 135 # The name of an image file (relative to this directory) to place at the top 136 # of the sidebar. 137 # html_logo = None 138 139 # The name of an image file (within the static path) to use as favicon of the 140 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 141 # pixels large. 142 # html_favicon = None 143 144 # Add any paths that contain custom static files (such as style sheets) here, 145 # relative to this directory. They are copied after the builtin static files, 146 # so a file named "default.css" will overwrite the builtin "default.css". 147 html_static_path = ['_static'] 148 149 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 150 # using the given strftime format. 151 # html_last_updated_fmt = '%b %d, %Y' 152 153 # If true, SmartyPants will be used to convert quotes and dashes to 154 # typographically correct entities. 155 # html_use_smartypants = True 156 157 # Custom sidebar templates, maps document names to template names. 158 # html_sidebars = {} 159 160 # Additional templates that should be rendered to pages, maps page names to 161 # template names. 162 # html_additional_pages = {} 163 164 # If false, no module index is generated. 165 html_use_modindex = True 166 167 # If false, no index is generated. 168 # html_use_index = True 169 170 # If true, the index is split into individual pages for each letter. 171 # html_split_index = False 172 173 # If true, links to the reST sources are added to the pages. 174 # html_show_sourcelink = True 175 176 # If true, an OpenSearch description file will be output, and all pages will 177 # contain a <link> tag referring to it. The value of this option must be the 178 # base URL from which the finished HTML is served. 179 # html_use_opensearch = '' 180 181 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). 182 # html_file_suffix = '' 183 184 # Output file base name for HTML help builder. 185 htmlhelp_basename = 'performance' 186 187 188 # -- Options for LaTeX output -------------------------------------------- 189 190 # The paper size ('letter' or 'a4'). 191 # latex_paper_size = 'letter' 192 193 # The font size ('10pt', '11pt' or '12pt'). 194 # latex_font_size = '10pt' 195 196 # Grouping the document tree into LaTeX files. List of tuples 197 # (source start file, target name, title, author, documentclass [howto/manual]). 
198 latex_documents = [ 199 ('index', 'performance.tex', 200 u'pandas vbench Performance Benchmarks', 201 u'Wes McKinney', 'manual'), 202 ] 203 204 # The name of an image file (relative to this directory) to place at the top of 205 # the title page. 206 # latex_logo = None 207 208 # For "manual" documents, if this is true, then toplevel headings are parts, 209 # not chapters. 210 # latex_use_parts = False 211 212 # Additional stuff for the LaTeX preamble. 213 # latex_preamble = '' 214 215 # Documents to append as an appendix to all manuals. 216 # latex_appendices = [] 217 218 # If false, no module index is generated. 219 # latex_use_modindex = True 220 221 222 # Example configuration for intersphinx: refer to the Python standard library. 223 # intersphinx_mapping = {'http://docs.scipy.org/': None} 224 import glob 225 autosummary_generate = glob.glob("*.rst") 226 [end of vb_suite/source/conf.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
96b364a4a337b608c92dd6d5a00ceedd80c29315
Conflicting documentation about index uniqueness The page http://pandas.pydata.org/pandas-docs/dev/generated/pandas.Series.html says: > index : array-like or Index (1d) > Values must be unique and hashable, same length as data. A little bit earlier the same page says: > Labels need not be unique but must be any hashable type. Other pages also mention non-unique index support (e.g. http://pandas.pydata.org/pandas-docs/dev/dsintro.html) It looks like `index` description should read: > index : array-like or Index (1d) > Values must be hashable, same length as data. Non-unique index values are allowed.
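To make the documentation conflict concrete, here is a minimal sketch (plain pandas, no extra assumptions) showing that the `Series` constructor accepts repeated index labels without complaint, which is why the "Values must be unique" wording is wrong:

```python
import pandas as pd

# Repeated labels are accepted, contradicting the old "Values must be
# unique" claim in the Series constructor docstring.
s = pd.Series([10, 20, 30], index=["a", "a", "b"])

print(s.index.is_unique)  # False
print(s["a"])             # both rows labelled "a" are returned
```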
yep, looks like it hasn't been updated; pls do a pull-request with those changes. @bgbg pull-request for this?
2016-10-04T05:54:18Z
<patch> diff --git a/pandas/core/series.py b/pandas/core/series.py --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -102,11 +102,11 @@ class Series(base.IndexOpsMixin, strings.StringAccessorMixin, """ One-dimensional ndarray with axis labels (including time series). - Labels need not be unique but must be any hashable type. The object + Labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index. Statistical methods from ndarray have been overridden to automatically exclude - missing data (currently represented as NaN) + missing data (currently represented as NaN). Operations between Series (+, -, /, *, **) align values based on their associated index values-- they need not be the same length. The result @@ -117,8 +117,8 @@ class Series(base.IndexOpsMixin, strings.StringAccessorMixin, data : array-like, dict, or scalar value Contains data stored in Series index : array-like or Index (1d) - Values must be unique and hashable, same length as data. Index - object (or other iterable of same length as data) Will default to + Values must be hashable and have the same length as `data`. + Non-unique index values are allowed. Will default to RangeIndex(len(data)) if not provided. If both a dict and index sequence are used, the index will override the keys found in the dict. </patch>
[]
[]
pandas-dev__pandas-6495
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> iat, iloc don't work with no unique index ``` import pandas as pd s = pd.Series(range(5), index=[1,1,2,2,3]) s.iat[2] ``` which returns `array([2, 3], dtype=int64)`, I think the result should be `2`. `s.iloc[2]` works, but `s.iloc[[2, 3]]` raise error. </issue> <code> [start of README.md] 1 # pandas: powerful Python data analysis toolkit 2 3 ![Travis-CI Build Status](https://travis-ci.org/pydata/pandas.png) 4 5 [![Scatter-CI Status page](http://scatterci.github.io/scatterci48.jpg)](http://scatterci.github.io/pydata/pandas) 6 7 ## What is it 8 9 **pandas** is a Python package providing fast, flexible, and expressive data 10 structures designed to make working with "relational" or "labeled" data both 11 easy and intuitive. It aims to be the fundamental high-level building block for 12 doing practical, **real world** data analysis in Python. Additionally, it has 13 the broader goal of becoming **the most powerful and flexible open source data 14 analysis / manipulation tool available in any language**. It is already well on 15 its way toward this goal. 16 17 ## Main Features 18 Here are just a few of the things that pandas does well: 19 20 - Easy handling of [**missing data**][missing-data] (represented as 21 `NaN`) in floating point as well as non-floating point data 22 - Size mutability: columns can be [**inserted and 23 deleted**][insertion-deletion] from DataFrame and higher dimensional 24 objects 25 - Automatic and explicit [**data alignment**][alignment]: objects can 26 be explicitly aligned to a set of labels, or the user can simply 27 ignore the labels and let `Series`, `DataFrame`, etc. automatically 28 align the data for you in computations 29 - Powerful, flexible [**group by**][groupby] functionality to perform 30 split-apply-combine operations on data sets, for both aggregating 31 and transforming data 32 - Make it [**easy to convert**][conversion] ragged, 33 differently-indexed data in other Python and NumPy data structures 34 into DataFrame objects 35 - Intelligent label-based [**slicing**][slicing], [**fancy 36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 37 large data sets 38 - Intuitive [**merging**][merging] and [**joining**][joining] data 39 sets 40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 41 data sets 42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 43 labels per tick) 44 - Robust IO tools for loading data from [**flat files**][flat-files] 45 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 47 - [**Time series**][timeseries]-specific functionality: date range 48 generation and frequency conversion, moving window statistics, 49 moving window linear regressions, date shifting and lagging, etc. 
50 51 52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 70 71 ## Where to get it 72 The source code is currently hosted on GitHub at: 73 http://github.com/pydata/pandas 74 75 Binary installers for the latest released version are available at the Python 76 package index 77 78 http://pypi.python.org/pypi/pandas/ 79 80 And via `easy_install`: 81 82 ```sh 83 easy_install pandas 84 ``` 85 86 or `pip`: 87 88 ```sh 89 pip install pandas 90 ``` 91 92 ## Dependencies 93 - [NumPy](http://www.numpy.org): 1.6.1 or higher 94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 95 - [pytz](http://pytz.sourceforge.net) 96 - Needed for time zone support with ``pandas.date_range`` 97 98 ### Highly Recommended Dependencies 99 - [numexpr](http://code.google.com/p/numexpr/) 100 - Needed to accelerate some expression evaluation operations 101 - Required by PyTables 102 - [bottleneck](http://berkeleyanalytics.com/bottleneck) 103 - Needed to accelerate certain numerical operations 104 105 ### Optional dependencies 106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher. 107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions 108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage 109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended. 
110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting 111 - [statsmodels](http://statsmodels.sourceforge.net/) 112 - Needed for parts of `pandas.stats` 113 - For Excel I/O: 114 - [xlrd/xlwt](http://www.python-excel.org/) 115 - Excel reading (xlrd) and writing (xlwt) 116 - [openpyxl](http://packages.python.org/openpyxl/) 117 - openpyxl version 1.6.1 or higher, for writing .xlsx files 118 - xlrd >= 0.9.0 119 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter) 120 - Alternative Excel writer. 121 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/) 122 - Needed for `pandas.io.gbq` 123 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access. 124 - One of the following combinations of libraries is needed to use the 125 top-level [`pandas.read_html`][read-html-docs] function: 126 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any 127 recent version of [html5lib][html5lib] is okay.) 128 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml] 129 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml] 130 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas] 131 for reasons as to why you should probably **not** take this approach. 132 133 #### Notes about HTML parsing libraries 134 - If you install [BeautifulSoup4][BeautifulSoup4] you must install 135 either [lxml][lxml] or [html5lib][html5lib] or both. 136 `pandas.read_html` will **not** work with *only* `BeautifulSoup4` 137 installed. 138 - You are strongly encouraged to read [HTML reading 139 gotchas][html-gotchas]. It explains issues surrounding the 140 installation and usage of the above three libraries. 141 - You may need to install an older version of 142 [BeautifulSoup4][BeautifulSoup4]: 143 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 144 32-bit Ubuntu/Debian 145 - Additionally, if you're using [Anaconda][Anaconda] you should 146 definitely read [the gotchas about HTML parsing][html-gotchas] 147 libraries 148 - If you're on a system with `apt-get` you can do 149 150 ```sh 151 sudo apt-get build-dep python-lxml 152 ``` 153 154 to get the necessary dependencies for installation of [lxml][lxml]. 155 This will prevent further headaches down the line. 156 157 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib" 158 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4" 159 [lxml]: http://lxml.de 160 [Anaconda]: https://store.continuum.io/cshop/anaconda 161 [NumPy]: http://numpy.scipy.org/ 162 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing 163 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html 164 165 ## Installation from sources 166 To install pandas from source you need Cython in addition to the normal 167 dependencies above. 
Cython can be installed from pypi: 168 169 ```sh 170 pip install cython 171 ``` 172 173 In the `pandas` directory (same one where you found this file after 174 cloning the git repo), execute: 175 176 ```sh 177 python setup.py install 178 ``` 179 180 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html): 181 182 ```sh 183 python setup.py develop 184 ``` 185 186 Alternatively, you can use `pip` if you want all the dependencies pulled 187 in automatically (the `-e` option is for installing it in [development 188 mode](http://www.pip-installer.org/en/latest/usage.html)): 189 190 ```sh 191 pip install -e . 192 ``` 193 194 On Windows, you will need to install MinGW and execute: 195 196 ```sh 197 python setup.py build --compiler=mingw32 198 python setup.py install 199 ``` 200 201 See http://pandas.pydata.org/ for more information. 202 203 ## License 204 BSD 205 206 ## Documentation 207 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 208 209 The Sphinx documentation should provide a good starting point for learning how 210 to use the library. Expect the docs to continue to expand as time goes on. 211 212 ## Background 213 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 214 has been under active development since then. 215 216 ## Discussion and Development 217 Since pandas development is related to a number of other scientific 218 Python projects, questions are welcome on the scipy-user mailing 219 list. Specialized discussions or design issues should take place on 220 the pystatsmodels mailing list / Google group, where 221 ``scikits.statsmodels`` and other libraries will also be discussed: 222 223 http://groups.google.com/group/pystatsmodels 224 [end of README.md] [start of pandas/core/algorithms.py] 1 """ 2 Generic data algorithms. 
This module is experimental at the moment and not 3 intended for public consumption 4 """ 5 from __future__ import division 6 from warnings import warn 7 import numpy as np 8 9 import pandas.core.common as com 10 import pandas.algos as algos 11 import pandas.hashtable as htable 12 import pandas.compat as compat 13 from pandas.compat import filter, string_types 14 15 def match(to_match, values, na_sentinel=-1): 16 """ 17 Compute locations of to_match into values 18 19 Parameters 20 ---------- 21 to_match : array-like 22 values to find positions of 23 values : array-like 24 Unique set of values 25 na_sentinel : int, default -1 26 Value to mark "not found" 27 28 Examples 29 -------- 30 31 Returns 32 ------- 33 match : ndarray of integers 34 """ 35 values = com._asarray_tuplesafe(values) 36 if issubclass(values.dtype.type, string_types): 37 values = np.array(values, dtype='O') 38 39 f = lambda htype, caster: _match_generic(to_match, values, htype, caster) 40 result = _hashtable_algo(f, values.dtype) 41 42 if na_sentinel != -1: 43 44 # replace but return a numpy array 45 # use a Series because it handles dtype conversions properly 46 from pandas.core.series import Series 47 result = Series(result.ravel()).replace(-1,na_sentinel).values.reshape(result.shape) 48 49 return result 50 51 52 def unique(values): 53 """ 54 Compute unique values (not necessarily sorted) efficiently from input array 55 of values 56 57 Parameters 58 ---------- 59 values : array-like 60 61 Returns 62 ------- 63 uniques 64 """ 65 values = com._asarray_tuplesafe(values) 66 f = lambda htype, caster: _unique_generic(values, htype, caster) 67 return _hashtable_algo(f, values.dtype) 68 69 70 def _hashtable_algo(f, dtype): 71 """ 72 f(HashTable, type_caster) -> result 73 """ 74 if com.is_float_dtype(dtype): 75 return f(htable.Float64HashTable, com._ensure_float64) 76 elif com.is_integer_dtype(dtype): 77 return f(htable.Int64HashTable, com._ensure_int64) 78 else: 79 return f(htable.PyObjectHashTable, com._ensure_object) 80 81 82 def _match_generic(values, index, table_type, type_caster): 83 values = type_caster(values) 84 index = type_caster(index) 85 table = table_type(min(len(index), 1000000)) 86 table.map_locations(index) 87 return table.lookup(values) 88 89 90 def _unique_generic(values, table_type, type_caster): 91 values = type_caster(values) 92 table = table_type(min(len(values), 1000000)) 93 uniques = table.unique(values) 94 return type_caster(uniques) 95 96 97 def factorize(values, sort=False, order=None, na_sentinel=-1): 98 """ 99 Encode input values as an enumerated type or categorical variable 100 101 Parameters 102 ---------- 103 values : ndarray (1-d) 104 Sequence 105 sort : boolean, default False 106 Sort by values 107 order : 108 na_sentinel: int, default -1 109 Value to mark "not found" 110 111 Returns 112 ------- 113 labels : the indexer to the original array 114 uniques : the unique values 115 116 note: an array of Periods will ignore sort as it returns an always sorted PeriodIndex 117 """ 118 from pandas.tseries.period import PeriodIndex 119 vals = np.asarray(values) 120 is_datetime = com.is_datetime64_dtype(vals) 121 (hash_klass, vec_klass), vals = _get_data_algo(vals, _hashtables) 122 123 table = hash_klass(len(vals)) 124 uniques = vec_klass() 125 labels = table.get_labels(vals, uniques, 0, na_sentinel) 126 127 labels = com._ensure_platform_int(labels) 128 129 uniques = uniques.to_array() 130 131 if sort and len(uniques) > 0: 132 try: 133 sorter = uniques.argsort() 134 except: 135 # unorderable in py3 if 
mixed str/int 136 t = hash_klass(len(uniques)) 137 t.map_locations(com._ensure_object(uniques)) 138 139 # order ints before strings 140 ordered = np.concatenate([ 141 np.sort(np.array([ e for i, e in enumerate(uniques) if f(e) ],dtype=object)) for f in [ lambda x: not isinstance(x,string_types), 142 lambda x: isinstance(x,string_types) ] 143 ]) 144 sorter = com._ensure_platform_int(t.lookup(com._ensure_object(ordered))) 145 146 reverse_indexer = np.empty(len(sorter), dtype=np.int_) 147 reverse_indexer.put(sorter, np.arange(len(sorter))) 148 149 mask = labels < 0 150 labels = reverse_indexer.take(labels) 151 np.putmask(labels, mask, -1) 152 153 uniques = uniques.take(sorter) 154 155 if is_datetime: 156 uniques = uniques.astype('M8[ns]') 157 if isinstance(values, PeriodIndex): 158 uniques = PeriodIndex(ordinal=uniques, freq=values.freq) 159 160 return labels, uniques 161 162 163 def value_counts(values, sort=True, ascending=False, normalize=False, 164 bins=None): 165 """ 166 Compute a histogram of the counts of non-null values 167 168 Parameters 169 ---------- 170 values : ndarray (1-d) 171 sort : boolean, default True 172 Sort by values 173 ascending : boolean, default False 174 Sort in ascending order 175 normalize: boolean, default False 176 If True then compute a relative histogram 177 bins : integer, optional 178 Rather than count values, group them into half-open bins, 179 convenience for pd.cut, only works with numeric data 180 181 Returns 182 ------- 183 value_counts : Series 184 185 """ 186 from pandas.core.series import Series 187 from pandas.tools.tile import cut 188 189 values = Series(values).values 190 191 if bins is not None: 192 try: 193 cat, bins = cut(values, bins, retbins=True) 194 except TypeError: 195 raise TypeError("bins argument only works with numeric data.") 196 values = cat.labels 197 198 if com.is_integer_dtype(values.dtype): 199 values = com._ensure_int64(values) 200 keys, counts = htable.value_count_int64(values) 201 202 elif issubclass(values.dtype.type, (np.datetime64, np.timedelta64)): 203 dtype = values.dtype 204 values = values.view(np.int64) 205 keys, counts = htable.value_count_int64(values) 206 207 # convert the keys back to the dtype we came in 208 keys = Series(keys, dtype=dtype) 209 210 else: 211 mask = com.isnull(values) 212 values = com._ensure_object(values) 213 keys, counts = htable.value_count_object(values, mask) 214 215 result = Series(counts, index=com._values_from_object(keys)) 216 217 if bins is not None: 218 # TODO: This next line should be more efficient 219 result = result.reindex(np.arange(len(cat.levels)), fill_value=0) 220 result.index = bins[:-1] 221 222 if sort: 223 result.sort() 224 if not ascending: 225 result = result[::-1] 226 227 if normalize: 228 result = result / float(values.size) 229 230 return result 231 232 233 def mode(values): 234 """Returns the mode or mode(s) of the passed Series or ndarray (sorted)""" 235 # must sort because hash order isn't necessarily defined. 
236 from pandas.core.series import Series 237 238 if isinstance(values, Series): 239 constructor = values._constructor 240 values = values.values 241 else: 242 values = np.asanyarray(values) 243 constructor = Series 244 245 dtype = values.dtype 246 if com.is_integer_dtype(values.dtype): 247 values = com._ensure_int64(values) 248 result = constructor(sorted(htable.mode_int64(values)), dtype=dtype) 249 250 elif issubclass(values.dtype.type, (np.datetime64, np.timedelta64)): 251 dtype = values.dtype 252 values = values.view(np.int64) 253 result = constructor(sorted(htable.mode_int64(values)), dtype=dtype) 254 255 else: 256 mask = com.isnull(values) 257 values = com._ensure_object(values) 258 res = htable.mode_object(values, mask) 259 try: 260 res = sorted(res) 261 except TypeError as e: 262 warn("Unable to sort modes: %s" % e) 263 result = constructor(res, dtype=dtype) 264 265 return result 266 267 268 def rank(values, axis=0, method='average', na_option='keep', 269 ascending=True, pct=False): 270 """ 271 272 """ 273 if values.ndim == 1: 274 f, values = _get_data_algo(values, _rank1d_functions) 275 ranks = f(values, ties_method=method, ascending=ascending, 276 na_option=na_option, pct=pct) 277 elif values.ndim == 2: 278 f, values = _get_data_algo(values, _rank2d_functions) 279 ranks = f(values, axis=axis, ties_method=method, 280 ascending=ascending, na_option=na_option) 281 282 return ranks 283 284 285 def quantile(x, q, interpolation_method='fraction'): 286 """ 287 Compute sample quantile or quantiles of the input array. For example, q=0.5 288 computes the median. 289 290 The `interpolation_method` parameter supports three values, namely 291 `fraction` (default), `lower` and `higher`. Interpolation is done only, 292 if the desired quantile lies between two data points `i` and `j`. For 293 `fraction`, the result is an interpolated value between `i` and `j`; 294 for `lower`, the result is `i`, for `higher` the result is `j`. 295 296 Parameters 297 ---------- 298 x : ndarray 299 Values from which to extract score. 300 q : scalar or array 301 Percentile at which to extract score. 302 interpolation_method : {'fraction', 'lower', 'higher'}, optional 303 This optional parameter specifies the interpolation method to use, 304 when the desired quantile lies between two data points `i` and `j`: 305 306 - fraction: `i + (j - i)*fraction`, where `fraction` is the 307 fractional part of the index surrounded by `i` and `j`. 308 -lower: `i`. 309 - higher: `j`. 310 311 Returns 312 ------- 313 score : float 314 Score at percentile. 
315 316 Examples 317 -------- 318 >>> from scipy import stats 319 >>> a = np.arange(100) 320 >>> stats.scoreatpercentile(a, 50) 321 49.5 322 323 """ 324 x = np.asarray(x) 325 mask = com.isnull(x) 326 327 x = x[-mask] 328 329 values = np.sort(x) 330 331 def _get_score(at): 332 if len(values) == 0: 333 return np.nan 334 335 idx = at * (len(values) - 1) 336 if idx % 1 == 0: 337 score = values[idx] 338 else: 339 if interpolation_method == 'fraction': 340 score = _interpolate(values[int(idx)], values[int(idx) + 1], 341 idx % 1) 342 elif interpolation_method == 'lower': 343 score = values[np.floor(idx)] 344 elif interpolation_method == 'higher': 345 score = values[np.ceil(idx)] 346 else: 347 raise ValueError("interpolation_method can only be 'fraction' " 348 ", 'lower' or 'higher'") 349 350 return score 351 352 if np.isscalar(q): 353 return _get_score(q) 354 else: 355 q = np.asarray(q, np.float64) 356 return algos.arrmap_float64(q, _get_score) 357 358 359 def _interpolate(a, b, fraction): 360 """Returns the point at the given fraction between a and b, where 361 'fraction' must be between 0 and 1. 362 """ 363 return a + (b - a) * fraction 364 365 366 def _get_data_algo(values, func_map): 367 mask = None 368 if com.is_float_dtype(values): 369 f = func_map['float64'] 370 values = com._ensure_float64(values) 371 elif com.is_datetime64_dtype(values): 372 373 # if we have NaT, punt to object dtype 374 mask = com.isnull(values) 375 if mask.ravel().any(): 376 f = func_map['generic'] 377 values = com._ensure_object(values) 378 values[mask] = np.nan 379 else: 380 f = func_map['int64'] 381 values = values.view('i8') 382 383 elif com.is_integer_dtype(values): 384 f = func_map['int64'] 385 values = com._ensure_int64(values) 386 else: 387 f = func_map['generic'] 388 values = com._ensure_object(values) 389 return f, values 390 391 392 def group_position(*args): 393 """ 394 Get group position 395 """ 396 from collections import defaultdict 397 table = defaultdict(int) 398 399 result = [] 400 for tup in zip(*args): 401 result.append(table[tup]) 402 table[tup] += 1 403 404 return result 405 406 407 _rank1d_functions = { 408 'float64': algos.rank_1d_float64, 409 'int64': algos.rank_1d_int64, 410 'generic': algos.rank_1d_generic 411 } 412 413 _rank2d_functions = { 414 'float64': algos.rank_2d_float64, 415 'int64': algos.rank_2d_int64, 416 'generic': algos.rank_2d_generic 417 } 418 419 _hashtables = { 420 'float64': (htable.Float64HashTable, htable.Float64Vector), 421 'int64': (htable.Int64HashTable, htable.Int64Vector), 422 'generic': (htable.PyObjectHashTable, htable.ObjectVector) 423 } 424 [end of pandas/core/algorithms.py] [start of pandas/tseries/tools.py] 1 from datetime import datetime, timedelta 2 import re 3 import sys 4 5 import numpy as np 6 7 import pandas.lib as lib 8 import pandas.tslib as tslib 9 import pandas.core.common as com 10 from pandas.compat import StringIO, callable 11 import pandas.compat as compat 12 13 try: 14 import dateutil 15 from dateutil.parser import parse, DEFAULTPARSER 16 from dateutil.relativedelta import relativedelta 17 18 # raise exception if dateutil 2.0 install on 2.x platform 19 if (sys.version_info[0] == 2 and 20 dateutil.__version__ == '2.0'): # pragma: no cover 21 raise Exception('dateutil 2.0 incompatible with Python 2.x, you must ' 22 'install version 1.5 or 2.1+!') 23 except ImportError: # pragma: no cover 24 print('Please install python-dateutil via easy_install or some method!') 25 raise # otherwise a 2nd import won't show the message 26 27 _DATEUTIL_LEXER_SPLIT = 
None 28 try: 29 # Since these are private methods from dateutil, it is safely imported 30 # here so in case this interface changes, pandas will just fallback 31 # to not using the functionality 32 from dateutil.parser import _timelex 33 34 if hasattr(_timelex, 'split'): 35 def _lexer_split_from_str(dt_str): 36 # The StringIO(str(_)) is for dateutil 2.2 compatibility 37 return _timelex.split(StringIO(str(dt_str))) 38 39 _DATEUTIL_LEXER_SPLIT = _lexer_split_from_str 40 except (ImportError, AttributeError): 41 pass 42 43 def _infer_tzinfo(start, end): 44 def _infer(a, b): 45 tz = a.tzinfo 46 if b and b.tzinfo: 47 if not (tslib.get_timezone(tz) == tslib.get_timezone(b.tzinfo)): 48 raise AssertionError('Inputs must both have the same timezone,' 49 ' {0} != {1}'.format(tz, b.tzinfo)) 50 return tz 51 tz = None 52 if start is not None: 53 tz = _infer(start, end) 54 elif end is not None: 55 tz = _infer(end, start) 56 return tz 57 58 59 def _maybe_get_tz(tz): 60 if isinstance(tz, compat.string_types): 61 import pytz 62 tz = pytz.timezone(tz) 63 if com.is_integer(tz): 64 import pytz 65 tz = pytz.FixedOffset(tz / 60) 66 return tz 67 68 def _guess_datetime_format(dt_str, dayfirst=False, 69 dt_str_parse=compat.parse_date, 70 dt_str_split=_DATEUTIL_LEXER_SPLIT): 71 """ 72 Guess the datetime format of a given datetime string. 73 74 Parameters 75 ---------- 76 dt_str : string, datetime string to guess the format of 77 dayfirst : boolean, default False 78 If True parses dates with the day first, eg 20/01/2005 79 Warning: dayfirst=True is not strict, but will prefer to parse 80 with day first (this is a known bug). 81 dt_str_parse : function, defaults to `compate.parse_date` (dateutil) 82 This function should take in a datetime string and return 83 a `datetime.datetime` guess that the datetime string represents 84 dt_str_split : function, defaults to `_DATEUTIL_LEXER_SPLIT` (dateutil) 85 This function should take in a datetime string and return 86 a list of strings, the guess of the various specific parts 87 e.g. 
'2011/12/30' -> ['2011', '/', '12', '/', '30'] 88 89 Returns 90 ------- 91 ret : datetime formatt string (for `strftime` or `strptime`) 92 """ 93 if dt_str_parse is None or dt_str_split is None: 94 return None 95 96 if not isinstance(dt_str, compat.string_types): 97 return None 98 99 day_attribute_and_format = (('day',), '%d') 100 101 datetime_attrs_to_format = [ 102 (('year', 'month', 'day'), '%Y%m%d'), 103 (('year',), '%Y'), 104 (('month',), '%B'), 105 (('month',), '%b'), 106 (('month',), '%m'), 107 day_attribute_and_format, 108 (('hour',), '%H'), 109 (('minute',), '%M'), 110 (('second',), '%S'), 111 (('microsecond',), '%f'), 112 (('second', 'microsecond'), '%S.%f'), 113 ] 114 115 if dayfirst: 116 datetime_attrs_to_format.remove(day_attribute_and_format) 117 datetime_attrs_to_format.insert(0, day_attribute_and_format) 118 119 try: 120 parsed_datetime = dt_str_parse(dt_str, dayfirst=dayfirst) 121 except: 122 # In case the datetime can't be parsed, its format cannot be guessed 123 return None 124 125 if parsed_datetime is None: 126 return None 127 128 try: 129 tokens = dt_str_split(dt_str) 130 except: 131 # In case the datetime string can't be split, its format cannot 132 # be guessed 133 return None 134 135 format_guess = [None] * len(tokens) 136 found_attrs = set() 137 138 for attrs, attr_format in datetime_attrs_to_format: 139 # If a given attribute has been placed in the format string, skip 140 # over other formats for that same underlying attribute (IE, month 141 # can be represented in multiple different ways) 142 if set(attrs) & found_attrs: 143 continue 144 145 if all(getattr(parsed_datetime, attr) is not None for attr in attrs): 146 for i, token_format in enumerate(format_guess): 147 if (token_format is None and 148 tokens[i] == parsed_datetime.strftime(attr_format)): 149 format_guess[i] = attr_format 150 found_attrs.update(attrs) 151 break 152 153 # Only consider it a valid guess if we have a year, month and day 154 if len(set(['year', 'month', 'day']) & found_attrs) != 3: 155 return None 156 157 output_format = [] 158 for i, guess in enumerate(format_guess): 159 if guess is not None: 160 # Either fill in the format placeholder (like %Y) 161 output_format.append(guess) 162 else: 163 # Or just the token separate (IE, the dashes in "01-01-2013") 164 try: 165 # If the token is numeric, then we likely didn't parse it 166 # properly, so our guess is wrong 167 float(tokens[i]) 168 return None 169 except ValueError: 170 pass 171 172 output_format.append(tokens[i]) 173 174 guessed_format = ''.join(output_format) 175 176 if parsed_datetime.strftime(guessed_format) == dt_str: 177 return guessed_format 178 179 def _guess_datetime_format_for_array(arr, **kwargs): 180 # Try to guess the format based on the first non-NaN element 181 non_nan_elements = com.notnull(arr).nonzero()[0] 182 if len(non_nan_elements): 183 return _guess_datetime_format(arr[non_nan_elements[0]], **kwargs) 184 185 def to_datetime(arg, errors='ignore', dayfirst=False, utc=None, box=True, 186 format=None, coerce=False, unit='ns', 187 infer_datetime_format=False): 188 """ 189 Convert argument to datetime 190 191 Parameters 192 ---------- 193 arg : string, datetime, array of strings (with possible NAs) 194 errors : {'ignore', 'raise'}, default 'ignore' 195 Errors are ignored by default (values left untouched) 196 dayfirst : boolean, default False 197 If True parses dates with the day first, eg 20/01/2005 198 Warning: dayfirst=True is not strict, but will prefer to parse 199 with day first (this is a known bug). 
200 utc : boolean, default None 201 Return UTC DatetimeIndex if True (converting any tz-aware 202 datetime.datetime objects as well) 203 box : boolean, default True 204 If True returns a DatetimeIndex, if False returns ndarray of values 205 format : string, default None 206 strftime to parse time, eg "%d/%m/%Y" 207 coerce : force errors to NaT (False by default) 208 unit : unit of the arg (D,s,ms,us,ns) denote the unit in epoch 209 (e.g. a unix timestamp), which is an integer/float number 210 infer_datetime_format: boolean, default False 211 If no `format` is given, try to infer the format based on the first 212 datetime string. Provides a large speed-up in many cases. 213 214 Returns 215 ------- 216 ret : datetime if parsing succeeded 217 218 Examples 219 -------- 220 Take separate series and convert to datetime 221 222 >>> import pandas as pd 223 >>> i = pd.date_range('20000101',periods=100) 224 >>> df = pd.DataFrame(dict(year = i.year, month = i.month, day = i.day)) 225 >>> pd.to_datetime(df.year*10000 + df.month*100 + df.day, format='%Y%m%d') 226 227 Or from strings 228 229 >>> df = df.astype(str) 230 >>> pd.to_datetime(df.day + df.month + df.year, format="%d%m%Y") 231 """ 232 from pandas import Timestamp 233 from pandas.core.series import Series 234 from pandas.tseries.index import DatetimeIndex 235 236 def _convert_listlike(arg, box, format): 237 238 if isinstance(arg, (list,tuple)): 239 arg = np.array(arg, dtype='O') 240 241 if com.is_datetime64_ns_dtype(arg): 242 if box and not isinstance(arg, DatetimeIndex): 243 try: 244 return DatetimeIndex(arg, tz='utc' if utc else None) 245 except ValueError: 246 pass 247 248 return arg 249 250 arg = com._ensure_object(arg) 251 252 if infer_datetime_format and format is None: 253 format = _guess_datetime_format_for_array(arg, dayfirst=dayfirst) 254 255 if format is not None: 256 # There is a special fast-path for iso8601 formatted 257 # datetime strings, so in those cases don't use the inferred 258 # format because this path makes process slower in this 259 # special case 260 format_is_iso8601 = ( 261 '%Y-%m-%dT%H:%M:%S.%f'.startswith(format) or 262 '%Y-%m-%d %H:%M:%S.%f'.startswith(format) 263 ) 264 if format_is_iso8601: 265 format = None 266 267 try: 268 result = None 269 270 if format is not None: 271 # shortcut formatting here 272 if format == '%Y%m%d': 273 try: 274 result = _attempt_YYYYMMDD(arg) 275 except: 276 raise ValueError("cannot convert the input to '%Y%m%d' date format") 277 278 # fallback 279 if result is None: 280 try: 281 result = tslib.array_strptime( 282 arg, format, coerce=coerce 283 ) 284 except (tslib.OutOfBoundsDatetime): 285 if errors == 'raise': 286 raise 287 result = arg 288 except ValueError: 289 # Only raise this error if the user provided the 290 # datetime format, and not when it was inferred 291 if not infer_datetime_format: 292 raise 293 294 if result is None and (format is None or infer_datetime_format): 295 result = tslib.array_to_datetime(arg, raise_=errors == 'raise', 296 utc=utc, dayfirst=dayfirst, 297 coerce=coerce, unit=unit) 298 299 if com.is_datetime64_dtype(result) and box: 300 result = DatetimeIndex(result, tz='utc' if utc else None) 301 return result 302 303 except ValueError as e: 304 try: 305 values, tz = tslib.datetime_to_datetime64(arg) 306 return DatetimeIndex._simple_new(values, None, tz=tz) 307 except (ValueError, TypeError): 308 raise e 309 310 if arg is None: 311 return arg 312 elif isinstance(arg, Timestamp): 313 return arg 314 elif isinstance(arg, Series): 315 values = 
_convert_listlike(arg.values, False, format) 316 return Series(values, index=arg.index, name=arg.name) 317 elif com.is_list_like(arg): 318 return _convert_listlike(arg, box, format) 319 320 return _convert_listlike(np.array([ arg ]), box, format)[0] 321 322 class DateParseError(ValueError): 323 pass 324 325 def _attempt_YYYYMMDD(arg): 326 """ try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like, 327 arg is a passed in as an object dtype, but could really be ints/strings with nan-like/or floats (e.g. with nan) """ 328 329 def calc(carg): 330 # calculate the actual result 331 carg = carg.astype(object) 332 return lib.try_parse_year_month_day(carg/10000,carg/100 % 100, carg % 100) 333 334 def calc_with_mask(carg,mask): 335 result = np.empty(carg.shape, dtype='M8[ns]') 336 iresult = result.view('i8') 337 iresult[-mask] = tslib.iNaT 338 result[mask] = calc(carg[mask].astype(np.float64).astype(np.int64)).astype('M8[ns]') 339 return result 340 341 # try intlike / strings that are ints 342 try: 343 return calc(arg.astype(np.int64)) 344 except: 345 pass 346 347 # a float with actual np.nan 348 try: 349 carg = arg.astype(np.float64) 350 return calc_with_mask(carg,com.notnull(carg)) 351 except: 352 pass 353 354 # string with NaN-like 355 try: 356 mask = ~lib.ismember(arg, tslib._nat_strings) 357 return calc_with_mask(arg,mask) 358 except: 359 pass 360 361 return None 362 363 # patterns for quarters like '4Q2005', '05Q1' 364 qpat1full = re.compile(r'(\d)Q(\d\d\d\d)') 365 qpat2full = re.compile(r'(\d\d\d\d)Q(\d)') 366 qpat1 = re.compile(r'(\d)Q(\d\d)') 367 qpat2 = re.compile(r'(\d\d)Q(\d)') 368 ypat = re.compile(r'(\d\d\d\d)$') 369 has_time = re.compile('(.+)([\s]|T)+(.+)') 370 371 372 def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None): 373 """ 374 Try hard to parse datetime string, leveraging dateutil plus some extra 375 goodies like quarter recognition. 
376 377 Parameters 378 ---------- 379 arg : compat.string_types 380 freq : str or DateOffset, default None 381 Helps with interpreting time string if supplied 382 dayfirst : bool, default None 383 If None uses default from print_config 384 yearfirst : bool, default None 385 If None uses default from print_config 386 387 Returns 388 ------- 389 datetime, datetime/dateutil.parser._result, str 390 """ 391 from pandas.core.config import get_option 392 from pandas.tseries.offsets import DateOffset 393 from pandas.tseries.frequencies import (_get_rule_month, _month_numbers, 394 _get_freq_str) 395 396 if not isinstance(arg, compat.string_types): 397 return arg 398 399 arg = arg.upper() 400 401 default = datetime(1, 1, 1).replace(hour=0, minute=0, 402 second=0, microsecond=0) 403 404 # special handling for possibilities eg, 2Q2005, 2Q05, 2005Q1, 05Q1 405 if len(arg) in [4, 6]: 406 m = ypat.match(arg) 407 if m: 408 ret = default.replace(year=int(m.group(1))) 409 return ret, ret, 'year' 410 411 add_century = False 412 if len(arg) == 4: 413 add_century = True 414 qpats = [(qpat1, 1), (qpat2, 0)] 415 else: 416 qpats = [(qpat1full, 1), (qpat2full, 0)] 417 418 for pat, yfirst in qpats: 419 qparse = pat.match(arg) 420 if qparse is not None: 421 if yfirst: 422 yi, qi = 1, 2 423 else: 424 yi, qi = 2, 1 425 q = int(qparse.group(yi)) 426 y_str = qparse.group(qi) 427 y = int(y_str) 428 if add_century: 429 y += 2000 430 431 if freq is not None: 432 # hack attack, #1228 433 mnum = _month_numbers[_get_rule_month(freq)] + 1 434 month = (mnum + (q - 1) * 3) % 12 + 1 435 if month > mnum: 436 y -= 1 437 else: 438 month = (q - 1) * 3 + 1 439 440 ret = default.replace(year=y, month=month) 441 return ret, ret, 'quarter' 442 443 is_mo_str = freq is not None and freq == 'M' 444 is_mo_off = getattr(freq, 'rule_code', None) == 'M' 445 is_monthly = is_mo_str or is_mo_off 446 if len(arg) == 6 and is_monthly: 447 try: 448 ret = _try_parse_monthly(arg) 449 if ret is not None: 450 return ret, ret, 'month' 451 except Exception: 452 pass 453 454 # montly f7u12 455 mresult = _attempt_monthly(arg) 456 if mresult: 457 return mresult 458 459 if dayfirst is None: 460 dayfirst = get_option("display.date_dayfirst") 461 if yearfirst is None: 462 yearfirst = get_option("display.date_yearfirst") 463 464 try: 465 parsed, reso = dateutil_parse(arg, default, dayfirst=dayfirst, 466 yearfirst=yearfirst) 467 except Exception as e: 468 # TODO: allow raise of errors within instead 469 raise DateParseError(e) 470 471 if parsed is None: 472 raise DateParseError("Could not parse %s" % arg) 473 474 return parsed, parsed, reso # datetime, resolution 475 476 477 def dateutil_parse(timestr, default, 478 ignoretz=False, tzinfos=None, 479 **kwargs): 480 """ lifted from dateutil to get resolution""" 481 from dateutil import tz 482 import time 483 fobj = StringIO(str(timestr)) 484 485 res = DEFAULTPARSER._parse(fobj, **kwargs) 486 487 # dateutil 2.2 compat 488 if isinstance(res, tuple): 489 res, _ = res 490 491 if res is None: 492 raise ValueError("unknown string format") 493 494 repl = {} 495 reso = None 496 for attr in ["year", "month", "day", "hour", 497 "minute", "second", "microsecond"]: 498 value = getattr(res, attr) 499 if value is not None: 500 repl[attr] = value 501 reso = attr 502 503 if reso is None: 504 raise ValueError("Cannot parse date.") 505 506 if reso == 'microsecond' and repl['microsecond'] == 0: 507 reso = 'second' 508 509 ret = default.replace(**repl) 510 if res.weekday is not None and not res.day: 511 ret = ret + 
relativedelta.relativedelta(weekday=res.weekday) 512 if not ignoretz: 513 if callable(tzinfos) or tzinfos and res.tzname in tzinfos: 514 if callable(tzinfos): 515 tzdata = tzinfos(res.tzname, res.tzoffset) 516 else: 517 tzdata = tzinfos.get(res.tzname) 518 if isinstance(tzdata, datetime.tzinfo): 519 tzinfo = tzdata 520 elif isinstance(tzdata, compat.string_types): 521 tzinfo = tz.tzstr(tzdata) 522 elif isinstance(tzdata, int): 523 tzinfo = tz.tzoffset(res.tzname, tzdata) 524 else: 525 raise ValueError("offset must be tzinfo subclass, " 526 "tz string, or int offset") 527 ret = ret.replace(tzinfo=tzinfo) 528 elif res.tzname and res.tzname in time.tzname: 529 ret = ret.replace(tzinfo=tz.tzlocal()) 530 elif res.tzoffset == 0: 531 ret = ret.replace(tzinfo=tz.tzutc()) 532 elif res.tzoffset: 533 ret = ret.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset)) 534 return ret, reso 535 536 537 def _attempt_monthly(val): 538 pats = ['%Y-%m', '%m-%Y', '%b %Y', '%b-%Y'] 539 for pat in pats: 540 try: 541 ret = datetime.strptime(val, pat) 542 return ret, ret, 'month' 543 except Exception: 544 pass 545 546 547 def _try_parse_monthly(arg): 548 base = 2000 549 add_base = False 550 default = datetime(1, 1, 1).replace(hour=0, minute=0, second=0, 551 microsecond=0) 552 553 if len(arg) == 4: 554 add_base = True 555 y = int(arg[:2]) 556 m = int(arg[2:4]) 557 elif len(arg) >= 6: # 201201 558 y = int(arg[:4]) 559 m = int(arg[4:6]) 560 if add_base: 561 y += base 562 ret = default.replace(year=y, month=m) 563 return ret 564 565 566 normalize_date = tslib.normalize_date 567 568 569 def format(dt): 570 """Returns date in YYYYMMDD format.""" 571 return dt.strftime('%Y%m%d') 572 573 OLE_TIME_ZERO = datetime(1899, 12, 30, 0, 0, 0) 574 575 576 def ole2datetime(oledt): 577 """function for converting excel date to normal date format""" 578 val = float(oledt) 579 580 # Excel has a bug where it thinks the date 2/29/1900 exists 581 # we just reject any date before 3/1/1900. 582 if val < 61: 583 raise ValueError("Value is outside of acceptable range: %s " % val) 584 585 return OLE_TIME_ZERO + timedelta(days=val) 586 [end of pandas/tseries/tools.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
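The `pandas/tseries/tools.py` module quoted above is exercised mainly through `pandas.to_datetime` and period string parsing. A short, hedged usage sketch of the public entry points (exact reprs vary by pandas version):

```python
import pandas as pd

# An explicit strftime format takes the array_strptime fast path shown
# in _convert_listlike above.
parsed = pd.to_datetime(["30/12/2011", "02/01/2012"], format="%d/%m/%Y")
print(parsed)  # DatetimeIndex(['2011-12-30', '2012-01-02'], ...)

# Quarter strings such as "2005Q1" are matched by the qpat* regexes in
# parse_time_string rather than handed to dateutil.
print(pd.Period("2005Q1", freq="Q"))
```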
pandas-dev/pandas
8cd9819160d2c0cdcb06f6fac9dedb39d6530fdc
iat, iloc don't work with no unique index ``` import pandas as pd s = pd.Series(range(5), index=[1,1,2,2,3]) s.iat[2] ``` which returns `array([2, 3], dtype=int64)`, I think the result should be `2`. `s.iloc[2]` works, but `s.iloc[[2, 3]]` raise error.
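For reference, a minimal sketch of the behaviour the report asks for; the expected values in the comments reflect the issue text and the fix below, not the buggy version being described:

```python
import pandas as pd

s = pd.Series(range(5), index=[1, 1, 2, 2, 3])

# Positional access should ignore the duplicated labels entirely.
print(s.iat[2])        # expected: 2, a scalar
print(s.iloc[2])       # expected: 2 as well
print(s.iloc[[2, 3]])  # expected: a 2-element Series with index [2, 2]
```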
2014-02-27T13:58:22Z
<patch> diff --git a/doc/source/release.rst b/doc/source/release.rst --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -183,6 +183,7 @@ Bug Fixes - Bug in ``io.data.DataReader`` when passed ``"F-F_Momentum_Factor"`` and ``data_source="famafrench"`` (:issue:`6460`) - Bug in ``sum`` of a ``timedelta64[ns]`` series (:issue:`6462`) - Bug in ``resample`` with a timezone and certain offsets (:issue:`6397`) +- Bug in ``iat/iloc`` with duplicate indices on a Series (:issue:`6493`) pandas 0.13.1 ------------- diff --git a/pandas/core/frame.py b/pandas/core/frame.py --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1519,7 +1519,7 @@ def _unpickle_matrix_compat(self, state): # pragma: no cover #---------------------------------------------------------------------- # Getting and setting elements - def get_value(self, index, col): + def get_value(self, index, col, takeable=False): """ Quickly retrieve single value at passed column and index @@ -1527,16 +1527,22 @@ def get_value(self, index, col): ---------- index : row label col : column label + takeable : interpret the index/col as indexers, default False Returns ------- value : scalar value """ + + if takeable is True: + series = self._iget_item_cache(col) + return series.values[index] + series = self._get_item_cache(col) engine = self.index._engine return engine.get_value(series.values, index) - def set_value(self, index, col, value): + def set_value(self, index, col, value, takeable=False): """ Put single value at passed column and index @@ -1545,6 +1551,7 @@ def set_value(self, index, col, value): index : row label col : column label value : scalar value + takeable : interpret the index/col as indexers, default False Returns ------- @@ -1553,6 +1560,10 @@ def set_value(self, index, col, value): otherwise a new object """ try: + if takeable is True: + series = self._iget_item_cache(col) + return series.set_value(index, value, takeable=True) + series = self._get_item_cache(col) engine = self.index._engine engine.set_value(series.values, index, value) diff --git a/pandas/core/generic.py b/pandas/core/generic.py --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -1004,6 +1004,7 @@ def __getitem__(self, item): return self._get_item_cache(item) def _get_item_cache(self, item): + """ return the cached item, item represents a label indexer """ cache = self._item_cache res = cache.get(item) if res is None: @@ -1021,6 +1022,15 @@ def _set_as_cached(self, item, cacher): a weakref to cacher """ self._cacher = (item, weakref.ref(cacher)) + def _iget_item_cache(self, item): + """ return the cached item, item represents a positional indexer """ + ax = self._info_axis + if ax.is_unique: + lower = self._get_item_cache(ax[item]) + else: + lower = self.take(item, axis=self._info_axis_number, convert=True) + return lower + def _box_item_values(self, key, values): raise NotImplementedError @@ -1595,7 +1605,8 @@ def _reindex_axes(self, axes, level, limit, method, fill_value, copy, obj = obj._reindex_with_indexers( {axis: [new_index, indexer]}, method=method, - fill_value=fill_value, limit=limit, copy=copy) + fill_value=fill_value, limit=limit, copy=copy, + allow_dups=takeable) return obj diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -1419,7 +1419,7 @@ def __getitem__(self, key): raise ValueError('Invalid call for scalar access (getting)!') key = self._convert_key(key) - return self.obj.get_value(*key) + return self.obj.get_value(*key, takeable=self._takeable) 
def __setitem__(self, key, value): if not isinstance(key, tuple): @@ -1427,33 +1427,32 @@ def __setitem__(self, key, value): if len(key) != self.obj.ndim: raise ValueError('Not enough indexers for scalar access ' '(setting)!') - key = self._convert_key(key) + key = list(self._convert_key(key)) key.append(value) - self.obj.set_value(*key) + self.obj.set_value(*key, takeable=self._takeable) class _AtIndexer(_ScalarAccessIndexer): """ label based scalar accessor """ - pass + _takeable = False class _iAtIndexer(_ScalarAccessIndexer): """ integer based scalar accessor """ + _takeable = True def _has_valid_setitem_indexer(self, indexer): self._has_valid_positional_setitem_indexer(indexer) def _convert_key(self, key): """ require integer args (and convert to label arguments) """ - ckey = [] for a, i in zip(self.obj.axes, key): if not com.is_integer(i): raise ValueError("iAt based indexing can only have integer " "indexers") - ckey.append(a[i]) - return ckey + return key # 32-bit floating point machine epsilon _eps = np.finfo('f4').eps diff --git a/pandas/core/panel.py b/pandas/core/panel.py --- a/pandas/core/panel.py +++ b/pandas/core/panel.py @@ -444,7 +444,7 @@ def as_matrix(self): #---------------------------------------------------------------------- # Getting and setting elements - def get_value(self, *args): + def get_value(self, *args, **kwargs): """ Quickly retrieve single value at (item, major, minor) location @@ -453,6 +453,7 @@ def get_value(self, *args): item : item label (panel item) major : major axis label (panel item row) minor : minor axis label (panel item column) + takeable : interpret the passed labels as indexers, default False Returns ------- @@ -466,12 +467,16 @@ def get_value(self, *args): raise TypeError('There must be an argument for each axis, you gave' ' {0} args, but {1} are required'.format(nargs, nreq)) + takeable = kwargs.get('takeable') - # hm, two layers to the onion - frame = self._get_item_cache(args[0]) - return frame.get_value(*args[1:]) + if takeable is True: + lower = self._iget_item_cache(args[0]) + else: + lower = self._get_item_cache(args[0]) + + return lower.get_value(*args[1:], takeable=takeable) - def set_value(self, *args): + def set_value(self, *args, **kwargs): """ Quickly set single value at (item, major, minor) location @@ -481,6 +486,7 @@ def set_value(self, *args): major : major axis label (panel item row) minor : minor axis label (panel item column) value : scalar + takeable : interpret the passed labels as indexers, default False Returns ------- @@ -496,10 +502,15 @@ def set_value(self, *args): raise TypeError('There must be an argument for each axis plus the ' 'value provided, you gave {0} args, but {1} are ' 'required'.format(nargs, nreq)) + takeable = kwargs.get('takeable') try: - frame = self._get_item_cache(args[0]) - frame.set_value(*args[1:]) + if takeable is True: + lower = self._iget_item_cache(args[0]) + else: + lower = self._get_item_cache(args[0]) + + lower.set_value(*args[1:], takeable=takeable) return self except KeyError: axes = self._expand_axes(args) diff --git a/pandas/core/series.py b/pandas/core/series.py --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -725,21 +725,24 @@ def reshape(self, *args, **kwargs): iget = _ixs irow = _ixs - def get_value(self, label): + def get_value(self, label, takeable=False): """ Quickly retrieve single value at passed index label Parameters ---------- index : label + takeable : interpret the index as indexers, default False Returns ------- value : scalar value """ + if takeable is 
True: + return self.values[label] return self.index.get_value(self.values, label) - def set_value(self, label, value): + def set_value(self, label, value, takeable=False): """ Quickly set single value at passed label. If label is not contained, a new object is created with the label placed at the end of the result @@ -751,6 +754,7 @@ def set_value(self, label, value): Partial indexing with MultiIndex not allowed value : object Scalar value + takeable : interpret the index as indexers, default False Returns ------- @@ -759,7 +763,10 @@ def set_value(self, label, value): otherwise a new object """ try: - self.index._engine.set_value(self.values, label, value) + if takeable: + self.values[label] = value + else: + self.index._engine.set_value(self.values, label, value) return self except KeyError: diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py --- a/pandas/sparse/frame.py +++ b/pandas/sparse/frame.py @@ -346,10 +346,15 @@ def __getitem__(self, key): return self._get_item_cache(key) @Appender(DataFrame.get_value.__doc__, indents=0) - def get_value(self, index, col): - return self._get_item_cache(col).get_value(index) + def get_value(self, index, col, takeable=False): + if takeable is True: + series = self._iget_item_cache(col) + else: + series = self._get_item_cache(col) + + return series.get_value(index, takeable=takeable) - def set_value(self, index, col, value): + def set_value(self, index, col, value, takeable=False): """ Put single value at passed column and index @@ -358,6 +363,7 @@ def set_value(self, index, col, value): index : row label col : column label value : scalar value + takeable : interpret the index/col as indexers, default False Notes ----- @@ -369,7 +375,7 @@ def set_value(self, index, col, value): ------- frame : DataFrame """ - dense = self.to_dense().set_value(index, col, value) + dense = self.to_dense().set_value(index, col, value, takeable=takeable) return dense.to_sparse(kind=self._default_kind, fill_value=self._default_fill_value) diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py --- a/pandas/sparse/series.py +++ b/pandas/sparse/series.py @@ -409,22 +409,23 @@ def get(self, label, default=None): else: return default - def get_value(self, label): + def get_value(self, label, takeable=False): """ Retrieve single value at passed index label Parameters ---------- index : label + takeable : interpret the index as indexers, default False Returns ------- value : scalar value """ - loc = self.index.get_loc(label) + loc = label if takeable is True else self.index.get_loc(label) return self._get_val_at(loc) - def set_value(self, label, value): + def set_value(self, label, value, takeable=False): """ Quickly set single value at passed label. If label is not contained, a new object is created with the label placed at the end of the result @@ -436,6 +437,7 @@ def set_value(self, label, value): Partial indexing with MultiIndex not allowed value : object Scalar value + takeable : interpret the index as indexers, default False Notes ----- @@ -450,7 +452,7 @@ def set_value(self, label, value): # if the label doesn't exist, we will create a new object here # and possibily change the index - new_values = values.set_value(label, value) + new_values = values.set_value(label, value, takeable=takeable) if new_values is not None: values = new_values new_index = values.index </patch>
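A minimal illustrative sketch of the behaviour the patch above targets (its release note reads "Bug in ``iat/iloc`` with duplicate indices on a Series"). The data below is hypothetical and only demonstrates the positional scalar-access path (`takeable=True` in `get_value`/`set_value`) that the diff introduces:

```python
import pandas as pd

# A Series whose index labels repeat: label-based scalar access for 'a' is
# ambiguous (it matches two rows), but positional access is not.
s = pd.Series([10, 20, 30], index=['a', 'a', 'b'])

print(s.iat[1])   # 20 -- second element by position, despite the duplicated 'a' label
s.iat[1] = 25     # positional setter; in the patch this goes through the takeable=True path
print(s.iloc[1])  # 25
```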
[]
[]
apache__airflow-15207
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Specify that exit code -9 is due to RAM Related to https://github.com/apache/airflow/issues/9655 It would be nice to add a message when you get this error with some info, like 'This probably is because a lack of RAM' or something like that. I have found the code where the -9 is assigned but have no idea how to add a logging message. self.process = None if self._rc is None: # Something else reaped it before we had a chance, so let's just "guess" at an error code. self._rc = -9 </issue> <code> [start of README.md] 1 <!-- 2 Licensed to the Apache Software Foundation (ASF) under one 3 or more contributor license agreements. See the NOTICE file 4 distributed with this work for additional information 5 regarding copyright ownership. The ASF licenses this file 6 to you under the Apache License, Version 2.0 (the 7 "License"); you may not use this file except in compliance 8 with the License. You may obtain a copy of the License at 9 10 http://www.apache.org/licenses/LICENSE-2.0 11 12 Unless required by applicable law or agreed to in writing, 13 software distributed under the License is distributed on an 14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 KIND, either express or implied. See the License for the 16 specific language governing permissions and limitations 17 under the License. 18 --> 19 20 # Apache Airflow 21 22 [![PyPI version](https://badge.fury.io/py/apache-airflow.svg)](https://badge.fury.io/py/apache-airflow) 23 [![GitHub Build](https://github.com/apache/airflow/workflows/CI%20Build/badge.svg)](https://github.com/apache/airflow/actions) 24 [![Coverage Status](https://img.shields.io/codecov/c/github/apache/airflow/master.svg)](https://codecov.io/github/apache/airflow?branch=master) 25 [![License](https://img.shields.io/:license-Apache%202-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0.txt) 26 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/apache-airflow.svg)](https://pypi.org/project/apache-airflow/) 27 [![Docker Pulls](https://img.shields.io/docker/pulls/apache/airflow.svg)](https://hub.docker.com/r/apache/airflow) 28 [![Docker Stars](https://img.shields.io/docker/stars/apache/airflow.svg)](https://hub.docker.com/r/apache/airflow) 29 [![PyPI - Downloads](https://img.shields.io/pypi/dm/apache-airflow)](https://pypi.org/project/apache-airflow/) 30 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) 31 [![Twitter Follow](https://img.shields.io/twitter/follow/ApacheAirflow.svg?style=social&label=Follow)](https://twitter.com/ApacheAirflow) 32 [![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://s.apache.org/airflow-slack) 33 34 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. 35 36 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. 37 38 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. 
39 40 <!-- START doctoc generated TOC please keep comment here to allow auto update --> 41 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> 42 **Table of contents** 43 44 - [Project Focus](#project-focus) 45 - [Principles](#principles) 46 - [Requirements](#requirements) 47 - [Support for Python versions](#support-for-python-versions) 48 - [Getting started](#getting-started) 49 - [Installing from PyPI](#installing-from-pypi) 50 - [Official source code](#official-source-code) 51 - [Convenience packages](#convenience-packages) 52 - [User Interface](#user-interface) 53 - [Contributing](#contributing) 54 - [Who uses Apache Airflow?](#who-uses-apache-airflow) 55 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow) 56 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation) 57 - [Airflow merchandise](#airflow-merchandise) 58 - [Links](#links) 59 60 <!-- END doctoc generated TOC please keep comment here to allow auto update --> 61 62 ## Project Focus 63 64 Airflow works best with workflows that are mostly static and slowly changing. When DAG structure is similar from one run to the next, it allows for clarity around unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/). 65 66 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e. results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [Xcom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#xcoms)). For high-volume, data-intensive tasks, a best practice is to delegate to external services that specialize on that type of work. 67 68 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches. 69 70 ## Principles 71 72 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically. 73 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment. 74 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine. 75 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. 76 77 ## Requirements 78 79 Apache Airflow is tested with: 80 81 | | Master version (dev) | Stable version (2.0.1) | Previous version (1.10.15) | 82 | ------------ | ------------------------- | ------------------------ | ------------------------- | 83 | Python | 3.6, 3.7, 3.8 | 3.6, 3.7, 3.8 | 2.7, 3.5, 3.6, 3.7, 3.8 | 84 | PostgreSQL | 9.6, 10, 11, 12, 13 | 9.6, 10, 11, 12, 13 | 9.6, 10, 11, 12, 13 | 85 | MySQL | 5.7, 8 | 5.7, 8 | 5.6, 5.7 | 86 | SQLite | 3.15.0+ | 3.15.0+ | 3.15.0+ | 87 | Kubernetes | 1.20, 1.19, 1.18 | 1.20, 1.19, 1.18 | 1.18, 1.17, 1.16 | 88 89 **Note:** MySQL 5.x versions are unable to or have limitations with 90 running multiple schedulers -- please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html). 91 MariaDB is not tested/recommended. 
92 93 **Note:** SQLite is used in Airflow tests. Do not use it in production. We recommend 94 using the latest stable version of SQLite for local development. 95 96 ## Support for Python versions 97 98 As of Airflow 2.0 we agreed to certain rules we follow for Python support. They are based on the official 99 release schedule of Python, nicely summarized in the 100 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) 101 102 1. We finish support for Python versions when they reach EOL (For Python 3.6 it means that we will remove it 103 from being supported on 23.12.2021). 104 105 2. The "oldest" supported version of Python is the default one. "Default" is only meaningful in terms of 106 "smoke tests" in CI PRs which are run using this default version. 107 108 3. We support a new version of Python after it is officially released, as soon as we manage to make 109 it works in our CI pipeline (which might not be immediate) and release a new version of Airflow 110 (non-Patch version) based on this CI set-up. 111 112 ### Additional notes on Python version requirements 113 114 * Previous version [requires](https://github.com/apache/airflow/issues/8162) at least Python 3.5.3 115 when using Python 3 116 117 ## Getting started 118 119 Visit the official Airflow website documentation (latest **stable** release) for help with 120 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html), 121 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start/index.html), or walking 122 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html). 123 124 > Note: If you're looking for documentation for master branch (latest development branch): you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/). 125 126 For more information on Airflow Improvement Proposals (AIPs), visit 127 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals). 128 129 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](IMAGES.rst). 130 131 ## Installing from PyPI 132 133 We publish Apache Airflow as `apache-airflow` package in PyPI. Installing it however might be sometimes tricky 134 because Airflow is a bit of both a library and application. Libraries usually keep their dependencies open and 135 applications usually pin them, but we should do neither and both at the same time. We decided to keep 136 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries 137 if needed. This means that from time to time plain `pip install apache-airflow` will not work or will 138 produce unusable Airflow installation. 139 140 In order to have repeatable installation, however, introduced in **Airflow 1.10.10** and updated in 141 **Airflow 1.10.12** we also keep a set of "known-to-be-working" constraint files in the 142 orphan `constraints-master`, `constraints-2-0` and `constraints-1-10` branches. We keep those "known-to-be-working" 143 constraints files separately per major/minor Python version. 144 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify 145 correct Airflow tag/version/branch and Python versions in the URL. 146 147 148 1. Installing just Airflow: 149 150 NOTE!!! 151 152 On November 2020, new version of PIP (20.3) has been released with a new, 2020 resolver. 
This resolver 153 might work with Apache Airflow as of 20.3.3, but it might lead to errors in installation. It might 154 depend on your choice of extras. In order to install Airflow reliably, you might need to either downgrade 155 pip to version 20.2.4 `pip install --upgrade pip==20.2.4` or, in case you use Pip 20.3, 156 you might need to add option] `--use-deprecated legacy-resolver` to your pip install command. 157 While `pip 20.3.3` solved most of the `teething` problems of 20.3, this note will remain here until we 158 set `pip 20.3` as official version in our CI pipeline where we are testing the installation as well. 159 Due to those constraints, only `pip` installation is currently officially supported. 160 161 While they are some successes with using other tools like [poetry](https://python-poetry.org) or 162 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as 163 `pip` - especially when it comes to constraint vs. requirements management. 164 Installing via `Poetry` or `pip-tools` is not currently supported. 165 166 If you wish to install airflow using those tools you should use the constraint files and convert 167 them to appropriate format and workflow that your tool requires. 168 169 170 ```bash 171 pip install apache-airflow==2.0.1 \ 172 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.0.1/constraints-3.7.txt" 173 ``` 174 175 2. Installing with extras (for example postgres,google) 176 177 ```bash 178 pip install apache-airflow[postgres,google]==2.0.1 \ 179 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.0.1/constraints-3.7.txt" 180 ``` 181 182 For information on installing provider packages check 183 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html). 184 185 ## Official source code 186 187 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project, 188 and our official source code releases: 189 190 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html) 191 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow) 192 - Are cryptographically signed by the release manager 193 - Are officially voted on by the PMC members during the 194 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval) 195 196 Following the ASF rules, the source packages released must be sufficient for a user to build and test the 197 release provided they have access to the appropriate platform and tools. 198 199 ## Convenience packages 200 201 There are other ways of installing and using Airflow. Those are "convenience" methods - they are 202 not "official releases" as stated by the `ASF Release Policy`, but they can be used by the users 203 who do not want to build the software themselves. 204 205 Those are - in the order of most common ways people install Airflow: 206 207 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool 208 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via 209 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm` etc. You can 210 read more about using, customising, and extending the images in the 211 [Latest docs](https://airflow.apache.org/docs/apache-airflow/stable/production-deployment.html), and 212 learn details on the internals in the [IMAGES.rst](IMAGES.rst) document. 
213 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that 214 were used to generate official source packages via git 215 216 All those artifacts are not official releases, but they are prepared using officially released sources. 217 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such 218 following the ASF Policy. 219 220 ## User Interface 221 222 - **DAGs**: Overview of all DAGs in your environment. 223 224 ![DAGs](/docs/apache-airflow/img/dags.png) 225 226 - **Tree View**: Tree representation of a DAG that spans across time. 227 228 ![Tree View](/docs/apache-airflow/img/tree.png) 229 230 - **Graph View**: Visualization of a DAG's dependencies and their current status for a specific run. 231 232 ![Graph View](/docs/apache-airflow/img/graph.png) 233 234 - **Task Duration**: Total time spent on different tasks over time. 235 236 ![Task Duration](/docs/apache-airflow/img/duration.png) 237 238 - **Gantt View**: Duration and overlap of a DAG. 239 240 ![Gantt View](/docs/apache-airflow/img/gantt.png) 241 242 - **Code View**: Quick way to view source code of a DAG. 243 244 ![Code View](/docs/apache-airflow/img/code.png) 245 246 247 ## Contributing 248 249 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst). 250 251 ## Who uses Apache Airflow? 252 253 More than 400 organizations are using Apache Airflow 254 [in the wild](https://github.com/apache/airflow/blob/master/INTHEWILD.md). 255 256 ## Who Maintains Apache Airflow? 257 258 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors), 259 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow) 260 are responsible for reviewing and merging PRs as well as steering conversation around new feature requests. 261 If you would like to become a maintainer, please review the Apache Airflow 262 [committer requirements](https://github.com/apache/airflow/blob/master/COMMITTERS.rst#guidelines-to-become-an-airflow-committer). 263 264 ## Can I use the Apache Airflow logo in my presentation? 265 266 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html). 267 268 ## Airflow merchandise 269 270 If you would love to have Apache Airflow stickers, t-shirt etc. then check out 271 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5). 272 273 ## Links 274 275 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/) 276 - [Chat](https://s.apache.org/airflow-slack) 277 [end of README.md] [start of airflow/providers/elasticsearch/log/es_task_handler.py] 1 # 2 # Licensed to the Apache Software Foundation (ASF) under one 3 # or more contributor license agreements. See the NOTICE file 4 # distributed with this work for additional information 5 # regarding copyright ownership. The ASF licenses this file 6 # to you under the Apache License, Version 2.0 (the 7 # "License"); you may not use this file except in compliance 8 # with the License. 
You may obtain a copy of the License at 9 # 10 # http://www.apache.org/licenses/LICENSE-2.0 11 # 12 # Unless required by applicable law or agreed to in writing, 13 # software distributed under the License is distributed on an 14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 # KIND, either express or implied. See the License for the 16 # specific language governing permissions and limitations 17 # under the License. 18 19 import logging 20 import sys 21 from collections import defaultdict 22 from datetime import datetime 23 from time import time 24 from typing import List, Optional, Tuple 25 from urllib.parse import quote 26 27 # Using `from elasticsearch import *` would break elasticsearch mocking used in unit test. 28 import elasticsearch 29 import pendulum 30 from elasticsearch_dsl import Search 31 32 from airflow.configuration import conf 33 from airflow.models import TaskInstance 34 from airflow.utils import timezone 35 from airflow.utils.helpers import parse_template_string 36 from airflow.utils.log.file_task_handler import FileTaskHandler 37 from airflow.utils.log.json_formatter import JSONFormatter 38 from airflow.utils.log.logging_mixin import LoggingMixin 39 40 # Elasticsearch hosted log type 41 EsLogMsgType = List[Tuple[str, str]] 42 43 44 class ElasticsearchTaskHandler(FileTaskHandler, LoggingMixin): 45 """ 46 ElasticsearchTaskHandler is a python log handler that 47 reads logs from Elasticsearch. Note logs are not directly 48 indexed into Elasticsearch. Instead, it flushes logs 49 into local files. Additional software setup is required 50 to index the log into Elasticsearch, such as using 51 Filebeat and Logstash. 52 To efficiently query and sort Elasticsearch results, we assume each 53 log message has a field `log_id` consists of ti primary keys: 54 `log_id = {dag_id}-{task_id}-{execution_date}-{try_number}` 55 Log messages with specific log_id are sorted based on `offset`, 56 which is a unique integer indicates log message's order. 57 Timestamp here are unreliable because multiple log messages 58 might have the same timestamp. 
59 """ 60 61 PAGE = 0 62 MAX_LINE_PER_PAGE = 1000 63 LOG_NAME = 'Elasticsearch' 64 65 def __init__( # pylint: disable=too-many-arguments 66 self, 67 base_log_folder: str, 68 filename_template: str, 69 log_id_template: str, 70 end_of_log_mark: str, 71 write_stdout: bool, 72 json_format: bool, 73 json_fields: str, 74 host: str = "localhost:9200", 75 frontend: str = "localhost:5601", 76 es_kwargs: Optional[dict] = conf.getsection("elasticsearch_configs"), 77 ): 78 """ 79 :param base_log_folder: base folder to store logs locally 80 :param log_id_template: log id template 81 :param host: Elasticsearch host name 82 """ 83 es_kwargs = es_kwargs or {} 84 super().__init__(base_log_folder, filename_template) 85 self.closed = False 86 87 self.log_id_template, self.log_id_jinja_template = parse_template_string(log_id_template) 88 89 self.client = elasticsearch.Elasticsearch([host], **es_kwargs) 90 91 self.frontend = frontend 92 self.mark_end_on_close = True 93 self.end_of_log_mark = end_of_log_mark 94 self.write_stdout = write_stdout 95 self.json_format = json_format 96 self.json_fields = [label.strip() for label in json_fields.split(",")] 97 self.handler = None 98 self.context_set = False 99 100 def _render_log_id(self, ti: TaskInstance, try_number: int) -> str: 101 if self.log_id_jinja_template: 102 jinja_context = ti.get_template_context() 103 jinja_context['try_number'] = try_number 104 return self.log_id_jinja_template.render(**jinja_context) 105 106 if self.json_format: 107 execution_date = self._clean_execution_date(ti.execution_date) 108 else: 109 execution_date = ti.execution_date.isoformat() 110 return self.log_id_template.format( 111 dag_id=ti.dag_id, task_id=ti.task_id, execution_date=execution_date, try_number=try_number 112 ) 113 114 @staticmethod 115 def _clean_execution_date(execution_date: datetime) -> str: 116 """ 117 Clean up an execution date so that it is safe to query in elasticsearch 118 by removing reserved characters. 119 # https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#_reserved_characters 120 121 :param execution_date: execution date of the dag run. 122 """ 123 return execution_date.strftime("%Y_%m_%dT%H_%M_%S_%f") 124 125 @staticmethod 126 def _group_logs_by_host(logs): 127 grouped_logs = defaultdict(list) 128 for log in logs: 129 key = getattr(log, 'host', 'default_host') 130 grouped_logs[key].append(log) 131 132 # return items sorted by timestamp. 133 result = sorted(grouped_logs.items(), key=lambda kv: getattr(kv[1][0], 'message', '_')) 134 135 return result 136 137 def _read_grouped_logs(self): 138 return True 139 140 def _read( 141 self, ti: TaskInstance, try_number: int, metadata: Optional[dict] = None 142 ) -> Tuple[EsLogMsgType, dict]: 143 """ 144 Endpoint for streaming log. 145 146 :param ti: task instance object 147 :param try_number: try_number of the task instance 148 :param metadata: log metadata, 149 can be used for steaming log reading and auto-tailing. 150 :return: a list of tuple with host and log documents, metadata. 151 """ 152 if not metadata: 153 metadata = {'offset': 0} 154 if 'offset' not in metadata: 155 metadata['offset'] = 0 156 157 offset = metadata['offset'] 158 log_id = self._render_log_id(ti, try_number) 159 160 logs = self.es_read(log_id, offset, metadata) 161 logs_by_host = self._group_logs_by_host(logs) 162 163 next_offset = offset if not logs else logs[-1].offset 164 165 # Ensure a string here. Large offset numbers will get JSON.parsed incorrectly 166 # on the client. 
Sending as a string prevents this issue. 167 # https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER 168 metadata['offset'] = str(next_offset) 169 170 # end_of_log_mark may contain characters like '\n' which is needed to 171 # have the log uploaded but will not be stored in elasticsearch. 172 loading_hosts = [ 173 item[0] for item in logs_by_host if item[-1][-1].message != self.end_of_log_mark.strip() 174 ] 175 metadata['end_of_log'] = False if not logs else len(loading_hosts) == 0 176 177 cur_ts = pendulum.now() 178 # Assume end of log after not receiving new log for 5 min, 179 # as executor heartbeat is 1 min and there might be some 180 # delay before Elasticsearch makes the log available. 181 if 'last_log_timestamp' in metadata: 182 last_log_ts = timezone.parse(metadata['last_log_timestamp']) 183 if ( 184 cur_ts.diff(last_log_ts).in_minutes() >= 5 185 or 'max_offset' in metadata 186 and int(offset) >= int(metadata['max_offset']) 187 ): 188 metadata['end_of_log'] = True 189 190 if int(offset) != int(next_offset) or 'last_log_timestamp' not in metadata: 191 metadata['last_log_timestamp'] = str(cur_ts) 192 193 # If we hit the end of the log, remove the actual end_of_log message 194 # to prevent it from showing in the UI. 195 def concat_logs(lines): 196 log_range = (len(lines) - 1) if lines[-1].message == self.end_of_log_mark.strip() else len(lines) 197 return '\n'.join([self._format_msg(lines[i]) for i in range(log_range)]) 198 199 message = [(host, concat_logs(hosted_log)) for host, hosted_log in logs_by_host] 200 201 return message, metadata 202 203 def _format_msg(self, log_line): 204 """Format ES Record to match settings.LOG_FORMAT when used with json_format""" 205 # Using formatter._style.format makes it future proof i.e. 206 # if we change the formatter style from '%' to '{' or '$', this will still work 207 if self.json_format: 208 try: 209 # pylint: disable=protected-access 210 return self.formatter._style.format(_ESJsonLogFmt(**log_line.to_dict())) 211 except Exception: # noqa pylint: disable=broad-except 212 pass 213 214 # Just a safe-guard to preserve backwards-compatibility 215 return log_line.message 216 217 def es_read(self, log_id: str, offset: str, metadata: dict) -> list: 218 """ 219 Returns the logs matching log_id in Elasticsearch and next offset. 220 Returns '' if no log is found or there was an error. 221 222 :param log_id: the log_id of the log to read. 223 :type log_id: str 224 :param offset: the offset start to read log from. 225 :type offset: str 226 :param metadata: log metadata, used for steaming log download. 227 :type metadata: dict 228 """ 229 # Offset is the unique key for sorting logs given log_id. 
230 search = Search(using=self.client).query('match_phrase', log_id=log_id).sort('offset') 231 232 search = search.filter('range', offset={'gt': int(offset)}) 233 max_log_line = search.count() 234 if 'download_logs' in metadata and metadata['download_logs'] and 'max_offset' not in metadata: 235 try: 236 if max_log_line > 0: 237 metadata['max_offset'] = search[max_log_line - 1].execute()[-1].offset 238 else: 239 metadata['max_offset'] = 0 240 except Exception: # pylint: disable=broad-except 241 self.log.exception('Could not get current log size with log_id: %s', log_id) 242 243 logs = [] 244 if max_log_line != 0: 245 try: 246 247 logs = search[self.MAX_LINE_PER_PAGE * self.PAGE : self.MAX_LINE_PER_PAGE].execute() 248 except Exception as e: # pylint: disable=broad-except 249 self.log.exception('Could not read log with log_id: %s, error: %s', log_id, str(e)) 250 251 return logs 252 253 def set_context(self, ti: TaskInstance) -> None: 254 """ 255 Provide task_instance context to airflow task handler. 256 257 :param ti: task instance object 258 """ 259 self.mark_end_on_close = not ti.raw 260 261 if self.json_format: 262 self.formatter = JSONFormatter( 263 fmt=self.formatter._fmt, # pylint: disable=protected-access 264 json_fields=self.json_fields, 265 extras={ 266 'dag_id': str(ti.dag_id), 267 'task_id': str(ti.task_id), 268 'execution_date': self._clean_execution_date(ti.execution_date), 269 'try_number': str(ti.try_number), 270 'log_id': self._render_log_id(ti, ti.try_number), 271 'offset': int(time() * (10 ** 9)), 272 }, 273 ) 274 275 if self.write_stdout: 276 if self.context_set: 277 # We don't want to re-set up the handler if this logger has 278 # already been initialized 279 return 280 281 self.handler = logging.StreamHandler(stream=sys.__stdout__) # type: ignore 282 self.handler.setLevel(self.level) # type: ignore 283 self.handler.setFormatter(self.formatter) # type: ignore 284 else: 285 super().set_context(ti) 286 self.context_set = True 287 288 def close(self) -> None: 289 # When application exit, system shuts down all handlers by 290 # calling close method. Here we check if logger is already 291 # closed to prevent uploading the log to remote storage multiple 292 # times when `logging.shutdown` is called. 293 if self.closed: 294 return 295 296 if not self.mark_end_on_close: 297 self.closed = True 298 return 299 300 # Case which context of the handler was not set. 301 if self.handler is None: 302 self.closed = True 303 return 304 305 # Reopen the file stream, because FileHandler.close() would be called 306 # first in logging.shutdown() and the stream in it would be set to None. 307 if self.handler.stream is None or self.handler.stream.closed: 308 self.handler.stream = self.handler._open() # pylint: disable=protected-access 309 310 # Mark the end of file using end of log mark, 311 # so we know where to stop while auto-tailing. 312 self.handler.stream.write(self.end_of_log_mark) 313 314 if self.write_stdout: 315 self.handler.close() 316 sys.stdout = sys.__stdout__ 317 318 super().close() 319 320 self.closed = True 321 322 @property 323 def log_name(self) -> str: 324 """The log name""" 325 return self.LOG_NAME 326 327 def get_external_log_url(self, task_instance: TaskInstance, try_number: int) -> str: 328 """ 329 Creates an address for an external log collecting service. 330 331 :param task_instance: task instance object 332 :type: task_instance: TaskInstance 333 :param try_number: task instance try_number to read logs from. 
334 :type try_number: Optional[int] 335 :return: URL to the external log collection service 336 :rtype: str 337 """ 338 log_id = self.log_id_template.format( 339 dag_id=task_instance.dag_id, 340 task_id=task_instance.task_id, 341 execution_date=task_instance.execution_date, 342 try_number=try_number, 343 ) 344 url = 'https://' + self.frontend.format(log_id=quote(log_id)) 345 return url 346 347 348 class _ESJsonLogFmt: 349 """Helper class to read ES Logs and re-format it to match settings.LOG_FORMAT""" 350 351 # A separate class is needed because 'self.formatter._style.format' uses '.__dict__' 352 def __init__(self, **kwargs): 353 self.__dict__.update(kwargs) 354 [end of airflow/providers/elasticsearch/log/es_task_handler.py] [start of airflow/task/task_runner/standard_task_runner.py] 1 # 2 # Licensed to the Apache Software Foundation (ASF) under one 3 # or more contributor license agreements. See the NOTICE file 4 # distributed with this work for additional information 5 # regarding copyright ownership. The ASF licenses this file 6 # to you under the Apache License, Version 2.0 (the 7 # "License"); you may not use this file except in compliance 8 # with the License. You may obtain a copy of the License at 9 # 10 # http://www.apache.org/licenses/LICENSE-2.0 11 # 12 # Unless required by applicable law or agreed to in writing, 13 # software distributed under the License is distributed on an 14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 # KIND, either express or implied. See the License for the 16 # specific language governing permissions and limitations 17 # under the License. 18 """Standard task runner""" 19 import logging 20 import os 21 from typing import Optional 22 23 import psutil 24 from setproctitle import setproctitle # pylint: disable=no-name-in-module 25 26 from airflow.settings import CAN_FORK 27 from airflow.task.task_runner.base_task_runner import BaseTaskRunner 28 from airflow.utils.process_utils import reap_process_group 29 30 31 class StandardTaskRunner(BaseTaskRunner): 32 """Standard runner for all tasks.""" 33 34 def __init__(self, local_task_job): 35 super().__init__(local_task_job) 36 self._rc = None 37 self.dag = local_task_job.task_instance.task.dag 38 39 def start(self): 40 if CAN_FORK and not self.run_as_user: 41 self.process = self._start_by_fork() 42 else: 43 self.process = self._start_by_exec() 44 45 def _start_by_exec(self): 46 subprocess = self.run_command() 47 return psutil.Process(subprocess.pid) 48 49 def _start_by_fork(self): # pylint: disable=inconsistent-return-statements 50 pid = os.fork() 51 if pid: 52 self.log.info("Started process %d to run task", pid) 53 return psutil.Process(pid) 54 else: 55 import signal 56 57 from airflow import settings 58 from airflow.cli.cli_parser import get_parser 59 from airflow.sentry import Sentry 60 61 signal.signal(signal.SIGINT, signal.SIG_DFL) 62 signal.signal(signal.SIGTERM, signal.SIG_DFL) 63 # Start a new process group 64 os.setpgid(0, 0) 65 66 # Force a new SQLAlchemy session. We can't share open DB handles 67 # between process. 
The cli code will re-create this as part of its 68 # normal startup 69 settings.engine.pool.dispose() 70 settings.engine.dispose() 71 72 parser = get_parser() 73 # [1:] - remove "airflow" from the start of the command 74 args = parser.parse_args(self._command[1:]) 75 76 self.log.info('Running: %s', self._command) 77 self.log.info('Job %s: Subtask %s', self._task_instance.job_id, self._task_instance.task_id) 78 79 proc_title = "airflow task runner: {0.dag_id} {0.task_id} {0.execution_date}" 80 if hasattr(args, "job_id"): 81 proc_title += " {0.job_id}" 82 setproctitle(proc_title.format(args)) 83 84 try: 85 args.func(args, dag=self.dag) 86 return_code = 0 87 except Exception: # pylint: disable=broad-except 88 return_code = 1 89 finally: 90 # Explicitly flush any pending exception to Sentry if enabled 91 Sentry.flush() 92 logging.shutdown() 93 os._exit(return_code) # pylint: disable=protected-access 94 95 def return_code(self, timeout: int = 0) -> Optional[int]: 96 # We call this multiple times, but we can only wait on the process once 97 if self._rc is not None or not self.process: 98 return self._rc 99 100 try: 101 self._rc = self.process.wait(timeout=timeout) 102 self.process = None 103 except psutil.TimeoutExpired: 104 pass 105 106 return self._rc 107 108 def terminate(self): 109 if self.process is None: 110 return 111 112 # Reap the child process - it may already be finished 113 _ = self.return_code(timeout=0) 114 115 if self.process and self.process.is_running(): 116 rcs = reap_process_group(self.process.pid, self.log) 117 self._rc = rcs.get(self.process.pid) 118 119 self.process = None 120 121 if self._rc is None: 122 # Something else reaped it before we had a chance, so let's just "guess" at an error code. 123 self._rc = -9 124 [end of airflow/task/task_runner/standard_task_runner.py] [start of airflow/utils/dates.py] 1 # 2 # Licensed to the Apache Software Foundation (ASF) under one 3 # or more contributor license agreements. See the NOTICE file 4 # distributed with this work for additional information 5 # regarding copyright ownership. The ASF licenses this file 6 # to you under the Apache License, Version 2.0 (the 7 # "License"); you may not use this file except in compliance 8 # with the License. You may obtain a copy of the License at 9 # 10 # http://www.apache.org/licenses/LICENSE-2.0 11 # 12 # Unless required by applicable law or agreed to in writing, 13 # software distributed under the License is distributed on an 14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 # KIND, either express or implied. See the License for the 16 # specific language governing permissions and limitations 17 # under the License. 
18 19 from datetime import datetime, timedelta 20 from typing import Dict, List, Optional, Union 21 22 from croniter import croniter 23 from dateutil.relativedelta import relativedelta # noqa: F401 for doctest 24 25 from airflow.utils import timezone 26 27 cron_presets: Dict[str, str] = { 28 '@hourly': '0 * * * *', 29 '@daily': '0 0 * * *', 30 '@weekly': '0 0 * * 0', 31 '@monthly': '0 0 1 * *', 32 '@quarterly': '0 0 1 */3 *', 33 '@yearly': '0 0 1 1 *', 34 } 35 36 37 # pylint: disable=too-many-branches 38 def date_range( 39 start_date: datetime, 40 end_date: Optional[datetime] = None, 41 num: Optional[int] = None, 42 delta: Optional[Union[str, timedelta, relativedelta]] = None, 43 ) -> List[datetime]: 44 """ 45 Get a set of dates as a list based on a start, end and delta, delta 46 can be something that can be added to `datetime.datetime` 47 or a cron expression as a `str` 48 49 .. code-block:: python 50 51 date_range(datetime(2016, 1, 1), datetime(2016, 1, 3), delta=timedelta(1)) 52 [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 1, 2, 0, 0), 53 datetime.datetime(2016, 1, 3, 0, 0)] 54 date_range(datetime(2016, 1, 1), datetime(2016, 1, 3), delta='0 0 * * *') 55 [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 1, 2, 0, 0), 56 datetime.datetime(2016, 1, 3, 0, 0)] 57 date_range(datetime(2016, 1, 1), datetime(2016, 3, 3), delta="0 0 0 * *") 58 [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 2, 1, 0, 0), 59 datetime.datetime(2016, 3, 1, 0, 0)] 60 61 :param start_date: anchor date to start the series from 62 :type start_date: datetime.datetime 63 :param end_date: right boundary for the date range 64 :type end_date: datetime.datetime 65 :param num: alternatively to end_date, you can specify the number of 66 number of entries you want in the range. This number can be negative, 67 output will always be sorted regardless 68 :type num: int 69 :param delta: step length. It can be datetime.timedelta or cron expression as string 70 :type delta: datetime.timedelta or str or dateutil.relativedelta 71 """ 72 if not delta: 73 return [] 74 if end_date: 75 if start_date > end_date: 76 raise Exception("Wait. start_date needs to be before end_date") 77 if num: 78 raise Exception("Wait. Either specify end_date OR num") 79 if not end_date and not num: 80 end_date = timezone.utcnow() 81 82 delta_iscron = False 83 time_zone = start_date.tzinfo 84 85 abs_delta: Union[timedelta, relativedelta] 86 if isinstance(delta, str): 87 delta_iscron = True 88 if timezone.is_localized(start_date): 89 start_date = timezone.make_naive(start_date, time_zone) 90 cron = croniter(cron_presets.get(delta, delta), start_date) 91 elif isinstance(delta, timedelta): 92 abs_delta = abs(delta) 93 elif isinstance(delta, relativedelta): 94 abs_delta = abs(delta) 95 else: 96 raise Exception("Wait. 
delta must be either datetime.timedelta or cron expression as str") 97 98 dates = [] 99 if end_date: 100 if timezone.is_naive(start_date) and not timezone.is_naive(end_date): 101 end_date = timezone.make_naive(end_date, time_zone) 102 while start_date <= end_date: # type: ignore 103 if timezone.is_naive(start_date): 104 dates.append(timezone.make_aware(start_date, time_zone)) 105 else: 106 dates.append(start_date) 107 108 if delta_iscron: 109 start_date = cron.get_next(datetime) 110 else: 111 start_date += abs_delta 112 else: 113 num_entries: int = num # type: ignore 114 for _ in range(abs(num_entries)): 115 if timezone.is_naive(start_date): 116 dates.append(timezone.make_aware(start_date, time_zone)) 117 else: 118 dates.append(start_date) 119 120 if delta_iscron and num_entries > 0: 121 start_date = cron.get_next(datetime) 122 elif delta_iscron: 123 start_date = cron.get_prev(datetime) 124 elif num_entries > 0: 125 start_date += abs_delta 126 else: 127 start_date -= abs_delta 128 129 return sorted(dates) 130 131 132 def round_time(dt, delta, start_date=timezone.make_aware(datetime.min)): 133 """ 134 Returns the datetime of the form start_date + i * delta 135 which is closest to dt for any non-negative integer i. 136 Note that delta may be a datetime.timedelta or a dateutil.relativedelta 137 >>> round_time(datetime(2015, 1, 1, 6), timedelta(days=1)) 138 datetime.datetime(2015, 1, 1, 0, 0) 139 >>> round_time(datetime(2015, 1, 2), relativedelta(months=1)) 140 datetime.datetime(2015, 1, 1, 0, 0) 141 >>> round_time(datetime(2015, 9, 16, 0, 0), timedelta(1), datetime(2015, 9, 14, 0, 0)) 142 datetime.datetime(2015, 9, 16, 0, 0) 143 >>> round_time(datetime(2015, 9, 15, 0, 0), timedelta(1), datetime(2015, 9, 14, 0, 0)) 144 datetime.datetime(2015, 9, 15, 0, 0) 145 >>> round_time(datetime(2015, 9, 14, 0, 0), timedelta(1), datetime(2015, 9, 14, 0, 0)) 146 datetime.datetime(2015, 9, 14, 0, 0) 147 >>> round_time(datetime(2015, 9, 13, 0, 0), timedelta(1), datetime(2015, 9, 14, 0, 0)) 148 datetime.datetime(2015, 9, 14, 0, 0) 149 """ 150 if isinstance(delta, str): 151 # It's cron based, so it's easy 152 time_zone = start_date.tzinfo 153 start_date = timezone.make_naive(start_date, time_zone) 154 cron = croniter(delta, start_date) 155 prev = cron.get_prev(datetime) 156 if prev == start_date: 157 return timezone.make_aware(start_date, time_zone) 158 else: 159 return timezone.make_aware(prev, time_zone) 160 161 # Ignore the microseconds of dt 162 dt -= timedelta(microseconds=dt.microsecond) 163 164 # We are looking for a datetime in the form start_date + i * delta 165 # which is as close as possible to dt. Since delta could be a relative 166 # delta we don't know its exact length in seconds so we cannot rely on 167 # division to find i. Instead we employ a binary search algorithm, first 168 # finding an upper and lower limit and then dissecting the interval until 169 # we have found the closest match. 170 171 # We first search an upper limit for i for which start_date + upper * delta 172 # exceeds dt. 
173 upper = 1 174 while start_date + upper * delta < dt: 175 # To speed up finding an upper limit we grow this exponentially by a 176 # factor of 2 177 upper *= 2 178 179 # Since upper is the first value for which start_date + upper * delta 180 # exceeds dt, upper // 2 is below dt and therefore forms a lower limited 181 # for the i we are looking for 182 lower = upper // 2 183 184 # We now continue to intersect the interval between 185 # start_date + lower * delta and start_date + upper * delta 186 # until we find the closest value 187 while True: 188 # Invariant: start + lower * delta < dt <= start + upper * delta 189 # If start_date + (lower + 1)*delta exceeds dt, then either lower or 190 # lower+1 has to be the solution we are searching for 191 if start_date + (lower + 1) * delta >= dt: 192 # Check if start_date + (lower + 1)*delta or 193 # start_date + lower*delta is closer to dt and return the solution 194 if (start_date + (lower + 1) * delta) - dt <= dt - (start_date + lower * delta): 195 return start_date + (lower + 1) * delta 196 else: 197 return start_date + lower * delta 198 199 # We intersect the interval and either replace the lower or upper 200 # limit with the candidate 201 candidate = lower + (upper - lower) // 2 202 if start_date + candidate * delta >= dt: 203 upper = candidate 204 else: 205 lower = candidate 206 207 # in the special case when start_date > dt the search for upper will 208 # immediately stop for upper == 1 which results in lower = upper // 2 = 0 209 # and this function returns start_date. 210 211 212 def infer_time_unit(time_seconds_arr): 213 """ 214 Determine the most appropriate time unit for an array of time durations 215 specified in seconds. 216 e.g. 5400 seconds => 'minutes', 36000 seconds => 'hours' 217 """ 218 if len(time_seconds_arr) == 0: 219 return 'hours' 220 max_time_seconds = max(time_seconds_arr) 221 if max_time_seconds <= 60 * 2: 222 return 'seconds' 223 elif max_time_seconds <= 60 * 60 * 2: 224 return 'minutes' 225 elif max_time_seconds <= 24 * 60 * 60 * 2: 226 return 'hours' 227 else: 228 return 'days' 229 230 231 def scale_time_units(time_seconds_arr, unit): 232 """Convert an array of time durations in seconds to the specified time unit.""" 233 if unit == 'minutes': 234 return list(map(lambda x: x / 60, time_seconds_arr)) 235 elif unit == 'hours': 236 return list(map(lambda x: x / (60 * 60), time_seconds_arr)) 237 elif unit == 'days': 238 return list(map(lambda x: x / (24 * 60 * 60), time_seconds_arr)) 239 return time_seconds_arr 240 241 242 def days_ago(n, hour=0, minute=0, second=0, microsecond=0): 243 """ 244 Get a datetime object representing `n` days ago. By default the time is 245 set to midnight. 246 """ 247 today = timezone.utcnow().replace(hour=hour, minute=minute, second=second, microsecond=microsecond) 248 return today - timedelta(days=n) 249 250 251 def parse_execution_date(execution_date_str): 252 """Parse execution date string to datetime object.""" 253 return timezone.parse(execution_date_str) 254 [end of airflow/utils/dates.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
 def euclidean(a, b):
-    while b:
-        a, b = b, a % b
-    return a
+    if b == 0:
+        return a
+    return euclidean(b, a % b)
 
 
 def bresenham(x0, y0, x1, y1):
     points = []
     dx = abs(x1 - x0)
     dy = abs(y1 - y0)
-    sx = 1 if x0 < x1 else -1
-    sy = 1 if y0 < y1 else -1
-    err = dx - dy
+    x, y = x0, y0
+    sx = -1 if x0 > x1 else 1
+    sy = -1 if y0 > y1 else 1
 
-    while True:
-        points.append((x0, y0))
-        if x0 == x1 and y0 == y1:
-            break
-        e2 = 2 * err
-        if e2 > -dy:
+    if dx > dy:
+        err = dx / 2.0
+        while x != x1:
+            points.append((x, y))
             err -= dy
-            x0 += sx
-        if e2 < dx:
-            err += dx
-            y0 += sy
+            if err < 0:
+                y += sy
+                err += dx
+            x += sx
+    else:
+        err = dy / 2.0
+        while y != y1:
+            points.append((x, y))
+            err -= dx
+            if err < 0:
+                x += sx
+                err += dy
+            y += sy
 
+    points.append((x, y))
     return points
</patch>
apache/airflow
d89bcad26445c8926093680aac84d969ac34b54c
Specify that exit code -9 is due to RAM

Related to https://github.com/apache/airflow/issues/9655

It would be nice to add a message when you get this error with some info, like 'This probably is because a lack of RAM' or something like that. I have found the code where the -9 is assigned but have no idea how to add a logging message.

    self.process = None

    if self._rc is None:
        # Something else reaped it before we had a chance, so let's just "guess" at an error code.
        self._rc = -9
2021-04-05T16:20:16Z
<patch>
diff --git a/airflow/task/task_runner/standard_task_runner.py b/airflow/task/task_runner/standard_task_runner.py
--- a/airflow/task/task_runner/standard_task_runner.py
+++ b/airflow/task/task_runner/standard_task_runner.py
@@ -121,3 +121,11 @@ def terminate(self):
         if self._rc is None:
             # Something else reaped it before we had a chance, so let's just "guess" at an error code.
             self._rc = -9
+
+        if self._rc == -9:
+            # If either we or psutil gives out a -9 return code, it likely means
+            # an OOM happened
+            self.log.error(
+                'Job %s was killed before it finished (likely due to running out of memory)',
+                self._task_instance.job_id,
+            )
</patch>
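Background for the hint in this patch: psutil and subprocess report a child that was killed by a signal as the negative signal number, and signal 9 is SIGKILL, which is what the Linux kernel OOM killer sends. That is why a return code of -9 is a strong (though not certain) indicator of running out of memory, and why the merged change simply checks for -9 and logs at error level. The snippet below is a standalone sketch, not Airflow code; the helper name is hypothetical and only shows how a negative return code can be translated into a readable hint:

```python
import signal

def describe_return_code(rc: int) -> str:
    """Hypothetical helper: turn a (possibly negative) return code into a readable hint."""
    if rc is not None and rc < 0:
        try:
            name = signal.Signals(-rc).name  # e.g. -9 -> 'SIGKILL'
        except ValueError:
            # Signal number not defined on this platform (e.g. SIGKILL on Windows)
            name = f'signal {-rc}'
        hint = ' -- likely the kernel OOM killer' if rc == -9 else ''
        return f'task process was terminated by {name}{hint}'
    return f'task process exited with return code {rc}'

print(describe_return_code(-9))
# task process was terminated by SIGKILL -- likely the kernel OOM killer
```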
[]
[]
Qiskit__qiskit-5760
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> ParametereExpression.__eq__ doesn't gracefully convert numeric types <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. --> ### What is the expected enhancement? `__eq__` in python conventionally considers objects equal if they are semantically equivalent, even if their types differ. ``` >>> 2 == 2 True >>> 2.0 == 2.0 True >>> 2 == 2.0 True ``` Same for Sympy: ``` >>> import sympy as sp >>> sp.Integer(2) == sp.Integer(2) True >>> sp.Float('2.0', precision=53) == sp.Float(2.0, precision=54) True >>> sp.Integer(2) == sp.Float('2.0', precision=53) True ``` But `ParameterExpression.__eq__` breaks this convention. ``` >>> import qiskit as qk >>> theta = qk.circuit.Parameter('theta') >>> theta * 2 == theta * 2 True >>> theta * 2.0 == theta * 2.0 True >>> theta * 2 == theta * 2.0 False ``` Currently, at https://github.com/Qiskit/qiskit-terra/blob/44462a8b13ea6c2cce0f9c7345c26c15fb0d4ce3/qiskit/circuit/parameterexpression.py#L389 , we compare `srepr(self._symbol_expr) == srepr(other._symbol_expr)` as strings. (`srepr` of the examples above would look like `Mul(Integer(2), Symbol('th'))` for `theta * 2` and `Mul(Float('2.0', precision=53), Symbol('th'))` for `theta * 2.0`. Instead, we should walk the sympy expression tree (following https://docs.sympy.org/latest/tutorial/manipulation.html#recursing-through-an-expression-tree ) and rely on sympy's built in `__eq__`. (The original motivation for using `srepr` instead of directly comparing `._symbol_expr` isn't clear from the current code, so that may be another option worth investigating.) </issue> <code> [start of README.md] 1 # Qiskit Terra 2 3 [![License](https://img.shields.io/github/license/Qiskit/qiskit-terra.svg?style=popout-square)](https://opensource.org/licenses/Apache-2.0)[![Build Status](https://img.shields.io/travis/com/Qiskit/qiskit-terra/master.svg?style=popout-square)](https://travis-ci.com/Qiskit/qiskit-terra)[![](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases)[![](https://img.shields.io/pypi/dm/qiskit-terra.svg?style=popout-square)](https://pypi.org/project/qiskit-terra/)[![Coverage Status](https://coveralls.io/repos/github/Qiskit/qiskit-terra/badge.svg?branch=master)](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master) 4 5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms. 6 7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built. 8 9 ## Installation 10 11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra. 12 13 ```bash 14 pip install qiskit 15 ``` 16 17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version. 18 19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source). 20 21 ## Creating Your First Quantum Program in Qiskit Terra 22 23 Now that Qiskit is installed, it's time to begin working with Terra. 
24 25 We are ready to try out a quantum circuit example, which is simulated locally using 26 the Qiskit BasicAer element. This is a simple example that makes an entangled state. 27 28 ``` 29 $ python 30 ``` 31 32 ```python 33 >>> from qiskit import * 34 >>> qc = QuantumCircuit(2, 2) 35 >>> qc.h(0) 36 >>> qc.cx(0, 1) 37 >>> qc.measure([0,1], [0,1]) 38 >>> backend_sim = BasicAer.get_backend('qasm_simulator') 39 >>> transpiled_qc = transpile(qc, backend_sim) 40 >>> result = backend_sim.run(assemble(transpiled_qc)).result() 41 >>> print(result.get_counts(qc)) 42 ``` 43 44 In this case, the output will be: 45 46 ```python 47 {'00': 513, '11': 511} 48 ``` 49 50 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to 51 run the same program on a real quantum computer via IBMQ. 52 53 ### Executing your code on a real quantum chip 54 55 You can also use Qiskit to execute your code on a 56 **real quantum chip**. 57 In order to do so, you need to configure Qiskit for using the credentials in 58 your IBM Q account: 59 60 #### Configure your IBMQ credentials 61 62 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so. 63 64 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account. 65 66 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run: 67 68 ```python 69 >>> from qiskit import IBMQ 70 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL') 71 ``` 72 73 After calling `IBMQ.save_account()`, your credentials will be stored on disk. 74 Once they are stored, at any point in the future you can load and use them 75 in your program simply via: 76 77 ```python 78 >>> from qiskit import IBMQ 79 >>> IBMQ.load_account() 80 ``` 81 82 Those who do not want to save their credentials to disk should use instead: 83 84 ```python 85 >>> from qiskit import IBMQ 86 >>> IBMQ.enable_account('MY_API_TOKEN') 87 ``` 88 89 and the token will only be active for the session. For examples using Terra with real 90 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in 91 the levels. 92 93 ## Contribution Guidelines 94 95 If you'd like to contribute to Qiskit Terra, please take a look at our 96 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. 97 98 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please 99 [join the Qiskit Slack community](https://ibm.co/joinqiskitslack) 100 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions. 101 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit). 102 103 ## Next Steps 104 105 Now you're set up and ready to check out some of the other examples from our 106 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository. 107 108 ## Authors and Citation 109 110 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 111 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib). 
112 113 ## Changelog and Release Notes 114 115 The changelog for a particular release is dynamically generated and gets 116 written to the release page on Github for each release. For example, you can 117 find the page for the `0.9.0` release here: 118 119 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0 120 121 The changelog for the current release can be found in the releases tab: 122 ![](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square) 123 The changelog provides a quick overview of notable changes for a given 124 release. 125 126 Additionally, as part of each release detailed release notes are written to 127 document in detail what has changed as part of a release. This includes any 128 documentation on potential breaking changes on upgrade and new features. 129 For example, You can find the release notes for the `0.9.0` release in the 130 Qiskit documentation here: 131 132 https://qiskit.org/documentation/release_notes.html#terra-0-9 133 134 ## License 135 136 [Apache License 2.0](LICENSE.txt) 137 [end of README.md] [start of qiskit/algorithms/amplitude_estimators/iae.py] 1 2 # This code is part of Qiskit. 3 # 4 # (C) Copyright IBM 2018, 2020. 5 # 6 # This code is licensed under the Apache License, Version 2.0. You may 7 # obtain a copy of this license in the LICENSE.txt file in the root directory 8 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 9 # 10 # Any modifications or derivative works of this code must retain this 11 # copyright notice, and modified files need to carry a notice indicating 12 # that they have been altered from the originals. 13 14 """The Iterative Quantum Amplitude Estimation Algorithm.""" 15 16 from typing import Optional, Union, List, Tuple, Dict, cast 17 import numpy as np 18 from scipy.stats import beta 19 20 from qiskit import ClassicalRegister, QuantumCircuit 21 from qiskit.providers import BaseBackend, Backend 22 from qiskit.utils import QuantumInstance 23 24 from .amplitude_estimator import AmplitudeEstimator, AmplitudeEstimatorResult 25 from .estimation_problem import EstimationProblem 26 from ..exceptions import AlgorithmError 27 28 29 class IterativeAmplitudeEstimation(AmplitudeEstimator): 30 r"""The Iterative Amplitude Estimation algorithm. 31 32 This class implements the Iterative Quantum Amplitude Estimation (IQAE) algorithm, proposed 33 in [1]. The output of the algorithm is an estimate that, 34 with at least probability :math:`1 - \alpha`, differs by epsilon to the target value, where 35 both alpha and epsilon can be specified. 36 37 It differs from the original QAE algorithm proposed by Brassard [2] in that it does not rely on 38 Quantum Phase Estimation, but is only based on Grover's algorithm. IQAE iteratively 39 applies carefully selected Grover iterations to find an estimate for the target amplitude. 40 41 References: 42 [1]: Grinko, D., Gacon, J., Zoufal, C., & Woerner, S. (2019). 43 Iterative Quantum Amplitude Estimation. 44 `arXiv:1912.05559 <https://arxiv.org/abs/1912.05559>`_. 45 [2]: Brassard, G., Hoyer, P., Mosca, M., & Tapp, A. (2000). 46 Quantum Amplitude Amplification and Estimation. 47 `arXiv:quant-ph/0005055 <http://arxiv.org/abs/quant-ph/0005055>`_. 
48 """ 49 50 def __init__(self, 51 epsilon_target: float, 52 alpha: float, 53 confint_method: str = 'beta', 54 min_ratio: float = 2, 55 quantum_instance: Optional[Union[QuantumInstance, BaseBackend, Backend]] = None 56 ) -> None: 57 r""" 58 The output of the algorithm is an estimate for the amplitude `a`, that with at least 59 probability 1 - alpha has an error of epsilon. The number of A operator calls scales 60 linearly in 1/epsilon (up to a logarithmic factor). 61 62 Args: 63 epsilon_target: Target precision for estimation target `a`, has values between 0 and 0.5 64 alpha: Confidence level, the target probability is 1 - alpha, has values between 0 and 1 65 confint_method: Statistical method used to estimate the confidence intervals in 66 each iteration, can be 'chernoff' for the Chernoff intervals or 'beta' for the 67 Clopper-Pearson intervals (default) 68 min_ratio: Minimal q-ratio (:math:`K_{i+1} / K_i`) for FindNextK 69 quantum_instance: Quantum Instance or Backend 70 71 Raises: 72 AlgorithmError: if the method to compute the confidence intervals is not supported 73 ValueError: If the target epsilon is not in (0, 0.5] 74 ValueError: If alpha is not in (0, 1) 75 ValueError: If confint_method is not supported 76 """ 77 # validate ranges of input arguments 78 if not 0 < epsilon_target <= 0.5: 79 raise ValueError(f'The target epsilon must be in (0, 0.5], but is {epsilon_target}.') 80 81 if not 0 < alpha < 1: 82 raise ValueError(f'The confidence level alpha must be in (0, 1), but is {alpha}') 83 84 if confint_method not in {'chernoff', 'beta'}: 85 raise ValueError('The confidence interval method must be chernoff or beta, but ' 86 f'is {confint_method}.') 87 88 super().__init__() 89 90 # set quantum instance 91 self.quantum_instance = quantum_instance 92 93 # store parameters 94 self._epsilon = epsilon_target 95 self._alpha = alpha 96 self._min_ratio = min_ratio 97 self._confint_method = confint_method 98 99 @property 100 def quantum_instance(self) -> Optional[QuantumInstance]: 101 """Get the quantum instance. 102 103 Returns: 104 The quantum instance used to run this algorithm. 105 """ 106 return self._quantum_instance 107 108 @quantum_instance.setter 109 def quantum_instance(self, quantum_instance: Union[QuantumInstance, 110 BaseBackend, Backend]) -> None: 111 """Set quantum instance. 112 113 Args: 114 quantum_instance: The quantum instance used to run this algorithm. 115 """ 116 if isinstance(quantum_instance, (BaseBackend, Backend)): 117 quantum_instance = QuantumInstance(quantum_instance) 118 self._quantum_instance = quantum_instance 119 120 @property 121 def epsilon_target(self) -> float: 122 """Returns the target precision ``epsilon_target`` of the algorithm. 123 124 Returns: 125 The target precision (which is half the width of the confidence interval). 126 """ 127 return self._epsilon 128 129 @epsilon_target.setter 130 def epsilon_target(self, epsilon: float) -> None: 131 """Set the target precision of the algorithm. 132 133 Args: 134 epsilon: Target precision for estimation target `a`. 135 """ 136 self._epsilon = epsilon 137 138 def _find_next_k(self, k: int, upper_half_circle: bool, theta_interval: Tuple[float, float], 139 min_ratio: float = 2.0) -> Tuple[int, bool]: 140 """Find the largest integer k_next, such that the interval (4 * k_next + 2)*theta_interval 141 lies completely in [0, pi] or [pi, 2pi], for theta_interval = (theta_lower, theta_upper). 142 143 Args: 144 k: The current power of the Q operator. 
145 upper_half_circle: Boolean flag of whether theta_interval lies in the 146 upper half-circle [0, pi] or in the lower one [pi, 2pi]. 147 theta_interval: The current confidence interval for the angle theta, 148 i.e. (theta_lower, theta_upper). 149 min_ratio: Minimal ratio K/K_next allowed in the algorithm. 150 151 Returns: 152 The next power k, and boolean flag for the extrapolated interval. 153 154 Raises: 155 AlgorithmError: if min_ratio is smaller or equal to 1 156 """ 157 if min_ratio <= 1: 158 raise AlgorithmError('min_ratio must be larger than 1 to ensure convergence') 159 160 # initialize variables 161 theta_l, theta_u = theta_interval 162 old_scaling = 4 * k + 2 # current scaling factor, called K := (4k + 2) 163 164 # the largest feasible scaling factor K cannot be larger than K_max, 165 # which is bounded by the length of the current confidence interval 166 max_scaling = int(1 / (2 * (theta_u - theta_l))) 167 scaling = max_scaling - (max_scaling - 2) % 4 # bring into the form 4 * k_max + 2 168 169 # find the largest feasible scaling factor K_next, and thus k_next 170 while scaling >= min_ratio * old_scaling: 171 theta_min = scaling * theta_l - int(scaling * theta_l) 172 theta_max = scaling * theta_u - int(scaling * theta_u) 173 174 if theta_min <= theta_max <= 0.5 and theta_min <= 0.5: 175 # the extrapolated theta interval is in the upper half-circle 176 upper_half_circle = True 177 return int((scaling - 2) / 4), upper_half_circle 178 179 elif theta_max >= 0.5 and theta_max >= theta_min >= 0.5: 180 # the extrapolated theta interval is in the upper half-circle 181 upper_half_circle = False 182 return int((scaling - 2) / 4), upper_half_circle 183 184 scaling -= 4 185 186 # if we do not find a feasible k, return the old one 187 return int(k), upper_half_circle 188 189 def construct_circuit(self, estimation_problem: EstimationProblem, 190 k: int = 0, measurement: bool = False) -> QuantumCircuit: 191 r"""Construct the circuit :math:`\mathcal{Q}^k \mathcal{A} |0\rangle`. 192 193 The A operator is the unitary specifying the QAE problem and Q the associated Grover 194 operator. 195 196 Args: 197 estimation_problem: The estimation problem for which to construct the QAE circuit. 198 k: The power of the Q operator. 199 measurement: Boolean flag to indicate if measurements should be included in the 200 circuits. 201 202 Returns: 203 The circuit implementing :math:`\mathcal{Q}^k \mathcal{A} |0\rangle`. 
204 """ 205 num_qubits = max(estimation_problem.state_preparation.num_qubits, 206 estimation_problem.grover_operator.num_qubits) 207 circuit = QuantumCircuit(num_qubits, name='circuit') 208 209 # add classical register if needed 210 if measurement: 211 c = ClassicalRegister(len(estimation_problem.objective_qubits)) 212 circuit.add_register(c) 213 214 # add A operator 215 circuit.compose(estimation_problem.state_preparation, inplace=True) 216 217 # add Q^k 218 if k != 0: 219 circuit.compose(estimation_problem.grover_operator.power(k), inplace=True) 220 221 # add optional measurement 222 if measurement: 223 # real hardware can currently not handle operations after measurements, which might 224 # happen if the circuit gets transpiled, hence we're adding a safeguard-barrier 225 circuit.barrier() 226 circuit.measure(estimation_problem.objective_qubits, c[:]) 227 228 return circuit 229 230 def _good_state_probability(self, 231 problem: EstimationProblem, 232 counts_or_statevector: Union[Dict[str, int], np.ndarray], 233 num_state_qubits: int, 234 ) -> Union[Tuple[int, float], float]: 235 """Get the probability to measure '1' in the last qubit. 236 237 Args: 238 problem: The estimation problem, used to obtain the number of objective qubits and 239 the ``is_good_state`` function. 240 counts_or_statevector: Either a counts-dictionary (with one measured qubit only!) or 241 the statevector returned from the statevector_simulator. 242 num_state_qubits: The number of state qubits. 243 244 Returns: 245 If a dict is given, return (#one-counts, #one-counts/#all-counts), 246 otherwise Pr(measure '1' in the last qubit). 247 """ 248 if isinstance(counts_or_statevector, dict): 249 one_counts = 0 250 for state, counts in counts_or_statevector.items(): 251 if problem.is_good_state(state): 252 one_counts += counts 253 254 return int(one_counts), one_counts / sum(counts_or_statevector.values()) 255 else: 256 statevector = counts_or_statevector 257 num_qubits = int(np.log2(len(statevector))) # the total number of qubits 258 259 # sum over all amplitudes where the objective qubit is 1 260 prob = 0 261 for i, amplitude in enumerate(statevector): 262 # consider only state qubits and revert bit order 263 bitstr = bin(i)[2:].zfill(num_qubits)[-num_state_qubits:][::-1] 264 objectives = [bitstr[index] for index in problem.objective_qubits] 265 if problem.is_good_state(objectives): 266 prob = prob + np.abs(amplitude)**2 267 268 return prob 269 270 def estimate(self, estimation_problem: EstimationProblem 271 ) -> 'IterativeAmplitudeEstimationResult': 272 # initialize memory variables 273 powers = [0] # list of powers k: Q^k, (called 'k' in paper) 274 ratios = [] # list of multiplication factors (called 'q' in paper) 275 theta_intervals = [[0, 1 / 4]] # a priori knowledge of theta / 2 / pi 276 a_intervals = [[0.0, 1.0]] # a priori knowledge of the confidence interval of the estimate 277 num_oracle_queries = 0 278 num_one_shots = [] 279 280 # maximum number of rounds 281 max_rounds = int(np.log(self._min_ratio * np.pi / 8 282 / self._epsilon) / np.log(self._min_ratio)) + 1 283 upper_half_circle = True # initially theta is in the upper half-circle 284 285 # for statevector we can directly return the probability to measure 1 286 # note, that no iterations here are necessary 287 if self._quantum_instance.is_statevector: 288 # simulate circuit 289 circuit = self.construct_circuit(estimation_problem, k=0, measurement=False) 290 ret = self._quantum_instance.execute(circuit) 291 292 # get statevector 293 statevector = 
ret.get_statevector(circuit) 294 295 # calculate the probability of measuring '1' 296 num_qubits = circuit.num_qubits - circuit.num_ancillas 297 prob = self._good_state_probability(estimation_problem, statevector, num_qubits) 298 prob = cast(float, prob) # tell MyPy it's a float and not Tuple[int, float ] 299 300 a_confidence_interval = [prob, prob] # type: List[float] 301 a_intervals.append(a_confidence_interval) 302 303 theta_i_interval = [np.arccos(1 - 2 * a_i) / 2 / np.pi # type: ignore 304 for a_i in a_confidence_interval] 305 theta_intervals.append(theta_i_interval) 306 num_oracle_queries = 0 # no Q-oracle call, only a single one to A 307 308 else: 309 num_iterations = 0 # keep track of the number of iterations 310 shots = self._quantum_instance._run_config.shots # number of shots per iteration 311 312 # do while loop, keep in mind that we scaled theta mod 2pi such that it lies in [0,1] 313 while theta_intervals[-1][1] - theta_intervals[-1][0] > self._epsilon / np.pi: 314 num_iterations += 1 315 316 # get the next k 317 k, upper_half_circle = self._find_next_k(powers[-1], upper_half_circle, 318 theta_intervals[-1], # type: ignore 319 min_ratio=self._min_ratio) 320 321 # store the variables 322 powers.append(k) 323 ratios.append((2 * powers[-1] + 1) / (2 * powers[-2] + 1)) 324 325 # run measurements for Q^k A|0> circuit 326 circuit = self.construct_circuit(estimation_problem, k, measurement=True) 327 ret = self._quantum_instance.execute(circuit) 328 329 # get the counts and store them 330 counts = ret.get_counts(circuit) 331 332 # calculate the probability of measuring '1', 'prob' is a_i in the paper 333 num_qubits = circuit.num_qubits - circuit.num_ancillas 334 # type: ignore 335 one_counts, prob = self._good_state_probability(estimation_problem, counts, 336 num_qubits) 337 338 num_one_shots.append(one_counts) 339 340 # track number of Q-oracle calls 341 num_oracle_queries += shots * k 342 343 # if on the previous iterations we have K_{i-1} == K_i, we sum these samples up 344 j = 1 # number of times we stayed fixed at the same K 345 round_shots = shots 346 round_one_counts = one_counts 347 if num_iterations > 1: 348 while powers[num_iterations - j] == powers[num_iterations] \ 349 and num_iterations >= j + 1: 350 j = j + 1 351 round_shots += shots 352 round_one_counts += num_one_shots[-j] 353 354 # compute a_min_i, a_max_i 355 if self._confint_method == 'chernoff': 356 a_i_min, a_i_max = _chernoff_confint(prob, round_shots, max_rounds, 357 self._alpha) 358 else: # 'beta' 359 a_i_min, a_i_max = _clopper_pearson_confint(round_one_counts, round_shots, 360 self._alpha / max_rounds) 361 362 # compute theta_min_i, theta_max_i 363 if upper_half_circle: 364 theta_min_i = np.arccos(1 - 2 * a_i_min) / 2 / np.pi 365 theta_max_i = np.arccos(1 - 2 * a_i_max) / 2 / np.pi 366 else: 367 theta_min_i = 1 - np.arccos(1 - 2 * a_i_max) / 2 / np.pi 368 theta_max_i = 1 - np.arccos(1 - 2 * a_i_min) / 2 / np.pi 369 370 # compute theta_u, theta_l of this iteration 371 scaling = 4 * k + 2 # current K_i factor 372 theta_u = (int(scaling * theta_intervals[-1][1]) + theta_max_i) / scaling 373 theta_l = (int(scaling * theta_intervals[-1][0]) + theta_min_i) / scaling 374 theta_intervals.append([theta_l, theta_u]) 375 376 # compute a_u_i, a_l_i 377 a_u = np.sin(2 * np.pi * theta_u)**2 378 a_l = np.sin(2 * np.pi * theta_l)**2 379 a_u = cast(float, a_u) 380 a_l = cast(float, a_l) 381 a_intervals.append([a_l, a_u]) 382 383 # get the latest confidence interval for the estimate of a 384 confidence_interval = 
tuple(a_intervals[-1]) 385 386 # the final estimate is the mean of the confidence interval 387 estimation = np.mean(confidence_interval) 388 389 result = IterativeAmplitudeEstimationResult() 390 result.alpha = self._alpha 391 result.post_processing = estimation_problem.post_processing 392 result.num_oracle_queries = num_oracle_queries 393 394 result.estimation = estimation 395 result.epsilon_estimated = (confidence_interval[1] - confidence_interval[0]) / 2 396 result.confidence_interval = confidence_interval 397 398 result.estimation_processed = estimation_problem.post_processing(estimation) 399 confidence_interval = tuple(estimation_problem.post_processing(x) 400 for x in confidence_interval) 401 result.confidence_interval_processed = confidence_interval 402 result.epsilon_estimated_processed = (confidence_interval[1] - confidence_interval[0]) / 2 403 result.estimate_intervals = a_intervals 404 result.theta_intervals = theta_intervals 405 result.powers = powers 406 result.ratios = ratios 407 408 return result 409 410 411 class IterativeAmplitudeEstimationResult(AmplitudeEstimatorResult): 412 """The ``IterativeAmplitudeEstimation`` result object.""" 413 414 def __init__(self) -> None: 415 super().__init__() 416 self._alpha = None 417 self._epsilon_target = None 418 self._epsilon_estimated = None 419 self._epsilon_estimated_processed = None 420 self._estimate_intervals = None 421 self._theta_intervals = None 422 self._powers = None 423 self._ratios = None 424 self._confidence_interval_processed = None 425 426 @property 427 def alpha(self) -> float: 428 r"""Return the confidence level :math:`\alpha`.""" 429 return self._alpha 430 431 @alpha.setter 432 def alpha(self, value: float) -> None: 433 r"""Set the confidence level :math:`\alpha`.""" 434 self._alpha = value 435 436 @property 437 def epsilon_target(self) -> float: 438 """Return the target half-width of the confidence interval.""" 439 return self._epsilon_target 440 441 @epsilon_target.setter 442 def epsilon_target(self, value: float) -> None: 443 """Set the target half-width of the confidence interval.""" 444 self._epsilon_target = value 445 446 @property 447 def epsilon_estimated(self) -> float: 448 """Return the estimated half-width of the confidence interval.""" 449 return self._epsilon_estimated 450 451 @epsilon_estimated.setter 452 def epsilon_estimated(self, value: float) -> None: 453 """Set the estimated half-width of the confidence interval.""" 454 self._epsilon_estimated = value 455 456 @property 457 def epsilon_estimated_processed(self) -> float: 458 """Return the post-processed estimated half-width of the confidence interval.""" 459 return self._epsilon_estimated_processed 460 461 @epsilon_estimated_processed.setter 462 def epsilon_estimated_processed(self, value: float) -> None: 463 """Set the post-processed estimated half-width of the confidence interval.""" 464 self._epsilon_estimated_processed = value 465 466 @property 467 def estimate_intervals(self) -> List[List[float]]: 468 """Return the confidence intervals for the estimate in each iteration.""" 469 return self._estimate_intervals 470 471 @estimate_intervals.setter 472 def estimate_intervals(self, value: List[List[float]]) -> None: 473 """Set the confidence intervals for the estimate in each iteration.""" 474 self._estimate_intervals = value 475 476 @property 477 def theta_intervals(self) -> List[List[float]]: 478 """Return the confidence intervals for the angles in each iteration.""" 479 return self._theta_intervals 480 481 @theta_intervals.setter 482 def 
theta_intervals(self, value: List[List[float]]) -> None: 483 """Set the confidence intervals for the angles in each iteration.""" 484 self._theta_intervals = value 485 486 @property 487 def powers(self) -> List[int]: 488 """Return the powers of the Grover operator in each iteration.""" 489 return self._powers 490 491 @powers.setter 492 def powers(self, value: List[int]) -> None: 493 """Set the powers of the Grover operator in each iteration.""" 494 self._powers = value 495 496 @property 497 def ratios(self) -> List[float]: 498 r"""Return the ratios :math:`K_{i+1}/K_{i}` for each iteration :math:`i`.""" 499 return self._ratios 500 501 @ratios.setter 502 def ratios(self, value: List[float]) -> None: 503 r"""Set the ratios :math:`K_{i+1}/K_{i}` for each iteration :math:`i`.""" 504 self._ratios = value 505 506 @property 507 def confidence_interval_processed(self) -> Tuple[float, float]: 508 """Return the post-processed confidence interval.""" 509 return self._confidence_interval_processed 510 511 @confidence_interval_processed.setter 512 def confidence_interval_processed(self, value: Tuple[float, float]) -> None: 513 """Set the post-processed confidence interval.""" 514 self._confidence_interval_processed = value 515 516 517 def _chernoff_confint(value: float, shots: int, max_rounds: int, alpha: float 518 ) -> Tuple[float, float]: 519 """Compute the Chernoff confidence interval for `shots` i.i.d. Bernoulli trials. 520 521 The confidence interval is 522 523 [value - eps, value + eps], where eps = sqrt(3 * log(2 * max_rounds/ alpha) / shots) 524 525 but at most [0, 1]. 526 527 Args: 528 value: The current estimate. 529 shots: The number of shots. 530 max_rounds: The maximum number of rounds, used to compute epsilon_a. 531 alpha: The confidence level, used to compute epsilon_a. 532 533 Returns: 534 The Chernoff confidence interval. 535 """ 536 eps = np.sqrt(3 * np.log(2 * max_rounds / alpha) / shots) 537 lower = np.maximum(0, value - eps) 538 upper = np.minimum(1, value + eps) 539 return lower, upper 540 541 542 def _clopper_pearson_confint(counts: int, shots: int, alpha: float) -> Tuple[float, float]: 543 """Compute the Clopper-Pearson confidence interval for `shots` i.i.d. Bernoulli trials. 544 545 Args: 546 counts: The number of positive counts. 547 shots: The number of shots. 548 alpha: The confidence level for the confidence interval. 549 550 Returns: 551 The Clopper-Pearson confidence interval. 552 """ 553 lower, upper = 0, 1 554 555 # if counts == 0, the beta quantile returns nan 556 if counts != 0: 557 lower = beta.ppf(alpha / 2, counts, shots - counts + 1) 558 559 # if counts == shots, the beta quantile returns nan 560 if counts != shots: 561 upper = beta.ppf(1 - alpha / 2, counts + 1, shots - counts) 562 563 return lower, upper 564 [end of qiskit/algorithms/amplitude_estimators/iae.py] [start of qiskit/circuit/parameterexpression.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2017, 2019. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 12 """ 13 ParameterExpression Class to enable creating simple expressions of Parameters. 
14 """ 15 from typing import Callable, Dict, Set, Union 16 17 import numbers 18 import operator 19 20 import numpy 21 22 from qiskit.circuit.exceptions import CircuitError 23 24 ParameterValueType = Union['ParameterExpression', float, int] 25 26 27 class ParameterExpression: 28 """ParameterExpression class to enable creating expressions of Parameters.""" 29 30 __slots__ = ['_parameter_symbols', '_parameters', '_symbol_expr', '_names'] 31 32 def __init__(self, symbol_map: Dict, expr): 33 """Create a new :class:`ParameterExpression`. 34 35 Not intended to be called directly, but to be instantiated via operations 36 on other :class:`Parameter` or :class:`ParameterExpression` objects. 37 38 Args: 39 symbol_map (Dict[Parameter, [ParameterExpression, float, or int]]): 40 Mapping of :class:`Parameter` instances to the :class:`sympy.Symbol` 41 serving as their placeholder in expr. 42 expr (sympy.Expr): Expression of :class:`sympy.Symbol` s. 43 """ 44 self._parameter_symbols = symbol_map 45 self._parameters = set(self._parameter_symbols) 46 self._symbol_expr = expr 47 self._names = None 48 49 @property 50 def parameters(self) -> Set: 51 """Returns a set of the unbound Parameters in the expression.""" 52 return self._parameters 53 54 def conjugate(self) -> 'ParameterExpression': 55 """Return the conjugate.""" 56 conjugated = ParameterExpression(self._parameter_symbols, self._symbol_expr.conjugate()) 57 return conjugated 58 59 def assign(self, parameter, value: ParameterValueType) -> 'ParameterExpression': 60 """ 61 Assign one parameter to a value, which can either be numeric or another parameter 62 expression. 63 64 Args: 65 parameter (Parameter): A parameter in this expression whose value will be updated. 66 value: The new value to bind to. 67 68 Returns: 69 A new expression parameterized by any parameters which were not bound by assignment. 70 """ 71 if isinstance(value, ParameterExpression): 72 return self.subs({parameter: value}) 73 return self.bind({parameter: value}) 74 75 def bind(self, parameter_values: Dict) -> 'ParameterExpression': 76 """Binds the provided set of parameters to their corresponding values. 77 78 Args: 79 parameter_values: Mapping of Parameter instances to the numeric value to which 80 they will be bound. 81 82 Raises: 83 CircuitError: 84 - If parameter_values contains Parameters outside those in self. 85 - If a non-numeric value is passed in parameter_values. 86 ZeroDivisionError: 87 - If binding the provided values requires division by zero. 88 89 Returns: 90 A new expression parameterized by any parameters which were not bound by 91 parameter_values. 92 """ 93 94 self._raise_if_passed_unknown_parameters(parameter_values.keys()) 95 self._raise_if_passed_nan(parameter_values) 96 97 symbol_values = {self._parameter_symbols[parameter]: value 98 for parameter, value in parameter_values.items()} 99 bound_symbol_expr = self._symbol_expr.subs(symbol_values) 100 101 # Don't use sympy.free_symbols to count remaining parameters here. 102 # sympy will in some cases reduce the expression and remove even 103 # unbound symbols. 104 # e.g. 
(sympy.Symbol('s') * 0).free_symbols == set() 105 106 free_parameters = self.parameters - parameter_values.keys() 107 free_parameter_symbols = {p: s for p, s in self._parameter_symbols.items() 108 if p in free_parameters} 109 110 if bound_symbol_expr.is_infinite: 111 raise ZeroDivisionError('Binding provided for expression ' 112 'results in division by zero ' 113 '(Expression: {}, Bindings: {}).'.format( 114 self, parameter_values)) 115 116 return ParameterExpression(free_parameter_symbols, bound_symbol_expr) 117 118 def subs(self, 119 parameter_map: Dict) -> 'ParameterExpression': 120 """Returns a new Expression with replacement Parameters. 121 122 Args: 123 parameter_map: Mapping from Parameters in self to the ParameterExpression 124 instances with which they should be replaced. 125 126 Raises: 127 CircuitError: 128 - If parameter_map contains Parameters outside those in self. 129 - If the replacement Parameters in parameter_map would result in 130 a name conflict in the generated expression. 131 132 Returns: 133 A new expression with the specified parameters replaced. 134 """ 135 136 inbound_parameters = {p 137 for replacement_expr in parameter_map.values() 138 for p in replacement_expr.parameters} 139 140 self._raise_if_passed_unknown_parameters(parameter_map.keys()) 141 self._raise_if_parameter_names_conflict(inbound_parameters, parameter_map.keys()) 142 143 from sympy import Symbol 144 new_parameter_symbols = {p: Symbol(p.name) 145 for p in inbound_parameters} 146 147 # Include existing parameters in self not set to be replaced. 148 new_parameter_symbols.update({p: s 149 for p, s in self._parameter_symbols.items() 150 if p not in parameter_map}) 151 152 # If new_param is an expr, we'll need to construct a matching sympy expr 153 # but with our sympy symbols instead of theirs. 
154 155 symbol_map = { 156 self._parameter_symbols[old_param]: new_param._symbol_expr 157 for old_param, new_param in parameter_map.items() 158 } 159 160 substituted_symbol_expr = self._symbol_expr.subs(symbol_map) 161 162 return ParameterExpression(new_parameter_symbols, substituted_symbol_expr) 163 164 def _raise_if_passed_unknown_parameters(self, parameters): 165 unknown_parameters = parameters - self.parameters 166 if unknown_parameters: 167 raise CircuitError('Cannot bind Parameters ({}) not present in ' 168 'expression.'.format([str(p) for p in unknown_parameters])) 169 170 def _raise_if_passed_nan(self, parameter_values): 171 nan_parameter_values = {p: v for p, v in parameter_values.items() 172 if not isinstance(v, numbers.Number)} 173 if nan_parameter_values: 174 raise CircuitError('Expression cannot bind non-numeric values ({})'.format( 175 nan_parameter_values)) 176 177 def _raise_if_parameter_names_conflict(self, inbound_parameters, outbound_parameters=None): 178 if outbound_parameters is None: 179 outbound_parameters = set() 180 181 if self._names is None: 182 self._names = {p.name: p for p in self._parameters} 183 184 inbound_names = {p.name: p for p in inbound_parameters} 185 outbound_names = {p.name: p for p in outbound_parameters} 186 187 shared_names = (self._names.keys() - outbound_names.keys()) & inbound_names.keys() 188 conflicting_names = {name for name in shared_names 189 if self._names[name] != inbound_names[name]} 190 if conflicting_names: 191 raise CircuitError('Name conflict applying operation for parameters: ' 192 '{}'.format(conflicting_names)) 193 194 def _apply_operation(self, operation: Callable, 195 other: ParameterValueType, 196 reflected: bool = False) -> 'ParameterExpression': 197 """Base method implementing math operations between Parameters and 198 either a constant or a second ParameterExpression. 199 200 Args: 201 operation: One of operator.{add,sub,mul,truediv}. 202 other: The second argument to be used with self in operation. 203 reflected: Optional - The default ordering is "self operator other". 204 If reflected is True, this is switched to "other operator self". 205 For use in e.g. __radd__, ... 206 207 Raises: 208 CircuitError: 209 - If parameter_map contains Parameters outside those in self. 210 - If the replacement Parameters in parameter_map would result in 211 a name conflict in the generated expression. 212 213 Returns: 214 A new expression describing the result of the operation. 215 """ 216 self_expr = self._symbol_expr 217 218 if isinstance(other, ParameterExpression): 219 self._raise_if_parameter_names_conflict(other._parameter_symbols.keys()) 220 221 parameter_symbols = {**self._parameter_symbols, **other._parameter_symbols} 222 other_expr = other._symbol_expr 223 elif isinstance(other, numbers.Number) and numpy.isfinite(other): 224 parameter_symbols = self._parameter_symbols.copy() 225 other_expr = other 226 else: 227 return NotImplemented 228 229 if reflected: 230 expr = operation(other_expr, self_expr) 231 else: 232 expr = operation(self_expr, other_expr) 233 234 return ParameterExpression(parameter_symbols, expr) 235 236 def gradient(self, param) -> Union['ParameterExpression', float]: 237 """Get the derivative of a parameter expression w.r.t. a specified parameter expression. 238 239 Args: 240 param (Parameter): Parameter w.r.t. which we want to take the derivative 241 242 Returns: 243 ParameterExpression representing the gradient of param_expr w.r.t. 
param 244 """ 245 # Check if the parameter is contained in the parameter expression 246 if param not in self._parameter_symbols.keys(): 247 # If it is not contained then return 0 248 return 0.0 249 250 # Compute the gradient of the parameter expression w.r.t. param 251 import sympy as sy 252 key = self._parameter_symbols[param] 253 # TODO enable nth derivative 254 expr_grad = sy.Derivative(self._symbol_expr, key).doit() 255 256 # generate the new dictionary of symbols 257 # this needs to be done since in the derivative some symbols might disappear (e.g. 258 # when deriving linear expression) 259 parameter_symbols = {} 260 for parameter, symbol in self._parameter_symbols.items(): 261 if symbol in expr_grad.free_symbols: 262 parameter_symbols[parameter] = symbol 263 # If the gradient corresponds to a parameter expression then return the new expression. 264 if len(parameter_symbols) > 0: 265 return ParameterExpression(parameter_symbols, expr=expr_grad) 266 # If no free symbols left, return a float corresponding to the gradient. 267 return float(expr_grad) 268 269 def __add__(self, other): 270 return self._apply_operation(operator.add, other) 271 272 def __radd__(self, other): 273 return self._apply_operation(operator.add, other, reflected=True) 274 275 def __sub__(self, other): 276 return self._apply_operation(operator.sub, other) 277 278 def __rsub__(self, other): 279 return self._apply_operation(operator.sub, other, reflected=True) 280 281 def __mul__(self, other): 282 return self._apply_operation(operator.mul, other) 283 284 def __neg__(self): 285 return self._apply_operation(operator.mul, -1.0) 286 287 def __rmul__(self, other): 288 return self._apply_operation(operator.mul, other, reflected=True) 289 290 def __truediv__(self, other): 291 if other == 0: 292 raise ZeroDivisionError('Division of a ParameterExpression by zero.') 293 return self._apply_operation(operator.truediv, other) 294 295 def __rtruediv__(self, other): 296 return self._apply_operation(operator.truediv, other, reflected=True) 297 298 def _call(self, ufunc): 299 return ParameterExpression( 300 self._parameter_symbols, 301 ufunc(self._symbol_expr) 302 ) 303 304 def sin(self): 305 """Sine of a ParameterExpression""" 306 from sympy import sin as _sin 307 return self._call(_sin) 308 309 def cos(self): 310 """Cosine of a ParameterExpression""" 311 from sympy import cos as _cos 312 return self._call(_cos) 313 314 def tan(self): 315 """Tangent of a ParameterExpression""" 316 from sympy import tan as _tan 317 return self._call(_tan) 318 319 def arcsin(self): 320 """Arcsin of a ParameterExpression""" 321 from sympy import asin as _asin 322 return self._call(_asin) 323 324 def arccos(self): 325 """Arccos of a ParameterExpression""" 326 from sympy import acos as _acos 327 return self._call(_acos) 328 329 def arctan(self): 330 """Arctan of a ParameterExpression""" 331 from sympy import atan as _atan 332 return self._call(_atan) 333 334 def exp(self): 335 """Exponential of a ParameterExpression""" 336 from sympy import exp as _exp 337 return self._call(_exp) 338 339 def log(self): 340 """Logarithm of a ParameterExpression""" 341 from sympy import log as _log 342 return self._call(_log) 343 344 def __repr__(self): 345 return '{}({})'.format(self.__class__.__name__, str(self)) 346 347 def __str__(self): 348 return str(self._symbol_expr) 349 350 def __float__(self): 351 if self.parameters: 352 raise TypeError('ParameterExpression with unbound parameters ({}) ' 353 'cannot be cast to a float.'.format(self.parameters)) 354 return 
float(self._symbol_expr) 355 356 def __complex__(self): 357 if self.parameters: 358 raise TypeError('ParameterExpression with unbound parameters ({}) ' 359 'cannot be cast to a complex.'.format(self.parameters)) 360 return complex(self._symbol_expr) 361 362 def __int__(self): 363 if self.parameters: 364 raise TypeError('ParameterExpression with unbound parameters ({}) ' 365 'cannot be cast to an int.'.format(self.parameters)) 366 return int(self._symbol_expr) 367 368 def __hash__(self): 369 return hash((frozenset(self._parameter_symbols), self._symbol_expr)) 370 371 def __copy__(self): 372 return self 373 374 def __deepcopy__(self, memo=None): 375 return self 376 377 def __eq__(self, other): 378 """Check if this parameter expression is equal to another parameter expression 379 or a fixed value (only if this is a bound expression). 380 Args: 381 other (ParameterExpression or a number): 382 Parameter expression or numeric constant used for comparison 383 Returns: 384 bool: result of the comparison 385 """ 386 from sympy import srepr 387 if isinstance(other, ParameterExpression): 388 return (self.parameters == other.parameters 389 and srepr(self._symbol_expr) == srepr(other._symbol_expr)) 390 elif isinstance(other, numbers.Number): 391 return (len(self.parameters) == 0 392 and complex(self._symbol_expr) == other) 393 return False 394 [end of qiskit/circuit/parameterexpression.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
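As a quick orientation for the `qiskit/circuit/parameterexpression.py` listing above, here is a short, hedged usage sketch of the API it defines (`Parameter` is the user-facing subclass; the printed values follow from the `bind`, `__float__` and `gradient` methods shown in the listing, and the snippet assumes a standard qiskit-terra install):

```python
from qiskit.circuit import Parameter

x = Parameter('x')
expr = 2 * x + 1            # a ParameterExpression with one free parameter
bound = expr.bind({x: 3})   # bind() substitutes numeric values for parameters
print(float(bound))         # 7.0 -- __float__ is allowed once no parameters remain unbound
print(expr.gradient(x))     # 2.0 -- derivative w.r.t. x, returned as a plain float
```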
Qiskit/qiskit
aa5e6eed1409d3b043a1bbe9d58004f56b8dbb18
ParameterExpression.__eq__ doesn't gracefully convert numeric types <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. --> ### What is the expected enhancement? `__eq__` in python conventionally considers objects equal if they are semantically equivalent, even if their types differ. ``` >>> 2 == 2 True >>> 2.0 == 2.0 True >>> 2 == 2.0 True ``` Same for Sympy: ``` >>> import sympy as sp >>> sp.Integer(2) == sp.Integer(2) True >>> sp.Float('2.0', precision=53) == sp.Float(2.0, precision=54) True >>> sp.Integer(2) == sp.Float('2.0', precision=53) True ``` But `ParameterExpression.__eq__` breaks this convention. ``` >>> import qiskit as qk >>> theta = qk.circuit.Parameter('theta') >>> theta * 2 == theta * 2 True >>> theta * 2.0 == theta * 2.0 True >>> theta * 2 == theta * 2.0 False ``` Currently, at https://github.com/Qiskit/qiskit-terra/blob/44462a8b13ea6c2cce0f9c7345c26c15fb0d4ce3/qiskit/circuit/parameterexpression.py#L389 , we compare `srepr(self._symbol_expr) == srepr(other._symbol_expr)` as strings. (`srepr` of the examples above would look like `Mul(Integer(2), Symbol('th'))` for `theta * 2` and `Mul(Float('2.0', precision=53), Symbol('th'))` for `theta * 2.0`.) Instead, we should walk the sympy expression tree (following https://docs.sympy.org/latest/tutorial/manipulation.html#recursing-through-an-expression-tree ) and rely on sympy's built-in `__eq__`. (The original motivation for using `srepr` instead of directly comparing `._symbol_expr` isn't clear from the current code, so that may be another option worth investigating.)
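As a minimal illustration of the point made above, the following sketch uses only sympy (the variable names are mine): the `srepr` strings of `2*th` and `2.0*th` differ even though sympy itself, e.g. via `Expr.equals`, treats the two expressions as equal.

```python
# Minimal sketch of the comparison discussed above; requires only sympy.
import sympy as sp

th = sp.Symbol('th')
a = 2 * th     # srepr: Mul(Integer(2), Symbol('th'))
b = 2.0 * th   # srepr: Mul(Float('2.0', precision=53), Symbol('th'))

print(sp.srepr(a) == sp.srepr(b))  # False -- the strings differ, so a string-based
                                   # comparison reports the expressions as unequal
print(a.equals(b))                 # True  -- sympy's own equality test compares by value
```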
This sounds reasonable to me. The `srepr` approach seemed a bit strict.
2021-02-01T15:01:29Z
<patch> diff --git a/qiskit/circuit/parameterexpression.py b/qiskit/circuit/parameterexpression.py --- a/qiskit/circuit/parameterexpression.py +++ b/qiskit/circuit/parameterexpression.py @@ -383,10 +383,9 @@ def __eq__(self, other): Returns: bool: result of the comparison """ - from sympy import srepr if isinstance(other, ParameterExpression): return (self.parameters == other.parameters - and srepr(self._symbol_expr) == srepr(other._symbol_expr)) + and self._symbol_expr.equals(other._symbol_expr)) elif isinstance(other, numbers.Number): return (len(self.parameters) == 0 and complex(self._symbol_expr) == other) </patch>
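For reference, with the one-line change in the patch above applied, the comparison from the issue behaves as expected. A small check (hedged: it assumes a qiskit-terra build that already contains this patch):

```python
from qiskit.circuit import Parameter

theta = Parameter('theta')
print(theta * 2 == theta * 2.0)   # True after the patch: sympy's .equals() compares by value
print(theta * 2 == theta * 3)     # False: genuinely different expressions remain unequal
```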
[]
[]
Qiskit__qiskit-5560
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> implement iterating/collecting PauliSumOp coefficients <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. --> It would be useful to be able to iterate over, or return a list of coefficients in PauliSumOp. At present, one has to dig into the details to construct the coefficients for each term. In particular, it requires multiplying two numbers for each coeffcient. There are use cases, for instance when bounding the eigenvalues of the operators. For example, this gives a bound on the eigenvalues: ```python float(sum(abs(pauli_sum.primitive.coeffs)) * abs(pauli_sum.coeff)) ``` Abstracting access would be cleaner and resistant to implementation changes. Like this, if we want to return an numpy array ```python float(sum(abs(pauli_sum.coeffs))) ``` or this, if we return a list ```python sum(abs(c) for c in pauli_sum.coeffs) ``` This should be quite easy to implement. https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/opflow/primitive_ops/pauli_sum_op.py </issue> <code> [start of README.md] 1 # Qiskit Terra 2 3 [![License](https://img.shields.io/github/license/Qiskit/qiskit-terra.svg?style=popout-square)](https://opensource.org/licenses/Apache-2.0)[![Build Status](https://img.shields.io/travis/com/Qiskit/qiskit-terra/master.svg?style=popout-square)](https://travis-ci.com/Qiskit/qiskit-terra)[![](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases)[![](https://img.shields.io/pypi/dm/qiskit-terra.svg?style=popout-square)](https://pypi.org/project/qiskit-terra/)[![Coverage Status](https://coveralls.io/repos/github/Qiskit/qiskit-terra/badge.svg?branch=master)](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master) 4 5 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms. 6 7 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built. 8 9 ## Installation 10 11 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra. 12 13 ```bash 14 pip install qiskit 15 ``` 16 17 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version. 18 19 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source). 20 21 ## Creating Your First Quantum Program in Qiskit Terra 22 23 Now that Qiskit is installed, it's time to begin working with Terra. 24 25 We are ready to try out a quantum circuit example, which is simulated locally using 26 the Qiskit BasicAer element. This is a simple example that makes an entangled state. 
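Returning briefly to the PauliSumOp issue quoted at the start of this instance, here is a minimal, hedged sketch of the behaviour it asks for, written as free-standing helpers rather than as the actual `PauliSumOp.coeffs` property (the helper names are mine; the attributes `.coeff` and `.primitive.coeffs` are the ones named in the issue):

```python
import numpy as np

def pauli_sum_coeffs(pauli_sum) -> np.ndarray:
    """Effective coefficient of each Pauli term: the primitive's coefficients scaled by the global coeff."""
    return pauli_sum.coeff * pauli_sum.primitive.coeffs

def eigenvalue_bound(pauli_sum) -> float:
    """Bound on the eigenvalues' magnitude from the issue: sum of absolute per-term coefficients."""
    return float(np.sum(np.abs(pauli_sum_coeffs(pauli_sum))))
```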
27 28 ``` 29 $ python 30 ``` 31 32 ```python 33 >>> from qiskit import * 34 >>> qc = QuantumCircuit(2, 2) 35 >>> qc.h(0) 36 >>> qc.cx(0, 1) 37 >>> qc.measure([0,1], [0,1]) 38 >>> backend_sim = BasicAer.get_backend('qasm_simulator') 39 >>> transpiled_qc = transpile(qc, backend_sim) 40 >>> result = backend_sim.run(assemble(transpiled_qc)).result() 41 >>> print(result.get_counts(qc)) 42 ``` 43 44 In this case, the output will be: 45 46 ```python 47 {'00': 513, '11': 511} 48 ``` 49 50 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to 51 run the same program on a real quantum computer via IBMQ. 52 53 ### Executing your code on a real quantum chip 54 55 You can also use Qiskit to execute your code on a 56 **real quantum chip**. 57 In order to do so, you need to configure Qiskit for using the credentials in 58 your IBM Q account: 59 60 #### Configure your IBMQ credentials 61 62 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so. 63 64 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account. 65 66 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run: 67 68 ```python 69 >>> from qiskit import IBMQ 70 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL') 71 ``` 72 73 After calling `IBMQ.save_account()`, your credentials will be stored on disk. 74 Once they are stored, at any point in the future you can load and use them 75 in your program simply via: 76 77 ```python 78 >>> from qiskit import IBMQ 79 >>> IBMQ.load_account() 80 ``` 81 82 Those who do not want to save their credentials to disk should use instead: 83 84 ```python 85 >>> from qiskit import IBMQ 86 >>> IBMQ.enable_account('MY_API_TOKEN') 87 ``` 88 89 and the token will only be active for the session. For examples using Terra with real 90 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in 91 the levels. 92 93 ## Contribution Guidelines 94 95 If you'd like to contribute to Qiskit Terra, please take a look at our 96 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. 97 98 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please 99 [join the Qiskit Slack community](https://ibm.co/joinqiskitslack) 100 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions. 101 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit). 102 103 ## Next Steps 104 105 Now you're set up and ready to check out some of the other examples from our 106 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository. 107 108 ## Authors and Citation 109 110 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 111 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib). 112 113 ## Changelog and Release Notes 114 115 The changelog for a particular release is dynamically generated and gets 116 written to the release page on Github for each release. 
For example, you can 117 find the page for the `0.9.0` release here: 118 119 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0 120 121 The changelog for the current release can be found in the releases tab: 122 ![](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square) 123 The changelog provides a quick overview of notable changes for a given 124 release. 125 126 Additionally, as part of each release detailed release notes are written to 127 document in detail what has changed as part of a release. This includes any 128 documentation on potential breaking changes on upgrade and new features. 129 For example, You can find the release notes for the `0.9.0` release in the 130 Qiskit documentation here: 131 132 https://qiskit.org/documentation/release_notes.html#terra-0-9 133 134 ## License 135 136 [Apache License 2.0](LICENSE.txt) 137 [end of README.md] [start of qiskit/algorithms/minimum_eigen_solvers/vqe.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2018, 2020. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 12 13 """The Variational Quantum Eigensolver algorithm. 14 15 See https://arxiv.org/abs/1304.3061 16 """ 17 18 from typing import Optional, List, Callable, Union, Dict 19 import logging 20 from time import time 21 import numpy as np 22 23 from qiskit import ClassicalRegister, QuantumCircuit 24 from qiskit.circuit import Parameter 25 from qiskit.circuit.library import RealAmplitudes 26 from qiskit.providers import BaseBackend 27 from qiskit.providers import Backend 28 from qiskit.opflow import (OperatorBase, ExpectationBase, ExpectationFactory, StateFn, 29 CircuitStateFn, ListOp, I, CircuitSampler) 30 from qiskit.opflow.gradients import GradientBase 31 from qiskit.utils.validation import validate_min 32 from qiskit.utils.backend_utils import is_aer_provider 33 from qiskit.utils.quantum_instance import QuantumInstance 34 from ..optimizers import Optimizer, SLSQP 35 from ..variational_forms import VariationalForm 36 from ..variational_algorithm import VariationalAlgorithm, VariationalResult 37 from .minimum_eigen_solver import MinimumEigensolver, MinimumEigensolverResult 38 from ..exceptions import AlgorithmError 39 40 logger = logging.getLogger(__name__) 41 42 # disable check for var_forms, optimizer setter because of pylint bug 43 # pylint: disable=no-member 44 45 46 class VQE(VariationalAlgorithm, MinimumEigensolver): 47 r"""The Variational Quantum Eigensolver algorithm. 48 49 `VQE <https://arxiv.org/abs/1304.3061>`__ is a hybrid algorithm that uses a 50 variational technique and interleaves quantum and classical computations in order to find 51 the minimum eigenvalue of the Hamiltonian :math:`H` of a given system. 52 53 An instance of VQE requires defining two algorithmic sub-components: 54 a trial state (ansatz) from :mod:`~qiskit.algorithms.variational_forms`, and one 55 of the classical :mod:`~qiskit.algorithms.optimizers`. 
The ansatz is varied, via its set 56 of parameters, by the optimizer, such that it works towards a state, as determined by the 57 parameters applied to the variational form, that will result in the minimum expectation value 58 being measured of the input operator (Hamiltonian). 59 60 An optional array of parameter values, via the *initial_point*, may be provided as the 61 starting point for the search of the minimum eigenvalue. This feature is particularly useful 62 such as when there are reasons to believe that the solution point is close to a particular 63 point. As an example, when building the dissociation profile of a molecule, 64 it is likely that using the previous computed optimal solution as the starting 65 initial point for the next interatomic distance is going to reduce the number of iterations 66 necessary for the variational algorithm to converge. It provides an 67 `initial point tutorial <https://github.com/Qiskit/qiskit-tutorials-community/blob/master 68 /chemistry/h2_vqe_initial_point.ipynb>`__ detailing this use case. 69 70 The length of the *initial_point* list value must match the number of the parameters 71 expected by the variational form being used. If the *initial_point* is left at the default 72 of ``None``, then VQE will look to the variational form for a preferred value, based on its 73 given initial state. If the variational form returns ``None``, 74 then a random point will be generated within the parameter bounds set, as per above. 75 If the variational form provides ``None`` as the lower bound, then VQE 76 will default it to :math:`-2\pi`; similarly, if the variational form returns ``None`` 77 as the upper bound, the default value will be :math:`2\pi`. 78 79 .. note:: 80 81 The VQE stores the parameters of ``var_form`` sorted by name to map the values 82 provided by the optimizer to the circuit. This is done to ensure reproducible results, 83 for example such that running the optimization twice with same random seeds yields the 84 same result. Also, the ``optimal_point`` of the result object can be used as initial 85 point of another VQE run by passing it as ``initial_point`` to the initializer. 86 87 """ 88 89 def __init__(self, 90 var_form: Optional[Union[QuantumCircuit, VariationalForm]] = None, 91 optimizer: Optional[Optimizer] = None, 92 initial_point: Optional[np.ndarray] = None, 93 gradient: Optional[Union[GradientBase, Callable]] = None, 94 expectation: Optional[ExpectationBase] = None, 95 include_custom: bool = False, 96 max_evals_grouped: int = 1, 97 callback: Optional[Callable[[int, np.ndarray, float, float], None]] = None, 98 quantum_instance: Optional[ 99 Union[QuantumInstance, BaseBackend, Backend]] = None) -> None: 100 """ 101 102 Args: 103 var_form: A parameterized circuit used as Ansatz for the wave function. 104 optimizer: A classical optimizer. 105 initial_point: An optional initial point (i.e. initial parameter values) 106 for the optimizer. If ``None`` then VQE will look to the variational form for a 107 preferred point and if not will simply compute a random one. 108 gradient: An optional gradient function or operator for optimizer. 109 expectation: The Expectation converter for taking the average value of the 110 Observable over the var_form state function. When ``None`` (the default) an 111 :class:`~qiskit.opflow.expectations.ExpectationFactory` is used to select 112 an appropriate expectation based on the operator and backend. 
When using Aer 113 qasm_simulator backend, with paulis, it is however much faster to leverage custom 114 Aer function for the computation but, although VQE performs much faster 115 with it, the outcome is ideal, with no shot noise, like using a state vector 116 simulator. If you are just looking for the quickest performance when choosing Aer 117 qasm_simulator and the lack of shot noise is not an issue then set `include_custom` 118 parameter here to ``True`` (defaults to ``False``). 119 include_custom: When `expectation` parameter here is None setting this to ``True`` will 120 allow the factory to include the custom Aer pauli expectation. 121 max_evals_grouped: Max number of evaluations performed simultaneously. Signals the 122 given optimizer that more than one set of parameters can be supplied so that 123 potentially the expectation values can be computed in parallel. Typically this is 124 possible when a finite difference gradient is used by the optimizer such that 125 multiple points to compute the gradient can be passed and if computed in parallel 126 improve overall execution time. Deprecated if a gradient operator or function is 127 given. 128 callback: a callback that can access the intermediate data during the optimization. 129 Four parameter values are passed to the callback as follows during each evaluation 130 by the optimizer for its current set of parameters as it works towards the minimum. 131 These are: the evaluation count, the optimizer parameters for the 132 variational form, the evaluated mean and the evaluated standard deviation.` 133 quantum_instance: Quantum Instance or Backend 134 """ 135 validate_min('max_evals_grouped', max_evals_grouped, 1) 136 if var_form is None: 137 var_form = RealAmplitudes() 138 139 if optimizer is None: 140 optimizer = SLSQP() 141 142 # set the initial point to the preferred parameters of the variational form 143 if initial_point is None and hasattr(var_form, 'preferred_init_points'): 144 initial_point = var_form.preferred_init_points 145 146 self._max_evals_grouped = max_evals_grouped 147 self._circuit_sampler = None # type: Optional[CircuitSampler] 148 self._expectation = expectation 149 self._user_valid_expectation = self._expectation is not None 150 self._include_custom = include_custom 151 self._expect_op = None 152 153 super().__init__(var_form=var_form, 154 optimizer=optimizer, 155 cost_fn=self._energy_evaluation, 156 gradient=gradient, 157 initial_point=initial_point, 158 quantum_instance=quantum_instance) 159 self._ret = VQEResult() 160 self._eval_time = None 161 self._optimizer.set_max_evals_grouped(max_evals_grouped) 162 self._callback = callback 163 164 self._eval_count = 0 165 logger.info(self.print_settings()) 166 167 def _try_set_expectation_value_from_factory(self, 168 operator: OperatorBase) -> None: 169 if operator is not None and self.quantum_instance is not None: 170 self._set_expectation(ExpectationFactory.build(operator=operator, 171 backend=self.quantum_instance, 172 include_custom=self._include_custom)) 173 174 def _set_expectation(self, exp: ExpectationBase) -> None: 175 self._expectation = exp 176 self._user_valid_expectation = False 177 self._expect_op = None 178 179 @VariationalAlgorithm.quantum_instance.setter 180 def quantum_instance(self, quantum_instance: Union[QuantumInstance, 181 BaseBackend, Backend]) -> None: 182 """ set quantum_instance """ 183 super(VQE, self.__class__).quantum_instance.__set__(self, quantum_instance) 184 185 self._circuit_sampler = CircuitSampler( 186 self._quantum_instance, 187 
param_qobj=is_aer_provider(self._quantum_instance.backend)) 188 189 @property 190 def expectation(self) -> ExpectationBase: 191 """ The expectation value algorithm used to construct the expectation measurement from 192 the observable. """ 193 return self._expectation 194 195 @expectation.setter 196 def expectation(self, exp: ExpectationBase) -> None: 197 self._set_expectation(exp) 198 self._user_valid_expectation = self._expectation is not None 199 200 def _check_operator_varform(self, 201 operator: OperatorBase): 202 """Check that the number of qubits of operator and variational form match.""" 203 if operator is not None and self.var_form is not None: 204 if operator.num_qubits != self.var_form.num_qubits: 205 # try to set the number of qubits on the variational form, if possible 206 try: 207 self.var_form.num_qubits = operator.num_qubits 208 self._var_form_params = sorted(self.var_form.parameters, key=lambda p: p.name) 209 except AttributeError as ex: 210 raise AlgorithmError("The number of qubits of the variational form " 211 "does not match the operator, and the variational " 212 "form does not allow setting the number of qubits " 213 " using `num_qubits`.") from ex 214 215 @VariationalAlgorithm.optimizer.setter # type: ignore 216 def optimizer(self, optimizer: Optimizer): 217 """ Sets optimizer """ 218 super(VQE, self.__class__).optimizer.__set__(self, optimizer) # type: ignore 219 if optimizer is not None: 220 optimizer.set_max_evals_grouped(self._max_evals_grouped) 221 222 @property 223 def setting(self): 224 """Prepare the setting of VQE as a string.""" 225 ret = "Algorithm: {}\n".format(self.__class__.__name__) 226 params = "" 227 for key, value in self.__dict__.items(): 228 if key[0] == "_": 229 if "initial_point" in key and value is None: 230 params += "-- {}: {}\n".format(key[1:], "Random seed") 231 else: 232 params += "-- {}: {}\n".format(key[1:], value) 233 ret += "{}".format(params) 234 return ret 235 236 def print_settings(self): 237 """ 238 Preparing the setting of VQE into a string. 239 240 Returns: 241 str: the formatted setting of VQE 242 """ 243 ret = "\n" 244 ret += "==================== Setting of {} ============================\n".format( 245 self.__class__.__name__) 246 ret += "{}".format(self.setting) 247 ret += "===============================================================\n" 248 if hasattr(self._var_form, 'setting'): 249 ret += "{}".format(self._var_form.setting) 250 elif hasattr(self._var_form, 'print_settings'): 251 ret += "{}".format(self._var_form.print_settings()) 252 elif isinstance(self._var_form, QuantumCircuit): 253 ret += "var_form is a custom circuit" 254 else: 255 ret += "var_form has not been set" 256 ret += "===============================================================\n" 257 ret += "{}".format(self._optimizer.setting) 258 ret += "===============================================================\n" 259 return ret 260 261 def construct_expectation(self, 262 parameter: Union[List[float], List[Parameter], np.ndarray], 263 operator: OperatorBase, 264 ) -> OperatorBase: 265 r""" 266 Generate the ansatz circuit and expectation value measurement, and return their 267 runnable composition. 268 269 Args: 270 parameter: Parameters for the ansatz circuit. 271 operator: Qubit operator of the Observable 272 273 Returns: 274 The Operator equalling the measurement of the ansatz :class:`StateFn` by the 275 Observable's expectation :class:`StateFn`. 276 277 Raises: 278 AlgorithmError: If no operator has been provided. 
279 """ 280 if operator is None: 281 raise AlgorithmError("The operator was never provided.") 282 283 operator = self._check_operator(operator) 284 285 if isinstance(self.var_form, QuantumCircuit): 286 param_dict = dict(zip(self._var_form_params, parameter)) # type: Dict 287 wave_function = self.var_form.assign_parameters(param_dict) 288 else: 289 wave_function = self.var_form.construct_circuit(parameter) 290 291 # Expectation was never created , try to create one 292 if self._expectation is None: 293 self._try_set_expectation_value_from_factory(operator) 294 295 # If setting the expectation failed, raise an Error: 296 if self._expectation is None: 297 raise AlgorithmError('No expectation set and could not automatically set one, please ' 298 'try explicitly setting an expectation or specify a backend so it ' 299 'can be chosen automatically.') 300 301 observable_meas = self.expectation.convert(StateFn(operator, is_measurement=True)) 302 ansatz_circuit_op = CircuitStateFn(wave_function) 303 return observable_meas.compose(ansatz_circuit_op).reduce() 304 305 def construct_circuit(self, 306 parameter: Union[List[float], List[Parameter], np.ndarray], 307 operator: OperatorBase, 308 ) -> List[QuantumCircuit]: 309 """Return the circuits used to compute the expectation value. 310 311 Args: 312 parameter: Parameters for the ansatz circuit. 313 operator: Qubit operator of the Observable 314 315 Returns: 316 A list of the circuits used to compute the expectation value. 317 """ 318 expect_op = self.construct_expectation(parameter, operator).to_circuit_op() 319 320 circuits = [] 321 322 # recursively extract circuits 323 def extract_circuits(op): 324 if isinstance(op, CircuitStateFn): 325 circuits.append(op.primitive) 326 elif isinstance(op, ListOp): 327 for op_i in op.oplist: 328 extract_circuits(op_i) 329 330 extract_circuits(expect_op) 331 332 return circuits 333 334 @classmethod 335 def supports_aux_operators(cls) -> bool: 336 return True 337 338 def _eval_aux_ops(self, 339 aux_operators: List[OperatorBase], 340 threshold: float = 1e-12) -> None: 341 # Create new CircuitSampler to avoid breaking existing one's caches. 
342 sampler = CircuitSampler(self.quantum_instance) 343 344 aux_op_meas = self.expectation.convert(StateFn(ListOp(aux_operators), 345 is_measurement=True)) 346 aux_op_expect = aux_op_meas.compose(CircuitStateFn(self.get_optimal_circuit())) 347 values = np.real(sampler.convert(aux_op_expect).eval()) 348 349 # Discard values below threshold 350 aux_op_results = (values * (np.abs(values) > threshold)) 351 # Deal with the aux_op behavior where there can be Nones or Zero qubit Paulis in the list 352 _aux_op_nones = [op is None for op in aux_operators] 353 self._ret.aux_operator_eigenvalues = \ 354 [None if is_none else [result] 355 for (is_none, result) in zip(_aux_op_nones, aux_op_results)] 356 # As this has mixed types, since it can included None, it needs to explicitly pass object 357 # data type to avoid numpy 1.19 warning message about implicit conversion being deprecated 358 self._ret.aux_operator_eigenvalues = \ 359 np.array([self._ret.aux_operator_eigenvalues], dtype=object) 360 361 def _check_operator(self, operator: OperatorBase) -> OperatorBase: 362 """ set operator """ 363 self._expect_op = None 364 self._check_operator_varform(operator) 365 # Expectation was not passed by user, try to create one 366 if not self._user_valid_expectation: 367 self._try_set_expectation_value_from_factory(operator) 368 return operator 369 370 def compute_minimum_eigenvalue( 371 self, 372 operator: OperatorBase, 373 aux_operators: Optional[List[Optional[OperatorBase]]] = None 374 ) -> MinimumEigensolverResult: 375 super().compute_minimum_eigenvalue(operator, aux_operators) 376 377 if self.quantum_instance is None: 378 raise AlgorithmError("A QuantumInstance or Backend " 379 "must be supplied to run the quantum algorithm.") 380 381 if operator is None: 382 raise AlgorithmError("The operator was never provided.") 383 384 operator = self._check_operator(operator) 385 # We need to handle the array entries being Optional i.e. having value None 386 if aux_operators: 387 zero_op = I.tensorpower(operator.num_qubits) * 0.0 388 converted = [] 389 for op in aux_operators: 390 if op is None: 391 converted.append(zero_op) 392 else: 393 converted.append(op) 394 395 # For some reason Chemistry passes aux_ops with 0 qubits and paulis sometimes. 396 aux_operators = [zero_op if op == 0 else op for op in converted] 397 else: 398 aux_operators = None 399 400 self._quantum_instance.circuit_summary = True 401 402 self._eval_count = 0 403 404 # Convert the gradient operator into a callable function that is compatible with the 405 # optimization routine. 
406 if self._gradient: 407 if isinstance(self._gradient, GradientBase): 408 self._gradient = self._gradient.gradient_wrapper( 409 ~StateFn(operator) @ StateFn(self._var_form), 410 bind_params=self._var_form_params, 411 backend=self._quantum_instance) 412 if not self._expect_op: 413 self._expect_op = self.construct_expectation(self._var_form_params, operator) 414 vqresult = self.find_minimum(initial_point=self.initial_point, 415 var_form=self.var_form, 416 cost_fn=self._energy_evaluation, 417 gradient_fn=self._gradient, 418 optimizer=self.optimizer) 419 420 self._ret = VQEResult() 421 self._ret.combine(vqresult) 422 423 if vqresult.optimizer_evals is not None and \ 424 self._eval_count >= vqresult.optimizer_evals: 425 self._eval_count = vqresult.optimizer_evals 426 self._eval_time = vqresult.optimizer_time 427 logger.info('Optimization complete in %s seconds.\nFound opt_params %s in %s evals', 428 self._eval_time, vqresult.optimal_point, self._eval_count) 429 430 self._ret.eigenvalue = vqresult.optimal_value + 0j 431 self._ret.eigenstate = self.get_optimal_vector() 432 self._ret.eigenvalue = self.get_optimal_cost() 433 if aux_operators: 434 self._eval_aux_ops(aux_operators) 435 self._ret.aux_operator_eigenvalues = self._ret.aux_operator_eigenvalues[0] 436 437 self._ret.cost_function_evals = self._eval_count 438 439 return self._ret 440 441 def _energy_evaluation(self, 442 parameters: Union[List[float], np.ndarray] 443 ) -> Union[float, List[float]]: 444 """Evaluate energy at given parameters for the variational form. 445 446 This is the objective function to be passed to the optimizer that is used for evaluation. 447 448 Args: 449 parameters: The parameters for the variational form. 450 451 Returns: 452 Energy of the hamiltonian of each parameter. 453 454 455 Raises: 456 RuntimeError: If the variational form has no parameters. 
457 """ 458 num_parameters = self.var_form.num_parameters 459 if self._var_form.num_parameters == 0: 460 raise RuntimeError('The var_form cannot have 0 parameters.') 461 462 parameter_sets = np.reshape(parameters, (-1, num_parameters)) 463 # Create dict associating each parameter with the lists of parameterization values for it 464 param_bindings = dict(zip(self._var_form_params, 465 parameter_sets.transpose().tolist())) # type: Dict 466 467 start_time = time() 468 sampled_expect_op = self._circuit_sampler.convert(self._expect_op, params=param_bindings) 469 means = np.real(sampled_expect_op.eval()) 470 471 if self._callback is not None: 472 variance = np.real(self._expectation.compute_variance(sampled_expect_op)) 473 estimator_error = np.sqrt(variance / self.quantum_instance.run_config.shots) 474 for i, param_set in enumerate(parameter_sets): 475 self._eval_count += 1 476 self._callback(self._eval_count, param_set, means[i], estimator_error[i]) 477 else: 478 self._eval_count += len(means) 479 480 end_time = time() 481 logger.info('Energy evaluation returned %s - %.5f (ms), eval count: %s', 482 means, (end_time - start_time) * 1000, self._eval_count) 483 484 return means if len(means) > 1 else means[0] 485 486 def get_optimal_cost(self) -> float: 487 """Get the minimal cost or energy found by the VQE.""" 488 if self._ret.optimal_point is None: 489 raise AlgorithmError("Cannot return optimal cost before running the " 490 "algorithm to find optimal params.") 491 return self._ret.optimal_value 492 493 def get_optimal_circuit(self) -> QuantumCircuit: 494 """Get the circuit with the optimal parameters.""" 495 if self._ret.optimal_point is None: 496 raise AlgorithmError("Cannot find optimal circuit before running the " 497 "algorithm to find optimal params.") 498 if isinstance(self.var_form, VariationalForm): 499 return self._var_form.construct_circuit(self._ret.optimal_point) 500 return self.var_form.assign_parameters(self._ret.optimal_parameters) 501 502 def get_optimal_vector(self) -> Union[List[float], Dict[str, int]]: 503 """Get the simulation outcome of the optimal circuit. 
""" 504 # pylint: disable=import-outside-toplevel 505 from qiskit.utils.run_circuits import find_regs_by_name 506 507 if self._ret.optimal_point is None: 508 raise AlgorithmError("Cannot find optimal vector before running the " 509 "algorithm to find optimal params.") 510 qc = self.get_optimal_circuit() 511 min_vector = {} 512 if self._quantum_instance.is_statevector: 513 ret = self._quantum_instance.execute(qc) 514 min_vector = ret.get_statevector(qc) 515 else: 516 c = ClassicalRegister(qc.width(), name='c') 517 q = find_regs_by_name(qc, 'q') 518 qc.add_register(c) 519 qc.barrier(q) 520 qc.measure(q, c) 521 ret = self._quantum_instance.execute(qc) 522 counts = ret.get_counts(qc) 523 # normalize, just as done in CircuitSampler.sample_circuits 524 shots = self._quantum_instance._run_config.shots 525 min_vector = {b: (v / shots) ** 0.5 for (b, v) in counts.items()} 526 return min_vector 527 528 @property 529 def optimal_params(self) -> List[float]: 530 """The optimal parameters for the variational form.""" 531 if self._ret.optimal_point is None: 532 raise AlgorithmError("Cannot find optimal params before running the algorithm.") 533 return self._ret.optimal_point 534 535 536 class VQEResult(VariationalResult, MinimumEigensolverResult): 537 """ VQE Result.""" 538 539 def __init__(self) -> None: 540 super().__init__() 541 self._cost_function_evals = None 542 543 @property 544 def cost_function_evals(self) -> Optional[int]: 545 """ Returns number of cost optimizer evaluations """ 546 return self._cost_function_evals 547 548 @cost_function_evals.setter 549 def cost_function_evals(self, value: int) -> None: 550 """ Sets number of cost function evaluations """ 551 self._cost_function_evals = value 552 [end of qiskit/algorithms/minimum_eigen_solvers/vqe.py] [start of qiskit/opflow/primitive_ops/pauli_sum_op.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2020. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 12 13 """ PauliSumOp Class """ 14 15 import logging 16 from typing import Dict, List, Optional, Set, Tuple, Union, cast 17 18 import numpy as np 19 from scipy.sparse import spmatrix 20 21 from qiskit.circuit import Instruction, ParameterExpression 22 from qiskit.quantum_info import Pauli, SparsePauliOp 23 from ..exceptions import OpflowError 24 from ..list_ops.summed_op import SummedOp 25 from ..list_ops.tensored_op import TensoredOp 26 from ..operator_base import OperatorBase 27 from .primitive_op import PrimitiveOp 28 29 logger = logging.getLogger(__name__) 30 31 32 class PauliSumOp(PrimitiveOp): 33 """Class for Operators backend by Terra's ``SparsePauliOp`` class.""" 34 35 def __init__( 36 self, 37 primitive: SparsePauliOp, 38 coeff: Union[int, float, complex, ParameterExpression] = 1.0, 39 ) -> None: 40 """ 41 Args: 42 primitive: The SparsePauliOp which defines the behavior of the underlying function. 43 coeff: A coefficient multiplying the primitive. 44 45 Raises: 46 TypeError: invalid parameters. 
47 """ 48 if not isinstance(primitive, SparsePauliOp): 49 raise TypeError( 50 f"PauliSumOp can only be instantiated with SparsePauliOp, not {type(primitive)}" 51 ) 52 53 super().__init__(primitive, coeff=coeff) 54 55 def primitive_strings(self) -> Set[str]: 56 return {"SparsePauliOp"} 57 58 @property 59 def num_qubits(self) -> int: 60 return self.primitive.num_qubits # type: ignore 61 62 def add(self, other: OperatorBase) -> OperatorBase: 63 if not self.num_qubits == other.num_qubits: 64 raise ValueError( 65 f"Sum of operators with different numbers of qubits, {self.num_qubits} and " 66 f"{other.num_qubits}, is not well defined" 67 ) 68 69 if isinstance(other, PauliSumOp): 70 return PauliSumOp( 71 self.coeff * self.primitive + other.coeff * other.primitive, coeff=1 # type: ignore 72 ) 73 74 from .pauli_op import PauliOp 75 76 if isinstance(other, PauliOp): 77 return PauliSumOp( 78 self.coeff * self.primitive # type: ignore 79 + other.coeff * SparsePauliOp(other.primitive) 80 ) 81 82 return SummedOp([self, other]) 83 84 def mul(self, scalar: Union[int, float, complex, ParameterExpression]) -> OperatorBase: 85 if isinstance(scalar, (int, float, complex)) and scalar != 0: 86 return PauliSumOp(scalar * self.primitive, coeff=self.coeff) # type: ignore 87 88 return super().mul(scalar) 89 90 def adjoint(self) -> OperatorBase: 91 return PauliSumOp( 92 self.primitive.conjugate(), coeff=self.coeff.conjugate() # type:ignore 93 ) 94 95 def equals(self, other: OperatorBase) -> bool: 96 self_reduced, other_reduced = self.reduce(), other.reduce() 97 if not isinstance(other_reduced, PauliSumOp): 98 return False 99 100 if isinstance(self_reduced.coeff, ParameterExpression) or isinstance( 101 other_reduced.coeff, ParameterExpression 102 ): 103 return ( 104 self_reduced.coeff == other_reduced.coeff 105 and self_reduced.primitive == other_reduced.primitive # type:ignore 106 ) 107 return ( 108 len(self_reduced) == len(other_reduced) 109 and self_reduced.primitive == other_reduced.primitive 110 ) 111 112 def _expand_dim(self, num_qubits: int) -> "PauliSumOp": 113 return PauliSumOp( 114 self.primitive.tensor( # type:ignore 115 SparsePauliOp(Pauli("I" * num_qubits)) 116 ), 117 coeff=self.coeff, 118 ) 119 120 def tensor(self, other: OperatorBase) -> OperatorBase: 121 if isinstance(other, PauliSumOp): 122 return PauliSumOp( 123 self.primitive.tensor(other.primitive), # type:ignore 124 coeff=self.coeff * other.coeff, 125 ) 126 127 return TensoredOp([self, other]) 128 129 def permute(self, permutation: List[int]) -> "PauliSumOp": 130 """Permutes the sequence of ``PauliSumOp``. 131 132 Args: 133 permutation: A list defining where each Pauli should be permuted. The Pauli at index 134 j of the primitive should be permuted to position permutation[j]. 135 136 Returns: 137 A new PauliSumOp representing the permuted operator. For operator (X ^ Y ^ Z) and 138 indices=[1,2,4], it returns (X ^ I ^ Y ^ Z ^ I). 139 140 Raises: 141 OpflowError: if indices do not define a new index for each qubit. 
142 """ 143 if len(permutation) != self.num_qubits: 144 raise OpflowError("List of indices to permute must have the " 145 "same size as Pauli Operator") 146 length = max(permutation) + 1 147 spop = self.primitive.tensor( # type:ignore 148 SparsePauliOp(Pauli("I" * (length - self.num_qubits))) 149 ) 150 permutation = [i for i in range(length) if i not in permutation] + permutation 151 permutation = np.arange(length)[np.argsort(permutation)] 152 permutation = np.hstack([permutation, permutation + length]) # type: ignore 153 spop.table.array = spop.table.array[:, permutation] 154 return PauliSumOp(spop, self.coeff) 155 156 def compose( 157 self, 158 other: OperatorBase, 159 permutation: Optional[List[int]] = None, 160 front: bool = False, 161 ) -> OperatorBase: 162 163 new_self, other = self._expand_shorter_operator_and_permute(other, permutation) 164 new_self = cast(PauliSumOp, new_self) 165 166 if front: 167 return other.compose(new_self) 168 # If self is identity, just return other. 169 if not np.any(new_self.primitive.table.array): # type: ignore 170 return other * new_self.coeff * sum(new_self.coeffs) # type: ignore 171 172 # Both PauliSumOps 173 if isinstance(other, PauliSumOp): 174 return PauliSumOp( 175 new_self.primitive * other.primitive, # type:ignore 176 coeff=new_self.coeff * other.coeff, 177 ) 178 # TODO: implement compose with PauliOp 179 180 # pylint: disable=cyclic-import,import-outside-toplevel 181 from ..state_fns.circuit_state_fn import CircuitStateFn 182 from .circuit_op import CircuitOp 183 184 if isinstance(other, (CircuitOp, CircuitStateFn)): 185 return new_self.to_pauli_op().to_circuit_op().compose(other) # type: ignore 186 187 return super(PauliSumOp, new_self).compose(other) 188 189 def to_matrix(self, massive: bool = False) -> np.ndarray: 190 OperatorBase._check_massive("to_matrix", True, self.num_qubits, massive) 191 if isinstance(self.coeff, ParameterExpression): 192 return (self.primitive.to_matrix(sparse=True)).toarray() * self.coeff # type: ignore 193 return (self.primitive.to_matrix(sparse=True) * self.coeff).toarray() # type: ignore 194 195 def __str__(self) -> str: 196 def format_sign(x): 197 return x.real if np.isreal(x) else x 198 199 def format_number(x): 200 x = format_sign(x) 201 if isinstance(x, (int, float)) and x < 0: 202 return f"- {-x}" 203 return f"+ {x}" 204 205 indent = "" if self.coeff == 1 else " " 206 prim_list = self.primitive.to_list() # type: ignore 207 if prim_list: 208 first = prim_list[0] 209 if isinstance(first[1], (int, float)) and first[1] < 0: 210 main_string = indent + f"- {-first[1].real} * {first[0]}" 211 else: 212 main_string = indent + f"{format_sign(first[1])} * {first[0]}" 213 214 main_string += "".join([f"\n{indent}{format_number(c)} * {p}" for p, c in prim_list[1:]]) 215 return f"{main_string}" if self.coeff == 1 else f"{self.coeff} * (\n{main_string}\n)" 216 217 def eval( 218 self, 219 front: Optional[Union[str, Dict[str, complex], np.ndarray, OperatorBase]] = None, 220 ) -> Union[OperatorBase, float, complex]: 221 if front is None: 222 return self.to_matrix_op() 223 224 # pylint: disable=import-outside-toplevel,cyclic-import 225 from ..list_ops.list_op import ListOp 226 from ..state_fns.circuit_state_fn import CircuitStateFn 227 from ..state_fns.dict_state_fn import DictStateFn 228 from ..state_fns.state_fn import StateFn 229 from .circuit_op import CircuitOp 230 from .pauli_op import PauliOp 231 232 # For now, always do this. If it's not performant, we can be more granular. 
233 if not isinstance(front, OperatorBase): 234 front = StateFn(front, is_measurement=False) 235 236 if isinstance(front, ListOp) and front.distributive: 237 return front.combo_fn( 238 [self.eval(front.coeff * front_elem) for front_elem in front.oplist] # type: ignore 239 ) 240 241 else: 242 243 if self.num_qubits != front.num_qubits: 244 raise ValueError( 245 "eval does not support operands with differing numbers of qubits, " 246 "{} and {}, respectively.".format(self.num_qubits, front.num_qubits) 247 ) 248 249 if isinstance(front, DictStateFn): 250 251 new_dict = {} # type: Dict 252 corrected_x_bits = self.primitive.table.X[::-1] # type: ignore 253 corrected_z_bits = self.primitive.table.Z[::-1] # type: ignore 254 coeffs = self.primitive.coeffs # type:ignore 255 256 for bstr, v in front.primitive.items(): 257 bitstr = np.asarray(list(bstr)).astype(np.int).astype(np.bool) 258 new_b_str = np.logical_xor(bitstr, corrected_x_bits) 259 new_str = ["".join(map(str, 1 * bs)) for bs in new_b_str] 260 z_factor = np.product(1 - 2 * np.logical_and(bitstr, corrected_z_bits), axis=1) 261 y_factor = np.product( 262 np.sqrt(1 - 2 * np.logical_and(corrected_x_bits, corrected_z_bits) + 0j), 263 axis=1, 264 ) 265 for i, n_str in enumerate(new_str): 266 new_dict[n_str] = ( 267 v * z_factor[i] * y_factor[i] * coeffs[i] 268 ) + new_dict.get(n_str, 0) 269 return DictStateFn(new_dict, coeff=self.coeff * front.coeff) 270 271 elif isinstance(front, StateFn) and front.is_measurement: 272 raise ValueError("Operator composed with a measurement is undefined.") 273 274 # Composable types with PauliOp 275 elif isinstance(front, (PauliSumOp, PauliOp, CircuitOp, CircuitStateFn)): 276 return self.compose(front).eval() # type: ignore 277 278 # Covers VectorStateFn and OperatorStateFn 279 return self.to_matrix_op().eval(front.to_matrix_op()) # type: ignore 280 281 def exp_i(self) -> OperatorBase: 282 """ Return a ``CircuitOp`` equivalent to e^-iH for this operator H. """ 283 # TODO: optimize for some special cases 284 from ..evolutions.evolved_op import EvolvedOp 285 286 return EvolvedOp(self) 287 288 def to_instruction(self) -> Instruction: 289 return self.to_matrix_op().to_circuit().to_instruction() # type: ignore 290 291 def to_pauli_op(self, massive: bool = False) -> OperatorBase: 292 from .pauli_op import PauliOp 293 294 def to_real(x): 295 return x.real if np.isreal(x) else x 296 297 def to_native(x): 298 return x.item() if isinstance(x, np.generic) else x 299 300 if len(self.primitive) == 1: 301 return PauliOp( 302 Pauli((self.primitive.table.Z[0], self.primitive.table.X[0])), # type: ignore 303 to_native(to_real(self.primitive.coeffs[0])) * self.coeff, # type: ignore 304 ) 305 return SummedOp( 306 [ 307 PauliOp( 308 Pauli((s.table.Z[0], s.table.X[0])), 309 to_native(to_real(s.coeffs[0])), 310 ) 311 for s in self.primitive 312 ], 313 coeff=self.coeff, 314 ) 315 316 def __getitem__(self, offset: Union[int, slice]) -> "PauliSumOp": 317 """Allows array-indexing style access to the ``PauliSumOp``. 318 319 Args: 320 offset: The index of ``PauliSumOp``. 321 322 Returns: 323 The ``PauliSumOp`` at index ``offset``, 324 """ 325 return PauliSumOp(self.primitive[offset], self.coeff) 326 327 def __len__(self) -> int: 328 """Length of ``SparsePauliOp``. 329 330 Returns: 331 An int equal to the length of SparsePauliOp. 
332 """ 333 return len(self.primitive) 334 335 # pylint: disable=arguments-differ 336 def reduce(self, atol: Optional[float] = None, rtol: Optional[float] = None) -> "PauliSumOp": 337 """Simplify the primitive ``SparsePauliOp``. 338 339 Args: 340 atol: Absolute tolerance for checking if coefficients are zero (Default: 1e-8). 341 rtol: Relative tolerance for checking if coefficients are zero (Default: 1e-5). 342 343 Returns: 344 The simplified ``PauliSumOp``. 345 """ 346 if isinstance(self.coeff, (int, float, complex)): 347 primitive = self.coeff * self.primitive # type: ignore 348 return PauliSumOp(primitive.simplify(atol=atol, rtol=rtol)) # type: ignore 349 return PauliSumOp(self.primitive.simplify(atol=atol, rtol=rtol), self.coeff) # type: ignore 350 351 def to_spmatrix(self) -> spmatrix: 352 """Returns SciPy sparse matrix representation of the ``PauliSumOp``. 353 354 Returns: 355 CSR sparse matrix representation of the ``PauliSumOp``. 356 357 Raises: 358 ValueError: invalid parameters. 359 """ 360 return self.primitive.to_matrix(sparse=True) * self.coeff # type: ignore 361 362 @classmethod 363 def from_list( 364 cls, 365 pauli_list: List[Tuple[str, Union[int, float, complex]]], 366 coeff: Union[int, float, complex, ParameterExpression] = 1.0, 367 ) -> "PauliSumOp": 368 """Construct from a pauli_list with the form [(pauli_str, coeffs)] 369 370 Args: 371 pauli_list: A list of Tuple of pauli_str and coefficient. 372 coeff: A coefficient multiplying the primitive. 373 374 Returns: 375 The PauliSumOp constructed from the pauli_list. 376 """ 377 return cls(SparsePauliOp.from_list(pauli_list), coeff=coeff) 378 [end of qiskit/opflow/primitive_ops/pauli_sum_op.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Qiskit/qiskit
f54222e44a88dd0732779b561b4a863ea799a660
implement iterating/collecting PauliSumOp coefficients It would be useful to be able to iterate over, or return a list of, the coefficients in PauliSumOp. At present, one has to dig into the details to construct the coefficients for each term. In particular, it requires multiplying two numbers for each coefficient. There are use cases, for instance when bounding the eigenvalues of the operators. For example, this gives a bound on the eigenvalues: ```python float(sum(abs(pauli_sum.primitive.coeffs)) * abs(pauli_sum.coeff)) ``` Abstracting access would be cleaner and resistant to implementation changes. Like this, if we want to return a numpy array ```python float(sum(abs(pauli_sum.coeffs))) ``` or this, if we return a list ```python sum(abs(c) for c in pauli_sum.coeffs) ``` This should be quite easy to implement. https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/opflow/primitive_ops/pauli_sum_op.py
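To make the eigenvalue-bound use case above concrete, here is a minimal sketch contrasting today's two-layer access with the flat accessor the issue asks for. The `PauliSumOp.from_list`, `primitive.coeffs`, and `coeff` attributes appear in the code listing above; the flat `pauli_sum.coeffs` property in the last lines is the *proposed* API, not something the class provides at the time of the report.

```python
from qiskit.opflow.primitive_ops import PauliSumOp

# Two Pauli terms plus an extra overall factor of 3 kept in the outer coeff.
pauli_sum = PauliSumOp.from_list([("XZ", 0.5), ("IY", -2.0)], coeff=3)

# Current API: the bound needs both coefficient layers combined by hand.
bound_now = float(sum(abs(pauli_sum.primitive.coeffs)) * abs(pauli_sum.coeff))

# Requested API (hypothetical here): a flat accessor that folds in the outer coeff.
bound_requested = float(sum(abs(pauli_sum.coeffs)))

assert bound_now == bound_requested == 7.5
```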
Hello, I'm pretty new to quantum, but I wanted to start contributing. If this is the patch you were looking for, I can create a pull request. ``` diff --git a/qiskit/opflow/primitive_ops/pauli_sum_op.py b/qiskit/opflow/primitive_ops/pauli_sum_op.py index 0b5c0526..b4ee9399 100644 --- a/qiskit/opflow/primitive_ops/pauli_sum_op.py +++ b/qiskit/opflow/primitive_ops/pauli_sum_op.py @@ -58,7 +58,13 @@ class PauliSumOp(PrimitiveOp): @property def num_qubits(self) -> int: return self.primitive.num_qubits # type: ignore - + + # issue 5547 start + @property + def coeffs(self): + return self.primitive.coeffs + # issue 5547 end + def add(self, other: OperatorBase) -> OperatorBase: if not self.num_qubits == other.num_qubits: raise ValueError( ``` Here's how it is working now: ``` a_table = PauliTable.from_labels(['X']) pauli_sum = PauliSumOp(SparsePauliOp(a_table),1) print(pauli_sum.coeffs) print(type(pauli_sum.coeffs)) [1.+0.j] <class 'numpy.ndarray'> ``` Paolo Thanks for jumping in. I don't think that patch gives the correct result in all cases. There are two "layers" of coefficients: `pauli_sum.primitive.coeffs` and a single number `pauli_sum.coeff`. A list of the full coefficient might be generated like this: `[c * pauli_sum.coeff for c in pauli_sum.primitive.coeffs]` Thank you for the additional info. Sorry I did not get it in the first place. So how about this? ``` diff --git a/qiskit/opflow/primitive_ops/pauli_sum_op.py b/qiskit/opflow/primitive_ops/pauli_sum_op.py index 0b5c0526..32538152 100644 --- a/qiskit/opflow/primitive_ops/pauli_sum_op.py +++ b/qiskit/opflow/primitive_ops/pauli_sum_op.py @@ -59,6 +59,12 @@ class PauliSumOp(PrimitiveOp): def num_qubits(self) -> int: return self.primitive.num_qubits # type: ignore + # issue 5547 start + @property + def coeffs(self): + return self.coeff * self.primitive.coeffs + # issue 5547 end + def add(self, other: OperatorBase) -> OperatorBase: if not self.num_qubits == other.num_qubits: raise ValueError( ``` that would return something like ``` from qiskit.quantum_info import SparsePauliOp from qiskit.opflow.primitive_ops import PauliSumOp from qiskit.quantum_info.operators import PauliTable a_table = PauliTable.from_labels(['X','I','Y']) pauli_sum = PauliSumOp(SparsePauliOp(a_table),3) print(pauli_sum.coeffs) print(type(pauli_sum.coeffs)) [3.+0.j 3.+0.j 3.+0.j] <class 'numpy.ndarray'> ``` Thanks, this looks good. Before making a PR, it would be a good idea to wait a bit to see if someone comments. There might be a reason to return a list rather than an array. All right. Thanks a lot. I'll just leave this here for others to jump in and provide review/direction. 
```python diff --git a/qiskit/opflow/primitive_ops/pauli_sum_op.py b/qiskit/opflow/primitive_ops/pauli_sum_op.py index 0b5c0526..89ec23ec 100644 --- a/qiskit/opflow/primitive_ops/pauli_sum_op.py +++ b/qiskit/opflow/primitive_ops/pauli_sum_op.py @@ -59,6 +59,16 @@ class PauliSumOp(PrimitiveOp): def num_qubits(self) -> int: return self.primitive.num_qubits # type: ignore + # issue 5547 start + @property + def coeffs(self): + return self.coeff * self.primitive.coeffs + + @property + def coeffslist(self): + return (self.coeff * self.primitive.coeffs).tolist() + # issue 5547 end + def add(self, other: OperatorBase) -> OperatorBase: if not self.num_qubits == other.num_qubits: raise ValueError( ``` ```python from qiskit.quantum_info import SparsePauliOp from qiskit.opflow.primitive_ops import PauliSumOp from qiskit.quantum_info.operators import PauliTable a_table = PauliTable.from_labels(['X','I','Y']) pauli_sum = PauliSumOp(SparsePauliOp(a_table),3) print(pauli_sum.coeffs) print(type(pauli_sum.coeffs)) print(pauli_sum.coeffslist) print(type(pauli_sum.coeffslist)) ``` ``` [3.+0.j 3.+0.j 3.+0.j] <class 'numpy.ndarray'> [(3+0j), (3+0j), (3+0j)] <class 'list'> ``` Thanks. Nice. I provide some discussion points. 1. Are the internal types of lists python Native numeric type? or numpy.complex128? (Which is better?) 2. How about returning an iterator like https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/quantum_info/operators/symplectic/sparse_pauli_op.py#L549-L575. 1. Items in a list can be anything. It depends on what produced them. Note that `numpy.ndarray.tolist` converts the numpy.complex128, etc to builtin Python `complex`. My guess is that it is kind of standard to convert to builtin types if you are converting to (or building) a list. 2. I believe an iterator is used there in order to be memory efficient, since each matrix may be very large. Not to say an iterator would be wrong here, but I don't think the justification would be the same. Hi thanks for the comments. 1. we could avoid the conversion and return a list of `numpy.complex128` with ```python return list(self.coeff * self.primitive.coeffs) ``` instead of calling the `tolist()` 2. I will try to implement that iterator as per the suggested code and will post it here. I might need to come back for help on that. I don't have any experience with this matter, so forgive me, but I'd keep it simple and go with the basic implementation of returning the coeffs with a numpy array, since that is the type that is returned by the primitive object, and leave any typecasting/conversion to the caller. 
So, here's what I have so far: ```python diff --git a/qiskit/opflow/primitive_ops/pauli_sum_op.py b/qiskit/opflow/primitive_ops/pauli_sum_op.py index 0b5c0526..6bf43fd5 100644 --- a/qiskit/opflow/primitive_ops/pauli_sum_op.py +++ b/qiskit/opflow/primitive_ops/pauli_sum_op.py @@ -20,6 +20,10 @@ from scipy.sparse import spmatrix from qiskit.circuit import Instruction, ParameterExpression from qiskit.quantum_info import Pauli, SparsePauliOp +# issue 5547 start +from qiskit.quantum_info.operators.symplectic.pauli_table import PauliTable +from qiskit.quantum_info.operators.custom_iterator import CustomIterator +# issue 5547 end from ..exceptions import OpflowError from ..list_ops.summed_op import SummedOp from ..list_ops.tensored_op import TensoredOp @@ -59,6 +63,47 @@ class PauliSumOp(PrimitiveOp): def num_qubits(self) -> int: return self.primitive.num_qubits # type: ignore + # issue 5547 start + @property + def coeffs(self): + return self.coeff * self.primitive.coeffs + + @property + def coeffslist(self): + # return (self.coeff * self.primitive.coeffs).tolist() + return list(self.coeff * self.primitive.coeffs) + + def matrix_iter(self, sparse=False): + """Return a matrix representation iterator. + + This is a lazy iterator that converts each term in the PauliSumOp + into a matrix as it is used. To convert to a single matrix use the + :meth:`to_matrix` method. + + Args: + sparse (bool): optionally return sparse CSR matrices if True, + otherwise return Numpy array matrices + (Default: False) + + Returns: + MatrixIterator: matrix iterator object for the PauliTable. + """ + class MatrixIterator(CustomIterator): + """Matrix representation iteration and item access.""" + def __repr__(self): + return "<PauliSumOp_matrix_iterator at {}>".format(hex(id(self))) + + + def __getitem__(self, key): + sumopcoeff = self.obj.coeff * self.obj.primitive.coeffs[key] + mat = PauliTable._to_matrix(self.obj.primitive.table.array[key], + sparse=sparse) + return sumopcoeff * mat + + + return MatrixIterator(self) + # issue 5547 end + def add(self, other: OperatorBase) -> OperatorBase: if not self.num_qubits == other.num_qubits: raise ValueError( ``` ```python from qiskit.quantum_info import SparsePauliOp from qiskit.opflow.primitive_ops import PauliSumOp from qiskit.quantum_info.operators import PauliTable a_table = PauliTable.from_labels(['X','I','Y']) pauli_sum = PauliSumOp(SparsePauliOp(a_table),3) print(pauli_sum.coeffs) print(type(pauli_sum.coeffs)) print(pauli_sum.coeffslist) print(type(pauli_sum.coeffslist[0])) the_iterator = pauli_sum.matrix_iter() print(the_iterator) while True: try: print(next(the_iterator)) except StopIteration: break ``` ``` [3.+0.j 3.+0.j 3.+0.j] <class 'numpy.ndarray'> [(3+0j), (3+0j), (3+0j)] <class 'numpy.complex128'> <PauliSumOp_matrix_iterator at 0x7fef37701ee0> [[0.+0.j 3.+0.j] [3.+0.j 0.+0.j]] [[3.+0.j 0.+0.j] [0.+0.j 3.+0.j]] [[0.+0.j 0.-3.j] [0.+3.j 0.+0.j]] ```
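One follow-up note on consuming the iterator sketched in the last comment: if `matrix_iter` is modeled on the existing `SparsePauliOp.matrix_iter` (both are backed by `CustomIterator`, imported in the diff above), it should also support a plain `for` loop, which reads more naturally than the `while True` / `next()` pattern shown. A small hypothetical usage sketch under that assumption:

```python
from qiskit.opflow.primitive_ops import PauliSumOp

pauli_sum = PauliSumOp.from_list([("X", 1.0), ("I", 1.0), ("Y", 1.0)], coeff=3)

# Each step lazily builds one 2x2 term matrix, already scaled by
# coeff * primitive.coeffs[i]; nothing is materialized up front.
for term_matrix in pauli_sum.matrix_iter():  # hypothetical API from the sketch above
    print(term_matrix)
```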
2020-12-25T11:41:26Z
<patch> diff --git a/qiskit/opflow/primitive_ops/pauli_sum_op.py b/qiskit/opflow/primitive_ops/pauli_sum_op.py --- a/qiskit/opflow/primitive_ops/pauli_sum_op.py +++ b/qiskit/opflow/primitive_ops/pauli_sum_op.py @@ -20,6 +20,8 @@ from qiskit.circuit import Instruction, ParameterExpression from qiskit.quantum_info import Pauli, SparsePauliOp +from qiskit.quantum_info.operators.symplectic.pauli_table import PauliTable +from qiskit.quantum_info.operators.custom_iterator import CustomIterator from ..exceptions import OpflowError from ..list_ops.summed_op import SummedOp from ..list_ops.tensored_op import TensoredOp @@ -59,6 +61,39 @@ def primitive_strings(self) -> Set[str]: def num_qubits(self) -> int: return self.primitive.num_qubits # type: ignore + @property + def coeffs(self): + """Return the Pauli coefficients.""" + return self.coeff * self.primitive.coeffs + + def matrix_iter(self, sparse=False): + """Return a matrix representation iterator. + + This is a lazy iterator that converts each term in the PauliSumOp + into a matrix as it is used. To convert to a single matrix use the + :meth:`to_matrix` method. + + Args: + sparse (bool): optionally return sparse CSR matrices if True, + otherwise return Numpy array matrices + (Default: False) + + Returns: + MatrixIterator: matrix iterator object for the PauliTable. + """ + class MatrixIterator(CustomIterator): + """Matrix representation iteration and item access.""" + def __repr__(self): + return "<PauliSumOp_matrix_iterator at {}>".format(hex(id(self))) + + def __getitem__(self, key): + sumopcoeff = self.obj.coeff * self.obj.primitive.coeffs[key] + mat = PauliTable._to_matrix(self.obj.primitive.table.array[key], + sparse=sparse) + return sumopcoeff * mat + + return MatrixIterator(self) + def add(self, other: OperatorBase) -> OperatorBase: if not self.num_qubits == other.num_qubits: raise ValueError( </patch>
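A short usage sketch of the two additions in the patch above, written against that diff rather than a released API (so treat the exact behavior as an assumption): `coeffs` should fold the outer coefficient into the per-term `SparsePauliOp` coefficients, and summing the lazily produced term matrices from `matrix_iter` should reproduce the dense matrix of the whole operator.

```python
import numpy as np
from qiskit.opflow.primitive_ops import PauliSumOp

op = PauliSumOp.from_list([("XZ", 0.5), ("IY", -2.0)], coeff=3)

# Flat coefficients: outer coeff folded into the per-term coefficients.
np.testing.assert_allclose(op.coeffs, [1.5, -6.0])

# Summing the per-term matrices reproduces the full dense matrix
# (assuming the CustomIterator base supports plain iteration, as it does
# for SparsePauliOp.matrix_iter).
total = sum(op.matrix_iter())
np.testing.assert_allclose(total, op.to_matrix())
```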
[]
[]
conda__conda-5273
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> conda env export under python2 is ug ``` $ python2 -m conda_env export -p /conda name: null channels: - !!python/unicode 'file:///Users/kfranz/.conda/conda-bld' - !!python/unicode 'file:///conda/conda-bld' - !!python/unicode 'bkreider' - !!python/unicode 'conda-canary' - !!python/unicode 'conda-forge' - !!python/unicode 'defaults' dependencies: - !!python/unicode 'wget=1.15=2' - !!python/unicode 'conda=4.3.0=py27_0' - !!python/unicode 'conda-env=2.6.0=0' - !!python/unicode 'filelock=2.0.6=py27_0' - !!python/unicode 'boltons=16.3.1=py27_0' - !!python/unicode 'ca-certificates=2016.8.31=0' - !!python/unicode 'certifi=2016.8.31=py27_0' - !!python/unicode 'functools32=3.2.3.2=py27_1' ... ``` </issue> <code> [start of README.rst] 1 .. NOTE: This file serves both as the README on GitHub and the index.html for 2 conda.pydata.org. If you update this file, be sure to cd to the web 3 directory and run ``make html; make live`` 4 5 .. image:: https://s3.amazonaws.com/conda-dev/conda_logo.svg 6 :alt: Conda Logo 7 8 ---------------------------------------- 9 10 .. image:: https://img.shields.io/travis/conda/conda/4.3.x.svg?maxAge=900&label=Linux%20%26%20MacOS 11 :target: https://travis-ci.org/conda/conda 12 :alt: Linux & MacOS tests (Travis) 13 14 .. image:: https://img.shields.io/appveyor/ci/ContinuumAnalyticsFOSS/conda/4.3.x.svg?maxAge=900&label=Windows 15 :target: https://ci.appveyor.com/project/ContinuumAnalyticsFOSS/conda 16 :alt: Windows tests (Appveyor) 17 18 .. image:: https://img.shields.io/codecov/c/github/conda/conda/4.3.x.svg?label=coverage 19 :alt: Codecov Status 20 :target: https://codecov.io/gh/conda/conda/branch/4.3.x 21 22 .. image:: https://img.shields.io/github/release/conda/conda.svg 23 :alt: latest release version 24 :target: https://github.com/conda/conda/releases 25 26 | 27 28 .. image:: https://s3.amazonaws.com/conda-dev/conda-announce-signup-button.svg 29 :alt: Join the Conda Announcment List 30 :target: http://conda.pydata.org/docs/announcements.html 31 32 | 33 34 Conda is a cross-platform, language-agnostic binary package manager. It is the 35 package manager used by `Anaconda 36 <http://docs.continuum.io/anaconda/index.html>`_ installations, but it may be 37 used for other systems as well. Conda makes environments first-class 38 citizens, making it easy to create independent environments even for C 39 libraries. Conda is written entirely in Python, and is BSD licensed open 40 source. 41 42 Conda is enhanced by organizations, tools, and repositories created and managed by 43 the amazing members of the conda community. Some of them can be found 44 `here <https://github.com/conda/conda/wiki/Conda-Community>`_. 45 46 47 Installation 48 ------------ 49 50 Conda is a part of the `Anaconda distribution <https://store.continuum.io/cshop/anaconda/>`_. You can also download a 51 minimal installation that only includes conda and its dependencies, called 52 `Miniconda <http://conda.pydata.org/miniconda.html>`_. 53 54 55 Getting Started 56 --------------- 57 58 If you install Anaconda, you will already have hundreds of packages 59 installed. You can see what packages are installed by running 60 61 .. code-block:: bash 62 63 $ conda list 64 65 to see all the packages that are available, use 66 67 .. code-block:: bash 68 69 $ conda search 70 71 and to install a package, use 72 73 .. 
code-block:: bash 74 75 $ conda install <package-name> 76 77 78 The real power of conda comes from its ability to manage environments. In 79 conda, an environment can be thought of as a completely separate installation. 80 Conda installs packages into environments efficiently using `hard links 81 <http://en.wikipedia.org/wiki/Hard_links>`_ by default when it is possible, so 82 environments are space efficient, and take seconds to create. 83 84 The default environment, which ``conda`` itself is installed into is called 85 ``root``. To create another environment, use the ``conda create`` 86 command. For instance, to create an environment with the IPython notebook and 87 NumPy 1.6, which is older than the version that comes with Anaconda by 88 default, you would run 89 90 .. code-block:: bash 91 92 $ conda create -n numpy16 ipython-notebook numpy=1.6 93 94 This creates an environment called ``numpy16`` with the latest version of 95 the IPython notebook, NumPy 1.6, and their dependencies. 96 97 We can now activate this environment, use 98 99 .. code-block:: bash 100 101 # On Linux and Mac OS X 102 $ source activate numpy16 103 104 # On Windows 105 > activate numpy16 106 107 This puts the bin directory of the ``numpy16`` environment in the front of the 108 ``PATH``, and sets it as the default environment for all subsequent conda commands. 109 110 To go back to the root environment, use 111 112 .. code-block:: bash 113 114 # On Linux and Mac OS X 115 $ source deactivate 116 117 # On Windows 118 > deactivate 119 120 121 Building Your Own Packages 122 -------------------------- 123 124 You can easily build your own packages for conda, and upload them 125 to `anaconda.org <https://anaconda.org>`_, a free service for hosting 126 packages for conda, as well as other package managers. 127 To build a package, create a recipe. 128 See http://github.com/conda/conda-recipes for many example recipes, and 129 http://docs.continuum.io/conda/build.html for documentation on how to build 130 recipes. 131 132 To upload to anaconda.org, create an account. Then, install the 133 anaconda-client and login 134 135 .. code-block:: bash 136 137 $ conda install anaconda-client 138 $ anaconda login 139 140 Then, after you build your recipe 141 142 .. code-block:: bash 143 144 $ conda build <recipe-dir> 145 146 you will be prompted to upload to anaconda.org. 147 148 To add your anaconda.org channel, or the channel of others to conda so 149 that ``conda install`` will find and install their packages, run 150 151 .. code-block:: bash 152 153 $ conda config --add channels https://conda.anaconda.org/username 154 155 (replacing ``username`` with the user name of the person whose channel you want 156 to add). 157 158 Getting Help 159 ------------ 160 161 The documentation for conda is at http://conda.pydata.org/docs/. You can 162 subscribe to the `conda mailing list 163 <https://groups.google.com/a/continuum.io/forum/#!forum/conda>`_. The source 164 code and issue tracker for conda are on `GitHub <https://github.com/conda/conda>`_. 165 166 Contributing 167 ------------ 168 169 Contributions to conda are welcome. Just fork the GitHub repository and send a 170 pull request. 171 172 To develop on conda, the easiest way is to use a development build. 
This can be 173 accomplished as follows: 174 175 * clone the conda git repository to a computer with conda already installed 176 * navigate to the root directory of the git clone 177 * run ``$CONDA/bin/python setup.py develop`` where ``$CONDA`` is the path to your 178 miniconda installation 179 180 Note building a development file requires git to be installed. 181 182 To undo this, run ``$CONDA/bin/python setup.py develop -u``. Note that if you 183 used a python other than ``$CONDA/bin/python`` to install, you may have to manually 184 delete the conda executable. For example, on OS X, if you use a homebrew python 185 located at ``/usr/local/bin/python``, then you'll need to ``rm /usr/local/bin/conda`` 186 so that ``which -a conda`` lists first your miniconda installation. 187 188 If you are worried about breaking your conda installation, you can install a 189 separate instance of `Miniconda <http://conda.pydata.org/miniconda.html>`_ and 190 work off it. This is also the only way to test conda in both Python 2 and 191 Python 3, as conda can only be installed into a root environment. 192 193 Run the conda tests by ``conda install pytest pytest-cov pytest-timeout mock responses`` and then running ``py.test`` 194 in the conda directory. The tests are also run by Travis CI when you make a 195 pull request. 196 [end of README.rst] [start of conda/_vendor/auxlib/_vendor/six.py] 1 # Copyright (c) 2010-2015 Benjamin Peterson 2 # 3 # Permission is hereby granted, free of charge, to any person obtaining a copy 4 # of this software and associated documentation files (the "Software"), to deal 5 # in the Software without restriction, including without limitation the rights 6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 7 # copies of the Software, and to permit persons to whom the Software is 8 # furnished to do so, subject to the following conditions: 9 # 10 # The above copyright notice and this permission notice shall be included in all 11 # copies or substantial portions of the Software. 12 # 13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 19 # SOFTWARE. 20 21 """Utilities for writing code that runs on Python 2 and 3""" 22 23 from __future__ import absolute_import 24 25 import functools 26 import itertools 27 import operator 28 import sys 29 import types 30 31 __author__ = "Benjamin Peterson <[email protected]>" 32 __version__ = "1.10.0" 33 34 35 # Useful for very coarse version differentiation. 36 PY2 = sys.version_info[0] == 2 37 PY3 = sys.version_info[0] == 3 38 PY34 = sys.version_info[0:2] >= (3, 4) 39 40 if PY3: 41 string_types = str, 42 integer_types = int, 43 class_types = type, 44 text_type = str 45 binary_type = bytes 46 47 MAXSIZE = sys.maxsize 48 else: 49 string_types = basestring, 50 integer_types = (int, long) 51 class_types = (type, types.ClassType) 52 text_type = unicode 53 binary_type = str 54 55 if sys.platform.startswith("java"): 56 # Jython always uses 32 bits. 57 MAXSIZE = int((1 << 31) - 1) 58 else: 59 # It's possible to have sizeof(long) != sizeof(Py_ssize_t). 
60 class X(object): 61 62 def __len__(self): 63 return 1 << 31 64 try: 65 len(X()) 66 except OverflowError: 67 # 32-bit 68 MAXSIZE = int((1 << 31) - 1) 69 else: 70 # 64-bit 71 MAXSIZE = int((1 << 63) - 1) 72 del X 73 74 75 def _add_doc(func, doc): 76 """Add documentation to a function.""" 77 func.__doc__ = doc 78 79 80 def _import_module(name): 81 """Import module, returning the module after the last dot.""" 82 __import__(name) 83 return sys.modules[name] 84 85 86 class _LazyDescr(object): 87 88 def __init__(self, name): 89 self.name = name 90 91 def __get__(self, obj, tp): 92 result = self._resolve() 93 setattr(obj, self.name, result) # Invokes __set__. 94 try: 95 # This is a bit ugly, but it avoids running this again by 96 # removing this descriptor. 97 delattr(obj.__class__, self.name) 98 except AttributeError: 99 pass 100 return result 101 102 103 class MovedModule(_LazyDescr): 104 105 def __init__(self, name, old, new=None): 106 super(MovedModule, self).__init__(name) 107 if PY3: 108 if new is None: 109 new = name 110 self.mod = new 111 else: 112 self.mod = old 113 114 def _resolve(self): 115 return _import_module(self.mod) 116 117 def __getattr__(self, attr): 118 _module = self._resolve() 119 value = getattr(_module, attr) 120 setattr(self, attr, value) 121 return value 122 123 124 class _LazyModule(types.ModuleType): 125 126 def __init__(self, name): 127 super(_LazyModule, self).__init__(name) 128 self.__doc__ = self.__class__.__doc__ 129 130 def __dir__(self): 131 attrs = ["__doc__", "__name__"] 132 attrs += [attr.name for attr in self._moved_attributes] 133 return attrs 134 135 # Subclasses should override this 136 _moved_attributes = [] 137 138 139 class MovedAttribute(_LazyDescr): 140 141 def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): 142 super(MovedAttribute, self).__init__(name) 143 if PY3: 144 if new_mod is None: 145 new_mod = name 146 self.mod = new_mod 147 if new_attr is None: 148 if old_attr is None: 149 new_attr = name 150 else: 151 new_attr = old_attr 152 self.attr = new_attr 153 else: 154 self.mod = old_mod 155 if old_attr is None: 156 old_attr = name 157 self.attr = old_attr 158 159 def _resolve(self): 160 module = _import_module(self.mod) 161 return getattr(module, self.attr) 162 163 164 class _SixMetaPathImporter(object): 165 166 """ 167 A meta path importer to import six.moves and its submodules. 168 169 This class implements a PEP302 finder and loader. It should be compatible 170 with Python 2.5 and all existing versions of Python3 171 """ 172 173 def __init__(self, six_module_name): 174 self.name = six_module_name 175 self.known_modules = {} 176 177 def _add_module(self, mod, *fullnames): 178 for fullname in fullnames: 179 self.known_modules[self.name + "." + fullname] = mod 180 181 def _get_module(self, fullname): 182 return self.known_modules[self.name + "." 
+ fullname] 183 184 def find_module(self, fullname, path=None): 185 if fullname in self.known_modules: 186 return self 187 return None 188 189 def __get_module(self, fullname): 190 try: 191 return self.known_modules[fullname] 192 except KeyError: 193 raise ImportError("This loader does not know module " + fullname) 194 195 def load_module(self, fullname): 196 try: 197 # in case of a reload 198 return sys.modules[fullname] 199 except KeyError: 200 pass 201 mod = self.__get_module(fullname) 202 if isinstance(mod, MovedModule): 203 mod = mod._resolve() 204 else: 205 mod.__loader__ = self 206 sys.modules[fullname] = mod 207 return mod 208 209 def is_package(self, fullname): 210 """ 211 Return true, if the named module is a package. 212 213 We need this method to get correct spec objects with 214 Python 3.4 (see PEP451) 215 """ 216 return hasattr(self.__get_module(fullname), "__path__") 217 218 def get_code(self, fullname): 219 """Return None 220 221 Required, if is_package is implemented""" 222 self.__get_module(fullname) # eventually raises ImportError 223 return None 224 get_source = get_code # same as get_code 225 226 _importer = _SixMetaPathImporter(__name__) 227 228 229 class _MovedItems(_LazyModule): 230 231 """Lazy loading of moved objects""" 232 __path__ = [] # mark as package 233 234 235 _moved_attributes = [ 236 MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), 237 MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), 238 MovedAttribute("filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse"), 239 MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), 240 MovedAttribute("intern", "__builtin__", "sys"), 241 MovedAttribute("map", "itertools", "builtins", "imap", "map"), 242 MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), 243 MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), 244 MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), 245 MovedAttribute("reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload"), 246 MovedAttribute("reduce", "__builtin__", "functools"), 247 MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), 248 MovedAttribute("StringIO", "StringIO", "io"), 249 MovedAttribute("UserDict", "UserDict", "collections"), 250 MovedAttribute("UserList", "UserList", "collections"), 251 MovedAttribute("UserString", "UserString", "collections"), 252 MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), 253 MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), 254 MovedAttribute("zip_longest", "itertools", "itertools", "izip_longest", "zip_longest"), 255 MovedModule("builtins", "__builtin__"), 256 MovedModule("configparser", "ConfigParser"), 257 MovedModule("copyreg", "copy_reg"), 258 MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), 259 MovedModule("_dummy_thread", "dummy_thread", "_dummy_thread"), 260 MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), 261 MovedModule("http_cookies", "Cookie", "http.cookies"), 262 MovedModule("html_entities", "htmlentitydefs", "html.entities"), 263 MovedModule("html_parser", "HTMLParser", "html.parser"), 264 MovedModule("http_client", "httplib", "http.client"), 265 MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), 266 MovedModule("email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart"), 267 MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), 268 MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), 269 
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), 270 MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), 271 MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), 272 MovedModule("cPickle", "cPickle", "pickle"), 273 MovedModule("queue", "Queue"), 274 MovedModule("reprlib", "repr"), 275 MovedModule("socketserver", "SocketServer"), 276 MovedModule("_thread", "thread", "_thread"), 277 MovedModule("tkinter", "Tkinter"), 278 MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), 279 MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), 280 MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), 281 MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), 282 MovedModule("tkinter_tix", "Tix", "tkinter.tix"), 283 MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), 284 MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), 285 MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), 286 MovedModule("tkinter_colorchooser", "tkColorChooser", 287 "tkinter.colorchooser"), 288 MovedModule("tkinter_commondialog", "tkCommonDialog", 289 "tkinter.commondialog"), 290 MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), 291 MovedModule("tkinter_font", "tkFont", "tkinter.font"), 292 MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), 293 MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", 294 "tkinter.simpledialog"), 295 MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), 296 MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), 297 MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), 298 MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), 299 MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), 300 MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), 301 ] 302 # Add windows specific modules. 303 if sys.platform == "win32": 304 _moved_attributes += [ 305 MovedModule("winreg", "_winreg"), 306 ] 307 308 for attr in _moved_attributes: 309 setattr(_MovedItems, attr.name, attr) 310 if isinstance(attr, MovedModule): 311 _importer._add_module(attr, "moves." 
+ attr.name) 312 del attr 313 314 _MovedItems._moved_attributes = _moved_attributes 315 316 moves = _MovedItems(__name__ + ".moves") 317 _importer._add_module(moves, "moves") 318 319 320 class Module_six_moves_urllib_parse(_LazyModule): 321 322 """Lazy loading of moved objects in six.moves.urllib_parse""" 323 324 325 _urllib_parse_moved_attributes = [ 326 MovedAttribute("ParseResult", "urlparse", "urllib.parse"), 327 MovedAttribute("SplitResult", "urlparse", "urllib.parse"), 328 MovedAttribute("parse_qs", "urlparse", "urllib.parse"), 329 MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), 330 MovedAttribute("urldefrag", "urlparse", "urllib.parse"), 331 MovedAttribute("urljoin", "urlparse", "urllib.parse"), 332 MovedAttribute("urlparse", "urlparse", "urllib.parse"), 333 MovedAttribute("urlsplit", "urlparse", "urllib.parse"), 334 MovedAttribute("urlunparse", "urlparse", "urllib.parse"), 335 MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), 336 MovedAttribute("quote", "urllib", "urllib.parse"), 337 MovedAttribute("quote_plus", "urllib", "urllib.parse"), 338 MovedAttribute("unquote", "urllib", "urllib.parse"), 339 MovedAttribute("unquote_plus", "urllib", "urllib.parse"), 340 MovedAttribute("urlencode", "urllib", "urllib.parse"), 341 MovedAttribute("splitquery", "urllib", "urllib.parse"), 342 MovedAttribute("splittag", "urllib", "urllib.parse"), 343 MovedAttribute("splituser", "urllib", "urllib.parse"), 344 MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), 345 MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), 346 MovedAttribute("uses_params", "urlparse", "urllib.parse"), 347 MovedAttribute("uses_query", "urlparse", "urllib.parse"), 348 MovedAttribute("uses_relative", "urlparse", "urllib.parse"), 349 ] 350 for attr in _urllib_parse_moved_attributes: 351 setattr(Module_six_moves_urllib_parse, attr.name, attr) 352 del attr 353 354 Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes 355 356 _importer._add_module(Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), 357 "moves.urllib_parse", "moves.urllib.parse") 358 359 360 class Module_six_moves_urllib_error(_LazyModule): 361 362 """Lazy loading of moved objects in six.moves.urllib_error""" 363 364 365 _urllib_error_moved_attributes = [ 366 MovedAttribute("URLError", "urllib2", "urllib.error"), 367 MovedAttribute("HTTPError", "urllib2", "urllib.error"), 368 MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), 369 ] 370 for attr in _urllib_error_moved_attributes: 371 setattr(Module_six_moves_urllib_error, attr.name, attr) 372 del attr 373 374 Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes 375 376 _importer._add_module(Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), 377 "moves.urllib_error", "moves.urllib.error") 378 379 380 class Module_six_moves_urllib_request(_LazyModule): 381 382 """Lazy loading of moved objects in six.moves.urllib_request""" 383 384 385 _urllib_request_moved_attributes = [ 386 MovedAttribute("urlopen", "urllib2", "urllib.request"), 387 MovedAttribute("install_opener", "urllib2", "urllib.request"), 388 MovedAttribute("build_opener", "urllib2", "urllib.request"), 389 MovedAttribute("pathname2url", "urllib", "urllib.request"), 390 MovedAttribute("url2pathname", "urllib", "urllib.request"), 391 MovedAttribute("getproxies", "urllib", "urllib.request"), 392 MovedAttribute("Request", "urllib2", "urllib.request"), 393 MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), 394 
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), 395 MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), 396 MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), 397 MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), 398 MovedAttribute("BaseHandler", "urllib2", "urllib.request"), 399 MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), 400 MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), 401 MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), 402 MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), 403 MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), 404 MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), 405 MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), 406 MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), 407 MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), 408 MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), 409 MovedAttribute("FileHandler", "urllib2", "urllib.request"), 410 MovedAttribute("FTPHandler", "urllib2", "urllib.request"), 411 MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), 412 MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), 413 MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), 414 MovedAttribute("urlretrieve", "urllib", "urllib.request"), 415 MovedAttribute("urlcleanup", "urllib", "urllib.request"), 416 MovedAttribute("URLopener", "urllib", "urllib.request"), 417 MovedAttribute("FancyURLopener", "urllib", "urllib.request"), 418 MovedAttribute("proxy_bypass", "urllib", "urllib.request"), 419 ] 420 for attr in _urllib_request_moved_attributes: 421 setattr(Module_six_moves_urllib_request, attr.name, attr) 422 del attr 423 424 Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes 425 426 _importer._add_module(Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), 427 "moves.urllib_request", "moves.urllib.request") 428 429 430 class Module_six_moves_urllib_response(_LazyModule): 431 432 """Lazy loading of moved objects in six.moves.urllib_response""" 433 434 435 _urllib_response_moved_attributes = [ 436 MovedAttribute("addbase", "urllib", "urllib.response"), 437 MovedAttribute("addclosehook", "urllib", "urllib.response"), 438 MovedAttribute("addinfo", "urllib", "urllib.response"), 439 MovedAttribute("addinfourl", "urllib", "urllib.response"), 440 ] 441 for attr in _urllib_response_moved_attributes: 442 setattr(Module_six_moves_urllib_response, attr.name, attr) 443 del attr 444 445 Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes 446 447 _importer._add_module(Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), 448 "moves.urllib_response", "moves.urllib.response") 449 450 451 class Module_six_moves_urllib_robotparser(_LazyModule): 452 453 """Lazy loading of moved objects in six.moves.urllib_robotparser""" 454 455 456 _urllib_robotparser_moved_attributes = [ 457 MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), 458 ] 459 for attr in _urllib_robotparser_moved_attributes: 460 setattr(Module_six_moves_urllib_robotparser, attr.name, attr) 461 del attr 462 463 Module_six_moves_urllib_robotparser._moved_attributes = _urllib_robotparser_moved_attributes 464 465 _importer._add_module(Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), 466 
"moves.urllib_robotparser", "moves.urllib.robotparser") 467 468 469 class Module_six_moves_urllib(types.ModuleType): 470 471 """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" 472 __path__ = [] # mark as package 473 parse = _importer._get_module("moves.urllib_parse") 474 error = _importer._get_module("moves.urllib_error") 475 request = _importer._get_module("moves.urllib_request") 476 response = _importer._get_module("moves.urllib_response") 477 robotparser = _importer._get_module("moves.urllib_robotparser") 478 479 def __dir__(self): 480 return ['parse', 'error', 'request', 'response', 'robotparser'] 481 482 _importer._add_module(Module_six_moves_urllib(__name__ + ".moves.urllib"), 483 "moves.urllib") 484 485 486 def add_move(move): 487 """Add an item to six.moves.""" 488 setattr(_MovedItems, move.name, move) 489 490 491 def remove_move(name): 492 """Remove item from six.moves.""" 493 try: 494 delattr(_MovedItems, name) 495 except AttributeError: 496 try: 497 del moves.__dict__[name] 498 except KeyError: 499 raise AttributeError("no such move, %r" % (name,)) 500 501 502 if PY3: 503 _meth_func = "__func__" 504 _meth_self = "__self__" 505 506 _func_closure = "__closure__" 507 _func_code = "__code__" 508 _func_defaults = "__defaults__" 509 _func_globals = "__globals__" 510 else: 511 _meth_func = "im_func" 512 _meth_self = "im_self" 513 514 _func_closure = "func_closure" 515 _func_code = "func_code" 516 _func_defaults = "func_defaults" 517 _func_globals = "func_globals" 518 519 520 try: 521 advance_iterator = next 522 except NameError: 523 def advance_iterator(it): 524 return it.next() 525 next = advance_iterator 526 527 528 try: 529 callable = callable 530 except NameError: 531 def callable(obj): 532 return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) 533 534 535 if PY3: 536 def get_unbound_function(unbound): 537 return unbound 538 539 create_bound_method = types.MethodType 540 541 def create_unbound_method(func, cls): 542 return func 543 544 Iterator = object 545 else: 546 def get_unbound_function(unbound): 547 return unbound.im_func 548 549 def create_bound_method(func, obj): 550 return types.MethodType(func, obj, obj.__class__) 551 552 def create_unbound_method(func, cls): 553 return types.MethodType(func, None, cls) 554 555 class Iterator(object): 556 557 def next(self): 558 return type(self).__next__(self) 559 560 callable = callable 561 _add_doc(get_unbound_function, 562 """Get the function out of a possibly unbound function""") 563 564 565 get_method_function = operator.attrgetter(_meth_func) 566 get_method_self = operator.attrgetter(_meth_self) 567 get_function_closure = operator.attrgetter(_func_closure) 568 get_function_code = operator.attrgetter(_func_code) 569 get_function_defaults = operator.attrgetter(_func_defaults) 570 get_function_globals = operator.attrgetter(_func_globals) 571 572 573 if PY3: 574 def iterkeys(d, **kw): 575 return iter(d.keys(**kw)) 576 577 def itervalues(d, **kw): 578 return iter(d.values(**kw)) 579 580 def iteritems(d, **kw): 581 return iter(d.items(**kw)) 582 583 def iterlists(d, **kw): 584 return iter(d.lists(**kw)) 585 586 viewkeys = operator.methodcaller("keys") 587 588 viewvalues = operator.methodcaller("values") 589 590 viewitems = operator.methodcaller("items") 591 else: 592 def iterkeys(d, **kw): 593 return d.iterkeys(**kw) 594 595 def itervalues(d, **kw): 596 return d.itervalues(**kw) 597 598 def iteritems(d, **kw): 599 return d.iteritems(**kw) 600 601 def iterlists(d, **kw): 602 return 
d.iterlists(**kw) 603 604 viewkeys = operator.methodcaller("viewkeys") 605 606 viewvalues = operator.methodcaller("viewvalues") 607 608 viewitems = operator.methodcaller("viewitems") 609 610 _add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") 611 _add_doc(itervalues, "Return an iterator over the values of a dictionary.") 612 _add_doc(iteritems, 613 "Return an iterator over the (key, value) pairs of a dictionary.") 614 _add_doc(iterlists, 615 "Return an iterator over the (key, [values]) pairs of a dictionary.") 616 617 618 if PY3: 619 def b(s): 620 return s.encode("latin-1") 621 622 def u(s): 623 return s 624 unichr = chr 625 import struct 626 int2byte = struct.Struct(">B").pack 627 del struct 628 byte2int = operator.itemgetter(0) 629 indexbytes = operator.getitem 630 iterbytes = iter 631 import io 632 StringIO = io.StringIO 633 BytesIO = io.BytesIO 634 _assertCountEqual = "assertCountEqual" 635 if sys.version_info[1] <= 1: 636 _assertRaisesRegex = "assertRaisesRegexp" 637 _assertRegex = "assertRegexpMatches" 638 else: 639 _assertRaisesRegex = "assertRaisesRegex" 640 _assertRegex = "assertRegex" 641 else: 642 def b(s): 643 return s 644 # Workaround for standalone backslash 645 646 def u(s): 647 return unicode(s.replace(r'\\', r'\\\\'), "unicode_escape") 648 unichr = unichr 649 int2byte = chr 650 651 def byte2int(bs): 652 return ord(bs[0]) 653 654 def indexbytes(buf, i): 655 return ord(buf[i]) 656 iterbytes = functools.partial(itertools.imap, ord) 657 import StringIO 658 StringIO = BytesIO = StringIO.StringIO 659 _assertCountEqual = "assertItemsEqual" 660 _assertRaisesRegex = "assertRaisesRegexp" 661 _assertRegex = "assertRegexpMatches" 662 _add_doc(b, """Byte literal""") 663 _add_doc(u, """Text literal""") 664 665 666 def assertCountEqual(self, *args, **kwargs): 667 return getattr(self, _assertCountEqual)(*args, **kwargs) 668 669 670 def assertRaisesRegex(self, *args, **kwargs): 671 return getattr(self, _assertRaisesRegex)(*args, **kwargs) 672 673 674 def assertRegex(self, *args, **kwargs): 675 return getattr(self, _assertRegex)(*args, **kwargs) 676 677 678 if PY3: 679 exec_ = getattr(moves.builtins, "exec") 680 681 def reraise(tp, value, tb=None): 682 if value is None: 683 value = tp() 684 if value.__traceback__ is not tb: 685 raise value.with_traceback(tb) 686 raise value 687 688 else: 689 def exec_(_code_, _globs_=None, _locs_=None): 690 """Execute code in a namespace.""" 691 if _globs_ is None: 692 frame = sys._getframe(1) 693 _globs_ = frame.f_globals 694 if _locs_ is None: 695 _locs_ = frame.f_locals 696 del frame 697 elif _locs_ is None: 698 _locs_ = _globs_ 699 exec("""exec _code_ in _globs_, _locs_""") 700 701 exec_("""def reraise(tp, value, tb=None): 702 raise tp, value, tb 703 """) 704 705 706 if sys.version_info[:2] == (3, 2): 707 exec_("""def raise_from(value, from_value): 708 if from_value is None: 709 raise value 710 raise value from from_value 711 """) 712 elif sys.version_info[:2] > (3, 2): 713 exec_("""def raise_from(value, from_value): 714 raise value from from_value 715 """) 716 else: 717 def raise_from(value, from_value): 718 raise value 719 720 721 print_ = getattr(moves.builtins, "print", None) 722 if print_ is None: 723 def print_(*args, **kwargs): 724 """The new-style print function for Python 2.4 and 2.5.""" 725 fp = kwargs.pop("file", sys.stdout) 726 if fp is None: 727 return 728 729 def write(data): 730 if not isinstance(data, basestring): 731 data = str(data) 732 # If the file has an encoding, encode unicode with it. 
733 if (isinstance(fp, file) and 734 isinstance(data, unicode) and 735 fp.encoding is not None): 736 errors = getattr(fp, "errors", None) 737 if errors is None: 738 errors = "strict" 739 data = data.encode(fp.encoding, errors) 740 fp.write(data) 741 want_unicode = False 742 sep = kwargs.pop("sep", None) 743 if sep is not None: 744 if isinstance(sep, unicode): 745 want_unicode = True 746 elif not isinstance(sep, str): 747 raise TypeError("sep must be None or a string") 748 end = kwargs.pop("end", None) 749 if end is not None: 750 if isinstance(end, unicode): 751 want_unicode = True 752 elif not isinstance(end, str): 753 raise TypeError("end must be None or a string") 754 if kwargs: 755 raise TypeError("invalid keyword arguments to print()") 756 if not want_unicode: 757 for arg in args: 758 if isinstance(arg, unicode): 759 want_unicode = True 760 break 761 if want_unicode: 762 newline = unicode("\n") 763 space = unicode(" ") 764 else: 765 newline = "\n" 766 space = " " 767 if sep is None: 768 sep = space 769 if end is None: 770 end = newline 771 for i, arg in enumerate(args): 772 if i: 773 write(sep) 774 write(arg) 775 write(end) 776 if sys.version_info[:2] < (3, 3): 777 _print = print_ 778 779 def print_(*args, **kwargs): 780 fp = kwargs.get("file", sys.stdout) 781 flush = kwargs.pop("flush", False) 782 _print(*args, **kwargs) 783 if flush and fp is not None: 784 fp.flush() 785 786 _add_doc(reraise, """Reraise an exception.""") 787 788 if sys.version_info[0:2] < (3, 4): 789 def wraps(wrapped, assigned=functools.WRAPPER_ASSIGNMENTS, 790 updated=functools.WRAPPER_UPDATES): 791 def wrapper(f): 792 f = functools.wraps(wrapped, assigned, updated)(f) 793 f.__wrapped__ = wrapped 794 return f 795 return wrapper 796 else: 797 wraps = functools.wraps 798 799 800 def with_metaclass(meta, *bases): 801 """Create a base class with a metaclass.""" 802 # This requires a bit of explanation: the basic idea is to make a dummy 803 # metaclass for one level of class instantiation that replaces itself with 804 # the actual metaclass. 805 class metaclass(meta): 806 807 def __new__(cls, name, this_bases, d): 808 return meta(name, bases, d) 809 return type.__new__(metaclass, 'temporary_class', (), {}) 810 811 812 def add_metaclass(metaclass): 813 """Class decorator for creating a class with a metaclass.""" 814 def wrapper(cls): 815 orig_vars = cls.__dict__.copy() 816 slots = orig_vars.get('__slots__') 817 if slots is not None: 818 if isinstance(slots, str): 819 slots = [slots] 820 for slots_var in slots: 821 orig_vars.pop(slots_var) 822 orig_vars.pop('__dict__', None) 823 orig_vars.pop('__weakref__', None) 824 return metaclass(cls.__name__, cls.__bases__, orig_vars) 825 return wrapper 826 827 828 def python_2_unicode_compatible(klass): 829 """ 830 A decorator that defines __unicode__ and __str__ methods under Python 2. 831 Under Python 3 it does nothing. 832 833 To support Python 2 and 3 with a single code base, define a __str__ method 834 returning text and apply this decorator to the class. 835 """ 836 if PY2: 837 if '__str__' not in klass.__dict__: 838 raise ValueError("@python_2_unicode_compatible cannot be applied " 839 "to %s because it doesn't define __str__()." % 840 klass.__name__) 841 klass.__unicode__ = klass.__str__ 842 klass.__str__ = lambda self: self.__unicode__().encode('utf-8') 843 return klass 844 845 846 # Complete the moves implementation. 847 # This code is at the end of this module to speed up module loading. 848 # Turn this module into a package. 
849 __path__ = [] # required for PEP 302 and PEP 451 850 __package__ = __name__ # see PEP 366 @ReservedAssignment 851 if globals().get("__spec__") is not None: 852 __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable 853 # Remove other six meta path importers, since they cause problems. This can 854 # happen if six is removed from sys.modules and then reloaded. (Setuptools does 855 # this for some reason.) 856 if sys.meta_path: 857 for i, importer in enumerate(sys.meta_path): 858 # Here's some real nastiness: Another "instance" of the six module might 859 # be floating around. Therefore, we can't use isinstance() to check for 860 # the six meta path importer, since the other six instance will have 861 # inserted an importer with different class. 862 if (type(importer).__name__ == "_SixMetaPathImporter" and 863 importer.name == __name__): 864 del sys.meta_path[i] 865 break 866 del i, importer 867 # Finally, add the importer to the meta path import hook. 868 sys.meta_path.append(_importer) 869 [end of conda/_vendor/auxlib/_vendor/six.py] [start of conda/common/compat.py] 1 # -*- coding: utf-8 -*- 2 # Try to keep compat small because it's imported by everything 3 # What is compat, and what isn't? 4 # If a piece of code is "general" and used in multiple modules, it goes here. 5 # If it's only used in one module, keep it in that module, preferably near the top. 6 from __future__ import absolute_import, division, print_function, unicode_literals 7 8 from itertools import chain 9 from operator import methodcaller 10 import sys 11 12 on_win = bool(sys.platform == "win32") 13 14 PY2 = sys.version_info[0] == 2 15 PY3 = sys.version_info[0] == 3 16 FILESYSTEM_ENCODING = sys.getfilesystemencoding() 17 18 19 # ############################# 20 # equivalent commands 21 # ############################# 22 23 if PY3: # pragma: py2 no cover 24 string_types = str, 25 integer_types = int, 26 class_types = type, 27 text_type = str 28 binary_type = bytes 29 input = input 30 range = range 31 32 elif PY2: # pragma: py3 no cover 33 from types import ClassType 34 string_types = basestring, 35 integer_types = (int, long) 36 class_types = (type, ClassType) 37 text_type = unicode 38 binary_type = str 39 input = raw_input 40 range = xrange 41 42 43 # ############################# 44 # equivalent imports 45 # ############################# 46 47 if PY3: # pragma: py2 no cover 48 from io import StringIO 49 from itertools import zip_longest 50 elif PY2: # pragma: py3 no cover 51 from cStringIO import StringIO 52 from itertools import izip as zip, izip_longest as zip_longest 53 54 StringIO = StringIO 55 zip = zip 56 zip_longest = zip_longest 57 58 59 # ############################# 60 # equivalent functions 61 # ############################# 62 63 if PY3: # pragma: py2 no cover 64 def iterkeys(d, **kw): 65 return iter(d.keys(**kw)) 66 67 def itervalues(d, **kw): 68 return iter(d.values(**kw)) 69 70 def iteritems(d, **kw): 71 return iter(d.items(**kw)) 72 73 viewkeys = methodcaller("keys") 74 viewvalues = methodcaller("values") 75 viewitems = methodcaller("items") 76 77 from collections import Iterable 78 def isiterable(obj): 79 return not isinstance(obj, string_types) and isinstance(obj, Iterable) 80 81 elif PY2: # pragma: py3 no cover 82 def iterkeys(d, **kw): 83 return d.iterkeys(**kw) 84 85 def itervalues(d, **kw): 86 return d.itervalues(**kw) 87 88 def iteritems(d, **kw): 89 return d.iteritems(**kw) 90 91 viewkeys = methodcaller("viewkeys") 92 viewvalues = methodcaller("viewvalues") 93 viewitems = 
methodcaller("viewitems") 94 95 def isiterable(obj): 96 return (hasattr(obj, '__iter__') 97 and not isinstance(obj, string_types) 98 and type(obj) is not type) 99 100 101 # ############################# 102 # other 103 # ############################# 104 105 from collections import OrderedDict as odict # NOQA 106 odict = odict 107 108 from io import open as io_open # NOQA 109 110 111 def open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True): 112 if 'b' in mode: 113 return io_open(ensure_fs_path_encoding(file), str(mode), buffering=buffering, 114 errors=errors, newline=newline, closefd=closefd) 115 else: 116 return io_open(ensure_fs_path_encoding(file), str(mode), buffering=buffering, 117 encoding=encoding or 'utf-8', errors=errors, newline=newline, 118 closefd=closefd) 119 120 121 def with_metaclass(Type, skip_attrs=set(('__dict__', '__weakref__'))): 122 """Class decorator to set metaclass. 123 124 Works with both Python 2 and Python 3 and it does not add 125 an extra class in the lookup order like ``six.with_metaclass`` does 126 (that is -- it copies the original class instead of using inheritance). 127 128 """ 129 130 def _clone_with_metaclass(Class): 131 attrs = dict((key, value) for key, value in iteritems(vars(Class)) 132 if key not in skip_attrs) 133 return Type(Class.__name__, Class.__bases__, attrs) 134 135 return _clone_with_metaclass 136 137 138 139 NoneType = type(None) 140 primitive_types = tuple(chain(string_types, integer_types, (float, complex, bool, NoneType))) 141 142 143 def ensure_binary(value): 144 try: 145 return value.encode('utf-8') 146 except AttributeError: 147 # AttributeError: '<>' object has no attribute 'encode' 148 # In this case assume already binary type and do nothing 149 return value 150 151 152 def ensure_text_type(value): 153 try: 154 return value.decode('utf-8') 155 except AttributeError: 156 # AttributeError: '<>' object has no attribute 'decode' 157 # In this case assume already text_type and do nothing 158 return value 159 except UnicodeDecodeError: 160 from requests.packages.chardet import detect 161 encoding = detect(value).get('encoding') or 'utf-8' 162 return value.decode(encoding) 163 164 165 def ensure_unicode(value): 166 try: 167 return value.decode('unicode_escape') 168 except AttributeError: 169 # AttributeError: '<>' object has no attribute 'decode' 170 # In this case assume already unicode and do nothing 171 return value 172 173 174 def ensure_fs_path_encoding(value): 175 try: 176 return value.decode(FILESYSTEM_ENCODING) 177 except AttributeError: 178 return value 179 [end of conda/common/compat.py] [start of conda_env/compat.py] 1 import sys 2 3 PY3 = sys.version_info[0] == 3 4 5 6 def u(some_str): 7 if PY3: 8 return some_str 9 else: 10 return unicode(some_str) 11 12 13 def b(some_str, encoding="utf-8"): 14 try: 15 return bytes(some_str, encoding=encoding) 16 except TypeError: 17 return some_str 18 [end of conda_env/compat.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conda/conda
d46ee3535f8049f1f9408bb48246a9f741e58fb0
conda env export under python2 is ug

```
$ python2 -m conda_env export -p /conda
name: null
channels:
- !!python/unicode 'file:///Users/kfranz/.conda/conda-bld'
- !!python/unicode 'file:///conda/conda-bld'
- !!python/unicode 'bkreider'
- !!python/unicode 'conda-canary'
- !!python/unicode 'conda-forge'
- !!python/unicode 'defaults'
dependencies:
- !!python/unicode 'wget=1.15=2'
- !!python/unicode 'conda=4.3.0=py27_0'
- !!python/unicode 'conda-env=2.6.0=0'
- !!python/unicode 'filelock=2.0.6=py27_0'
- !!python/unicode 'boltons=16.3.1=py27_0'
- !!python/unicode 'ca-certificates=2016.8.31=0'
- !!python/unicode 'certifi=2016.8.31=py27_0'
- !!python/unicode 'functools32=3.2.3.2=py27_1'
...
```
2017-05-10T07:58:44Z
<patch>
diff --git a/conda_env/yaml.py b/conda_env/yaml.py
--- a/conda_env/yaml.py
+++ b/conda_env/yaml.py
@@ -6,6 +6,7 @@
 from __future__ import absolute_import, print_function
 from collections import OrderedDict
 
+from conda.common.compat import PY2
 from conda.common.yaml import get_yaml
 yaml = get_yaml()
 
@@ -24,6 +25,12 @@ def represent_ordereddict(dumper, data):
 
 yaml.add_representer(OrderedDict, represent_ordereddict)
 
+if PY2:
+    def represent_unicode(self, data):
+        return self.represent_str(data.encode('utf-8'))
+
+    yaml.add_representer(unicode, represent_unicode)  # NOQA
+
 dump = yaml.dump
 load = yaml.load
 dict = OrderedDict
</patch>
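As an aside for readers of this row, here is a minimal standalone sketch of the representer mechanism the patch above relies on. It assumes a PyYAML-compatible module imported as `yaml` (conda itself obtains its yaml module through `conda.common.yaml.get_yaml()`), and it only changes anything under Python 2, which is where the report shows the dumper emitting `!!python/unicode` tags.

```python
import sys

import yaml  # assumption: PyYAML-compatible API; conda goes through get_yaml() instead


def represent_unicode(dumper, data):
    # Emit the value as an ordinary UTF-8 string scalar rather than a
    # `!!python/unicode`-tagged scalar.
    return dumper.represent_str(data.encode('utf-8'))


if sys.version_info[0] == 2:
    yaml.add_representer(unicode, represent_unicode)  # noqa: F821  (name exists on py2 only)

print(yaml.dump({'channels': [u'conda-forge', u'defaults']},
                default_flow_style=False))
```

The patch takes the same approach, but registers the representer on the module returned by `get_yaml()` right next to conda's existing `OrderedDict` representer.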
[]
[]
pandas-dev__pandas-7485
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> ENH: return `NotImplemented` rather than raise TypeError in Period arithmetic. Right now, some of the error messages for in pandas/tseries/period.py are not so great: ``` python In [9]: period = Period('2000-01-03', 'B') In [10]: period > 1 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-10-df287da31a9f> in <module>() ----> 1 period > 1 /pandas/tseries/period.pyc in f(self, other) 172 return func(self.ordinal, other.ordinal) 173 else: --> 174 raise TypeError(other) 175 176 f.__name__ = name TypeError: 1 ``` And in the case above, should really return `NotImplemented` instead (so that they end up with an error message like): ``` python ----> 1 1 + period TypeError: unsupported operand type(s) for +: 'int' and 'Period' ``` Side note - that's only appropriate these are only used as magic methods (so that the Python interpreter converts them into the TypeError shown above...). </issue> <code> [start of README.md] 1 # pandas: powerful Python data analysis toolkit 2 3 ![Travis-CI Build Status](https://travis-ci.org/pydata/pandas.svg) 4 5 [![Scatter-CI Status page](http://scatterci.github.io/scatterci48.jpg)](http://scatterci.github.io/pydata/pandas) 6 7 ## What is it 8 9 **pandas** is a Python package providing fast, flexible, and expressive data 10 structures designed to make working with "relational" or "labeled" data both 11 easy and intuitive. It aims to be the fundamental high-level building block for 12 doing practical, **real world** data analysis in Python. Additionally, it has 13 the broader goal of becoming **the most powerful and flexible open source data 14 analysis / manipulation tool available in any language**. It is already well on 15 its way toward this goal. 16 17 ## Main Features 18 Here are just a few of the things that pandas does well: 19 20 - Easy handling of [**missing data**][missing-data] (represented as 21 `NaN`) in floating point as well as non-floating point data 22 - Size mutability: columns can be [**inserted and 23 deleted**][insertion-deletion] from DataFrame and higher dimensional 24 objects 25 - Automatic and explicit [**data alignment**][alignment]: objects can 26 be explicitly aligned to a set of labels, or the user can simply 27 ignore the labels and let `Series`, `DataFrame`, etc. 
automatically 28 align the data for you in computations 29 - Powerful, flexible [**group by**][groupby] functionality to perform 30 split-apply-combine operations on data sets, for both aggregating 31 and transforming data 32 - Make it [**easy to convert**][conversion] ragged, 33 differently-indexed data in other Python and NumPy data structures 34 into DataFrame objects 35 - Intelligent label-based [**slicing**][slicing], [**fancy 36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 37 large data sets 38 - Intuitive [**merging**][merging] and [**joining**][joining] data 39 sets 40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 41 data sets 42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 43 labels per tick) 44 - Robust IO tools for loading data from [**flat files**][flat-files] 45 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 47 - [**Time series**][timeseries]-specific functionality: date range 48 generation and frequency conversion, moving window statistics, 49 moving window linear regressions, date shifting and lagging, etc. 50 51 52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 70 71 ## Where to get it 72 The source code is currently hosted on GitHub at: 73 http://github.com/pydata/pandas 74 75 Binary installers for the latest released version are available at the Python 76 package index 77 78 http://pypi.python.org/pypi/pandas/ 79 80 And via `easy_install`: 81 82 ```sh 83 easy_install pandas 84 ``` 85 86 or `pip`: 87 88 ```sh 89 pip install pandas 90 ``` 91 92 ## Dependencies 93 - [NumPy](http://www.numpy.org): 1.6.1 or higher 94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 95 - [pytz](http://pytz.sourceforge.net) 96 - Needed for time zone 
support with ``pandas.date_range`` 97 98 ### Highly Recommended Dependencies 99 - [numexpr](http://code.google.com/p/numexpr/) 100 - Needed to accelerate some expression evaluation operations 101 - Required by PyTables 102 - [bottleneck](http://berkeleyanalytics.com/bottleneck) 103 - Needed to accelerate certain numerical operations 104 105 ### Optional dependencies 106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher. 107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions 108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage 109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended. 110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting 111 - [statsmodels](http://statsmodels.sourceforge.net/) 112 - Needed for parts of `pandas.stats` 113 - For Excel I/O: 114 - [xlrd/xlwt](http://www.python-excel.org/) 115 - Excel reading (xlrd) and writing (xlwt) 116 - [openpyxl](http://packages.python.org/openpyxl/) 117 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for 118 writing .xlsx files 119 - xlrd >= 0.9.0 120 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter) 121 - Alternative Excel writer. 122 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/) 123 - Needed for `pandas.io.gbq` 124 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access. 125 - One of the following combinations of libraries is needed to use the 126 top-level [`pandas.read_html`][read-html-docs] function: 127 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any 128 recent version of [html5lib][html5lib] is okay.) 129 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml] 130 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml] 131 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas] 132 for reasons as to why you should probably **not** take this approach. 133 134 #### Notes about HTML parsing libraries 135 - If you install [BeautifulSoup4][BeautifulSoup4] you must install 136 either [lxml][lxml] or [html5lib][html5lib] or both. 137 `pandas.read_html` will **not** work with *only* `BeautifulSoup4` 138 installed. 139 - You are strongly encouraged to read [HTML reading 140 gotchas][html-gotchas]. It explains issues surrounding the 141 installation and usage of the above three libraries. 142 - You may need to install an older version of 143 [BeautifulSoup4][BeautifulSoup4]: 144 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 145 32-bit Ubuntu/Debian 146 - Additionally, if you're using [Anaconda][Anaconda] you should 147 definitely read [the gotchas about HTML parsing][html-gotchas] 148 libraries 149 - If you're on a system with `apt-get` you can do 150 151 ```sh 152 sudo apt-get build-dep python-lxml 153 ``` 154 155 to get the necessary dependencies for installation of [lxml][lxml]. 156 This will prevent further headaches down the line. 
157 158 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib" 159 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4" 160 [lxml]: http://lxml.de 161 [Anaconda]: https://store.continuum.io/cshop/anaconda 162 [NumPy]: http://numpy.scipy.org/ 163 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing 164 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html 165 166 ## Installation from sources 167 To install pandas from source you need Cython in addition to the normal 168 dependencies above. Cython can be installed from pypi: 169 170 ```sh 171 pip install cython 172 ``` 173 174 In the `pandas` directory (same one where you found this file after 175 cloning the git repo), execute: 176 177 ```sh 178 python setup.py install 179 ``` 180 181 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html): 182 183 ```sh 184 python setup.py develop 185 ``` 186 187 Alternatively, you can use `pip` if you want all the dependencies pulled 188 in automatically (the `-e` option is for installing it in [development 189 mode](http://www.pip-installer.org/en/latest/usage.html)): 190 191 ```sh 192 pip install -e . 193 ``` 194 195 On Windows, you will need to install MinGW and execute: 196 197 ```sh 198 python setup.py build --compiler=mingw32 199 python setup.py install 200 ``` 201 202 See http://pandas.pydata.org/ for more information. 203 204 ## License 205 BSD 206 207 ## Documentation 208 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 209 210 The Sphinx documentation should provide a good starting point for learning how 211 to use the library. Expect the docs to continue to expand as time goes on. 212 213 ## Background 214 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 215 has been under active development since then. 216 217 ## Discussion and Development 218 Since pandas development is related to a number of other scientific 219 Python projects, questions are welcome on the scipy-user mailing 220 list. Specialized discussions or design issues should take place on 221 the pystatsmodels mailing list / Google group, where 222 ``scikits.statsmodels`` and other libraries will also be discussed: 223 224 http://groups.google.com/group/pystatsmodels 225 [end of README.md] [start of pandas/compat/__init__.py] 1 """ 2 compat 3 ====== 4 5 Cross-compatible functions for Python 2 and 3. 6 7 Key items to import for 2/3 compatible code: 8 * iterators: range(), map(), zip(), filter(), reduce() 9 * lists: lrange(), lmap(), lzip(), lfilter() 10 * unicode: u() [u"" is a syntax error in Python 3.0-3.2] 11 * longs: long (int in Python 3) 12 * callable 13 * iterable method compatibility: iteritems, iterkeys, itervalues 14 * Uses the original method if available, otherwise uses items, keys, values. 
15 * types: 16 * text_type: unicode in Python 2, str in Python 3 17 * binary_type: str in Python 2, bythes in Python 3 18 * string_types: basestring in Python 2, str in Python 3 19 * bind_method: binds functions to classes 20 * add_metaclass(metaclass) - class decorator that recreates class with with the 21 given metaclass instead (and avoids intermediary class creation) 22 23 Python 2.6 compatibility: 24 * OrderedDict 25 * Counter 26 27 Other items: 28 * OrderedDefaultDict 29 """ 30 # pylint disable=W0611 31 import functools 32 import itertools 33 from distutils.version import LooseVersion 34 from itertools import product 35 import sys 36 import types 37 38 PY3 = (sys.version_info[0] >= 3) 39 PY3_2 = sys.version_info[:2] == (3, 2) 40 41 try: 42 import __builtin__ as builtins 43 # not writeable when instantiated with string, doesn't handle unicode well 44 from cStringIO import StringIO as cStringIO 45 # always writeable 46 from StringIO import StringIO 47 BytesIO = StringIO 48 import cPickle 49 import httplib 50 except ImportError: 51 import builtins 52 from io import StringIO, BytesIO 53 cStringIO = StringIO 54 import pickle as cPickle 55 import http.client as httplib 56 57 from pandas.compat.chainmap import DeepChainMap 58 59 60 if PY3: 61 def isidentifier(s): 62 return s.isidentifier() 63 64 def str_to_bytes(s, encoding=None): 65 return s.encode(encoding or 'ascii') 66 67 def bytes_to_str(b, encoding=None): 68 return b.decode(encoding or 'utf-8') 69 70 # have to explicitly put builtins into the namespace 71 range = range 72 map = map 73 zip = zip 74 filter = filter 75 reduce = functools.reduce 76 long = int 77 unichr = chr 78 79 # list-producing versions of the major Python iterating functions 80 def lrange(*args, **kwargs): 81 return list(range(*args, **kwargs)) 82 83 def lzip(*args, **kwargs): 84 return list(zip(*args, **kwargs)) 85 86 def lmap(*args, **kwargs): 87 return list(map(*args, **kwargs)) 88 89 def lfilter(*args, **kwargs): 90 return list(filter(*args, **kwargs)) 91 else: 92 # Python 2 93 import re 94 _name_re = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*$") 95 96 def isidentifier(s, dotted=False): 97 return bool(_name_re.match(s)) 98 99 def str_to_bytes(s, encoding='ascii'): 100 return s 101 102 def bytes_to_str(b, encoding='ascii'): 103 return b 104 105 # import iterator versions of these functions 106 range = xrange 107 zip = itertools.izip 108 filter = itertools.ifilter 109 map = itertools.imap 110 reduce = reduce 111 long = long 112 unichr = unichr 113 114 # Python 2-builtin ranges produce lists 115 lrange = builtins.range 116 lzip = builtins.zip 117 lmap = builtins.map 118 lfilter = builtins.filter 119 120 121 def iteritems(obj, **kwargs): 122 """replacement for six's iteritems for Python2/3 compat 123 uses 'iteritems' if available and otherwise uses 'items'. 124 125 Passes kwargs to method. 126 """ 127 func = getattr(obj, "iteritems", None) 128 if not func: 129 func = obj.items 130 return func(**kwargs) 131 132 133 def iterkeys(obj, **kwargs): 134 func = getattr(obj, "iterkeys", None) 135 if not func: 136 func = obj.keys 137 return func(**kwargs) 138 139 140 def itervalues(obj, **kwargs): 141 func = getattr(obj, "itervalues", None) 142 if not func: 143 func = obj.values 144 return func(**kwargs) 145 146 147 def bind_method(cls, name, func): 148 """Bind a method to class, python 2 and python 3 compatible. 
149 150 Parameters 151 ---------- 152 153 cls : type 154 class to receive bound method 155 name : basestring 156 name of method on class instance 157 func : function 158 function to be bound as method 159 160 161 Returns 162 ------- 163 None 164 """ 165 # only python 2 has bound/unbound method issue 166 if not PY3: 167 setattr(cls, name, types.MethodType(func, None, cls)) 168 else: 169 setattr(cls, name, func) 170 # ---------------------------------------------------------------------------- 171 # functions largely based / taken from the six module 172 173 # Much of the code in this module comes from Benjamin Peterson's six library. 174 # The license for this library can be found in LICENSES/SIX and the code can be 175 # found at https://bitbucket.org/gutworth/six 176 177 if PY3: 178 string_types = str, 179 integer_types = int, 180 class_types = type, 181 text_type = str 182 binary_type = bytes 183 184 def u(s): 185 return s 186 187 def u_safe(s): 188 return s 189 else: 190 string_types = basestring, 191 integer_types = (int, long) 192 class_types = (type, types.ClassType) 193 text_type = unicode 194 binary_type = str 195 196 def u(s): 197 return unicode(s, "unicode_escape") 198 199 def u_safe(s): 200 try: 201 return unicode(s, "unicode_escape") 202 except: 203 return s 204 205 206 string_and_binary_types = string_types + (binary_type,) 207 208 209 try: 210 # callable reintroduced in later versions of Python 211 callable = callable 212 except NameError: 213 def callable(obj): 214 return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) 215 216 217 def add_metaclass(metaclass): 218 """Class decorator for creating a class with a metaclass.""" 219 def wrapper(cls): 220 orig_vars = cls.__dict__.copy() 221 orig_vars.pop('__dict__', None) 222 orig_vars.pop('__weakref__', None) 223 for slots_var in orig_vars.get('__slots__', ()): 224 orig_vars.pop(slots_var) 225 return metaclass(cls.__name__, cls.__bases__, orig_vars) 226 return wrapper 227 228 229 # ---------------------------------------------------------------------------- 230 # Python 2.6 compatibility shims 231 # 232 233 # OrderedDict Shim from Raymond Hettinger, python core dev 234 # http://code.activestate.com/recipes/576693-ordered-dictionary-for-py24/ 235 # here to support versions before 2.6 236 if not PY3: 237 # don't need this except in 2.6 238 try: 239 from thread import get_ident as _get_ident 240 except ImportError: 241 from dummy_thread import get_ident as _get_ident 242 243 try: 244 from _abcoll import KeysView, ValuesView, ItemsView 245 except ImportError: 246 pass 247 248 249 class _OrderedDict(dict): 250 251 """Dictionary that remembers insertion order""" 252 # An inherited dict maps keys to values. 253 # The inherited dict provides __getitem__, __len__, __contains__, and get. 254 # The remaining methods are order-aware. 255 # Big-O running times for all methods are the same as for regular 256 # dictionaries. 257 258 # The internal self.__map dictionary maps keys to links in a doubly linked 259 # list. The circular doubly linked list starts and ends with a sentinel 260 # element. The sentinel element never gets deleted (this simplifies the 261 # algorithm). Each link is stored as a list of length three: [PREV, NEXT, 262 # KEY]. 263 264 def __init__(self, *args, **kwds): 265 """Initialize an ordered dictionary. Signature is the same as for 266 regular dictionaries, but keyword arguments are not recommended 267 because their insertion order is arbitrary. 
268 """ 269 if len(args) > 1: 270 raise TypeError('expected at most 1 arguments, got %d' % len(args)) 271 try: 272 self.__root 273 except AttributeError: 274 self.__root = root = [] # sentinel node 275 root[:] = [root, root, None] 276 self.__map = {} 277 self.__update(*args, **kwds) 278 279 def __setitem__(self, key, value, dict_setitem=dict.__setitem__): 280 """od.__setitem__(i, y) <==> od[i]=y""" 281 # Setting a new item creates a new link which goes at the end of the 282 # linked list, and the inherited dictionary is updated with the new 283 # key/value pair. 284 if key not in self: 285 root = self.__root 286 last = root[0] 287 last[1] = root[0] = self.__map[key] = [last, root, key] 288 dict_setitem(self, key, value) 289 290 def __delitem__(self, key, dict_delitem=dict.__delitem__): 291 """od.__delitem__(y) <==> del od[y]""" 292 # Deleting an existing item uses self.__map to find the link which is 293 # then removed by updating the links in the predecessor and successor 294 # nodes. 295 dict_delitem(self, key) 296 link_prev, link_next, key = self.__map.pop(key) 297 link_prev[1] = link_next 298 link_next[0] = link_prev 299 300 def __iter__(self): 301 """od.__iter__() <==> iter(od)""" 302 root = self.__root 303 curr = root[1] 304 while curr is not root: 305 yield curr[2] 306 curr = curr[1] 307 308 def __reversed__(self): 309 """od.__reversed__() <==> reversed(od)""" 310 root = self.__root 311 curr = root[0] 312 while curr is not root: 313 yield curr[2] 314 curr = curr[0] 315 316 def clear(self): 317 """od.clear() -> None. Remove all items from od.""" 318 try: 319 for node in itervalues(self.__map): 320 del node[:] 321 root = self.__root 322 root[:] = [root, root, None] 323 self.__map.clear() 324 except AttributeError: 325 pass 326 dict.clear(self) 327 328 def popitem(self, last=True): 329 """od.popitem() -> (k, v), return and remove a (key, value) pair. 330 331 Pairs are returned in LIFO order if last is true or FIFO order if 332 false. 333 """ 334 if not self: 335 raise KeyError('dictionary is empty') 336 root = self.__root 337 if last: 338 link = root[0] 339 link_prev = link[0] 340 link_prev[1] = root 341 root[0] = link_prev 342 else: 343 link = root[1] 344 link_next = link[1] 345 root[1] = link_next 346 link_next[0] = root 347 key = link[2] 348 del self.__map[key] 349 value = dict.pop(self, key) 350 return key, value 351 352 # -- the following methods do not depend on the internal structure -- 353 354 def keys(self): 355 """od.keys() -> list of keys in od""" 356 return list(self) 357 358 def values(self): 359 """od.values() -> list of values in od""" 360 return [self[key] for key in self] 361 362 def items(self): 363 """od.items() -> list of (key, value) pairs in od""" 364 return [(key, self[key]) for key in self] 365 366 def iterkeys(self): 367 """od.iterkeys() -> an iterator over the keys in od""" 368 return iter(self) 369 370 def itervalues(self): 371 """od.itervalues -> an iterator over the values in od""" 372 for k in self: 373 yield self[k] 374 375 def iteritems(self): 376 """od.iteritems -> an iterator over the (key, value) items in od""" 377 for k in self: 378 yield (k, self[k]) 379 380 def update(*args, **kwds): 381 """od.update(E, **F) -> None. Update od from dict/iterable E and F. 
382 383 If E is a dict instance, does: for k in E: od[k] = E[k] 384 If E has a .keys() method, does: for k in E.keys(): od[k] = E[k] 385 Or if E is an iterable of items, does:for k, v in E: od[k] = v 386 In either case, this is followed by: for k, v in F.items(): od[k] = v 387 """ 388 if len(args) > 2: 389 raise TypeError('update() takes at most 2 positional ' 390 'arguments (%d given)' % (len(args),)) 391 elif not args: 392 raise TypeError('update() takes at least 1 argument (0 given)') 393 self = args[0] 394 # Make progressively weaker assumptions about "other" 395 other = () 396 if len(args) == 2: 397 other = args[1] 398 if isinstance(other, dict): 399 for key in other: 400 self[key] = other[key] 401 elif hasattr(other, 'keys'): 402 for key in other.keys(): 403 self[key] = other[key] 404 else: 405 for key, value in other: 406 self[key] = value 407 for key, value in kwds.items(): 408 self[key] = value 409 # let subclasses override update without breaking __init__ 410 __update = update 411 412 __marker = object() 413 414 def pop(self, key, default=__marker): 415 """od.pop(k[,d]) -> v, remove specified key and return the 416 corresponding value. If key is not found, d is returned if given, 417 otherwise KeyError is raised. 418 """ 419 if key in self: 420 result = self[key] 421 del self[key] 422 return result 423 if default is self.__marker: 424 raise KeyError(key) 425 return default 426 427 def setdefault(self, key, default=None): 428 """od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od 429 """ 430 if key in self: 431 return self[key] 432 self[key] = default 433 return default 434 435 def __repr__(self, _repr_running={}): 436 """od.__repr__() <==> repr(od)""" 437 call_key = id(self), _get_ident() 438 if call_key in _repr_running: 439 return '...' 440 _repr_running[call_key] = 1 441 try: 442 if not self: 443 return '%s()' % (self.__class__.__name__,) 444 return '%s(%r)' % (self.__class__.__name__, list(self.items())) 445 finally: 446 del _repr_running[call_key] 447 448 def __reduce__(self): 449 """Return state information for pickling""" 450 items = [[k, self[k]] for k in self] 451 inst_dict = vars(self).copy() 452 for k in vars(OrderedDict()): 453 inst_dict.pop(k, None) 454 if inst_dict: 455 return (self.__class__, (items,), inst_dict) 456 return self.__class__, (items,) 457 458 def copy(self): 459 """od.copy() -> a shallow copy of od""" 460 return self.__class__(self) 461 462 @classmethod 463 def fromkeys(cls, iterable, value=None): 464 """OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S and 465 values equal to v (which defaults to None). 466 """ 467 d = cls() 468 for key in iterable: 469 d[key] = value 470 return d 471 472 def __eq__(self, other): 473 """od.__eq__(y) <==> od==y. Comparison to another OD is 474 order-sensitive while comparison to a regular mapping is 475 order-insensitive. 
476 """ 477 if isinstance(other, OrderedDict): 478 return (len(self) == len(other) and 479 list(self.items()) == list(other.items())) 480 return dict.__eq__(self, other) 481 482 def __ne__(self, other): 483 return not self == other 484 485 # -- the following methods are only used in Python 2.7 -- 486 487 def viewkeys(self): 488 """od.viewkeys() -> a set-like object providing a view on od's keys""" 489 return KeysView(self) 490 491 def viewvalues(self): 492 """od.viewvalues() -> an object providing a view on od's values""" 493 return ValuesView(self) 494 495 def viewitems(self): 496 """od.viewitems() -> a set-like object providing a view on od's items 497 """ 498 return ItemsView(self) 499 500 501 # {{{ http://code.activestate.com/recipes/576611/ (r11) 502 503 try: 504 from operator import itemgetter 505 from heapq import nlargest 506 except ImportError: 507 pass 508 509 510 class _Counter(dict): 511 512 """Dict subclass for counting hashable objects. Sometimes called a bag 513 or multiset. Elements are stored as dictionary keys and their counts 514 are stored as dictionary values. 515 516 >>> Counter('zyzygy') 517 Counter({'y': 3, 'z': 2, 'g': 1}) 518 519 """ 520 521 def __init__(self, iterable=None, **kwds): 522 """Create a new, empty Counter object. And if given, count elements 523 from an input iterable. Or, initialize the count from another mapping 524 of elements to their counts. 525 526 >>> c = Counter() # a new, empty counter 527 >>> c = Counter('gallahad') # a new counter from an iterable 528 >>> c = Counter({'a': 4, 'b': 2}) # a new counter from a mapping 529 >>> c = Counter(a=4, b=2) # a new counter from keyword args 530 531 """ 532 self.update(iterable, **kwds) 533 534 def __missing__(self, key): 535 return 0 536 537 def most_common(self, n=None): 538 """List the n most common elements and their counts from the most 539 common to the least. If n is None, then list all element counts. 540 541 >>> Counter('abracadabra').most_common(3) 542 [('a', 5), ('r', 2), ('b', 2)] 543 544 """ 545 if n is None: 546 return sorted(iteritems(self), key=itemgetter(1), reverse=True) 547 return nlargest(n, iteritems(self), key=itemgetter(1)) 548 549 def elements(self): 550 """Iterator over elements repeating each as many times as its count. 551 552 >>> c = Counter('ABCABC') 553 >>> sorted(c.elements()) 554 ['A', 'A', 'B', 'B', 'C', 'C'] 555 556 If an element's count has been set to zero or is a negative number, 557 elements() will ignore it. 558 559 """ 560 for elem, count in iteritems(self): 561 for _ in range(count): 562 yield elem 563 564 # Override dict methods where the meaning changes for Counter objects. 565 566 @classmethod 567 def fromkeys(cls, iterable, v=None): 568 raise NotImplementedError( 569 'Counter.fromkeys() is undefined. Use Counter(iterable) instead.') 570 571 def update(self, iterable=None, **kwds): 572 """Like dict.update() but add counts instead of replacing them. 573 574 Source can be an iterable, a dictionary, or another Counter instance. 
575 576 >>> c = Counter('which') 577 >>> c.update('witch') # add elements from another iterable 578 >>> d = Counter('watch') 579 >>> c.update(d) # add elements from another counter 580 >>> c['h'] # four 'h' in which, witch, and watch 581 4 582 583 """ 584 if iterable is not None: 585 if hasattr(iterable, 'iteritems'): 586 if self: 587 self_get = self.get 588 for elem, count in iteritems(iterable): 589 self[elem] = self_get(elem, 0) + count 590 else: 591 dict.update( 592 self, iterable) # fast path when counter is empty 593 else: 594 self_get = self.get 595 for elem in iterable: 596 self[elem] = self_get(elem, 0) + 1 597 if kwds: 598 self.update(kwds) 599 600 def copy(self): 601 """Like dict.copy() but returns a Counter instance instead of a dict. 602 """ 603 return Counter(self) 604 605 def __delitem__(self, elem): 606 """Like dict.__delitem__() but does not raise KeyError for missing 607 values. 608 """ 609 if elem in self: 610 dict.__delitem__(self, elem) 611 612 def __repr__(self): 613 if not self: 614 return '%s()' % self.__class__.__name__ 615 items = ', '.join(map('%r: %r'.__mod__, self.most_common())) 616 return '%s({%s})' % (self.__class__.__name__, items) 617 618 # Multiset-style mathematical operations discussed in: 619 # Knuth TAOCP Volume II section 4.6.3 exercise 19 620 # and at http://en.wikipedia.org/wiki/Multiset 621 # 622 # Outputs guaranteed to only include positive counts. 623 # 624 # To strip negative and zero counts, add-in an empty counter: 625 # c += Counter() 626 627 def __add__(self, other): 628 """Add counts from two counters. 629 630 >>> Counter('abbb') + Counter('bcc') 631 Counter({'b': 4, 'c': 2, 'a': 1}) 632 633 """ 634 if not isinstance(other, Counter): 635 return NotImplemented 636 result = Counter() 637 for elem in set(self) | set(other): 638 newcount = self[elem] + other[elem] 639 if newcount > 0: 640 result[elem] = newcount 641 return result 642 643 def __sub__(self, other): 644 """Subtract count, but keep only results with positive counts. 645 646 >>> Counter('abbbc') - Counter('bccd') 647 Counter({'b': 2, 'a': 1}) 648 649 """ 650 if not isinstance(other, Counter): 651 return NotImplemented 652 result = Counter() 653 for elem in set(self) | set(other): 654 newcount = self[elem] - other[elem] 655 if newcount > 0: 656 result[elem] = newcount 657 return result 658 659 def __or__(self, other): 660 """Union is the maximum of value in either of the input counters. 661 662 >>> Counter('abbb') | Counter('bcc') 663 Counter({'b': 3, 'c': 2, 'a': 1}) 664 665 """ 666 if not isinstance(other, Counter): 667 return NotImplemented 668 _max = max 669 result = Counter() 670 for elem in set(self) | set(other): 671 newcount = _max(self[elem], other[elem]) 672 if newcount > 0: 673 result[elem] = newcount 674 return result 675 676 def __and__(self, other): 677 """Intersection is the minimum of corresponding counts. 
678 679 >>> Counter('abbb') & Counter('bcc') 680 Counter({'b': 1}) 681 682 """ 683 if not isinstance(other, Counter): 684 return NotImplemented 685 _min = min 686 result = Counter() 687 if len(self) < len(other): 688 self, other = other, self 689 for elem in filter(self.__contains__, other): 690 newcount = _min(self[elem], other[elem]) 691 if newcount > 0: 692 result[elem] = newcount 693 return result 694 695 if sys.version_info[:2] < (2, 7): 696 OrderedDict = _OrderedDict 697 Counter = _Counter 698 else: 699 from collections import OrderedDict, Counter 700 701 if PY3: 702 def raise_with_traceback(exc, traceback=Ellipsis): 703 if traceback == Ellipsis: 704 _, _, traceback = sys.exc_info() 705 raise exc.with_traceback(traceback) 706 else: 707 # this version of raise is a syntax error in Python 3 708 exec(""" 709 def raise_with_traceback(exc, traceback=Ellipsis): 710 if traceback == Ellipsis: 711 _, _, traceback = sys.exc_info() 712 raise exc, None, traceback 713 """) 714 715 raise_with_traceback.__doc__ = """Raise exception with existing traceback. 716 If traceback is not passed, uses sys.exc_info() to get traceback.""" 717 718 719 # http://stackoverflow.com/questions/4126348 720 # Thanks to @martineau at SO 721 722 from dateutil import parser as _date_parser 723 import dateutil 724 if LooseVersion(dateutil.__version__) < '2.0': 725 @functools.wraps(_date_parser.parse) 726 def parse_date(timestr, *args, **kwargs): 727 timestr = bytes(timestr) 728 return _date_parser.parse(timestr, *args, **kwargs) 729 else: 730 parse_date = _date_parser.parse 731 732 733 class OrderedDefaultdict(OrderedDict): 734 735 def __init__(self, *args, **kwargs): 736 newdefault = None 737 newargs = () 738 if args: 739 newdefault = args[0] 740 if not (newdefault is None or callable(newdefault)): 741 raise TypeError('first argument must be callable or None') 742 newargs = args[1:] 743 self.default_factory = newdefault 744 super(self.__class__, self).__init__(*newargs, **kwargs) 745 746 def __missing__(self, key): 747 if self.default_factory is None: 748 raise KeyError(key) 749 self[key] = value = self.default_factory() 750 return value 751 752 def __reduce__(self): # optional, for pickle support 753 args = self.default_factory if self.default_factory else tuple() 754 return type(self), args, None, None, list(self.items()) 755 [end of pandas/compat/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
741b2fafbceaabdbfc8609f70f52aa605d160caa
ENH: return `NotImplemented` rather than raise TypeError in Period arithmetic. Right now, some of the error messages in pandas/tseries/period.py are not so great: ``` python In [9]: period = Period('2000-01-03', 'B') In [10]: period > 1 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-10-df287da31a9f> in <module>() ----> 1 period > 1 /pandas/tseries/period.pyc in f(self, other) 172 return func(self.ordinal, other.ordinal) 173 else: --> 174 raise TypeError(other) 175 176 f.__name__ = name TypeError: 1 ``` And in the case above, these methods should really return `NotImplemented` instead (so that they end up with an error message like): ``` python ----> 1 1 + period TypeError: unsupported operand type(s) for +: 'int' and 'Period' ``` Side note - that's only appropriate because these are only used as magic methods (so that the Python interpreter converts them into the TypeError shown above...).
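For context, here is a minimal sketch of the `NotImplemented` protocol the issue is asking for (a toy stand-in class, not the actual pandas implementation): when a binary special method returns `NotImplemented`, Python tries the reflected operation and, failing that, raises its own descriptive `TypeError`.

```python
class Period:
    """Toy stand-in used only to illustrate the protocol; not pandas code."""

    def __init__(self, ordinal, freq):
        self.ordinal = ordinal
        self.freq = freq

    def __add__(self, other):
        if isinstance(other, int):
            return Period(self.ordinal + other, self.freq)
        # Returning NotImplemented (instead of `raise TypeError(other)`)
        # lets the interpreter produce the standard error message.
        return NotImplemented

p = Period(0, "B")
try:
    p + "spam"
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for +: 'Period' and 'str'
```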
2014-06-17T16:04:40Z
<patch> diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt --- a/doc/source/v0.14.1.txt +++ b/doc/source/v0.14.1.txt @@ -130,7 +130,7 @@ Enhancements - All offsets ``apply``, ``rollforward`` and ``rollback`` can now handle ``np.datetime64``, previously results in ``ApplyTypeError`` (:issue:`7452`) - +- ``Period`` and ``PeriodIndex`` can contain ``NaT`` in its values (:issue:`7485`) .. _whatsnew_0141.performance: @@ -239,6 +239,9 @@ Bug Fixes - Bug in passing input with ``tzinfo`` to some offsets ``apply``, ``rollforward`` or ``rollback`` resets ``tzinfo`` or raises ``ValueError`` (:issue:`7465`) +- Bug in ``DatetimeIndex.to_period``, ``PeriodIndex.asobject``, ``PeriodIndex.to_timestamp`` doesn't preserve ``name`` (:issue:`7485`) +- Bug in ``DatetimeIndex.to_period`` and ``PeriodIndex.to_timestanp`` handle ``NaT`` incorrectly (:issue:`7228`) + - BUG in ``resample`` raises ``ValueError`` when target contains ``NaT`` (:issue:`7227`) diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -809,7 +809,7 @@ def to_period(self, freq=None): if freq is None: freq = get_period_alias(self.freqstr) - return PeriodIndex(self.values, freq=freq, tz=self.tz) + return PeriodIndex(self.values, name=self.name, freq=freq, tz=self.tz) def order(self, return_indexer=False, ascending=True): """ diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py --- a/pandas/tseries/period.py +++ b/pandas/tseries/period.py @@ -102,6 +102,12 @@ def __init__(self, value=None, freq=None, ordinal=None, converted = other.asfreq(freq) self.ordinal = converted.ordinal + elif com._is_null_datelike_scalar(value) or value in tslib._nat_strings: + self.ordinal = tslib.iNaT + if freq is None: + raise ValueError("If value is NaT, freq cannot be None " + "because it cannot be inferred") + elif isinstance(value, compat.string_types) or com.is_integer(value): if com.is_integer(value): value = str(value) @@ -136,6 +142,8 @@ def __eq__(self, other): if isinstance(other, Period): if other.freq != self.freq: raise ValueError("Cannot compare non-conforming periods") + if self.ordinal == tslib.iNaT or other.ordinal == tslib.iNaT: + return False return (self.ordinal == other.ordinal and _gfc(self.freq) == _gfc(other.freq)) return NotImplemented @@ -148,26 +156,38 @@ def __hash__(self): def __add__(self, other): if com.is_integer(other): - return Period(ordinal=self.ordinal + other, freq=self.freq) + if self.ordinal == tslib.iNaT: + ordinal = self.ordinal + else: + ordinal = self.ordinal + other + return Period(ordinal=ordinal, freq=self.freq) else: # pragma: no cover - raise TypeError(other) + return NotImplemented def __sub__(self, other): if com.is_integer(other): - return Period(ordinal=self.ordinal - other, freq=self.freq) + if self.ordinal == tslib.iNaT: + ordinal = self.ordinal + else: + ordinal = self.ordinal - other + return Period(ordinal=ordinal, freq=self.freq) if isinstance(other, Period): if other.freq != self.freq: raise ValueError("Cannot do arithmetic with " "non-conforming periods") + if self.ordinal == tslib.iNaT or other.ordinal == tslib.iNaT: + return Period(ordinal=tslib.iNaT, freq=self.freq) return self.ordinal - other.ordinal else: # pragma: no cover - raise TypeError(other) + return NotImplemented def _comp_method(func, name): def f(self, other): if isinstance(other, Period): if other.freq != self.freq: raise ValueError("Cannot compare non-conforming periods") + if self.ordinal == tslib.iNaT or other.ordinal == tslib.iNaT: + return 
False return func(self.ordinal, other.ordinal) else: raise TypeError(other) @@ -213,7 +233,10 @@ def start_time(self): @property def end_time(self): - ordinal = (self + 1).start_time.value - 1 + if self.ordinal == tslib.iNaT: + ordinal = self.ordinal + else: + ordinal = (self + 1).start_time.value - 1 return Timestamp(ordinal) def to_timestamp(self, freq=None, how='start', tz=None): @@ -480,6 +503,11 @@ def _period_index_cmp(opname): Wrap comparison operations to convert datetime-like to datetime64 """ def wrapper(self, other): + if opname == '__ne__': + fill_value = True + else: + fill_value = False + if isinstance(other, Period): func = getattr(self.values, opname) if other.freq != self.freq: @@ -489,12 +517,26 @@ def wrapper(self, other): elif isinstance(other, PeriodIndex): if other.freq != self.freq: raise AssertionError("Frequencies must be equal") - return getattr(self.values, opname)(other.values) + + result = getattr(self.values, opname)(other.values) + + mask = (com.mask_missing(self.values, tslib.iNaT) | + com.mask_missing(other.values, tslib.iNaT)) + if mask.any(): + result[mask] = fill_value + + return result else: other = Period(other, freq=self.freq) func = getattr(self.values, opname) result = func(other.ordinal) + if other.ordinal == tslib.iNaT: + result.fill(fill_value) + mask = self.values == tslib.iNaT + if mask.any(): + result[mask] = fill_value + return result return wrapper @@ -712,7 +754,7 @@ def asof_locs(self, where, mask): @property def asobject(self): - return Index(self._box_values(self.values), dtype=object) + return Index(self._box_values(self.values), name=self.name, dtype=object) def _array_values(self): return self.asobject @@ -768,11 +810,7 @@ def asfreq(self, freq=None, how='E'): end = how == 'E' new_data = tslib.period_asfreq_arr(self.values, base1, base2, end) - - result = new_data.view(PeriodIndex) - result.name = self.name - result.freq = freq - return result + return self._simple_new(new_data, self.name, freq=freq) def to_datetime(self, dayfirst=False): return self.to_timestamp() @@ -868,16 +906,23 @@ def shift(self, n): ------- shifted : PeriodIndex """ - if n == 0: - return self - - return PeriodIndex(data=self.values + n, freq=self.freq) + mask = self.values == tslib.iNaT + values = self.values + n + values[mask] = tslib.iNaT + return PeriodIndex(data=values, name=self.name, freq=self.freq) def __add__(self, other): - return PeriodIndex(ordinal=self.values + other, freq=self.freq) + try: + return self.shift(other) + except TypeError: + # self.values + other raises TypeError for invalid input + return NotImplemented def __sub__(self, other): - return PeriodIndex(ordinal=self.values - other, freq=self.freq) + try: + return self.shift(-other) + except TypeError: + return NotImplemented @property def inferred_type(self): @@ -1207,8 +1252,11 @@ def _get_ordinal_range(start, end, periods, freq): is_start_per = isinstance(start, Period) is_end_per = isinstance(end, Period) - if is_start_per and is_end_per and (start.freq != end.freq): + if is_start_per and is_end_per and start.freq != end.freq: raise ValueError('Start and end must have same freq') + if ((is_start_per and start.ordinal == tslib.iNaT) or + (is_end_per and end.ordinal == tslib.iNaT)): + raise ValueError('Start and end must not be NaT') if freq is None: if is_start_per: diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx --- a/pandas/tslib.pyx +++ b/pandas/tslib.pyx @@ -3028,6 +3028,9 @@ def dt64arr_to_periodarr(ndarray[int64_t] dtarr, int freq, tz=None): if tz is None: for i in range(l): + if 
dtarr[i] == iNaT: + out[i] = iNaT + continue pandas_datetime_to_datetimestruct(dtarr[i], PANDAS_FR_ns, &dts) out[i] = get_period_ordinal(dts.year, dts.month, dts.day, dts.hour, dts.min, dts.sec, dts.us, dts.ps, freq) @@ -3049,6 +3052,9 @@ def periodarr_to_dt64arr(ndarray[int64_t] periodarr, int freq): out = np.empty(l, dtype='i8') for i in range(l): + if periodarr[i] == iNaT: + out[i] = iNaT + continue out[i] = period_ordinal_to_dt64(periodarr[i], freq) return out @@ -3065,6 +3071,9 @@ cpdef int64_t period_asfreq(int64_t period_ordinal, int freq1, int freq2, cdef: int64_t retval + if period_ordinal == iNaT: + return iNaT + if end: retval = asfreq(period_ordinal, freq1, freq2, END) else: @@ -3100,6 +3109,9 @@ def period_asfreq_arr(ndarray[int64_t] arr, int freq1, int freq2, bint end): relation = START for i in range(n): + if arr[i] == iNaT: + result[i] = iNaT + continue val = func(arr[i], relation, &finfo) if val == INT32_MIN: raise ValueError("Unable to convert to desired frequency.") @@ -3120,6 +3132,9 @@ cpdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq): date_info dinfo float subsecond_fraction + if ordinal == iNaT: + return NPY_NAT + get_date_info(ordinal, freq, &dinfo) dts.year = dinfo.year @@ -3138,6 +3153,9 @@ def period_format(int64_t value, int freq, object fmt=None): cdef: int freq_group + if value == iNaT: + return repr(NaT) + if fmt is None: freq_group = (freq // 1000) * 1000 if freq_group == 1000: # FR_ANN @@ -3241,6 +3259,8 @@ def get_period_field(int code, int64_t value, int freq): cdef accessor f = _get_accessor_func(code) if f is NULL: raise ValueError('Unrecognized period code: %d' % code) + if value == iNaT: + return -1 return f(value, freq) def get_period_field_arr(int code, ndarray[int64_t] arr, int freq): @@ -3257,6 +3277,9 @@ def get_period_field_arr(int code, ndarray[int64_t] arr, int freq): out = np.empty(sz, dtype=np.int64) for i in range(sz): + if arr[i] == iNaT: + out[i] = -1 + continue out[i] = f(arr[i], freq) return out </patch>
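A side note on the recurring pattern in the patch above: missing periods are encoded with the integer sentinel `tslib.iNaT`, and each arithmetic or conversion path simply checks for and propagates that sentinel rather than computing a result. Below is a simplified, standalone sketch of the idea; the constant and helper are illustrative only, not the real pandas internals.

```python
INAT = -9223372036854775808  # int64 minimum, used here as a stand-in "NaT" sentinel

def shift_ordinal(ordinal, n):
    """Shift a period ordinal by n, propagating the NaT sentinel unchanged."""
    if ordinal == INAT:
        return INAT
    return ordinal + n

assert shift_ordinal(100, 5) == 105    # ordinary shift
assert shift_ordinal(INAT, 5) == INAT  # NaT stays NaT instead of becoming garbage
```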
[]
[]
numpy__numpy-3244
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> 2to3 run `zip` fixer zip is now an iterator. This is simiar to the filter and map changers. </issue> <code> [start of README.txt] 1 NumPy is the fundamental package needed for scientific computing with Python. 2 This package contains: 3 4 * a powerful N-dimensional array object 5 * sophisticated (broadcasting) functions 6 * tools for integrating C/C++ and Fortran code 7 * useful linear algebra, Fourier transform, and random number capabilities. 8 9 It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray. 10 11 More information can be found at the website: 12 13 http://scipy.org/NumPy 14 15 After installation, tests can be run with: 16 17 python -c 'import numpy; numpy.test()' 18 19 Starting in NumPy 1.7, deprecation warnings have been set to 'raise' by 20 default, so the -Wd command-line option is no longer necessary. 21 22 The most current development version is always available from our 23 git repository: 24 25 http://github.com/numpy/numpy 26 27 28 [end of README.txt] [start of numpy/doc/glossary.py] 1 """ 2 ======== 3 Glossary 4 ======== 5 6 .. glossary:: 7 8 along an axis 9 Axes are defined for arrays with more than one dimension. A 10 2-dimensional array has two corresponding axes: the first running 11 vertically downwards across rows (axis 0), and the second running 12 horizontally across columns (axis 1). 13 14 Many operation can take place along one of these axes. For example, 15 we can sum each row of an array, in which case we operate along 16 columns, or axis 1:: 17 18 >>> x = np.arange(12).reshape((3,4)) 19 20 >>> x 21 array([[ 0, 1, 2, 3], 22 [ 4, 5, 6, 7], 23 [ 8, 9, 10, 11]]) 24 25 >>> x.sum(axis=1) 26 array([ 6, 22, 38]) 27 28 array 29 A homogeneous container of numerical elements. Each element in the 30 array occupies a fixed amount of memory (hence homogeneous), and 31 can be a numerical element of a single type (such as float, int 32 or complex) or a combination (such as ``(float, int, float)``). Each 33 array has an associated data-type (or ``dtype``), which describes 34 the numerical type of its elements:: 35 36 >>> x = np.array([1, 2, 3], float) 37 38 >>> x 39 array([ 1., 2., 3.]) 40 41 >>> x.dtype # floating point number, 64 bits of memory per element 42 dtype('float64') 43 44 45 # More complicated data type: each array element is a combination of 46 # and integer and a floating point number 47 >>> np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)]) 48 array([(1, 2.0), (3, 4.0)], 49 dtype=[('x', '<i4'), ('y', '<f8')]) 50 51 Fast element-wise operations, called `ufuncs`_, operate on arrays. 52 53 array_like 54 Any sequence that can be interpreted as an ndarray. This includes 55 nested lists, tuples, scalars and existing arrays. 56 57 attribute 58 A property of an object that can be accessed using ``obj.attribute``, 59 e.g., ``shape`` is an attribute of an array:: 60 61 >>> x = np.array([1, 2, 3]) 62 >>> x.shape 63 (3,) 64 65 BLAS 66 `Basic Linear Algebra Subprograms <http://en.wikipedia.org/wiki/BLAS>`_ 67 68 broadcast 69 NumPy can do operations on arrays whose shapes are mismatched:: 70 71 >>> x = np.array([1, 2]) 72 >>> y = np.array([[3], [4]]) 73 74 >>> x 75 array([1, 2]) 76 77 >>> y 78 array([[3], 79 [4]]) 80 81 >>> x + y 82 array([[4, 5], 83 [5, 6]]) 84 85 See `doc.broadcasting`_ for more information. 
86 87 C order 88 See `row-major` 89 90 column-major 91 A way to represent items in a N-dimensional array in the 1-dimensional 92 computer memory. In column-major order, the leftmost index "varies the 93 fastest": for example the array:: 94 95 [[1, 2, 3], 96 [4, 5, 6]] 97 98 is represented in the column-major order as:: 99 100 [1, 4, 2, 5, 3, 6] 101 102 Column-major order is also known as the Fortran order, as the Fortran 103 programming language uses it. 104 105 decorator 106 An operator that transforms a function. For example, a ``log`` 107 decorator may be defined to print debugging information upon 108 function execution:: 109 110 >>> def log(f): 111 ... def new_logging_func(*args, **kwargs): 112 ... print "Logging call with parameters:", args, kwargs 113 ... return f(*args, **kwargs) 114 ... 115 ... return new_logging_func 116 117 Now, when we define a function, we can "decorate" it using ``log``:: 118 119 >>> @log 120 ... def add(a, b): 121 ... return a + b 122 123 Calling ``add`` then yields: 124 125 >>> add(1, 2) 126 Logging call with parameters: (1, 2) {} 127 3 128 129 dictionary 130 Resembling a language dictionary, which provides a mapping between 131 words and descriptions thereof, a Python dictionary is a mapping 132 between two objects:: 133 134 >>> x = {1: 'one', 'two': [1, 2]} 135 136 Here, `x` is a dictionary mapping keys to values, in this case 137 the integer 1 to the string "one", and the string "two" to 138 the list ``[1, 2]``. The values may be accessed using their 139 corresponding keys:: 140 141 >>> x[1] 142 'one' 143 144 >>> x['two'] 145 [1, 2] 146 147 Note that dictionaries are not stored in any specific order. Also, 148 most mutable (see *immutable* below) objects, such as lists, may not 149 be used as keys. 150 151 For more information on dictionaries, read the 152 `Python tutorial <http://docs.python.org/tut>`_. 153 154 Fortran order 155 See `column-major` 156 157 flattened 158 Collapsed to a one-dimensional array. See `ndarray.flatten`_ for details. 159 160 immutable 161 An object that cannot be modified after execution is called 162 immutable. Two common examples are strings and tuples. 163 164 instance 165 A class definition gives the blueprint for constructing an object:: 166 167 >>> class House(object): 168 ... wall_colour = 'white' 169 170 Yet, we have to *build* a house before it exists:: 171 172 >>> h = House() # build a house 173 174 Now, ``h`` is called a ``House`` instance. An instance is therefore 175 a specific realisation of a class. 176 177 iterable 178 A sequence that allows "walking" (iterating) over items, typically 179 using a loop such as:: 180 181 >>> x = [1, 2, 3] 182 >>> [item**2 for item in x] 183 [1, 4, 9] 184 185 It is often used in combintion with ``enumerate``:: 186 >>> keys = ['a','b','c'] 187 >>> for n, k in enumerate(keys): 188 ... print "Key %d: %s" % (n, k) 189 ... 190 Key 0: a 191 Key 1: b 192 Key 2: c 193 194 list 195 A Python container that can hold any number of objects or items. 
196 The items do not have to be of the same type, and can even be 197 lists themselves:: 198 199 >>> x = [2, 2.0, "two", [2, 2.0]] 200 201 The list `x` contains 4 items, each which can be accessed individually:: 202 203 >>> x[2] # the string 'two' 204 'two' 205 206 >>> x[3] # a list, containing an integer 2 and a float 2.0 207 [2, 2.0] 208 209 It is also possible to select more than one item at a time, 210 using *slicing*:: 211 212 >>> x[0:2] # or, equivalently, x[:2] 213 [2, 2.0] 214 215 In code, arrays are often conveniently expressed as nested lists:: 216 217 218 >>> np.array([[1, 2], [3, 4]]) 219 array([[1, 2], 220 [3, 4]]) 221 222 For more information, read the section on lists in the `Python 223 tutorial <http://docs.python.org/tut>`_. For a mapping 224 type (key-value), see *dictionary*. 225 226 mask 227 A boolean array, used to select only certain elements for an operation:: 228 229 >>> x = np.arange(5) 230 >>> x 231 array([0, 1, 2, 3, 4]) 232 233 >>> mask = (x > 2) 234 >>> mask 235 array([False, False, False, True, True], dtype=bool) 236 237 >>> x[mask] = -1 238 >>> x 239 array([ 0, 1, 2, -1, -1]) 240 241 masked array 242 Array that suppressed values indicated by a mask:: 243 244 >>> x = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True]) 245 >>> x 246 masked_array(data = [-- 2.0 --], 247 mask = [ True False True], 248 fill_value = 1e+20) 249 <BLANKLINE> 250 251 >>> x + [1, 2, 3] 252 masked_array(data = [-- 4.0 --], 253 mask = [ True False True], 254 fill_value = 1e+20) 255 <BLANKLINE> 256 257 258 Masked arrays are often used when operating on arrays containing 259 missing or invalid entries. 260 261 matrix 262 A 2-dimensional ndarray that preserves its two-dimensional nature 263 throughout operations. It has certain special operations, such as ``*`` 264 (matrix multiplication) and ``**`` (matrix power), defined:: 265 266 >>> x = np.mat([[1, 2], [3, 4]]) 267 268 >>> x 269 matrix([[1, 2], 270 [3, 4]]) 271 272 >>> x**2 273 matrix([[ 7, 10], 274 [15, 22]]) 275 276 method 277 A function associated with an object. For example, each ndarray has a 278 method called ``repeat``:: 279 280 >>> x = np.array([1, 2, 3]) 281 282 >>> x.repeat(2) 283 array([1, 1, 2, 2, 3, 3]) 284 285 ndarray 286 See *array*. 287 288 reference 289 If ``a`` is a reference to ``b``, then ``(a is b) == True``. Therefore, 290 ``a`` and ``b`` are different names for the same Python object. 291 292 row-major 293 A way to represent items in a N-dimensional array in the 1-dimensional 294 computer memory. In row-major order, the rightmost index "varies 295 the fastest": for example the array:: 296 297 [[1, 2, 3], 298 [4, 5, 6]] 299 300 is represented in the row-major order as:: 301 302 [1, 2, 3, 4, 5, 6] 303 304 Row-major order is also known as the C order, as the C programming 305 language uses it. New Numpy arrays are by default in row-major order. 306 307 self 308 Often seen in method signatures, ``self`` refers to the instance 309 of the associated class. For example: 310 311 >>> class Paintbrush(object): 312 ... color = 'blue' 313 ... 314 ... def paint(self): 315 ... print "Painting the city %s!" % self.color 316 ... 317 >>> p = Paintbrush() 318 >>> p.color = 'red' 319 >>> p.paint() # self refers to 'p' 320 Painting the city red! 
321 322 slice 323 Used to select only certain elements from a sequence:: 324 325 >>> x = range(5) 326 >>> x 327 [0, 1, 2, 3, 4] 328 329 >>> x[1:3] # slice from 1 to 3 (excluding 3 itself) 330 [1, 2] 331 332 >>> x[1:5:2] # slice from 1 to 5, but skipping every second element 333 [1, 3] 334 335 >>> x[::-1] # slice a sequence in reverse 336 [4, 3, 2, 1, 0] 337 338 Arrays may have more than one dimension, each which can be sliced 339 individually:: 340 341 >>> x = np.array([[1, 2], [3, 4]]) 342 >>> x 343 array([[1, 2], 344 [3, 4]]) 345 346 >>> x[:, 1] 347 array([2, 4]) 348 349 tuple 350 A sequence that may contain a variable number of types of any 351 kind. A tuple is immutable, i.e., once constructed it cannot be 352 changed. Similar to a list, it can be indexed and sliced:: 353 354 >>> x = (1, 'one', [1, 2]) 355 356 >>> x 357 (1, 'one', [1, 2]) 358 359 >>> x[0] 360 1 361 362 >>> x[:2] 363 (1, 'one') 364 365 A useful concept is "tuple unpacking", which allows variables to 366 be assigned to the contents of a tuple:: 367 368 >>> x, y = (1, 2) 369 >>> x, y = 1, 2 370 371 This is often used when a function returns multiple values: 372 373 >>> def return_many(): 374 ... return 1, 'alpha', None 375 376 >>> a, b, c = return_many() 377 >>> a, b, c 378 (1, 'alpha', None) 379 380 >>> a 381 1 382 >>> b 383 'alpha' 384 385 ufunc 386 Universal function. A fast element-wise array operation. Examples include 387 ``add``, ``sin`` and ``logical_or``. 388 389 view 390 An array that does not own its data, but refers to another array's 391 data instead. For example, we may create a view that only shows 392 every second element of another array:: 393 394 >>> x = np.arange(5) 395 >>> x 396 array([0, 1, 2, 3, 4]) 397 398 >>> y = x[::2] 399 >>> y 400 array([0, 2, 4]) 401 402 >>> x[0] = 3 # changing x changes y as well, since y is a view on x 403 >>> y 404 array([3, 2, 4]) 405 406 wrapper 407 Python is a high-level (highly abstracted, or English-like) language. 408 This abstraction comes at a price in execution speed, and sometimes 409 it becomes necessary to use lower level languages to do fast 410 computations. A wrapper is code that provides a bridge between 411 high and the low level languages, allowing, e.g., Python to execute 412 code written in C or Fortran. 413 414 Examples include ctypes, SWIG and Cython (which wraps C and C++) 415 and f2py (which wraps Fortran). 416 417 """ 418 from __future__ import division, absolute_import, print_function 419 420 [end of numpy/doc/glossary.py] [start of numpy/lib/arrayterator.py] 1 """ 2 A buffered iterator for big arrays. 3 4 This module solves the problem of iterating over a big file-based array 5 without having to read it into memory. The `Arrayterator` class wraps 6 an array object, and when iterated it will return sub-arrays with at most 7 a user-specified number of elements. 8 9 """ 10 from __future__ import division, absolute_import, print_function 11 12 import sys 13 from operator import mul 14 from functools import reduce 15 16 from numpy.compat import long 17 18 __all__ = ['Arrayterator'] 19 20 21 class Arrayterator(object): 22 """ 23 Buffered iterator for big arrays. 24 25 `Arrayterator` creates a buffered iterator for reading big arrays in small 26 contiguous blocks. The class is useful for objects stored in the 27 file system. It allows iteration over the object *without* reading 28 everything in memory; instead, small blocks are read and iterated over. 29 30 `Arrayterator` can be used with any object that supports multidimensional 31 slices. 
This includes NumPy arrays, but also variables from 32 Scientific.IO.NetCDF or pynetcdf for example. 33 34 Parameters 35 ---------- 36 var : array_like 37 The object to iterate over. 38 buf_size : int, optional 39 The buffer size. If `buf_size` is supplied, the maximum amount of 40 data that will be read into memory is `buf_size` elements. 41 Default is None, which will read as many element as possible 42 into memory. 43 44 Attributes 45 ---------- 46 var 47 buf_size 48 start 49 stop 50 step 51 shape 52 flat 53 54 See Also 55 -------- 56 ndenumerate : Multidimensional array iterator. 57 flatiter : Flat array iterator. 58 memmap : Create a memory-map to an array stored in a binary file on disk. 59 60 Notes 61 ----- 62 The algorithm works by first finding a "running dimension", along which 63 the blocks will be extracted. Given an array of dimensions 64 ``(d1, d2, ..., dn)``, e.g. if `buf_size` is smaller than ``d1``, the 65 first dimension will be used. If, on the other hand, 66 ``d1 < buf_size < d1*d2`` the second dimension will be used, and so on. 67 Blocks are extracted along this dimension, and when the last block is 68 returned the process continues from the next dimension, until all 69 elements have been read. 70 71 Examples 72 -------- 73 >>> import numpy as np 74 >>> a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6) 75 >>> a_itor = np.lib.arrayterator.Arrayterator(a, 2) 76 >>> a_itor.shape 77 (3, 4, 5, 6) 78 79 Now we can iterate over ``a_itor``, and it will return arrays of size 80 two. Since `buf_size` was smaller than any dimension, the first 81 dimension will be iterated over first: 82 83 >>> for subarr in a_itor: 84 ... if not subarr.all(): 85 ... print subarr, subarr.shape 86 ... 87 [[[[0 1]]]] (1, 1, 1, 2) 88 89 """ 90 91 def __init__(self, var, buf_size=None): 92 self.var = var 93 self.buf_size = buf_size 94 95 self.start = [0 for dim in var.shape] 96 self.stop = [dim for dim in var.shape] 97 self.step = [1 for dim in var.shape] 98 99 def __getattr__(self, attr): 100 return getattr(self.var, attr) 101 102 def __getitem__(self, index): 103 """ 104 Return a new arrayterator. 105 106 """ 107 # Fix index, handling ellipsis and incomplete slices. 108 if not isinstance(index, tuple): index = (index,) 109 fixed = [] 110 length, dims = len(index), len(self.shape) 111 for slice_ in index: 112 if slice_ is Ellipsis: 113 fixed.extend([slice(None)] * (dims-length+1)) 114 length = len(fixed) 115 elif isinstance(slice_, (int, long)): 116 fixed.append(slice(slice_, slice_+1, 1)) 117 else: 118 fixed.append(slice_) 119 index = tuple(fixed) 120 if len(index) < dims: 121 index += (slice(None),) * (dims-len(index)) 122 123 # Return a new arrayterator object. 124 out = self.__class__(self.var, self.buf_size) 125 for i, (start, stop, step, slice_) in enumerate( 126 zip(self.start, self.stop, self.step, index)): 127 out.start[i] = start + (slice_.start or 0) 128 out.step[i] = step * (slice_.step or 1) 129 out.stop[i] = start + (slice_.stop or stop-start) 130 out.stop[i] = min(stop, out.stop[i]) 131 return out 132 133 def __array__(self): 134 """ 135 Return corresponding data. 136 137 """ 138 slice_ = tuple(slice(*t) for t in zip( 139 self.start, self.stop, self.step)) 140 return self.var[slice_] 141 142 @property 143 def flat(self): 144 """ 145 A 1-D flat iterator for Arrayterator objects. 146 147 This iterator returns elements of the array to be iterated over in 148 `Arrayterator` one by one. It is similar to `flatiter`. 
149 150 See Also 151 -------- 152 `Arrayterator` 153 flatiter 154 155 Examples 156 -------- 157 >>> a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6) 158 >>> a_itor = np.lib.arrayterator.Arrayterator(a, 2) 159 160 >>> for subarr in a_itor.flat: 161 ... if not subarr: 162 ... print subarr, type(subarr) 163 ... 164 0 <type 'numpy.int32'> 165 166 """ 167 for block in self: 168 for value in block.flat: 169 yield value 170 171 @property 172 def shape(self): 173 """ 174 The shape of the array to be iterated over. 175 176 For an example, see `Arrayterator`. 177 178 """ 179 return tuple(((stop-start-1)//step+1) for start, stop, step in 180 zip(self.start, self.stop, self.step)) 181 182 def __iter__(self): 183 # Skip arrays with degenerate dimensions 184 if [dim for dim in self.shape if dim <= 0]: raise StopIteration 185 186 start = self.start[:] 187 stop = self.stop[:] 188 step = self.step[:] 189 ndims = len(self.var.shape) 190 191 while 1: 192 count = self.buf_size or reduce(mul, self.shape) 193 194 # iterate over each dimension, looking for the 195 # running dimension (ie, the dimension along which 196 # the blocks will be built from) 197 rundim = 0 198 for i in range(ndims-1, -1, -1): 199 # if count is zero we ran out of elements to read 200 # along higher dimensions, so we read only a single position 201 if count == 0: 202 stop[i] = start[i]+1 203 elif count <= self.shape[i]: # limit along this dimension 204 stop[i] = start[i] + count*step[i] 205 rundim = i 206 else: 207 stop[i] = self.stop[i] # read everything along this 208 # dimension 209 stop[i] = min(self.stop[i], stop[i]) 210 count = count//self.shape[i] 211 212 # yield a block 213 slice_ = tuple(slice(*t) for t in zip(start, stop, step)) 214 yield self.var[slice_] 215 216 # Update start position, taking care of overflow to 217 # other dimensions 218 start[rundim] = stop[rundim] # start where we stopped 219 for i in range(ndims-1, 0, -1): 220 if start[i] >= self.stop[i]: 221 start[i] = self.start[i] 222 start[i-1] += self.step[i-1] 223 if start[0] >= self.stop[0]: 224 raise StopIteration 225 [end of numpy/lib/arrayterator.py] [start of setup.py] 1 #!/usr/bin/env python 2 """NumPy: array processing for numbers, strings, records, and objects. 3 4 NumPy is a general-purpose array-processing package designed to 5 efficiently manipulate large multi-dimensional arrays of arbitrary 6 records without sacrificing too much speed for small multi-dimensional 7 arrays. NumPy is built on the Numeric code base and adds features 8 introduced by numarray as well as an extended C-API and the ability to 9 create arrays of arbitrary type which also makes NumPy suitable for 10 interfacing with general-purpose data-base applications. 11 12 There are also basic facilities for discrete fourier transform, 13 basic linear algebra and random number generation. 
14 15 """ 16 from __future__ import division, print_function 17 18 DOCLINES = __doc__.split("\n") 19 20 import os 21 import shutil 22 import sys 23 import re 24 import subprocess 25 26 if sys.version_info[0] >= 3: 27 import builtins 28 else: 29 import __builtin__ as builtins 30 31 CLASSIFIERS = """\ 32 Development Status :: 5 - Production/Stable 33 Intended Audience :: Science/Research 34 Intended Audience :: Developers 35 License :: OSI Approved 36 Programming Language :: C 37 Programming Language :: Python 38 Programming Language :: Python :: 3 39 Topic :: Software Development 40 Topic :: Scientific/Engineering 41 Operating System :: Microsoft :: Windows 42 Operating System :: POSIX 43 Operating System :: Unix 44 Operating System :: MacOS 45 """ 46 47 NAME = 'numpy' 48 MAINTAINER = "NumPy Developers" 49 MAINTAINER_EMAIL = "[email protected]" 50 DESCRIPTION = DOCLINES[0] 51 LONG_DESCRIPTION = "\n".join(DOCLINES[2:]) 52 URL = "http://www.numpy.org" 53 DOWNLOAD_URL = "http://sourceforge.net/projects/numpy/files/NumPy/" 54 LICENSE = 'BSD' 55 CLASSIFIERS = [_f for _f in CLASSIFIERS.split('\n') if _f] 56 AUTHOR = "Travis E. Oliphant et al." 57 AUTHOR_EMAIL = "[email protected]" 58 PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"] 59 MAJOR = 1 60 MINOR = 8 61 MICRO = 0 62 ISRELEASED = False 63 VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO) 64 65 # Return the git revision as a string 66 def git_version(): 67 def _minimal_ext_cmd(cmd): 68 # construct minimal environment 69 env = {} 70 for k in ['SYSTEMROOT', 'PATH']: 71 v = os.environ.get(k) 72 if v is not None: 73 env[k] = v 74 # LANGUAGE is used on win32 75 env['LANGUAGE'] = 'C' 76 env['LANG'] = 'C' 77 env['LC_ALL'] = 'C' 78 out = subprocess.Popen(cmd, stdout = subprocess.PIPE, env=env).communicate()[0] 79 return out 80 81 try: 82 out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD']) 83 GIT_REVISION = out.strip().decode('ascii') 84 except OSError: 85 GIT_REVISION = "Unknown" 86 87 return GIT_REVISION 88 89 # BEFORE importing distutils, remove MANIFEST. distutils doesn't properly 90 # update it when the contents of directories change. 91 if os.path.exists('MANIFEST'): os.remove('MANIFEST') 92 93 # This is a bit hackish: we are setting a global variable so that the main 94 # numpy __init__ can detect if it is being loaded by the setup routine, to 95 # avoid attempting to load components that aren't built yet. While ugly, it's 96 # a lot more robust than what was previously being used. 97 builtins.__NUMPY_SETUP__ = True 98 99 100 def write_version_py(filename='numpy/version.py'): 101 cnt = """ 102 # THIS FILE IS GENERATED FROM NUMPY SETUP.PY 103 short_version = '%(version)s' 104 version = '%(version)s' 105 full_version = '%(full_version)s' 106 git_revision = '%(git_revision)s' 107 release = %(isrelease)s 108 109 if not release: 110 version = full_version 111 """ 112 # Adding the git rev number needs to be done inside write_version_py(), 113 # otherwise the import of numpy.version messes up the build under Python 3. 114 FULLVERSION = VERSION 115 if os.path.exists('.git'): 116 GIT_REVISION = git_version() 117 elif os.path.exists('numpy/version.py'): 118 # must be a source distribution, use existing version file 119 try: 120 from numpy.version import git_revision as GIT_REVISION 121 except ImportError: 122 raise ImportError("Unable to import git_revision. 
Try removing " \ 123 "numpy/version.py and the build directory " \ 124 "before building.") 125 else: 126 GIT_REVISION = "Unknown" 127 128 if not ISRELEASED: 129 FULLVERSION += '.dev-' + GIT_REVISION[:7] 130 131 a = open(filename, 'w') 132 try: 133 a.write(cnt % {'version': VERSION, 134 'full_version' : FULLVERSION, 135 'git_revision' : GIT_REVISION, 136 'isrelease': str(ISRELEASED)}) 137 finally: 138 a.close() 139 140 def configuration(parent_package='',top_path=None): 141 from numpy.distutils.misc_util import Configuration 142 143 config = Configuration(None, parent_package, top_path) 144 config.set_options(ignore_setup_xxx_py=True, 145 assume_default_configuration=True, 146 delegate_options_to_subpackages=True, 147 quiet=True) 148 149 config.add_subpackage('numpy') 150 151 config.get_version('numpy/version.py') # sets config.version 152 153 return config 154 155 def setup_package(): 156 157 # Perform 2to3 if needed 158 local_path = os.path.dirname(os.path.abspath(sys.argv[0])) 159 src_path = local_path 160 161 if sys.version_info[0] == 3: 162 src_path = os.path.join(local_path, 'build', 'py3k') 163 sys.path.insert(0, os.path.join(local_path, 'tools')) 164 import py3tool 165 print("Converting to Python3 via 2to3...") 166 py3tool.sync_2to3('numpy', os.path.join(src_path, 'numpy')) 167 168 site_cfg = os.path.join(local_path, 'site.cfg') 169 if os.path.isfile(site_cfg): 170 shutil.copy(site_cfg, src_path) 171 172 # Ugly hack to make pip work with Python 3, see #1857. 173 # Explanation: pip messes with __file__ which interacts badly with the 174 # change in directory due to the 2to3 conversion. Therefore we restore 175 # __file__ to what it would have been otherwise. 176 global __file__ 177 __file__ = os.path.join(os.curdir, os.path.basename(__file__)) 178 if '--egg-base' in sys.argv: 179 # Change pip-egg-info entry to absolute path, so pip can find it 180 # after changing directory. 181 idx = sys.argv.index('--egg-base') 182 if sys.argv[idx + 1] == 'pip-egg-info': 183 sys.argv[idx + 1] = os.path.join(local_path, 'pip-egg-info') 184 185 old_path = os.getcwd() 186 os.chdir(src_path) 187 sys.path.insert(0, src_path) 188 189 # Rewrite the version file everytime 190 write_version_py() 191 192 # Run build 193 from numpy.distutils.core import setup 194 195 try: 196 setup( 197 name=NAME, 198 maintainer=MAINTAINER, 199 maintainer_email=MAINTAINER_EMAIL, 200 description=DESCRIPTION, 201 long_description=LONG_DESCRIPTION, 202 url=URL, 203 download_url=DOWNLOAD_URL, 204 license=LICENSE, 205 classifiers=CLASSIFIERS, 206 author=AUTHOR, 207 author_email=AUTHOR_EMAIL, 208 platforms=PLATFORMS, 209 configuration=configuration ) 210 finally: 211 del sys.path[0] 212 os.chdir(old_path) 213 return 214 215 if __name__ == '__main__': 216 setup_package() 217 [end of setup.py] [start of tools/py3tool.py] 1 #!/usr/bin/env python3 2 # -*- python -*- 3 """ 4 %prog SUBMODULE... 5 6 Hack to pipe submodules of Numpy through 2to3 and build them in-place 7 one-by-one. 8 9 Example usage: 10 11 python3 tools/py3tool.py testing distutils core 12 13 This will copy files to _py3k/numpy, add a dummy __init__.py and 14 version.py on the top level, and copy and 2to3 the files of the three 15 submodules. 16 17 When running py3tool again, only changed files are re-processed, which 18 makes the test-bugfix cycle faster. 
19 20 """ 21 from __future__ import division, absolute_import, print_function 22 23 from optparse import OptionParser 24 import shutil 25 import os 26 import sys 27 import re 28 import subprocess 29 import fnmatch 30 31 if os.environ.get('USE_2TO3CACHE'): 32 import lib2to3cache 33 34 BASE = os.path.normpath(os.path.join(os.path.dirname(__file__), '..')) 35 TEMP = os.path.normpath(os.path.join(BASE, '_py3k')) 36 37 SCRIPT_2TO3 = os.path.join(BASE, 'tools', '2to3.py') 38 39 EXTRA_2TO3_FLAGS = { 40 'numpy/core/defchararray.py': '-x unicode', 41 'numpy/compat/py3k.py': '-x unicode', 42 'numpy/ma/timer_comparison.py': 'skip', 43 } 44 45 # Names of fixers to skip when running 2to3. This is a complete list of 46 # available fixers, with fixers not currently skipped commented out. 47 FIXES_TO_SKIP = [ 48 'apply', 49 # 'basestring', 50 'buffer', 51 'callable', 52 'dict', 53 'exec', 54 'execfile', 55 'exitfunc', 56 'filter', 57 'funcattrs', 58 'future', 59 'getcwdu', 60 'has_key', 61 # 'idioms', 62 'import', 63 'imports', 64 'imports2', 65 'input', 66 'intern', 67 # 'isinstance', 68 'itertools', 69 'itertools_imports', 70 'long', 71 'map', 72 'metaclass', 73 'methodattrs', 74 'ne', 75 # 'next', 76 # 'nonzero', 77 'numliterals', 78 'operator', 79 'paren', 80 'print', 81 'raise', 82 'raw_input', 83 'reduce', 84 # 'renames', 85 'repr', 86 'setliteral', 87 'standarderror', 88 'sys_exc', 89 'throw', 90 'tuple_params', 91 # 'types', 92 # 'unicode', 93 # 'urllib', 94 # 'ws_comma', 95 'xrange', 96 'xreadlines', 97 # 'zip', 98 ] 99 100 skip_fixes= [] 101 for _t in FIXES_TO_SKIP: 102 skip_fixes.append('-x') 103 skip_fixes.append(_t) 104 105 106 def main(): 107 p = OptionParser(usage=__doc__.strip()) 108 p.add_option("--clean", "-c", action="store_true", 109 help="clean source directory") 110 options, args = p.parse_args() 111 112 if not args: 113 p.error('no submodules given') 114 else: 115 dirs = ['numpy/%s' % x for x in map(os.path.basename, args)] 116 117 # Prepare 118 if not os.path.isdir(TEMP): 119 os.makedirs(TEMP) 120 121 # Set up dummy files (for building only submodules) 122 dummy_files = { 123 '__init__.py': 'from numpy.version import version as __version__', 124 'version.py': 'version = "1.4.0.dev"' 125 } 126 127 for fn, content in dummy_files.items(): 128 fn = os.path.join(TEMP, 'numpy', fn) 129 if not os.path.isfile(fn): 130 try: 131 os.makedirs(os.path.dirname(fn)) 132 except OSError: 133 pass 134 f = open(fn, 'wb+') 135 f.write(content.encode('ascii')) 136 f.close() 137 138 # Environment 139 pp = [os.path.abspath(TEMP)] 140 def getenv(): 141 env = dict(os.environ) 142 env.update({'PYTHONPATH': ':'.join(pp)}) 143 return env 144 145 # Copy 146 for d in dirs: 147 src = os.path.join(BASE, d) 148 dst = os.path.join(TEMP, d) 149 150 # Run 2to3 151 sync_2to3(dst=dst, 152 src=src, 153 patchfile=os.path.join(TEMP, os.path.basename(d) + '.patch'), 154 clean=options.clean) 155 156 # Run setup.py, falling back to Pdb post-mortem on exceptions 157 setup_py = os.path.join(dst, 'setup.py') 158 if os.path.isfile(setup_py): 159 code = """\ 160 import pdb, sys, traceback 161 p = pdb.Pdb() 162 try: 163 import __main__ 164 __main__.__dict__.update({ 165 "__name__": "__main__", "__file__": "setup.py", 166 "__builtins__": __builtins__}) 167 fp = open("setup.py", "rb") 168 try: 169 exec(compile(fp.read(), "setup.py", 'exec')) 170 finally: 171 fp.close() 172 except SystemExit: 173 raise 174 except: 175 traceback.print_exc() 176 t = sys.exc_info()[2] 177 p.interaction(None, t) 178 """ 179 ret = 
subprocess.call([sys.executable, '-c', code, 180 'build_ext', '-i'], 181 cwd=dst, 182 env=getenv()) 183 if ret != 0: 184 raise RuntimeError("Build failed.") 185 186 # Run nosetests 187 subprocess.call(['nosetests3', '-v', d], cwd=TEMP) 188 189 190 def walk_sync(dir1, dir2, _seen=None): 191 if _seen is None: 192 seen = {} 193 else: 194 seen = _seen 195 196 if not dir1.endswith(os.path.sep): 197 dir1 = dir1 + os.path.sep 198 199 # Walk through stuff (which we haven't yet gone through) in dir1 200 for root, dirs, files in os.walk(dir1): 201 sub = root[len(dir1):] 202 if sub in seen: 203 dirs = [x for x in dirs if x not in seen[sub][0]] 204 files = [x for x in files if x not in seen[sub][1]] 205 seen[sub][0].extend(dirs) 206 seen[sub][1].extend(files) 207 else: 208 seen[sub] = (dirs, files) 209 if not dirs and not files: 210 continue 211 yield os.path.join(dir1, sub), os.path.join(dir2, sub), dirs, files 212 213 if _seen is None: 214 # Walk through stuff (which we haven't yet gone through) in dir2 215 for root2, root1, dirs, files in walk_sync(dir2, dir1, _seen=seen): 216 yield root1, root2, dirs, files 217 218 def sync_2to3(src, dst, patchfile=None, clean=False): 219 import lib2to3.main 220 from io import StringIO 221 222 to_convert = [] 223 224 for src_dir, dst_dir, dirs, files in walk_sync(src, dst): 225 for fn in dirs + files: 226 src_fn = os.path.join(src_dir, fn) 227 dst_fn = os.path.join(dst_dir, fn) 228 229 # skip temporary etc. files 230 if fn.startswith('.#') or fn.endswith('~'): 231 continue 232 233 # remove non-existing 234 if os.path.exists(dst_fn) and not os.path.exists(src_fn): 235 if clean: 236 if os.path.isdir(dst_fn): 237 shutil.rmtree(dst_fn) 238 else: 239 os.unlink(dst_fn) 240 continue 241 242 # make directories 243 if os.path.isdir(src_fn): 244 if not os.path.isdir(dst_fn): 245 os.makedirs(dst_fn) 246 continue 247 248 dst_dir = os.path.dirname(dst_fn) 249 if os.path.isfile(dst_fn) and not os.path.isdir(dst_dir): 250 os.makedirs(dst_dir) 251 252 # don't replace up-to-date files 253 try: 254 if os.path.isfile(dst_fn) and \ 255 os.stat(dst_fn).st_mtime >= os.stat(src_fn).st_mtime: 256 continue 257 except OSError: 258 pass 259 260 # copy file 261 shutil.copyfile(src_fn, dst_fn) 262 263 # add .py files to 2to3 list 264 if dst_fn.endswith('.py'): 265 to_convert.append((src_fn, dst_fn)) 266 267 # run 2to3 268 flag_sets = {} 269 for fn, dst_fn in to_convert: 270 flag = '' 271 for pat, opt in EXTRA_2TO3_FLAGS.items(): 272 if fnmatch.fnmatch(fn, pat): 273 flag = opt 274 break 275 flag_sets.setdefault(flag, []).append(dst_fn) 276 277 if patchfile: 278 p = open(patchfile, 'wb+') 279 else: 280 p = open(os.devnull, 'wb') 281 282 for flags, filenames in flag_sets.items(): 283 if flags == 'skip': 284 continue 285 286 _old_stdout = sys.stdout 287 try: 288 sys.stdout = StringIO() 289 opt = [] 290 opt.extend(['-w', '-n']) 291 opt.extend(skip_fixes) 292 opt.extend(flags.split()) 293 opt.extend(filenames) 294 lib2to3.main.main("lib2to3.fixes", opt) 295 finally: 296 sys.stdout = _old_stdout 297 298 p.close() 299 300 if __name__ == "__main__": 301 main() 302 [end of tools/py3tool.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
numpy/numpy
e589c6ed1dac7755bb7bd9e181a43ebeff62dcec
2to3 run `zip` fixer. zip is now an iterator. This is similar to the filter and map changes.
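To make the behavioural change concrete, here is a small illustrative snippet (Python 3 semantics). Code that indexes or re-iterates the result has to materialise it first, which is exactly what the patch below does with calls such as `list(zip(names, formats))`.

```python
pairs = zip([1, 2, 3], "abc")   # a lazy iterator under Python 3

print(list(pairs))              # [(1, 'a'), (2, 'b'), (3, 'c')]
print(list(pairs))              # []  -- the iterator is already exhausted

rows = list(zip([1, 2, 3], "abc"))  # materialise once, reuse freely
print(rows[0])                  # (1, 'a')
```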
2013-04-13T23:10:40Z
<patch> diff --git a/doc/sphinxext/numpydoc/compiler_unparse.py b/doc/sphinxext/numpydoc/compiler_unparse.py --- a/doc/sphinxext/numpydoc/compiler_unparse.py +++ b/doc/sphinxext/numpydoc/compiler_unparse.py @@ -18,7 +18,7 @@ if sys.version_info[0] >= 3: from io import StringIO else: - from io import StringIO + from StringIO import StringIO def unparse(ast, single_line_functions=False): s = StringIO() @@ -106,13 +106,13 @@ def _And(self, t): if i != len(t.nodes)-1: self._write(") and (") self._write(")") - + def _AssAttr(self, t): """ Handle assigning an attribute of an object """ self._dispatch(t.expr) self._write('.'+t.attrname) - + def _Assign(self, t): """ Expression Assignment such as "a = 1". @@ -150,36 +150,36 @@ def _AssTuple(self, t): def _AugAssign(self, t): """ +=,-=,*=,/=,**=, etc. operations """ - + self._fill() self._dispatch(t.node) self._write(' '+t.op+' ') self._dispatch(t.expr) if not self._do_indent: self._write(';') - + def _Bitand(self, t): """ Bit and operation. """ - + for i, node in enumerate(t.nodes): self._write("(") self._dispatch(node) self._write(")") if i != len(t.nodes)-1: self._write(" & ") - + def _Bitor(self, t): """ Bit or operation """ - + for i, node in enumerate(t.nodes): self._write("(") self._dispatch(node) self._write(")") if i != len(t.nodes)-1: self._write(" | ") - + def _CallFunc(self, t): """ Function call. """ @@ -254,7 +254,7 @@ def _From(self, t): self._write(name) if asname is not None: self._write(" as "+asname) - + def _Function(self, t): """ Handle function definitions """ diff --git a/numpy/lib/_iotools.py b/numpy/lib/_iotools.py --- a/numpy/lib/_iotools.py +++ b/numpy/lib/_iotools.py @@ -855,7 +855,7 @@ def easy_dtype(ndtype, names=None, defaultfmt="f%i", **validationargs): if nbtypes == 0: formats = tuple([ndtype.type] * len(names)) names = validate(names, defaultfmt=defaultfmt) - ndtype = np.dtype(zip(names, formats)) + ndtype = np.dtype(list(zip(names, formats))) # Structured dtype: just validate the names as needed else: ndtype.names = validate(names, nbfields=nbtypes, diff --git a/numpy/lib/npyio.py b/numpy/lib/npyio.py --- a/numpy/lib/npyio.py +++ b/numpy/lib/npyio.py @@ -1664,11 +1664,11 @@ def genfromtxt(fname, dtype=float, comments='#', delimiter=None, # rows[i] = tuple([convert(val) # for (convert, val) in zip(conversionfuncs, vals)]) if loose: - rows = zip(*[[converter._loose_call(_r) for _r in map(itemgetter(i), rows)] - for (i, converter) in enumerate(converters)]) + rows = list(zip(*[[converter._loose_call(_r) for _r in map(itemgetter(i), rows)] + for (i, converter) in enumerate(converters)])) else: - rows = zip(*[[converter._strict_call(_r) for _r in map(itemgetter(i), rows)] - for (i, converter) in enumerate(converters)]) + rows = list(zip(*[[converter._strict_call(_r) for _r in map(itemgetter(i), rows)] + for (i, converter) in enumerate(converters)])) # Reset the dtype data = rows if dtype is None: @@ -1693,8 +1693,8 @@ def genfromtxt(fname, dtype=float, comments='#', delimiter=None, mdtype = [(defaultfmt % i, np.bool) for (i, dt) in enumerate(column_types)] else: - ddtype = zip(names, column_types) - mdtype = zip(names, [np.bool] * len(column_types)) + ddtype = list(zip(names, column_types)) + mdtype = list(zip(names, [np.bool] * len(column_types))) output = np.array(data, dtype=ddtype) if usemask: outputmask = np.array(masks, dtype=mdtype) diff --git a/numpy/ma/mrecords.py b/numpy/ma/mrecords.py --- a/numpy/ma/mrecords.py +++ b/numpy/ma/mrecords.py @@ -508,7 +508,7 @@ def fromarrays(arraylist, dtype=None, shape=None, 
formats=None, dtype=dtype, shape=shape, formats=formats, names=names, titles=titles, aligned=aligned, byteorder=byteorder).view(mrecarray) - _array._mask.flat = zip(*masklist) + _array._mask.flat = list(zip(*masklist)) if fill_value is not None: _array.fill_value = fill_value return _array diff --git a/tools/py3tool.py b/tools/py3tool.py --- a/tools/py3tool.py +++ b/tools/py3tool.py @@ -94,7 +94,7 @@ # 'ws_comma', 'xrange', 'xreadlines', -# 'zip', + 'zip', ] skip_fixes= [] </patch>
[]
[]
conda__conda-6773
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> conda-env may update conda while it's still in use (cc @levy5674, @patricksnape) Issue reported in https://github.com/conda/conda/issues/6606#issuecomment-355149329: `conda-env` can update `conda` (either explicitly or by `auto_update_conda = True`) while it's still using it and thus makes the current process get into an inconsistent state (using parts of the old as well as updated `conda`). `conda-env` loops over the different dependency types in https://github.com/conda/conda/blob/4.4.6/conda_env/cli/main_update.py#L104. If it updates `conda`, other subsequent updates, e.g., using `conda_env.installers.pip`, may already introduce parts of the updated `conda` into the current process, see: ```yaml # environment.yaml dependencies: - conda - pip: - bumpversion ``` ```bash (ins)$ cat environment.yml dependencies: - conda - pip: - bumpversion (ins)$ bash Miniconda3-4.3.31-Linux-x86_64.sh -bp ./mc PREFIX=/home/maba/code/conda/mc installing: python-3.6.3-h6c0c0dc_5 ... Python 3.6.3 :: Anaconda, Inc. installing: ca-certificates-2017.08.26-h1d4fec5_0 ... installing: conda-env-2.6.0-h36134e3_1 ... installing: libgcc-ng-7.2.0-h7cc24e2_2 ... installing: libstdcxx-ng-7.2.0-h7a57d05_2 ... installing: libffi-3.2.1-hd88cf55_4 ... installing: ncurses-6.0-h9df7e31_2 ... installing: openssl-1.0.2n-hb7f436b_0 ... installing: tk-8.6.7-hc745277_3 ... installing: xz-5.2.3-h55aa19d_2 ... installing: yaml-0.1.7-had09818_2 ... installing: zlib-1.2.11-ha838bed_2 ... installing: libedit-3.1-heed3624_0 ... installing: readline-7.0-ha6073c6_4 ... installing: sqlite-3.20.1-hb898158_2 ... installing: asn1crypto-0.23.0-py36h4639342_0 ... installing: certifi-2017.11.5-py36hf29ccca_0 ... installing: chardet-3.0.4-py36h0f667ec_1 ... installing: idna-2.6-py36h82fb2a8_1 ... installing: pycosat-0.6.3-py36h0a5515d_0 ... installing: pycparser-2.18-py36hf9f622e_1 ... installing: pysocks-1.6.7-py36hd97a5b1_1 ... installing: ruamel_yaml-0.11.14-py36ha2fb22d_2 ... installing: six-1.11.0-py36h372c433_1 ... installing: cffi-1.11.2-py36h2825082_0 ... installing: setuptools-36.5.0-py36he42e2e1_0 ... installing: cryptography-2.1.4-py36hd09be54_0 ... installing: wheel-0.30.0-py36hfd4bba0_1 ... installing: pip-9.0.1-py36h6c6f9ce_4 ... installing: pyopenssl-17.5.0-py36h20ba746_0 ... installing: urllib3-1.22-py36hbe7ace6_0 ... installing: requests-2.18.4-py36he2e5f8d_1 ... installing: conda-4.3.31-py36_0 ... installation finished. (ins)$ source mc/bin/activate (ins)(root) $ conda env update -n root Fetching package metadata ........... Solving package specifications: . conda-4.4.6-py 100% |################################################################################################################################################################################| Time: 0:00:01 669.69 kB/s An unexpected error has occurred. 
Please consider posting the following information to the conda GitHub issue tracker at: https://github.com/conda/conda/issues Traceback (most recent call last): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 640, in conda_exception_handler original data Length: %(original_data_length)d File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main_update.py", line 106, in execute installer = get_installer(installer_type) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/installers/pip.py", line 40, in _pip_install_via_requirements args, pip_version = pip_args(prefix) ValueError: too many values to unpack (expected 2) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/maba/code/conda/mc/bin/conda-env", line 11, in <module> sys.exit(main()) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main.py", line 76, in main File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 644, in conda_exception_handler 'path': path, File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 634, in handle_exception def __init__(self, path, placeholder, new_prefix, original_data_length, new_data_length): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 595, in print_unexpected_error_message File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/cli/main_info.py", line 19, in <module> from .common import print_envs_list, stdout_json ImportError: cannot import name 'print_envs_list' ``` This of course affects `conda 4.4` as well: <details><summary> logs for conda 4.4.x (using 4.4.0rc2 to let it trip at the same changed import as 4.3.31 did) </summary><p> ```bash (ins)$ cat environment.yml dependencies: - conda - pip: - bumpversion (ins)$ bash Miniconda3-4.3.31-Linux-x86_64.sh -bp ./mc PREFIX=/home/maba/code/conda/mc installing: python-3.6.3-h6c0c0dc_5 ... Python 3.6.3 :: Anaconda, Inc. installing: ca-certificates-2017.08.26-h1d4fec5_0 ... installing: conda-env-2.6.0-h36134e3_1 ... installing: libgcc-ng-7.2.0-h7cc24e2_2 ... installing: libstdcxx-ng-7.2.0-h7a57d05_2 ... installing: libffi-3.2.1-hd88cf55_4 ... installing: ncurses-6.0-h9df7e31_2 ... installing: openssl-1.0.2n-hb7f436b_0 ... installing: tk-8.6.7-hc745277_3 ... installing: xz-5.2.3-h55aa19d_2 ... installing: yaml-0.1.7-had09818_2 ... installing: zlib-1.2.11-ha838bed_2 ... installing: libedit-3.1-heed3624_0 ... installing: readline-7.0-ha6073c6_4 ... installing: sqlite-3.20.1-hb898158_2 ... installing: asn1crypto-0.23.0-py36h4639342_0 ... installing: certifi-2017.11.5-py36hf29ccca_0 ... installing: chardet-3.0.4-py36h0f667ec_1 ... installing: idna-2.6-py36h82fb2a8_1 ... installing: pycosat-0.6.3-py36h0a5515d_0 ... installing: pycparser-2.18-py36hf9f622e_1 ... installing: pysocks-1.6.7-py36hd97a5b1_1 ... installing: ruamel_yaml-0.11.14-py36ha2fb22d_2 ... installing: six-1.11.0-py36h372c433_1 ... installing: cffi-1.11.2-py36h2825082_0 ... installing: setuptools-36.5.0-py36he42e2e1_0 ... installing: cryptography-2.1.4-py36hd09be54_0 ... installing: wheel-0.30.0-py36hfd4bba0_1 ... installing: pip-9.0.1-py36h6c6f9ce_4 ... installing: pyopenssl-17.5.0-py36h20ba746_0 ... installing: urllib3-1.22-py36hbe7ace6_0 ... installing: requests-2.18.4-py36he2e5f8d_1 ... installing: conda-4.3.31-py36_0 ... installation finished. 
(ins)$ source mc/bin/activate (ins)(root) $ conda install -yc conda-canary conda=4.4.0rc2 Fetching package metadata ............. Solving package specifications: . Package plan for installation in environment /home/maba/code/conda/mc: The following packages will be UPDATED: conda: 4.3.31-py36_0 --> 4.4.0rc2-py36h7fbbb7a_0 conda-canary conda-4.4.0rc2 100% |################################################################################################################################################################################| Time: 0:00:00 707.92 kB/s (ins)(root) $ conda env update -n root Solving environment: done Downloading and Extracting Packages conda 4.4.6: ########################################################################################################################################################################################################### | 100% Preparing transaction: done Verifying transaction: done Executing transaction: done Traceback (most recent call last): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 683, in __call__ elif isinstance(error, SafetyError): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main.py", line 74, in do_call exit_code = getattr(module, func_name)(args, parser) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main_update.py", line 107, in execute installer.install(prefix, specs, args, env) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/installers/pip.py", line 40, in _pip_install_via_requirements args, pip_version = pip_args(prefix) ValueError: too many values to unpack (expected 2) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/maba/code/conda/mc/bin/conda-env", line 11, in <module> sys.exit(main()) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main.py", line 84, in main return conda_exception_handler(do_call, args, parser) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 907, in conda_exception_handler url = response.headers['Location'] File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 686, in __call__ elif context.safety_checks == SafetyChecks.warn: File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 720, in handle_exception class ExceptionHandler(object): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 732, in handle_unexpected_exception return sys.stdout if context.json else sys.stderr File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 783, in print_error_report command = ' '.join(ensure_text_type(s) for s in sys.argv) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/cli/main_info.py", line 19, in <module> from .common import print_envs_list, stdout_json ImportError: cannot import name 'print_envs_list' ``` </p></details> </issue> <code> [start of README.rst] 1 .. NOTE: This file serves both as the README on GitHub and the index.html for 2 conda.pydata.org. If you update this file, be sure to cd to the web 3 directory and run ``make html; make live`` 4 5 .. image:: https://s3.amazonaws.com/conda-dev/conda_logo.svg 6 :alt: Conda Logo 7 8 ---------------------------------------- 9 10 .. 
image:: https://img.shields.io/circleci/project/github/conda/conda/4.4.x.svg?maxAge=900&label=Unix 11 :target: https://circleci.com/gh/conda/workflows/conda/tree/4.4.x 12 :alt: Unix tests (CircleCI) 13 14 .. image:: https://img.shields.io/appveyor/ci/ContinuumAnalyticsFOSS/conda/4.4.x.svg?maxAge=900&label=Windows 15 :target: https://ci.appveyor.com/project/ContinuumAnalyticsFOSS/conda 16 :alt: Windows tests (Appveyor) 17 18 .. image:: https://img.shields.io/codecov/c/github/conda/conda/4.4.x.svg?label=coverage 19 :alt: Codecov Status 20 :target: https://codecov.io/gh/conda/conda/branch/4.4.x 21 22 .. image:: https://img.shields.io/github/release/conda/conda.svg 23 :alt: latest release version 24 :target: https://github.com/conda/conda/releases 25 26 | 27 28 .. image:: https://s3.amazonaws.com/conda-dev/conda-announce-signup-button.svg 29 :alt: Join the Conda Announcment List 30 :target: http://conda.pydata.org/docs/announcements.html 31 32 | 33 34 Conda is a cross-platform, language-agnostic binary package manager. It is the 35 package manager used by `Anaconda 36 <http://docs.continuum.io/anaconda/index.html>`_ installations, but it may be 37 used for other systems as well. Conda makes environments first-class 38 citizens, making it easy to create independent environments even for C 39 libraries. Conda is written entirely in Python, and is BSD licensed open 40 source. 41 42 Conda is enhanced by organizations, tools, and repositories created and managed by 43 the amazing members of the conda community. Some of them can be found 44 `here <https://github.com/conda/conda/wiki/Conda-Community>`_. 45 46 47 Installation 48 ------------ 49 50 Conda is a part of the `Anaconda distribution <https://store.continuum.io/cshop/anaconda/>`_. You can also download a 51 minimal installation that only includes conda and its dependencies, called 52 `Miniconda <http://conda.pydata.org/miniconda.html>`_. 53 54 55 Getting Started 56 --------------- 57 58 If you install Anaconda, you will already have hundreds of packages 59 installed. You can see what packages are installed by running 60 61 .. code-block:: bash 62 63 $ conda list 64 65 to see all the packages that are available, use 66 67 .. code-block:: bash 68 69 $ conda search 70 71 and to install a package, use 72 73 .. code-block:: bash 74 75 $ conda install <package-name> 76 77 78 The real power of conda comes from its ability to manage environments. In 79 conda, an environment can be thought of as a completely separate installation. 80 Conda installs packages into environments efficiently using `hard links 81 <http://en.wikipedia.org/wiki/Hard_links>`_ by default when it is possible, so 82 environments are space efficient, and take seconds to create. 83 84 The default environment, which ``conda`` itself is installed into is called 85 ``base``. To create another environment, use the ``conda create`` 86 command. For instance, to create an environment with the IPython notebook and 87 NumPy 1.6, which is older than the version that comes with Anaconda by 88 default, you would run 89 90 .. code-block:: bash 91 92 $ conda create -n numpy16 ipython-notebook numpy=1.6 93 94 This creates an environment called ``numpy16`` with the latest version of 95 the IPython notebook, NumPy 1.6, and their dependencies. 96 97 We can now activate this environment, use 98 99 .. 
code-block:: bash 100 101 # On Linux and Mac OS X 102 $ source activate numpy16 103 104 # On Windows 105 > activate numpy16 106 107 This puts the bin directory of the ``numpy16`` environment in the front of the 108 ``PATH``, and sets it as the default environment for all subsequent conda commands. 109 110 To go back to the base environment, use 111 112 .. code-block:: bash 113 114 # On Linux and Mac OS X 115 $ source deactivate 116 117 # On Windows 118 > deactivate 119 120 121 Building Your Own Packages 122 -------------------------- 123 124 You can easily build your own packages for conda, and upload them 125 to `anaconda.org <https://anaconda.org>`_, a free service for hosting 126 packages for conda, as well as other package managers. 127 To build a package, create a recipe. 128 See http://github.com/conda/conda-recipes for many example recipes, and 129 http://docs.continuum.io/conda/build.html for documentation on how to build 130 recipes. 131 132 To upload to anaconda.org, create an account. Then, install the 133 anaconda-client and login 134 135 .. code-block:: bash 136 137 $ conda install anaconda-client 138 $ anaconda login 139 140 Then, after you build your recipe 141 142 .. code-block:: bash 143 144 $ conda build <recipe-dir> 145 146 you will be prompted to upload to anaconda.org. 147 148 To add your anaconda.org channel, or the channel of others to conda so 149 that ``conda install`` will find and install their packages, run 150 151 .. code-block:: bash 152 153 $ conda config --add channels https://conda.anaconda.org/username 154 155 (replacing ``username`` with the user name of the person whose channel you want 156 to add). 157 158 Getting Help 159 ------------ 160 161 The documentation for conda is at http://conda.pydata.org/docs/. You can 162 subscribe to the `conda mailing list 163 <https://groups.google.com/a/continuum.io/forum/#!forum/conda>`_. The source 164 code and issue tracker for conda are on `GitHub <https://github.com/conda/conda>`_. 165 166 Contributing 167 ------------ 168 169 Contributions to conda are welcome. Just fork the GitHub repository and send a 170 pull request. 171 172 To develop on conda, the easiest way is to use a development build. This can be 173 accomplished as follows: 174 175 * clone the conda git repository to a computer with conda already installed 176 * navigate to the root directory of the git clone 177 * run ``$CONDA/bin/python setup.py develop`` where ``$CONDA`` is the path to your 178 miniconda installation 179 180 Note building a development file requires git to be installed. 181 182 To undo this, run ``$CONDA/bin/python setup.py develop -u``. Note that if you 183 used a python other than ``$CONDA/bin/python`` to install, you may have to manually 184 delete the conda executable. For example, on OS X, if you use a homebrew python 185 located at ``/usr/local/bin/python``, then you'll need to ``rm /usr/local/bin/conda`` 186 so that ``which -a conda`` lists first your miniconda installation. 187 188 If you are worried about breaking your conda installation, you can install a 189 separate instance of `Miniconda <http://conda.pydata.org/miniconda.html>`_ and 190 work off it. This is also the only way to test conda in both Python 2 and 191 Python 3, as conda can only be installed into a base environment. 192 193 To run the tests, set up a testing environment by running 194 195 * ``$CONDA/bin/python -m pip install -r utils/requirements-test.txt``. 
196 * ``$CONDA/bin/python utils/setup-testing.py develop`` 197 198 and then running ``py.test`` in the conda directory. You can also run tests using the 199 Makefile by running ``make unit``, ``make smoketest`` (a single integration test), or 200 ``make integration``. The tests are also run by various CI systems when you make a 201 pull request. 202 [end of README.rst] [start of conda/cli/main_info.py] 1 # (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io 2 # All Rights Reserved 3 # 4 # conda is distributed under the terms of the BSD 3-clause license. 5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause. 6 7 from __future__ import absolute_import, division, print_function, unicode_literals 8 9 from collections import OrderedDict 10 import json 11 from logging import getLogger 12 import os 13 from os import listdir 14 from os.path import exists, expanduser, isfile, join 15 import re 16 import sys 17 from textwrap import dedent 18 19 from .common import print_envs_list, stdout_json 20 from .. import CONDA_PACKAGE_ROOT, __version__ as conda_version 21 from ..base.context import conda_in_private_env, context, sys_rc_path, user_rc_path 22 from ..common.compat import iteritems, itervalues, on_win, text_type 23 from ..common.url import mask_anaconda_token 24 from ..core.envs_manager import env_name 25 from ..core.repodata import SubdirData 26 from ..models.channel import all_channel_urls, offline_keep 27 from ..models.match_spec import MatchSpec 28 from ..utils import human_bytes 29 30 log = getLogger(__name__) 31 32 33 def get_user_site(): # pragma: no cover 34 site_dirs = [] 35 try: 36 if not on_win: 37 if exists(expanduser('~/.local/lib')): 38 python_re = re.compile('python\d\.\d') 39 for path in listdir(expanduser('~/.local/lib/')): 40 if python_re.match(path): 41 site_dirs.append("~/.local/lib/%s" % path) 42 else: 43 if 'APPDATA' not in os.environ: 44 return site_dirs 45 APPDATA = os.environ[str('APPDATA')] 46 if exists(join(APPDATA, 'Python')): 47 site_dirs = [join(APPDATA, 'Python', i) for i in 48 listdir(join(APPDATA, 'PYTHON'))] 49 except (IOError, OSError) as e: 50 log.debug('Error accessing user site directory.\n%r', e) 51 return site_dirs 52 53 54 IGNORE_FIELDS = {'files', 'auth', 'preferred_env', 'priority'} 55 56 SKIP_FIELDS = IGNORE_FIELDS | {'name', 'version', 'build', 'build_number', 57 'channel', 'schannel', 'size', 'fn', 'depends'} 58 59 60 def dump_record(pkg): 61 return {k: v for k, v in iteritems(pkg.dump()) if k not in IGNORE_FIELDS} 62 63 64 def pretty_package(prec): 65 66 pkg = dump_record(prec) 67 d = OrderedDict([ 68 ('file name', prec.fn), 69 ('name', pkg['name']), 70 ('version', pkg['version']), 71 ('build string', pkg['build']), 72 ('build number', pkg['build_number']), 73 ('channel', text_type(prec.channel)), 74 ('size', human_bytes(pkg['size'])), 75 ]) 76 for key in sorted(set(pkg.keys()) - SKIP_FIELDS): 77 d[key] = pkg[key] 78 79 print() 80 header = "%s %s %s" % (d['name'], d['version'], d['build string']) 81 print(header) 82 print('-'*len(header)) 83 for key in d: 84 print("%-12s: %s" % (key, d[key])) 85 print('dependencies:') 86 for dep in pkg['depends']: 87 print(' %s' % dep) 88 89 90 def print_package_info(packages): 91 92 results = {} 93 for package in packages: 94 spec = MatchSpec(package) 95 results[package] = tuple(SubdirData.query_all(context.channels, context.subdirs, spec)) 96 97 if context.json: 98 stdout_json({package: results[package] for package in packages}) 99 else: 100 for result in itervalues(results): 101 for prec 
in result: 102 pretty_package(prec) 103 104 105 def get_info_dict(system=False): 106 try: 107 from ..install import linked_data 108 root_pkgs = linked_data(context.root_prefix) 109 except: # pragma: no cover 110 root_pkgs = {} 111 112 try: 113 from requests import __version__ as requests_version 114 # These environment variables can influence requests' behavior, along with configuration 115 # in a .netrc file 116 # REQUESTS_CA_BUNDLE 117 # HTTP_PROXY 118 # HTTPS_PROXY 119 except ImportError: # pragma: no cover 120 try: 121 from pip._vendor.requests import __version__ as requests_version 122 except Exception as e: # pragma: no cover 123 requests_version = "Error %r" % e 124 except Exception as e: # pragma: no cover 125 requests_version = "Error %r" % e 126 127 try: 128 from conda_env import __version__ as conda_env_version 129 except: # pragma: no cover 130 try: 131 cenv = [p for p in itervalues(root_pkgs) if p['name'] == 'conda-env'] 132 conda_env_version = cenv[0]['version'] 133 except: 134 conda_env_version = "not installed" 135 136 try: 137 import conda_build 138 except ImportError: # pragma: no cover 139 conda_build_version = "not installed" 140 except Exception as e: # pragma: no cover 141 conda_build_version = "Error %s" % e 142 else: # pragma: no cover 143 conda_build_version = conda_build.__version__ 144 145 channels = list(all_channel_urls(context.channels)) 146 if not context.json: 147 channels = [c + ('' if offline_keep(c) else ' (offline)') 148 for c in channels] 149 channels = [mask_anaconda_token(c) for c in channels] 150 151 config_files = tuple(path for path in context.collect_all() 152 if path not in ('envvars', 'cmd_line')) 153 154 netrc_file = os.environ.get('NETRC') 155 if not netrc_file: 156 user_netrc = expanduser("~/.netrc") 157 if isfile(user_netrc): 158 netrc_file = user_netrc 159 160 active_prefix_name = env_name(context.active_prefix) 161 162 info_dict = dict( 163 platform=context.subdir, 164 conda_version=conda_version, 165 conda_env_version=conda_env_version, 166 conda_build_version=conda_build_version, 167 root_prefix=context.root_prefix, 168 conda_prefix=context.conda_prefix, 169 conda_private=conda_in_private_env(), 170 root_writable=context.root_writable, 171 pkgs_dirs=context.pkgs_dirs, 172 envs_dirs=context.envs_dirs, 173 default_prefix=context.default_prefix, 174 active_prefix=context.active_prefix, 175 active_prefix_name=active_prefix_name, 176 conda_shlvl=context.shlvl, 177 channels=channels, 178 user_rc_path=user_rc_path, 179 rc_path=user_rc_path, 180 sys_rc_path=sys_rc_path, 181 # is_foreign=bool(foreign), 182 offline=context.offline, 183 envs=[], 184 python_version='.'.join(map(str, sys.version_info)), 185 requests_version=requests_version, 186 user_agent=context.user_agent, 187 conda_location=CONDA_PACKAGE_ROOT, 188 config_files=config_files, 189 netrc_file=netrc_file, 190 ) 191 if on_win: 192 from ..common.platform import is_admin_on_windows 193 info_dict['is_windows_admin'] = is_admin_on_windows() 194 else: 195 info_dict['UID'] = os.geteuid() 196 info_dict['GID'] = os.getegid() 197 198 evars = { 199 'CIO_TEST', 200 'REQUESTS_CA_BUNDLE', 201 'SSL_CERT_FILE', 202 } 203 204 # add all relevant env vars, e.g. 
startswith('CONDA') or endswith('PATH') 205 evars.update(v for v in os.environ if v.upper().startswith('CONDA')) 206 evars.update(v for v in os.environ if v.upper().startswith('PYTHON')) 207 evars.update(v for v in os.environ if v.upper().endswith('PROXY')) 208 evars.update(v for v in os.environ if v.upper().endswith('PATH')) 209 evars.update(v for v in os.environ if v.upper().startswith('SUDO')) 210 211 info_dict.update({ 212 'sys.version': sys.version, 213 'sys.prefix': sys.prefix, 214 'sys.executable': sys.executable, 215 'site_dirs': get_user_site(), 216 'env_vars': {ev: os.getenv(ev, os.getenv(ev.lower(), '<not set>')) for ev in evars}, 217 }) 218 219 return info_dict 220 221 222 def get_env_vars_str(info_dict): 223 from textwrap import wrap 224 builder = [] 225 builder.append("%23s:" % "environment variables") 226 env_vars = info_dict.get('env_vars', {}) 227 for key in sorted(env_vars): 228 value = wrap(env_vars[key]) 229 first_line = value[0] if len(value) else "" 230 other_lines = value[1:] if len(value) > 1 else () 231 builder.append("%25s=%s" % (key, first_line)) 232 for val in other_lines: 233 builder.append(' ' * 26 + val) 234 return '\n'.join(builder) 235 236 237 def get_main_info_str(info_dict): 238 for key in 'pkgs_dirs', 'envs_dirs', 'channels', 'config_files': 239 info_dict['_' + key] = ('\n' + 26 * ' ').join(info_dict[key]) 240 info_dict['_rtwro'] = ('writable' if info_dict['root_writable'] else 'read only') 241 242 format_param = lambda nm, val: "%23s : %s" % (nm, val) 243 244 builder = [''] 245 246 if info_dict['active_prefix_name']: 247 builder.append(format_param('active environment', info_dict['active_prefix_name'])) 248 builder.append(format_param('active env location', info_dict['active_prefix'])) 249 else: 250 builder.append(format_param('active environment', info_dict['active_prefix'])) 251 252 if info_dict['conda_shlvl'] >= 0: 253 builder.append(format_param('shell level', info_dict['conda_shlvl'])) 254 255 builder.extend(( 256 format_param('user config file', info_dict['user_rc_path']), 257 format_param('populated config files', info_dict['_config_files']), 258 format_param('conda version', info_dict['conda_version']), 259 format_param('conda-build version', info_dict['conda_build_version']), 260 format_param('python version', info_dict['python_version']), 261 format_param('base environment', '%s (%s)' % (info_dict['root_prefix'], 262 info_dict['_rtwro'])), 263 format_param('channel URLs', info_dict['_channels']), 264 format_param('package cache', info_dict['_pkgs_dirs']), 265 format_param('envs directories', info_dict['_envs_dirs']), 266 format_param('platform', info_dict['platform']), 267 format_param('user-agent', info_dict['user_agent']), 268 )) 269 270 if on_win: 271 builder.append(format_param("administrator", info_dict['is_windows_admin'])) 272 else: 273 builder.append(format_param("UID:GID", '%s:%s' % (info_dict['UID'], info_dict['GID']))) 274 275 builder.extend(( 276 format_param('netrc file', info_dict['netrc_file']), 277 format_param('offline mode', info_dict['offline']), 278 )) 279 280 builder.append('') 281 return '\n'.join(builder) 282 283 284 def execute(args, parser): 285 if args.base: 286 if context.json: 287 stdout_json({'root_prefix': context.root_prefix}) 288 else: 289 print(context.root_prefix) 290 return 291 292 if args.packages: 293 from ..resolve import ResolvePackageNotFound 294 try: 295 print_package_info(args.packages) 296 return 297 except ResolvePackageNotFound as e: # pragma: no cover 298 from ..exceptions import 
PackagesNotFoundError 299 raise PackagesNotFoundError(e.bad_deps) 300 301 if args.unsafe_channels: 302 if not context.json: 303 print("\n".join(context.channels)) 304 else: 305 print(json.dumps({"channels": context.channels})) 306 return 0 307 308 options = 'envs', 'system', 'license' 309 310 if args.all or context.json: 311 for option in options: 312 setattr(args, option, True) 313 info_dict = get_info_dict(args.system) 314 315 if (args.all or all(not getattr(args, opt) for opt in options)) and not context.json: 316 print(get_main_info_str(info_dict)) 317 318 if args.envs: 319 from ..core.envs_manager import list_all_known_prefixes 320 info_dict['envs'] = list_all_known_prefixes() 321 print_envs_list(info_dict['envs'], not context.json) 322 323 if args.system: 324 if not context.json: 325 from .find_commands import find_commands, find_executable 326 print("sys.version: %s..." % (sys.version[:40])) 327 print("sys.prefix: %s" % sys.prefix) 328 print("sys.executable: %s" % sys.executable) 329 print("conda location: %s" % info_dict['conda_location']) 330 for cmd in sorted(set(find_commands() + ('build',))): 331 print("conda-%s: %s" % (cmd, find_executable('conda-' + cmd))) 332 print("user site dirs: ", end='') 333 site_dirs = info_dict['site_dirs'] 334 if site_dirs: 335 print(site_dirs[0]) 336 else: 337 print() 338 for site_dir in site_dirs[1:]: 339 print(' %s' % site_dir) 340 print() 341 342 for name, value in sorted(iteritems(info_dict['env_vars'])): 343 print("%s: %s" % (name, value)) 344 print() 345 346 if args.license and not context.json: 347 try: 348 from _license import show_info 349 show_info() # pragma: no cover 350 except ImportError: 351 print(dedent(""" 352 WARNING: could not import _license.show_info 353 # try: 354 # $ conda install -n root _license""")) 355 except Exception as e: # pragma: no cover 356 log.warn('%r', e) 357 358 if context.json: 359 stdout_json(info_dict) 360 [end of conda/cli/main_info.py] [start of conda/egg_info.py] 1 """ 2 Functions related to core conda functionality that relates to manually 3 installed Python packages, e.g. using "python setup.py install", or "pip". 
4 """ 5 from __future__ import absolute_import, division, print_function, unicode_literals 6 7 from io import open 8 import os 9 from os.path import isdir, isfile, join 10 import re 11 import sys 12 13 from .common.compat import itervalues, on_win 14 from .core.linked_data import linked_data 15 from .misc import rel_path 16 from .models.dist import Dist 17 18 19 def get_site_packages_dir(installed_pkgs): 20 for info in itervalues(installed_pkgs): 21 if info['name'] == 'python': 22 if on_win: 23 stdlib_dir = 'Lib' 24 else: 25 py_ver = info['version'][:3] 26 stdlib_dir = 'lib/python%s' % py_ver 27 return join(stdlib_dir, 'site-packages') 28 return None 29 30 31 def get_egg_info_files(sp_dir): 32 for fn in (isdir(sp_dir) and os.listdir(sp_dir) or ()): 33 if fn.endswith('.egg-link'): 34 with open(join(sp_dir, fn), 'r') as reader: 35 for egg in get_egg_info_files(reader.readline().strip()): 36 yield egg 37 if not fn.endswith(('.egg', '.egg-info', '.dist-info')): 38 continue 39 path = join(sp_dir, fn) 40 if isfile(path): 41 yield path 42 elif isdir(path): 43 for path2 in [join(path, 'PKG-INFO'), 44 join(path, 'EGG-INFO', 'PKG-INFO'), 45 join(path, 'METADATA')]: 46 if isfile(path2): 47 yield path2 48 49 50 pat = re.compile(r'(\w+):\s*(\S+)', re.I) 51 def parse_egg_info(path): 52 """ 53 Parse an .egg-info file and return its canonical distribution name 54 """ 55 info = {} 56 for line in open(path, encoding='utf-8'): 57 line = line.strip() 58 m = pat.match(line) 59 if m: 60 key = m.group(1).lower() 61 info[key] = m.group(2) 62 try: 63 return '%(name)s-%(version)s-<pip>' % info 64 except KeyError: 65 pass 66 return None 67 68 69 def get_egg_info(prefix, all_pkgs=False): 70 """ 71 Return a set of canonical names of all Python packages (in `prefix`), 72 by inspecting the .egg-info files inside site-packages. 73 By default, only untracked (not conda installed) .egg-info files are 74 considered. Setting `all_pkgs` to True changes this. 75 """ 76 installed_pkgs = linked_data(prefix) 77 sp_dir = get_site_packages_dir(installed_pkgs) 78 if sp_dir is None or not isdir(join(prefix, sp_dir)): 79 return set() 80 81 conda_files = set() 82 for info in itervalues(installed_pkgs): 83 conda_files.update(info.get('files', [])) 84 85 res = set() 86 for path in get_egg_info_files(join(prefix, sp_dir)): 87 f = rel_path(prefix, path) 88 if all_pkgs or f not in conda_files: 89 try: 90 dist = parse_egg_info(path) 91 except UnicodeDecodeError: 92 dist = None 93 if dist: 94 res.add(Dist(dist)) 95 return res 96 97 98 if __name__ == '__main__': 99 from pprint import pprint 100 pprint(get_egg_info(sys.prefix)) 101 [end of conda/egg_info.py] [start of conda_env/cli/main_create.py] 1 from __future__ import print_function 2 3 from argparse import RawDescriptionHelpFormatter 4 import os 5 import sys 6 import textwrap 7 8 from conda._vendor.auxlib.path import expand 9 from conda.cli import install as cli_install 10 from conda.cli.conda_argparse import add_parser_json, add_parser_prefix 11 from conda.gateways.disk.delete import delete_trash, rm_rf 12 from conda.misc import touch_nonadmin 13 from .common import get_prefix 14 from .. 
import exceptions, specs 15 from ..installers.base import InvalidInstaller, get_installer 16 17 description = """ 18 Create an environment based on an environment file 19 """ 20 21 example = """ 22 examples: 23 conda env create 24 conda env create -n name 25 conda env create vader/deathstar 26 conda env create -f=/path/to/environment.yml 27 conda env create -f=/path/to/requirements.txt -n deathstar 28 conda env create -f=/path/to/requirements.txt -p /home/user/software/deathstar 29 """ 30 31 32 def configure_parser(sub_parsers): 33 p = sub_parsers.add_parser( 34 'create', 35 formatter_class=RawDescriptionHelpFormatter, 36 description=description, 37 help=description, 38 epilog=example, 39 ) 40 p.add_argument( 41 '-f', '--file', 42 action='store', 43 help='environment definition file (default: environment.yml)', 44 default='environment.yml', 45 ) 46 47 # Add name and prefix args 48 add_parser_prefix(p) 49 50 p.add_argument( 51 '-q', '--quiet', 52 action='store_true', 53 default=False, 54 ) 55 p.add_argument( 56 'remote_definition', 57 help='remote environment definition / IPython notebook', 58 action='store', 59 default=None, 60 nargs='?' 61 ) 62 p.add_argument( 63 '--force', 64 help=('force creation of environment (removing a previously existing ' 65 'environment of the same name).'), 66 action='store_true', 67 default=False, 68 ) 69 add_parser_json(p) 70 p.set_defaults(func='.main_create.execute') 71 72 73 def execute(args, parser): 74 from conda.base.context import context 75 name = args.remote_definition or args.name 76 77 try: 78 spec = specs.detect(name=name, filename=expand(args.file), 79 directory=os.getcwd()) 80 env = spec.environment 81 82 # FIXME conda code currently requires args to have a name or prefix 83 # don't overwrite name if it's given. gh-254 84 if args.prefix is None and args.name is None: 85 args.name = env.name 86 87 except exceptions.SpecNotFound: 88 raise 89 90 prefix = get_prefix(args, search=False) 91 92 if args.force and prefix != context.root_prefix and os.path.exists(prefix): 93 rm_rf(prefix) 94 cli_install.check_prefix(prefix, json=args.json) 95 96 # TODO, add capability 97 # common.ensure_override_channels_requires_channel(args) 98 # channel_urls = args.channel or () 99 100 # # special case for empty environment 101 # if not env.dependencies: 102 # from conda.install import symlink_conda 103 # symlink_conda(prefix, context.root_dir) 104 105 for installer_type, pkg_specs in env.dependencies.items(): 106 try: 107 installer = get_installer(installer_type) 108 installer.install(prefix, pkg_specs, args, env) 109 except InvalidInstaller: 110 sys.stderr.write(textwrap.dedent(""" 111 Unable to install package for {0}. 112 113 Please double check and ensure your dependencies file has 114 the correct spelling. You might also try installing the 115 conda-env-{0} package to see if provides the required 116 installer. 
117 """).lstrip().format(installer_type) 118 ) 119 return -1 120 121 touch_nonadmin(prefix) 122 delete_trash() 123 if not args.json: 124 cli_install.print_activate(args.name if args.name else prefix) 125 [end of conda_env/cli/main_create.py] [start of conda_env/pip_util.py] 1 """ 2 Functions related to core conda functionality that relates to pip 3 4 NOTE: This modules used to in conda, as conda/pip.py 5 """ 6 from __future__ import absolute_import, print_function 7 8 import json 9 import os 10 from os.path import isfile, join 11 import re 12 import subprocess 13 import sys 14 15 16 def pip_args(prefix): 17 """ 18 return the arguments required to invoke pip (in prefix), or None if pip 19 is not installed 20 """ 21 if sys.platform == 'win32': 22 pip_path = join(prefix, 'Scripts', 'pip-script.py') 23 py_path = join(prefix, 'python.exe') 24 else: 25 pip_path = join(prefix, 'bin', 'pip') 26 py_path = join(prefix, 'bin', 'python') 27 if isfile(pip_path) and isfile(py_path): 28 ret = [py_path, pip_path] 29 30 # Check the version of pip 31 # --disable-pip-version-check was introduced in pip 6.0 32 # If older than that, they should probably get the warning anyway. 33 pip_version = subprocess.check_output(ret + ['-V']).decode('utf-8').split()[1] 34 major_ver = pip_version.split('.')[0] 35 if int(major_ver) >= 6: 36 ret.append('--disable-pip-version-check') 37 return ret, pip_version 38 else: 39 return None, None 40 41 42 class PipPackage(dict): 43 def __str__(self): 44 if 'path' in self: 45 return '%s (%s)-%s-<pip>' % ( 46 self['name'], 47 self['path'], 48 self['version'] 49 ) 50 return '%s-%s-<pip>' % (self['name'], self['version']) 51 52 53 def installed(prefix, output=True): 54 args, pip_version = pip_args(prefix) 55 if args is None: 56 return 57 58 pip_major_version = int(pip_version.split('.', 1)[0]) 59 60 env = os.environ.copy() 61 env[str('PIP_FORMAT')] = str('legacy') 62 args.append('list') 63 64 if pip_major_version >= 9: 65 args += ['--format', 'json'] 66 67 try: 68 pip_stdout = subprocess.check_output(args, universal_newlines=True, env=env) 69 except Exception: 70 # Any error should just be ignored 71 if output: 72 print("# Warning: subprocess call to pip failed") 73 return 74 75 if pip_major_version >= 9: 76 pkgs = json.loads(pip_stdout) 77 78 # For every package in pipinst that is not already represented 79 # in installed append a fake name to installed with 'pip' 80 # as the build string 81 for kwargs in pkgs: 82 kwargs['name'] = kwargs['name'].lower() 83 if ', ' in kwargs['version']: 84 # Packages installed with setup.py develop will include a path in 85 # the version. They should be included here, even if they are 86 # installed with conda, as they are preferred over the conda 87 # version. We still include the conda version, though, because it 88 # is still installed. 
89 90 version, path = kwargs['version'].split(', ') 91 # We do this because the code below uses rsplit('-', 2) 92 version = version.replace('-', ' ') 93 kwargs['version'] = version 94 kwargs['path'] = path 95 yield PipPackage(**kwargs) 96 else: 97 # For every package in pipinst that is not already represented 98 # in installed append a fake name to installed with 'pip' 99 # as the build string 100 pat = re.compile('([\w.-]+)\s+\((.+)\)') 101 for line in pip_stdout.splitlines(): 102 line = line.strip() 103 if not line: 104 continue 105 m = pat.match(line) 106 if m is None: 107 if output: 108 print('Could not extract name and version from: %r' % line) 109 continue 110 name, version = m.groups() 111 name = name.lower() 112 kwargs = { 113 'name': name, 114 'version': version, 115 } 116 if ', ' in version: 117 # Packages installed with setup.py develop will include a path in 118 # the version. They should be included here, even if they are 119 # installed with conda, as they are preferred over the conda 120 # version. We still include the conda version, though, because it 121 # is still installed. 122 123 version, path = version.split(', ') 124 # We do this because the code below uses rsplit('-', 2) 125 version = version.replace('-', ' ') 126 kwargs.update({ 127 'path': path, 128 'version': version, 129 }) 130 yield PipPackage(**kwargs) 131 132 133 # canonicalize_{regex,name} inherited from packaging/utils.py 134 # Used under BSD license 135 _canonicalize_regex = re.compile(r"[-_.]+") 136 137 138 def _canonicalize_name(name): 139 # This is taken from PEP 503. 140 return _canonicalize_regex.sub("-", name).lower() 141 142 143 def add_pip_installed(prefix, installed_pkgs, json=None, output=True): 144 # Defer to json for backwards compatibility 145 if isinstance(json, bool): 146 output = not json 147 148 # TODO Refactor so installed is a real list of objects/dicts 149 # instead of strings allowing for direct comparison 150 # split :: to get rid of channel info 151 152 # canonicalize names for pip comparison 153 # because pip normalizes `foo_bar` to `foo-bar` 154 conda_names = {_canonicalize_name(d.quad[0]) for d in installed_pkgs} 155 for pip_pkg in installed(prefix, output=output): 156 pip_name = _canonicalize_name(pip_pkg['name']) 157 if pip_name in conda_names and 'path' not in pip_pkg: 158 continue 159 installed_pkgs.add(str(pip_pkg)) 160 [end of conda_env/pip_util.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conda/conda
c1af02bb4fcfb8c962bb52d040ba705174345889
conda-env may update conda while it's still in use (cc @levy5674, @patricksnape) Issue reported in https://github.com/conda/conda/issues/6606#issuecomment-355149329: `conda-env` can update `conda` (either explicitly or by `auto_update_conda = True`) while it's still using it and thus makes the current process get into an inconsistent state (using parts of the old as well as updated `conda`). `conda-env` loops over the different dependency types in https://github.com/conda/conda/blob/4.4.6/conda_env/cli/main_update.py#L104. If it updates `conda`, other subsequent updates, e.g., using `conda_env.installers.pip`, may already introduce parts of the updated `conda` into the current process, see: ```yaml # environment.yaml dependencies: - conda - pip: - bumpversion ``` ```bash (ins)$ cat environment.yml dependencies: - conda - pip: - bumpversion (ins)$ bash Miniconda3-4.3.31-Linux-x86_64.sh -bp ./mc PREFIX=/home/maba/code/conda/mc installing: python-3.6.3-h6c0c0dc_5 ... Python 3.6.3 :: Anaconda, Inc. installing: ca-certificates-2017.08.26-h1d4fec5_0 ... installing: conda-env-2.6.0-h36134e3_1 ... installing: libgcc-ng-7.2.0-h7cc24e2_2 ... installing: libstdcxx-ng-7.2.0-h7a57d05_2 ... installing: libffi-3.2.1-hd88cf55_4 ... installing: ncurses-6.0-h9df7e31_2 ... installing: openssl-1.0.2n-hb7f436b_0 ... installing: tk-8.6.7-hc745277_3 ... installing: xz-5.2.3-h55aa19d_2 ... installing: yaml-0.1.7-had09818_2 ... installing: zlib-1.2.11-ha838bed_2 ... installing: libedit-3.1-heed3624_0 ... installing: readline-7.0-ha6073c6_4 ... installing: sqlite-3.20.1-hb898158_2 ... installing: asn1crypto-0.23.0-py36h4639342_0 ... installing: certifi-2017.11.5-py36hf29ccca_0 ... installing: chardet-3.0.4-py36h0f667ec_1 ... installing: idna-2.6-py36h82fb2a8_1 ... installing: pycosat-0.6.3-py36h0a5515d_0 ... installing: pycparser-2.18-py36hf9f622e_1 ... installing: pysocks-1.6.7-py36hd97a5b1_1 ... installing: ruamel_yaml-0.11.14-py36ha2fb22d_2 ... installing: six-1.11.0-py36h372c433_1 ... installing: cffi-1.11.2-py36h2825082_0 ... installing: setuptools-36.5.0-py36he42e2e1_0 ... installing: cryptography-2.1.4-py36hd09be54_0 ... installing: wheel-0.30.0-py36hfd4bba0_1 ... installing: pip-9.0.1-py36h6c6f9ce_4 ... installing: pyopenssl-17.5.0-py36h20ba746_0 ... installing: urllib3-1.22-py36hbe7ace6_0 ... installing: requests-2.18.4-py36he2e5f8d_1 ... installing: conda-4.3.31-py36_0 ... installation finished. (ins)$ source mc/bin/activate (ins)(root) $ conda env update -n root Fetching package metadata ........... Solving package specifications: . conda-4.4.6-py 100% |################################################################################################################################################################################| Time: 0:00:01 669.69 kB/s An unexpected error has occurred. 
Please consider posting the following information to the conda GitHub issue tracker at: https://github.com/conda/conda/issues Traceback (most recent call last): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 640, in conda_exception_handler original data Length: %(original_data_length)d File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main_update.py", line 106, in execute installer = get_installer(installer_type) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/installers/pip.py", line 40, in _pip_install_via_requirements args, pip_version = pip_args(prefix) ValueError: too many values to unpack (expected 2) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/maba/code/conda/mc/bin/conda-env", line 11, in <module> sys.exit(main()) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main.py", line 76, in main File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 644, in conda_exception_handler 'path': path, File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 634, in handle_exception def __init__(self, path, placeholder, new_prefix, original_data_length, new_data_length): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 595, in print_unexpected_error_message File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/cli/main_info.py", line 19, in <module> from .common import print_envs_list, stdout_json ImportError: cannot import name 'print_envs_list' ``` This of course affects `conda 4.4` as well: <details><summary> logs for conda 4.4.x (using 4.4.0rc2 to let it trip at the same changed import as 4.3.31 did) </summary><p> ```bash (ins)$ cat environment.yml dependencies: - conda - pip: - bumpversion (ins)$ bash Miniconda3-4.3.31-Linux-x86_64.sh -bp ./mc PREFIX=/home/maba/code/conda/mc installing: python-3.6.3-h6c0c0dc_5 ... Python 3.6.3 :: Anaconda, Inc. installing: ca-certificates-2017.08.26-h1d4fec5_0 ... installing: conda-env-2.6.0-h36134e3_1 ... installing: libgcc-ng-7.2.0-h7cc24e2_2 ... installing: libstdcxx-ng-7.2.0-h7a57d05_2 ... installing: libffi-3.2.1-hd88cf55_4 ... installing: ncurses-6.0-h9df7e31_2 ... installing: openssl-1.0.2n-hb7f436b_0 ... installing: tk-8.6.7-hc745277_3 ... installing: xz-5.2.3-h55aa19d_2 ... installing: yaml-0.1.7-had09818_2 ... installing: zlib-1.2.11-ha838bed_2 ... installing: libedit-3.1-heed3624_0 ... installing: readline-7.0-ha6073c6_4 ... installing: sqlite-3.20.1-hb898158_2 ... installing: asn1crypto-0.23.0-py36h4639342_0 ... installing: certifi-2017.11.5-py36hf29ccca_0 ... installing: chardet-3.0.4-py36h0f667ec_1 ... installing: idna-2.6-py36h82fb2a8_1 ... installing: pycosat-0.6.3-py36h0a5515d_0 ... installing: pycparser-2.18-py36hf9f622e_1 ... installing: pysocks-1.6.7-py36hd97a5b1_1 ... installing: ruamel_yaml-0.11.14-py36ha2fb22d_2 ... installing: six-1.11.0-py36h372c433_1 ... installing: cffi-1.11.2-py36h2825082_0 ... installing: setuptools-36.5.0-py36he42e2e1_0 ... installing: cryptography-2.1.4-py36hd09be54_0 ... installing: wheel-0.30.0-py36hfd4bba0_1 ... installing: pip-9.0.1-py36h6c6f9ce_4 ... installing: pyopenssl-17.5.0-py36h20ba746_0 ... installing: urllib3-1.22-py36hbe7ace6_0 ... installing: requests-2.18.4-py36he2e5f8d_1 ... installing: conda-4.3.31-py36_0 ... installation finished. 
(ins)$ source mc/bin/activate (ins)(root) $ conda install -yc conda-canary conda=4.4.0rc2 Fetching package metadata ............. Solving package specifications: . Package plan for installation in environment /home/maba/code/conda/mc: The following packages will be UPDATED: conda: 4.3.31-py36_0 --> 4.4.0rc2-py36h7fbbb7a_0 conda-canary conda-4.4.0rc2 100% |################################################################################################################################################################################| Time: 0:00:00 707.92 kB/s (ins)(root) $ conda env update -n root Solving environment: done Downloading and Extracting Packages conda 4.4.6: ########################################################################################################################################################################################################### | 100% Preparing transaction: done Verifying transaction: done Executing transaction: done Traceback (most recent call last): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 683, in __call__ elif isinstance(error, SafetyError): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main.py", line 74, in do_call exit_code = getattr(module, func_name)(args, parser) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main_update.py", line 107, in execute installer.install(prefix, specs, args, env) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/installers/pip.py", line 40, in _pip_install_via_requirements args, pip_version = pip_args(prefix) ValueError: too many values to unpack (expected 2) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/maba/code/conda/mc/bin/conda-env", line 11, in <module> sys.exit(main()) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda_env/cli/main.py", line 84, in main return conda_exception_handler(do_call, args, parser) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 907, in conda_exception_handler url = response.headers['Location'] File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 686, in __call__ elif context.safety_checks == SafetyChecks.warn: File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 720, in handle_exception class ExceptionHandler(object): File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 732, in handle_unexpected_exception return sys.stdout if context.json else sys.stderr File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/exceptions.py", line 783, in print_error_report command = ' '.join(ensure_text_type(s) for s in sys.argv) File "/home/maba/code/conda/mc/lib/python3.6/site-packages/conda/cli/main_info.py", line 19, in <module> from .common import print_envs_list, stdout_json ImportError: cannot import name 'print_envs_list' ``` </p></details>
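To make the mechanism above concrete, here is a heavily simplified Python paraphrase of the per-installer loop the report points at (`conda_env/cli/main_update.py#L104`; the analogous loop is visible in the `conda_env/cli/main_create.py` listing earlier in this row). The helper names are stand-ins, not the exact conda_env API; the point is only the ordering: the 'conda' iteration can replace conda's own files on disk, and the later 'pip' iteration then triggers fresh imports from that half-old, half-new installation.

```python
# Simplified paraphrase (hypothetical stand-ins, not the verbatim conda_env code):
# each dependency section of environment.yml is handed to its own installer in turn.
import importlib

def get_installer(installer_type):
    # conda_env resolves installers by importing conda_env.installers.<type>;
    # any such import re-reads modules from disk, which is where old in-memory
    # modules and newly installed ones start to mix.
    return importlib.import_module("conda_env.installers." + installer_type)

def update_environment(prefix, dependencies, args=None, env=None):
    for installer_type, specs in dependencies.items():   # e.g. 'conda', then 'pip'
        installer = get_installer(installer_type)
        # If the 'conda' iteration just upgraded conda inside `prefix`, this call
        # runs with a mixture of already-imported old modules and freshly imported
        # new ones in the same process -- hence the ImportError in the traceback.
        installer.install(prefix, specs, args, env)
```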
Ok that's just awful. Thanks for elucidating what's actually happening @mbargull

I'm facing the same problem. Is there a workaround for this?

You can work around this issue by updating `conda` separately, before updating the rest of the environment:

```sh
conda update -y conda
conda env update -n root
```

YMMV, but this may help as a work-around:

```
conda config --set auto_update_conda False
```

It prevents `conda` from automatically updating itself when creating environments.
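For completeness, the `conda config --set auto_update_conda False` workaround quoted above persists a single boolean key in the user's `.condarc` (this mapping is an assumption based on conda's documented configuration file, not something shown in this row). A small self-contained sketch that naively checks whether that entry is present:

```python
# Sketch (assumption: the workaround above persists `auto_update_conda: false`
# in the user's ~/.condarc). This helper only greps for that line; it does not
# parse the YAML properly and is for illustration only.
import os, re

def auto_update_disabled(condarc_path=os.path.expanduser("~/.condarc")):
    if not os.path.isfile(condarc_path):
        return False                      # no user config -> conda's default (enabled)
    with open(condarc_path) as fh:
        text = fh.read()
    return bool(re.search(r"^\s*auto_update_conda\s*:\s*false\s*$",
                          text, flags=re.IGNORECASE | re.MULTILINE))

if __name__ == "__main__":
    print("auto_update_conda disabled:", auto_update_disabled())
```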
2018-01-23T21:44:53Z
<patch>
diff --git a/conda_env/cli/main.py b/conda_env/cli/main.py
--- a/conda_env/cli/main.py
+++ b/conda_env/cli/main.py
@@ -79,6 +79,7 @@ def main():
     initialize_logging()
     parser = create_parser()
     args = parser.parse_args()
+    os.environ["CONDA_AUTO_UPDATE_CONDA"] = "false"
     context.__init__(argparse_args=args)
     init_loggers(context)
     return conda_exception_handler(do_call, args, parser)
</patch>
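The one-line patch above is essentially the environment-variable form of the `auto_update_conda` workaround, applied only for the lifetime of the `conda env ...` process: the override is exported before the configuration context is (re)initialized, so nothing later in the run can pull a new conda into the running interpreter. A minimal standalone sketch of that ordering, with a hypothetical stand-in for conda's context object:

```python
# Minimal sketch of the ordering the patch depends on. `Context` here is a
# hypothetical stand-in for conda.base.context.context, which honours
# CONDA_* environment overrides when it initializes.
import os

class Context:
    def __init__(self):
        self.auto_update_conda = (
            os.environ.get("CONDA_AUTO_UPDATE_CONDA", "true").lower() != "false")

def main():
    # Must be set *before* the context reads the environment, as in the patch.
    os.environ["CONDA_AUTO_UPDATE_CONDA"] = "false"
    context = Context()
    assert context.auto_update_conda is False
    # ... parse args, init loggers, dispatch to the conda-env sub-command ...

if __name__ == "__main__":
    main()
```

Note that, on its own, this neutralizes the implicit auto-update path; whether an explicit `conda` entry in `environment.yml` is handled elsewhere is not visible from this patch.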
[]
[]
pandas-dev__pandas-14629
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> documentation improvement: set operations sort the index though clear from the examples, an explicit hint that set operations will re-sort your indices in ascending order would be helpful (section 7.3.1 "Set operations on Index objects"). i had indicies like "QK_1 ... QK_9 QK_10" and afterwards they got sorted as "QK_1 QK_10 QK_2...". </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td> 13 </tr> 14 <td></td> 15 <td><img src="https://anaconda.org/pandas/pandas/badges/version.svg" alt="latest release" /></td> 16 </tr> 17 <tr> 18 <td>Package Status</td> 19 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 20 </tr> 21 <tr> 22 <td>License</td> 23 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td> 24 </tr> 25 <tr> 26 <td>Build Status</td> 27 <td> 28 <a href="https://travis-ci.org/pandas-dev/pandas"> 29 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 30 </a> 31 </td> 32 </tr> 33 <td></td> 34 <td> 35 <a href="https://ci.appveyor.com/project/jreback/pandas-465"> 36 <img src="https://ci.appveyor.com/api/projects/status/iblk29s98quexwxi/branch/master?svg=true" alt="appveyor build status" /> 37 </a> 38 </td> 39 </tr> 40 <tr> 41 <td>Coverage</td> 42 <td><img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /></td> 43 </tr> 44 <tr> 45 <td>Conda</td> 46 <td> 47 <a href="http://pandas.pydata.org"> 48 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda downloads" /> 49 </a> 50 </td> 51 </tr> 52 <tr> 53 <td>PyPI</td> 54 <td> 55 <a href="https://pypi.python.org/pypi/pandas/"> 56 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" /> 57 </a> 58 </td> 59 </tr> 60 </table> 61 62 [![https://gitter.im/pydata/pandas](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 63 64 ## What is it 65 66 **pandas** is a Python package providing fast, flexible, and expressive data 67 structures designed to make working with "relational" or "labeled" data both 68 easy and intuitive. It aims to be the fundamental high-level building block for 69 doing practical, **real world** data analysis in Python. Additionally, it has 70 the broader goal of becoming **the most powerful and flexible open source data 71 analysis / manipulation tool available in any language**. It is already well on 72 its way toward this goal. 73 74 ## Main Features 75 Here are just a few of the things that pandas does well: 76 77 - Easy handling of [**missing data**][missing-data] (represented as 78 `NaN`) in floating point as well as non-floating point data 79 - Size mutability: columns can be [**inserted and 80 deleted**][insertion-deletion] from DataFrame and higher dimensional 81 objects 82 - Automatic and explicit [**data alignment**][alignment]: objects can 83 be explicitly aligned to a set of labels, or the user can simply 84 ignore the labels and let `Series`, `DataFrame`, etc. 
automatically 85 align the data for you in computations 86 - Powerful, flexible [**group by**][groupby] functionality to perform 87 split-apply-combine operations on data sets, for both aggregating 88 and transforming data 89 - Make it [**easy to convert**][conversion] ragged, 90 differently-indexed data in other Python and NumPy data structures 91 into DataFrame objects 92 - Intelligent label-based [**slicing**][slicing], [**fancy 93 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 94 large data sets 95 - Intuitive [**merging**][merging] and [**joining**][joining] data 96 sets 97 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 98 data sets 99 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 100 labels per tick) 101 - Robust IO tools for loading data from [**flat files**][flat-files] 102 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 103 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 104 - [**Time series**][timeseries]-specific functionality: date range 105 generation and frequency conversion, moving window statistics, 106 moving window linear regressions, date shifting and lagging, etc. 107 108 109 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 110 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 111 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 112 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 113 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 114 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 115 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 116 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 117 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 118 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 119 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 120 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 121 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 122 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 123 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 124 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 125 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 126 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 127 128 ## Where to get it 129 The source code is currently hosted on GitHub at: 130 http://github.com/pandas-dev/pandas 131 132 Binary installers for the latest released version are available at the [Python 133 package index](http://pypi.python.org/pypi/pandas/) and on conda. 
134 135 ```sh 136 # conda 137 conda install pandas 138 ``` 139 140 ```sh 141 # or PyPI 142 pip install pandas 143 ``` 144 145 ## Dependencies 146 - [NumPy](http://www.numpy.org): 1.7.0 or higher 147 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 148 - [pytz](http://pytz.sourceforge.net) 149 - Needed for time zone support with ``pandas.date_range`` 150 151 See the [full installation instructions](http://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 152 for recommended and optional dependencies. 153 154 ## Installation from sources 155 To install pandas from source you need Cython in addition to the normal 156 dependencies above. Cython can be installed from pypi: 157 158 ```sh 159 pip install cython 160 ``` 161 162 In the `pandas` directory (same one where you found this file after 163 cloning the git repo), execute: 164 165 ```sh 166 python setup.py install 167 ``` 168 169 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 170 171 ```sh 172 python setup.py develop 173 ``` 174 175 Alternatively, you can use `pip` if you want all the dependencies pulled 176 in automatically (the `-e` option is for installing it in [development 177 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 178 179 ```sh 180 pip install -e . 181 ``` 182 183 On Windows, you will need to install MinGW and execute: 184 185 ```sh 186 python setup.py build --compiler=mingw32 187 python setup.py install 188 ``` 189 190 See http://pandas.pydata.org/ for more information. 191 192 ## License 193 BSD 194 195 ## Documentation 196 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 197 198 The Sphinx documentation should provide a good starting point for learning how 199 to use the library. Expect the docs to continue to expand as time goes on. 200 201 ## Background 202 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 203 has been under active development since then. 204 205 ## Discussion and Development 206 Since pandas development is related to a number of other scientific 207 Python projects, questions are welcome on the scipy-user mailing 208 list. Specialized discussions or design issues should take place on 209 the PyData mailing list / Google group: 210 211 https://groups.google.com/forum/#!forum/pydata 212 [end of README.md] [start of pandas/core/base.py] 1 """ 2 Base and utility classes for pandas objects. 3 """ 4 from pandas import compat 5 from pandas.compat import builtins 6 import numpy as np 7 8 from pandas.types.missing import isnull 9 from pandas.types.generic import ABCDataFrame, ABCSeries, ABCIndexClass 10 from pandas.types.common import is_object_dtype, is_list_like, is_scalar 11 12 from pandas.core import common as com 13 import pandas.core.nanops as nanops 14 import pandas.lib as lib 15 from pandas.compat.numpy import function as nv 16 from pandas.util.decorators import (Appender, cache_readonly, 17 deprecate_kwarg, Substitution) 18 from pandas.core.common import AbstractMethodError 19 from pandas.formats.printing import pprint_thing 20 21 _shared_docs = dict() 22 _indexops_doc_kwargs = dict(klass='IndexOpsMixin', inplace='', 23 unique='IndexOpsMixin', duplicated='IndexOpsMixin') 24 25 26 class StringMixin(object): 27 """implements string methods so long as object defines a `__unicode__` 28 method. 29 30 Handles Python2/3 compatibility transparently. 
31 """ 32 # side note - this could be made into a metaclass if more than one 33 # object needs 34 35 # ---------------------------------------------------------------------- 36 # Formatting 37 38 def __unicode__(self): 39 raise AbstractMethodError(self) 40 41 def __str__(self): 42 """ 43 Return a string representation for a particular Object 44 45 Invoked by str(df) in both py2/py3. 46 Yields Bytestring in Py2, Unicode String in py3. 47 """ 48 49 if compat.PY3: 50 return self.__unicode__() 51 return self.__bytes__() 52 53 def __bytes__(self): 54 """ 55 Return a string representation for a particular object. 56 57 Invoked by bytes(obj) in py3 only. 58 Yields a bytestring in both py2/py3. 59 """ 60 from pandas.core.config import get_option 61 62 encoding = get_option("display.encoding") 63 return self.__unicode__().encode(encoding, 'replace') 64 65 def __repr__(self): 66 """ 67 Return a string representation for a particular object. 68 69 Yields Bytestring in Py2, Unicode String in py3. 70 """ 71 return str(self) 72 73 74 class PandasObject(StringMixin): 75 76 """baseclass for various pandas objects""" 77 78 @property 79 def _constructor(self): 80 """class constructor (for this class it's just `__class__`""" 81 return self.__class__ 82 83 def __unicode__(self): 84 """ 85 Return a string representation for a particular object. 86 87 Invoked by unicode(obj) in py2 only. Yields a Unicode String in both 88 py2/py3. 89 """ 90 # Should be overwritten by base classes 91 return object.__repr__(self) 92 93 def _dir_additions(self): 94 """ add addtional __dir__ for this object """ 95 return set() 96 97 def _dir_deletions(self): 98 """ delete unwanted __dir__ for this object """ 99 return set() 100 101 def __dir__(self): 102 """ 103 Provide method name lookup and completion 104 Only provide 'public' methods 105 """ 106 rv = set(dir(type(self))) 107 rv = (rv - self._dir_deletions()) | self._dir_additions() 108 return sorted(rv) 109 110 def _reset_cache(self, key=None): 111 """ 112 Reset cached properties. If ``key`` is passed, only clears that key. 113 """ 114 if getattr(self, '_cache', None) is None: 115 return 116 if key is None: 117 self._cache.clear() 118 else: 119 self._cache.pop(key, None) 120 121 def __sizeof__(self): 122 """ 123 Generates the total memory usage for a object that returns 124 either a value or Series of values 125 """ 126 if hasattr(self, 'memory_usage'): 127 mem = self.memory_usage(deep=True) 128 if not is_scalar(mem): 129 mem = mem.sum() 130 return int(mem) 131 132 # no memory_usage attribute, so fall back to 133 # object's 'sizeof' 134 return super(PandasObject, self).__sizeof__() 135 136 137 class NoNewAttributesMixin(object): 138 """Mixin which prevents adding new attributes. 139 140 Prevents additional attributes via xxx.attribute = "something" after a 141 call to `self.__freeze()`. Mainly used to prevent the user from using 142 wrong attrirbutes on a accessor (`Series.cat/.str/.dt`). 143 144 If you really want to add a new attribute at a later time, you need to use 145 `object.__setattr__(self, key, value)`. 146 """ 147 148 def _freeze(self): 149 """Prevents setting additional attributes""" 150 object.__setattr__(self, "__frozen", True) 151 152 # prevent adding any attribute via s.xxx.new_attribute = ... 
153 def __setattr__(self, key, value): 154 # _cache is used by a decorator 155 # dict lookup instead of getattr as getattr is false for getter 156 # which error 157 if getattr(self, "__frozen", False) and not \ 158 (key in type(self).__dict__ or key == "_cache"): 159 raise AttributeError("You cannot add any new attribute '{key}'". 160 format(key=key)) 161 object.__setattr__(self, key, value) 162 163 164 class PandasDelegate(PandasObject): 165 """ an abstract base class for delegating methods/properties """ 166 167 def _delegate_property_get(self, name, *args, **kwargs): 168 raise TypeError("You cannot access the " 169 "property {name}".format(name=name)) 170 171 def _delegate_property_set(self, name, value, *args, **kwargs): 172 raise TypeError("The property {name} cannot be set".format(name=name)) 173 174 def _delegate_method(self, name, *args, **kwargs): 175 raise TypeError("You cannot call method {name}".format(name=name)) 176 177 @classmethod 178 def _add_delegate_accessors(cls, delegate, accessors, typ, 179 overwrite=False): 180 """ 181 add accessors to cls from the delegate class 182 183 Parameters 184 ---------- 185 cls : the class to add the methods/properties to 186 delegate : the class to get methods/properties & doc-strings 187 acccessors : string list of accessors to add 188 typ : 'property' or 'method' 189 overwrite : boolean, default False 190 overwrite the method/property in the target class if it exists 191 """ 192 193 def _create_delegator_property(name): 194 195 def _getter(self): 196 return self._delegate_property_get(name) 197 198 def _setter(self, new_values): 199 return self._delegate_property_set(name, new_values) 200 201 _getter.__name__ = name 202 _setter.__name__ = name 203 204 return property(fget=_getter, fset=_setter, 205 doc=getattr(delegate, name).__doc__) 206 207 def _create_delegator_method(name): 208 209 def f(self, *args, **kwargs): 210 return self._delegate_method(name, *args, **kwargs) 211 212 f.__name__ = name 213 f.__doc__ = getattr(delegate, name).__doc__ 214 215 return f 216 217 for name in accessors: 218 219 if typ == 'property': 220 f = _create_delegator_property(name) 221 else: 222 f = _create_delegator_method(name) 223 224 # don't overwrite existing methods/properties 225 if overwrite or not hasattr(cls, name): 226 setattr(cls, name, f) 227 228 229 class AccessorProperty(object): 230 """Descriptor for implementing accessor properties like Series.str 231 """ 232 def __init__(self, accessor_cls, construct_accessor): 233 self.accessor_cls = accessor_cls 234 self.construct_accessor = construct_accessor 235 self.__doc__ = accessor_cls.__doc__ 236 237 def __get__(self, instance, owner=None): 238 if instance is None: 239 # this ensures that Series.str.<method> is well defined 240 return self.accessor_cls 241 return self.construct_accessor(instance) 242 243 def __set__(self, instance, value): 244 raise AttributeError("can't set attribute") 245 246 def __delete__(self, instance): 247 raise AttributeError("can't delete attribute") 248 249 250 class GroupByError(Exception): 251 pass 252 253 254 class DataError(GroupByError): 255 pass 256 257 258 class SpecificationError(GroupByError): 259 pass 260 261 262 class SelectionMixin(object): 263 """ 264 mixin implementing the selection & aggregation interface on a group-like 265 object sub-classes need to define: obj, exclusions 266 """ 267 _selection = None 268 _internal_names = ['_cache', '__setstate__'] 269 _internal_names_set = set(_internal_names) 270 _builtin_table = { 271 builtins.sum: np.sum, 272 
builtins.max: np.max, 273 builtins.min: np.min 274 } 275 _cython_table = { 276 builtins.sum: 'sum', 277 builtins.max: 'max', 278 builtins.min: 'min', 279 np.sum: 'sum', 280 np.mean: 'mean', 281 np.prod: 'prod', 282 np.std: 'std', 283 np.var: 'var', 284 np.median: 'median', 285 np.max: 'max', 286 np.min: 'min', 287 np.cumprod: 'cumprod', 288 np.cumsum: 'cumsum' 289 } 290 291 @property 292 def name(self): 293 if self._selection is None: 294 return None # 'result' 295 else: 296 return self._selection 297 298 @property 299 def _selection_list(self): 300 if not isinstance(self._selection, (list, tuple, ABCSeries, 301 ABCIndexClass, np.ndarray)): 302 return [self._selection] 303 return self._selection 304 305 @cache_readonly 306 def _selected_obj(self): 307 308 if self._selection is None or isinstance(self.obj, ABCSeries): 309 return self.obj 310 else: 311 return self.obj[self._selection] 312 313 @cache_readonly 314 def ndim(self): 315 return self._selected_obj.ndim 316 317 @cache_readonly 318 def _obj_with_exclusions(self): 319 if self._selection is not None and isinstance(self.obj, 320 ABCDataFrame): 321 return self.obj.reindex(columns=self._selection_list) 322 323 if len(self.exclusions) > 0: 324 return self.obj.drop(self.exclusions, axis=1) 325 else: 326 return self.obj 327 328 def __getitem__(self, key): 329 if self._selection is not None: 330 raise Exception('Column(s) %s already selected' % self._selection) 331 332 if isinstance(key, (list, tuple, ABCSeries, ABCIndexClass, 333 np.ndarray)): 334 if len(self.obj.columns.intersection(key)) != len(key): 335 bad_keys = list(set(key).difference(self.obj.columns)) 336 raise KeyError("Columns not found: %s" 337 % str(bad_keys)[1:-1]) 338 return self._gotitem(list(key), ndim=2) 339 340 elif not getattr(self, 'as_index', False): 341 if key not in self.obj.columns: 342 raise KeyError("Column not found: %s" % key) 343 return self._gotitem(key, ndim=2) 344 345 else: 346 if key not in self.obj: 347 raise KeyError("Column not found: %s" % key) 348 return self._gotitem(key, ndim=1) 349 350 def _gotitem(self, key, ndim, subset=None): 351 """ 352 sub-classes to define 353 return a sliced object 354 355 Parameters 356 ---------- 357 key : string / list of selections 358 ndim : 1,2 359 requested ndim of result 360 subset : object, default None 361 subset to act on 362 363 """ 364 raise AbstractMethodError(self) 365 366 _agg_doc = """Aggregate using input function or dict of {column -> 367 function} 368 369 Parameters 370 ---------- 371 arg : function or dict 372 Function to use for aggregating groups. If a function, must either 373 work when passed a DataFrame or when passed to DataFrame.apply. If 374 passed a dict, the keys must be DataFrame column names. 375 376 Accepted Combinations are: 377 - string cythonized function name 378 - function 379 - list of functions 380 - dict of columns -> functions 381 - nested dict of names -> dicts of functions 382 383 Notes 384 ----- 385 Numpy functions mean/median/prod/sum/std/var are special cased so the 386 default behavior is applying the function along axis=0 387 (e.g., np.mean(arr_2d, axis=0)) as opposed to 388 mimicking the default Numpy behavior (e.g., np.mean(arr_2d)). 
389 390 Returns 391 ------- 392 aggregated : DataFrame 393 """ 394 395 _see_also_template = """ 396 See also 397 -------- 398 pandas.Series.%(name)s 399 pandas.DataFrame.%(name)s 400 """ 401 402 def aggregate(self, func, *args, **kwargs): 403 raise AbstractMethodError(self) 404 405 agg = aggregate 406 407 def _aggregate(self, arg, *args, **kwargs): 408 """ 409 provide an implementation for the aggregators 410 411 Parameters 412 ---------- 413 arg : string, dict, function 414 *args : args to pass on to the function 415 **kwargs : kwargs to pass on to the function 416 417 Returns 418 ------- 419 tuple of result, how 420 421 Notes 422 ----- 423 how can be a string describe the required post-processing, or 424 None if not required 425 """ 426 427 is_aggregator = lambda x: isinstance(x, (list, tuple, dict)) 428 is_nested_renamer = False 429 430 _level = kwargs.pop('_level', None) 431 if isinstance(arg, compat.string_types): 432 return getattr(self, arg)(*args, **kwargs), None 433 434 if isinstance(arg, dict): 435 436 # aggregate based on the passed dict 437 if self.axis != 0: # pragma: no cover 438 raise ValueError('Can only pass dict with axis=0') 439 440 obj = self._selected_obj 441 442 # if we have a dict of any non-scalars 443 # eg. {'A' : ['mean']}, normalize all to 444 # be list-likes 445 if any(is_aggregator(x) for x in compat.itervalues(arg)): 446 new_arg = compat.OrderedDict() 447 for k, v in compat.iteritems(arg): 448 if not isinstance(v, (tuple, list, dict)): 449 new_arg[k] = [v] 450 else: 451 new_arg[k] = v 452 453 # the keys must be in the columns 454 # for ndim=2, or renamers for ndim=1 455 456 # ok 457 # {'A': { 'ra': 'mean' }} 458 # {'A': { 'ra': ['mean'] }} 459 # {'ra': ['mean']} 460 461 # not ok 462 # {'ra' : { 'A' : 'mean' }} 463 if isinstance(v, dict): 464 is_nested_renamer = True 465 466 if k not in obj.columns: 467 raise SpecificationError('cannot perform renaming ' 468 'for {0} with a nested ' 469 'dictionary'.format(k)) 470 471 arg = new_arg 472 473 from pandas.tools.merge import concat 474 475 def _agg_1dim(name, how, subset=None): 476 """ 477 aggregate a 1-dim with how 478 """ 479 colg = self._gotitem(name, ndim=1, subset=subset) 480 if colg.ndim != 1: 481 raise SpecificationError("nested dictionary is ambiguous " 482 "in aggregation") 483 return colg.aggregate(how, _level=(_level or 0) + 1) 484 485 def _agg_2dim(name, how): 486 """ 487 aggregate a 2-dim with how 488 """ 489 colg = self._gotitem(self._selection, ndim=2, 490 subset=obj) 491 return colg.aggregate(how, _level=None) 492 493 def _agg(arg, func): 494 """ 495 run the aggregations over the arg with func 496 return an OrderedDict 497 """ 498 result = compat.OrderedDict() 499 for fname, agg_how in compat.iteritems(arg): 500 result[fname] = func(fname, agg_how) 501 return result 502 503 # set the final keys 504 keys = list(compat.iterkeys(arg)) 505 result = compat.OrderedDict() 506 507 # nested renamer 508 if is_nested_renamer: 509 result = list(_agg(arg, _agg_1dim).values()) 510 511 if all(isinstance(r, dict) for r in result): 512 513 result, results = compat.OrderedDict(), result 514 for r in results: 515 result.update(r) 516 keys = list(compat.iterkeys(result)) 517 518 else: 519 520 if self._selection is not None: 521 keys = None 522 523 # some selection on the object 524 elif self._selection is not None: 525 526 sl = set(self._selection_list) 527 528 # we are a Series like object, 529 # but may have multiple aggregations 530 if len(sl) == 1: 531 532 result = _agg(arg, lambda fname, 533 agg_how: 
_agg_1dim(self._selection, agg_how)) 534 535 # we are selecting the same set as we are aggregating 536 elif not len(sl - set(compat.iterkeys(arg))): 537 538 result = _agg(arg, _agg_1dim) 539 540 # we are a DataFrame, with possibly multiple aggregations 541 else: 542 543 result = _agg(arg, _agg_2dim) 544 545 # no selection 546 else: 547 548 try: 549 result = _agg(arg, _agg_1dim) 550 except SpecificationError: 551 552 # we are aggregating expecting all 1d-returns 553 # but we have 2d 554 result = _agg(arg, _agg_2dim) 555 556 # combine results 557 if isinstance(result, list): 558 result = concat(result, keys=keys, axis=1) 559 elif isinstance(list(compat.itervalues(result))[0], 560 ABCDataFrame): 561 result = concat([result[k] for k in keys], keys=keys, axis=1) 562 else: 563 from pandas import DataFrame 564 result = DataFrame(result) 565 566 return result, True 567 elif hasattr(arg, '__iter__'): 568 return self._aggregate_multiple_funcs(arg, _level=_level), None 569 else: 570 result = None 571 572 cy_func = self._is_cython_func(arg) 573 if cy_func and not args and not kwargs: 574 return getattr(self, cy_func)(), None 575 576 # caller can react 577 return result, True 578 579 def _aggregate_multiple_funcs(self, arg, _level): 580 from pandas.tools.merge import concat 581 582 if self.axis != 0: 583 raise NotImplementedError("axis other than 0 is not supported") 584 585 if self._selected_obj.ndim == 1: 586 obj = self._selected_obj 587 else: 588 obj = self._obj_with_exclusions 589 590 results = [] 591 keys = [] 592 593 # degenerate case 594 if obj.ndim == 1: 595 for a in arg: 596 try: 597 colg = self._gotitem(obj.name, ndim=1, subset=obj) 598 results.append(colg.aggregate(a)) 599 600 # make sure we find a good name 601 name = com._get_callable_name(a) or a 602 keys.append(name) 603 except (TypeError, DataError): 604 pass 605 except SpecificationError: 606 raise 607 608 # multiples 609 else: 610 for col in obj: 611 try: 612 colg = self._gotitem(col, ndim=1, subset=obj[col]) 613 results.append(colg.aggregate(arg)) 614 keys.append(col) 615 except (TypeError, DataError): 616 pass 617 except SpecificationError: 618 raise 619 620 return concat(results, keys=keys, axis=1) 621 622 def _shallow_copy(self, obj=None, obj_type=None, **kwargs): 623 """ return a new object with the replacement attributes """ 624 if obj is None: 625 obj = self._selected_obj.copy() 626 if obj_type is None: 627 obj_type = self._constructor 628 if isinstance(obj, obj_type): 629 obj = obj.obj 630 for attr in self._attributes: 631 if attr not in kwargs: 632 kwargs[attr] = getattr(self, attr) 633 return obj_type(obj, **kwargs) 634 635 def _is_cython_func(self, arg): 636 """ if we define an internal function for this argument, return it """ 637 return self._cython_table.get(arg) 638 639 def _is_builtin_func(self, arg): 640 """ 641 if we define an builtin function for this argument, return it, 642 otherwise return the arg 643 """ 644 return self._builtin_table.get(arg, arg) 645 646 647 class GroupByMixin(object): 648 """ provide the groupby facilities to the mixed object """ 649 650 @staticmethod 651 def _dispatch(name, *args, **kwargs): 652 """ dispatch to apply """ 653 def outer(self, *args, **kwargs): 654 def f(x): 655 x = self._shallow_copy(x, groupby=self._groupby) 656 return getattr(x, name)(*args, **kwargs) 657 return self._groupby.apply(f) 658 outer.__name__ = name 659 return outer 660 661 def _gotitem(self, key, ndim, subset=None): 662 """ 663 sub-classes to define 664 return a sliced object 665 666 Parameters 667 ---------- 668 
key : string / list of selections 669 ndim : 1,2 670 requested ndim of result 671 subset : object, default None 672 subset to act on 673 """ 674 675 # create a new object to prevent aliasing 676 if subset is None: 677 subset = self.obj 678 679 # we need to make a shallow copy of ourselves 680 # with the same groupby 681 kwargs = dict([(attr, getattr(self, attr)) 682 for attr in self._attributes]) 683 self = self.__class__(subset, 684 groupby=self._groupby[key], 685 parent=self, 686 **kwargs) 687 self._reset_cache() 688 if subset.ndim == 2: 689 if is_scalar(key) and key in subset or is_list_like(key): 690 self._selection = key 691 return self 692 693 694 class FrozenList(PandasObject, list): 695 696 """ 697 Container that doesn't allow setting item *but* 698 because it's technically non-hashable, will be used 699 for lookups, appropriately, etc. 700 """ 701 # Sidenote: This has to be of type list, otherwise it messes up PyTables 702 # typechecks 703 704 def __add__(self, other): 705 if isinstance(other, tuple): 706 other = list(other) 707 return self.__class__(super(FrozenList, self).__add__(other)) 708 709 __iadd__ = __add__ 710 711 # Python 2 compat 712 def __getslice__(self, i, j): 713 return self.__class__(super(FrozenList, self).__getslice__(i, j)) 714 715 def __getitem__(self, n): 716 # Python 3 compat 717 if isinstance(n, slice): 718 return self.__class__(super(FrozenList, self).__getitem__(n)) 719 return super(FrozenList, self).__getitem__(n) 720 721 def __radd__(self, other): 722 if isinstance(other, tuple): 723 other = list(other) 724 return self.__class__(other + list(self)) 725 726 def __eq__(self, other): 727 if isinstance(other, (tuple, FrozenList)): 728 other = list(other) 729 return super(FrozenList, self).__eq__(other) 730 731 __req__ = __eq__ 732 733 def __mul__(self, other): 734 return self.__class__(super(FrozenList, self).__mul__(other)) 735 736 __imul__ = __mul__ 737 738 def __reduce__(self): 739 return self.__class__, (list(self),) 740 741 def __hash__(self): 742 return hash(tuple(self)) 743 744 def _disabled(self, *args, **kwargs): 745 """This method will not function because object is immutable.""" 746 raise TypeError("'%s' does not support mutable operations." % 747 self.__class__.__name__) 748 749 def __unicode__(self): 750 return pprint_thing(self, quote_strings=True, 751 escape_chars=('\t', '\r', '\n')) 752 753 def __repr__(self): 754 return "%s(%s)" % (self.__class__.__name__, 755 str(self)) 756 757 __setitem__ = __setslice__ = __delitem__ = __delslice__ = _disabled 758 pop = append = extend = remove = sort = insert = _disabled 759 760 761 class FrozenNDArray(PandasObject, np.ndarray): 762 763 # no __array_finalize__ for now because no metadata 764 def __new__(cls, data, dtype=None, copy=False): 765 if copy is None: 766 copy = not isinstance(data, FrozenNDArray) 767 res = np.array(data, dtype=dtype, copy=copy).view(cls) 768 return res 769 770 def _disabled(self, *args, **kwargs): 771 """This method will not function because object is immutable.""" 772 raise TypeError("'%s' does not support mutable operations." % 773 self.__class__) 774 775 __setitem__ = __setslice__ = __delitem__ = __delslice__ = _disabled 776 put = itemset = fill = _disabled 777 778 def _shallow_copy(self): 779 return self.view() 780 781 def values(self): 782 """returns *copy* of underlying array""" 783 arr = self.view(np.ndarray).copy() 784 return arr 785 786 def __unicode__(self): 787 """ 788 Return a string representation for this object. 789 790 Invoked by unicode(df) in py2 only. 
Yields a Unicode String in both 791 py2/py3. 792 """ 793 prepr = pprint_thing(self, escape_chars=('\t', '\r', '\n'), 794 quote_strings=True) 795 return "%s(%s, dtype='%s')" % (type(self).__name__, prepr, self.dtype) 796 797 798 class IndexOpsMixin(object): 799 """ common ops mixin to support a unified inteface / docs for Series / 800 Index 801 """ 802 803 # ndarray compatibility 804 __array_priority__ = 1000 805 806 def transpose(self, *args, **kwargs): 807 """ return the transpose, which is by definition self """ 808 nv.validate_transpose(args, kwargs) 809 return self 810 811 T = property(transpose, doc="return the transpose, which is by " 812 "definition self") 813 814 @property 815 def shape(self): 816 """ return a tuple of the shape of the underlying data """ 817 return self.values.shape 818 819 @property 820 def ndim(self): 821 """ return the number of dimensions of the underlying data, 822 by definition 1 823 """ 824 return 1 825 826 def item(self): 827 """ return the first element of the underlying data as a python 828 scalar 829 """ 830 try: 831 return self.values.item() 832 except IndexError: 833 # copy numpy's message here because Py26 raises an IndexError 834 raise ValueError('can only convert an array of size 1 to a ' 835 'Python scalar') 836 837 @property 838 def data(self): 839 """ return the data pointer of the underlying data """ 840 return self.values.data 841 842 @property 843 def itemsize(self): 844 """ return the size of the dtype of the item of the underlying data """ 845 return self.values.itemsize 846 847 @property 848 def nbytes(self): 849 """ return the number of bytes in the underlying data """ 850 return self.values.nbytes 851 852 @property 853 def strides(self): 854 """ return the strides of the underlying data """ 855 return self.values.strides 856 857 @property 858 def size(self): 859 """ return the number of elements in the underlying data """ 860 return self.values.size 861 862 @property 863 def flags(self): 864 """ return the ndarray.flags for the underlying data """ 865 return self.values.flags 866 867 @property 868 def base(self): 869 """ return the base object if the memory of the underlying data is 870 shared 871 """ 872 return self.values.base 873 874 @property 875 def _values(self): 876 """ the internal implementation """ 877 return self.values 878 879 def max(self): 880 """ The maximum value of the object """ 881 return nanops.nanmax(self.values) 882 883 def argmax(self, axis=None): 884 """ 885 return a ndarray of the maximum argument indexer 886 887 See also 888 -------- 889 numpy.ndarray.argmax 890 """ 891 return nanops.nanargmax(self.values) 892 893 def min(self): 894 """ The minimum value of the object """ 895 return nanops.nanmin(self.values) 896 897 def argmin(self, axis=None): 898 """ 899 return a ndarray of the minimum argument indexer 900 901 See also 902 -------- 903 numpy.ndarray.argmin 904 """ 905 return nanops.nanargmin(self.values) 906 907 @cache_readonly 908 def hasnans(self): 909 """ return if I have any nans; enables various perf speedups """ 910 return isnull(self).any() 911 912 def _reduce(self, op, name, axis=0, skipna=True, numeric_only=None, 913 filter_type=None, **kwds): 914 """ perform the reduction type operation if we can """ 915 func = getattr(self, name, None) 916 if func is None: 917 raise TypeError("{klass} cannot perform the operation {op}".format( 918 klass=self.__class__.__name__, op=name)) 919 return func(**kwds) 920 921 def value_counts(self, normalize=False, sort=True, ascending=False, 922 bins=None, dropna=True): 
923 """ 924 Returns object containing counts of unique values. 925 926 The resulting object will be in descending order so that the 927 first element is the most frequently-occurring element. 928 Excludes NA values by default. 929 930 Parameters 931 ---------- 932 normalize : boolean, default False 933 If True then the object returned will contain the relative 934 frequencies of the unique values. 935 sort : boolean, default True 936 Sort by values 937 ascending : boolean, default False 938 Sort in ascending order 939 bins : integer, optional 940 Rather than count values, group them into half-open bins, 941 a convenience for pd.cut, only works with numeric data 942 dropna : boolean, default True 943 Don't include counts of NaN. 944 945 Returns 946 ------- 947 counts : Series 948 """ 949 from pandas.core.algorithms import value_counts 950 result = value_counts(self, sort=sort, ascending=ascending, 951 normalize=normalize, bins=bins, dropna=dropna) 952 return result 953 954 _shared_docs['unique'] = ( 955 """ 956 Return %(unique)s of unique values in the object. 957 Significantly faster than numpy.unique. Includes NA values. 958 The order of the original is preserved. 959 960 Returns 961 ------- 962 uniques : %(unique)s 963 """) 964 965 @Appender(_shared_docs['unique'] % _indexops_doc_kwargs) 966 def unique(self): 967 values = self._values 968 969 if hasattr(values, 'unique'): 970 result = values.unique() 971 else: 972 from pandas.core.nanops import unique1d 973 result = unique1d(values) 974 return result 975 976 def nunique(self, dropna=True): 977 """ 978 Return number of unique elements in the object. 979 980 Excludes NA values by default. 981 982 Parameters 983 ---------- 984 dropna : boolean, default True 985 Don't include NaN in the count. 986 987 Returns 988 ------- 989 nunique : int 990 """ 991 uniqs = self.unique() 992 n = len(uniqs) 993 if dropna and isnull(uniqs).any(): 994 n -= 1 995 return n 996 997 @property 998 def is_unique(self): 999 """ 1000 Return boolean if values in the object are unique 1001 1002 Returns 1003 ------- 1004 is_unique : boolean 1005 """ 1006 return self.nunique() == len(self) 1007 1008 @property 1009 def is_monotonic(self): 1010 """ 1011 Return boolean if values in the object are 1012 monotonic_increasing 1013 1014 .. versionadded:: 0.19.0 1015 1016 Returns 1017 ------- 1018 is_monotonic : boolean 1019 """ 1020 from pandas import Index 1021 return Index(self).is_monotonic 1022 1023 is_monotonic_increasing = is_monotonic 1024 1025 @property 1026 def is_monotonic_decreasing(self): 1027 """ 1028 Return boolean if values in the object are 1029 monotonic_decreasing 1030 1031 .. 
versionadded:: 0.19.0 1032 1033 Returns 1034 ------- 1035 is_monotonic_decreasing : boolean 1036 """ 1037 from pandas import Index 1038 return Index(self).is_monotonic_decreasing 1039 1040 def memory_usage(self, deep=False): 1041 """ 1042 Memory usage of my values 1043 1044 Parameters 1045 ---------- 1046 deep : bool 1047 Introspect the data deeply, interrogate 1048 `object` dtypes for system-level memory consumption 1049 1050 Returns 1051 ------- 1052 bytes used 1053 1054 Notes 1055 ----- 1056 Memory usage does not include memory consumed by elements that 1057 are not components of the array if deep=False 1058 1059 See Also 1060 -------- 1061 numpy.ndarray.nbytes 1062 """ 1063 if hasattr(self.values, 'memory_usage'): 1064 return self.values.memory_usage(deep=deep) 1065 1066 v = self.values.nbytes 1067 if deep and is_object_dtype(self): 1068 v += lib.memory_usage_of_objects(self.values) 1069 return v 1070 1071 def factorize(self, sort=False, na_sentinel=-1): 1072 """ 1073 Encode the object as an enumerated type or categorical variable 1074 1075 Parameters 1076 ---------- 1077 sort : boolean, default False 1078 Sort by values 1079 na_sentinel: int, default -1 1080 Value to mark "not found" 1081 1082 Returns 1083 ------- 1084 labels : the indexer to the original array 1085 uniques : the unique Index 1086 """ 1087 from pandas.core.algorithms import factorize 1088 return factorize(self, sort=sort, na_sentinel=na_sentinel) 1089 1090 _shared_docs['searchsorted'] = ( 1091 """Find indices where elements should be inserted to maintain order. 1092 1093 Find the indices into a sorted %(klass)s `self` such that, if the 1094 corresponding elements in `v` were inserted before the indices, the 1095 order of `self` would be preserved. 1096 1097 Parameters 1098 ---------- 1099 %(value)s : array_like 1100 Values to insert into `self`. 1101 side : {'left', 'right'}, optional 1102 If 'left', the index of the first suitable location found is given. 1103 If 'right', return the last such index. If there is no suitable 1104 index, return either 0 or N (where N is the length of `self`). 1105 sorter : 1-D array_like, optional 1106 Optional array of integer indices that sort `self` into ascending 1107 order. They are typically the result of ``np.argsort``. 1108 1109 Returns 1110 ------- 1111 indices : array of ints 1112 Array of insertion points with the same shape as `v`. 1113 1114 See Also 1115 -------- 1116 numpy.searchsorted 1117 1118 Notes 1119 ----- 1120 Binary search is used to find the required insertion points. 
1121 1122 Examples 1123 -------- 1124 >>> x = pd.Series([1, 2, 3]) 1125 >>> x 1126 0 1 1127 1 2 1128 2 3 1129 dtype: int64 1130 >>> x.searchsorted(4) 1131 array([3]) 1132 >>> x.searchsorted([0, 4]) 1133 array([0, 3]) 1134 >>> x.searchsorted([1, 3], side='left') 1135 array([0, 2]) 1136 >>> x.searchsorted([1, 3], side='right') 1137 array([1, 3]) 1138 >>> 1139 >>> x = pd.Categorical(['apple', 'bread', 'bread', 'cheese', 'milk' ]) 1140 [apple, bread, bread, cheese, milk] 1141 Categories (4, object): [apple < bread < cheese < milk] 1142 >>> x.searchsorted('bread') 1143 array([1]) # Note: an array, not a scalar 1144 >>> x.searchsorted(['bread']) 1145 array([1]) 1146 >>> x.searchsorted(['bread', 'eggs']) 1147 array([1, 4]) 1148 >>> x.searchsorted(['bread', 'eggs'], side='right') 1149 array([3, 4]) # eggs before milk 1150 """) 1151 1152 @Substitution(klass='IndexOpsMixin', value='key') 1153 @Appender(_shared_docs['searchsorted']) 1154 def searchsorted(self, key, side='left', sorter=None): 1155 # needs coercion on the key (DatetimeIndex does already) 1156 return self.values.searchsorted(key, side=side, sorter=sorter) 1157 1158 _shared_docs['drop_duplicates'] = ( 1159 """Return %(klass)s with duplicate values removed 1160 1161 Parameters 1162 ---------- 1163 1164 keep : {'first', 'last', False}, default 'first' 1165 - ``first`` : Drop duplicates except for the first occurrence. 1166 - ``last`` : Drop duplicates except for the last occurrence. 1167 - False : Drop all duplicates. 1168 take_last : deprecated 1169 %(inplace)s 1170 1171 Returns 1172 ------- 1173 deduplicated : %(klass)s 1174 """) 1175 1176 @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', 1177 False: 'first'}) 1178 @Appender(_shared_docs['drop_duplicates'] % _indexops_doc_kwargs) 1179 def drop_duplicates(self, keep='first', inplace=False): 1180 if isinstance(self, ABCIndexClass): 1181 if self.is_unique: 1182 return self._shallow_copy() 1183 1184 duplicated = self.duplicated(keep=keep) 1185 result = self[np.logical_not(duplicated)] 1186 if inplace: 1187 return self._update_inplace(result) 1188 else: 1189 return result 1190 1191 _shared_docs['duplicated'] = ( 1192 """Return boolean %(duplicated)s denoting duplicate values 1193 1194 Parameters 1195 ---------- 1196 keep : {'first', 'last', False}, default 'first' 1197 - ``first`` : Mark duplicates as ``True`` except for the first 1198 occurrence. 1199 - ``last`` : Mark duplicates as ``True`` except for the last 1200 occurrence. 1201 - False : Mark all duplicates as ``True``. 1202 take_last : deprecated 1203 1204 Returns 1205 ------- 1206 duplicated : %(duplicated)s 1207 """) 1208 1209 @deprecate_kwarg('take_last', 'keep', mapping={True: 'last', 1210 False: 'first'}) 1211 @Appender(_shared_docs['duplicated'] % _indexops_doc_kwargs) 1212 def duplicated(self, keep='first'): 1213 from pandas.core.algorithms import duplicated 1214 if isinstance(self, ABCIndexClass): 1215 if self.is_unique: 1216 return np.zeros(len(self), dtype=np.bool) 1217 return duplicated(self, keep=keep) 1218 else: 1219 return self._constructor(duplicated(self, keep=keep), 1220 index=self.index).__finalize__(self) 1221 1222 # ---------------------------------------------------------------------- 1223 # abstracts 1224 1225 def _update_inplace(self, result, **kwargs): 1226 raise AbstractMethodError(self) 1227 [end of pandas/core/base.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. 
Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
2e276fb2fec6dd04d6abcf5e79c03853ee86cd24
documentation improvement: set operations sort the index
Though clear from the examples, an explicit hint that set operations will re-sort your indices in ascending order would be helpful (section 7.3.1 "Set operations on Index objects"). I had indices like "QK_1 ... QK_9 QK_10" and afterwards they got sorted as "QK_1 QK_10 QK_2...".
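A minimal sketch (not part of the original report) of the behaviour being described, assuming small example indices with the same kind of labels: Index set operations return their result sorted in ascending order, which for string labels means lexicographic order, so "QK_10" lands before "QK_2". Exact reprs may vary slightly between pandas versions.

```python
import pandas as pd

# Hypothetical labels mirroring the ones mentioned in the report.
idx1 = pd.Index(['QK_1', 'QK_9', 'QK_10'])
idx2 = pd.Index(['QK_2', 'QK_9'])

# The union is returned sorted in ascending (lexicographic) order,
# so 'QK_10' sorts before 'QK_2'.
print(idx1.union(idx2))
# Index(['QK_1', 'QK_10', 'QK_2', 'QK_9'], dtype='object')

# Intersection keeps only the shared label.
print(idx1.intersection(idx2))
# Index(['QK_9'], dtype='object')
```

The documentation patch further below simply records this sorting behaviour in the indexing docs.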
pls do a PR! @drunkeneye can you do a PR for this? @drunkeneye ping! Sorry, for now I do not have the time to check out the project and commit a PR. @drunkeneye np ... feel free when you have some time. @drunkeneye PR for this?
2016-11-10T06:40:35Z
<patch> diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst --- a/doc/source/indexing.rst +++ b/doc/source/indexing.rst @@ -1467,6 +1467,10 @@ with duplicates dropped. idx1.symmetric_difference(idx2) idx1 ^ idx2 +.. note:: + + The resulting index from a set operation will be sorted in ascending order. + Missing values ~~~~~~~~~~~~~~ </patch>
[]
[]
pandas-dev__pandas-25729
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add the possibility of callable to ValueError message raised by corr #### Code Sample, a copy-pastable example if possible ```python import pandas as pd from scipy.stats import pearsonr df = pd.DataFrame({'A': [1,2,3], 'B': [2,5,6]}) df.corr(method='a') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-19-be8a09a85708> in <module> 2 3 df = pd.DataFrame({'A': [1,2,3], 'B': [2,5,6]}) ----> 4 df.corr(method='a') ~/miniconda3/envs/spols190117/lib/python3.6/site-packages/pandas/core/frame.py in corr(self, method, min_periods) 7034 raise ValueError("method must be either 'pearson', " 7035 "'spearman', or 'kendall', '{method}' " -> 7036 "was supplied".format(method=method)) 7037 7038 return self._constructor(correl, index=idx, columns=cols) ValueError: method must be either 'pearson', 'spearman', or 'kendall', 'a' was supplied ``` #### Problem description The `ValueError` in the example above does not mention that a `callable` could be supplied to `corr` as well. I would suggest to add this to the error message. #### Expected Output ```python ValueError: method must be either 'pearson', 'spearman', 'kendall' or callable, 'a' was supplied ``` #### Output of ``pd.show_versions()`` <details> ```python INSTALLED VERSIONS ------------------ commit: None python: 3.6.8.final.0 python-bits: 64 OS: Linux OS-release: 4.4.165-81-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: en_US.UTF-8 LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 pandas: 0.24.2 pytest: 4.1.1 pip: 18.1 setuptools: 40.6.3 Cython: 0.29.3 numpy: 1.15.4 scipy: 1.2.0 pyarrow: None xarray: None IPython: 7.2.0 sphinx: 1.8.3 patsy: 0.5.1 dateutil: 2.7.5 pytz: 2018.9 blosc: None bottleneck: None tables: 3.4.4 numexpr: 2.6.9 feather: None matplotlib: 3.0.2 openpyxl: 2.4.0-b1 xlrd: 1.2.0 xlwt: None xlsxwriter: None lxml.etree: 4.3.0 bs4: 4.7.1 html5lib: 1.0.1 sqlalchemy: None pymysql: None psycopg2: None jinja2: 2.10 s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None gcsfs: None ``` </details> </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /> 30 </a> 31 </td> 32 </tr> 33 <tr> 34 <td>License</td> 35 <td> 36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 38 </a> 39 </td> 40 </tr> 41 <tr> 42 <td>Build Status</td> 43 <td> 44 <a href="https://travis-ci.org/pandas-dev/pandas"> 45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 46 </a> 47 </td> 48 </tr> 49 <tr> 50 <td></td> 51 <td> 52 <a 
href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master"> 53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" /> 54 </a> 55 </td> 56 </tr> 57 <tr> 58 <td>Coverage</td> 59  <td> 60 <a href="https://codecov.io/gh/pandas-dev/pandas"> 61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 62 </a> 63 </td> 64 </tr> 65 <tr> 66 <td>Downloads</td> 67 <td> 68 <a href="https://pandas.pydata.org"> 69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 70 </a> 71 </td> 72 </tr> 73 <tr> 74 <td>Gitter</td> 75 <td> 76 <a href="https://gitter.im/pydata/pandas"> 77 <img src="https://badges.gitter.im/Join%20Chat.svg" /> 78 </a> 79 </td> 80 </tr> 81 </table> 82 83 84 85 ## What is it? 86 87 **pandas** is a Python package providing fast, flexible, and expressive data 88 structures designed to make working with "relational" or "labeled" data both 89 easy and intuitive. It aims to be the fundamental high-level building block for 90 doing practical, **real world** data analysis in Python. Additionally, it has 91 the broader goal of becoming **the most powerful and flexible open source data 92 analysis / manipulation tool available in any language**. It is already well on 93 its way towards this goal. 94 95 ## Main Features 96 Here are just a few of the things that pandas does well: 97 98 - Easy handling of [**missing data**][missing-data] (represented as 99 `NaN`) in floating point as well as non-floating point data 100 - Size mutability: columns can be [**inserted and 101 deleted**][insertion-deletion] from DataFrame and higher dimensional 102 objects 103 - Automatic and explicit [**data alignment**][alignment]: objects can 104 be explicitly aligned to a set of labels, or the user can simply 105 ignore the labels and let `Series`, `DataFrame`, etc. automatically 106 align the data for you in computations 107 - Powerful, flexible [**group by**][groupby] functionality to perform 108 split-apply-combine operations on data sets, for both aggregating 109 and transforming data 110 - Make it [**easy to convert**][conversion] ragged, 111 differently-indexed data in other Python and NumPy data structures 112 into DataFrame objects 113 - Intelligent label-based [**slicing**][slicing], [**fancy 114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 115 large data sets 116 - Intuitive [**merging**][merging] and [**joining**][joining] data 117 sets 118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 119 data sets 120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 121 labels per tick) 122 - Robust IO tools for loading data from [**flat files**][flat-files] 123 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 125 - [**Time series**][timeseries]-specific functionality: date range 126 generation and frequency conversion, moving window statistics, 127 moving window linear regressions, date shifting and lagging, etc. 
128 129 130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 148 149 ## Where to get it 150 The source code is currently hosted on GitHub at: 151 https://github.com/pandas-dev/pandas 152 153 Binary installers for the latest released version are available at the [Python 154 package index](https://pypi.org/project/pandas) and on conda. 155 156 ```sh 157 # conda 158 conda install pandas 159 ``` 160 161 ```sh 162 # or PyPI 163 pip install pandas 164 ``` 165 166 ## Dependencies 167 - [NumPy](https://www.numpy.org): 1.12.0 or higher 168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 169 - [pytz](https://pythonhosted.org/pytz): 2011k or higher 170 171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 172 for recommended and optional dependencies. 173 174 ## Installation from sources 175 To install pandas from source you need Cython in addition to the normal 176 dependencies above. Cython can be installed from pypi: 177 178 ```sh 179 pip install cython 180 ``` 181 182 In the `pandas` directory (same one where you found this file after 183 cloning the git repo), execute: 184 185 ```sh 186 python setup.py install 187 ``` 188 189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 190 191 ```sh 192 python setup.py develop 193 ``` 194 195 Alternatively, you can use `pip` if you want all the dependencies pulled 196 in automatically (the `-e` option is for installing it in [development 197 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 198 199 ```sh 200 pip install -e . 201 ``` 202 203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
204 205 ## License 206 [BSD 3](LICENSE) 207 208 ## Documentation 209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 210 211 ## Background 212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 213 has been under active development since then. 214 215 ## Getting Help 216 217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 219 220 ## Discussion and Development 221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 222 223 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 224 225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 226 227 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub. 228 229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 230 231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 232 233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 234 235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 236 [end of README.md] [start of doc/source/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # pandas documentation build configuration file, created by 4 # 5 # This file is execfile()d with the current directory set to its containing 6 # dir. 7 # 8 # Note that not all possible configuration values are present in this 9 # autogenerated file. 10 # 11 # All configuration values have a default; values that are commented out 12 # serve to show the default. 
13 14 import sys 15 import os 16 import inspect 17 import importlib 18 import logging 19 import warnings 20 import jinja2 21 from sphinx.ext.autosummary import _import_by_name 22 from numpydoc.docscrape import NumpyDocString 23 from numpydoc.docscrape_sphinx import SphinxDocString 24 25 logger = logging.getLogger(__name__) 26 27 # https://github.com/sphinx-doc/sphinx/pull/2325/files 28 # Workaround for sphinx-build recursion limit overflow: 29 # pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL) 30 # RuntimeError: maximum recursion depth exceeded while pickling an object 31 # 32 # Python's default allowed recursion depth is 1000. 33 sys.setrecursionlimit(5000) 34 35 # If extensions (or modules to document with autodoc) are in another directory, 36 # add these directories to sys.path here. If the directory is relative to the 37 # documentation root, use os.path.abspath to make it absolute, like shown here. 38 # sys.path.append(os.path.abspath('.')) 39 sys.path.insert(0, os.path.abspath('../sphinxext')) 40 sys.path.extend([ 41 42 # numpy standard doc extensions 43 os.path.join(os.path.dirname(__file__), 44 '..', '../..', 45 'sphinxext') 46 47 ]) 48 49 # -- General configuration ----------------------------------------------- 50 51 # Add any Sphinx extension module names here, as strings. They can be 52 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 53 # sphinxext. 54 55 extensions = ['sphinx.ext.autodoc', 56 'sphinx.ext.autosummary', 57 'sphinx.ext.doctest', 58 'sphinx.ext.extlinks', 59 'sphinx.ext.todo', 60 'numpydoc', # handle NumPy documentation formatted docstrings 61 'IPython.sphinxext.ipython_directive', 62 'IPython.sphinxext.ipython_console_highlighting', 63 'matplotlib.sphinxext.plot_directive', 64 'sphinx.ext.intersphinx', 65 'sphinx.ext.coverage', 66 'sphinx.ext.mathjax', 67 'sphinx.ext.ifconfig', 68 'sphinx.ext.linkcode', 69 'nbsphinx', 70 'contributors', # custom pandas extension 71 ] 72 73 exclude_patterns = ['**.ipynb_checkpoints'] 74 try: 75 import nbconvert 76 except ImportError: 77 logger.warn('nbconvert not installed. Skipping notebooks.') 78 exclude_patterns.append('**/*.ipynb') 79 else: 80 try: 81 nbconvert.utils.pandoc.get_pandoc_version() 82 except nbconvert.utils.pandoc.PandocMissing: 83 logger.warn('Pandoc not installed. Skipping notebooks.') 84 exclude_patterns.append('**/*.ipynb') 85 86 # sphinx_pattern can be '-api' to exclude the API pages, 87 # the path to a file, or a Python object 88 # (e.g. 
'10min.rst' or 'pandas.DataFrame.head') 89 source_path = os.path.dirname(os.path.abspath(__file__)) 90 pattern = os.environ.get('SPHINX_PATTERN') 91 if pattern: 92 for dirname, dirs, fnames in os.walk(source_path): 93 for fname in fnames: 94 if os.path.splitext(fname)[-1] in ('.rst', '.ipynb'): 95 fname = os.path.relpath(os.path.join(dirname, fname), 96 source_path) 97 98 if (fname == 'index.rst' 99 and os.path.abspath(dirname) == source_path): 100 continue 101 elif pattern == '-api' and dirname == 'reference': 102 exclude_patterns.append(fname) 103 elif pattern != '-api' and fname != pattern: 104 exclude_patterns.append(fname) 105 106 with open(os.path.join(source_path, 'index.rst.template')) as f: 107 t = jinja2.Template(f.read()) 108 with open(os.path.join(source_path, 'index.rst'), 'w') as f: 109 f.write(t.render(include_api=pattern is None, 110 single_doc=(pattern 111 if pattern is not None and pattern != '-api' 112 else None))) 113 autosummary_generate = True if pattern is None else ['index'] 114 115 # matplotlib plot directive 116 plot_include_source = True 117 plot_formats = [("png", 90)] 118 plot_html_show_formats = False 119 plot_html_show_source_link = False 120 plot_pre_code = """import numpy as np 121 import pandas as pd""" 122 123 # Add any paths that contain templates here, relative to this directory. 124 templates_path = ['../_templates'] 125 126 # The suffix of source filenames. 127 source_suffix = [ 128 '.rst', 129 ] 130 131 # The encoding of source files. 132 source_encoding = 'utf-8' 133 134 # The master toctree document. 135 master_doc = 'index' 136 137 # General information about the project. 138 project = u'pandas' 139 copyright = u'2008-2014, the pandas development team' 140 141 # The version info for the project you're documenting, acts as replacement for 142 # |version| and |release|, also used in various other places throughout the 143 # built documents. 144 # 145 # The short X.Y version. 146 import pandas 147 148 # version = '%s r%s' % (pandas.__version__, svn_version()) 149 version = str(pandas.__version__) 150 151 # The full version, including alpha/beta/rc tags. 152 release = version 153 154 # The language for content autogenerated by Sphinx. Refer to documentation 155 # for a list of supported languages. 156 # language = None 157 158 # There are two options for replacing |today|: either, you set today to some 159 # non-false value, then it is used: 160 # today = '' 161 # Else, today_fmt is used as the format for a strftime call. 162 # today_fmt = '%B %d, %Y' 163 164 # List of documents that shouldn't be included in the build. 165 # unused_docs = [] 166 167 # List of directories, relative to source directory, that shouldn't be searched 168 # for source files. 169 exclude_trees = [] 170 171 # The reST default role (used for this markup: `text`) to use for all 172 # documents. default_role = None 173 174 # If true, '()' will be appended to :func: etc. cross-reference text. 175 # add_function_parentheses = True 176 177 # If true, the current module name will be prepended to all description 178 # unit titles (such as .. function::). 179 # add_module_names = True 180 181 # If true, sectionauthor and moduleauthor directives will be shown in the 182 # output. They are ignored by default. 183 # show_authors = False 184 185 # The name of the Pygments (syntax highlighting) style to use. 186 pygments_style = 'sphinx' 187 188 # A list of ignored prefixes for module index sorting. 
189 # modindex_common_prefix = [] 190 191 192 # -- Options for HTML output --------------------------------------------- 193 194 # The theme to use for HTML and HTML Help pages. Major themes that come with 195 # Sphinx are currently 'default' and 'sphinxdoc'. 196 html_theme = 'nature_with_gtoc' 197 198 # The style sheet to use for HTML and HTML Help pages. A file of that name 199 # must exist either in Sphinx' static/ path, or in one of the custom paths 200 # given in html_static_path. 201 # html_style = 'statsmodels.css' 202 203 # Theme options are theme-specific and customize the look and feel of a theme 204 # further. For a list of options available for each theme, see the 205 # documentation. 206 # html_theme_options = {} 207 208 # Add any paths that contain custom themes here, relative to this directory. 209 html_theme_path = ['themes'] 210 211 # The name for this set of Sphinx documents. If None, it defaults to 212 # "<project> v<release> documentation". 213 # html_title = None 214 215 # A shorter title for the navigation bar. Default is the same as html_title. 216 # html_short_title = None 217 218 # The name of an image file (relative to this directory) to place at the top 219 # of the sidebar. 220 # html_logo = None 221 222 # Add any paths that contain custom static files (such as style sheets) here, 223 # relative to this directory. They are copied after the builtin static files, 224 # so a file named "default.css" will overwrite the builtin "default.css". 225 html_static_path = ['_static'] 226 227 # The name of an image file (within the static path) to use as favicon of the 228 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 229 # pixels large. 230 html_favicon = os.path.join(html_static_path[0], 'favicon.ico') 231 232 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 233 # using the given strftime format. 234 # html_last_updated_fmt = '%b %d, %Y' 235 236 # If true, SmartyPants will be used to convert quotes and dashes to 237 # typographically correct entities. 238 # html_use_smartypants = True 239 240 # Custom sidebar templates, maps document names to template names. 241 # html_sidebars = {} 242 243 # Additional templates that should be rendered to pages, maps page names to 244 # template names. 
245 246 # Add redirect for previously existing API pages 247 # each item is like `(from_old, to_new)` 248 # To redirect a class and all its methods, see below 249 # https://github.com/pandas-dev/pandas/issues/16186 250 251 moved_api_pages = [ 252 ('pandas.core.common.isnull', 'pandas.isna'), 253 ('pandas.core.common.notnull', 'pandas.notna'), 254 ('pandas.core.reshape.get_dummies', 'pandas.get_dummies'), 255 ('pandas.tools.merge.concat', 'pandas.concat'), 256 ('pandas.tools.merge.merge', 'pandas.merge'), 257 ('pandas.tools.pivot.pivot_table', 'pandas.pivot_table'), 258 ('pandas.tseries.tools.to_datetime', 'pandas.to_datetime'), 259 ('pandas.io.clipboard.read_clipboard', 'pandas.read_clipboard'), 260 ('pandas.io.excel.ExcelFile.parse', 'pandas.ExcelFile.parse'), 261 ('pandas.io.excel.read_excel', 'pandas.read_excel'), 262 ('pandas.io.gbq.read_gbq', 'pandas.read_gbq'), 263 ('pandas.io.html.read_html', 'pandas.read_html'), 264 ('pandas.io.json.read_json', 'pandas.read_json'), 265 ('pandas.io.parsers.read_csv', 'pandas.read_csv'), 266 ('pandas.io.parsers.read_fwf', 'pandas.read_fwf'), 267 ('pandas.io.parsers.read_table', 'pandas.read_table'), 268 ('pandas.io.pickle.read_pickle', 'pandas.read_pickle'), 269 ('pandas.io.pytables.HDFStore.append', 'pandas.HDFStore.append'), 270 ('pandas.io.pytables.HDFStore.get', 'pandas.HDFStore.get'), 271 ('pandas.io.pytables.HDFStore.put', 'pandas.HDFStore.put'), 272 ('pandas.io.pytables.HDFStore.select', 'pandas.HDFStore.select'), 273 ('pandas.io.pytables.read_hdf', 'pandas.read_hdf'), 274 ('pandas.io.sql.read_sql', 'pandas.read_sql'), 275 ('pandas.io.sql.read_frame', 'pandas.read_frame'), 276 ('pandas.io.sql.write_frame', 'pandas.write_frame'), 277 ('pandas.io.stata.read_stata', 'pandas.read_stata'), 278 ] 279 280 # Again, tuples of (from_old, to_new) 281 moved_classes = [ 282 ('pandas.tseries.resample.Resampler', 'pandas.core.resample.Resampler'), 283 ('pandas.formats.style.Styler', 'pandas.io.formats.style.Styler'), 284 ] 285 286 for old, new in moved_classes: 287 # the class itself... 288 moved_api_pages.append((old, new)) 289 290 mod, classname = new.rsplit('.', 1) 291 klass = getattr(importlib.import_module(mod), classname) 292 methods = [x for x in dir(klass) 293 if not x.startswith('_') or x in ('__iter__', '__array__')] 294 295 for method in methods: 296 # ... and each of its public methods 297 moved_api_pages.append( 298 ("{old}.{method}".format(old=old, method=method), 299 "{new}.{method}".format(new=new, method=method)) 300 ) 301 302 if pattern is None: 303 html_additional_pages = { 304 'generated/' + page[0]: 'api_redirect.html' 305 for page in moved_api_pages 306 } 307 308 309 header = """\ 310 .. currentmodule:: pandas 311 312 .. ipython:: python 313 :suppress: 314 315 import numpy as np 316 import pandas as pd 317 318 randn = np.random.randn 319 np.random.seed(123456) 320 np.set_printoptions(precision=4, suppress=True) 321 pd.options.display.max_rows = 15 322 323 import os 324 os.chdir('{}') 325 """.format(os.path.dirname(os.path.dirname(__file__))) 326 327 328 html_context = { 329 'redirects': {old: new for old, new in moved_api_pages}, 330 'header': header 331 } 332 333 # If false, no module index is generated. 334 html_use_modindex = True 335 336 # If false, no index is generated. 337 # html_use_index = True 338 339 # If true, the index is split into individual pages for each letter. 340 # html_split_index = False 341 342 # If true, links to the reST sources are added to the pages. 
343 # html_show_sourcelink = True 344 345 # If true, an OpenSearch description file will be output, and all pages will 346 # contain a <link> tag referring to it. The value of this option must be the 347 # base URL from which the finished HTML is served. 348 # html_use_opensearch = '' 349 350 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). 351 # html_file_suffix = '' 352 353 # Output file base name for HTML help builder. 354 htmlhelp_basename = 'pandas' 355 356 # -- Options for nbsphinx ------------------------------------------------ 357 358 nbsphinx_allow_errors = True 359 360 # -- Options for LaTeX output -------------------------------------------- 361 362 latex_elements = {} 363 364 # The paper size ('letter' or 'a4'). 365 # latex_paper_size = 'letter' 366 367 # The font size ('10pt', '11pt' or '12pt'). 368 # latex_font_size = '10pt' 369 370 # Grouping the document tree into LaTeX files. List of tuples (source start 371 # file, target name, title, author, documentclass [howto/manual]). 372 latex_documents = [ 373 ('index', 'pandas.tex', 374 'pandas: powerful Python data analysis toolkit', 375 r'Wes McKinney\n\& PyData Development Team', 'manual'), 376 ] 377 378 # The name of an image file (relative to this directory) to place at the top of 379 # the title page. 380 # latex_logo = None 381 382 # For "manual" documents, if this is true, then toplevel headings are parts, 383 # not chapters. 384 # latex_use_parts = False 385 386 # Additional stuff for the LaTeX preamble. 387 # latex_preamble = '' 388 389 # Documents to append as an appendix to all manuals. 390 # latex_appendices = [] 391 392 # If false, no module index is generated. 393 # latex_use_modindex = True 394 395 396 if pattern is None: 397 intersphinx_mapping = { 398 'dateutil': ("https://dateutil.readthedocs.io/en/latest/", None), 399 'matplotlib': ('https://matplotlib.org/', None), 400 'numpy': ('https://docs.scipy.org/doc/numpy/', None), 401 'pandas-gbq': ('https://pandas-gbq.readthedocs.io/en/latest/', None), 402 'py': ('https://pylib.readthedocs.io/en/latest/', None), 403 'python': ('https://docs.python.org/3/', None), 404 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None), 405 'statsmodels': ('http://www.statsmodels.org/devel/', None), 406 } 407 408 # extlinks alias 409 extlinks = {'issue': ('https://github.com/pandas-dev/pandas/issues/%s', 410 'GH'), 411 'wiki': ('https://github.com/pandas-dev/pandas/wiki/%s', 412 'wiki ')} 413 414 415 # ignore all deprecation warnings from Panel during doc build 416 # (to avoid the need to add :okwarning: in many places) 417 warnings.filterwarnings("ignore", message="\nPanel is deprecated", 418 category=FutureWarning) 419 420 421 ipython_warning_is_error = False 422 ipython_exec_lines = [ 423 'import numpy as np', 424 'import pandas as pd', 425 # This ensures correct rendering on system with console encoding != utf8 426 # (windows). It forces pandas to encode its output reprs using utf8 427 # wherever the docs are built. The docs' target is the browser, not 428 # the console, so this is fine. 
429 'pd.options.display.encoding="utf8"' 430 ] 431 432 433 def sphinxdocstring_str(self, indent=0, func_role="obj"): 434 # Pandas displays Attributes section in style like Methods section 435 436 # Function is copy of `SphinxDocString.__str__` 437 ns = { 438 'signature': self._str_signature(), 439 'index': self._str_index(), 440 'summary': self._str_summary(), 441 'extended_summary': self._str_extended_summary(), 442 'parameters': self._str_param_list('Parameters'), 443 'returns': self._str_returns('Returns'), 444 'yields': self._str_returns('Yields'), 445 'other_parameters': self._str_param_list('Other Parameters'), 446 'raises': self._str_param_list('Raises'), 447 'warns': self._str_param_list('Warns'), 448 'warnings': self._str_warnings(), 449 'see_also': self._str_see_also(func_role), 450 'notes': self._str_section('Notes'), 451 'references': self._str_references(), 452 'examples': self._str_examples(), 453 # Replaced `self._str_param_list('Attributes', fake_autosummary=True)` 454 # with `self._str_member_list('Attributes')` 455 'attributes': self._str_member_list('Attributes'), 456 'methods': self._str_member_list('Methods'), 457 } 458 ns = {k: '\n'.join(v) for k, v in ns.items()} 459 460 rendered = self.template.render(**ns) 461 return '\n'.join(self._str_indent(rendered.split('\n'), indent)) 462 463 464 SphinxDocString.__str__ = sphinxdocstring_str 465 466 467 # Fix "WARNING: Inline strong start-string without end-string." 468 # PR #155 "Escape the * in *args and **kwargs" from numpydoc 469 # Can be removed after PR merges in v0.9.0 470 def decorate_process_param(func): 471 def _escape_args_and_kwargs(name): 472 if name[:2] == '**': 473 return r'\*\*' + name[2:] 474 elif name[:1] == '*': 475 return r'\*' + name[1:] 476 else: 477 return name 478 479 def func_wrapper(self, param, desc, fake_autosummary): 480 param = _escape_args_and_kwargs(param.strip()) 481 return func(self, param, desc, fake_autosummary) 482 483 return func_wrapper 484 485 486 func = SphinxDocString._process_param 487 SphinxDocString._process_param = decorate_process_param(func) 488 489 # Add custom Documenter to handle attributes/methods of an AccessorProperty 490 # eg pandas.Series.str and pandas.Series.dt (see GH9322) 491 492 import sphinx 493 from sphinx.util import rpartition 494 from sphinx.ext.autodoc import ( 495 Documenter, MethodDocumenter, AttributeDocumenter) 496 from sphinx.ext.autosummary import Autosummary 497 498 499 class AccessorDocumenter(MethodDocumenter): 500 """ 501 Specialized Documenter subclass for accessors. 502 """ 503 objtype = 'accessor' 504 directivetype = 'method' 505 506 # lower than MethodDocumenter so this is not chosen for normal methods 507 priority = 0.6 508 509 def format_signature(self): 510 # this method gives an error/warning for the accessors, therefore 511 # overriding it (accessor has no arguments) 512 return '' 513 514 515 class AccessorLevelDocumenter(Documenter): 516 """ 517 Specialized Documenter subclass for objects on accessor level (methods, 518 attributes). 
519 """ 520 # This is the simple straightforward version 521 # modname is None, base the last elements (eg 'hour') 522 # and path the part before (eg 'Series.dt') 523 # def resolve_name(self, modname, parents, path, base): 524 # modname = 'pandas' 525 # mod_cls = path.rstrip('.') 526 # mod_cls = mod_cls.split('.') 527 # 528 # return modname, mod_cls + [base] 529 def resolve_name(self, modname, parents, path, base): 530 if modname is None: 531 if path: 532 mod_cls = path.rstrip('.') 533 else: 534 mod_cls = None 535 # if documenting a class-level object without path, 536 # there must be a current class, either from a parent 537 # auto directive ... 538 mod_cls = self.env.temp_data.get('autodoc:class') 539 # ... or from a class directive 540 if mod_cls is None: 541 mod_cls = self.env.temp_data.get('py:class') 542 # ... if still None, there's no way to know 543 if mod_cls is None: 544 return None, [] 545 # HACK: this is added in comparison to ClassLevelDocumenter 546 # mod_cls still exists of class.accessor, so an extra 547 # rpartition is needed 548 modname, accessor = rpartition(mod_cls, '.') 549 modname, cls = rpartition(modname, '.') 550 parents = [cls, accessor] 551 # if the module name is still missing, get it like above 552 if not modname: 553 modname = self.env.temp_data.get('autodoc:module') 554 if not modname: 555 if sphinx.__version__ > '1.3': 556 modname = self.env.ref_context.get('py:module') 557 else: 558 modname = self.env.temp_data.get('py:module') 559 # ... else, it stays None, which means invalid 560 return modname, parents + [base] 561 562 563 class AccessorAttributeDocumenter(AccessorLevelDocumenter, 564 AttributeDocumenter): 565 objtype = 'accessorattribute' 566 directivetype = 'attribute' 567 568 # lower than AttributeDocumenter so this is not chosen for normal 569 # attributes 570 priority = 0.6 571 572 573 class AccessorMethodDocumenter(AccessorLevelDocumenter, MethodDocumenter): 574 objtype = 'accessormethod' 575 directivetype = 'method' 576 577 # lower than MethodDocumenter so this is not chosen for normal methods 578 priority = 0.6 579 580 581 class AccessorCallableDocumenter(AccessorLevelDocumenter, MethodDocumenter): 582 """ 583 This documenter lets us removes .__call__ from the method signature for 584 callable accessors like Series.plot 585 """ 586 objtype = 'accessorcallable' 587 directivetype = 'method' 588 589 # lower than MethodDocumenter; otherwise the doc build prints warnings 590 priority = 0.5 591 592 def format_name(self): 593 return MethodDocumenter.format_name(self).rstrip('.__call__') 594 595 596 class PandasAutosummary(Autosummary): 597 """ 598 This alternative autosummary class lets us override the table summary for 599 Series.plot and DataFrame.plot in the API docs. 
600 """ 601 def _replace_pandas_items(self, display_name, sig, summary, real_name): 602 # this a hack: ideally we should extract the signature from the 603 # .__call__ method instead of hard coding this 604 if display_name == 'DataFrame.plot': 605 sig = '([x, y, kind, ax, ....])' 606 summary = 'DataFrame plotting accessor and method' 607 elif display_name == 'Series.plot': 608 sig = '([kind, ax, figsize, ....])' 609 summary = 'Series plotting accessor and method' 610 return (display_name, sig, summary, real_name) 611 612 @staticmethod 613 def _is_deprecated(real_name): 614 try: 615 obj, parent, modname = _import_by_name(real_name) 616 except ImportError: 617 return False 618 doc = NumpyDocString(obj.__doc__ or '') 619 summary = ''.join(doc['Summary'] + doc['Extended Summary']) 620 return '.. deprecated::' in summary 621 622 def _add_deprecation_prefixes(self, items): 623 for item in items: 624 display_name, sig, summary, real_name = item 625 if self._is_deprecated(real_name): 626 summary = '(DEPRECATED) %s' % summary 627 yield display_name, sig, summary, real_name 628 629 def get_items(self, names): 630 items = Autosummary.get_items(self, names) 631 items = [self._replace_pandas_items(*item) for item in items] 632 items = list(self._add_deprecation_prefixes(items)) 633 return items 634 635 636 # based on numpy doc/source/conf.py 637 def linkcode_resolve(domain, info): 638 """ 639 Determine the URL corresponding to Python object 640 """ 641 if domain != 'py': 642 return None 643 644 modname = info['module'] 645 fullname = info['fullname'] 646 647 submod = sys.modules.get(modname) 648 if submod is None: 649 return None 650 651 obj = submod 652 for part in fullname.split('.'): 653 try: 654 obj = getattr(obj, part) 655 except AttributeError: 656 return None 657 658 try: 659 # inspect.unwrap() was added in Python version 3.4 660 if sys.version_info >= (3, 5): 661 fn = inspect.getsourcefile(inspect.unwrap(obj)) 662 else: 663 fn = inspect.getsourcefile(obj) 664 except TypeError: 665 fn = None 666 if not fn: 667 return None 668 669 try: 670 source, lineno = inspect.getsourcelines(obj) 671 except OSError: 672 lineno = None 673 674 if lineno: 675 linespec = "#L{:d}-L{:d}".format(lineno, lineno + len(source) - 1) 676 else: 677 linespec = "" 678 679 fn = os.path.relpath(fn, start=os.path.dirname(pandas.__file__)) 680 681 if '+' in pandas.__version__: 682 return ("http://github.com/pandas-dev/pandas/blob/master/pandas/" 683 "{}{}".format(fn, linespec)) 684 else: 685 return ("http://github.com/pandas-dev/pandas/blob/" 686 "v{}/pandas/{}{}".format(pandas.__version__, fn, linespec)) 687 688 689 # remove the docstring of the flags attribute (inherited from numpy ndarray) 690 # because these give doc build errors (see GH issue 5331) 691 def remove_flags_docstring(app, what, name, obj, options, lines): 692 if what == "attribute" and name.endswith(".flags"): 693 del lines[:] 694 695 696 def process_class_docstrings(app, what, name, obj, options, lines): 697 """ 698 For those classes for which we use :: 699 700 :template: autosummary/class_without_autosummary.rst 701 702 the documented attributes/methods have to be listed in the class 703 docstring. However, if one of those lists is empty, we use 'None', 704 which then generates warnings in sphinx / ugly html output. 705 This "autodoc-process-docstring" event connector removes that part 706 from the processed docstring. 707 708 """ 709 if what == "class": 710 joined = '\n'.join(lines) 711 712 templates = [ 713 """.. rubric:: Attributes 714 715 .. 
autosummary:: 716 :toctree: 717 718 None 719 """, 720 """.. rubric:: Methods 721 722 .. autosummary:: 723 :toctree: 724 725 None 726 """ 727 ] 728 729 for template in templates: 730 if template in joined: 731 joined = joined.replace(template, '') 732 lines[:] = joined.split('\n') 733 734 735 suppress_warnings = [ 736 # We "overwrite" autosummary with our PandasAutosummary, but 737 # still want the regular autosummary setup to run. So we just 738 # suppress this warning. 739 'app.add_directive' 740 ] 741 if pattern: 742 # When building a single document we don't want to warn because references 743 # to other documents are unknown, as it's expected 744 suppress_warnings.append('ref.ref') 745 746 747 def rstjinja(app, docname, source): 748 """ 749 Render our pages as a jinja template for fancy templating goodness. 750 """ 751 # http://ericholscher.com/blog/2016/jul/25/integrating-jinja-rst-sphinx/ 752 # Make sure we're outputting HTML 753 if app.builder.format != 'html': 754 return 755 src = source[0] 756 rendered = app.builder.templates.render_string( 757 src, app.config.html_context 758 ) 759 source[0] = rendered 760 761 762 def setup(app): 763 app.connect("source-read", rstjinja) 764 app.connect("autodoc-process-docstring", remove_flags_docstring) 765 app.connect("autodoc-process-docstring", process_class_docstrings) 766 app.add_autodocumenter(AccessorDocumenter) 767 app.add_autodocumenter(AccessorAttributeDocumenter) 768 app.add_autodocumenter(AccessorMethodDocumenter) 769 app.add_autodocumenter(AccessorCallableDocumenter) 770 app.add_directive('autosummary', PandasAutosummary) 771 [end of doc/source/conf.py] [start of pandas/util/_print_versions.py] 1 import codecs 2 import importlib 3 import locale 4 import os 5 import platform 6 import struct 7 import subprocess 8 import sys 9 10 11 def get_sys_info(): 12 "Returns system information as a dict" 13 14 blob = [] 15 16 # get full commit hash 17 commit = None 18 if os.path.isdir(".git") and os.path.isdir("pandas"): 19 try: 20 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "), 21 stdout=subprocess.PIPE, 22 stderr=subprocess.PIPE) 23 so, serr = pipe.communicate() 24 except (OSError, ValueError): 25 pass 26 else: 27 if pipe.returncode == 0: 28 commit = so 29 try: 30 commit = so.decode('utf-8') 31 except ValueError: 32 pass 33 commit = commit.strip().strip('"') 34 35 blob.append(('commit', commit)) 36 37 try: 38 (sysname, nodename, release, 39 version, machine, processor) = platform.uname() 40 blob.extend([ 41 ("python", '.'.join(map(str, sys.version_info))), 42 ("python-bits", struct.calcsize("P") * 8), 43 ("OS", "{sysname}".format(sysname=sysname)), 44 ("OS-release", "{release}".format(release=release)), 45 # ("Version", "{version}".format(version=version)), 46 ("machine", "{machine}".format(machine=machine)), 47 ("processor", "{processor}".format(processor=processor)), 48 ("byteorder", "{byteorder}".format(byteorder=sys.byteorder)), 49 ("LC_ALL", "{lc}".format(lc=os.environ.get('LC_ALL', "None"))), 50 ("LANG", "{lang}".format(lang=os.environ.get('LANG', "None"))), 51 ("LOCALE", '.'.join(map(str, locale.getlocale()))), 52 ]) 53 except (KeyError, ValueError): 54 pass 55 56 return blob 57 58 59 def show_versions(as_json=False): 60 sys_info = get_sys_info() 61 62 deps = [ 63 # (MODULE_NAME, f(mod) -> mod version) 64 ("pandas", lambda mod: mod.__version__), 65 ("pytest", lambda mod: mod.__version__), 66 ("pip", lambda mod: mod.__version__), 67 ("setuptools", lambda mod: mod.__version__), 68 ("Cython", lambda mod: mod.__version__), 
69 ("numpy", lambda mod: mod.version.version), 70 ("scipy", lambda mod: mod.version.version), 71 ("pyarrow", lambda mod: mod.__version__), 72 ("xarray", lambda mod: mod.__version__), 73 ("IPython", lambda mod: mod.__version__), 74 ("sphinx", lambda mod: mod.__version__), 75 ("patsy", lambda mod: mod.__version__), 76 ("dateutil", lambda mod: mod.__version__), 77 ("pytz", lambda mod: mod.VERSION), 78 ("blosc", lambda mod: mod.__version__), 79 ("bottleneck", lambda mod: mod.__version__), 80 ("tables", lambda mod: mod.__version__), 81 ("numexpr", lambda mod: mod.__version__), 82 ("feather", lambda mod: mod.__version__), 83 ("matplotlib", lambda mod: mod.__version__), 84 ("openpyxl", lambda mod: mod.__version__), 85 ("xlrd", lambda mod: mod.__VERSION__), 86 ("xlwt", lambda mod: mod.__VERSION__), 87 ("xlsxwriter", lambda mod: mod.__version__), 88 ("lxml.etree", lambda mod: mod.__version__), 89 ("bs4", lambda mod: mod.__version__), 90 ("html5lib", lambda mod: mod.__version__), 91 ("sqlalchemy", lambda mod: mod.__version__), 92 ("pymysql", lambda mod: mod.__version__), 93 ("psycopg2", lambda mod: mod.__version__), 94 ("jinja2", lambda mod: mod.__version__), 95 ("s3fs", lambda mod: mod.__version__), 96 ("fastparquet", lambda mod: mod.__version__), 97 ("pandas_gbq", lambda mod: mod.__version__), 98 ("pandas_datareader", lambda mod: mod.__version__), 99 ("gcsfs", lambda mod: mod.__version__), 100 ] 101 102 deps_blob = list() 103 for (modname, ver_f) in deps: 104 try: 105 if modname in sys.modules: 106 mod = sys.modules[modname] 107 else: 108 mod = importlib.import_module(modname) 109 ver = ver_f(mod) 110 deps_blob.append((modname, ver)) 111 except ImportError: 112 deps_blob.append((modname, None)) 113 114 if (as_json): 115 try: 116 import json 117 except ImportError: 118 import simplejson as json 119 120 j = dict(system=dict(sys_info), dependencies=dict(deps_blob)) 121 122 if as_json is True: 123 print(j) 124 else: 125 with codecs.open(as_json, "wb", encoding='utf8') as f: 126 json.dump(j, f, indent=2) 127 128 else: 129 130 print("\nINSTALLED VERSIONS") 131 print("------------------") 132 133 for k, stat in sys_info: 134 print("{k}: {stat}".format(k=k, stat=stat)) 135 136 print("") 137 for k, stat in deps_blob: 138 print("{k}: {stat}".format(k=k, stat=stat)) 139 140 141 def main(): 142 from optparse import OptionParser 143 parser = OptionParser() 144 parser.add_option("-j", "--json", metavar="FILE", nargs=1, 145 help="Save output as JSON into file, pass in " 146 "'-' to output to stdout") 147 148 (options, args) = parser.parse_args() 149 150 if options.json == "-": 151 options.json = True 152 153 show_versions(as_json=options.json) 154 155 return 0 156 157 158 if __name__ == "__main__": 159 sys.exit(main()) 160 [end of pandas/util/_print_versions.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
a5d251de3af3cf07dfec39baa343633a9989c1d5
Add the possibility of callable to ValueError message raised by corr #### Code Sample, a copy-pastable example if possible ```python import pandas as pd from scipy.stats import pearsonr df = pd.DataFrame({'A': [1,2,3], 'B': [2,5,6]}) df.corr(method='a') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-19-be8a09a85708> in <module> 2 3 df = pd.DataFrame({'A': [1,2,3], 'B': [2,5,6]}) ----> 4 df.corr(method='a') ~/miniconda3/envs/spols190117/lib/python3.6/site-packages/pandas/core/frame.py in corr(self, method, min_periods) 7034 raise ValueError("method must be either 'pearson', " 7035 "'spearman', or 'kendall', '{method}' " -> 7036 "was supplied".format(method=method)) 7037 7038 return self._constructor(correl, index=idx, columns=cols) ValueError: method must be either 'pearson', 'spearman', or 'kendall', 'a' was supplied ``` #### Problem description The `ValueError` in the example above does not mention that a `callable` could be supplied to `corr` as well. I would suggest to add this to the error message. #### Expected Output ```python ValueError: method must be either 'pearson', 'spearman', 'kendall' or callable, 'a' was supplied ``` #### Output of ``pd.show_versions()`` <details> ```python INSTALLED VERSIONS ------------------ commit: None python: 3.6.8.final.0 python-bits: 64 OS: Linux OS-release: 4.4.165-81-default machine: x86_64 processor: x86_64 byteorder: little LC_ALL: en_US.UTF-8 LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 pandas: 0.24.2 pytest: 4.1.1 pip: 18.1 setuptools: 40.6.3 Cython: 0.29.3 numpy: 1.15.4 scipy: 1.2.0 pyarrow: None xarray: None IPython: 7.2.0 sphinx: 1.8.3 patsy: 0.5.1 dateutil: 2.7.5 pytz: 2018.9 blosc: None bottleneck: None tables: 3.4.4 numexpr: 2.6.9 feather: None matplotlib: 3.0.2 openpyxl: 2.4.0-b1 xlrd: 1.2.0 xlwt: None xlsxwriter: None lxml.etree: 4.3.0 bs4: 4.7.1 html5lib: 1.0.1 sqlalchemy: None pymysql: None psycopg2: None jinja2: 2.10 s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None gcsfs: None ``` </details>
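The wording change is motivated by the fact that `corr` already accepts a callable as `method`. The snippet below is a hedged illustration of that existing behaviour (the lambda-based correlation function is an assumption chosen for demonstration, not taken from the report); it shows why the error message should advertise the callable option:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [2, 5, 6]})

# A callable taking two 1-D arrays and returning a scalar is a valid
# `method`, so the error raised for an unknown string should mention it.
df.corr(method=lambda x, y: np.corrcoef(x, y)[0, 1])
```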
Slight adjustment to your expected error message: I would phrase it as "a callable", since the function/type `callable` itself isn't a valid argument.
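A minimal sketch of the check being discussed, written as a standalone helper so it stays independent of pandas internals (the helper and constant names are made up; the message text mirrors the wording proposed above):

```python
VALID_CORR_METHODS = ('pearson', 'spearman', 'kendall')


def validate_corr_method(method):
    """Return `method` if corr can use it, else raise the suggested error."""
    if method in VALID_CORR_METHODS or callable(method):
        return method
    raise ValueError(
        "method must be either 'pearson', 'spearman', 'kendall', or a "
        "callable, '{method}' was supplied".format(method=method)
    )
```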
2019-03-14T13:39:55Z
<patch> diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst --- a/doc/source/whatsnew/v0.25.0.rst +++ b/doc/source/whatsnew/v0.25.0.rst @@ -124,7 +124,7 @@ Bug Fixes ~~~~~~~~~ - Bug in :func:`to_datetime` which would raise an (incorrect) ``ValueError`` when called with a date far into the future and the ``format`` argument specified instead of raising ``OutOfBoundsDatetime`` (:issue:`23830`) - Bug in an error message in :meth:`DataFrame.plot`. Improved the error message if non-numerics are passed to :meth:`DataFrame.plot` (:issue:`25481`) -- +- Bug in error messages in :meth:`DataFrame.corr` and :meth:`Series.corr`. Added the possibility of using a callable. (:issue:`25729`) Categorical ^^^^^^^^^^^ diff --git a/pandas/core/frame.py b/pandas/core/frame.py --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -7088,8 +7088,8 @@ def corr(self, method='pearson', min_periods=1): correl[j, i] = c else: raise ValueError("method must be either 'pearson', " - "'spearman', or 'kendall', '{method}' " - "was supplied".format(method=method)) + "'spearman', 'kendall', or a callable, " + "'{method}' was supplied".format(method=method)) return self._constructor(correl, index=idx, columns=cols) diff --git a/pandas/core/series.py b/pandas/core/series.py --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -2159,8 +2159,8 @@ def corr(self, other, method='pearson', min_periods=None): min_periods=min_periods) raise ValueError("method must be either 'pearson', " - "'spearman', or 'kendall', '{method}' " - "was supplied".format(method=method)) + "'spearman', 'kendall', or a callable, " + "'{method}' was supplied".format(method=method)) def cov(self, other, min_periods=None): """ </patch>
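A hedged sketch of how the new message could be exercised from the outside; it assumes `pytest` is available and a pandas build containing the patch above. The test name and placement are illustrative, not the test actually added to the pandas suite:

```python
import pandas as pd
import pytest


def test_corr_invalid_method_mentions_callable():
    df = pd.DataFrame({'A': [1, 2, 3], 'B': [2, 5, 6]})
    msg = "'spearman', 'kendall', or a callable"
    # DataFrame.corr and Series.corr should both raise the updated message.
    with pytest.raises(ValueError, match=msg):
        df.corr(method='a')
    with pytest.raises(ValueError, match=msg):
        df['A'].corr(df['B'], method='a')
```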
[]
[]
conda__conda-12378
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Document the opposite of `conda create` ### Checklist - [X] I added a descriptive title - [X] I searched open requests and couldn't find a duplicate ### What is the idea? A new conda environment is created with `conda create`. I'm not sure what the correct method is to remove a conda environment. ### Why is this needed? Some Internet searches show that the correct way to do it is `conda env remove -n [environment]`, however `conda --help` lists the `env` command as legacy. Furthermore, this command is listed as being "from other packages", which would indicate that it's not part of conda itself. ### What should happen? There should be a command to remove an environment, or at least the current command should be documented and discoverable. ### Additional Context There was discussion on this in #723 but this didn't seem to have a satisfying resolution. </issue> <code> [start of README.md] 1 [conda-logo]: https://s3.amazonaws.com/conda-dev/conda_logo.svg 2 [tests-badge]: https://github.com/conda/conda/actions/workflows/tests.yml/badge.svg 3 [images-badge]: https://github.com/conda/conda/actions/workflows/images.yml/badge.svg 4 [codecov-badge]: https://img.shields.io/codecov/c/github/conda/conda/main.svg?label=coverage 5 [release-badge]: https://img.shields.io/github/release/conda/conda.svg 6 [gitpod]: https://gitpod.io/button/open-in-gitpod.svg 7 8 [![Conda Logo][conda-logo]](https://github.com/conda/conda) 9 10 [![Tests (GitHub Actions)][tests-badge]](https://github.com/conda/conda/actions/workflows/tests.yml) 11 [![Images (GitHub Actions)][images-badge]](https://github.com/conda/conda/actions/workflows/images.yml) 12 [![Codecov Status][codecov-badge]](https://codecov.io/gh/conda/conda/branch/main) 13 [![latest release version][release-badge]](https://github.com/conda/conda/releases) 14 15 Conda is a cross-platform, language-agnostic binary package manager. It is the 16 package manager used by [Anaconda](https://www.anaconda.com/distribution/) installations, but it may be 17 used for other systems as well. Conda makes environments first-class 18 citizens, making it easy to create independent environments even for C 19 libraries. Conda is written entirely in Python, and is BSD licensed open 20 source. 21 22 Conda is enhanced by organizations, tools, and repositories created and managed by 23 the amazing members of the conda community. Some of them can be found 24 [here](https://github.com/conda/conda/wiki/Conda-Community). 25 26 27 ## Installation 28 29 Conda is a part of the [Anaconda Distribution](https://repo.anaconda.com). 30 Use [Miniconda](https://docs.conda.io/en/latest/miniconda.html) to bootstrap a minimal installation 31 that only includes conda and its dependencies. 32 33 34 ## Getting Started 35 36 If you install the Anaconda Distribution, you will already have hundreds of packages 37 installed. You can see what packages are installed by running 38 39 ```bash 40 $ conda list 41 ``` 42 43 to see all the packages that are available, use 44 45 ```bash 46 $ conda search 47 ``` 48 49 and to install a package, use 50 51 ```bash 52 $ conda install <package-name> 53 ``` 54 55 The real power of conda comes from its ability to manage environments. 56 In conda, an environment can be thought of as a completely separate installation. 
57 Conda installs packages into environments efficiently using [hard links](https://en.wikipedia.org/wiki/Hard_link) by default when it is possible, so 58 environments are space efficient, and take seconds to create. 59 60 The default environment, which `conda` itself is installed into is called 61 `base`. To create another environment, use the `conda create` 62 command. For instance, to create an environment with the IPython notebook and 63 NumPy 1.6, which is older than the version that comes with Anaconda by 64 default, you would run: 65 66 ```bash 67 $ conda create -n numpy16 ipython-notebook numpy=1.6 68 ``` 69 70 This creates an environment called `numpy16` with the latest version of 71 the IPython notebook, NumPy 1.6, and their dependencies. 72 73 We can now activate this environment, use 74 75 ```bash 76 $ conda activate numpy16 77 ``` 78 79 This puts the bin directory of the `numpy16` environment in the front of the 80 `PATH`, and sets it as the default environment for all subsequent conda commands. 81 82 To go back to the base environment, use 83 84 ```bash 85 $ conda deactivate 86 ``` 87 88 ## Building Your Own Packages 89 90 You can easily build your own packages for conda, and upload them 91 to [anaconda.org](https://anaconda.org), a free service for hosting 92 packages for conda, as well as other package managers. 93 To build a package, create a recipe. Package building documentation is available 94 [here](https://docs.conda.io/projects/conda-build/en/latest/). 95 See [AnacondaRecipes](https://github.com/AnacondaRecipes) for the recipes that make up the Anaconda Distribution and `defaults` channel. 96 [Conda-forge](https://conda-forge.org/feedstocks/) and [Bioconda](https://github.com/bioconda/bioconda-recipes) are community-driven conda-based distributions. 97 98 To upload to anaconda.org, create an account. Then, install the 99 anaconda-client and login 100 101 ```bash 102 $ conda install anaconda-client 103 $ anaconda login 104 ``` 105 106 Then, after you build your recipe 107 108 ```bash 109 $ conda build <recipe-dir> 110 ``` 111 112 you will be prompted to upload to anaconda.org. 113 114 To add your anaconda.org channel, or other's channels, to conda so 115 that `conda install` will find and install their packages, run 116 117 ```bash 118 $ conda config --add channels https://conda.anaconda.org/username 119 ``` 120 121 (replacing `username` with the username of the person whose channel you want 122 to add). 123 124 ## Getting Help 125 126 - [Documentation](https://docs.conda.io/projects/conda/en/latest) 127 - [Twitter](https://twitter.com/condaproject) 128 - [Slack](https://conda.slack.com) 129 - [Bug Reports/Feature Requests](https://github.com/conda/conda/issues) 130 - [Installer/Package Issues](https://github.com/ContinuumIO/anaconda-issues/issues) 131 - [Discourse](https://conda.discourse.group/) 132 133 ## Contributing 134 135 [![open in gitpod for one-click development][gitpod]](https://gitpod.io/#https://github.com/conda/conda) 136 137 Contributions to conda are welcome. See the [contributing](CONTRIBUTING.md) documentation 138 for instructions on setting up a development environment. 139 [end of README.md] [start of conda/_vendor/distro.py] 1 # Copyright 2015,2016 Nir Cohen 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 
5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 """ 16 The ``distro`` package (``distro`` stands for Linux Distribution) provides 17 information about the Linux distribution it runs on, such as a reliable 18 machine-readable distro ID, or version information. 19 20 It is a renewed alternative implementation for Python's original 21 :py:func:`platform.linux_distribution` function, but it provides much more 22 functionality. An alternative implementation became necessary because Python 23 3.5 deprecated this function, and Python 3.7 is expected to remove it 24 altogether. Its predecessor function :py:func:`platform.dist` was already 25 deprecated since Python 2.6 and is also expected to be removed in Python 3.7. 26 Still, there are many cases in which access to Linux distribution information 27 is needed. See `Python issue 1322 <https://bugs.python.org/issue1322>`_ for 28 more information. 29 """ 30 31 import os 32 import re 33 import sys 34 import json 35 import shlex 36 import logging 37 import argparse 38 import subprocess 39 40 41 if not sys.platform.startswith('linux'): 42 raise ImportError('Unsupported platform: {0}'.format(sys.platform)) 43 44 _UNIXCONFDIR = os.environ.get('UNIXCONFDIR', '/etc') 45 _OS_RELEASE_BASENAME = 'os-release' 46 47 #: Translation table for normalizing the "ID" attribute defined in os-release 48 #: files, for use by the :func:`distro.id` method. 49 #: 50 #: * Key: Value as defined in the os-release file, translated to lower case, 51 #: with blanks translated to underscores. 52 #: 53 #: * Value: Normalized value. 54 NORMALIZED_OS_ID = {} 55 56 #: Translation table for normalizing the "Distributor ID" attribute returned by 57 #: the lsb_release command, for use by the :func:`distro.id` method. 58 #: 59 #: * Key: Value as returned by the lsb_release command, translated to lower 60 #: case, with blanks translated to underscores. 61 #: 62 #: * Value: Normalized value. 63 NORMALIZED_LSB_ID = { 64 'enterpriseenterprise': 'oracle', # Oracle Enterprise Linux 65 'redhatenterpriseworkstation': 'rhel', # RHEL 6, 7 Workstation 66 'redhatenterpriseserver': 'rhel', # RHEL 6, 7 Server 67 } 68 69 #: Translation table for normalizing the distro ID derived from the file name 70 #: of distro release files, for use by the :func:`distro.id` method. 71 #: 72 #: * Key: Value as derived from the file name of a distro release file, 73 #: translated to lower case, with blanks translated to underscores. 74 #: 75 #: * Value: Normalized value. 76 NORMALIZED_DISTRO_ID = { 77 'redhat': 'rhel', # RHEL 6.x, 7.x 78 } 79 80 # Pattern for content of distro release file (reversed) 81 _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile( 82 r'(?:[^)]*\)(.*)\()? 
*(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)') 83 84 # Pattern for base file name of distro release file 85 _DISTRO_RELEASE_BASENAME_PATTERN = re.compile( 86 r'(\w+)[-_](release|version)$') 87 88 # Base file names to be ignored when searching for distro release file 89 _DISTRO_RELEASE_IGNORE_BASENAMES = ( 90 'debian_version', 91 'lsb-release', 92 'oem-release', 93 _OS_RELEASE_BASENAME, 94 'system-release' 95 ) 96 97 98 def linux_distribution(full_distribution_name=True): 99 """ 100 Return information about the current Linux distribution as a tuple 101 ``(id_name, version, codename)`` with items as follows: 102 103 * ``id_name``: If *full_distribution_name* is false, the result of 104 :func:`distro.id`. Otherwise, the result of :func:`distro.name`. 105 106 * ``version``: The result of :func:`distro.version`. 107 108 * ``codename``: The result of :func:`distro.codename`. 109 110 The interface of this function is compatible with the original 111 :py:func:`platform.linux_distribution` function, supporting a subset of 112 its parameters. 113 114 The data it returns may not exactly be the same, because it uses more data 115 sources than the original function, and that may lead to different data if 116 the Linux distribution is not consistent across multiple data sources it 117 provides (there are indeed such distributions ...). 118 119 Another reason for differences is the fact that the :func:`distro.id` 120 method normalizes the distro ID string to a reliable machine-readable value 121 for a number of popular Linux distributions. 122 """ 123 return _distro.linux_distribution(full_distribution_name) 124 125 126 def id(): 127 """ 128 Return the distro ID of the current Linux distribution, as a 129 machine-readable string. 130 131 For a number of Linux distributions, the returned distro ID value is 132 *reliable*, in the sense that it is documented and that it does not change 133 across releases of the distribution. 134 135 This package maintains the following reliable distro ID values: 136 137 ============== ========================================= 138 Distro ID Distribution 139 ============== ========================================= 140 "ubuntu" Ubuntu 141 "debian" Debian 142 "rhel" RedHat Enterprise Linux 143 "centos" CentOS 144 "fedora" Fedora 145 "sles" SUSE Linux Enterprise Server 146 "opensuse" openSUSE 147 "amazon" Amazon Linux 148 "arch" Arch Linux 149 "cloudlinux" CloudLinux OS 150 "exherbo" Exherbo Linux 151 "gentoo" GenToo Linux 152 "ibm_powerkvm" IBM PowerKVM 153 "kvmibm" KVM for IBM z Systems 154 "linuxmint" Linux Mint 155 "mageia" Mageia 156 "mandriva" Mandriva Linux 157 "parallels" Parallels 158 "pidora" Pidora 159 "raspbian" Raspbian 160 "oracle" Oracle Linux (and Oracle Enterprise Linux) 161 "scientific" Scientific Linux 162 "slackware" Slackware 163 "xenserver" XenServer 164 ============== ========================================= 165 166 If you have a need to get distros for reliable IDs added into this set, 167 or if you find that the :func:`distro.id` function returns a different 168 distro ID for one of the listed distros, please create an issue in the 169 `distro issue tracker`_. 170 171 **Lookup hierarchy and transformations:** 172 173 First, the ID is obtained from the following sources, in the specified 174 order. 
The first available and non-empty value is used: 175 176 * the value of the "ID" attribute of the os-release file, 177 178 * the value of the "Distributor ID" attribute returned by the lsb_release 179 command, 180 181 * the first part of the file name of the distro release file, 182 183 The so determined ID value then passes the following transformations, 184 before it is returned by this method: 185 186 * it is translated to lower case, 187 188 * blanks (which should not be there anyway) are translated to underscores, 189 190 * a normalization of the ID is performed, based upon 191 `normalization tables`_. The purpose of this normalization is to ensure 192 that the ID is as reliable as possible, even across incompatible changes 193 in the Linux distributions. A common reason for an incompatible change is 194 the addition of an os-release file, or the addition of the lsb_release 195 command, with ID values that differ from what was previously determined 196 from the distro release file name. 197 """ 198 return _distro.id() 199 200 201 def name(pretty=False): 202 """ 203 Return the name of the current Linux distribution, as a human-readable 204 string. 205 206 If *pretty* is false, the name is returned without version or codename. 207 (e.g. "CentOS Linux") 208 209 If *pretty* is true, the version and codename are appended. 210 (e.g. "CentOS Linux 7.1.1503 (Core)") 211 212 **Lookup hierarchy:** 213 214 The name is obtained from the following sources, in the specified order. 215 The first available and non-empty value is used: 216 217 * If *pretty* is false: 218 219 - the value of the "NAME" attribute of the os-release file, 220 221 - the value of the "Distributor ID" attribute returned by the lsb_release 222 command, 223 224 - the value of the "<name>" field of the distro release file. 225 226 * If *pretty* is true: 227 228 - the value of the "PRETTY_NAME" attribute of the os-release file, 229 230 - the value of the "Description" attribute returned by the lsb_release 231 command, 232 233 - the value of the "<name>" field of the distro release file, appended 234 with the value of the pretty version ("<version_id>" and "<codename>" 235 fields) of the distro release file, if available. 236 """ 237 return _distro.name(pretty) 238 239 240 def version(pretty=False, best=False): 241 """ 242 Return the version of the current Linux distribution, as a human-readable 243 string. 244 245 If *pretty* is false, the version is returned without codename (e.g. 246 "7.0"). 247 248 If *pretty* is true, the codename in parenthesis is appended, if the 249 codename is non-empty (e.g. "7.0 (Maipo)"). 250 251 Some distributions provide version numbers with different precisions in 252 the different sources of distribution information. Examining the different 253 sources in a fixed priority order does not always yield the most precise 254 version (e.g. for Debian 8.2, or CentOS 7.1). 255 256 The *best* parameter can be used to control the approach for the returned 257 version: 258 259 If *best* is false, the first non-empty version number in priority order of 260 the examined sources is returned. 261 262 If *best* is true, the most precise version number out of all examined 263 sources is returned. 264 265 **Lookup hierarchy:** 266 267 In all cases, the version number is obtained from the following sources. 
268 If *best* is false, this order represents the priority order: 269 270 * the value of the "VERSION_ID" attribute of the os-release file, 271 * the value of the "Release" attribute returned by the lsb_release 272 command, 273 * the version number parsed from the "<version_id>" field of the first line 274 of the distro release file, 275 * the version number parsed from the "PRETTY_NAME" attribute of the 276 os-release file, if it follows the format of the distro release files. 277 * the version number parsed from the "Description" attribute returned by 278 the lsb_release command, if it follows the format of the distro release 279 files. 280 """ 281 return _distro.version(pretty, best) 282 283 284 def version_parts(best=False): 285 """ 286 Return the version of the current Linux distribution as a tuple 287 ``(major, minor, build_number)`` with items as follows: 288 289 * ``major``: The result of :func:`distro.major_version`. 290 291 * ``minor``: The result of :func:`distro.minor_version`. 292 293 * ``build_number``: The result of :func:`distro.build_number`. 294 295 For a description of the *best* parameter, see the :func:`distro.version` 296 method. 297 """ 298 return _distro.version_parts(best) 299 300 301 def major_version(best=False): 302 """ 303 Return the major version of the current Linux distribution, as a string, 304 if provided. 305 Otherwise, the empty string is returned. The major version is the first 306 part of the dot-separated version string. 307 308 For a description of the *best* parameter, see the :func:`distro.version` 309 method. 310 """ 311 return _distro.major_version(best) 312 313 314 def minor_version(best=False): 315 """ 316 Return the minor version of the current Linux distribution, as a string, 317 if provided. 318 Otherwise, the empty string is returned. The minor version is the second 319 part of the dot-separated version string. 320 321 For a description of the *best* parameter, see the :func:`distro.version` 322 method. 323 """ 324 return _distro.minor_version(best) 325 326 327 def build_number(best=False): 328 """ 329 Return the build number of the current Linux distribution, as a string, 330 if provided. 331 Otherwise, the empty string is returned. The build number is the third part 332 of the dot-separated version string. 333 334 For a description of the *best* parameter, see the :func:`distro.version` 335 method. 336 """ 337 return _distro.build_number(best) 338 339 340 def like(): 341 """ 342 Return a space-separated list of distro IDs of distributions that are 343 closely related to the current Linux distribution in regards to packaging 344 and programming interfaces, for example distributions the current 345 distribution is a derivative from. 346 347 **Lookup hierarchy:** 348 349 This information item is only provided by the os-release file. 350 For details, see the description of the "ID_LIKE" attribute in the 351 `os-release man page 352 <http://www.freedesktop.org/software/systemd/man/os-release.html>`_. 353 """ 354 return _distro.like() 355 356 357 def codename(): 358 """ 359 Return the codename for the release of the current Linux distribution, 360 as a string. 361 362 If the distribution does not have a codename, an empty string is returned. 363 364 Note that the returned codename is not always really a codename. For 365 example, openSUSE returns "x86_64". This function does not handle such 366 cases in any special way and just returns the string it finds, if any. 
367 368 **Lookup hierarchy:** 369 370 * the codename within the "VERSION" attribute of the os-release file, if 371 provided, 372 373 * the value of the "Codename" attribute returned by the lsb_release 374 command, 375 376 * the value of the "<codename>" field of the distro release file. 377 """ 378 return _distro.codename() 379 380 381 def info(pretty=False, best=False): 382 """ 383 Return certain machine-readable information items about the current Linux 384 distribution in a dictionary, as shown in the following example: 385 386 .. sourcecode:: python 387 388 { 389 'id': 'rhel', 390 'version': '7.0', 391 'version_parts': { 392 'major': '7', 393 'minor': '0', 394 'build_number': '' 395 }, 396 'like': 'fedora', 397 'codename': 'Maipo' 398 } 399 400 The dictionary structure and keys are always the same, regardless of which 401 information items are available in the underlying data sources. The values 402 for the various keys are as follows: 403 404 * ``id``: The result of :func:`distro.id`. 405 406 * ``version``: The result of :func:`distro.version`. 407 408 * ``version_parts -> major``: The result of :func:`distro.major_version`. 409 410 * ``version_parts -> minor``: The result of :func:`distro.minor_version`. 411 412 * ``version_parts -> build_number``: The result of 413 :func:`distro.build_number`. 414 415 * ``like``: The result of :func:`distro.like`. 416 417 * ``codename``: The result of :func:`distro.codename`. 418 419 For a description of the *pretty* and *best* parameters, see the 420 :func:`distro.version` method. 421 """ 422 return _distro.info(pretty, best) 423 424 425 def os_release_info(): 426 """ 427 Return a dictionary containing key-value pairs for the information items 428 from the os-release file data source of the current Linux distribution. 429 430 See `os-release file`_ for details about these information items. 431 """ 432 return _distro.os_release_info() 433 434 435 def lsb_release_info(): 436 """ 437 Return a dictionary containing key-value pairs for the information items 438 from the lsb_release command data source of the current Linux distribution. 439 440 See `lsb_release command output`_ for details about these information 441 items. 442 """ 443 return _distro.lsb_release_info() 444 445 446 def distro_release_info(): 447 """ 448 Return a dictionary containing key-value pairs for the information items 449 from the distro release file data source of the current Linux distribution. 450 451 See `distro release file`_ for details about these information items. 452 """ 453 return _distro.distro_release_info() 454 455 456 def os_release_attr(attribute): 457 """ 458 Return a single named information item from the os-release file data source 459 of the current Linux distribution. 460 461 Parameters: 462 463 * ``attribute`` (string): Key of the information item. 464 465 Returns: 466 467 * (string): Value of the information item, if the item exists. 468 The empty string, if the item does not exist. 469 470 See `os-release file`_ for details about these information items. 471 """ 472 return _distro.os_release_attr(attribute) 473 474 475 def lsb_release_attr(attribute): 476 """ 477 Return a single named information item from the lsb_release command output 478 data source of the current Linux distribution. 479 480 Parameters: 481 482 * ``attribute`` (string): Key of the information item. 483 484 Returns: 485 486 * (string): Value of the information item, if the item exists. 487 The empty string, if the item does not exist. 
488 489 See `lsb_release command output`_ for details about these information 490 items. 491 """ 492 return _distro.lsb_release_attr(attribute) 493 494 495 def distro_release_attr(attribute): 496 """ 497 Return a single named information item from the distro release file 498 data source of the current Linux distribution. 499 500 Parameters: 501 502 * ``attribute`` (string): Key of the information item. 503 504 Returns: 505 506 * (string): Value of the information item, if the item exists. 507 The empty string, if the item does not exist. 508 509 See `distro release file`_ for details about these information items. 510 """ 511 return _distro.distro_release_attr(attribute) 512 513 514 class LinuxDistribution(object): 515 """ 516 Provides information about a Linux distribution. 517 518 This package creates a private module-global instance of this class with 519 default initialization arguments, that is used by the 520 `consolidated accessor functions`_ and `single source accessor functions`_. 521 By using default initialization arguments, that module-global instance 522 returns data about the current Linux distribution (i.e. the distro this 523 package runs on). 524 525 Normally, it is not necessary to create additional instances of this class. 526 However, in situations where control is needed over the exact data sources 527 that are used, instances of this class can be created with a specific 528 distro release file, or a specific os-release file, or without invoking the 529 lsb_release command. 530 """ 531 532 def __init__(self, 533 include_lsb=True, 534 os_release_file='', 535 distro_release_file=''): 536 """ 537 The initialization method of this class gathers information from the 538 available data sources, and stores that in private instance attributes. 539 Subsequent access to the information items uses these private instance 540 attributes, so that the data sources are read only once. 541 542 Parameters: 543 544 * ``include_lsb`` (bool): Controls whether the 545 `lsb_release command output`_ is included as a data source. 546 547 If the lsb_release command is not available in the program execution 548 path, the data source for the lsb_release command will be empty. 549 550 * ``os_release_file`` (string): The path name of the 551 `os-release file`_ that is to be used as a data source. 552 553 An empty string (the default) will cause the default path name to 554 be used (see `os-release file`_ for details). 555 556 If the specified or defaulted os-release file does not exist, the 557 data source for the os-release file will be empty. 558 559 * ``distro_release_file`` (string): The path name of the 560 `distro release file`_ that is to be used as a data source. 561 562 An empty string (the default) will cause a default search algorithm 563 to be used (see `distro release file`_ for details). 564 565 If the specified distro release file does not exist, or if no default 566 distro release file can be found, the data source for the distro 567 release file will be empty. 568 569 Public instance attributes: 570 571 * ``os_release_file`` (string): The path name of the 572 `os-release file`_ that is actually used as a data source. The 573 empty string if no distro release file is used as a data source. 574 575 * ``distro_release_file`` (string): The path name of the 576 `distro release file`_ that is actually used as a data source. The 577 empty string if no distro release file is used as a data source. 
578 579 Raises: 580 581 * :py:exc:`IOError`: Some I/O issue with an os-release file or distro 582 release file. 583 584 * :py:exc:`subprocess.CalledProcessError`: The lsb_release command had 585 some issue (other than not being available in the program execution 586 path). 587 588 * :py:exc:`UnicodeError`: A data source has unexpected characters or 589 uses an unexpected encoding. 590 """ 591 self.os_release_file = os_release_file or \ 592 os.path.join(_UNIXCONFDIR, _OS_RELEASE_BASENAME) 593 self.distro_release_file = distro_release_file or '' # updated later 594 self._os_release_info = self._get_os_release_info() 595 self._lsb_release_info = self._get_lsb_release_info() \ 596 if include_lsb else {} 597 self._distro_release_info = self._get_distro_release_info() 598 599 def __repr__(self): 600 """Return repr of all info 601 """ 602 return \ 603 "LinuxDistribution(" \ 604 "os_release_file={0!r}, " \ 605 "distro_release_file={1!r}, " \ 606 "_os_release_info={2!r}, " \ 607 "_lsb_release_info={3!r}, " \ 608 "_distro_release_info={4!r})".format( 609 self.os_release_file, 610 self.distro_release_file, 611 self._os_release_info, 612 self._lsb_release_info, 613 self._distro_release_info) 614 615 def linux_distribution(self, full_distribution_name=True): 616 """ 617 Return information about the Linux distribution that is compatible 618 with Python's :func:`platform.linux_distribution`, supporting a subset 619 of its parameters. 620 621 For details, see :func:`distro.linux_distribution`. 622 """ 623 return ( 624 self.name() if full_distribution_name else self.id(), 625 self.version(), 626 self.codename() 627 ) 628 629 def id(self): 630 """Return the distro ID of the Linux distribution, as a string. 631 632 For details, see :func:`distro.id`. 633 """ 634 def normalize(distro_id, table): 635 distro_id = distro_id.lower().replace(' ', '_') 636 return table.get(distro_id, distro_id) 637 638 distro_id = self.os_release_attr('id') 639 if distro_id: 640 return normalize(distro_id, NORMALIZED_OS_ID) 641 642 distro_id = self.lsb_release_attr('distributor_id') 643 if distro_id: 644 return normalize(distro_id, NORMALIZED_LSB_ID) 645 646 distro_id = self.distro_release_attr('id') 647 if distro_id: 648 return normalize(distro_id, NORMALIZED_DISTRO_ID) 649 650 return '' 651 652 def name(self, pretty=False): 653 """ 654 Return the name of the Linux distribution, as a string. 655 656 For details, see :func:`distro.name`. 657 """ 658 name = self.os_release_attr('name') \ 659 or self.lsb_release_attr('distributor_id') \ 660 or self.distro_release_attr('name') 661 if pretty: 662 name = self.os_release_attr('pretty_name') \ 663 or self.lsb_release_attr('description') 664 if not name: 665 name = self.distro_release_attr('name') 666 version = self.version(pretty=True) 667 if version: 668 name = name + ' ' + version 669 return name or '' 670 671 def version(self, pretty=False, best=False): 672 """ 673 Return the version of the Linux distribution, as a string. 674 675 For details, see :func:`distro.version`. 676 """ 677 versions = [ 678 self.os_release_attr('version_id'), 679 self.lsb_release_attr('release'), 680 self.distro_release_attr('version_id'), 681 self._parse_distro_release_content( 682 self.os_release_attr('pretty_name')).get('version_id', ''), 683 self._parse_distro_release_content( 684 self.lsb_release_attr('description')).get('version_id', '') 685 ] 686 version = '' 687 if best: 688 # This algorithm uses the last version in priority order that has 689 # the best precision. 
If the versions are not in conflict, that 690 # does not matter; otherwise, using the last one instead of the 691 # first one might be considered a surprise. 692 for v in versions: 693 if v.count(".") > version.count(".") or version == '': 694 version = v 695 else: 696 for v in versions: 697 if v != '': 698 version = v 699 break 700 if pretty and version and self.codename(): 701 version = u'{0} ({1})'.format(version, self.codename()) 702 return version 703 704 def version_parts(self, best=False): 705 """ 706 Return the version of the Linux distribution, as a tuple of version 707 numbers. 708 709 For details, see :func:`distro.version_parts`. 710 """ 711 version_str = self.version(best=best) 712 if version_str: 713 version_regex = re.compile(r'(\d+)\.?(\d+)?\.?(\d+)?') 714 matches = version_regex.match(version_str) 715 if matches: 716 major, minor, build_number = matches.groups() 717 return major, minor or '', build_number or '' 718 return '', '', '' 719 720 def major_version(self, best=False): 721 """ 722 Return the major version number of the current distribution. 723 724 For details, see :func:`distro.major_version`. 725 """ 726 return self.version_parts(best)[0] 727 728 def minor_version(self, best=False): 729 """ 730 Return the minor version number of the Linux distribution. 731 732 For details, see :func:`distro.minor_version`. 733 """ 734 return self.version_parts(best)[1] 735 736 def build_number(self, best=False): 737 """ 738 Return the build number of the Linux distribution. 739 740 For details, see :func:`distro.build_number`. 741 """ 742 return self.version_parts(best)[2] 743 744 def like(self): 745 """ 746 Return the IDs of distributions that are like the Linux distribution. 747 748 For details, see :func:`distro.like`. 749 """ 750 return self.os_release_attr('id_like') or '' 751 752 def codename(self): 753 """ 754 Return the codename of the Linux distribution. 755 756 For details, see :func:`distro.codename`. 757 """ 758 return self.os_release_attr('codename') \ 759 or self.lsb_release_attr('codename') \ 760 or self.distro_release_attr('codename') \ 761 or '' 762 763 def info(self, pretty=False, best=False): 764 """ 765 Return certain machine-readable information about the Linux 766 distribution. 767 768 For details, see :func:`distro.info`. 769 """ 770 return dict( 771 id=self.id(), 772 version=self.version(pretty, best), 773 version_parts=dict( 774 major=self.major_version(best), 775 minor=self.minor_version(best), 776 build_number=self.build_number(best) 777 ), 778 like=self.like(), 779 codename=self.codename(), 780 ) 781 782 def os_release_info(self): 783 """ 784 Return a dictionary containing key-value pairs for the information 785 items from the os-release file data source of the Linux distribution. 786 787 For details, see :func:`distro.os_release_info`. 788 """ 789 return self._os_release_info 790 791 def lsb_release_info(self): 792 """ 793 Return a dictionary containing key-value pairs for the information 794 items from the lsb_release command data source of the Linux 795 distribution. 796 797 For details, see :func:`distro.lsb_release_info`. 798 """ 799 return self._lsb_release_info 800 801 def distro_release_info(self): 802 """ 803 Return a dictionary containing key-value pairs for the information 804 items from the distro release file data source of the Linux 805 distribution. 806 807 For details, see :func:`distro.distro_release_info`. 
808 """ 809 return self._distro_release_info 810 811 def os_release_attr(self, attribute): 812 """ 813 Return a single named information item from the os-release file data 814 source of the Linux distribution. 815 816 For details, see :func:`distro.os_release_attr`. 817 """ 818 return self._os_release_info.get(attribute, '') 819 820 def lsb_release_attr(self, attribute): 821 """ 822 Return a single named information item from the lsb_release command 823 output data source of the Linux distribution. 824 825 For details, see :func:`distro.lsb_release_attr`. 826 """ 827 return self._lsb_release_info.get(attribute, '') 828 829 def distro_release_attr(self, attribute): 830 """ 831 Return a single named information item from the distro release file 832 data source of the Linux distribution. 833 834 For details, see :func:`distro.distro_release_attr`. 835 """ 836 return self._distro_release_info.get(attribute, '') 837 838 def _get_os_release_info(self): 839 """ 840 Get the information items from the specified os-release file. 841 842 Returns: 843 A dictionary containing all information items. 844 """ 845 if os.path.isfile(self.os_release_file): 846 with open(self.os_release_file) as release_file: 847 return self._parse_os_release_content(release_file) 848 return {} 849 850 @staticmethod 851 def _parse_os_release_content(lines): 852 """ 853 Parse the lines of an os-release file. 854 855 Parameters: 856 857 * lines: Iterable through the lines in the os-release file. 858 Each line must be a unicode string or a UTF-8 encoded byte 859 string. 860 861 Returns: 862 A dictionary containing all information items. 863 """ 864 props = {} 865 lexer = shlex.shlex(lines, posix=True) 866 lexer.whitespace_split = True 867 868 # The shlex module defines its `wordchars` variable using literals, 869 # making it dependent on the encoding of the Python source file. 870 # In Python 2.6 and 2.7, the shlex source file is encoded in 871 # 'iso-8859-1', and the `wordchars` variable is defined as a byte 872 # string. This causes a UnicodeDecodeError to be raised when the 873 # parsed content is a unicode object. The following fix resolves that 874 # (... but it should be fixed in shlex...): 875 if sys.version_info[0] == 2 and isinstance(lexer.wordchars, bytes): 876 lexer.wordchars = lexer.wordchars.decode('iso-8859-1') 877 878 tokens = list(lexer) 879 for token in tokens: 880 # At this point, all shell-like parsing has been done (i.e. 881 # comments processed, quotes and backslash escape sequences 882 # processed, multi-line values assembled, trailing newlines 883 # stripped, etc.), so the tokens are now either: 884 # * variable assignments: var=value 885 # * commands or their arguments (not allowed in os-release) 886 if '=' in token: 887 k, v = token.split('=', 1) 888 if isinstance(v, bytes): 889 v = v.decode('utf-8') 890 props[k.lower()] = v 891 if k == 'VERSION': 892 # this handles cases in which the codename is in 893 # the `(CODENAME)` (rhel, centos, fedora) format 894 # or in the `, CODENAME` format (Ubuntu). 895 codename = re.search(r'(\(\D+\))|,(\s+)?\D+', v) 896 if codename: 897 codename = codename.group() 898 codename = codename.strip('()') 899 codename = codename.strip(',') 900 codename = codename.strip() 901 # codename appears within paranthese. 
902 props['codename'] = codename 903 else: 904 props['codename'] = '' 905 else: 906 # Ignore any tokens that are not variable assignments 907 pass 908 return props 909 910 def _get_lsb_release_info(self): 911 """ 912 Get the information items from the lsb_release command output. 913 914 Returns: 915 A dictionary containing all information items. 916 """ 917 cmd = 'lsb_release -a' 918 # conda customization: On Ubuntu 17.10, lsb_release calls the 919 # system Python and it will not find our custom sysconfigdata 920 env = os.environ.copy() 921 if '_PYTHON_SYSCONFIGDATA_NAME' in env: 922 del env['_PYTHON_SYSCONFIGDATA_NAME'] 923 process = subprocess.Popen( 924 cmd, 925 shell=True, 926 stdout=subprocess.PIPE, 927 stderr=subprocess.PIPE, 928 env=env) 929 stdout, stderr = process.communicate() 930 stdout, stderr = stdout.decode('utf-8'), stderr.decode('utf-8') 931 code = process.returncode 932 if code == 0: 933 content = stdout.splitlines() 934 return self._parse_lsb_release_content(content) 935 elif code == 127: # Command not found 936 return {} 937 else: 938 if sys.version_info[:2] >= (3, 5): 939 raise subprocess.CalledProcessError(code, cmd, stdout, stderr) 940 elif sys.version_info[:2] >= (2, 7): 941 raise subprocess.CalledProcessError(code, cmd, stdout) 942 elif sys.version_info[:2] == (2, 6): 943 raise subprocess.CalledProcessError(code, cmd) 944 945 @staticmethod 946 def _parse_lsb_release_content(lines): 947 """ 948 Parse the output of the lsb_release command. 949 950 Parameters: 951 952 * lines: Iterable through the lines of the lsb_release output. 953 Each line must be a unicode string or a UTF-8 encoded byte 954 string. 955 956 Returns: 957 A dictionary containing all information items. 958 """ 959 props = {} 960 for line in lines: 961 line = line.decode('utf-8') if isinstance(line, bytes) else line 962 kv = line.strip('\n').split(':', 1) 963 if len(kv) != 2: 964 # Ignore lines without colon. 965 continue 966 k, v = kv 967 props.update({k.replace(' ', '_').lower(): v.strip()}) 968 return props 969 970 def _get_distro_release_info(self): 971 """ 972 Get the information items from the specified distro release file. 973 974 Returns: 975 A dictionary containing all information items. 976 """ 977 if self.distro_release_file: 978 # If it was specified, we use it and parse what we can, even if 979 # its file name or content does not match the expected pattern. 980 distro_info = self._parse_distro_release_file( 981 self.distro_release_file) 982 basename = os.path.basename(self.distro_release_file) 983 # The file name pattern for user-specified distro release files 984 # is somewhat more tolerant (compared to when searching for the 985 # file), because we want to use what was specified as best as 986 # possible. 987 match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) 988 if match: 989 distro_info['id'] = match.group(1) 990 return distro_info 991 else: 992 try: 993 basenames = os.listdir(_UNIXCONFDIR) 994 # We sort for repeatability in cases where there are multiple 995 # distro specific files; e.g. CentOS, Oracle, Enterprise all 996 # containing `redhat-release` on top of their own. 997 basenames.sort() 998 except OSError: 999 # This may occur when /etc is not readable but we can't be 1000 # sure about the *-release files. Check common entries of 1001 # /etc for information. If they turn out to not be there the 1002 # error is handled in `_parse_distro_release_file()`. 
1003 basenames = ['SuSE-release', 1004 'arch-release', 1005 'base-release', 1006 'centos-release', 1007 'fedora-release', 1008 'gentoo-release', 1009 'mageia-release', 1010 'manjaro-release', 1011 'oracle-release', 1012 'redhat-release', 1013 'sl-release', 1014 'slackware-version'] 1015 for basename in basenames: 1016 if basename in _DISTRO_RELEASE_IGNORE_BASENAMES: 1017 continue 1018 match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) 1019 if match: 1020 filepath = os.path.join(_UNIXCONFDIR, basename) 1021 distro_info = self._parse_distro_release_file(filepath) 1022 if 'name' in distro_info: 1023 # The name is always present if the pattern matches 1024 self.distro_release_file = filepath 1025 distro_info['id'] = match.group(1) 1026 return distro_info 1027 return {} 1028 1029 def _parse_distro_release_file(self, filepath): 1030 """ 1031 Parse a distro release file. 1032 1033 Parameters: 1034 1035 * filepath: Path name of the distro release file. 1036 1037 Returns: 1038 A dictionary containing all information items. 1039 """ 1040 try: 1041 with open(filepath) as fp: 1042 # Only parse the first line. For instance, on SLES there 1043 # are multiple lines. We don't want them... 1044 return self._parse_distro_release_content(fp.readline()) 1045 except (OSError, IOError): 1046 # Ignore not being able to read a specific, seemingly version 1047 # related file. 1048 # See https://github.com/nir0s/distro/issues/162 1049 return {} 1050 1051 @staticmethod 1052 def _parse_distro_release_content(line): 1053 """ 1054 Parse a line from a distro release file. 1055 1056 Parameters: 1057 * line: Line from the distro release file. Must be a unicode string 1058 or a UTF-8 encoded byte string. 1059 1060 Returns: 1061 A dictionary containing all information items. 1062 """ 1063 if isinstance(line, bytes): 1064 line = line.decode('utf-8') 1065 matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match( 1066 line.strip()[::-1]) 1067 distro_info = {} 1068 if matches: 1069 # regexp ensures non-None 1070 distro_info['name'] = matches.group(3)[::-1] 1071 if matches.group(2): 1072 distro_info['version_id'] = matches.group(2)[::-1] 1073 if matches.group(1): 1074 distro_info['codename'] = matches.group(1)[::-1] 1075 elif line: 1076 distro_info['name'] = line.strip() 1077 return distro_info 1078 1079 1080 _distro = LinuxDistribution() 1081 1082 1083 def main(): 1084 logger = logging.getLogger(__name__) 1085 logger.setLevel(logging.DEBUG) 1086 logger.addHandler(logging.StreamHandler(sys.stdout)) 1087 1088 parser = argparse.ArgumentParser(description="Linux distro info tool") 1089 parser.add_argument( 1090 '--json', 1091 '-j', 1092 help="Output in machine readable format", 1093 action="store_true") 1094 args = parser.parse_args() 1095 1096 if args.json: 1097 logger.info(json.dumps(info(), indent=4, sort_keys=True)) 1098 else: 1099 logger.info('Name: %s', name(pretty=True)) 1100 distribution_version = version(pretty=True) 1101 logger.info('Version: %s', distribution_version) 1102 distribution_codename = codename() 1103 logger.info('Codename: %s', distribution_codename) 1104 1105 1106 if __name__ == '__main__': 1107 main() 1108 [end of conda/_vendor/distro.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conda/conda
f302133c19fcd33afa967e0359b1aa9542166184
Document the opposite of `conda create`

### Checklist

- [X] I added a descriptive title
- [X] I searched open requests and couldn't find a duplicate

### What is the idea?

A new conda environment is created with `conda create`. I'm not sure what the correct method is to remove a conda environment.

### Why is this needed?

Some Internet searches show that the correct way to do it is `conda env remove -n [environment]`, however `conda --help` lists the `env` command as legacy. Furthermore, this command is listed as being "from other packages", which would indicate that it's not part of conda itself.

### What should happen?

There should be a command to remove an environment, or at least the current command should be documented and discoverable.

### Additional Context

There was discussion on this in #723 but this didn't seem to have a satisfying resolution.
Thank you for opening this issue, @xobs. `conda --help` lists the `remove` command. If you run `conda remove --help`, you will learn that running `conda remove -n <environment> --all` deletes all the packages in the specified environment and thereby the environment itself. This is also mentioned in the [conda documentation](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#removing-an-environment).

However I agree that this is not intuitive and the help text from `conda --help` should mention that `conda remove` can be used to remove environments as well. I will add appropriate labels to this issue and my team and I will take suitable actions.

Meanwhile you can follow this discussion:

- https://github.com/conda/conda/issues/11633

This is where we've been discussing plans for potentially updating the way that CLI interface works.

Hmm... From the help I thought that `conda remove` just removed a list of packages from an environment. Is it the case that an environment is automatically deleted when all packages are removed?

You're right that `conda remove --help` says that `--all` will `Remove all packages, i.e., the entire environment.` -- that removes the entire environment's contents, as well as the enclosing environment itself?

I do see that in the documentation now. Unfortunately, stack overflow, opengenius, and thecodingbot all come up in searches first, all of which say that `conda env` is the way to do it. It'd be nice to add a note to `remove` that it can be used to remove environments as well as packages.
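As a side illustration of the workflow described in the comments above (this snippet is not part of the original issue thread): a minimal Python sketch that drives `conda remove -n <environment> --all` through `subprocess`. The environment name `scratch-env` is a made-up placeholder, and the sketch assumes `conda` is available on `PATH`.

```python
import subprocess


def remove_conda_env(env_name: str) -> None:
    """Delete every package in *env_name* and, with it, the environment itself.

    This simply shells out to the command discussed above:
    ``conda remove -n <environment> --all``.
    """
    subprocess.run(
        ["conda", "remove", "--name", env_name, "--all", "--yes"],
        check=True,  # raise CalledProcessError if conda reports a failure
    )


if __name__ == "__main__":
    remove_conda_env("scratch-env")  # hypothetical environment name
```

The `conda env remove -n scratch-env` form mentioned in the issue reaches the same result; the sketch uses the `conda remove --all` spelling because that is the one the maintainers point to.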
2023-02-21T09:47:16Z
<patch> diff --git a/conda/cli/conda_argparse.py b/conda/cli/conda_argparse.py --- a/conda/cli/conda_argparse.py +++ b/conda/cli/conda_argparse.py @@ -1058,7 +1058,10 @@ def configure_parser_package(sub_parsers): def configure_parser_remove(sub_parsers, aliases): - help_ = "Remove a list of packages from a specified conda environment." + help_ = ( + "Remove a list of packages from a specified conda environment. " + "Use `--all` flag to remove all packages and the environment itself." + ) descr = dals( f""" {help_} @@ -1082,6 +1085,10 @@ def configure_parser_remove(sub_parsers, aliases): conda remove -n myenv scipy curl wheel + Remove all packages from environment `myenv` and the environment itself:: + + conda remove -n myenv --all + """ ) p = sub_parsers.add_parser( </patch>
[]
[]
pypa__pip-11502
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Update vendored packages for 22.3 Toward #11486 </issue> <code> [start of README.rst] 1 pip - The Python Package Installer 2 ================================== 3 4 .. image:: https://img.shields.io/pypi/v/pip.svg 5 :target: https://pypi.org/project/pip/ 6 7 .. image:: https://readthedocs.org/projects/pip/badge/?version=latest 8 :target: https://pip.pypa.io/en/latest 9 10 pip is the `package installer`_ for Python. You can use pip to install packages from the `Python Package Index`_ and other indexes. 11 12 Please take a look at our documentation for how to install and use pip: 13 14 * `Installation`_ 15 * `Usage`_ 16 17 We release updates regularly, with a new version every 3 months. Find more details in our documentation: 18 19 * `Release notes`_ 20 * `Release process`_ 21 22 In pip 20.3, we've `made a big improvement to the heart of pip`_; `learn more`_. We want your input, so `sign up for our user experience research studies`_ to help us do it right. 23 24 **Note**: pip 21.0, in January 2021, removed Python 2 support, per pip's `Python 2 support policy`_. Please migrate to Python 3. 25 26 If you find bugs, need help, or want to talk to the developers, please use our mailing lists or chat rooms: 27 28 * `Issue tracking`_ 29 * `Discourse channel`_ 30 * `User IRC`_ 31 32 If you want to get involved head over to GitHub to get the source code, look at our development documentation and feel free to jump on the developer mailing lists and chat rooms: 33 34 * `GitHub page`_ 35 * `Development documentation`_ 36 * `Development IRC`_ 37 38 Code of Conduct 39 --------------- 40 41 Everyone interacting in the pip project's codebases, issue trackers, chat 42 rooms, and mailing lists is expected to follow the `PSF Code of Conduct`_. 43 44 .. _package installer: https://packaging.python.org/guides/tool-recommendations/ 45 .. _Python Package Index: https://pypi.org 46 .. _Installation: https://pip.pypa.io/en/stable/installation/ 47 .. _Usage: https://pip.pypa.io/en/stable/ 48 .. _Release notes: https://pip.pypa.io/en/stable/news.html 49 .. _Release process: https://pip.pypa.io/en/latest/development/release-process/ 50 .. _GitHub page: https://github.com/pypa/pip 51 .. _Development documentation: https://pip.pypa.io/en/latest/development 52 .. _made a big improvement to the heart of pip: https://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html 53 .. _learn more: https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-3-2020 54 .. _sign up for our user experience research studies: https://pyfound.blogspot.com/2020/03/new-pip-resolver-to-roll-out-this-year.html 55 .. _Python 2 support policy: https://pip.pypa.io/en/latest/development/release-process/#python-2-support 56 .. _Issue tracking: https://github.com/pypa/pip/issues 57 .. _Discourse channel: https://discuss.python.org/c/packaging 58 .. _User IRC: https://kiwiirc.com/nextclient/#ircs://irc.libera.chat:+6697/pypa 59 .. _Development IRC: https://kiwiirc.com/nextclient/#ircs://irc.libera.chat:+6697/pypa-dev 60 .. _PSF Code of Conduct: https://github.com/pypa/.github/blob/main/CODE_OF_CONDUCT.md 61 [end of README.rst] [start of noxfile.py] 1 """Automation using nox. 
2 """ 3 4 import glob 5 import os 6 import shutil 7 import sys 8 from pathlib import Path 9 from typing import Iterator, List, Tuple 10 11 import nox 12 13 # fmt: off 14 sys.path.append(".") 15 from tools import release # isort:skip # noqa 16 sys.path.pop() 17 # fmt: on 18 19 nox.options.reuse_existing_virtualenvs = True 20 nox.options.sessions = ["lint"] 21 22 LOCATIONS = { 23 "common-wheels": "tests/data/common_wheels", 24 "protected-pip": "tools/protected_pip.py", 25 } 26 REQUIREMENTS = { 27 "docs": "docs/requirements.txt", 28 "tests": "tests/requirements.txt", 29 "common-wheels": "tests/requirements-common_wheels.txt", 30 } 31 32 AUTHORS_FILE = "AUTHORS.txt" 33 VERSION_FILE = "src/pip/__init__.py" 34 35 36 def run_with_protected_pip(session: nox.Session, *arguments: str) -> None: 37 """Do a session.run("pip", *arguments), using a "protected" pip. 38 39 This invokes a wrapper script, that forwards calls to original virtualenv 40 (stable) version, and not the code being tested. This ensures pip being 41 used is not the code being tested. 42 """ 43 env = {"VIRTUAL_ENV": session.virtualenv.location} 44 45 command = ("python", LOCATIONS["protected-pip"]) + arguments 46 session.run(*command, env=env, silent=True) 47 48 49 def should_update_common_wheels() -> bool: 50 # If the cache hasn't been created, create it. 51 if not os.path.exists(LOCATIONS["common-wheels"]): 52 return True 53 54 # If the requirements was updated after cache, we'll repopulate it. 55 cache_last_populated_at = os.path.getmtime(LOCATIONS["common-wheels"]) 56 requirements_updated_at = os.path.getmtime(REQUIREMENTS["common-wheels"]) 57 need_to_repopulate = requirements_updated_at > cache_last_populated_at 58 59 # Clear the stale cache. 60 if need_to_repopulate: 61 shutil.rmtree(LOCATIONS["common-wheels"], ignore_errors=True) 62 63 return need_to_repopulate 64 65 66 # ----------------------------------------------------------------------------- 67 # Development Commands 68 # ----------------------------------------------------------------------------- 69 @nox.session(python=["3.7", "3.8", "3.9", "3.10", "pypy3"]) 70 def test(session: nox.Session) -> None: 71 # Get the common wheels. 72 if should_update_common_wheels(): 73 # fmt: off 74 run_with_protected_pip( 75 session, 76 "wheel", 77 "-w", LOCATIONS["common-wheels"], 78 "-r", REQUIREMENTS["common-wheels"], 79 ) 80 # fmt: on 81 else: 82 msg = f"Re-using existing common-wheels at {LOCATIONS['common-wheels']}." 83 session.log(msg) 84 85 # Build source distribution 86 sdist_dir = os.path.join(session.virtualenv.location, "sdist") 87 if os.path.exists(sdist_dir): 88 shutil.rmtree(sdist_dir, ignore_errors=True) 89 90 # fmt: off 91 session.run( 92 "python", "setup.py", "sdist", "--formats=zip", "--dist-dir", sdist_dir, 93 silent=True, 94 ) 95 # fmt: on 96 97 generated_files = os.listdir(sdist_dir) 98 assert len(generated_files) == 1 99 generated_sdist = os.path.join(sdist_dir, generated_files[0]) 100 101 # Install source distribution 102 run_with_protected_pip(session, "install", generated_sdist) 103 104 # Install test dependencies 105 run_with_protected_pip(session, "install", "-r", REQUIREMENTS["tests"]) 106 107 # Parallelize tests as much as possible, by default. 108 arguments = session.posargs or ["-n", "auto"] 109 110 # Run the tests 111 # LC_CTYPE is set to get UTF-8 output inside of the subprocesses that our 112 # tests use. 
113 session.run( 114 "pytest", 115 *arguments, 116 env={ 117 "LC_CTYPE": "en_US.UTF-8", 118 }, 119 ) 120 121 122 @nox.session 123 def docs(session: nox.Session) -> None: 124 session.install("-e", ".") 125 session.install("-r", REQUIREMENTS["docs"]) 126 127 def get_sphinx_build_command(kind: str) -> List[str]: 128 # Having the conf.py in the docs/html is weird but needed because we 129 # can not use a different configuration directory vs source directory 130 # on RTD currently. So, we'll pass "-c docs/html" here. 131 # See https://github.com/rtfd/readthedocs.org/issues/1543. 132 # fmt: off 133 return [ 134 "sphinx-build", 135 "-W", 136 "-c", "docs/html", # see note above 137 "-d", "docs/build/doctrees/" + kind, 138 "-b", kind, 139 "docs/" + kind, 140 "docs/build/" + kind, 141 ] 142 # fmt: on 143 144 session.run(*get_sphinx_build_command("html")) 145 session.run(*get_sphinx_build_command("man")) 146 147 148 @nox.session(name="docs-live") 149 def docs_live(session: nox.Session) -> None: 150 session.install("-e", ".") 151 session.install("-r", REQUIREMENTS["docs"], "sphinx-autobuild") 152 153 session.run( 154 "sphinx-autobuild", 155 "-d=docs/build/doctrees/livehtml", 156 "-b=dirhtml", 157 "docs/html", 158 "docs/build/livehtml", 159 *session.posargs, 160 ) 161 162 163 @nox.session 164 def lint(session: nox.Session) -> None: 165 session.install("pre-commit") 166 167 if session.posargs: 168 args = session.posargs + ["--all-files"] 169 else: 170 args = ["--all-files", "--show-diff-on-failure"] 171 172 session.run("pre-commit", "run", *args) 173 174 175 # NOTE: This session will COMMIT upgrades to vendored libraries. 176 # You should therefore not run it directly against `main`. If you 177 # do (assuming you started with a clean main), you can run: 178 # 179 # git checkout -b vendoring-updates 180 # git checkout main 181 # git reset --hard origin/main 182 @nox.session 183 def vendoring(session: nox.Session) -> None: 184 session.install("vendoring~=1.2.0") 185 186 if "--upgrade" not in session.posargs: 187 session.run("vendoring", "sync", "-v") 188 return 189 190 def pinned_requirements(path: Path) -> Iterator[Tuple[str, str]]: 191 for line in path.read_text().splitlines(keepends=False): 192 one, sep, two = line.partition("==") 193 if not sep: 194 continue 195 name = one.strip() 196 version = two.split("#", 1)[0].strip() 197 if name and version: 198 yield name, version 199 200 vendor_txt = Path("src/pip/_vendor/vendor.txt") 201 for name, old_version in pinned_requirements(vendor_txt): 202 if name == "setuptools": 203 continue 204 205 # update requirements.txt 206 session.run("vendoring", "update", ".", name) 207 208 # get the updated version 209 new_version = old_version 210 for inner_name, inner_version in pinned_requirements(vendor_txt): 211 if inner_name == name: 212 # this is a dedicated assignment, to make flake8 happy 213 new_version = inner_version 214 break 215 else: 216 session.error(f"Could not find {name} in {vendor_txt}") 217 218 # check if the version changed. 219 if new_version == old_version: 220 continue # no change, nothing more to do here. 
221 222 # synchronize the contents 223 session.run("vendoring", "sync", ".") 224 225 # Determine the correct message 226 message = f"Upgrade {name} to {new_version}" 227 228 # Write our news fragment 229 news_file = Path("news") / (name + ".vendor.rst") 230 news_file.write_text(message + "\n") # "\n" appeases end-of-line-fixer 231 232 # Commit the changes 233 release.commit_file(session, ".", message=message) 234 235 236 @nox.session 237 def coverage(session: nox.Session) -> None: 238 # Install source distribution 239 run_with_protected_pip(session, "install", ".") 240 241 # Install test dependencies 242 run_with_protected_pip(session, "install", "-r", REQUIREMENTS["tests"]) 243 244 if not os.path.exists(".coverage-output"): 245 os.mkdir(".coverage-output") 246 session.run( 247 "pytest", 248 "--cov=pip", 249 "--cov-config=./setup.cfg", 250 *session.posargs, 251 env={ 252 "COVERAGE_OUTPUT_DIR": "./.coverage-output", 253 "COVERAGE_PROCESS_START": os.fsdecode(Path("setup.cfg").resolve()), 254 }, 255 ) 256 257 258 # ----------------------------------------------------------------------------- 259 # Release Commands 260 # ----------------------------------------------------------------------------- 261 @nox.session(name="prepare-release") 262 def prepare_release(session: nox.Session) -> None: 263 version = release.get_version_from_arguments(session) 264 if not version: 265 session.error("Usage: nox -s prepare-release -- <version>") 266 267 session.log("# Ensure nothing is staged") 268 if release.modified_files_in_git("--staged"): 269 session.error("There are files staged in git") 270 271 session.log(f"# Updating {AUTHORS_FILE}") 272 release.generate_authors(AUTHORS_FILE) 273 if release.modified_files_in_git(): 274 release.commit_file(session, AUTHORS_FILE, message=f"Update {AUTHORS_FILE}") 275 else: 276 session.log(f"# No changes to {AUTHORS_FILE}") 277 278 session.log("# Generating NEWS") 279 release.generate_news(session, version) 280 281 session.log(f"# Bumping for release {version}") 282 release.update_version_file(version, VERSION_FILE) 283 release.commit_file(session, VERSION_FILE, message="Bump for release") 284 285 session.log("# Tagging release") 286 release.create_git_tag(session, version, message=f"Release {version}") 287 288 session.log("# Bumping for development") 289 next_dev_version = release.get_next_development_version(version) 290 release.update_version_file(next_dev_version, VERSION_FILE) 291 release.commit_file(session, VERSION_FILE, message="Bump for development") 292 293 294 @nox.session(name="build-release") 295 def build_release(session: nox.Session) -> None: 296 version = release.get_version_from_arguments(session) 297 if not version: 298 session.error("Usage: nox -s build-release -- YY.N[.P]") 299 300 session.log("# Ensure no files in dist/") 301 if release.have_files_in_folder("dist"): 302 session.error( 303 "There are files in dist/. Remove them and try again. 
" 304 "You can use `git clean -fxdi -- dist` command to do this" 305 ) 306 307 session.log("# Install dependencies") 308 session.install("setuptools", "wheel", "twine") 309 310 with release.isolated_temporary_checkout(session, version) as build_dir: 311 session.log( 312 "# Start the build in an isolated, " 313 f"temporary Git checkout at {build_dir!s}", 314 ) 315 with release.workdir(session, build_dir): 316 tmp_dists = build_dists(session) 317 318 tmp_dist_paths = (build_dir / p for p in tmp_dists) 319 session.log(f"# Copying dists from {build_dir}") 320 os.makedirs("dist", exist_ok=True) 321 for dist, final in zip(tmp_dist_paths, tmp_dists): 322 session.log(f"# Copying {dist} to {final}") 323 shutil.copy(dist, final) 324 325 326 def build_dists(session: nox.Session) -> List[str]: 327 """Return dists with valid metadata.""" 328 session.log( 329 "# Check if there's any Git-untracked files before building the wheel", 330 ) 331 332 has_forbidden_git_untracked_files = any( 333 # Don't report the environment this session is running in 334 not untracked_file.startswith(".nox/build-release/") 335 for untracked_file in release.get_git_untracked_files() 336 ) 337 if has_forbidden_git_untracked_files: 338 session.error( 339 "There are untracked files in the working directory. " 340 "Remove them and try again", 341 ) 342 343 session.log("# Build distributions") 344 session.run("python", "setup.py", "sdist", "bdist_wheel", silent=True) 345 produced_dists = glob.glob("dist/*") 346 347 session.log(f"# Verify distributions: {', '.join(produced_dists)}") 348 session.run("twine", "check", *produced_dists, silent=True) 349 350 return produced_dists 351 352 353 @nox.session(name="upload-release") 354 def upload_release(session: nox.Session) -> None: 355 version = release.get_version_from_arguments(session) 356 if not version: 357 session.error("Usage: nox -s upload-release -- YY.N[.P]") 358 359 session.log("# Install dependencies") 360 session.install("twine") 361 362 distribution_files = glob.glob("dist/*") 363 session.log(f"# Distribution files: {distribution_files}") 364 365 # Sanity check: Make sure there's 2 distribution files. 366 count = len(distribution_files) 367 if count != 2: 368 session.error( 369 f"Expected 2 distribution files for upload, got {count}. " 370 f"Remove dist/ and run 'nox -s build-release -- {version}'" 371 ) 372 # Sanity check: Make sure the files are correctly named. 373 distfile_names = (os.path.basename(fn) for fn in distribution_files) 374 expected_distribution_files = [ 375 f"pip-{version}-py3-none-any.whl", 376 f"pip-{version}.tar.gz", 377 ] 378 if sorted(distfile_names) != sorted(expected_distribution_files): 379 session.error(f"Distribution files do not seem to be for {version} release.") 380 381 session.log("# Upload distributions") 382 session.run("twine", "upload", *distribution_files) 383 [end of noxfile.py] [start of src/pip/__init__.py] 1 from typing import List, Optional 2 3 __version__ = "22.3.dev0" 4 5 6 def main(args: Optional[List[str]] = None) -> int: 7 """This is an internal API only meant for use by pip's own console scripts. 8 9 For additional details, see https://github.com/pypa/pip/issues/7498. 10 """ 11 from pip._internal.utils.entrypoints import _wrapper 12 13 return _wrapper(args) 14 [end of src/pip/__init__.py] [start of src/pip/_internal/cli/req_command.py] 1 """Contains the Command base classes that depend on PipSession. 
2 3 The classes in this module are in a separate module so the commands not 4 needing download / PackageFinder capability don't unnecessarily import the 5 PackageFinder machinery and all its vendored dependencies, etc. 6 """ 7 8 import logging 9 import os 10 import sys 11 from functools import partial 12 from optparse import Values 13 from typing import TYPE_CHECKING, Any, List, Optional, Tuple 14 15 from pip._internal.cache import WheelCache 16 from pip._internal.cli import cmdoptions 17 from pip._internal.cli.base_command import Command 18 from pip._internal.cli.command_context import CommandContextMixIn 19 from pip._internal.exceptions import CommandError, PreviousBuildDirError 20 from pip._internal.index.collector import LinkCollector 21 from pip._internal.index.package_finder import PackageFinder 22 from pip._internal.models.selection_prefs import SelectionPreferences 23 from pip._internal.models.target_python import TargetPython 24 from pip._internal.network.session import PipSession 25 from pip._internal.operations.build.build_tracker import BuildTracker 26 from pip._internal.operations.prepare import RequirementPreparer 27 from pip._internal.req.constructors import ( 28 install_req_from_editable, 29 install_req_from_line, 30 install_req_from_parsed_requirement, 31 install_req_from_req_string, 32 ) 33 from pip._internal.req.req_file import parse_requirements 34 from pip._internal.req.req_install import InstallRequirement 35 from pip._internal.resolution.base import BaseResolver 36 from pip._internal.self_outdated_check import pip_self_version_check 37 from pip._internal.utils.temp_dir import ( 38 TempDirectory, 39 TempDirectoryTypeRegistry, 40 tempdir_kinds, 41 ) 42 from pip._internal.utils.virtualenv import running_under_virtualenv 43 44 if TYPE_CHECKING: 45 from ssl import SSLContext 46 47 logger = logging.getLogger(__name__) 48 49 50 def _create_truststore_ssl_context() -> Optional["SSLContext"]: 51 if sys.version_info < (3, 10): 52 raise CommandError("The truststore feature is only available for Python 3.10+") 53 54 try: 55 import ssl 56 except ImportError: 57 logger.warning("Disabling truststore since ssl support is missing") 58 return None 59 60 try: 61 import truststore 62 except ImportError: 63 raise CommandError( 64 "To use the truststore feature, 'truststore' must be installed into " 65 "pip's current environment." 66 ) 67 68 return truststore.SSLContext(ssl.PROTOCOL_TLS_CLIENT) 69 70 71 class SessionCommandMixin(CommandContextMixIn): 72 73 """ 74 A class mixin for command classes needing _build_session(). 
75 """ 76 77 def __init__(self) -> None: 78 super().__init__() 79 self._session: Optional[PipSession] = None 80 81 @classmethod 82 def _get_index_urls(cls, options: Values) -> Optional[List[str]]: 83 """Return a list of index urls from user-provided options.""" 84 index_urls = [] 85 if not getattr(options, "no_index", False): 86 url = getattr(options, "index_url", None) 87 if url: 88 index_urls.append(url) 89 urls = getattr(options, "extra_index_urls", None) 90 if urls: 91 index_urls.extend(urls) 92 # Return None rather than an empty list 93 return index_urls or None 94 95 def get_default_session(self, options: Values) -> PipSession: 96 """Get a default-managed session.""" 97 if self._session is None: 98 self._session = self.enter_context(self._build_session(options)) 99 # there's no type annotation on requests.Session, so it's 100 # automatically ContextManager[Any] and self._session becomes Any, 101 # then https://github.com/python/mypy/issues/7696 kicks in 102 assert self._session is not None 103 return self._session 104 105 def _build_session( 106 self, 107 options: Values, 108 retries: Optional[int] = None, 109 timeout: Optional[int] = None, 110 fallback_to_certifi: bool = False, 111 ) -> PipSession: 112 cache_dir = options.cache_dir 113 assert not cache_dir or os.path.isabs(cache_dir) 114 115 if "truststore" in options.features_enabled: 116 try: 117 ssl_context = _create_truststore_ssl_context() 118 except Exception: 119 if not fallback_to_certifi: 120 raise 121 ssl_context = None 122 else: 123 ssl_context = None 124 125 session = PipSession( 126 cache=os.path.join(cache_dir, "http") if cache_dir else None, 127 retries=retries if retries is not None else options.retries, 128 trusted_hosts=options.trusted_hosts, 129 index_urls=self._get_index_urls(options), 130 ssl_context=ssl_context, 131 ) 132 133 # Handle custom ca-bundles from the user 134 if options.cert: 135 session.verify = options.cert 136 137 # Handle SSL client certificate 138 if options.client_cert: 139 session.cert = options.client_cert 140 141 # Handle timeouts 142 if options.timeout or timeout: 143 session.timeout = timeout if timeout is not None else options.timeout 144 145 # Handle configured proxies 146 if options.proxy: 147 session.proxies = { 148 "http": options.proxy, 149 "https": options.proxy, 150 } 151 152 # Determine if we can prompt the user for authentication or not 153 session.auth.prompting = not options.no_input 154 155 return session 156 157 158 class IndexGroupCommand(Command, SessionCommandMixin): 159 160 """ 161 Abstract base class for commands with the index_group options. 162 163 This also corresponds to the commands that permit the pip version check. 164 """ 165 166 def handle_pip_version_check(self, options: Values) -> None: 167 """ 168 Do the pip version check if not disabled. 169 170 This overrides the default behavior of not doing the check. 171 """ 172 # Make sure the index_group options are present. 173 assert hasattr(options, "no_index") 174 175 if options.disable_pip_version_check or options.no_index: 176 return 177 178 # Otherwise, check if we're using the latest version of pip available. 179 session = self._build_session( 180 options, 181 retries=0, 182 timeout=min(5, options.timeout), 183 # This is set to ensure the function does not fail when truststore is 184 # specified in use-feature but cannot be loaded. This usually raises a 185 # CommandError and shows a nice user-facing error, but this function is not 186 # called in that try-except block. 
187 fallback_to_certifi=True, 188 ) 189 with session: 190 pip_self_version_check(session, options) 191 192 193 KEEPABLE_TEMPDIR_TYPES = [ 194 tempdir_kinds.BUILD_ENV, 195 tempdir_kinds.EPHEM_WHEEL_CACHE, 196 tempdir_kinds.REQ_BUILD, 197 ] 198 199 200 def warn_if_run_as_root() -> None: 201 """Output a warning for sudo users on Unix. 202 203 In a virtual environment, sudo pip still writes to virtualenv. 204 On Windows, users may run pip as Administrator without issues. 205 This warning only applies to Unix root users outside of virtualenv. 206 """ 207 if running_under_virtualenv(): 208 return 209 if not hasattr(os, "getuid"): 210 return 211 # On Windows, there are no "system managed" Python packages. Installing as 212 # Administrator via pip is the correct way of updating system environments. 213 # 214 # We choose sys.platform over utils.compat.WINDOWS here to enable Mypy platform 215 # checks: https://mypy.readthedocs.io/en/stable/common_issues.html 216 if sys.platform == "win32" or sys.platform == "cygwin": 217 return 218 219 if os.getuid() != 0: 220 return 221 222 logger.warning( 223 "Running pip as the 'root' user can result in broken permissions and " 224 "conflicting behaviour with the system package manager. " 225 "It is recommended to use a virtual environment instead: " 226 "https://pip.pypa.io/warnings/venv" 227 ) 228 229 230 def with_cleanup(func: Any) -> Any: 231 """Decorator for common logic related to managing temporary 232 directories. 233 """ 234 235 def configure_tempdir_registry(registry: TempDirectoryTypeRegistry) -> None: 236 for t in KEEPABLE_TEMPDIR_TYPES: 237 registry.set_delete(t, False) 238 239 def wrapper( 240 self: RequirementCommand, options: Values, args: List[Any] 241 ) -> Optional[int]: 242 assert self.tempdir_registry is not None 243 if options.no_clean: 244 configure_tempdir_registry(self.tempdir_registry) 245 246 try: 247 return func(self, options, args) 248 except PreviousBuildDirError: 249 # This kind of conflict can occur when the user passes an explicit 250 # build directory with a pre-existing folder. In that case we do 251 # not want to accidentally remove it. 252 configure_tempdir_registry(self.tempdir_registry) 253 raise 254 255 return wrapper 256 257 258 class RequirementCommand(IndexGroupCommand): 259 def __init__(self, *args: Any, **kw: Any) -> None: 260 super().__init__(*args, **kw) 261 262 self.cmd_opts.add_option(cmdoptions.no_clean()) 263 264 @staticmethod 265 def determine_resolver_variant(options: Values) -> str: 266 """Determines which resolver should be used, based on the given options.""" 267 if "legacy-resolver" in options.deprecated_features_enabled: 268 return "legacy" 269 270 return "2020-resolver" 271 272 @classmethod 273 def make_requirement_preparer( 274 cls, 275 temp_build_dir: TempDirectory, 276 options: Values, 277 build_tracker: BuildTracker, 278 session: PipSession, 279 finder: PackageFinder, 280 use_user_site: bool, 281 download_dir: Optional[str] = None, 282 verbosity: int = 0, 283 ) -> RequirementPreparer: 284 """ 285 Create a RequirementPreparer instance for the given parameters. 286 """ 287 temp_build_dir_path = temp_build_dir.path 288 assert temp_build_dir_path is not None 289 290 resolver_variant = cls.determine_resolver_variant(options) 291 if resolver_variant == "2020-resolver": 292 lazy_wheel = "fast-deps" in options.features_enabled 293 if lazy_wheel: 294 logger.warning( 295 "pip is using lazily downloaded wheels using HTTP " 296 "range requests to obtain dependency information. 
" 297 "This experimental feature is enabled through " 298 "--use-feature=fast-deps and it is not ready for " 299 "production." 300 ) 301 else: 302 lazy_wheel = False 303 if "fast-deps" in options.features_enabled: 304 logger.warning( 305 "fast-deps has no effect when used with the legacy resolver." 306 ) 307 308 return RequirementPreparer( 309 build_dir=temp_build_dir_path, 310 src_dir=options.src_dir, 311 download_dir=download_dir, 312 build_isolation=options.build_isolation, 313 check_build_deps=options.check_build_deps, 314 build_tracker=build_tracker, 315 session=session, 316 progress_bar=options.progress_bar, 317 finder=finder, 318 require_hashes=options.require_hashes, 319 use_user_site=use_user_site, 320 lazy_wheel=lazy_wheel, 321 verbosity=verbosity, 322 ) 323 324 @classmethod 325 def make_resolver( 326 cls, 327 preparer: RequirementPreparer, 328 finder: PackageFinder, 329 options: Values, 330 wheel_cache: Optional[WheelCache] = None, 331 use_user_site: bool = False, 332 ignore_installed: bool = True, 333 ignore_requires_python: bool = False, 334 force_reinstall: bool = False, 335 upgrade_strategy: str = "to-satisfy-only", 336 use_pep517: Optional[bool] = None, 337 py_version_info: Optional[Tuple[int, ...]] = None, 338 ) -> BaseResolver: 339 """ 340 Create a Resolver instance for the given parameters. 341 """ 342 make_install_req = partial( 343 install_req_from_req_string, 344 isolated=options.isolated_mode, 345 use_pep517=use_pep517, 346 config_settings=getattr(options, "config_settings", None), 347 ) 348 resolver_variant = cls.determine_resolver_variant(options) 349 # The long import name and duplicated invocation is needed to convince 350 # Mypy into correctly typechecking. Otherwise it would complain the 351 # "Resolver" class being redefined. 352 if resolver_variant == "2020-resolver": 353 import pip._internal.resolution.resolvelib.resolver 354 355 return pip._internal.resolution.resolvelib.resolver.Resolver( 356 preparer=preparer, 357 finder=finder, 358 wheel_cache=wheel_cache, 359 make_install_req=make_install_req, 360 use_user_site=use_user_site, 361 ignore_dependencies=options.ignore_dependencies, 362 ignore_installed=ignore_installed, 363 ignore_requires_python=ignore_requires_python, 364 force_reinstall=force_reinstall, 365 upgrade_strategy=upgrade_strategy, 366 py_version_info=py_version_info, 367 ) 368 import pip._internal.resolution.legacy.resolver 369 370 return pip._internal.resolution.legacy.resolver.Resolver( 371 preparer=preparer, 372 finder=finder, 373 wheel_cache=wheel_cache, 374 make_install_req=make_install_req, 375 use_user_site=use_user_site, 376 ignore_dependencies=options.ignore_dependencies, 377 ignore_installed=ignore_installed, 378 ignore_requires_python=ignore_requires_python, 379 force_reinstall=force_reinstall, 380 upgrade_strategy=upgrade_strategy, 381 py_version_info=py_version_info, 382 ) 383 384 def get_requirements( 385 self, 386 args: List[str], 387 options: Values, 388 finder: PackageFinder, 389 session: PipSession, 390 ) -> List[InstallRequirement]: 391 """ 392 Parse command-line arguments into the corresponding requirements. 
393 """ 394 requirements: List[InstallRequirement] = [] 395 for filename in options.constraints: 396 for parsed_req in parse_requirements( 397 filename, 398 constraint=True, 399 finder=finder, 400 options=options, 401 session=session, 402 ): 403 req_to_add = install_req_from_parsed_requirement( 404 parsed_req, 405 isolated=options.isolated_mode, 406 user_supplied=False, 407 ) 408 requirements.append(req_to_add) 409 410 for req in args: 411 req_to_add = install_req_from_line( 412 req, 413 None, 414 isolated=options.isolated_mode, 415 use_pep517=options.use_pep517, 416 user_supplied=True, 417 config_settings=getattr(options, "config_settings", None), 418 ) 419 requirements.append(req_to_add) 420 421 for req in options.editables: 422 req_to_add = install_req_from_editable( 423 req, 424 user_supplied=True, 425 isolated=options.isolated_mode, 426 use_pep517=options.use_pep517, 427 config_settings=getattr(options, "config_settings", None), 428 ) 429 requirements.append(req_to_add) 430 431 # NOTE: options.require_hashes may be set if --require-hashes is True 432 for filename in options.requirements: 433 for parsed_req in parse_requirements( 434 filename, finder=finder, options=options, session=session 435 ): 436 req_to_add = install_req_from_parsed_requirement( 437 parsed_req, 438 isolated=options.isolated_mode, 439 use_pep517=options.use_pep517, 440 user_supplied=True, 441 ) 442 requirements.append(req_to_add) 443 444 # If any requirement has hash options, enable hash checking. 445 if any(req.has_hash_options for req in requirements): 446 options.require_hashes = True 447 448 if not (args or options.editables or options.requirements): 449 opts = {"name": self.name} 450 if options.find_links: 451 raise CommandError( 452 "You must give at least one requirement to {name} " 453 '(maybe you meant "pip {name} {links}"?)'.format( 454 **dict(opts, links=" ".join(options.find_links)) 455 ) 456 ) 457 else: 458 raise CommandError( 459 "You must give at least one requirement to {name} " 460 '(see "pip help {name}")'.format(**opts) 461 ) 462 463 return requirements 464 465 @staticmethod 466 def trace_basic_info(finder: PackageFinder) -> None: 467 """ 468 Trace basic information about the provided objects. 469 """ 470 # Display where finder is looking for packages 471 search_scope = finder.search_scope 472 locations = search_scope.get_formatted_locations() 473 if locations: 474 logger.info(locations) 475 476 def _build_package_finder( 477 self, 478 options: Values, 479 session: PipSession, 480 target_python: Optional[TargetPython] = None, 481 ignore_requires_python: Optional[bool] = None, 482 ) -> PackageFinder: 483 """ 484 Create a package finder appropriate to this requirement command. 485 486 :param ignore_requires_python: Whether to ignore incompatible 487 "Requires-Python" values in links. Defaults to False. 488 """ 489 link_collector = LinkCollector.create(session, options=options) 490 selection_prefs = SelectionPreferences( 491 allow_yanked=True, 492 format_control=options.format_control, 493 allow_all_prereleases=options.pre, 494 prefer_binary=options.prefer_binary, 495 ignore_requires_python=ignore_requires_python, 496 ) 497 498 return PackageFinder.create( 499 link_collector=link_collector, 500 selection_prefs=selection_prefs, 501 target_python=target_python, 502 ) 503 [end of src/pip/_internal/cli/req_command.py] [start of src/pip/_internal/utils/appdirs.py] 1 """ 2 This code wraps the vendored appdirs module to so the return values are 3 compatible for the current pip code base. 
4 5 The intention is to rewrite current usages gradually, keeping the tests pass, 6 and eventually drop this after all usages are changed. 7 """ 8 9 import os 10 import sys 11 from typing import List 12 13 from pip._vendor import platformdirs as _appdirs 14 15 16 def user_cache_dir(appname: str) -> str: 17 return _appdirs.user_cache_dir(appname, appauthor=False) 18 19 20 def _macos_user_config_dir(appname: str, roaming: bool = True) -> str: 21 # Use ~/Application Support/pip, if the directory exists. 22 path = _appdirs.user_data_dir(appname, appauthor=False, roaming=roaming) 23 if os.path.isdir(path): 24 return path 25 26 # Use a Linux-like ~/.config/pip, by default. 27 linux_like_path = "~/.config/" 28 if appname: 29 linux_like_path = os.path.join(linux_like_path, appname) 30 31 return os.path.expanduser(linux_like_path) 32 33 34 def user_config_dir(appname: str, roaming: bool = True) -> str: 35 if sys.platform == "darwin": 36 return _macos_user_config_dir(appname, roaming) 37 38 return _appdirs.user_config_dir(appname, appauthor=False, roaming=roaming) 39 40 41 # for the discussion regarding site_config_dir locations 42 # see <https://github.com/pypa/pip/issues/1733> 43 def site_config_dirs(appname: str) -> List[str]: 44 if sys.platform == "darwin": 45 return [_appdirs.site_data_dir(appname, appauthor=False, multipath=True)] 46 47 dirval = _appdirs.site_config_dir(appname, appauthor=False, multipath=True) 48 if sys.platform == "win32": 49 return [dirval] 50 51 # Unix-y system. Look in /etc as well. 52 return dirval.split(os.pathsep) + ["/etc"] 53 [end of src/pip/_internal/utils/appdirs.py] [start of src/pip/_vendor/__init__.py] 1 """ 2 pip._vendor is for vendoring dependencies of pip to prevent needing pip to 3 depend on something external. 4 5 Files inside of pip._vendor should be considered immutable and should only be 6 updated to versions from upstream. 7 """ 8 from __future__ import absolute_import 9 10 import glob 11 import os.path 12 import sys 13 14 # Downstream redistributors which have debundled our dependencies should also 15 # patch this value to be true. This will trigger the additional patching 16 # to cause things like "six" to be available as pip. 17 DEBUNDLED = False 18 19 # By default, look in this directory for a bunch of .whl files which we will 20 # add to the beginning of sys.path before attempting to import anything. This 21 # is done to support downstream re-distributors like Debian and Fedora who 22 # wish to create their own Wheels for our dependencies to aid in debundling. 23 WHEEL_DIR = os.path.abspath(os.path.dirname(__file__)) 24 25 26 # Define a small helper function to alias our vendored modules to the real ones 27 # if the vendored ones do not exist. This idea of this was taken from 28 # https://github.com/kennethreitz/requests/pull/2567. 29 def vendored(modulename): 30 vendored_name = "{0}.{1}".format(__name__, modulename) 31 32 try: 33 __import__(modulename, globals(), locals(), level=0) 34 except ImportError: 35 # We can just silently allow import failures to pass here. If we 36 # got to this point it means that ``import pip._vendor.whatever`` 37 # failed and so did ``import whatever``. Since we're importing this 38 # upfront in an attempt to alias imports, not erroring here will 39 # just mean we get a regular import error whenever pip *actually* 40 # tries to import one of these modules to use it, which actually 41 # gives us a better error message than we would have otherwise 42 # gotten. 
43 pass 44 else: 45 sys.modules[vendored_name] = sys.modules[modulename] 46 base, head = vendored_name.rsplit(".", 1) 47 setattr(sys.modules[base], head, sys.modules[modulename]) 48 49 50 # If we're operating in a debundled setup, then we want to go ahead and trigger 51 # the aliasing of our vendored libraries as well as looking for wheels to add 52 # to our sys.path. This will cause all of this code to be a no-op typically 53 # however downstream redistributors can enable it in a consistent way across 54 # all platforms. 55 if DEBUNDLED: 56 # Actually look inside of WHEEL_DIR to find .whl files and add them to the 57 # front of our sys.path. 58 sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, "*.whl")) + sys.path 59 60 # Actually alias all of our vendored dependencies. 61 vendored("cachecontrol") 62 vendored("certifi") 63 vendored("colorama") 64 vendored("distlib") 65 vendored("distro") 66 vendored("six") 67 vendored("six.moves") 68 vendored("six.moves.urllib") 69 vendored("six.moves.urllib.parse") 70 vendored("packaging") 71 vendored("packaging.version") 72 vendored("packaging.specifiers") 73 vendored("pep517") 74 vendored("pkg_resources") 75 vendored("platformdirs") 76 vendored("progress") 77 vendored("requests") 78 vendored("requests.exceptions") 79 vendored("requests.packages") 80 vendored("requests.packages.urllib3") 81 vendored("requests.packages.urllib3._collections") 82 vendored("requests.packages.urllib3.connection") 83 vendored("requests.packages.urllib3.connectionpool") 84 vendored("requests.packages.urllib3.contrib") 85 vendored("requests.packages.urllib3.contrib.ntlmpool") 86 vendored("requests.packages.urllib3.contrib.pyopenssl") 87 vendored("requests.packages.urllib3.exceptions") 88 vendored("requests.packages.urllib3.fields") 89 vendored("requests.packages.urllib3.filepost") 90 vendored("requests.packages.urllib3.packages") 91 vendored("requests.packages.urllib3.packages.ordered_dict") 92 vendored("requests.packages.urllib3.packages.six") 93 vendored("requests.packages.urllib3.packages.ssl_match_hostname") 94 vendored("requests.packages.urllib3.packages.ssl_match_hostname." 95 "_implementation") 96 vendored("requests.packages.urllib3.poolmanager") 97 vendored("requests.packages.urllib3.request") 98 vendored("requests.packages.urllib3.response") 99 vendored("requests.packages.urllib3.util") 100 vendored("requests.packages.urllib3.util.connection") 101 vendored("requests.packages.urllib3.util.request") 102 vendored("requests.packages.urllib3.util.response") 103 vendored("requests.packages.urllib3.util.retry") 104 vendored("requests.packages.urllib3.util.ssl_") 105 vendored("requests.packages.urllib3.util.timeout") 106 vendored("requests.packages.urllib3.util.url") 107 vendored("resolvelib") 108 vendored("rich") 109 vendored("rich.console") 110 vendored("rich.highlighter") 111 vendored("rich.logging") 112 vendored("rich.markup") 113 vendored("rich.progress") 114 vendored("rich.segment") 115 vendored("rich.style") 116 vendored("rich.text") 117 vendored("rich.traceback") 118 vendored("tenacity") 119 vendored("tomli") 120 vendored("urllib3") 121 [end of src/pip/_vendor/__init__.py] [start of src/pip/_vendor/urllib3/__init__.py] 1 """ 2 Python HTTP library with thread-safe connection pooling, file post support, user friendly, and more 3 """ 4 from __future__ import absolute_import 5 6 # Set default logging handler to avoid "No handler found" warnings. 7 import logging 8 import warnings 9 from logging import NullHandler 10 11 from . 
import exceptions 12 from ._version import __version__ 13 from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url 14 from .filepost import encode_multipart_formdata 15 from .poolmanager import PoolManager, ProxyManager, proxy_from_url 16 from .response import HTTPResponse 17 from .util.request import make_headers 18 from .util.retry import Retry 19 from .util.timeout import Timeout 20 from .util.url import get_host 21 22 __author__ = "Andrey Petrov ([email protected])" 23 __license__ = "MIT" 24 __version__ = __version__ 25 26 __all__ = ( 27 "HTTPConnectionPool", 28 "HTTPSConnectionPool", 29 "PoolManager", 30 "ProxyManager", 31 "HTTPResponse", 32 "Retry", 33 "Timeout", 34 "add_stderr_logger", 35 "connection_from_url", 36 "disable_warnings", 37 "encode_multipart_formdata", 38 "get_host", 39 "make_headers", 40 "proxy_from_url", 41 ) 42 43 logging.getLogger(__name__).addHandler(NullHandler()) 44 45 46 def add_stderr_logger(level=logging.DEBUG): 47 """ 48 Helper for quickly adding a StreamHandler to the logger. Useful for 49 debugging. 50 51 Returns the handler after adding it. 52 """ 53 # This method needs to be in this __init__.py to get the __name__ correct 54 # even if urllib3 is vendored within another package. 55 logger = logging.getLogger(__name__) 56 handler = logging.StreamHandler() 57 handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s")) 58 logger.addHandler(handler) 59 logger.setLevel(level) 60 logger.debug("Added a stderr logging handler to logger: %s", __name__) 61 return handler 62 63 64 # ... Clean up. 65 del NullHandler 66 67 68 # All warning filters *must* be appended unless you're really certain that they 69 # shouldn't be: otherwise, it's very hard for users to use most Python 70 # mechanisms to silence them. 71 # SecurityWarning's always go off by default. 72 warnings.simplefilter("always", exceptions.SecurityWarning, append=True) 73 # SubjectAltNameWarning's should go off once per host 74 warnings.simplefilter("default", exceptions.SubjectAltNameWarning, append=True) 75 # InsecurePlatformWarning's don't vary between requests, so we keep it default. 76 warnings.simplefilter("default", exceptions.InsecurePlatformWarning, append=True) 77 # SNIMissingWarnings should go off only once. 78 warnings.simplefilter("default", exceptions.SNIMissingWarning, append=True) 79 80 81 def disable_warnings(category=exceptions.HTTPWarning): 82 """ 83 Helper for quickly disabling all urllib3 warnings. 84 """ 85 warnings.simplefilter("ignore", category) 86 [end of src/pip/_vendor/urllib3/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pypa/pip
f8a25921e5c443b07483017b0ffdeb08b9ba2fdf
Update vendored packages for 22.3

Toward #11486
@pfmoore What's your git autocrlf configuration?

The default. Which, looking at it, is `true`. In general, everything I do, I try to use the defaults, because that way I won't forget 🙂

OK, my guess is that it's likely what's making things break for you. https://stackoverflow.com/questions/2825428/why-should-i-use-core-autocrlf-true-in-git

I'll try to grab a Windows VM and do stuff in it to investigate.

BTW, my main problem here is mostly that I don't really understand any of the machinery involved in the vendoring (which I believe is largely your rewrite of the previous invoke-based stuff, that I *also* didn't understand 🙂) And I've never really felt comfortable with git's handling of line endings, I just stumble through mostly living with the occasional weirdness. So while I can point out where things don't work on Windows, I'm pretty lost trying to fix stuff. I am happy to try out suggestions, if you need me to check stuff, though.

One thought - is it possible to configure the pip repo somehow so it's not reliant on what the developer's global settings are? Something like `.gitattributes`? Of course, then we'd get problems when people create new files on Windows which default to CRLF...

> my guess is that it's likely what's making things break for you.

That's my guess, too. I remember the old `patch` utility used to go berserk when fed patches with CRLF in them. I didn't think git would have the same issue, as in general git seems less overtly hostile to Windows than the older tools, but maybe it does.

Although to be fair, the underlying issue is that the patch no longer applies. That's legitimate. There are 2 things that need consideration at that point:

1. How to we re-create the patch? The `certifi` patch hits this, as while it's easy enough to mechanically work out what the equivalent change is, I'm pretty sure just doing so is broken (there's a certifi issue where the maintainer expresses frustration at `importlib.resources`, and I can't say I disagree...)
2. What's a platform-agnostic way of rebuilding the patch? That's where CRLF issues and the whole "stage, change, make a diff, and discard" workflow is hitting me.

Creating patches has *always* been a Windows-hostile activity, it seems to me (some of which is simply the fact that `diff`/`git diff` don't have an `-o` option, resulting in all sorts of flaky behaviour as a result of Windows' redirection mechanisms).
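Purely as an aside on the line-ending discussion above (this snippet is not from the pip repository): one platform-agnostic way to rebuild a vendoring patch is to normalize everything to LF before diffing, so that a Windows `core.autocrlf` setting cannot leak CRLF into the patch. A minimal standard-library sketch; the two file paths and the label are hypothetical stand-ins for the pristine and patched copies of a vendored module.

```python
import difflib
from pathlib import Path
from typing import List


def read_lf_lines(path: str) -> List[str]:
    """Read a file as UTF-8 text with line endings normalized to LF."""
    data = Path(path).read_bytes().replace(b"\r\n", b"\n")
    return data.decode("utf-8").splitlines(keepends=True)


def lf_unified_diff(old_path: str, new_path: str, label: str) -> str:
    """Build a unified diff whose content is independent of the host OS."""
    return "".join(
        difflib.unified_diff(
            read_lf_lines(old_path),
            read_lf_lines(new_path),
            fromfile=f"a/{label}",
            tofile=f"b/{label}",
        )
    )


if __name__ == "__main__":
    # Hypothetical paths, purely for demonstration.
    print(
        lf_unified_diff(
            "core.orig.py",
            "core.patched.py",
            "src/pip/_vendor/certifi/core.py",
        )
    )
```

A `.gitattributes` rule along the lines of `src/pip/_vendor/** text eol=lf` would be one way to act on the `.gitattributes` idea floated above; whether that suits pip is not settled in this thread.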
2022-10-10T01:32:49Z
<patch> diff --git a/src/pip/_vendor/pygments/__init__.py b/src/pip/_vendor/pygments/__init__.py --- a/src/pip/_vendor/pygments/__init__.py +++ b/src/pip/_vendor/pygments/__init__.py @@ -26,7 +26,7 @@ """ from io import StringIO, BytesIO -__version__ = '2.12.0' +__version__ = '2.13.0' __docformat__ = 'restructuredtext' __all__ = ['lex', 'format', 'highlight'] @@ -38,10 +38,10 @@ def lex(code, lexer): """ try: return lexer.get_tokens(code) - except TypeError as err: - if (isinstance(err.args[0], str) and - ('unbound method get_tokens' in err.args[0] or - 'missing 1 required positional argument' in err.args[0])): + except TypeError: + # Heuristic to catch a common mistake. + from pip._vendor.pygments.lexer import RegexLexer + if isinstance(lexer, type) and issubclass(lexer, RegexLexer): raise TypeError('lex() argument must be a lexer instance, ' 'not a class') raise @@ -62,10 +62,10 @@ def format(tokens, formatter, outfile=None): # pylint: disable=redefined-builti return realoutfile.getvalue() else: formatter.format(tokens, outfile) - except TypeError as err: - if (isinstance(err.args[0], str) and - ('unbound method format' in err.args[0] or - 'missing 1 required positional argument' in err.args[0])): + except TypeError: + # Heuristic to catch a common mistake. + from pip._vendor.pygments.formatter import Formatter + if isinstance(formatter, type) and issubclass(formatter, Formatter): raise TypeError('format() argument must be a formatter instance, ' 'not a class') raise @@ -80,4 +80,3 @@ def highlight(code, lexer, formatter, outfile=None): it is returned as a string. """ return format(lex(code, lexer), formatter, outfile) - diff --git a/src/pip/_vendor/pygments/cmdline.py b/src/pip/_vendor/pygments/cmdline.py --- a/src/pip/_vendor/pygments/cmdline.py +++ b/src/pip/_vendor/pygments/cmdline.py @@ -25,7 +25,7 @@ from pip._vendor.pygments.formatters import get_all_formatters, get_formatter_by_name, \ load_formatter_from_file, get_formatter_for_filename, find_formatter_class from pip._vendor.pygments.formatters.terminal import TerminalFormatter -from pip._vendor.pygments.formatters.terminal256 import Terminal256Formatter +from pip._vendor.pygments.formatters.terminal256 import Terminal256Formatter, TerminalTrueColorFormatter from pip._vendor.pygments.filters import get_all_filters, find_filter_class from pip._vendor.pygments.styles import get_all_styles, get_style_by_name @@ -445,7 +445,9 @@ def is_only_option(opt): return 1 else: if not fmter: - if '256' in os.environ.get('TERM', ''): + if os.environ.get('COLORTERM','') in ('truecolor', '24bit'): + fmter = TerminalTrueColorFormatter(**parsed_opts) + elif '256' in os.environ.get('TERM', ''): fmter = Terminal256Formatter(**parsed_opts) else: fmter = TerminalFormatter(**parsed_opts) @@ -636,6 +638,9 @@ def main(args=sys.argv): try: return main_inner(parser, argns) + except BrokenPipeError: + # someone closed our stdout, e.g. by quitting a pager. + return 0 except Exception: if argns.v: print(file=sys.stderr) diff --git a/src/pip/_vendor/pygments/filters/__init__.py b/src/pip/_vendor/pygments/filters/__init__.py --- a/src/pip/_vendor/pygments/filters/__init__.py +++ b/src/pip/_vendor/pygments/filters/__init__.py @@ -69,13 +69,16 @@ class CodeTagFilter(Filter): `codetags` : list of strings A list of strings that are flagged as code tags. The default is to - highlight ``XXX``, ``TODO``, ``BUG`` and ``NOTE``. + highlight ``XXX``, ``TODO``, ``FIXME``, ``BUG`` and ``NOTE``. + + .. versionchanged:: 2.13 + Now recognizes ``FIXME`` by default. 
""" def __init__(self, **options): Filter.__init__(self, **options) tags = get_list_opt(options, 'codetags', - ['XXX', 'TODO', 'BUG', 'NOTE']) + ['XXX', 'TODO', 'FIXME', 'BUG', 'NOTE']) self.tag_re = re.compile(r'\b(%s)\b' % '|'.join([ re.escape(tag) for tag in tags if tag ])) diff --git a/src/pip/_vendor/pygments/formatters/__init__.py b/src/pip/_vendor/pygments/formatters/__init__.py --- a/src/pip/_vendor/pygments/formatters/__init__.py +++ b/src/pip/_vendor/pygments/formatters/__init__.py @@ -11,7 +11,7 @@ import re import sys import types -import fnmatch +from fnmatch import fnmatch from os.path import basename from pip._vendor.pygments.formatters._mapping import FORMATTERS @@ -22,16 +22,6 @@ 'get_all_formatters', 'load_formatter_from_file'] + list(FORMATTERS) _formatter_cache = {} # classes by name -_pattern_cache = {} - - -def _fn_matches(fn, glob): - """Return whether the supplied file name fn matches pattern filename.""" - if glob not in _pattern_cache: - pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob)) - return pattern.match(fn) - return _pattern_cache[glob].match(fn) - def _load_formatters(module_name): """Load a formatter (and all others in the module too).""" @@ -122,13 +112,13 @@ def get_formatter_for_filename(fn, **options): fn = basename(fn) for modname, name, _, filenames, _ in FORMATTERS.values(): for filename in filenames: - if _fn_matches(fn, filename): + if fnmatch(fn, filename): if name not in _formatter_cache: _load_formatters(modname) return _formatter_cache[name](**options) for cls in find_plugin_formatters(): for filename in cls.filenames: - if _fn_matches(fn, filename): + if fnmatch(fn, filename): return cls(**options) raise ClassNotFound("no formatter found for file name %r" % fn) diff --git a/src/pip/_vendor/pygments/formatters/_mapping.py b/src/pip/_vendor/pygments/formatters/_mapping.py --- a/src/pip/_vendor/pygments/formatters/_mapping.py +++ b/src/pip/_vendor/pygments/formatters/_mapping.py @@ -1,16 +1,5 @@ -""" - pygments.formatters._mapping - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter mapping definitions. This file is generated by itself. Every time - you change something on a builtin formatter definition, run this script from - the formatters folder to update it. - - Do not alter the FORMATTERS dictionary by hand. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" +# Automatically generated by scripts/gen_mapfiles.py. +# DO NOT EDIT BY HAND; run `make mapfiles` instead. FORMATTERS = { 'BBCodeFormatter': ('pygments.formatters.bbcode', 'BBCode', ('bbcode', 'bb'), (), 'Format tokens with BBcodes. These formatting codes are used by many bulletin boards, so you can highlight your sourcecode with pygments before posting it there.'), @@ -30,55 +19,5 @@ 'Terminal256Formatter': ('pygments.formatters.terminal256', 'Terminal256', ('terminal256', 'console256', '256'), (), 'Format tokens with ANSI color sequences, for output in a 256-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'), 'TerminalFormatter': ('pygments.formatters.terminal', 'Terminal', ('terminal', 'console'), (), 'Format tokens with ANSI color sequences, for output in a text console. 
Color sequences are terminated at newlines, so that paging the output works correctly.'), 'TerminalTrueColorFormatter': ('pygments.formatters.terminal256', 'TerminalTrueColor', ('terminal16m', 'console16m', '16m'), (), 'Format tokens with ANSI color sequences, for output in a true-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'), - 'TestcaseFormatter': ('pygments.formatters.other', 'Testcase', ('testcase',), (), 'Format tokens as appropriate for a new testcase.') + 'TestcaseFormatter': ('pygments.formatters.other', 'Testcase', ('testcase',), (), 'Format tokens as appropriate for a new testcase.'), } - -if __name__ == '__main__': # pragma: no cover - import sys - import os - - # lookup formatters - found_formatters = [] - imports = [] - sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..')) - from pip._vendor.pygments.util import docstring_headline - - for root, dirs, files in os.walk('.'): - for filename in files: - if filename.endswith('.py') and not filename.startswith('_'): - module_name = 'pygments.formatters%s.%s' % ( - root[1:].replace('/', '.'), filename[:-3]) - print(module_name) - module = __import__(module_name, None, None, ['']) - for formatter_name in module.__all__: - formatter = getattr(module, formatter_name) - found_formatters.append( - '%r: %r' % (formatter_name, - (module_name, - formatter.name, - tuple(formatter.aliases), - tuple(formatter.filenames), - docstring_headline(formatter)))) - # sort them to make the diff minimal - found_formatters.sort() - - # extract useful sourcecode from this file - with open(__file__) as fp: - content = fp.read() - # replace crnl to nl for Windows. - # - # Note that, originally, contributors should keep nl of master - # repository, for example by using some kind of automatic - # management EOL, like `EolExtension - # <https://www.mercurial-scm.org/wiki/EolExtension>`. - content = content.replace("\r\n", "\n") - header = content[:content.find('FORMATTERS = {')] - footer = content[content.find("if __name__ == '__main__':"):] - - # write new file - with open(__file__, 'w') as fp: - fp.write(header) - fp.write('FORMATTERS = {\n %s\n}\n\n' % ',\n '.join(found_formatters)) - fp.write(footer) - - print ('=== %d formatters processed.' % len(found_formatters)) diff --git a/src/pip/_vendor/pygments/formatters/img.py b/src/pip/_vendor/pygments/formatters/img.py --- a/src/pip/_vendor/pygments/formatters/img.py +++ b/src/pip/_vendor/pygments/formatters/img.py @@ -206,13 +206,17 @@ def get_char_size(self): """ Get the character size. """ - return self.fonts['NORMAL'].getsize('M') + return self.get_text_size('M') def get_text_size(self, text): """ - Get the text size(width, height). + Get the text size (width, height). 
""" - return self.fonts['NORMAL'].getsize(text) + font = self.fonts['NORMAL'] + if hasattr(font, 'getbbox'): # Pillow >= 9.2.0 + return font.getbbox(text)[2:4] + else: + return font.getsize(text) def get_font(self, bold, oblique): """ @@ -520,7 +524,7 @@ def _create_drawables(self, tokensource): text_fg = self._get_text_color(style), text_bg = self._get_text_bg_color(style), ) - temp_width, temp_hight = self.fonts.get_text_size(temp) + temp_width, _ = self.fonts.get_text_size(temp) linelength += temp_width maxlinelength = max(maxlinelength, linelength) charno += len(temp) diff --git a/src/pip/_vendor/pygments/lexers/__init__.py b/src/pip/_vendor/pygments/lexers/__init__.py --- a/src/pip/_vendor/pygments/lexers/__init__.py +++ b/src/pip/_vendor/pygments/lexers/__init__.py @@ -11,7 +11,7 @@ import re import sys import types -import fnmatch +from fnmatch import fnmatch from os.path import basename from pip._vendor.pygments.lexers._mapping import LEXERS @@ -28,16 +28,6 @@ 'guess_lexer', 'load_lexer_from_file'] + list(LEXERS) + list(COMPAT) _lexer_cache = {} -_pattern_cache = {} - - -def _fn_matches(fn, glob): - """Return whether the supplied file name fn matches pattern filename.""" - if glob not in _pattern_cache: - pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob)) - return pattern.match(fn) - return _pattern_cache[glob].match(fn) - def _load_lexers(module_name): """Load a lexer (and all others in the module too).""" @@ -169,13 +159,13 @@ def find_lexer_class_for_filename(_fn, code=None): fn = basename(_fn) for modname, name, _, filenames, _ in LEXERS.values(): for filename in filenames: - if _fn_matches(fn, filename): + if fnmatch(fn, filename): if name not in _lexer_cache: _load_lexers(modname) matches.append((_lexer_cache[name], filename)) for cls in find_plugin_lexers(): for filename in cls.filenames: - if _fn_matches(fn, filename): + if fnmatch(fn, filename): matches.append((cls, filename)) if isinstance(code, bytes): @@ -262,11 +252,11 @@ def guess_lexer_for_filename(_fn, _text, **options): matching_lexers = set() for lexer in _iter_lexerclasses(): for filename in lexer.filenames: - if _fn_matches(fn, filename): + if fnmatch(fn, filename): matching_lexers.add(lexer) primary[lexer] = True for filename in lexer.alias_filenames: - if _fn_matches(fn, filename): + if fnmatch(fn, filename): matching_lexers.add(lexer) primary[lexer] = False if not matching_lexers: diff --git a/src/pip/_vendor/pygments/lexers/_mapping.py b/src/pip/_vendor/pygments/lexers/_mapping.py --- a/src/pip/_vendor/pygments/lexers/_mapping.py +++ b/src/pip/_vendor/pygments/lexers/_mapping.py @@ -1,16 +1,5 @@ -""" - pygments.lexers._mapping - ~~~~~~~~~~~~~~~~~~~~~~~~ - - Lexer mapping definitions. This file is generated by itself. Every time - you change something on a builtin lexer definition, run this script from - the lexers folder to update it. - - Do not alter the LEXERS dictionary by hand. - - :copyright: Copyright 2006-2014, 2016 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" +# Automatically generated by scripts/gen_mapfiles.py. +# DO NOT EDIT BY HAND; run `make mapfiles` instead. 
LEXERS = { 'ABAPLexer': ('pip._vendor.pygments.lexers.business', 'ABAP', ('abap',), ('*.abap', '*.ABAP'), ('text/x-abap',)), @@ -103,6 +92,7 @@ 'ColdfusionCFCLexer': ('pip._vendor.pygments.lexers.templates', 'Coldfusion CFC', ('cfc',), ('*.cfc',), ()), 'ColdfusionHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'Coldfusion HTML', ('cfm',), ('*.cfm', '*.cfml'), ('application/x-coldfusion',)), 'ColdfusionLexer': ('pip._vendor.pygments.lexers.templates', 'cfstatement', ('cfs',), (), ()), + 'Comal80Lexer': ('pip._vendor.pygments.lexers.comal', 'COMAL-80', ('comal', 'comal80'), ('*.cml', '*.comal'), ()), 'CommonLispLexer': ('pip._vendor.pygments.lexers.lisp', 'Common Lisp', ('common-lisp', 'cl', 'lisp'), ('*.cl', '*.lisp'), ('text/x-common-lisp',)), 'ComponentPascalLexer': ('pip._vendor.pygments.lexers.oberon', 'Component Pascal', ('componentpascal', 'cp'), ('*.cp', '*.cps'), ('text/x-component-pascal',)), 'CoqLexer': ('pip._vendor.pygments.lexers.theorem', 'Coq', ('coq',), ('*.v',), ('text/x-coq',)), @@ -229,6 +219,7 @@ 'IrcLogsLexer': ('pip._vendor.pygments.lexers.textfmts', 'IRC logs', ('irc',), ('*.weechatlog',), ('text/x-irclog',)), 'IsabelleLexer': ('pip._vendor.pygments.lexers.theorem', 'Isabelle', ('isabelle',), ('*.thy',), ('text/x-isabelle',)), 'JLexer': ('pip._vendor.pygments.lexers.j', 'J', ('j',), ('*.ijs',), ('text/x-j',)), + 'JMESPathLexer': ('pip._vendor.pygments.lexers.jmespath', 'JMESPath', ('jmespath', 'jp'), ('*.jp',), ()), 'JSLTLexer': ('pip._vendor.pygments.lexers.jslt', 'JSLT', ('jslt',), ('*.jslt',), ('text/x-jslt',)), 'JagsLexer': ('pip._vendor.pygments.lexers.modeling', 'JAGS', ('jags',), ('*.jag', '*.bug'), ()), 'JasminLexer': ('pip._vendor.pygments.lexers.jvm', 'Jasmin', ('jasmin', 'jasminxt'), ('*.j',), ()), @@ -462,6 +453,7 @@ 'SourcesListLexer': ('pip._vendor.pygments.lexers.installers', 'Debian Sourcelist', ('debsources', 'sourceslist', 'sources.list'), ('sources.list',), ()), 'SparqlLexer': ('pip._vendor.pygments.lexers.rdf', 'SPARQL', ('sparql',), ('*.rq', '*.sparql'), ('application/sparql-query',)), 'SpiceLexer': ('pip._vendor.pygments.lexers.spice', 'Spice', ('spice', 'spicelang'), ('*.spice',), ('text/x-spice',)), + 'SqlJinjaLexer': ('pip._vendor.pygments.lexers.templates', 'SQL+Jinja', ('sql+jinja',), ('*.sql', '*.sql.j2', '*.sql.jinja2'), ()), 'SqlLexer': ('pip._vendor.pygments.lexers.sql', 'SQL', ('sql',), ('*.sql',), ('text/x-sql',)), 'SqliteConsoleLexer': ('pip._vendor.pygments.lexers.sql', 'sqlite3con', ('sqlite3',), ('*.sqlite3-console',), ('text/x-sqlite3-console',)), 'SquidConfLexer': ('pip._vendor.pygments.lexers.configs', 'SquidConf', ('squidconf', 'squid.conf', 'squid'), ('squid.conf',), ('text/x-squidconf',)), @@ -516,7 +508,7 @@ 'VGLLexer': ('pip._vendor.pygments.lexers.dsls', 'VGL', ('vgl',), ('*.rpf',), ()), 'ValaLexer': ('pip._vendor.pygments.lexers.c_like', 'Vala', ('vala', 'vapi'), ('*.vala', '*.vapi'), ('text/x-vala',)), 'VbNetAspxLexer': ('pip._vendor.pygments.lexers.dotnet', 'aspx-vb', ('aspx-vb',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()), - 'VbNetLexer': ('pip._vendor.pygments.lexers.dotnet', 'VB.net', ('vb.net', 'vbnet'), ('*.vb', '*.bas'), ('text/x-vbnet', 'text/x-vba')), + 'VbNetLexer': ('pip._vendor.pygments.lexers.dotnet', 'VB.net', ('vb.net', 'vbnet', 'lobas', 'oobas', 'sobas'), ('*.vb', '*.bas'), ('text/x-vbnet', 'text/x-vba')), 'VelocityHtmlLexer': ('pip._vendor.pygments.lexers.templates', 'HTML+Velocity', ('html+velocity',), (), ('text/html+velocity',)), 'VelocityLexer': 
('pip._vendor.pygments.lexers.templates', 'Velocity', ('velocity',), ('*.vm', '*.fhtml'), ()), 'VelocityXmlLexer': ('pip._vendor.pygments.lexers.templates', 'XML+Velocity', ('xml+velocity',), (), ('application/xml+velocity',)), @@ -547,50 +539,3 @@ 'ZigLexer': ('pip._vendor.pygments.lexers.zig', 'Zig', ('zig',), ('*.zig',), ('text/zig',)), 'apdlexer': ('pip._vendor.pygments.lexers.apdlexer', 'ANSYS parametric design language', ('ansys', 'apdl'), ('*.ans',), ()), } - -if __name__ == '__main__': # pragma: no cover - import sys - import os - - # lookup lexers - found_lexers = [] - sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..')) - for root, dirs, files in os.walk('.'): - for filename in files: - if filename.endswith('.py') and not filename.startswith('_'): - module_name = 'pygments.lexers%s.%s' % ( - root[1:].replace('/', '.'), filename[:-3]) - print(module_name) - module = __import__(module_name, None, None, ['']) - for lexer_name in module.__all__: - lexer = getattr(module, lexer_name) - found_lexers.append( - '%r: %r' % (lexer_name, - (module_name, - lexer.name, - tuple(lexer.aliases), - tuple(lexer.filenames), - tuple(lexer.mimetypes)))) - # sort them to make the diff minimal - found_lexers.sort() - - # extract useful sourcecode from this file - with open(__file__) as fp: - content = fp.read() - # replace crnl to nl for Windows. - # - # Note that, originally, contributors should keep nl of master - # repository, for example by using some kind of automatic - # management EOL, like `EolExtension - # <https://www.mercurial-scm.org/wiki/EolExtension>`. - content = content.replace("\r\n", "\n") - header = content[:content.find('LEXERS = {')] - footer = content[content.find("if __name__ == '__main__':"):] - - # write new file - with open(__file__, 'w') as fp: - fp.write(header) - fp.write('LEXERS = {\n %s,\n}\n\n' % ',\n '.join(found_lexers)) - fp.write(footer) - - print ('=== %d lexers processed.' 
% len(found_lexers)) diff --git a/src/pip/_vendor/pygments/lexers/python.py b/src/pip/_vendor/pygments/lexers/python.py --- a/src/pip/_vendor/pygments/lexers/python.py +++ b/src/pip/_vendor/pygments/lexers/python.py @@ -142,7 +142,7 @@ def fstring_rules(ttype): combined('fstringescape', 'dqf')), ("([fF])(')", bygroups(String.Affix, String.Single), combined('fstringescape', 'sqf')), - # raw strings + # raw bytes and strings ('(?i)(rb|br|r)(""")', bygroups(String.Affix, String.Double), 'tdqs'), ("(?i)(rb|br|r)(''')", @@ -152,14 +152,24 @@ def fstring_rules(ttype): ("(?i)(rb|br|r)(')", bygroups(String.Affix, String.Single), 'sqs'), # non-raw strings - ('([uUbB]?)(""")', bygroups(String.Affix, String.Double), + ('([uU]?)(""")', bygroups(String.Affix, String.Double), combined('stringescape', 'tdqs')), - ("([uUbB]?)(''')", bygroups(String.Affix, String.Single), + ("([uU]?)(''')", bygroups(String.Affix, String.Single), combined('stringescape', 'tsqs')), - ('([uUbB]?)(")', bygroups(String.Affix, String.Double), + ('([uU]?)(")', bygroups(String.Affix, String.Double), combined('stringescape', 'dqs')), - ("([uUbB]?)(')", bygroups(String.Affix, String.Single), + ("([uU]?)(')", bygroups(String.Affix, String.Single), combined('stringescape', 'sqs')), + # non-raw bytes + ('([bB])(""")', bygroups(String.Affix, String.Double), + combined('bytesescape', 'tdqs')), + ("([bB])(''')", bygroups(String.Affix, String.Single), + combined('bytesescape', 'tsqs')), + ('([bB])(")', bygroups(String.Affix, String.Double), + combined('bytesescape', 'dqs')), + ("([bB])(')", bygroups(String.Affix, String.Single), + combined('bytesescape', 'sqs')), + (r'[^\S\n]+', Text), include('numbers'), (r'!=|==|<<|>>|:=|[-~+/*%=<>&^|.]', Operator), @@ -343,9 +353,12 @@ def fstring_rules(ttype): include('rfstringescape'), include('stringescape'), ], + 'bytesescape': [ + (r'\\([\\abfnrtv"\']|\n|x[a-fA-F0-9]{2}|[0-7]{1,3})', String.Escape) + ], 'stringescape': [ - (r'\\([\\abfnrtv"\']|\n|N\{.*?\}|u[a-fA-F0-9]{4}|' - r'U[a-fA-F0-9]{8}|x[a-fA-F0-9]{2}|[0-7]{1,3})', String.Escape) + (r'\\(N\{.*?\}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8})', String.Escape), + include('bytesescape') ], 'fstrings-single': fstring_rules(String.Single), 'fstrings-double': fstring_rules(String.Double), diff --git a/src/pip/_vendor/pygments/plugin.py b/src/pip/_vendor/pygments/plugin.py --- a/src/pip/_vendor/pygments/plugin.py +++ b/src/pip/_vendor/pygments/plugin.py @@ -2,9 +2,12 @@ pygments.plugin ~~~~~~~~~~~~~~~ - Pygments setuptools plugin interface. The methods defined - here also work if setuptools isn't installed but they just - return nothing. + Pygments plugin interface. By default, this tries to use + ``importlib.metadata``, which is in the Python standard + library since Python 3.8, or its ``importlib_metadata`` + backport for earlier versions of Python. It falls back on + ``pkg_resources`` if not found. Finally, if ``pkg_resources`` + is not found either, no plugins are loaded at all. lexer plugins:: @@ -34,6 +37,7 @@ :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" + LEXER_ENTRY_POINT = 'pygments.lexers' FORMATTER_ENTRY_POINT = 'pygments.formatters' STYLE_ENTRY_POINT = 'pygments.styles' @@ -42,11 +46,26 @@ def iter_entry_points(group_name): try: - from pip._vendor import pkg_resources - except (ImportError, OSError): - return [] - - return pkg_resources.iter_entry_points(group_name) + from importlib.metadata import entry_points + except ImportError: + try: + from importlib_metadata import entry_points + except ImportError: + try: + from pip._vendor.pkg_resources import iter_entry_points + except (ImportError, OSError): + return [] + else: + return iter_entry_points(group_name) + groups = entry_points() + if hasattr(groups, 'select'): + # New interface in Python 3.10 and newer versions of the + # importlib_metadata backport. + return groups.select(group=group_name) + else: + # Older interface, deprecated in Python 3.10 and recent + # importlib_metadata, but we need it in Python 3.8 and 3.9. + return groups.get(group_name, []) def find_plugin_lexers(): diff --git a/src/pip/_vendor/pygments/styles/__init__.py b/src/pip/_vendor/pygments/styles/__init__.py --- a/src/pip/_vendor/pygments/styles/__init__.py +++ b/src/pip/_vendor/pygments/styles/__init__.py @@ -48,6 +48,7 @@ 'solarized-dark': 'solarized::SolarizedDarkStyle', 'solarized-light': 'solarized::SolarizedLightStyle', 'sas': 'sas::SasStyle', + 'staroffice' : 'staroffice::StarofficeStyle', 'stata': 'stata_light::StataLightStyle', 'stata-light': 'stata_light::StataLightStyle', 'stata-dark': 'stata_dark::StataDarkStyle', @@ -58,6 +59,9 @@ 'dracula': 'dracula::DraculaStyle', 'one-dark': 'onedark::OneDarkStyle', 'lilypond' : 'lilypond::LilyPondStyle', + 'nord': 'nord::NordStyle', + 'nord-darker': 'nord::NordDarkerStyle', + 'github-dark': 'gh_dark::GhDarkStyle' } diff --git a/src/pip/_vendor/pygments/token.py b/src/pip/_vendor/pygments/token.py --- a/src/pip/_vendor/pygments/token.py +++ b/src/pip/_vendor/pygments/token.py @@ -189,6 +189,7 @@ def string_to_tokentype(s): Operator.Word: 'ow', Punctuation: 'p', + Punctuation.Marker: 'pm', Comment: 'c', Comment.Hashbang: 'ch', diff --git a/src/pip/_vendor/typing_extensions.py b/src/pip/_vendor/typing_extensions.py --- a/src/pip/_vendor/typing_extensions.py +++ b/src/pip/_vendor/typing_extensions.py @@ -8,9 +8,9 @@ import typing -# Please keep __all__ alphabetized within each category. __all__ = [ # Super-special typing primitives. + 'Any', 'ClassVar', 'Concatenate', 'Final', @@ -20,6 +20,7 @@ 'ParamSpecKwargs', 'Self', 'Type', + 'TypeVar', 'TypeVarTuple', 'Unpack', @@ -60,6 +61,7 @@ 'Literal', 'NewType', 'overload', + 'override', 'Protocol', 'reveal_type', 'runtime', @@ -149,6 +151,37 @@ def _collect_type_vars(types, typevar_types=None): T_co = typing.TypeVar('T_co', covariant=True) # Any type covariant containers. T_contra = typing.TypeVar('T_contra', contravariant=True) # Ditto contravariant. + +if sys.version_info >= (3, 11): + from typing import Any +else: + + class _AnyMeta(type): + def __instancecheck__(self, obj): + if self is Any: + raise TypeError("typing_extensions.Any cannot be used with isinstance()") + return super().__instancecheck__(obj) + + def __repr__(self): + if self is Any: + return "typing_extensions.Any" + return super().__repr__() + + class Any(metaclass=_AnyMeta): + """Special type indicating an unconstrained type. + - Any is compatible with every type. + - Any assumed to have all methods. + - All values assumed to be instances of Any. + Note that all the above statements are true from the point of view of + static type checkers. 
At runtime, Any should not be used with instance + checks. + """ + def __new__(cls, *args, **kwargs): + if cls is Any: + raise TypeError("Any cannot be instantiated") + return super().__new__(cls, *args, **kwargs) + + ClassVar = typing.ClassVar # On older versions of typing there is an internal class named "Final". @@ -431,7 +464,7 @@ def _no_init(self, *args, **kwargs): if type(self)._is_protocol: raise TypeError('Protocols cannot be instantiated') - class _ProtocolMeta(abc.ABCMeta): + class _ProtocolMeta(abc.ABCMeta): # noqa: B024 # This metaclass is a bit unfortunate and exists only because of the lack # of __instancehook__. def __instancecheck__(cls, instance): @@ -1115,6 +1148,44 @@ def __repr__(self): above.""") +class _DefaultMixin: + """Mixin for TypeVarLike defaults.""" + + __slots__ = () + + def __init__(self, default): + if isinstance(default, (tuple, list)): + self.__default__ = tuple((typing._type_check(d, "Default must be a type") + for d in default)) + elif default: + self.__default__ = typing._type_check(default, "Default must be a type") + else: + self.__default__ = None + + +# Add default and infer_variance parameters from PEP 696 and 695 +class TypeVar(typing.TypeVar, _DefaultMixin, _root=True): + """Type variable.""" + + __module__ = 'typing' + + def __init__(self, name, *constraints, bound=None, + covariant=False, contravariant=False, + default=None, infer_variance=False): + super().__init__(name, *constraints, bound=bound, covariant=covariant, + contravariant=contravariant) + _DefaultMixin.__init__(self, default) + self.__infer_variance__ = infer_variance + + # for pickling: + try: + def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') + except (AttributeError, ValueError): + def_mod = None + if def_mod != 'typing_extensions': + self.__module__ = def_mod + + # Python 3.10+ has PEP 612 if hasattr(typing, 'ParamSpecArgs'): ParamSpecArgs = typing.ParamSpecArgs @@ -1179,12 +1250,32 @@ def __eq__(self, other): # 3.10+ if hasattr(typing, 'ParamSpec'): - ParamSpec = typing.ParamSpec + + # Add default Parameter - PEP 696 + class ParamSpec(typing.ParamSpec, _DefaultMixin, _root=True): + """Parameter specification variable.""" + + __module__ = 'typing' + + def __init__(self, name, *, bound=None, covariant=False, contravariant=False, + default=None): + super().__init__(name, bound=bound, covariant=covariant, + contravariant=contravariant) + _DefaultMixin.__init__(self, default) + + # for pickling: + try: + def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') + except (AttributeError, ValueError): + def_mod = None + if def_mod != 'typing_extensions': + self.__module__ = def_mod + # 3.7-3.9 else: # Inherits from list as a workaround for Callable checks in Python < 3.9.2. - class ParamSpec(list): + class ParamSpec(list, _DefaultMixin): """Parameter specification variable. 
Usage:: @@ -1242,7 +1333,8 @@ def args(self): def kwargs(self): return ParamSpecKwargs(self) - def __init__(self, name, *, bound=None, covariant=False, contravariant=False): + def __init__(self, name, *, bound=None, covariant=False, contravariant=False, + default=None): super().__init__([self]) self.__name__ = name self.__covariant__ = bool(covariant) @@ -1251,6 +1343,7 @@ def __init__(self, name, *, bound=None, covariant=False, contravariant=False): self.__bound__ = typing._type_check(bound, 'Bound must be a type.') else: self.__bound__ = None + _DefaultMixin.__init__(self, default) # for pickling: try: @@ -1752,9 +1845,25 @@ def _is_unpack(obj): if hasattr(typing, "TypeVarTuple"): # 3.11+ - TypeVarTuple = typing.TypeVarTuple + + # Add default Parameter - PEP 696 + class TypeVarTuple(typing.TypeVarTuple, _DefaultMixin, _root=True): + """Type variable tuple.""" + + def __init__(self, name, *, default=None): + super().__init__(name) + _DefaultMixin.__init__(self, default) + + # for pickling: + try: + def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') + except (AttributeError, ValueError): + def_mod = None + if def_mod != 'typing_extensions': + self.__module__ = def_mod + else: - class TypeVarTuple: + class TypeVarTuple(_DefaultMixin): """Type variable tuple. Usage:: @@ -1804,8 +1913,9 @@ def get_shape(self) -> Tuple[*Ts]: def __iter__(self): yield self.__unpacked__ - def __init__(self, name): + def __init__(self, name, *, default=None): self.__name__ = name + _DefaultMixin.__init__(self, default) # for pickling: try: @@ -1968,6 +2078,36 @@ def decorator(cls_or_fn): return decorator +if hasattr(typing, "override"): + override = typing.override +else: + _F = typing.TypeVar("_F", bound=typing.Callable[..., typing.Any]) + + def override(__arg: _F) -> _F: + """Indicate that a method is intended to override a method in a base class. + + Usage: + + class Base: + def method(self) -> None: ... + pass + + class Child(Base): + @override + def method(self) -> None: + super().method() + + When this decorator is applied to a method, the type checker will + validate that it overrides a method with the same name on a base class. + This helps prevent bugs that may occur when a base class is changed + without an equivalent change to a child class. + + See PEP 698 for details. + + """ + return __arg + + # We have to do some monkey patching to deal with the dual nature of # Unpack/TypeVarTuple: # - We want Unpack to be a kind of TypeVar so it gets accepted in diff --git a/src/pip/_vendor/urllib3/__init__.py b/src/pip/_vendor/urllib3/__init__.py --- a/src/pip/_vendor/urllib3/__init__.py +++ b/src/pip/_vendor/urllib3/__init__.py @@ -19,6 +19,23 @@ from .util.timeout import Timeout from .util.url import get_host +# === NOTE TO REPACKAGERS AND VENDORS === +# Please delete this block, this logic is only +# for urllib3 being distributed via PyPI. +# See: https://github.com/urllib3/urllib3/issues/2680 +try: + import urllib3_secure_extra # type: ignore # noqa: F401 +except ImportError: + pass +else: + warnings.warn( + "'urllib3[secure]' extra is deprecated and will be removed " + "in a future release of urllib3 2.x. 
Read more in this issue: " + "https://github.com/urllib3/urllib3/issues/2680", + category=DeprecationWarning, + stacklevel=2, + ) + __author__ = "Andrey Petrov ([email protected])" __license__ = "MIT" __version__ = __version__ diff --git a/src/pip/_vendor/urllib3/_version.py b/src/pip/_vendor/urllib3/_version.py --- a/src/pip/_vendor/urllib3/_version.py +++ b/src/pip/_vendor/urllib3/_version.py @@ -1,2 +1,2 @@ # This file is protected via CODEOWNERS -__version__ = "1.26.10" +__version__ = "1.26.12" diff --git a/src/pip/_vendor/urllib3/contrib/pyopenssl.py b/src/pip/_vendor/urllib3/contrib/pyopenssl.py --- a/src/pip/_vendor/urllib3/contrib/pyopenssl.py +++ b/src/pip/_vendor/urllib3/contrib/pyopenssl.py @@ -73,11 +73,20 @@ class UnsupportedExtension(Exception): import logging import ssl import sys +import warnings from .. import util from ..packages import six from ..util.ssl_ import PROTOCOL_TLS_CLIENT +warnings.warn( + "'urllib3.contrib.pyopenssl' module is deprecated and will be removed " + "in a future release of urllib3 2.x. Read more in this issue: " + "https://github.com/urllib3/urllib3/issues/2680", + category=DeprecationWarning, + stacklevel=2, +) + __all__ = ["inject_into_urllib3", "extract_from_urllib3"] # SNI always works. diff --git a/src/pip/_vendor/urllib3/response.py b/src/pip/_vendor/urllib3/response.py --- a/src/pip/_vendor/urllib3/response.py +++ b/src/pip/_vendor/urllib3/response.py @@ -2,6 +2,7 @@ import io import logging +import sys import zlib from contextlib import contextmanager from socket import error as SocketError @@ -9,6 +10,7 @@ brotli = None +from . import util from ._collections import HTTPHeaderDict from .connection import BaseSSLError, HTTPException from .exceptions import ( @@ -475,6 +477,54 @@ def _error_catcher(self): if self._original_response and self._original_response.isclosed(): self.release_conn() + def _fp_read(self, amt): + """ + Read a response with the thought that reading the number of bytes + larger than can fit in a 32-bit int at a time via SSL in some + known cases leads to an overflow error that has to be prevented + if `amt` or `self.length_remaining` indicate that a problem may + happen. + + The known cases: + * 3.8 <= CPython < 3.9.7 because of a bug + https://github.com/urllib3/urllib3/issues/2513#issuecomment-1152559900. + * urllib3 injected with pyOpenSSL-backed SSL-support. + * CPython < 3.10 only when `amt` does not fit 32-bit int. + """ + assert self._fp + c_int_max = 2 ** 31 - 1 + if ( + ( + (amt and amt > c_int_max) + or (self.length_remaining and self.length_remaining > c_int_max) + ) + and not util.IS_SECURETRANSPORT + and (util.IS_PYOPENSSL or sys.version_info < (3, 10)) + ): + buffer = io.BytesIO() + # Besides `max_chunk_amt` being a maximum chunk size, it + # affects memory overhead of reading a response by this + # method in CPython. + # `c_int_max` equal to 2 GiB - 1 byte is the actual maximum + # chunk size that does not lead to an overflow error, but + # 256 MiB is a compromise. + max_chunk_amt = 2 ** 28 + while amt is None or amt != 0: + if amt is not None: + chunk_amt = min(amt, max_chunk_amt) + amt -= chunk_amt + else: + chunk_amt = max_chunk_amt + data = self._fp.read(chunk_amt) + if not data: + break + buffer.write(data) + del data # to reduce peak memory usage by `max_chunk_amt`. 
+ return buffer.getvalue() + else: + # StringIO doesn't like amt=None + return self._fp.read(amt) if amt is not None else self._fp.read() + def read(self, amt=None, decode_content=None, cache_content=False): """ Similar to :meth:`http.client.HTTPResponse.read`, but with two additional @@ -507,13 +557,11 @@ def read(self, amt=None, decode_content=None, cache_content=False): fp_closed = getattr(self._fp, "closed", False) with self._error_catcher(): + data = self._fp_read(amt) if not fp_closed else b"" if amt is None: - # cStringIO doesn't like amt=None - data = self._fp.read() if not fp_closed else b"" flush_decoder = True else: cache_content = False - data = self._fp.read(amt) if not fp_closed else b"" if ( amt != 0 and not data ): # Platform-specific: Buggy versions of Python. </patch>
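The urllib3 change at the end of this patch (`_fp_read`) works around a 32-bit integer overflow in some SSL read paths by never requesting more than 256 MiB from the underlying file object in a single call. A minimal, standalone sketch of that chunked-read pattern — the function name and defaults below are illustrative assumptions, not urllib3 API:

```python
import io

def read_in_chunks(fp, amt=None, max_chunk=2 ** 28):
    """Read up to `amt` bytes (or everything if None) in <= 256 MiB chunks."""
    buffer = io.BytesIO()
    while amt is None or amt != 0:
        # Cap each low-level read so the requested size always fits in a
        # signed 32-bit int, mirroring the bounded loop in the patch above.
        chunk = max_chunk if amt is None else min(amt, max_chunk)
        if amt is not None:
            amt -= chunk
        data = fp.read(chunk)
        if not data:
            break
        buffer.write(data)
    return buffer.getvalue()

# e.g. read_in_chunks(io.BytesIO(b"payload")) == b"payload"
```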
[]
[]
pantsbuild__pants-4773
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Local publish.jar should succeed without an SCM repo around it I have a pants project that for reasons I won't go into doesn't have a git (or any other) repo around it. I tried to run `publish.jar` and `--publish-jar-local=/some/folder` and it crashed with the following message: ``` 15:26:13 00:23 [publish] Waiting for background workers to finish. 15:26:13 00:23 [complete] FAILUREException caught: (<type 'exceptions.AttributeError'>) Exception message: 'NoneType' object has no attribute 'branch_name' ``` The rest of the build ran fine up until the publish step. According to @jsirois this is expected but not really desirable behavior, so I'm filing this issue 😄 </issue> <code> [start of README.md] 1 # Pants Build System 2 3 Pants is a build system for software projects in a variety of languages. 4 It works particularly well for a source code repository that contains 5 many distinct projects. 6 7 Friendly documentation: http://www.pantsbuild.org/ 8 9 We release to [PyPI](https://pypi.python.org/pypi) 10 [![version](https://img.shields.io/pypi/v/pantsbuild.pants.svg)](https://pypi.python.org/pypi/pantsbuild.pants) 11 [![license](https://img.shields.io/pypi/l/pantsbuild.pants.svg)](https://pypi.python.org/pypi/pantsbuild.pants) 12 13 We use [Travis CI](https://travis-ci.org) to verify the build 14 [![Build Status](https://travis-ci.org/pantsbuild/pants.svg?branch=master)](https://travis-ci.org/pantsbuild/pants/branches). 15 16 We use [Coveralls](https://coveralls.io) to monitor test coverage 17 [![Coverage Status](https://coveralls.io/repos/pantsbuild/pants/badge.png?branch=master)](https://coveralls.io/r/pantsbuild/pants). 18 19 # Requirements 20 21 At a minimum, pants requires the following to run properly: 22 23 * Linux or Mac OS X 24 * Python 2.7.x (the latest stable version of 2.7 is recommended) 25 * A C compiler, system headers, Python headers (to compile native Python modules) and the libffi 26 library and headers (to compile and link modules that use CFFI to access native code). 27 * Internet access (so that pants can fully bootstrap itself) 28 29 Additionally, if you use the jvm backend to work with java or scala code (installed by default): 30 31 * OpenJDK or Oracle JDK version 7 or greater 32 [end of README.md] [start of src/python/pants/backend/jvm/tasks/jar_publish.py] 1 # coding=utf-8 2 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 
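# ---------------------------------------------------------------------------
# Editor's note (not part of the original jar_publish.py listing): the issue
# above reports "'NoneType' object has no attribute 'branch_name'" when
# publishing with --publish-jar-local in a workspace that has no SCM repo.
# That is consistent with get_scm() (assigned to self.scm in __init__ below)
# returning None and a later SCM check dereferencing self.scm unconditionally.
# A guard of roughly this shape -- a sketch under those assumptions, not the
# project's actual fix -- would let a local publish proceed without a repo:
#
#     if self.scm is None:
#         self.log.warn('No SCM repo detected; skipping SCM checks.')
#     else:
#         self.check_clean_master(commit=(not self.dryrun and self.commit))
# ---------------------------------------------------------------------------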
4 5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function, 6 unicode_literals, with_statement) 7 8 import functools 9 import getpass 10 import hashlib 11 import os 12 import pkgutil 13 import shutil 14 import sys 15 from collections import OrderedDict, defaultdict, namedtuple 16 from copy import copy 17 18 from twitter.common.collections import OrderedSet 19 20 from pants.backend.jvm.ossrh_publication_metadata import OSSRHPublicationMetadata 21 from pants.backend.jvm.targets.jarable import Jarable 22 from pants.backend.jvm.targets.scala_library import ScalaLibrary 23 from pants.backend.jvm.tasks.jar_task import JarTask 24 from pants.backend.jvm.tasks.properties import Properties 25 from pants.base.build_environment import get_buildroot, get_scm 26 from pants.base.build_file import BuildFile 27 from pants.base.exceptions import TaskError 28 from pants.base.generator import Generator, TemplateData 29 from pants.build_graph.address import Address 30 from pants.build_graph.address_lookup_error import AddressLookupError 31 from pants.build_graph.build_file_parser import BuildFileParser 32 from pants.build_graph.build_graph import sort_targets 33 from pants.ivy.bootstrapper import Bootstrapper 34 from pants.ivy.ivy import Ivy 35 from pants.task.scm_publish_mixin import Namedver, ScmPublishMixin, Semver 36 from pants.util.dirutil import safe_mkdir, safe_open, safe_rmtree 37 from pants.util.strutil import ensure_text 38 39 40 _TEMPLATES_RELPATH = os.path.join('templates', 'jar_publish') 41 42 43 class PushDb(object): 44 45 @staticmethod 46 def load(path): 47 """Loads a pushdb maintained in a properties file at the given path.""" 48 with open(path, 'r') as props: 49 properties = Properties.load(props) 50 return PushDb(properties) 51 52 class Entry(object): 53 54 def __init__(self, sem_ver, named_ver, named_is_latest, sha, fingerprint): 55 """Records the most recent push/release of an artifact. 56 57 :param Semver sem_ver: The last semantically versioned release (or Semver(0.0.0)) 58 :param Namedver named_ver: The last named release of this entry (or None) 59 :param boolean named_is_latest: True if named_ver is the latest, false if sem_ver is 60 :param string sha: The last Git SHA (or None) 61 :param string fingerprint: A unique hash for the most recent version of the target. 
62 """ 63 self.sem_ver = sem_ver 64 self.named_ver = named_ver 65 self.named_is_latest = named_is_latest 66 self.sha = sha 67 self.fingerprint = fingerprint 68 69 def version(self): 70 if self.named_is_latest: 71 return self.named_ver 72 else: 73 return self.sem_ver 74 75 def with_sem_ver(self, sem_ver): 76 """Returns a clone of this entry with the given sem_ver marked as the latest.""" 77 return PushDb.Entry(sem_ver, self.named_ver, False, self.sha, self.fingerprint) 78 79 def with_named_ver(self, named_ver): 80 """Returns a clone of this entry with the given name_ver marked as the latest.""" 81 return PushDb.Entry(self.sem_ver, named_ver, True, self.sha, self.fingerprint) 82 83 def with_sha_and_fingerprint(self, sha, fingerprint): 84 """Returns a clone of this entry with the given sha and fingerprint.""" 85 return PushDb.Entry(self.sem_ver, self.named_ver, self.named_is_latest, sha, fingerprint) 86 87 def __repr__(self): 88 return '<{}, {}, {}, {}, {}, {}>'.format( 89 self.__class__.__name__, self.sem_ver, self.named_ver, self.named_is_latest, 90 self.sha, self.fingerprint) 91 92 def __init__(self, props=None): 93 self._props = props or OrderedDict() 94 95 def get_entry(self, target): 96 """Given an internal target, return a PushDb.Entry, which might contain defaults.""" 97 db_get, _ = self._accessors_for_target(target) 98 99 major = int(db_get('revision.major', '0')) 100 minor = int(db_get('revision.minor', '0')) 101 patch = int(db_get('revision.patch', '0')) 102 snapshot = str(db_get('revision.snapshot', 'false')).lower() == 'true' 103 named_version = db_get('revision.named_version', None) 104 named_is_latest = str(db_get('revision.named_is_latest', 'false')).lower() == 'true' 105 sha = db_get('revision.sha', None) 106 fingerprint = db_get('revision.fingerprint', None) 107 sem_ver = Semver(major, minor, patch, snapshot=snapshot) 108 named_ver = Namedver(named_version) if named_version else None 109 return self.Entry(sem_ver, named_ver, named_is_latest, sha, fingerprint) 110 111 def set_entry(self, target, pushdb_entry): 112 pe = pushdb_entry 113 _, db_set = self._accessors_for_target(target) 114 db_set('revision.major', pe.sem_ver.major) 115 db_set('revision.minor', pe.sem_ver.minor) 116 db_set('revision.patch', pe.sem_ver.patch) 117 db_set('revision.snapshot', str(pe.sem_ver.snapshot).lower()) 118 if pe.named_ver: 119 db_set('revision.named_version', pe.named_ver.version()) 120 db_set('revision.named_is_latest', str(pe.named_is_latest).lower()) 121 db_set('revision.sha', pe.sha) 122 db_set('revision.fingerprint', pe.fingerprint) 123 124 def _accessors_for_target(self, target): 125 jar_dep, exported = target.get_artifact_info() 126 if not exported: 127 raise ValueError 128 129 def key(prefix): 130 return '{}.{}%{}'.format(prefix, jar_dep.org, jar_dep.name) 131 132 def getter(prefix, default=None): 133 return self._props.get(key(prefix), default) 134 135 def setter(prefix, value): 136 self._props[key(prefix)] = value 137 138 return getter, setter 139 140 def dump(self, path): 141 """Saves the pushdb as a properties file to the given path.""" 142 with open(path, 'w') as props: 143 Properties.dump(self._props, props) 144 145 146 class PomWriter(object): 147 def __init__(self, get_db, tag): 148 self._get_db = get_db 149 self._tag = tag 150 151 def write(self, target, path): 152 dependencies = OrderedDict() 153 for internal_dep in target_internal_dependencies(target): 154 jar = self._as_versioned_jar(internal_dep) 155 key = (jar.org, jar.name) 156 dependencies[key] = self._internaldep(jar, 
internal_dep) 157 158 for jar in target.jar_dependencies: 159 jardep = self._jardep(jar) 160 if jardep: 161 key = (jar.org, jar.name, jar.classifier) 162 dependencies[key] = jardep 163 164 target_jar = self._internaldep(self._as_versioned_jar(target), target) 165 if target_jar: 166 target_jar = target_jar.extend(dependencies=dependencies.values()) 167 168 template_relpath = os.path.join(_TEMPLATES_RELPATH, 'pom.xml.mustache') 169 template_text = pkgutil.get_data(__name__, template_relpath) 170 generator = Generator(template_text, project=target_jar) 171 with safe_open(path, 'w') as output: 172 generator.write(output) 173 174 def _as_versioned_jar(self, internal_target): 175 """Fetches the jar representation of the given target, and applies the latest pushdb version.""" 176 jar, _ = internal_target.get_artifact_info() 177 pushdb_entry = self._get_db(internal_target).get_entry(internal_target) 178 return jar.copy(rev=pushdb_entry.version().version()) 179 180 def _internaldep(self, jar_dependency, target): 181 template_data = self._jardep(jar_dependency) 182 if isinstance(target.provides.publication_metadata, OSSRHPublicationMetadata): 183 pom = target.provides.publication_metadata 184 185 # Forming the project name from the coordinates like this is acceptable as a fallback when 186 # the user supplies no project name. 187 # See: http://central.sonatype.org/pages/requirements.html#project-name-description-and-url 188 name = pom.name or '{}:{}'.format(jar_dependency.org, jar_dependency.name) 189 190 template_data = template_data.extend(name=name, 191 description=pom.description, 192 url=pom.url, 193 licenses=pom.licenses, 194 scm=pom.scm.tagged(self._tag), 195 developers=pom.developers) 196 return template_data 197 198 def _jardep(self, jar): 199 return TemplateData( 200 classifier=jar.classifier, 201 artifact_id=jar.name, 202 group_id=jar.org, 203 version=jar.rev, 204 scope='compile', 205 excludes=[TemplateData(org=exclude.org, name=exclude.name) 206 for exclude in jar.excludes if exclude.name]) 207 208 209 def coordinate(org, name, rev=None): 210 return '{}#{};{}'.format(org, name, rev) if rev else '{}#{}'.format(org, name) 211 212 213 def jar_coordinate(jar, rev=None): 214 return coordinate(jar.org, jar.name, rev or jar.rev) 215 216 217 def pushdb_coordinate(jar, entry): 218 return jar_coordinate(jar, rev=entry.version().version()) 219 220 221 def target_internal_dependencies(target): 222 """Returns internal Jarable dependencies that were "directly" declared. 223 224 Directly declared deps are those that are explicitly listed in the definition of a 225 target, rather than being depended on transitively. But in order to walk through 226 aggregator targets such as `target`, `dependencies`, or `jar_library`, this recursively 227 descends the dep graph and stops at Jarable instances.""" 228 for dep in target.dependencies: 229 if isinstance(dep, Jarable): 230 yield dep 231 else: 232 for childdep in target_internal_dependencies(dep): 233 yield childdep 234 235 236 class JarPublish(ScmPublishMixin, JarTask): 237 """Publish jars to a maven repository. 238 239 At a high-level, pants uses `Apache Ivy <http://ant.apache.org/ivy/>`_ to 240 publish artifacts to Maven-style repositories. Pants performs prerequisite 241 tasks like compiling, creating jars, and generating ``pom.xml`` files then 242 invokes Ivy to actually publish the artifacts, so publishing is largely 243 configured in ``ivysettings.xml``. 
``BUILD`` and ``pants.ini`` files 244 primarily provide linkage between publishable targets and the 245 Ivy ``resolvers`` used to publish them. 246 247 The following target types are publishable: 248 `java_library <build_dictionary.html#java_library>`_, 249 `scala_library <build_dictionary.html#scala_library>`_, 250 `java_thrift_library <build_dictionary.html#java_thrift_library>`_, 251 `annotation_processor <build_dictionary.html#annotation_processor>`_. 252 Targets to publish and their dependencies must be publishable target 253 types and specify the ``provides`` argument. One exception is 254 `jar <build_dictionary.html#jar>`_\s - pants will generate a pom file that 255 depends on the already-published jar. 256 257 Example usage: :: 258 259 # By default pants will perform a dry-run. 260 ./pants clean-all publish src/java/com/twitter/mybird 261 262 # Actually publish. 263 ./pants clean-all publish src/java/com/twitter/mybird --no-publish-dryrun 264 265 Please see ``./pants publish -h`` for a detailed description of all 266 publishing options. 267 268 Publishing can be configured with the following options: 269 270 * ``--repos`` - Required dictionary of settings for repos that may be pushed to. 271 * ``--jvm-options`` - Optional list of JVM command-line args when invoking Ivy. 272 * ``--restrict-push-branches`` - Optional list of branches to restrict publishing to. 273 274 Example repos dictionary: :: 275 276 repos = { 277 # repository target name is paired with this key 278 'myrepo': { 279 # ivysettings.xml resolver to use for publishing 280 'resolver': 'maven.example.com', 281 # address of a Credentials target to use when publishing 282 'auth': 'address/of/credentials:target', 283 # help message if unable to initialize the Credentials target. 284 'help': 'Please check your credentials and try again.', 285 }, 286 } 287 """ 288 289 class Publication(namedtuple('Publication', ['name', 'classifier', 'ext'])): 290 """Represents an artifact publication. 291 292 There will be at least 2 of these for any given published coordinate - a pom, and at least one 293 other artifact. 294 """ 295 296 class DuplicateArtifactError(TaskError): 297 """An artifact was defined by two different targets.""" 298 299 @classmethod 300 def register_options(cls, register): 301 super(JarPublish, cls).register_options(register) 302 303 # TODO(John Sirois): Support a preview mode that outputs a file with entries like: 304 # artifact id: 305 # revision: 306 # publish: (true|false) 307 # changelog: 308 # 309 # Allow re-running this goal with the file as input to support forcing an arbitrary set of 310 # revisions and supply of hand edited changelogs. 311 312 register('--dryrun', default=True, type=bool, 313 help='Run through a push without actually pushing artifacts, editing publish dbs or ' 314 'otherwise writing data') 315 register('--commit', default=True, type=bool, 316 help='Commit the push db. Turn off for local testing.') 317 register('--local', metavar='<PATH>', 318 help='Publish jars to a maven repository on the local filesystem at this path.') 319 register('--local-snapshot', default=True, type=bool, 320 help='If --local is specified, publishes jars with -SNAPSHOT revision suffixes.') 321 register('--named-snapshot', default=None, 322 help='Publish all artifacts with the given snapshot name, replacing their version. 
' 323 'This is not Semantic Versioning compatible, but is easier to consume in cases ' 324 'where many artifacts must align.') 325 register('--transitive', default=True, type=bool, 326 help='Publish the specified targets and all their internal dependencies transitively.') 327 register('--force', type=bool, 328 help='Force pushing jars even if there have been no changes since the last push.') 329 register('--override', type=list, 330 help='Specifies a published jar revision override in the form: ' 331 '([org]#[name]|[target spec])=[new revision] ' 332 'For example, to specify 2 overrides: ' 333 '--override=com.foo.bar#baz=0.1.2 --override=src/java/com/foo/bar/qux=1.0.0') 334 register('--restart-at', 335 help='Restart a fail push at the given jar. Jars can be identified by ' 336 'maven coordinate [org]#[name] or target. ' 337 'For example: --restart-at=com.twitter.common#quantity ' 338 'Or: --restart-at=src/java/com/twitter/common/base') 339 register('--ivy_settings', advanced=True, default=None, 340 help='Specify a custom ivysettings.xml file to be used when publishing.') 341 register('--repos', advanced=True, type=dict, 342 help='Settings for repositories that can be pushed to. See ' 343 'https://pantsbuild.org/publish.html for details.') 344 register('--publish-extras', advanced=True, type=dict, 345 help='Extra products to publish. See ' 346 'https://pantsbuild.org/dev_tasks_publish_extras.html for details.') 347 register('--individual-plugins', advanced=True, type=bool, 348 help='Extra products to publish as a individual artifact.') 349 register('--push-postscript', advanced=True, default=None, 350 help='A post-script to add to pushdb commit messages and push tag commit messages.') 351 register('--changelog', default=True, type=bool, 352 help='A changelog.txt file will be created and printed to the console for each ' 353 'artifact published') 354 register('--prompt', default=True, type=bool, 355 help='Interactively prompt user before publishing each artifact.') 356 357 @classmethod 358 def prepare(cls, options, round_manager): 359 super(JarPublish, cls).prepare(options, round_manager) 360 round_manager.require('jars') 361 round_manager.require('javadoc') 362 round_manager.require('scaladoc') 363 364 def __init__(self, *args, **kwargs): 365 super(JarPublish, self).__init__(*args, **kwargs) 366 self.cachedir = os.path.join(self.workdir, 'cache') 367 368 self._jvm_options = self.get_options().jvm_options 369 370 self.scm = get_scm() 371 self.log = self.context.log 372 373 if self.get_options().local: 374 local_repo = dict( 375 resolver='publish_local', 376 path=os.path.abspath(os.path.expanduser(self.get_options().local)), 377 confs=['default'], 378 auth=None 379 ) 380 self.repos = defaultdict(lambda: local_repo) 381 self.commit = False 382 self.local_snapshot = self.get_options().local_snapshot 383 else: 384 self.repos = self.get_options().repos 385 if not self.repos: 386 raise TaskError( 387 "This repo is not configured to publish externally! 
Please configure per\n" 388 "http://pantsbuild.org/publish.html#authenticating-to-the-artifact-repository,\n" 389 "by setting --publish-jar-repos=<dict> or re-run with '--publish-jar-local=<dir>'.") 390 for repo, data in self.repos.items(): 391 auth = data.get('auth') 392 if auth: 393 credentials = next(iter(self.context.resolve(auth))) 394 user = credentials.username(data['resolver']) 395 password = credentials.password(data['resolver']) 396 self.context.log.debug('Found auth for repo={} user={}'.format(repo, user)) 397 self.repos[repo]['username'] = user 398 self.repos[repo]['password'] = password 399 self.commit = self.get_options().commit 400 self.push_postscript = self.get_options().push_postscript or '' 401 self.local_snapshot = False 402 403 self.named_snapshot = self.get_options().named_snapshot 404 if self.named_snapshot: 405 self.named_snapshot = Namedver.parse(self.named_snapshot) 406 407 self.dryrun = self.get_options().dryrun 408 self.transitive = self.get_options().transitive 409 self.force = self.get_options().force 410 self.publish_changelog = self.get_options().changelog 411 412 def parse_jarcoordinate(coordinate): 413 components = coordinate.split('#', 1) 414 if len(components) == 2: 415 org, name = components 416 return org, name 417 else: 418 spec = components[0] 419 address = Address.parse(spec) 420 try: 421 self.context.build_graph.inject_address_closure(address) 422 target = self.context.build_graph.get_target(address) 423 if not target: 424 siblings = self.context.address_mapper.addresses_in_spec_path(address.spec_path) 425 prompt = 'did you mean' if len(siblings) == 1 else 'maybe you meant one of these' 426 raise TaskError('{} => {}?:\n {}'.format(address, prompt, 427 '\n '.join(str(a) for a in siblings))) 428 if not target.is_exported: 429 raise TaskError('{} is not an exported target'.format(coordinate)) 430 return target.provides.org, target.provides.name 431 except (BuildFile.BuildFileError, 432 BuildFileParser.BuildFileParserError, 433 AddressLookupError) as e: 434 raise TaskError('{message}\n Problem identifying target at {spec}' 435 .format(message=e, spec=spec)) 436 437 self.overrides = {} 438 if self.get_options().override: 439 if self.named_snapshot: 440 raise TaskError('Options --named-snapshot and --override are mutually exclusive!') 441 442 def parse_override(override): 443 try: 444 coordinate, rev = override.split('=', 1) 445 try: 446 # overrides imply semantic versioning 447 rev = Semver.parse(rev) 448 except ValueError as e: 449 raise TaskError('Invalid version {}: {}'.format(rev, e)) 450 return parse_jarcoordinate(coordinate), rev 451 except ValueError: 452 raise TaskError('Invalid override: {}'.format(override)) 453 454 self.overrides.update(parse_override(o) for o in self.get_options().override) 455 456 self.restart_at = None 457 if self.get_options().restart_at: 458 self.restart_at = parse_jarcoordinate(self.get_options().restart_at) 459 460 def confirm_push(self, coord, version): 461 """Ask the user if a push should be done for a particular version of a 462 particular coordinate. Return True if the push should be done""" 463 if not self.get_options().prompt: 464 return True 465 try: 466 isatty = os.isatty(sys.stdin.fileno()) 467 except ValueError: 468 # In tests, sys.stdin might not have a fileno 469 isatty = False 470 if not isatty: 471 return True 472 push = raw_input('\nPublish {} with revision {} ? 
[y|N] '.format( 473 coord, version 474 )) 475 print('\n') 476 return push.strip().lower() == 'y' 477 478 def _copy_artifact(self, tgt, jar, version, typename, suffix='', extension='jar', 479 artifact_ext='', override_name=None): 480 """Copy the products for a target into the artifact path for the jar/version""" 481 genmap = self.context.products.get(typename) 482 product_mapping = genmap.get(tgt) 483 if product_mapping is None: 484 raise ValueError("No product mapping in {} for {}. " 485 "You may need to run some other task first".format(typename, tgt)) 486 for basedir, jars in product_mapping.items(): 487 for artifact in jars: 488 path = self.artifact_path(jar, version, name=override_name, suffix=suffix, 489 extension=extension, artifact_ext=artifact_ext) 490 safe_mkdir(os.path.dirname(path)) 491 shutil.copy(os.path.join(basedir, artifact), path) 492 493 def _ivy_jvm_options(self, repo): 494 """Get the JVM options for ivy authentication, if needed.""" 495 # Get authentication for the publish repo if needed. 496 if not repo.get('auth'): 497 # No need to copy here, as this list isn't modified by the caller. 498 return self._jvm_options 499 500 # Create a copy of the options, so that the modification is appropriately transient. 501 jvm_options = copy(self._jvm_options) 502 user = repo.get('username') 503 password = repo.get('password') 504 if user and password: 505 jvm_options.append('-Dlogin={}'.format(user)) 506 jvm_options.append('-Dpassword={}'.format(password)) 507 else: 508 raise TaskError('Unable to publish to {}. {}' 509 .format(repo.get('resolver'), repo.get('help', ''))) 510 return jvm_options 511 512 def publish(self, publications, jar, entry, repo, published): 513 """Run ivy to publish a jar. ivyxml_path is the path to the ivy file; published 514 is a list of jars published so far (including this one). entry is a pushdb entry.""" 515 516 try: 517 ivy = Bootstrapper.default_ivy() 518 except Bootstrapper.Error as e: 519 raise TaskError('Failed to push {0}! {1}'.format(pushdb_coordinate(jar, entry), e)) 520 521 path = repo.get('path') 522 ivysettings = self.generate_ivysettings(ivy, published, publish_local=path) 523 524 version = entry.version().version() 525 ivyxml = self.generate_ivy(jar, version, publications) 526 527 resolver = repo['resolver'] 528 args = [ 529 '-settings', ivysettings, 530 '-ivy', ivyxml, 531 532 # Without this setting, the ivy.xml is delivered to the CWD, littering the workspace. We 533 # don't need the ivy.xml, so just give it path under the workdir we won't use. 534 '-deliverto', ivyxml + '.unused', 535 536 '-publish', resolver, 537 '-publishpattern', '{}/[organisation]/[module]/' 538 '[artifact]-[revision](-[classifier]).[ext]'.format(self.workdir), 539 '-revision', version, 540 '-m2compatible', 541 ] 542 543 # TODO(John Sirois): global logging options should be hidden behind some sort of log manager 544 # that we can: 545 # a.) obtain a handle to (dependency injection or manual plumbing) 546 # b.) query for log detail, ie: `if log_manager.is_verbose:` 547 if self.get_options().level == 'debug': 548 args.append('-verbose') 549 550 if self.local_snapshot: 551 args.append('-overwrite') 552 553 try: 554 jvm_options = self._ivy_jvm_options(repo) 555 ivy.execute(jvm_options=jvm_options, args=args, 556 workunit_factory=self.context.new_workunit, workunit_name='ivy-publish') 557 except Ivy.Error as e: 558 raise TaskError('Failed to push {0}! 
{1}'.format(pushdb_coordinate(jar, entry), e)) 559 560 def execute(self): 561 self.check_clean_master(commit=(not self.dryrun and self.commit)) 562 563 exported_targets = self.exported_targets() 564 self.check_targets(exported_targets) 565 566 pushdbs = {} 567 568 def get_db(tgt): 569 # TODO(tdesai) Handle resource type in get_db. 570 if tgt.provides is None: 571 raise TaskError('trying to publish target {!r} which does not provide an artifact'.format(tgt)) 572 dbfile = tgt.provides.repo.push_db(tgt) 573 result = pushdbs.get(dbfile) 574 if not result: 575 # Create an empty pushdb if no dbfile exists. 576 if (os.path.exists(dbfile)): 577 db = PushDb.load(dbfile) 578 else: 579 safe_mkdir(os.path.dirname(dbfile)) 580 db = PushDb() 581 try: 582 repo = self.repos[tgt.provides.repo.name] 583 except KeyError: 584 raise TaskError('Repository {0} has no entry in the --repos option.'.format( 585 tgt.provides.repo.name)) 586 result = (db, dbfile, repo) 587 pushdbs[dbfile] = result 588 return result 589 590 def get_pushdb(tgt): 591 return get_db(tgt)[0] 592 593 def fingerprint_internal(tgt): 594 pushdb = get_pushdb(tgt) 595 entry = pushdb.get_entry(tgt) 596 return entry.fingerprint or '0.0.0' 597 598 def stage_artifacts(tgt, jar, version, tag, changelog): 599 publications = OrderedSet() 600 601 # TODO Remove this once we fix https://github.com/pantsbuild/pants/issues/1229 602 if (not self.context.products.get('jars').has(tgt) and 603 not self.get_options().individual_plugins): 604 raise TaskError('Expected to find a primary artifact for {} but there was no jar for it.' 605 .format(tgt.address.reference())) 606 607 # TODO Remove this guard once we fix https://github.com/pantsbuild/pants/issues/1229, there 608 # should always be a primary artifact. 609 if self.context.products.get('jars').has(tgt): 610 self._copy_artifact(tgt, jar, version, typename='jars') 611 publications.add(self.Publication(name=jar.name, classifier=None, ext='jar')) 612 613 self.create_source_jar(tgt, jar, version) 614 publications.add(self.Publication(name=jar.name, classifier='sources', ext='jar')) 615 616 # don't request docs unless they are available for all transitive targets 617 # TODO: doc products should be checked by an independent jar'ing task, and 618 # conditionally enabled; see https://github.com/pantsbuild/pants/issues/568 619 doc_jar = self.create_doc_jar(tgt, jar, version) 620 if doc_jar: 621 publications.add(self.Publication(name=jar.name, classifier='javadoc', ext='jar')) 622 623 if self.publish_changelog: 624 changelog_path = self.artifact_path(jar, version, suffix='-CHANGELOG', extension='txt') 625 with safe_open(changelog_path, 'wb') as changelog_file: 626 changelog_file.write(changelog.encode('utf-8')) 627 publications.add(self.Publication(name=jar.name, classifier='CHANGELOG', ext='txt')) 628 629 # Process any extra jars that might have been previously generated for this target, or a 630 # target that it was derived from. 631 for extra_product, extra_config in (self.get_options().publish_extras or {}).items(): 632 override_name = jar.name 633 if 'override_name' in extra_config: 634 # If the supplied string has a '{target_provides_name}' in it, replace it with the 635 # current jar name. If not, the string will be taken verbatim. 
636 override_name = extra_config['override_name'].format(target_provides_name=jar.name) 637 638 classifier = None 639 suffix = '' 640 if 'classifier' in extra_config: 641 classifier = extra_config['classifier'] 642 suffix = "-{0}".format(classifier) 643 644 extension = extra_config.get('extension', 'jar') 645 646 extra_pub = self.Publication(name=override_name, classifier=classifier, ext=extension) 647 648 # A lot of flexibility is allowed in parameterizing the extra artifact, ensure those 649 # parameters lead to a unique publication. 650 # TODO(John Sirois): Check this much earlier. 651 if extra_pub in publications: 652 raise TaskError("publish_extra for '{0}' must override one of name, classifier or " 653 "extension with a non-default value.".format(extra_product)) 654 655 # Build a list of targets to check. This list will consist of the current target, plus the 656 # entire derived_from chain. 657 target_list = [tgt] 658 target = tgt 659 while target.derived_from != target: 660 target_list.append(target.derived_from) 661 target = target.derived_from 662 for cur_tgt in target_list: 663 if self.context.products.get(extra_product).has(cur_tgt): 664 self._copy_artifact(cur_tgt, jar, version, typename=extra_product, suffix=suffix, 665 extension=extension, override_name=override_name) 666 publications.add(extra_pub) 667 668 pom_path = self.artifact_path(jar, version, extension='pom') 669 PomWriter(get_pushdb, tag).write(tgt, path=pom_path) 670 return publications 671 672 if self.overrides: 673 print('\nPublishing with revision overrides:') 674 for (org, name), rev in self.overrides.items(): 675 print('{0}={1}'.format(coordinate(org, name), rev)) 676 677 head_sha = self.scm.commit_id 678 679 safe_rmtree(self.workdir) 680 published = [] 681 skip = (self.restart_at is not None) 682 for target in exported_targets: 683 pushdb, dbfile, repo = get_db(target) 684 oldentry = pushdb.get_entry(target) 685 686 # the jar version is ignored here, since it is overridden below with the new entry 687 jar, _ = target.get_artifact_info() 688 published.append(jar) 689 690 if skip and (jar.org, jar.name) == self.restart_at: 691 skip = False 692 # select the next version: either a named version, or semver via the pushdb/overrides 693 if self.named_snapshot: 694 newentry = oldentry.with_named_ver(self.named_snapshot) 695 else: 696 override = self.overrides.get((jar.org, jar.name)) 697 sem_ver = override if override else oldentry.sem_ver.bump() 698 if self.local_snapshot: 699 sem_ver = sem_ver.make_snapshot() 700 701 if sem_ver <= oldentry.sem_ver: 702 raise TaskError('Requested version {} must be greater than the current version {}'.format( 703 sem_ver, oldentry.sem_ver 704 )) 705 newentry = oldentry.with_sem_ver(sem_ver) 706 707 newfingerprint = self.entry_fingerprint(target, fingerprint_internal) 708 newentry = newentry.with_sha_and_fingerprint(head_sha, newfingerprint) 709 no_changes = newentry.fingerprint == oldentry.fingerprint 710 711 changelog = '' 712 if self.publish_changelog: 713 if no_changes: 714 changelog = 'No changes for {0} - forced push.\n'.format(pushdb_coordinate(jar, oldentry)) 715 else: 716 changelog = self.changelog(target, oldentry.sha) or 'Direct dependencies changed.\n' 717 718 org = jar.org 719 name = jar.name 720 rev = newentry.version().version() 721 tag_name = '{org}-{name}-{rev}'.format(org=org, name=name, rev=rev) if self.commit else None 722 723 if no_changes and not self.force: 724 print('No changes for {0}'.format(pushdb_coordinate(jar, oldentry))) 725 stage_artifacts(target, jar, 
oldentry.version().version(), tag_name, changelog) 726 elif skip: 727 print('Skipping {} to resume at {}'.format( 728 jar_coordinate(jar, (newentry.version() if self.force else oldentry.version()).version()), 729 coordinate(self.restart_at[0], self.restart_at[1]) 730 )) 731 stage_artifacts(target, jar, oldentry.version().version(), tag_name, changelog) 732 else: 733 if not self.dryrun: 734 # Confirm push looks good 735 if self.publish_changelog: 736 if no_changes: 737 print(changelog) 738 else: 739 # The changelog may contain non-ascii text, but the print function can, under certain 740 # circumstances, incorrectly detect the output encoding to be ascii and thus blow up 741 # on non-ascii changelog characters. Here we explicitly control the encoding to avoid 742 # the print function's mis-interpretation. 743 # TODO(John Sirois): Consider introducing a pants/util `print_safe` helper for this. 744 message = '\nChanges for {} since {} @ {}:\n\n{}\n'.format( 745 coordinate(jar.org, jar.name), oldentry.version(), oldentry.sha, changelog) 746 # The stdout encoding can be detected as None when running without a tty (common in 747 # tests), in which case we want to force encoding with a unicode-supporting codec. 748 encoding = sys.stdout.encoding or 'utf-8' 749 sys.stdout.write(message.encode(encoding)) 750 if not self.confirm_push(coordinate(jar.org, jar.name), newentry.version()): 751 raise TaskError('User aborted push') 752 753 pushdb.set_entry(target, newentry) 754 publications = stage_artifacts(target, jar, rev, tag_name, changelog) 755 756 if self.dryrun: 757 print('Skipping publish of {0} in test mode.'.format(pushdb_coordinate(jar, newentry))) 758 else: 759 self.publish(publications, jar=jar, entry=newentry, repo=repo, published=published) 760 761 if self.commit: 762 coord = coordinate(org, name, rev) 763 764 pushdb.dump(dbfile) 765 766 self.publish_pushdb_changes_to_remote_scm( 767 pushdb_file=dbfile, 768 coordinate=coord, 769 tag_name=tag_name, 770 tag_message='Publish of {coordinate} initiated by {user} {cause}'.format( 771 coordinate=coord, 772 user=getpass.getuser(), 773 cause='with forced revision' if (org, name) in self.overrides else '(autoinc)', 774 ), 775 postscript=self.push_postscript 776 ) 777 778 def artifact_path(self, jar, version, name=None, suffix='', extension='jar', artifact_ext=''): 779 return os.path.join(self.workdir, jar.org, jar.name + artifact_ext, 780 '{}{}-{}{}.{}'.format((name or jar.name), 781 artifact_ext if name != 'ivy' else '', 782 version, 783 suffix, 784 extension)) 785 786 def check_for_duplicate_artifacts(self, targets): 787 targets_by_artifact = defaultdict(list) 788 duplicates = set() 789 for target in targets: 790 artifact = target.provides 791 if artifact in targets_by_artifact: 792 duplicates.add(artifact) 793 targets_by_artifact[artifact].append(target) 794 795 def duplication_message(artifact): 796 specs = sorted('\n {}'.format(t.address.spec) for t in targets_by_artifact[artifact]) 797 return '\n {artifact} is defined by:{specs}'.format(artifact=artifact, specs=''.join(specs)) 798 799 if duplicates: 800 raise self.DuplicateArtifactError('Multiple targets define the same artifacts!\n{}'.format( 801 '\n'.join(duplication_message(artifact) for artifact in duplicates))) 802 803 def check_targets(self, targets): 804 self.check_for_duplicate_artifacts(targets) 805 invalid = defaultdict(lambda: defaultdict(set)) 806 derived_by_target = defaultdict(set) 807 808 def collect_invalid(publish_target, walked_target): 809 for derived_target in 
walked_target.derived_from_chain: 810 derived_by_target[derived_target].add(walked_target) 811 if not walked_target.has_sources() or not walked_target.sources_relative_to_buildroot(): 812 invalid[publish_target][walked_target].add('No sources.') 813 if not walked_target.is_exported: 814 invalid[publish_target][walked_target].add('Does not provide a binary artifact.') 815 816 for target in targets: 817 target.walk(functools.partial(collect_invalid, target), 818 predicate=lambda t: isinstance(t, Jarable)) 819 820 # When walking the graph of a publishable target, we may encounter families of sibling targets 821 # that form a derivation chain. As long as one of these siblings is publishable, we can 822 # proceed and publish a valid graph. 823 for publish_target, invalid_targets in list(invalid.items()): 824 for invalid_target, reasons in list(invalid_targets.items()): 825 derived_from_set = derived_by_target[invalid_target] 826 if derived_from_set - set(invalid_targets.keys()): 827 invalid_targets.pop(invalid_target) 828 if not invalid_targets: 829 invalid.pop(publish_target) 830 831 if invalid: 832 msg = list() 833 834 def first_address(pair): 835 first, _ = pair 836 return str(first.address) 837 838 for publish_target, invalid_targets in sorted(invalid.items(), key=first_address): 839 msg.append('\n Cannot publish {} due to:'.format(publish_target.address)) 840 for invalid_target, reasons in sorted(invalid_targets.items(), key=first_address): 841 for reason in sorted(reasons): 842 msg.append('\n {} - {}'.format(invalid_target.address, reason)) 843 844 raise TaskError('The following errors must be resolved to publish.{}'.format(''.join(msg))) 845 846 def exported_targets(self): 847 candidates = set() 848 if self.transitive: 849 candidates.update(self.context.targets()) 850 else: 851 candidates.update(self.context.target_roots) 852 853 def get_synthetic(lang, target): 854 mappings = self.context.products.get(lang).get(target) 855 if mappings: 856 for key, generated in mappings.items(): 857 for synthetic in generated: 858 yield synthetic 859 860 # Handle the case where a code gen target is in the listed roots and thus the publishable 861 # target is a synthetic twin generated by a code gen task upstream. 862 for candidate in self.context.target_roots: 863 candidates.update(get_synthetic('java', candidate)) 864 candidates.update(get_synthetic('scala', candidate)) 865 866 def exportable(tgt): 867 return tgt in candidates and tgt.is_exported 868 869 return OrderedSet(filter(exportable, 870 reversed(sort_targets(filter(exportable, candidates))))) 871 872 def entry_fingerprint(self, target, fingerprint_internal): 873 sha = hashlib.sha1() 874 sha.update(target.invalidation_hash()) 875 876 # TODO(Tejal Desai): pantsbuild/pants/65: Remove java_sources attribute for ScalaLibrary 877 if isinstance(target, ScalaLibrary): 878 for java_source in sorted(target.java_sources): 879 sha.update(java_source.invalidation_hash()) 880 881 # TODO(John Sirois): handle resources 882 883 for jarsig in sorted([jar_coordinate(j) for j in target.jar_dependencies if j.rev]): 884 sha.update(jarsig) 885 886 # TODO(tdesai) Handle resource type in get_db. 887 internal_dependencies = sorted(target_internal_dependencies(target), key=lambda t: t.id) 888 for internal_target in internal_dependencies: 889 fingerprint = fingerprint_internal(internal_target) 890 sha.update(fingerprint) 891 892 return sha.hexdigest() 893 894 def changelog(self, target, sha): 895 # Filter synthetic files. 
896 files = filter(lambda filename: not filename.startswith(os.pardir), target.sources_relative_to_buildroot()) 897 return ensure_text(self.scm.changelog(from_commit=sha, files=files)) 898 899 def fetch_ivysettings(self, ivy): 900 if self.get_options().ivy_settings: 901 return self.get_options().ivy_settings 902 elif ivy.ivy_settings is None: 903 raise TaskError('An ivysettings.xml with writeable resolvers is required for publishing, ' 904 'but none was configured.') 905 else: 906 return ivy.ivy_settings 907 908 def generate_ivysettings(self, ivy, publishedjars, publish_local=None): 909 template_relpath = os.path.join(_TEMPLATES_RELPATH, 'ivysettings.xml.mustache') 910 template_text = pkgutil.get_data(__name__, template_relpath) 911 912 published = [TemplateData(org=jar.org, name=jar.name) for jar in publishedjars] 913 914 generator = Generator(template_text, 915 ivysettings=self.fetch_ivysettings(ivy), 916 dir=self.workdir, 917 cachedir=self.cachedir, 918 published=published, 919 publish_local=publish_local) 920 921 with safe_open(os.path.join(self.workdir, 'ivysettings.xml'), 'w') as wrapper: 922 generator.write(wrapper) 923 return wrapper.name 924 925 def generate_ivy(self, jar, version, publications): 926 template_relpath = os.path.join(_TEMPLATES_RELPATH, 'ivy.xml.mustache') 927 template_text = pkgutil.get_data(__name__, template_relpath) 928 929 pubs = [TemplateData(name=None if p.name == jar.name else p.name, 930 classifier=p.classifier, 931 ext=None if p.ext == 'jar' else p.ext) for p in publications] 932 933 generator = Generator(template_text, 934 org=jar.org, 935 name=jar.name, 936 rev=version, 937 publications=pubs) 938 939 with safe_open(os.path.join(self.workdir, 'ivy.xml'), 'w') as ivyxml: 940 generator.write(ivyxml) 941 return ivyxml.name 942 943 def create_source_jar(self, target, open_jar, version): 944 # TODO(Tejal Desai) pantsbuild/pants/65: Avoid creating 2 jars with java sources for a 945 # scala_library with java_sources. Currently publish fails fast if scala_library owning 946 # java sources pointed by java_library target also provides an artifact. However, jar_create 947 # ends up creating 2 jars one scala and other java both including the java_sources. 
948 949 def abs_and_relative_sources(target): 950 abs_source_root = os.path.join(get_buildroot(), target.target_base) 951 for source in target.sources_relative_to_source_root(): 952 yield os.path.join(abs_source_root, source), source 953 954 jar_path = self.artifact_path(open_jar, version, suffix='-sources') 955 with self.open_jar(jar_path, overwrite=True, compressed=True) as open_jar: 956 for abs_source, rel_source in abs_and_relative_sources(target): 957 open_jar.write(abs_source, rel_source) 958 959 # TODO(Tejal Desai): pantsbuild/pants/65 Remove java_sources attribute for ScalaLibrary 960 if isinstance(target, ScalaLibrary): 961 for java_source_target in target.java_sources: 962 for abs_source, rel_source in abs_and_relative_sources(java_source_target): 963 open_jar.write(abs_source, rel_source) 964 965 if target.has_resources: 966 for resource_target in target.resources: 967 for abs_source, rel_source in abs_and_relative_sources(resource_target): 968 open_jar.write(abs_source, rel_source) 969 970 return jar_path 971 972 def _java_doc(self, target): 973 return self.context.products.get('javadoc').get(target) 974 975 def _scala_doc(self, target): 976 return self.context.products.get('scaladoc').get(target) 977 978 def create_doc_jar(self, target, open_jar, version): 979 """Returns a doc jar if either scala or java docs are available for the given target.""" 980 javadoc = self._java_doc(target) 981 scaladoc = self._scala_doc(target) 982 if javadoc or scaladoc: 983 jar_path = self.artifact_path(open_jar, version, suffix='-javadoc') 984 with self.open_jar(jar_path, overwrite=True, compressed=True) as open_jar: 985 def add_docs(docs): 986 if docs: 987 for basedir, doc_files in docs.items(): 988 for doc_file in doc_files: 989 open_jar.write(os.path.join(basedir, doc_file), doc_file) 990 991 add_docs(javadoc) 992 add_docs(scaladoc) 993 return jar_path 994 else: 995 return None 996 [end of src/python/pants/backend/jvm/tasks/jar_publish.py] [start of src/python/pants/scm/scm.py] 1 # coding=utf-8 2 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 4 5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function, 6 unicode_literals, with_statement) 7 8 from abc import abstractmethod, abstractproperty 9 10 from pants.util.meta import AbstractClass 11 12 13 class Scm(AbstractClass): 14 """Abstracts high-level scm operations needed by pants core and pants tasks. 15 16 :API: public 17 """ 18 19 class ScmException(Exception): 20 """Indicates a problem interacting with the scm. 21 22 :API: public 23 """ 24 25 class RemoteException(ScmException): 26 """Indicates a problem performing a remote scm operation. 27 28 :API: public 29 """ 30 31 class LocalException(ScmException): 32 """Indicates a problem performing a local scm operation. 33 34 :API: public 35 """ 36 37 @abstractproperty 38 def current_rev_identifier(self): 39 """Identifier for the tip/head of the current branch eg. "HEAD" in git. 40 41 :API: public 42 """ 43 44 @abstractproperty 45 def commit_id(self): 46 """Returns the id of the current commit. 47 48 :API: public 49 """ 50 51 @abstractproperty 52 def server_url(self): 53 """Returns the url of the (default) remote server.""" 54 55 @abstractproperty 56 def tag_name(self): 57 """Returns the name of the current tag if any. 58 59 :API: public 60 """ 61 62 @abstractproperty 63 def branch_name(self): 64 """Returns the name of the current branch if any. 
65 66 :API: public 67 """ 68 69 @abstractmethod 70 def commit_date(self, commit_reference): 71 """Returns the commit date of the referenced commit. 72 73 :API: public 74 """ 75 76 @abstractproperty 77 def worktree(self): 78 """Returns the worktree for the SCM. 79 80 :API: public 81 """ 82 83 @abstractmethod 84 def changed_files(self, from_commit=None, include_untracked=False, relative_to=None): 85 """Returns a list of files with uncommitted changes or else files changed since from_commit. 86 87 If include_untracked=True then any workspace files that are un-tracked by the scm and not 88 ignored will be included as well. 89 90 If relative_to is None, then the paths will be relative to the working tree of the SCM 91 implementation (which might NOT match the buildroot). 92 93 :API: public 94 """ 95 96 @abstractmethod 97 def changes_in(self, diffspec, relative_to=None): 98 """Returns a list of files changed by some diffspec (eg sha, range, ref, etc) 99 100 :API: public 101 102 :param str diffspec: Some diffspec meaningful to the SCM. 103 :param str relative_to: a path to which results should be relative (instead of SCM root) 104 """ 105 106 @abstractmethod 107 def changelog(self, from_commit=None, files=None): 108 """Produces a changelog from the given commit or the 1st commit if none is specified until the 109 present workspace commit for the changes affecting the given files. 110 111 If no files are given then the full change log should be produced. 112 113 :API: public 114 """ 115 116 @abstractmethod 117 def refresh(self): 118 """Refreshes the local workspace with any changes on the server. 119 120 Subclasses should raise some form of ScmException to indicate a refresh error whether it be 121 a conflict or a communication channel error. 122 123 :API: public 124 """ 125 126 @abstractmethod 127 def tag(self, name, message=None): 128 """Tags the state in the local workspace and ensures this tag is on the server. 129 130 Subclasses should raise RemoteException if there is a problem getting the tag to the server. 131 132 :API: public 133 """ 134 135 @abstractmethod 136 def commit(self, message): 137 """Commits all the changes for tracked files in the local workspace. 138 139 Subclasses should raise LocalException if there is a problem making the commit. 140 141 :API: public 142 """ 143 144 @abstractmethod 145 def add(self, *paths): 146 """Add paths to the set of tracked files. 147 148 Subclasses should raise LocalException if there is a problem adding the paths. 149 150 :API: public 151 """ 152 153 @abstractmethod 154 def push(self): 155 """Push the current branch of the local repository to the corresponding local branch 156 on the server 157 158 Subclasses should raise RemoteException if there is a problem getting the commit to the 159 server. 160 161 :API: public 162 """ 163 164 @abstractmethod 165 def set_state(self, rev): 166 """Set the repo state to the specified rev. 167 168 :API: public 169 """ 170 [end of src/python/pants/scm/scm.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pantsbuild/pants
a4afe015e73fdf311e12db6b8014122050ffd311
Local publish.jar should succeed without an SCM repo around it I have a pants project that for reasons I won't go into doesn't have a git (or any other) repo around it. I tried to run `publish.jar` and `--publish-jar-local=/some/folder` and it crashed with the following message: ``` 15:26:13 00:23 [publish] Waiting for background workers to finish. 15:26:13 00:23 [complete] FAILUREException caught: (<type 'exceptions.AttributeError'>) Exception message: 'NoneType' object has no attribute 'branch_name' ``` The rest of the build ran fine up until the publish step. According to @jsirois this is expected but not really desirable behavior, so I'm filing this issue 😄
To repro using the pants repo: ``` $ git archive --prefix 4772-test-repo/ HEAD | tar -x -C /tmp $ cd /tmp/4772-test-repo/ $ curl -sSL -O https://pantsbuild.github.io/setup/pants $ chmod +x pants $ cat << EOF >> pants.ini > > [GLOBAL] > pants_version: 1.3.0 > EOF $ ./pants publish.jar --local=/tmp/4772-test-m2 --no-dryrun --no-prompt src/java/org/pantsbuild/args4j:: ... 08:45:59 00:02 [jar] 08:45:59 00:02 [create] 08:45:59 00:02 [doc] 08:45:59 00:02 [javadoc] 08:45:59 00:02 [scaladoc] 08:45:59 00:02 [publish]fatal: Not a git repository (or any parent up to mount point /) Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set). 08:45:59 00:02 [jar] Waiting for background workers to finish. 08:45:59 00:02 [complete] FAILURE Exception caught: (<type 'exceptions.AttributeError'>) File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/bin/pants", line 11, in <module> sys.exit(main()) File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/bin/pants_exe.py", line 44, in main PantsRunner(exiter).run() File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/bin/pants_runner.py", line 57, in run options_bootstrapper=options_bootstrapper) File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/bin/pants_runner.py", line 46, in _run return LocalPantsRunner(exiter, args, env, options_bootstrapper=options_bootstrapper).run() File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/bin/local_pants_runner.py", line 37, in run self._run() File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/bin/local_pants_runner.py", line 79, in _run goal_runner_result = goal_runner.run() File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/bin/goal_runner.py", line 263, in run result = self._execute_engine() File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/bin/goal_runner.py", line 252, in _execute_engine result = engine.execute(self._context, self._goals) File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/engine/legacy_engine.py", line 26, in execute self.attempt(context, goals) File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/engine/round_engine.py", line 224, in attempt goal_executor.attempt(explain) File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/engine/round_engine.py", line 47, in attempt task.execute() File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/backend/jvm/tasks/jar_publish.py", line 561, in execute self.check_clean_master(commit=(not self.dryrun and self.commit)) File "/home/jsirois/.cache/pants/setup/bootstrap-Linux-x86_64/1.3.0/lib/python2.7/site-packages/pants/task/scm_publish_mixin.py", line 203, in check_clean_master .format(self.scm.branch_name)) Exception message: 'NoneType' object has no attribute 'branch_name' ``` This fix will not be the nicest cleanup - really tangled code here. That said, it will have regression tests.
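The hints sketch the shape of the eventual fix (the full diff is in the patch field below): defer the SCM lookup until a commit is actually requested, and skip changelog generation when no SCM is available, so a plain local publish never touches `scm.branch_name`. A minimal sketch of that guard, where `configure_scm` is a hypothetical helper name and `get_scm` is the lookup the real `__init__` uses:

```python
def configure_scm(commit_requested, changelog_option, get_scm):
    """Sketch of the guard the fix introduces: only resolve an SCM when a
    commit/tag was requested, and only publish changelogs when an SCM is
    actually available."""
    scm = get_scm() if commit_requested else None
    publish_changelog = bool(changelog_option and scm)
    head_sha = scm.commit_id if scm else None
    return scm, publish_changelog, head_sha
```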
2017-07-26T17:28:33Z
<patch> diff --git a/src/python/pants/backend/jvm/tasks/jar_publish.py b/src/python/pants/backend/jvm/tasks/jar_publish.py --- a/src/python/pants/backend/jvm/tasks/jar_publish.py +++ b/src/python/pants/backend/jvm/tasks/jar_publish.py @@ -367,7 +367,6 @@ def __init__(self, *args, **kwargs): self._jvm_options = self.get_options().jvm_options - self.scm = get_scm() self.log = self.context.log if self.get_options().local: @@ -400,6 +399,8 @@ def __init__(self, *args, **kwargs): self.push_postscript = self.get_options().push_postscript or '' self.local_snapshot = False + self.scm = get_scm() if self.commit else None + self.named_snapshot = self.get_options().named_snapshot if self.named_snapshot: self.named_snapshot = Namedver.parse(self.named_snapshot) @@ -407,7 +408,7 @@ def __init__(self, *args, **kwargs): self.dryrun = self.get_options().dryrun self.transitive = self.get_options().transitive self.force = self.get_options().force - self.publish_changelog = self.get_options().changelog + self.publish_changelog = self.get_options().changelog and self.scm def parse_jarcoordinate(coordinate): components = coordinate.split('#', 1) @@ -674,7 +675,7 @@ def stage_artifacts(tgt, jar, version, tag, changelog): for (org, name), rev in self.overrides.items(): print('{0}={1}'.format(coordinate(org, name), rev)) - head_sha = self.scm.commit_id + head_sha = self.scm.commit_id if self.scm else None safe_rmtree(self.workdir) published = [] diff --git a/src/python/pants/task/scm_publish_mixin.py b/src/python/pants/task/scm_publish_mixin.py --- a/src/python/pants/task/scm_publish_mixin.py +++ b/src/python/pants/task/scm_publish_mixin.py @@ -198,7 +198,7 @@ def check_clean_master(self, commit=False): if changed_files: raise self.DirtyWorkspaceError('Can only push from a clean branch, found : {}' .format(' '.join(changed_files))) - else: + elif self.scm: self.log.info('Skipping check for a clean {} branch in test mode.' .format(self.scm.branch_name)) </patch>
[]
[]
numpy__numpy-6543
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: ma: Several functions in the ma module return None when given a scalar. Here's an example, using `ma.atleast_1d`: ``` In [4]: np.__version__ Out[4]: '1.8.0.dev-31a5501' ``` When given a scalar, the regular version of `atleast_1d` returns an array: ``` In [5]: x = np.atleast_1d(1.0) In [6]: x Out[6]: array([ 1.]) ``` `ma.atleast_1d` returns `None`: ``` In [7]: x = np.ma.atleast_1d(1.0) In [8]: print x None ``` `ma.atleast_1d` and several other functions in the `ma` module are defined using the class `_fromnxfunction` (defined in numpy/ma/extras.py). In the `__call__` method of this class, in the case where `len(args) == 1`, the code handles the cases where the single argument is an ndarray, a tuple or list. If the single argument is not one of these, the code falls through to the end without returning anything. The non-masked versions of these accept a scalar, so the masked version should too: ``` atleast_1d atleast_2d atleast_3d diagflat ``` </issue> <code> [start of README.txt] 1 NumPy is the fundamental package needed for scientific computing with Python. 2 This package contains: 3 4 * a powerful N-dimensional array object 5 * sophisticated (broadcasting) functions 6 * tools for integrating C/C++ and Fortran code 7 * useful linear algebra, Fourier transform, and random number capabilities. 8 9 It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray. 10 11 More information can be found at the website: 12 13 http://www.numpy.org 14 15 After installation, tests can be run with: 16 17 python -c 'import numpy; numpy.test()' 18 19 The most current development version is always available from our 20 git repository: 21 22 http://github.com/numpy/numpy 23 [end of README.txt] [start of numpy/lib/financial.py] 1 """Some simple financial calculations 2 3 patterned after spreadsheet computations. 4 5 There is some complexity in each function 6 so that the functions behave like ufuncs with 7 broadcasting and being able to be called with scalars 8 or arrays (or other sequences). 9 10 """ 11 from __future__ import division, absolute_import, print_function 12 13 import numpy as np 14 15 __all__ = ['fv', 'pmt', 'nper', 'ipmt', 'ppmt', 'pv', 'rate', 16 'irr', 'npv', 'mirr'] 17 18 _when_to_num = {'end':0, 'begin':1, 19 'e':0, 'b':1, 20 0:0, 1:1, 21 'beginning':1, 22 'start':1, 23 'finish':0} 24 25 def _convert_when(when): 26 #Test to see if when has already been converted to ndarray 27 #This will happen if one function calls another, for example ppmt 28 if isinstance(when, np.ndarray): 29 return when 30 try: 31 return _when_to_num[when] 32 except (KeyError, TypeError): 33 return [_when_to_num[x] for x in when] 34 35 36 def fv(rate, nper, pmt, pv, when='end'): 37 """ 38 Compute the future value. 
39 40 Given: 41 * a present value, `pv` 42 * an interest `rate` compounded once per period, of which 43 there are 44 * `nper` total 45 * a (fixed) payment, `pmt`, paid either 46 * at the beginning (`when` = {'begin', 1}) or the end 47 (`when` = {'end', 0}) of each period 48 49 Return: 50 the value at the end of the `nper` periods 51 52 Parameters 53 ---------- 54 rate : scalar or array_like of shape(M, ) 55 Rate of interest as decimal (not per cent) per period 56 nper : scalar or array_like of shape(M, ) 57 Number of compounding periods 58 pmt : scalar or array_like of shape(M, ) 59 Payment 60 pv : scalar or array_like of shape(M, ) 61 Present value 62 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional 63 When payments are due ('begin' (1) or 'end' (0)). 64 Defaults to {'end', 0}. 65 66 Returns 67 ------- 68 out : ndarray 69 Future values. If all input is scalar, returns a scalar float. If 70 any input is array_like, returns future values for each input element. 71 If multiple inputs are array_like, they all must have the same shape. 72 73 Notes 74 ----- 75 The future value is computed by solving the equation:: 76 77 fv + 78 pv*(1+rate)**nper + 79 pmt*(1 + rate*when)/rate*((1 + rate)**nper - 1) == 0 80 81 or, when ``rate == 0``:: 82 83 fv + pv + pmt * nper == 0 84 85 References 86 ---------- 87 .. [WRW] Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May). 88 Open Document Format for Office Applications (OpenDocument)v1.2, 89 Part 2: Recalculated Formula (OpenFormula) Format - Annotated Version, 90 Pre-Draft 12. Organization for the Advancement of Structured Information 91 Standards (OASIS). Billerica, MA, USA. [ODT Document]. 92 Available: 93 http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula 94 OpenDocument-formula-20090508.odt 95 96 Examples 97 -------- 98 What is the future value after 10 years of saving $100 now, with 99 an additional monthly savings of $100. Assume the interest rate is 100 5% (annually) compounded monthly? 101 102 >>> np.fv(0.05/12, 10*12, -100, -100) 103 15692.928894335748 104 105 By convention, the negative sign represents cash flow out (i.e. money not 106 available today). Thus, saving $100 a month at 5% annual interest leads 107 to $15,692.93 available to spend in 10 years. 108 109 If any input is array_like, returns an array of equal shape. Let's 110 compare different interest rates from the example above. 111 112 >>> a = np.array((0.05, 0.06, 0.07))/12 113 >>> np.fv(a, 10*12, -100, -100) 114 array([ 15692.92889434, 16569.87435405, 17509.44688102]) 115 116 """ 117 when = _convert_when(when) 118 (rate, nper, pmt, pv, when) = map(np.asarray, [rate, nper, pmt, pv, when]) 119 temp = (1+rate)**nper 120 miter = np.broadcast(rate, nper, pmt, pv, when) 121 zer = np.zeros(miter.shape) 122 fact = np.where(rate == zer, nper + zer, 123 (1 + rate*when)*(temp - 1)/rate + zer) 124 return -(pv*temp + pmt*fact) 125 126 def pmt(rate, nper, pv, fv=0, when='end'): 127 """ 128 Compute the payment against loan principal plus interest. 129 130 Given: 131 * a present value, `pv` (e.g., an amount borrowed) 132 * a future value, `fv` (e.g., 0) 133 * an interest `rate` compounded once per period, of which 134 there are 135 * `nper` total 136 * and (optional) specification of whether payment is made 137 at the beginning (`when` = {'begin', 1}) or the end 138 (`when` = {'end', 0}) of each period 139 140 Return: 141 the (fixed) periodic payment. 
142 143 Parameters 144 ---------- 145 rate : array_like 146 Rate of interest (per period) 147 nper : array_like 148 Number of compounding periods 149 pv : array_like 150 Present value 151 fv : array_like, optional 152 Future value (default = 0) 153 when : {{'begin', 1}, {'end', 0}}, {string, int} 154 When payments are due ('begin' (1) or 'end' (0)) 155 156 Returns 157 ------- 158 out : ndarray 159 Payment against loan plus interest. If all input is scalar, returns a 160 scalar float. If any input is array_like, returns payment for each 161 input element. If multiple inputs are array_like, they all must have 162 the same shape. 163 164 Notes 165 ----- 166 The payment is computed by solving the equation:: 167 168 fv + 169 pv*(1 + rate)**nper + 170 pmt*(1 + rate*when)/rate*((1 + rate)**nper - 1) == 0 171 172 or, when ``rate == 0``:: 173 174 fv + pv + pmt * nper == 0 175 176 for ``pmt``. 177 178 Note that computing a monthly mortgage payment is only 179 one use for this function. For example, pmt returns the 180 periodic deposit one must make to achieve a specified 181 future balance given an initial deposit, a fixed, 182 periodically compounded interest rate, and the total 183 number of periods. 184 185 References 186 ---------- 187 .. [WRW] Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May). 188 Open Document Format for Office Applications (OpenDocument)v1.2, 189 Part 2: Recalculated Formula (OpenFormula) Format - Annotated Version, 190 Pre-Draft 12. Organization for the Advancement of Structured Information 191 Standards (OASIS). Billerica, MA, USA. [ODT Document]. 192 Available: 193 http://www.oasis-open.org/committees/documents.php 194 ?wg_abbrev=office-formulaOpenDocument-formula-20090508.odt 195 196 Examples 197 -------- 198 What is the monthly payment needed to pay off a $200,000 loan in 15 199 years at an annual interest rate of 7.5%? 200 201 >>> np.pmt(0.075/12, 12*15, 200000) 202 -1854.0247200054619 203 204 In order to pay-off (i.e., have a future-value of 0) the $200,000 obtained 205 today, a monthly payment of $1,854.02 would be required. Note that this 206 example illustrates usage of `fv` having a default value of 0. 207 208 """ 209 when = _convert_when(when) 210 (rate, nper, pv, fv, when) = map(np.asarray, [rate, nper, pv, fv, when]) 211 temp = (1 + rate)**nper 212 mask = (rate == 0.0) 213 np.copyto(rate, 1.0, where=mask) 214 z = np.zeros(np.broadcast(rate, nper, pv, fv, when).shape) 215 fact = np.where(mask != z, nper + z, (1 + rate*when)*(temp - 1)/rate + z) 216 return -(fv + pv*temp) / fact 217 218 def nper(rate, pmt, pv, fv=0, when='end'): 219 """ 220 Compute the number of periodic payments. 221 222 Parameters 223 ---------- 224 rate : array_like 225 Rate of interest (per period) 226 pmt : array_like 227 Payment 228 pv : array_like 229 Present value 230 fv : array_like, optional 231 Future value 232 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional 233 When payments are due ('begin' (1) or 'end' (0)) 234 235 Notes 236 ----- 237 The number of periods ``nper`` is computed by solving the equation:: 238 239 fv + pv*(1+rate)**nper + pmt*(1+rate*when)/rate*((1+rate)**nper-1) = 0 240 241 but if ``rate = 0`` then:: 242 243 fv + pv + pmt*nper = 0 244 245 Examples 246 -------- 247 If you only had $150/month to pay towards the loan, how long would it take 248 to pay-off a loan of $8,000 at 7% annual interest? 249 250 >>> print round(np.nper(0.07/12, -150, 8000), 5) 251 64.07335 252 253 So, over 64 months would be required to pay off the loan. 
254 255 The same analysis could be done with several different interest rates 256 and/or payments and/or total amounts to produce an entire table. 257 258 >>> np.nper(*(np.ogrid[0.07/12: 0.08/12: 0.01/12, 259 ... -150 : -99 : 50 , 260 ... 8000 : 9001 : 1000])) 261 array([[[ 64.07334877, 74.06368256], 262 [ 108.07548412, 127.99022654]], 263 [[ 66.12443902, 76.87897353], 264 [ 114.70165583, 137.90124779]]]) 265 266 """ 267 when = _convert_when(when) 268 (rate, pmt, pv, fv, when) = map(np.asarray, [rate, pmt, pv, fv, when]) 269 270 use_zero_rate = False 271 with np.errstate(divide="raise"): 272 try: 273 z = pmt*(1.0+rate*when)/rate 274 except FloatingPointError: 275 use_zero_rate = True 276 277 if use_zero_rate: 278 return (-fv + pv) / (pmt + 0.0) 279 else: 280 A = -(fv + pv)/(pmt+0.0) 281 B = np.log((-fv+z) / (pv+z))/np.log(1.0+rate) 282 miter = np.broadcast(rate, pmt, pv, fv, when) 283 zer = np.zeros(miter.shape) 284 return np.where(rate == zer, A + zer, B + zer) + 0.0 285 286 def ipmt(rate, per, nper, pv, fv=0.0, when='end'): 287 """ 288 Compute the interest portion of a payment. 289 290 Parameters 291 ---------- 292 rate : scalar or array_like of shape(M, ) 293 Rate of interest as decimal (not per cent) per period 294 per : scalar or array_like of shape(M, ) 295 Interest paid against the loan changes during the life or the loan. 296 The `per` is the payment period to calculate the interest amount. 297 nper : scalar or array_like of shape(M, ) 298 Number of compounding periods 299 pv : scalar or array_like of shape(M, ) 300 Present value 301 fv : scalar or array_like of shape(M, ), optional 302 Future value 303 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional 304 When payments are due ('begin' (1) or 'end' (0)). 305 Defaults to {'end', 0}. 306 307 Returns 308 ------- 309 out : ndarray 310 Interest portion of payment. If all input is scalar, returns a scalar 311 float. If any input is array_like, returns interest payment for each 312 input element. If multiple inputs are array_like, they all must have 313 the same shape. 314 315 See Also 316 -------- 317 ppmt, pmt, pv 318 319 Notes 320 ----- 321 The total payment is made up of payment against principal plus interest. 322 323 ``pmt = ppmt + ipmt`` 324 325 Examples 326 -------- 327 What is the amortization schedule for a 1 year loan of $2500 at 328 8.24% interest per year compounded monthly? 329 330 >>> principal = 2500.00 331 332 The 'per' variable represents the periods of the loan. Remember that 333 financial equations start the period count at 1! 334 335 >>> per = np.arange(1*12) + 1 336 >>> ipmt = np.ipmt(0.0824/12, per, 1*12, principal) 337 >>> ppmt = np.ppmt(0.0824/12, per, 1*12, principal) 338 339 Each element of the sum of the 'ipmt' and 'ppmt' arrays should equal 340 'pmt'. 341 342 >>> pmt = np.pmt(0.0824/12, 1*12, principal) 343 >>> np.allclose(ipmt + ppmt, pmt) 344 True 345 346 >>> fmt = '{0:2d} {1:8.2f} {2:8.2f} {3:8.2f}' 347 >>> for payment in per: 348 ... index = payment - 1 349 ... principal = principal + ppmt[index] 350 ... 
print fmt.format(payment, ppmt[index], ipmt[index], principal) 351 1 -200.58 -17.17 2299.42 352 2 -201.96 -15.79 2097.46 353 3 -203.35 -14.40 1894.11 354 4 -204.74 -13.01 1689.37 355 5 -206.15 -11.60 1483.22 356 6 -207.56 -10.18 1275.66 357 7 -208.99 -8.76 1066.67 358 8 -210.42 -7.32 856.25 359 9 -211.87 -5.88 644.38 360 10 -213.32 -4.42 431.05 361 11 -214.79 -2.96 216.26 362 12 -216.26 -1.49 -0.00 363 364 >>> interestpd = np.sum(ipmt) 365 >>> np.round(interestpd, 2) 366 -112.98 367 368 """ 369 when = _convert_when(when) 370 rate, per, nper, pv, fv, when = np.broadcast_arrays(rate, per, nper, 371 pv, fv, when) 372 total_pmt = pmt(rate, nper, pv, fv, when) 373 ipmt = _rbl(rate, per, total_pmt, pv, when)*rate 374 try: 375 ipmt = np.where(when == 1, ipmt/(1 + rate), ipmt) 376 ipmt = np.where(np.logical_and(when == 1, per == 1), 0.0, ipmt) 377 except IndexError: 378 pass 379 return ipmt 380 381 def _rbl(rate, per, pmt, pv, when): 382 """ 383 This function is here to simply have a different name for the 'fv' 384 function to not interfere with the 'fv' keyword argument within the 'ipmt' 385 function. It is the 'remaining balance on loan' which might be useful as 386 it's own function, but is easily calculated with the 'fv' function. 387 """ 388 return fv(rate, (per - 1), pmt, pv, when) 389 390 def ppmt(rate, per, nper, pv, fv=0.0, when='end'): 391 """ 392 Compute the payment against loan principal. 393 394 Parameters 395 ---------- 396 rate : array_like 397 Rate of interest (per period) 398 per : array_like, int 399 Amount paid against the loan changes. The `per` is the period of 400 interest. 401 nper : array_like 402 Number of compounding periods 403 pv : array_like 404 Present value 405 fv : array_like, optional 406 Future value 407 when : {{'begin', 1}, {'end', 0}}, {string, int} 408 When payments are due ('begin' (1) or 'end' (0)) 409 410 See Also 411 -------- 412 pmt, pv, ipmt 413 414 """ 415 total = pmt(rate, nper, pv, fv, when) 416 return total - ipmt(rate, per, nper, pv, fv, when) 417 418 def pv(rate, nper, pmt, fv=0.0, when='end'): 419 """ 420 Compute the present value. 421 422 Given: 423 * a future value, `fv` 424 * an interest `rate` compounded once per period, of which 425 there are 426 * `nper` total 427 * a (fixed) payment, `pmt`, paid either 428 * at the beginning (`when` = {'begin', 1}) or the end 429 (`when` = {'end', 0}) of each period 430 431 Return: 432 the value now 433 434 Parameters 435 ---------- 436 rate : array_like 437 Rate of interest (per period) 438 nper : array_like 439 Number of compounding periods 440 pmt : array_like 441 Payment 442 fv : array_like, optional 443 Future value 444 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional 445 When payments are due ('begin' (1) or 'end' (0)) 446 447 Returns 448 ------- 449 out : ndarray, float 450 Present value of a series of payments or investments. 451 452 Notes 453 ----- 454 The present value is computed by solving the equation:: 455 456 fv + 457 pv*(1 + rate)**nper + 458 pmt*(1 + rate*when)/rate*((1 + rate)**nper - 1) = 0 459 460 or, when ``rate = 0``:: 461 462 fv + pv + pmt * nper = 0 463 464 for `pv`, which is then returned. 465 466 References 467 ---------- 468 .. [WRW] Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May). 469 Open Document Format for Office Applications (OpenDocument)v1.2, 470 Part 2: Recalculated Formula (OpenFormula) Format - Annotated Version, 471 Pre-Draft 12. Organization for the Advancement of Structured Information 472 Standards (OASIS). Billerica, MA, USA. [ODT Document]. 
473 Available: 474 http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula 475 OpenDocument-formula-20090508.odt 476 477 Examples 478 -------- 479 What is the present value (e.g., the initial investment) 480 of an investment that needs to total $15692.93 481 after 10 years of saving $100 every month? Assume the 482 interest rate is 5% (annually) compounded monthly. 483 484 >>> np.pv(0.05/12, 10*12, -100, 15692.93) 485 -100.00067131625819 486 487 By convention, the negative sign represents cash flow out 488 (i.e., money not available today). Thus, to end up with 489 $15,692.93 in 10 years saving $100 a month at 5% annual 490 interest, one's initial deposit should also be $100. 491 492 If any input is array_like, ``pv`` returns an array of equal shape. 493 Let's compare different interest rates in the example above: 494 495 >>> a = np.array((0.05, 0.04, 0.03))/12 496 >>> np.pv(a, 10*12, -100, 15692.93) 497 array([ -100.00067132, -649.26771385, -1273.78633713]) 498 499 So, to end up with the same $15692.93 under the same $100 per month 500 "savings plan," for annual interest rates of 4% and 3%, one would 501 need initial investments of $649.27 and $1273.79, respectively. 502 503 """ 504 when = _convert_when(when) 505 (rate, nper, pmt, fv, when) = map(np.asarray, [rate, nper, pmt, fv, when]) 506 temp = (1+rate)**nper 507 miter = np.broadcast(rate, nper, pmt, fv, when) 508 zer = np.zeros(miter.shape) 509 fact = np.where(rate == zer, nper+zer, (1+rate*when)*(temp-1)/rate+zer) 510 return -(fv + pmt*fact)/temp 511 512 # Computed with Sage 513 # (y + (r + 1)^n*x + p*((r + 1)^n - 1)*(r*w + 1)/r)/(n*(r + 1)^(n - 1)*x - 514 # p*((r + 1)^n - 1)*(r*w + 1)/r^2 + n*p*(r + 1)^(n - 1)*(r*w + 1)/r + 515 # p*((r + 1)^n - 1)*w/r) 516 517 def _g_div_gp(r, n, p, x, y, w): 518 t1 = (r+1)**n 519 t2 = (r+1)**(n-1) 520 return ((y + t1*x + p*(t1 - 1)*(r*w + 1)/r) / 521 (n*t2*x - p*(t1 - 1)*(r*w + 1)/(r**2) + n*p*t2*(r*w + 1)/r + 522 p*(t1 - 1)*w/r)) 523 524 # Use Newton's iteration until the change is less than 1e-6 525 # for all values or a maximum of 100 iterations is reached. 526 # Newton's rule is 527 # r_{n+1} = r_{n} - g(r_n)/g'(r_n) 528 # where 529 # g(r) is the formula 530 # g'(r) is the derivative with respect to r. 531 def rate(nper, pmt, pv, fv, when='end', guess=0.10, tol=1e-6, maxiter=100): 532 """ 533 Compute the rate of interest per period. 534 535 Parameters 536 ---------- 537 nper : array_like 538 Number of compounding periods 539 pmt : array_like 540 Payment 541 pv : array_like 542 Present value 543 fv : array_like 544 Future value 545 when : {{'begin', 1}, {'end', 0}}, {string, int}, optional 546 When payments are due ('begin' (1) or 'end' (0)) 547 guess : float, optional 548 Starting guess for solving the rate of interest 549 tol : float, optional 550 Required tolerance for the solution 551 maxiter : int, optional 552 Maximum iterations in finding the solution 553 554 Notes 555 ----- 556 The rate of interest is computed by iteratively solving the 557 (non-linear) equation:: 558 559 fv + pv*(1+rate)**nper + pmt*(1+rate*when)/rate * ((1+rate)**nper - 1) = 0 560 561 for ``rate``. 562 563 References 564 ---------- 565 Wheeler, D. A., E. Rathke, and R. Weir (Eds.) (2009, May). Open Document 566 Format for Office Applications (OpenDocument)v1.2, Part 2: Recalculated 567 Formula (OpenFormula) Format - Annotated Version, Pre-Draft 12. 568 Organization for the Advancement of Structured Information Standards 569 (OASIS). Billerica, MA, USA. [ODT Document]. 
Available: 570 http://www.oasis-open.org/committees/documents.php?wg_abbrev=office-formula 571 OpenDocument-formula-20090508.odt 572 573 """ 574 when = _convert_when(when) 575 (nper, pmt, pv, fv, when) = map(np.asarray, [nper, pmt, pv, fv, when]) 576 rn = guess 577 iter = 0 578 close = False 579 while (iter < maxiter) and not close: 580 rnp1 = rn - _g_div_gp(rn, nper, pmt, pv, fv, when) 581 diff = abs(rnp1-rn) 582 close = np.all(diff < tol) 583 iter += 1 584 rn = rnp1 585 if not close: 586 # Return nan's in array of the same shape as rn 587 return np.nan + rn 588 else: 589 return rn 590 591 def irr(values): 592 """ 593 Return the Internal Rate of Return (IRR). 594 595 This is the "average" periodically compounded rate of return 596 that gives a net present value of 0.0; for a more complete explanation, 597 see Notes below. 598 599 Parameters 600 ---------- 601 values : array_like, shape(N,) 602 Input cash flows per time period. By convention, net "deposits" 603 are negative and net "withdrawals" are positive. Thus, for 604 example, at least the first element of `values`, which represents 605 the initial investment, will typically be negative. 606 607 Returns 608 ------- 609 out : float 610 Internal Rate of Return for periodic input values. 611 612 Notes 613 ----- 614 The IRR is perhaps best understood through an example (illustrated 615 using np.irr in the Examples section below). Suppose one invests 100 616 units and then makes the following withdrawals at regular (fixed) 617 intervals: 39, 59, 55, 20. Assuming the ending value is 0, one's 100 618 unit investment yields 173 units; however, due to the combination of 619 compounding and the periodic withdrawals, the "average" rate of return 620 is neither simply 0.73/4 nor (1.73)^0.25-1. Rather, it is the solution 621 (for :math:`r`) of the equation: 622 623 .. math:: -100 + \\frac{39}{1+r} + \\frac{59}{(1+r)^2} 624 + \\frac{55}{(1+r)^3} + \\frac{20}{(1+r)^4} = 0 625 626 In general, for `values` :math:`= [v_0, v_1, ... v_M]`, 627 irr is the solution of the equation: [G]_ 628 629 .. math:: \\sum_{t=0}^M{\\frac{v_t}{(1+irr)^{t}}} = 0 630 631 References 632 ---------- 633 .. [G] L. J. Gitman, "Principles of Managerial Finance, Brief," 3rd ed., 634 Addison-Wesley, 2003, pg. 348. 635 636 Examples 637 -------- 638 >>> round(irr([-100, 39, 59, 55, 20]), 5) 639 0.28095 640 >>> round(irr([-100, 0, 0, 74]), 5) 641 -0.0955 642 >>> round(irr([-100, 100, 0, -7]), 5) 643 -0.0833 644 >>> round(irr([-100, 100, 0, 7]), 5) 645 0.06206 646 >>> round(irr([-5, 10.5, 1, -8, 1]), 5) 647 0.0886 648 649 (Compare with the Example given for numpy.lib.financial.npv) 650 651 """ 652 res = np.roots(values[::-1]) 653 mask = (res.imag == 0) & (res.real > 0) 654 if res.size == 0: 655 return np.nan 656 res = res[mask].real 657 # NPV(rate) = 0 can have more than one solution so we return 658 # only the solution closest to zero. 659 rate = 1.0/res - 1 660 rate = rate.item(np.argmin(np.abs(rate))) 661 return rate 662 663 def npv(rate, values): 664 """ 665 Returns the NPV (Net Present Value) of a cash flow series. 666 667 Parameters 668 ---------- 669 rate : scalar 670 The discount rate. 671 values : array_like, shape(M, ) 672 The values of the time series of cash flows. The (fixed) time 673 interval between cash flow "events" must be the same as that for 674 which `rate` is given (i.e., if `rate` is per year, then precisely 675 a year is understood to elapse between each cash flow event). 
By 676 convention, investments or "deposits" are negative, income or 677 "withdrawals" are positive; `values` must begin with the initial 678 investment, thus `values[0]` will typically be negative. 679 680 Returns 681 ------- 682 out : float 683 The NPV of the input cash flow series `values` at the discount 684 `rate`. 685 686 Notes 687 ----- 688 Returns the result of: [G]_ 689 690 .. math :: \\sum_{t=0}^{M-1}{\\frac{values_t}{(1+rate)^{t}}} 691 692 References 693 ---------- 694 .. [G] L. J. Gitman, "Principles of Managerial Finance, Brief," 3rd ed., 695 Addison-Wesley, 2003, pg. 346. 696 697 Examples 698 -------- 699 >>> np.npv(0.281,[-100, 39, 59, 55, 20]) 700 -0.0084785916384548798 701 702 (Compare with the Example given for numpy.lib.financial.irr) 703 704 """ 705 values = np.asarray(values) 706 return (values / (1+rate)**np.arange(0, len(values))).sum(axis=0) 707 708 def mirr(values, finance_rate, reinvest_rate): 709 """ 710 Modified internal rate of return. 711 712 Parameters 713 ---------- 714 values : array_like 715 Cash flows (must contain at least one positive and one negative 716 value) or nan is returned. The first value is considered a sunk 717 cost at time zero. 718 finance_rate : scalar 719 Interest rate paid on the cash flows 720 reinvest_rate : scalar 721 Interest rate received on the cash flows upon reinvestment 722 723 Returns 724 ------- 725 out : float 726 Modified internal rate of return 727 728 """ 729 values = np.asarray(values, dtype=np.double) 730 n = values.size 731 pos = values > 0 732 neg = values < 0 733 if not (pos.any() and neg.any()): 734 return np.nan 735 numer = np.abs(npv(reinvest_rate, values*pos)) 736 denom = np.abs(npv(finance_rate, values*neg)) 737 return (numer/denom)**(1.0/(n - 1))*(1 + reinvest_rate) - 1 738 [end of numpy/lib/financial.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
numpy/numpy
b3c772c74bcf722b50c02232ec3571ad7f77b846
BUG: ma: Several functions in the ma module return None when given a scalar. Here's an example, using `ma.atleast_1d`: ``` In [4]: np.__version__ Out[4]: '1.8.0.dev-31a5501' ``` When given a scalar, the regular version of `atleast_1d` returns an array: ``` In [5]: x = np.atleast_1d(1.0) In [6]: x Out[6]: array([ 1.]) ``` `ma.atleast_1d` returns `None`: ``` In [7]: x = np.ma.atleast_1d(1.0) In [8]: print x None ``` `ma.atleast_1d` and several other functions in the `ma` module are defined using the class `_fromnxfunction` (defined in numpy/ma/extras.py). In the `__call__` method of this class, in the case where `len(args) == 1`, the code handles the cases where the single argument is an ndarray, a tuple or list. If the single argument is not one of these, the code falls through to the end without returning anything. The non-masked versions of these accept a scalar, so the masked version should too: ``` atleast_1d atleast_2d atleast_3d diagflat ```
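The missing piece the report describes is just a final `else:` branch in `_fromnxfunction.__call__`, which the patch below adds. A minimal, self-contained sketch of that scalar fallback (`getmaskarray` and `masked_array` are helpers `numpy.ma` already exposes; `_scalar_fallback` is a hypothetical name used only for illustration):

```python
import numpy as np
from numpy.ma import getmaskarray, masked_array

def _scalar_fallback(func, x, **params):
    # What the added branch does when the single argument is neither an
    # ndarray nor a tuple/list: treat it as array_like and wrap the result
    # (and a matching mask) in a masked array instead of returning None.
    _d = func(np.asarray(x), **params)
    _m = func(getmaskarray(x), **params)
    return masked_array(_d, mask=_m)

print(_scalar_fallback(np.atleast_1d, 1.0))  # a one-element masked array, not None
```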
2015-10-21T16:04:57Z
<patch> diff --git a/numpy/ma/extras.py b/numpy/ma/extras.py --- a/numpy/ma/extras.py +++ b/numpy/ma/extras.py @@ -270,6 +270,10 @@ def __call__(self, *args, **params): _d = func(tuple([np.asarray(a) for a in x]), **params) _m = func(tuple([getmaskarray(a) for a in x]), **params) return masked_array(_d, mask=_m) + else: + _d = func(np.asarray(x), **params) + _m = func(getmaskarray(x), **params) + return masked_array(_d, mask=_m) else: arrays = [] args = list(args) </patch>
[]
[]
Qiskit__qiskit-1373
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> clbit_labels should not contain null <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues --> ### Information The definition of `clbit_labels` was changed in #879 by @dcmckayibm. It used to be something like: ```[["c", 2], ["d", 3]] ```, which assigns contiguous regions of memory to each register. Now it is like this, where single classical bits can be assigned to arbitrary locations of memory. ```[["c", 1], ["c", 0], ["d", 0], ["d", 2], ["d", 1]]``` I agree with this change. However, the schema currently also allows something like this for `clbit_labels`. ```[["c", 1], ["c", 0], ["d", 0], ["d", 2], ["d", 1], null, null]``` I don't think the schema should allow this. I want to have a convention that for each experiment, `memory_slots` defines the total amount of classical (slow) memory bits for that experiment, and `clbit_labels` have a 1-1 correspondence with those. It is fine for `qubit_labels` to have null, as the device can have more qubits than used for an experiment. But `clbit_labels` should just be as many classical bits as the experiment uses. By the way this information is important for Terra to be able to rebuild the original circuit registers in the Result. </issue> <code> [start of README.md] 1 # Qiskit Terra 2 3 [![PyPI](https://img.shields.io/pypi/v/qiskit.svg)](https://pypi.python.org/pypi/qiskit) 4 [![Build Status](https://travis-ci.org/Qiskit/qiskit-terra.svg?branch=master)](https://travis-ci.org/Qiskit/qiskit-terra) 5 [![Build Status IBM Q](https://travis-matrix-badges.herokuapp.com/repos/Qiskit/qiskit-terra/branches/master/8)](https://travis-ci.org/Qiskit/qiskit-terra) 6 7 **Qiskit** is a software development kit for 8 developing quantum computing applications and working with NISQ (Noisy-Intermediate Scale Quantum) computers. 9 10 Qiskit is made up elements that each work together to enable quantum computing. This element is **Terra** 11 and is the foundation on which the rest of Qiskit is built (see this [post](https://medium.com/qiskit/qiskit-and-its-fundamental-elements-bcd7ead80492) for an overview). 12 13 14 ## Installation 15 16 17 We encourage installing Qiskit via the PIP tool (a python package manager): 18 19 ```bash 20 pip install qiskit 21 ``` 22 23 PIP will handle all dependencies automatically for us and you will always install the latest (and well-tested) version. 24 25 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using Qiskit. In 26 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended 27 for interacting with the tutorials. 28 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads) 29 python distribution, as it comes with all of these dependencies pre-installed. 30 31 See [installing](doc/install.rst) Qiskit for detailed instructions, how to build from source and using environments. 32 33 34 ## Creating your first quantum program 35 36 Now that Qiskit is installed, it's time to begin working with Terra. 37 38 We are ready to try out a quantum circuit example, which is simulated locally using 39 the Qiskt Aer element. This is a simple example that makes an entangled state. 
40 41 ``` 42 $ python 43 ``` 44 45 ```python 46 >>> from qiskit import * 47 >>> q = QuantumRegister(2) 48 >>> c = ClassicalRegister(2) 49 >>> qc = QuantumCircuit(q, c) 50 >>> qc.h(q[0]) 51 >>> qc.cx(q[0], q[1]) 52 >>> qc.measure(q, c) 53 >>> backend_sim = Aer.get_backend('qasm_simulator') 54 >>> result = execute(qc, backend_sim).result() 55 >>> print(result.get_counts(qc)) 56 ``` 57 58 In this case, the output will be: 59 60 ```python 61 {'counts': {'00': 513, '11': 511}} 62 ``` 63 64 A script is available [here](examples/python/hello_quantum.py), where we also show how to 65 run the same program on a real quantum computer via IBMQ. 66 67 ### Executing your code on a real quantum chip 68 69 You can also use Qiskit to execute your code on a 70 **real quantum chip**. 71 In order to do so, you need to configure Qiskit for using the credentials in 72 your IBM Q account: 73 74 #### Configure your IBMQ credentials 75 76 1. Create an _[IBM Q](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so. 77 78 2. Get an API token from the IBM Q website under _My Account > Advanced > API Token_. 79 80 3. Take your token from step 2, here called `MY_API_TOKEN`, and run: 81 82 ```python 83 >>> from qiskit import IBMQ 84 >>> IBMQ.save_account('MY_API_TOKEN') 85 ``` 86 87 4. If you have access to the IBM Q Network features, you also need to pass the 88 url listed on your IBM Q account page to `save_account`. 89 90 After calling `IBMQ.save_account()`, your credentials will be stored on disk. 91 Once they are stored, at any point in the future you can load and use them 92 in your program simply via: 93 94 ```python 95 >>> from qiskit import IBMQ 96 >>> IBMQ.load_accounts() 97 ``` 98 99 For those who do not want to save there credentials to disk please use 100 101 ```python 102 >>> from qiskit import IBMQ 103 >>> IBMQ.enable_account('MY_API_TOKEN') 104 ``` 105 106 and the token will only be active for the session. For examples using Terra with real 107 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in 108 the levels. 109 110 ## Contribution guidelines 111 112 If you'd like to contribute to Qiskit, please take a look at our 113 [contribution guidelines](.github/CONTRIBUTING.rst). This project adheres to Qiskit's [code of conduct](.github/CODE_OF_CONDUCT.rst). By participating, you are expect to uphold to this code. 114 115 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. 116 Please use our [slack](https://qiskit.slack.com) for discussion. To join our Slack community use the [link](https://join.slack.com/t/qiskit/shared_invite/enQtNDc2NjUzMjE4Mzc0LTMwZmE0YTM4ZThiNGJmODkzN2Y2NTNlMDIwYWNjYzA2ZmM1YTRlZGQ3OGM0NjcwMjZkZGE0MTA4MGQ1ZTVmYzk). To ask questions to [Stack Overflow](https://stackoverflow.com/questions/tagged/qiskit). 117 118 119 120 ### Next Steps 121 122 Now you're set up and ready to check out some of the other examples from our 123 [Qiskit Tutorial](https://github.com/Qiskit/qiskit-tutorial) repository. 124 125 126 ## Authors 127 128 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 129 to the project at different levels. 130 131 ## License 132 133 [Apache License 2.0](LICENSE.txt) [end of README.md] [start of qiskit/backends/aer/qasm_simulator_py.py] 1 # -*- coding: utf-8 -*- 2 3 # Copyright 2017, IBM. 
4 # 5 # This source code is licensed under the Apache License, Version 2.0 found in 6 # the LICENSE.txt file in the root directory of this source tree. 7 8 # pylint: disable=invalid-name 9 10 """Contains a (slow) python simulator. 11 12 It simulates a qasm quantum circuit that has been compiled to run on the 13 simulator. It is exponential in the number of qubits. 14 15 We advise using the c++ simulator or online simulator for larger size systems. 16 17 The input is a qobj dictionary 18 19 and the output is a Results object 20 21 results['data']["counts"] where this is dict {"0000" : 454} 22 23 The simulator is run using 24 25 .. code-block:: python 26 27 QasmSimulatorPy(compiled_circuit,shots,seed).run(). 28 29 .. code-block:: guess 30 31 compiled_circuit = 32 { 33 "header": { 34 "number_of_qubits": 2, // int 35 "number_of_clbits": 2, // int 36 "qubit_labels": [["q", 0], ["v", 0]], // list[list[string, int]] 37 "clbit_labels": [["c", 2]], // list[list[string, int]] 38 } 39 "operations": // list[map] 40 [ 41 { 42 "name": , // required -- string 43 "params": , // optional -- list[double] 44 "qubits": , // required -- list[int] 45 "clbits": , // optional -- list[int] 46 "conditional": // optional -- map 47 { 48 "type": , // string 49 "mask": , // hex string 50 "val": , // bhex string 51 } 52 }, 53 ] 54 } 55 56 .. code-block:: python 57 58 result = 59 { 60 'data': { 61 'statevector': array([ 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]), 62 'classical_state': 0 63 'counts': {'0000': 1} 64 'snapshots': { '0': {'statevector': array([1.+0.j, 0.+0.j, 65 0.+0.j, 0.+0.j])}} 66 } 67 } 68 'time_taken': 0.002 69 'status': 'DONE' 70 } 71 72 """ 73 import random 74 import uuid 75 import time 76 import logging 77 from collections import Counter 78 79 from math import log2 80 import numpy as np 81 from qiskit._util import local_hardware_info 82 from qiskit.backends.models import BackendConfiguration, BackendProperties 83 from qiskit.result._utils import copy_qasm_from_qobj_into_result, result_from_old_style_dict 84 from qiskit.backends import BaseBackend 85 from qiskit.backends.aer.aerjob import AerJob 86 from ._simulatorerror import SimulatorError 87 from ._simulatortools import index2, single_gate_matrix 88 logger = logging.getLogger(__name__) 89 90 91 class QasmSimulatorPy(BaseBackend): 92 """Python implementation of a qasm simulator.""" 93 94 DEFAULT_CONFIGURATION = { 95 'backend_name': 'qasm_simulator_py', 96 'backend_version': '2.0.0', 97 'n_qubits': int(log2(local_hardware_info()['memory'] * (1024**3)/16)), 98 'url': 'https://github.com/Qiskit/qiskit-terra', 99 'simulator': True, 100 'local': True, 101 'conditional': True, 102 'open_pulse': False, 103 'memory': True, 104 'max_shots': 65536, 105 'description': 'A python simulator for qasm experiments', 106 'basis_gates': ['u1', 'u2', 'u3', 'cx', 'id', 'snapshot'], 107 'gates': [{'name': 'TODO', 'parameters': [], 'qasm_def': 'TODO'}] 108 } 109 110 def __init__(self, configuration=None, provider=None): 111 super().__init__(configuration=(configuration or 112 BackendConfiguration.from_dict(self.DEFAULT_CONFIGURATION)), 113 provider=provider) 114 115 self._local_random = random.Random() 116 117 # Define attributes in __init__. 
118 self._classical_state = 0 119 self._statevector = 0 120 self._snapshots = {} 121 self._number_of_cbits = 0 122 self._number_of_qubits = 0 123 self._shots = 0 124 self._qobj_config = None 125 126 def properties(self): 127 """Return backend properties""" 128 properties = { 129 'backend_name': self.name(), 130 'backend_version': self.configuration().backend_version, 131 'last_update_date': '2000-01-01 00:00:00Z', 132 'qubits': [[{'name': 'TODO', 'date': '2000-01-01 00:00:00Z', 133 'unit': 'TODO', 'value': 0}]], 134 'gates': [{'qubits': [0], 'gate': 'TODO', 135 'parameters': 136 [{'name': 'TODO', 'date': '2000-01-01 00:00:00Z', 137 'unit': 'TODO', 'value': 0}]}], 138 'general': [] 139 } 140 141 return BackendProperties.from_dict(properties) 142 143 def _add_qasm_single(self, gate, qubit): 144 """Apply an arbitrary 1-qubit operator to a qubit. 145 146 Gate is the single qubit applied. 147 qubit is the qubit the gate is applied to. 148 """ 149 psi = self._statevector 150 bit = 1 << qubit 151 for k1 in range(0, 1 << self._number_of_qubits, 1 << (qubit+1)): 152 for k2 in range(0, 1 << qubit, 1): 153 k = k1 | k2 154 cache0 = psi[k] 155 cache1 = psi[k | bit] 156 psi[k] = gate[0, 0] * cache0 + gate[0, 1] * cache1 157 psi[k | bit] = gate[1, 0] * cache0 + gate[1, 1] * cache1 158 159 def _add_qasm_cx(self, q0, q1): 160 """Optimized ideal CX on two qubits. 161 162 q0 is the first qubit (control) counts from 0. 163 q1 is the second qubit (target). 164 """ 165 psi = self._statevector 166 for k in range(0, 1 << (self._number_of_qubits - 2)): 167 # first bit is control, second is target 168 ind1 = index2(1, q0, 0, q1, k) 169 # swap target if control is 1 170 ind3 = index2(1, q0, 1, q1, k) 171 cache0 = psi[ind1] 172 cache1 = psi[ind3] 173 psi[ind3] = cache0 174 psi[ind1] = cache1 175 176 def _add_qasm_decision(self, qubit): 177 """Apply the decision of measurement/reset qubit gate. 178 179 qubit is the qubit that is measured/reset 180 """ 181 probability_zero = 0 182 random_number = self._local_random.random() 183 for ii in range(1 << self._number_of_qubits): 184 if ii & (1 << qubit) == 0: 185 probability_zero += np.abs(self._statevector[ii])**2 186 if random_number <= probability_zero: 187 outcome = '0' 188 norm = np.sqrt(probability_zero) 189 else: 190 outcome = '1' 191 norm = np.sqrt(1-probability_zero) 192 return (outcome, norm) 193 194 def _add_qasm_measure(self, qubit, cbit): 195 """Apply the measurement qubit gate. 196 197 qubit is the qubit measured. 198 cbit is the classical bit the measurement is assigned to. 199 """ 200 outcome, norm = self._add_qasm_decision(qubit) 201 for ii in range(1 << self._number_of_qubits): 202 # update quantum state 203 if (ii >> qubit) & 1 == int(outcome): 204 self._statevector[ii] = self._statevector[ii]/norm 205 else: 206 self._statevector[ii] = 0 207 # update classical state 208 bit = 1 << cbit 209 self._classical_state = (self._classical_state & (~bit)) | (int(outcome) << cbit) 210 211 def _add_qasm_reset(self, qubit): 212 """Apply the reset to the qubit. 213 214 This is done by doing a measruement and if 0 do nothing and 215 if 1 flip the qubit. 216 217 qubit is the qubit that is reset. 
218 """ 219 # TODO: slow, refactor later 220 outcome, norm = self._add_qasm_decision(qubit) 221 temp = np.copy(self._statevector) 222 self._statevector.fill(0.0) 223 # measurement 224 for ii in range(1 << self._number_of_qubits): 225 if (ii >> qubit) & 1 == int(outcome): 226 temp[ii] = temp[ii]/norm 227 else: 228 temp[ii] = 0 229 # reset 230 if outcome == '1': 231 for ii in range(1 << self._number_of_qubits): 232 iip = (~ (1 << qubit)) & ii # bit number qubit set to zero 233 self._statevector[iip] += temp[ii] 234 else: 235 self._statevector = temp 236 237 def _add_qasm_snapshot(self, slot): 238 """Snapshot instruction to record simulator's internal representation 239 of quantum statevector. 240 241 slot is a string indicating a snapshot slot label. 242 """ 243 self._snapshots.setdefault(str(slot), 244 {}).setdefault("statevector", 245 []).append(np.copy(self._statevector)) 246 247 def run(self, qobj): 248 """Run qobj asynchronously. 249 250 Args: 251 qobj (dict): job description 252 253 Returns: 254 AerJob: derived from BaseJob 255 """ 256 job_id = str(uuid.uuid4()) 257 aer_job = AerJob(self, job_id, self._run_job, qobj) 258 aer_job.submit() 259 return aer_job 260 261 def _run_job(self, job_id, qobj): 262 """Run circuits in qobj""" 263 self._validate(qobj) 264 result_list = [] 265 self._shots = qobj.config.shots 266 self._qobj_config = qobj.config 267 start = time.time() 268 269 for circuit in qobj.experiments: 270 result_list.append(self.run_circuit(circuit)) 271 end = time.time() 272 result = {'backend': self.name(), 273 'id': qobj.qobj_id, 274 'job_id': job_id, 275 'result': result_list, 276 'status': 'COMPLETED', 277 'success': True, 278 'time_taken': (end - start)} 279 280 copy_qasm_from_qobj_into_result(qobj, result) 281 282 return result_from_old_style_dict(result) 283 284 def run_circuit(self, circuit): 285 """Run a circuit and return a single Result. 286 287 Args: 288 circuit (QobjExperiment): experiment from qobj experiments list 289 290 Returns: 291 dict: A dictionary of results which looks something like:: 292 293 { 294 "data": 295 { #### DATA CAN BE A DIFFERENT DICTIONARY FOR EACH BACKEND #### 296 "counts": {'00000': XXXX, '00001': XXXXX}, 297 "time" : xx.xxxxxxxx 298 }, 299 "status": --status (string)-- 300 } 301 Raises: 302 SimulatorError: if an error occurred. 303 """ 304 self._number_of_qubits = circuit.header.number_of_qubits 305 self._number_of_cbits = circuit.header.number_of_clbits 306 self._statevector = 0 307 self._classical_state = 0 308 self._snapshots = {} 309 cl_reg_index = [] # starting bit index of classical register 310 cl_reg_nbits = [] # number of bits in classical register 311 cbit_index = 0 312 for cl_reg in circuit.header.clbit_labels: 313 cl_reg_nbits.append(cl_reg[1]) 314 cl_reg_index.append(cbit_index) 315 cbit_index += cl_reg[1] 316 317 # Get the seed looking in circuit, qobj, and then random. 
318 if hasattr(circuit, 'config') and hasattr(circuit.config, 'seed'): 319 seed = circuit.config.seed 320 elif hasattr(self._qobj_config, 'seed'): 321 seed = self._qobj_config.seed 322 else: 323 seed = random.getrandbits(32) 324 self._local_random.seed(seed) 325 outcomes = [] 326 327 start = time.time() 328 for _ in range(self._shots): 329 self._statevector = np.zeros(1 << self._number_of_qubits, 330 dtype=complex) 331 self._statevector[0] = 1 332 self._classical_state = 0 333 for operation in circuit.instructions: 334 if getattr(operation, 'conditional', None): 335 mask = int(operation.conditional.mask, 16) 336 if mask > 0: 337 value = self._classical_state & mask 338 while (mask & 0x1) == 0: 339 mask >>= 1 340 value >>= 1 341 if value != int(operation.conditional.val, 16): 342 continue 343 # Check if single gate 344 if operation.name in ('U', 'u1', 'u2', 'u3'): 345 params = getattr(operation, 'params', None) 346 qubit = operation.qubits[0] 347 gate = single_gate_matrix(operation.name, params) 348 self._add_qasm_single(gate, qubit) 349 # Check if CX gate 350 elif operation.name in ('id', 'u0'): 351 pass 352 elif operation.name in ('CX', 'cx'): 353 qubit0 = operation.qubits[0] 354 qubit1 = operation.qubits[1] 355 self._add_qasm_cx(qubit0, qubit1) 356 # Check if measure 357 elif operation.name == 'measure': 358 qubit = operation.qubits[0] 359 cbit = operation.clbits[0] 360 self._add_qasm_measure(qubit, cbit) 361 # Check if reset 362 elif operation.name == 'reset': 363 qubit = operation.qubits[0] 364 self._add_qasm_reset(qubit) 365 # Check if barrier 366 elif operation.name == 'barrier': 367 pass 368 # Check if snapshot command 369 elif operation.name == 'snapshot': 370 params = operation.params 371 self._add_qasm_snapshot(params[0]) 372 else: 373 backend = self.name() 374 err_msg = '{0} encountered unrecognized operation "{1}"' 375 raise SimulatorError(err_msg.format(backend, 376 operation.name)) 377 # Turn classical_state (int) into bit string 378 outcomes.append(bin(self._classical_state)[2:].zfill( 379 self._number_of_cbits)) 380 # Return the results 381 counts = dict(Counter(outcomes)) 382 data = { 383 'counts': self._format_result(counts, cl_reg_index, cl_reg_nbits), 384 'snapshots': self._snapshots 385 } 386 end = time.time() 387 return {'name': circuit.header.name, 388 'seed': seed, 389 'shots': self._shots, 390 'data': data, 391 'status': 'DONE', 392 'success': True, 393 'time_taken': (end-start)} 394 395 def _validate(self, qobj): 396 for experiment in qobj.experiments: 397 if 'measure' not in [op.name for 398 op in experiment.instructions]: 399 logger.warning("no measurements in circuit '%s', " 400 "classical register will remain all zeros.", 401 experiment.header.name) 402 403 def _format_result(self, counts, cl_reg_index, cl_reg_nbits): 404 """Format the result bit string. 405 406 This formats the result bit strings such that spaces are inserted 407 at register divisions. 408 409 Args: 410 counts (dict): dictionary of counts e.g. {'1111': 1000, '0000':5} 411 cl_reg_index (list): starting bit index of classical register 412 cl_reg_nbits (list): total amount of bits in classical register 413 Returns: 414 dict: spaces inserted into dictionary keys at register boundaries. 
415 """ 416 fcounts = {} 417 for key, value in counts.items(): 418 if cl_reg_nbits: 419 new_key = [key[-cl_reg_nbits[0]:]] 420 for index, nbits in zip(cl_reg_index[1:], 421 cl_reg_nbits[1:]): 422 new_key.insert(0, key[-(index+nbits):-index]) 423 fcounts[' '.join(new_key)] = value 424 return fcounts 425 [end of qiskit/backends/aer/qasm_simulator_py.py] [start of qiskit/backends/ibmq/ibmqjob.py] 1 # -*- coding: utf-8 -*- 2 3 # Copyright 2017, IBM. 4 # 5 # This source code is licensed under the Apache License, Version 2.0 found in 6 # the LICENSE.txt file in the root directory of this source tree. 7 8 """IBMQJob module 9 10 This module is used for creating asynchronous job objects for the 11 IBM Q Experience. 12 """ 13 14 from concurrent import futures 15 import warnings 16 import time 17 import logging 18 import pprint 19 import contextlib 20 import json 21 import datetime 22 import numpy 23 24 from qiskit.qobj import qobj_to_dict 25 from qiskit.transpiler import transpile_dag 26 from qiskit.backends import BaseJob, JobError, JobTimeoutError 27 from qiskit.backends.jobstatus import JobStatus, JOB_FINAL_STATES 28 from qiskit.result import Result 29 from qiskit.result._utils import result_from_old_style_dict 30 from qiskit.qobj import validate_qobj_against_schema 31 32 from .api import ApiError 33 34 35 logger = logging.getLogger(__name__) 36 37 38 API_FINAL_STATES = ( 39 'COMPLETED', 40 'CANCELLED', 41 'ERROR_CREATING_JOB', 42 'ERROR_VALIDATING_JOB', 43 'ERROR_RUNNING_JOB' 44 ) 45 46 47 class IBMQJob(BaseJob): 48 """Represent the jobs that will be executed on IBM-Q simulators and real 49 devices. Jobs are intended to be created calling ``run()`` on a particular 50 backend. 51 52 Creating a ``Job`` instance does not imply running it. You need to do it in 53 separate steps:: 54 55 job = IBMQJob(...) 56 job.submit() # It won't block. 57 58 An error while submitting a job will cause the next call to ``status()`` to 59 raise. If submitting the job successes, you can inspect the job's status by 60 using ``status()``. Status can be one of ``JobStatus`` members:: 61 62 from qiskit.backends.jobstatus import JobStatus 63 64 job = IBMQJob(...) 65 job.submit() 66 67 try: 68 job_status = job.status() # It won't block. It will query the backend API. 69 if job_status is JobStatus.RUNNING: 70 print('The job is still running') 71 72 except JobError as ex: 73 print("Something wrong happened!: {}".format(ex)) 74 75 A call to ``status()`` can raise if something happens at the API level that 76 prevents Qiskit from determining the status of the job. An example of this 77 is a temporary connection lose or a network failure. 78 79 The ``submit()`` and ``status()`` methods are examples of non-blocking API. 80 ``Job`` instances also have `id()` and ``result()`` methods which will 81 block:: 82 83 job = IBMQJob(...) 84 job.submit() 85 86 try: 87 job_id = job.id() # It will block until completing submission. 88 print('The job {} was successfully submitted'.format(job_id)) 89 90 job_result = job.result() # It will block until finishing. 91 print('The job finished with result {}'.format(job_result)) 92 93 except JobError as ex: 94 print("Something wrong happened!: {}".format(ex)) 95 96 97 Both methods can raise if something ath the API level happens that prevent 98 Qiskit from determining the status of the job. 99 100 .. NOTE:: 101 When querying the API for getting the status, two kinds of errors are 102 possible. The most severe is the one preventing Qiskit from getting a 103 response from the backend. 
This can be caused by a network failure or a 104 temporary system break. In these cases, calling ``status()`` will raise. 105 106 If Qiskit successfully retrieves the status of a job, it could be it 107 finished with errors. In that case, ``status()`` will simply return 108 ``JobStatus.ERROR`` and you can call ``error_message()`` to get more 109 info. 110 111 Attributes: 112 _executor (futures.Executor): executor to handle asynchronous jobs 113 """ 114 _executor = futures.ThreadPoolExecutor() 115 116 def __init__(self, backend, job_id, api, is_device, qobj=None, 117 creation_date=None, api_status=None, **kwargs): 118 """IBMQJob init function. 119 120 We can instantiate jobs from two sources: A QObj, and an already submitted job returned by 121 the API servers. 122 123 Args: 124 backend (str): The backend instance used to run this job. 125 job_id (str): The job ID of an already submitted job. Pass `None` 126 if you are creating a new one. 127 api (IBMQConnector): IBMQ connector. 128 is_device (bool): whether backend is a real device # TODO: remove this after Qobj 129 qobj (Qobj): The Quantum Object. See notes below 130 creation_date (str): When the job was run. 131 api_status (str): `status` field directly from the API response. 132 kwargs (dict): You can pass `backend_name` to this function although 133 it has been deprecated. 134 135 Notes: 136 It is mandatory to pass either ``qobj`` or ``job_id``. Passing a ``qobj`` 137 will ignore ``job_id`` and will create an instance to be submitted to the 138 API server for job creation. Passing only a `job_id`will create an instance 139 representing an already-created job retrieved from the API server. 140 """ 141 if 'backend_name' in kwargs: 142 warnings.warn('Passing the parameter `backend_name` is deprecated, ' 143 'pass the `backend` parameter with the instance of ' 144 'the backend running the job.', DeprecationWarning) 145 146 super().__init__(backend, job_id) 147 self._job_data = None 148 149 if qobj is not None: 150 validate_qobj_against_schema(qobj) 151 152 self._qobj_payload = qobj_to_dict(qobj, version='1.0.0') 153 # TODO: No need for this conversion, just use the new equivalent members above 154 old_qobj = qobj_to_dict(qobj, version='0.0.1') 155 self._job_data = { 156 'circuits': old_qobj['circuits'], 157 'hpc': old_qobj['config'].get('hpc'), 158 'seed': old_qobj['circuits'][0]['config']['seed'], 159 'shots': old_qobj['config']['shots'], 160 'max_credits': old_qobj['config']['max_credits'] 161 } 162 163 self._future_captured_exception = None 164 self._api = api 165 self._backend = backend 166 self._cancelled = False 167 self._status = JobStatus.INITIALIZING 168 # In case of not providing a `qobj`, it is assumed the job already 169 # exists in the API (with `job_id`). 170 if qobj is None: 171 # Some API calls (`get_status_jobs`, `get_status_job`) provide 172 # enough information to recreate the `Job`. If that is the case, try 173 # to make use of that information during instantiation, as 174 # `self.status()` involves an extra call to the API. 
175 if api_status == 'VALIDATING': 176 self._status = JobStatus.VALIDATING 177 elif api_status == 'COMPLETED': 178 self._status = JobStatus.DONE 179 elif api_status == 'CANCELLED': 180 self._status = JobStatus.CANCELLED 181 self._cancelled = True 182 else: 183 self.status() 184 self._queue_position = None 185 self._is_device = is_device 186 187 def current_utc_time(): 188 """Gets the current time in UTC format""" 189 datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat() 190 191 self._creation_date = creation_date or current_utc_time() 192 self._future = None 193 self._api_error_msg = None 194 195 # pylint: disable=arguments-differ 196 def result(self, timeout=None, wait=5): 197 """Return the result from the job. 198 199 Args: 200 timeout (int): number of seconds to wait for job 201 wait (int): time between queries to IBM Q server 202 203 Returns: 204 qiskit.Result: Result object 205 206 Raises: 207 JobError: exception raised during job initialization 208 """ 209 job_response = self._wait_for_result(timeout=timeout, wait=wait) 210 return self._result_from_job_response(job_response) 211 212 def _wait_for_result(self, timeout=None, wait=5): 213 self._wait_for_submission() 214 215 try: 216 job_response = self._wait_for_job(timeout=timeout, wait=wait) 217 except ApiError as api_err: 218 raise JobError(str(api_err)) 219 220 status = self.status() 221 if status is not JobStatus.DONE: 222 raise JobError('Invalid job state. The job should be DONE but ' 223 'it is {}'.format(str(status))) 224 225 return job_response 226 227 def _result_from_job_response(self, job_response): 228 return Result.from_dict(job_response['qObjectResult']) 229 230 def cancel(self): 231 """Attempt to cancel a job. 232 233 Returns: 234 bool: True if job can be cancelled, else False. Currently this is 235 only possible on commercial systems. 236 237 Raises: 238 JobError: if there was some unexpected failure in the server 239 """ 240 hub = self._api.config.get('hub', None) 241 group = self._api.config.get('group', None) 242 project = self._api.config.get('project', None) 243 244 try: 245 response = self._api.cancel_job(self._job_id, hub, group, project) 246 self._cancelled = 'error' not in response 247 return self._cancelled 248 except ApiError as error: 249 self._cancelled = False 250 raise JobError('Error cancelling job: %s' % error.usr_msg) 251 252 def status(self): 253 """Query the API to update the status. 254 255 Returns: 256 JobStatus: The status of the job, once updated. 257 258 Raises: 259 JobError: if there was an exception in the future being executed 260 or the server sent an unknown answer. 
261 """ 262 # Implies self._job_id is None 263 if self._future_captured_exception is not None: 264 raise JobError(str(self._future_captured_exception)) 265 266 if self._job_id is None or self._status in JOB_FINAL_STATES: 267 return self._status 268 269 try: 270 # TODO: See result values 271 api_job = self._api.get_status_job(self._job_id) 272 if 'status' not in api_job: 273 raise JobError('get_status_job didn\'t return status: %s' % 274 pprint.pformat(api_job)) 275 # pylint: disable=broad-except 276 except Exception as err: 277 raise JobError(str(err)) 278 279 if api_job['status'] == 'VALIDATING': 280 self._status = JobStatus.VALIDATING 281 282 elif api_job['status'] == 'RUNNING': 283 self._status = JobStatus.RUNNING 284 queued, self._queue_position = _is_job_queued(api_job) 285 if queued: 286 self._status = JobStatus.QUEUED 287 288 elif api_job['status'] == 'COMPLETED': 289 self._status = JobStatus.DONE 290 291 elif api_job['status'] == 'CANCELLED': 292 self._status = JobStatus.CANCELLED 293 self._cancelled = True 294 295 elif 'ERROR' in api_job['status']: 296 # Error status are of the form "ERROR_*_JOB" 297 self._status = JobStatus.ERROR 298 # TODO: This seems to be an inconsistency in the API package. 299 self._api_error_msg = api_job.get('error') or api_job.get('Error') 300 301 else: 302 raise JobError('Unrecognized answer from server: \n{}' 303 .format(pprint.pformat(api_job))) 304 305 return self._status 306 307 def error_message(self): 308 """Return the error message returned from the API server response.""" 309 return self._api_error_msg 310 311 def queue_position(self): 312 """Return the position in the server queue. 313 314 Returns: 315 Number: Position in the queue. 316 """ 317 return self._queue_position 318 319 def creation_date(self): 320 """ 321 Return creation date. 322 """ 323 return self._creation_date 324 325 # pylint: disable=invalid-name 326 def id(self): 327 """Return backend determined id. 328 329 If the Id is not set because the job is already initializing, this call 330 will block until we have an Id. 331 332 .. deprecated:: 0.6+ 333 After 0.6, this function is deprecated. Please use 334 `job.job_id()` instead. 335 """ 336 warnings.warn('The method `job.id()` is deprecated, use ' 337 '``job.job_id()`` instead.', DeprecationWarning) 338 return self.job_id() 339 340 def job_id(self): 341 """Return backend determined id. 342 343 If the Id is not set because the job is already initializing, this call 344 will block until we have an Id. 345 """ 346 self._wait_for_submission() 347 return self._job_id 348 349 def backend_name(self): 350 """ 351 Return backend name used for this job. 352 353 .. deprecated:: 0.6+ 354 After 0.6, this function is deprecated. Please use 355 `job.backend().name()` instead. 356 """ 357 warnings.warn('The use of `job.backend_name()` is deprecated, ' 358 'use `job.backend().name()` instead', DeprecationWarning) 359 return self.backend().name() 360 361 def submit(self): 362 """Submit job to IBM-Q. 363 364 Raises: 365 JobError: If we have already submitted the job. 366 """ 367 # TODO: Validation against the schema should be done here and not 368 # during initialization. Once done, we should document that the method 369 # can raise QobjValidationError. 370 if self._future is not None or self._job_id is not None: 371 raise JobError("We have already submitted the job!") 372 self._future = self._executor.submit(self._submit_callback) 373 374 def _submit_callback(self): 375 """Submit qobj job to IBM-Q. 
376 377 Returns: 378 dict: A dictionary with the response of the submitted job 379 """ 380 backend_name = self.backend().name() 381 382 try: 383 submit_info = self._api.run_job(self._qobj_payload, backend=backend_name) 384 # pylint: disable=broad-except 385 except Exception as err: 386 # Undefined error during submission: 387 # Capture and keep it for raising it when calling status(). 388 self._future_captured_exception = err 389 return None 390 391 # Error in the job after submission: 392 # Transition to the `ERROR` final state. 393 if 'error' in submit_info: 394 self._status = JobStatus.ERROR 395 self._api_error_msg = str(submit_info['error']) 396 return submit_info 397 398 # Submission success. 399 self._creation_date = submit_info.get('creationDate') 400 self._status = JobStatus.QUEUED 401 self._job_id = submit_info.get('id') 402 return submit_info 403 404 def _wait_for_job(self, timeout=60, wait=5): 405 """Wait until all online ran circuits of a qobj are 'COMPLETED'. 406 407 Args: 408 timeout (float or None): seconds to wait for job. If None, wait 409 indefinitely. 410 wait (float): seconds between queries 411 412 Returns: 413 dict: A dict with the contents of the API request. 414 415 Raises: 416 JobTimeoutError: if the job does not return results before a specified timeout. 417 JobError: if something wrong happened in some of the server API calls 418 """ 419 start_time = time.time() 420 while self.status() not in JOB_FINAL_STATES: 421 elapsed_time = time.time() - start_time 422 if timeout is not None and elapsed_time >= timeout: 423 raise JobTimeoutError( 424 'Timeout while waiting for the job: {}'.format(self._job_id) 425 ) 426 427 logger.info('status = %s (%d seconds)', self._status, elapsed_time) 428 time.sleep(wait) 429 430 if self._cancelled: 431 raise JobError( 432 'Job result impossible to retrieve. The job was cancelled.') 433 434 return self._api.get_job(self._job_id) 435 436 def _wait_for_submission(self, timeout=60): 437 """Waits for the request to return a job ID""" 438 if self._job_id is None: 439 if self._future is None: 440 raise JobError("You have to submit before asking for status or results!") 441 try: 442 submit_info = self._future.result(timeout=timeout) 443 if self._future_captured_exception is not None: 444 # pylint can't see if catch of None type 445 # pylint: disable=raising-bad-type 446 raise self._future_captured_exception 447 except TimeoutError as ex: 448 raise JobTimeoutError( 449 "Timeout waiting for the job being submitted: {}".format(ex) 450 ) 451 if 'error' in submit_info: 452 self._status = JobStatus.ERROR 453 self._api_error_msg = str(submit_info['error']) 454 raise JobError(str(submit_info['error'])) 455 456 457 class IBMQJobPreQobj(IBMQJob): 458 """ 459 Subclass of IBMQJob for handling pre-qobj jobs. 460 """ 461 462 def _submit_callback(self): 463 """Submit old style qasms job to IBM-Q. Can remove when all devices 464 understand Qobj. 
465 466 Returns: 467 dict: A dictionary with the response of the submitted job 468 """ 469 api_jobs = [] 470 circuits = self._job_data['circuits'] 471 for circuit in circuits: 472 job = _create_api_job_from_circuit(circuit) 473 api_jobs.append(job) 474 475 hpc_camel_cased = _format_hpc_parameters(self._job_data['hpc']) 476 seed = self._job_data['seed'] 477 shots = self._job_data['shots'] 478 max_credits = self._job_data['max_credits'] 479 480 try: 481 submit_info = self._api.run_job(api_jobs, backend=self.backend().name(), 482 shots=shots, max_credits=max_credits, 483 seed=seed, hpc=hpc_camel_cased) 484 # pylint: disable=broad-except 485 except Exception as err: 486 # Undefined error during submission: 487 # Capture and keep it for raising it when calling status(). 488 self._future_captured_exception = err 489 return None 490 491 # Error in the job after submission: 492 # Transition to the `ERROR` final state. 493 if 'error' in submit_info: 494 self._status = JobStatus.ERROR 495 self._api_error_msg = str(submit_info['error']) 496 return submit_info 497 498 # Submission success. 499 self._creation_date = submit_info.get('creationDate') 500 self._status = JobStatus.QUEUED 501 self._job_id = submit_info.get('id') 502 return submit_info 503 504 def _result_from_job_response(self, job_response): 505 if self._is_device: 506 _reorder_bits(job_response) 507 508 experiment_results = [] 509 for circuit_result in job_response['qasms']: 510 this_result = {'data': circuit_result['data'], 511 'compiled_circuit_qasm': circuit_result.get('qasm'), 512 'status': circuit_result['status'], 513 'success': circuit_result['status'] == 'DONE', 514 'shots': job_response['shots']} 515 if 'metadata' in circuit_result: 516 this_result['metadata'] = circuit_result['metadata'] 517 if 'header' in circuit_result['metadata'].get('compiled_circuit', {}): 518 this_result['header'] = \ 519 circuit_result['metadata']['compiled_circuit']['header'] 520 else: 521 this_result['header'] = {} 522 experiment_results.append(this_result) 523 524 return result_from_old_style_dict({ 525 'id': self._job_id, 526 'status': job_response['status'], 527 'used_credits': job_response.get('usedCredits'), 528 'result': experiment_results, 529 'backend_name': self.backend().name(), 530 'success': job_response['status'] == 'COMPLETED' 531 }) 532 533 534 def _reorder_bits(job_data): 535 """Temporary fix for ibmq backends. 536 537 For every ran circuit, get reordering information from qobj 538 and apply reordering on result. 539 540 Args: 541 job_data (dict): dict with the bare contents of the API.get_job request. 542 543 Raises: 544 JobError: raised if the creg sizes don't add up in result header. 
545 """ 546 for circuit_result in job_data['qasms']: 547 if 'metadata' in circuit_result: 548 circ = circuit_result['metadata'].get('compiled_circuit') 549 else: 550 logger.warning('result object missing metadata for reordering' 551 ' bits: bits may be out of order') 552 return 553 # device_qubit -> device_clbit (how it should have been) 554 measure_dict = {op['qubits'][0]: op['clbits'][0] 555 for op in circ['operations'] 556 if op['name'] == 'measure'} 557 counts_dict_new = {} 558 for item in circuit_result['data']['counts'].items(): 559 # fix clbit ordering to what it should have been 560 bits = list(item[0]) 561 bits.reverse() # lsb in 0th position 562 count = item[1] 563 reordered_bits = list('x' * len(bits)) 564 for device_clbit, bit in enumerate(bits): 565 if device_clbit in measure_dict: 566 correct_device_clbit = measure_dict[device_clbit] 567 reordered_bits[correct_device_clbit] = bit 568 reordered_bits.reverse() 569 570 # only keep the clbits specified by circuit, not everything on device 571 num_clbits = circ['header']['number_of_clbits'] 572 compact_key = reordered_bits[-num_clbits:] 573 compact_key = "".join([b if b != 'x' else '0' 574 for b in compact_key]) 575 576 # insert spaces to signify different classical registers 577 cregs = circ['header']['clbit_labels'] 578 if sum([creg[1] for creg in cregs]) != num_clbits: 579 raise JobError("creg sizes don't add up in result header.") 580 creg_begin_pos = [] 581 creg_end_pos = [] 582 acc = 0 583 for creg in reversed(cregs): 584 creg_size = creg[1] 585 creg_begin_pos.append(acc) 586 creg_end_pos.append(acc + creg_size) 587 acc += creg_size 588 compact_key = " ".join([compact_key[creg_begin_pos[i]:creg_end_pos[i]] 589 for i in range(len(cregs))]) 590 591 # marginalize over unwanted measured qubits 592 if compact_key not in counts_dict_new: 593 counts_dict_new[compact_key] = count 594 else: 595 counts_dict_new[compact_key] += count 596 597 circuit_result['data']['counts'] = counts_dict_new 598 599 600 def _numpy_type_converter(obj): 601 ret = obj 602 if isinstance(obj, numpy.integer): 603 ret = int(obj) 604 elif isinstance(obj, numpy.floating): # pylint: disable=no-member 605 ret = float(obj) 606 elif isinstance(obj, numpy.ndarray): 607 ret = obj.tolist() 608 return ret 609 610 611 def _create_api_job_from_circuit(circuit): 612 """Helper function that creates a special job required by the API, from a circuit.""" 613 api_job = {} 614 if not circuit.get('compiled_circuit_qasm'): 615 compiled_circuit = transpile_dag(circuit['circuit']) 616 circuit['compiled_circuit_qasm'] = compiled_circuit.qasm(qeflag=True) 617 618 if isinstance(circuit['compiled_circuit_qasm'], bytes): 619 api_job['qasm'] = circuit['compiled_circuit_qasm'].decode() 620 else: 621 api_job['qasm'] = circuit['compiled_circuit_qasm'] 622 623 if circuit.get('name'): 624 api_job['name'] = circuit['name'] 625 626 # convert numpy types for json serialization 627 compiled_circuit = json.loads(json.dumps(circuit['compiled_circuit'], 628 default=_numpy_type_converter)) 629 630 api_job['metadata'] = {'compiled_circuit': compiled_circuit} 631 return api_job 632 633 634 def _is_job_queued(api_job_response): 635 """Checks whether a job has been queued or not.""" 636 is_queued, position = False, 0 637 if 'infoQueue' in api_job_response: 638 if 'status' in api_job_response['infoQueue']: 639 queue_status = api_job_response['infoQueue']['status'] 640 is_queued = queue_status == 'PENDING_IN_QUEUE' 641 if 'position' in api_job_response['infoQueue']: 642 position = 
api_job_response['infoQueue']['position'] 643 return is_queued, position 644 645 646 def _format_hpc_parameters(hpc): 647 """Helper function to get HPC parameters with the correct format""" 648 if hpc is None: 649 return None 650 651 hpc_camel_cased = None 652 with contextlib.suppress(KeyError, TypeError): 653 # Use CamelCase when passing the hpc parameters to the API. 654 hpc_camel_cased = { 655 'multiShotOptimization': hpc['multi_shot_optimization'], 656 'ompNumThreads': hpc['omp_num_threads'] 657 } 658 659 return hpc_camel_cased 660 [end of qiskit/backends/ibmq/ibmqjob.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Qiskit/qiskit
f57f633aa1b1669a00033f3b86ab9e74df128e4c
clbit_labels should not contain null <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues --> ### Information The definition of `clbit_labels` was changed in #879 by @dcmckayibm. It used to be something like: ```[["c", 2], ["d", 3]] ```, which assigns contiguous regions of memory to each register. Now it is like this, where single classical bits can be assigned to arbitrary locations of memory. ```[["c", 1], ["c", 0], ["d", 0], ["d", 2], ["d", 1]]``` I agree with this change. However, the schema currently also allows something like this for `clbit_labels`. ```[["c", 1], ["c", 0], ["d", 0], ["d", 2], ["d", 1], null, null]``` I don't think the schema should allow this. I want to have a convention that for each experiment, `memory_slots` defines the total amount of classical (slow) memory bits for that experiment, and `clbit_labels` have a 1-1 correspondence with those. It is fine for `qubit_labels` to have null, as the device can have more qubits than used for an experiment. But `clbit_labels` should just be as many classical bits as the experiment uses. By the way this information is important for Terra to be able to rebuild the original circuit registers in the Result.
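A minimal sketch (not Terra's actual implementation; the helper name and return shape are invented for illustration) of how a Result consumer could rebuild the classical registers under the convention requested above, where `clbit_labels` has exactly one `[name, index]` entry per memory slot and no `null` entries:

```python
from collections import OrderedDict

def registers_from_clbit_labels(clbit_labels, memory_slots):
    # Recover register sizes and a memory-slot -> (register, bit) map,
    # assuming one [name, index] entry per memory slot (no nulls).
    if len(clbit_labels) != memory_slots:
        raise ValueError("clbit_labels must have one entry per memory slot")
    sizes = OrderedDict()
    slot_map = {}
    for slot, (name, bit_index) in enumerate(clbit_labels):
        sizes[name] = max(sizes.get(name, 0), bit_index + 1)
        slot_map[slot] = (name, bit_index)
    return sizes, slot_map

labels = [["c", 1], ["c", 0], ["d", 0], ["d", 2], ["d", 1]]
sizes, slot_map = registers_from_clbit_labels(labels, memory_slots=5)
print(sizes)     # OrderedDict([('c', 2), ('d', 3)])
print(slot_map)  # {0: ('c', 1), 1: ('c', 0), 2: ('d', 0), 3: ('d', 2), 4: ('d', 1)}
```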
I agree. Sure
2018-11-29T12:50:43Z
<patch> diff --git a/qiskit/backends/aer/qasm_simulator.py b/qiskit/backends/aer/qasm_simulator.py --- a/qiskit/backends/aer/qasm_simulator.py +++ b/qiskit/backends/aer/qasm_simulator.py @@ -22,11 +22,9 @@ import numpy as np from qiskit._util import local_hardware_info from qiskit.backends.models import BackendConfiguration, BackendProperties -from qiskit.result._utils import copy_qasm_from_qobj_into_result, result_from_old_style_dict from qiskit.backends import BaseBackend from qiskit.backends.aer.aerjob import AerJob -from qiskit.qobj import Qobj -from qiskit.qobj import qobj_to_dict +from qiskit.result import Result logger = logging.getLogger(__name__) @@ -110,12 +108,12 @@ def run(self, qobj): return aer_job def _run_job(self, job_id, qobj): + """Run a Qobj on the backend.""" self._validate(qobj) - result = run(qobj, self._configuration.exe) + qobj_dict = qobj.as_dict() + result = run(qobj_dict, self._configuration.exe) result['job_id'] = job_id - copy_qasm_from_qobj_into_result(qobj, result) - - return result_from_old_style_dict(result) + return Result.from_dict(result) def _validate(self, qobj): for experiment in qobj.experiments: @@ -198,76 +196,21 @@ def run(self, qobj): return aer_job def _run_job(self, job_id, qobj): - if isinstance(qobj, Qobj): - qobj_dict = qobj.as_dict() - else: - qobj_dict = qobj + qobj_dict = qobj.as_dict() self._validate() # set backend to Clifford simulator if 'config' in qobj_dict: qobj_dict['config']['simulator'] = 'clifford' else: qobj_dict['config'] = {'simulator': 'clifford'} - - qobj = Qobj.from_dict(qobj_dict) - result = run(qobj, self._configuration.exe) + result = run(qobj_dict, self._configuration.exe) result['job_id'] = job_id - - return result_from_old_style_dict(result) + return Result.from_dict(result) def _validate(self): return -class QASMSimulatorEncoder(json.JSONEncoder): - """ - JSON encoder for NumPy arrays and complex numbers. - - This functions as the standard JSON Encoder but adds support - for encoding: - - * complex numbers z as lists [z.real, z.imag] - * ndarrays as nested lists. - """ - - # pylint: disable=method-hidden,arguments-differ - def default(self, obj): - if isinstance(obj, np.ndarray): - return obj.tolist() - if isinstance(obj, complex): - return [obj.real, obj.imag] - return json.JSONEncoder.default(self, obj) - - -class QASMSimulatorDecoder(json.JSONDecoder): - """ - JSON decoder for the output from C++ qasm_simulator. - - This converts complex vectors and matrices into numpy arrays - for the following keys. - """ - def __init__(self, *args, **kwargs): - json.JSONDecoder.__init__(self, object_hook=self.object_hook, *args, **kwargs) - - # pylint: disable=method-hidden - def object_hook(self, obj): - """Special decoding rules for simulator output.""" - - for key in ['U_error', 'density_matrix']: - # JSON is a complex matrix - if key in obj and isinstance(obj[key], list): - tmp = np.array(obj[key]) - obj[key] = tmp[::, ::, 0] + 1j * tmp[::, ::, 1] - for key in ['statevector', 'inner_products']: - # JSON is a list of complex vectors - if key in obj: - for j in range(len(obj[key])): - if isinstance(obj[key][j], list): - tmp = np.array(obj[key][j]) - obj[key][j] = tmp[::, 0] + 1j * tmp[::, 1] - return obj - - def run(qobj, executable): """ Run simulation on C++ simulator inside a subprocess. 
@@ -283,14 +226,13 @@ def run(qobj, executable): try: with subprocess.Popen([executable, '-'], stdin=PIPE, stdout=PIPE, stderr=PIPE) as proc: - cin = json.dumps(qobj_to_dict(qobj, version='0.0.1'), - cls=QASMSimulatorEncoder).encode() + cin = json.dumps(qobj).encode() cout, cerr = proc.communicate(cin) if cerr: logger.error('ERROR: Simulator encountered a runtime error: %s', cerr.decode()) - sim_output = cout.decode() - return json.loads(sim_output, cls=QASMSimulatorDecoder) + sim_output = json.loads(cout.decode()) + return sim_output except FileNotFoundError: msg = "ERROR: Simulator exe not found at: %s" % executable diff --git a/qiskit/backends/aer/qasm_simulator_py.py b/qiskit/backends/aer/qasm_simulator_py.py --- a/qiskit/backends/aer/qasm_simulator_py.py +++ b/qiskit/backends/aer/qasm_simulator_py.py @@ -9,82 +9,35 @@ """Contains a (slow) python simulator. -It simulates a qasm quantum circuit that has been compiled to run on the -simulator. It is exponential in the number of qubits. - -We advise using the c++ simulator or online simulator for larger size systems. - -The input is a qobj dictionary - -and the output is a Results object - - results['data']["counts"] where this is dict {"0000" : 454} +It simulates a qasm quantum circuit (an experiment) that has been compiled +to run on the simulator. It is exponential in the number of qubits. The simulator is run using .. code-block:: python + QasmSimulatorPy().run(qobj) - QasmSimulatorPy(compiled_circuit,shots,seed).run(). - -.. code-block:: guess - - compiled_circuit = - { - "header": { - "number_of_qubits": 2, // int - "number_of_clbits": 2, // int - "qubit_labels": [["q", 0], ["v", 0]], // list[list[string, int]] - "clbit_labels": [["c", 2]], // list[list[string, int]] - } - "operations": // list[map] - [ - { - "name": , // required -- string - "params": , // optional -- list[double] - "qubits": , // required -- list[int] - "clbits": , // optional -- list[int] - "conditional": // optional -- map - { - "type": , // string - "mask": , // hex string - "val": , // bhex string - } - }, - ] - } - -.. code-block:: python - - result = - { - 'data': { - 'statevector': array([ 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]), - 'classical_state': 0 - 'counts': {'0000': 1} - 'snapshots': { '0': {'statevector': array([1.+0.j, 0.+0.j, - 0.+0.j, 0.+0.j])}} - } - } - 'time_taken': 0.002 - 'status': 'DONE' - } - +Where the input is a Qobj object and the output is a AerJob object, which can +later be queried for the Result object. The result will contain a 'memory' data +field, which is a result of measurements for each shot. """ import random import uuid import time import logging -from collections import Counter from math import log2 +from collections import Counter import numpy as np + from qiskit._util import local_hardware_info from qiskit.backends.models import BackendConfiguration, BackendProperties -from qiskit.result._utils import copy_qasm_from_qobj_into_result, result_from_old_style_dict +from qiskit.result import Result from qiskit.backends import BaseBackend from qiskit.backends.aer.aerjob import AerJob from ._simulatorerror import SimulatorError -from ._simulatortools import index2, single_gate_matrix +from ._simulatortools import single_gate_matrix, index2 + logger = logging.getLogger(__name__) @@ -211,7 +164,7 @@ def _add_qasm_measure(self, qubit, cbit): def _add_qasm_reset(self, qubit): """Apply the reset to the qubit. 
- This is done by doing a measruement and if 0 do nothing and + This is done by doing a measurement and if 0 do nothing and if 1 flip the qubit. qubit is the qubit that is reset. @@ -238,7 +191,8 @@ def _add_qasm_snapshot(self, slot): """Snapshot instruction to record simulator's internal representation of quantum statevector. - slot is a string indicating a snapshot slot label. + Args: + slot (string): a label to identify the recorded snapshot. """ self._snapshots.setdefault(str(slot), {}).setdefault("statevector", @@ -248,7 +202,7 @@ def run(self, qobj): """Run qobj asynchronously. Args: - qobj (dict): job description + qobj (Qobj): payload of the experiment Returns: AerJob: derived from BaseJob @@ -259,64 +213,74 @@ def run(self, qobj): return aer_job def _run_job(self, job_id, qobj): - """Run circuits in qobj""" + """Run experiments in qobj + + Args: + job_id (str): unique id for the job. + qobj (Qobj): job description + + Returns: + Result: Result object + """ self._validate(qobj) result_list = [] self._shots = qobj.config.shots self._qobj_config = qobj.config start = time.time() - for circuit in qobj.experiments: - result_list.append(self.run_circuit(circuit)) + for experiment in qobj.experiments: + result_list.append(self.run_experiment(experiment)) end = time.time() - result = {'backend': self.name(), - 'id': qobj.qobj_id, + result = {'backend_name': self.name(), + 'backend_version': self._configuration.backend_version, + 'qobj_id': qobj.qobj_id, 'job_id': job_id, - 'result': result_list, + 'results': result_list, 'status': 'COMPLETED', 'success': True, - 'time_taken': (end - start)} - - copy_qasm_from_qobj_into_result(qobj, result) + 'time_taken': (end - start), + 'header': qobj.header.as_dict()} - return result_from_old_style_dict(result) + return Result.from_dict(result) - def run_circuit(self, circuit): - """Run a circuit and return a single Result. + def run_experiment(self, experiment): + """Run an experiment (circuit) and return a single experiment result. Args: - circuit (QobjExperiment): experiment from qobj experiments list + experiment (QobjExperiment): experiment from qobj experiments list Returns: - dict: A dictionary of results which looks something like:: + dict: A result dictionary which looks something like:: { + "name": name of this experiment (obtained from qobj.experiment header) + "seed": random seed used for simulation + "shots": number of shots used in the simulation "data": - { #### DATA CAN BE A DIFFERENT DICTIONARY FOR EACH BACKEND #### - "counts": {'00000': XXXX, '00001': XXXXX}, - "time" : xx.xxxxxxxx + { + "memory": ['0x9', '0xF', '0x1D', ..., '0x9'] + "snapshots": + { + '1': [0.7, 0, 0, 0.7], + '2': [0.5, 0.5, 0.5, 0.5] + } }, - "status": --status (string)-- + "status": status string for the simulation + "success": boolean + "time_taken": simulation time of this single experiment } Raises: SimulatorError: if an error occurred. """ - self._number_of_qubits = circuit.header.number_of_qubits - self._number_of_cbits = circuit.header.number_of_clbits + self._number_of_qubits = experiment.config.n_qubits + self._number_of_cbits = experiment.config.memory_slots self._statevector = 0 self._classical_state = 0 self._snapshots = {} - cl_reg_index = [] # starting bit index of classical register - cl_reg_nbits = [] # number of bits in classical register - cbit_index = 0 - for cl_reg in circuit.header.clbit_labels: - cl_reg_nbits.append(cl_reg[1]) - cl_reg_index.append(cbit_index) - cbit_index += cl_reg[1] # Get the seed looking in circuit, qobj, and then random. 
- if hasattr(circuit, 'config') and hasattr(circuit.config, 'seed'): - seed = circuit.config.seed + if hasattr(experiment, 'config') and hasattr(experiment.config, 'seed'): + seed = experiment.config.seed elif hasattr(self._qobj_config, 'seed'): seed = self._qobj_config.seed else: @@ -330,7 +294,7 @@ def run_circuit(self, circuit): dtype=complex) self._statevector[0] = 1 self._classical_state = 0 - for operation in circuit.instructions: + for operation in experiment.instructions: if getattr(operation, 'conditional', None): mask = int(operation.conditional.mask, 16) if mask > 0: @@ -356,7 +320,7 @@ def run_circuit(self, circuit): # Check if measure elif operation.name == 'measure': qubit = operation.qubits[0] - cbit = operation.clbits[0] + cbit = operation.memory[0] self._add_qasm_measure(qubit, cbit) # Check if reset elif operation.name == 'reset': @@ -374,23 +338,25 @@ def run_circuit(self, circuit): err_msg = '{0} encountered unrecognized operation "{1}"' raise SimulatorError(err_msg.format(backend, operation.name)) - # Turn classical_state (int) into bit string - outcomes.append(bin(self._classical_state)[2:].zfill( - self._number_of_cbits)) - # Return the results - counts = dict(Counter(outcomes)) + # Turn classical_state (int) into bit string and pad zero for unused cbits + outcome = bin(self._classical_state)[2:] + # Return a compact hexadecimal + outcomes.append(hex(int(outcome, 2))) + data = { - 'counts': self._format_result(counts, cl_reg_index, cl_reg_nbits), + 'counts': dict(Counter(outcomes)), + 'memory': outcomes, 'snapshots': self._snapshots } end = time.time() - return {'name': circuit.header.name, + return {'name': experiment.header.name, 'seed': seed, 'shots': self._shots, 'data': data, 'status': 'DONE', 'success': True, - 'time_taken': (end-start)} + 'time_taken': (end-start), + 'header': experiment.header.as_dict()} def _validate(self, qobj): for experiment in qobj.experiments: @@ -399,26 +365,3 @@ def _validate(self, qobj): logger.warning("no measurements in circuit '%s', " "classical register will remain all zeros.", experiment.header.name) - - def _format_result(self, counts, cl_reg_index, cl_reg_nbits): - """Format the result bit string. - - This formats the result bit strings such that spaces are inserted - at register divisions. - - Args: - counts (dict): dictionary of counts e.g. {'1111': 1000, '0000':5} - cl_reg_index (list): starting bit index of classical register - cl_reg_nbits (list): total amount of bits in classical register - Returns: - dict: spaces inserted into dictionary keys at register boundaries. 
- """ - fcounts = {} - for key, value in counts.items(): - if cl_reg_nbits: - new_key = [key[-cl_reg_nbits[0]:]] - for index, nbits in zip(cl_reg_index[1:], - cl_reg_nbits[1:]): - new_key.insert(0, key[-(index+nbits):-index]) - fcounts[' '.join(new_key)] = value - return fcounts diff --git a/qiskit/backends/aer/statevector_simulator.py b/qiskit/backends/aer/statevector_simulator.py --- a/qiskit/backends/aer/statevector_simulator.py +++ b/qiskit/backends/aer/statevector_simulator.py @@ -14,6 +14,7 @@ import logging import uuid from math import log2 +from numpy import array from qiskit._util import local_hardware_info from qiskit.backends.models import BackendConfiguration, BackendProperties from qiskit.qobj import QobjInstruction @@ -68,7 +69,7 @@ def properties(self): return BackendProperties.from_dict(properties) def run(self, qobj): - """Run a qobj on the the backend.""" + """Run a qobj on the backend.""" job_id = str(uuid.uuid4()) aer_job = AerJob(self, job_id, self._run_job, qobj) aer_job.submit() @@ -91,11 +92,11 @@ def _run_job(self, job_id, qobj): # Extract final state snapshot and move to 'statevector' data field for experiment_result in result.results: snapshots = experiment_result.data.snapshots.to_dict() - if str(final_state_key) in snapshots: + if str(final_state_key) in snapshots['statevector']: final_state_key = str(final_state_key) # Pop off final snapshot added above - final_state = snapshots.pop(final_state_key, None) - final_state = final_state['statevector'][0] + final_state = snapshots['statevector'].pop(final_state_key)[0] + final_state = array([v[0] + 1j * v[1] for v in final_state], dtype=complex) # Add final state to results data experiment_result.data.statevector = final_state # Remove snapshot dict if empty diff --git a/qiskit/backends/aer/unitary_simulator_py.py b/qiskit/backends/aer/unitary_simulator_py.py --- a/qiskit/backends/aer/unitary_simulator_py.py +++ b/qiskit/backends/aer/unitary_simulator_py.py @@ -10,75 +10,14 @@ It simulates a unitary of a quantum circuit that has been compiled to run on the simulator. It is exponential in the number of qubits. -The input is the circuit object and the output is the same circuit object with -a result field added results['data']['unitary'] where the unitary is -a 2**n x 2**n complex numpy array representing the unitary matrix. +.. code-block:: python + UnitarySimulator().run(qobj) -The input is - - compiled_circuit object - -and the output is the results object - -The simulator is run using - - UnitarySimulatorPy(compiled_circuit).run(). - -In the qasm, key operations with type 'measure' and 'reset' are dropped. 
- -Internal circuit_object:: - - compiled_circuit = - { - "header": { - "number_of_qubits": 2, // int - "number_of_clbits": 2, // int - "qubit_labels": [["q", 0], ["v", 0]], // list[list[string, int]] - "clbit_labels": [["c", 2]], // list[list[string, int]] - } - "operations": // list[map] - [ - { - "name": , // required -- string - "params": , // optional -- list[double] - "qubits": , // required -- list[int] - "clbits": , //optional -- list[int] - "conditional": // optional -- map - { - "type": , // string - "mask": , // hex string - "val": , // bhex string - } - }, - ] - } - -returned results object:: - - result = - { - 'data': - { - 'unitary': np.array([[ 0.70710678 +0.00000000e+00j - 0.70710678 -8.65956056e-17j - 0.00000000 +0.00000000e+00j - 0.00000000 +0.00000000e+00j] - [ 0.00000000 +0.00000000e+00j - 0.00000000 +0.00000000e+00j - 0.70710678 +0.00000000e+00j - -0.70710678 +8.65956056e-17j] - [ 0.00000000 +0.00000000e+00j - 0.00000000 +0.00000000e+00j - 0.70710678 +0.00000000e+00j - 0.70710678 -8.65956056e-17j] - [ 0.70710678 +0.00000000e+00j - -0.70710678 +8.65956056e-17j - 0.00000000 +0.00000000e+00j - 0.00000000 +0.00000000e+00j] - } - 'state': 'DONE' - } +Where the input is a Qobj object and the output is a AerJob object, which can +later be queried for the Result object. The result will contain a 'unitary' +data field, which is a 2**n x 2**n complex numpy array representing the +circuit's unitary matrix. """ import logging import uuid @@ -87,10 +26,10 @@ import numpy as np from qiskit._util import local_hardware_info from qiskit.backends.models import BackendConfiguration, BackendProperties -from qiskit.result._utils import copy_qasm_from_qobj_into_result, result_from_old_style_dict from qiskit.backends import BaseBackend from qiskit.backends.aer.aerjob import AerJob -from qiskit import QiskitError +from qiskit.result import Result +from ._simulatorerror import SimulatorError from ._simulatortools import single_gate_matrix, einsum_matmul_index logger = logging.getLogger(__name__) @@ -201,50 +140,70 @@ def run(self, qobj): return aer_job def _run_job(self, job_id, qobj): - """Run qobj. This is a blocking call. + """Run experiments in qobj. Args: job_id (str): unique id for the job. qobj (Qobj): job description + Returns: Result: Result object """ + self._validate(qobj) result_list = [] start = time.time() - for circuit in qobj.experiments: - result_list.append(self.run_circuit(circuit)) + for experiment in qobj.experiments: + result_list.append(self.run_experiment(experiment)) end = time.time() - result = {'backend': self.name(), - 'id': qobj.qobj_id, + result = {'backend_name': self.name(), + 'backend_version': self._configuration.backend_version, + 'qobj_id': qobj.qobj_id, 'job_id': job_id, - 'result': result_list, + 'results': result_list, 'status': 'COMPLETED', 'success': True, - 'time_taken': (end - start)} - copy_qasm_from_qobj_into_result(qobj, result) + 'time_taken': (end - start), + 'header': qobj.header.as_dict()} - return result_from_old_style_dict(result) + return Result.from_dict(result) - def run_circuit(self, circuit): - """Apply the single-qubit gate. + def run_experiment(self, experiment): + """Run an experiment (circuit) and return a single experiment result. Args: - circuit (QobjExperiment): experiment from qobj experiments list + experiment (QobjExperiment): experiment from qobj experiments list Returns: dict: A dictionary of results. 
+ dict: A result dictionary which looks something like:: + { + "name": name of this experiment (obtained from qobj.experiment header) + "seed": random seed used for simulation + "shots": number of shots used in the simulation + "data": + { + "unitary": [[0.2, 0.6, j+0.1, 0.2j], + [0, 0.9+j, 0.5, 0.7], + [-j, -0.1, -3.14, 0], + [0, 0, 0.5j, j-0.5]] + }, + "status": status string for the simulation + "success": boolean + "time taken": simulation time of this single experiment + } Raises: - QiskitError: if the number of qubits in the circuit is greater than 24. + SimulatorError: if the number of qubits in the circuit is greater than 24. Note that the practical qubit limit is much lower than 24. """ - self._number_of_qubits = circuit.header.number_of_qubits + self._number_of_qubits = experiment.header.n_qubits if self._number_of_qubits > 24: - raise QiskitError("np.einsum implementation limits unitary_simulator_py" + - " to 24 qubit circuits.") + raise SimulatorError("np.einsum implementation limits unitary_simulator_py" + + " to 24 qubit circuits.") result = { 'data': {}, - 'name': circuit.header.name + 'name': experiment.header.name, + 'header': experiment.header.as_dict() } # Initialize unitary as rank 2*N tensor @@ -252,7 +211,7 @@ def run_circuit(self, circuit): dtype=complex), self._number_of_qubits * [2, 2]) - for operation in circuit.instructions: + for operation in experiment.instructions: if operation.name in ('U', 'u1', 'u2', 'u3'): params = getattr(operation, 'params', None) qubit = operation.qubits[0] @@ -266,12 +225,6 @@ def run_circuit(self, circuit): gate = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]) self._add_unitary_two(gate, qubit0, qubit1) - elif operation.name == 'measure': - logger.info('Warning have dropped measure from unitary ' - 'simulator') - elif operation.name == 'reset': - logger.info('Warning have dropped reset from unitary ' - 'simulator') elif operation.name == 'barrier': pass else: @@ -285,3 +238,24 @@ def run_circuit(self, circuit): result['success'] = True result['shots'] = 1 return result + + def _validate(self, qobj): + """Semantic validations of the qobj which cannot be done via schemas. + Some of these may later move to backend schemas. + 1. No shots + 2. No measurements in the middle + """ + if qobj.config.shots != 1: + logger.info("unitary simulator only supports 1 shot. " + "Setting shots=1.") + qobj.config.shots = 1 + for experiment in qobj.experiments: + if getattr(experiment.config, 'shots', 1) != 1: + logger.info("unitary simulator only supports 1 shot. " + "Setting shots=1 for circuit %s.", experiment.name) + experiment.config.shots = 1 + for operation in experiment.instructions: + if operation.name in ['measure', 'reset']: + raise SimulatorError( + "In circuit {}: unitary simulator does not support " + "measure or reset.".format(experiment.header.name)) diff --git a/qiskit/backends/ibmq/ibmqjob.py b/qiskit/backends/ibmq/ibmqjob.py --- a/qiskit/backends/ibmq/ibmqjob.py +++ b/qiskit/backends/ibmq/ibmqjob.py @@ -159,6 +159,8 @@ def __init__(self, backend, job_id, api, is_device, qobj=None, 'shots': old_qobj['config']['shots'], 'max_credits': old_qobj['config']['max_credits'] } + else: + self._qobj_payload = {} self._future_captured_exception = None self._api = api @@ -503,7 +505,9 @@ def _submit_callback(self): def _result_from_job_response(self, job_response): if self._is_device: - _reorder_bits(job_response) + # TODO: temporarily disabled for #1373, reenable before 0.7. 
+ # _reorder_bits(job_response) + pass experiment_results = [] for circuit_result in job_response['qasms']: @@ -521,14 +525,22 @@ def _result_from_job_response(self, job_response): this_result['header'] = {} experiment_results.append(this_result) - return result_from_old_style_dict({ + ret = { 'id': self._job_id, 'status': job_response['status'], 'used_credits': job_response.get('usedCredits'), 'result': experiment_results, 'backend_name': self.backend().name(), - 'success': job_response['status'] == 'COMPLETED' - }) + 'success': job_response['status'] == 'COMPLETED', + } + + # Append header: from the response; from the payload; or none. + header = job_response.get('header', + self._qobj_payload.get('header', {})) + if header: + ret['header'] = header + + return result_from_old_style_dict(ret) def _reorder_bits(job_data): diff --git a/qiskit/qobj/_qobj.py b/qiskit/qobj/_qobj.py --- a/qiskit/qobj/_qobj.py +++ b/qiskit/qobj/_qobj.py @@ -62,6 +62,8 @@ def _expand_item(cls, obj): return float(obj.evalf()) if isinstance(obj, numpy.ndarray): return obj.tolist() + if isinstance(obj, complex): + return [obj.real, obj.imag] return obj @classmethod diff --git a/qiskit/result/_utils.py b/qiskit/result/_utils.py --- a/qiskit/result/_utils.py +++ b/qiskit/result/_utils.py @@ -52,30 +52,8 @@ def result_from_old_style_dict(result_dict): return Result.from_dict(result_dict) -def copy_qasm_from_qobj_into_result(qobj_, result): - """Copy QASMs belonging to the Qobj experiment into a Result. - - Find the QASMs belonging to the Qobj experiments and copy them - into the corresponding result entries. - - Args: - qobj_ (qobj): Qobj - result (qiskit.Result): Result (modified in-place). - """ - for experiment in qobj_.experiments: - name = experiment.header.name - qasm = getattr(experiment.header, 'compiled_circuit_qasm', None) - experiment_result = _find_experiment_result(result, name) - if qasm and experiment_result: - experiment_result['compiled_circuit_qasm'] = qasm - - # TODO: passing the header to the results should be done at a higher - # level. This ensures result[x].header.name is present, for results. - experiment_result['header'] = experiment.header.as_dict() - - def _find_experiment_result(result, name): - for experiment_result in result['result']: + for experiment_result in result['results']: if experiment_result['name'] == name: return experiment_result diff --git a/qiskit/tools/_compiler.py b/qiskit/tools/_compiler.py --- a/qiskit/tools/_compiler.py +++ b/qiskit/tools/_compiler.py @@ -47,7 +47,6 @@ def compile(circuits, backend, Raises: TranspilerError: in case of bad compile options, e.g. the hpc options. - """ pass_manager = None # default pass manager which executes predetermined passes if skip_transpiler: # empty pass manager which does nothing @@ -90,11 +89,6 @@ def circuits_to_qobj(circuits, backend_name, config=None, shots=1024, # Step 1: create the Qobj, with empty experiments. # Copy the configuration: the values in `config` have preference qobj_config = deepcopy(config or {}) - # TODO: "memory_slots" is required by the qobj schema in the top-level - # qobj.config, and is user-defined. At the moment is set to the maximum - # number of *register* slots for the circuits, in order to have `measure` - # behave properly until the transition is over; and each circuit stores - # its memory_slots in its configuration. 
qobj_config.update({'shots': shots, 'max_credits': max_credits, 'memory_slots': 0}) @@ -115,14 +109,10 @@ def circuits_to_qobj(circuits, backend_name, config=None, shots=1024, basis_gates, coupling_map)) - # Update the `memory_slots` value. - # TODO: remove when `memory_slots` can be provided by the user. + # Update the global `memory_slots` and `n_qubits` values. qobj.config.memory_slots = max(experiment.config.memory_slots for experiment in qobj.experiments) - # Update the `n_qubits` global value. - # TODO: num_qubits is not part of the qobj specification, but needed - # for the simulator. qobj.config.n_qubits = max(experiment.config.n_qubits for experiment in qobj.experiments) @@ -142,21 +132,18 @@ def _circuit_to_experiment(circuit, config=None, basis_gates=None, Returns: Qobj: Qobj to be run on the backends """ + # pylint: disable=unused-argument + # TODO: if arguments are really unused, consider changing the signature + dag = DAGCircuit.fromQuantumCircuit(circuit) json_circuit = DagUnroller(dag, JsonBackend(dag.basis)).execute() # Step 3a: create the Experiment based on json_circuit experiment = QobjExperiment.from_dict(json_circuit) # Step 3b: populate the Experiment configuration and header experiment.header.name = circuit.name - # TODO: place in header or config? experiment_config = deepcopy(config or {}) experiment_config.update({ - 'coupling_map': coupling_map, - 'basis_gates': basis_gates, - 'layout': [[[i[0][0].name, i[0][1]], [i[1][0].name, i[1][1]]] - for i in dag.layout] if dag.layout else [], 'memory_slots': sum([creg.size for creg in dag.cregs.values()]), - # TODO: `n_qubits` is not part of the qobj spec, but needed for the simulator. 'n_qubits': sum([qreg.size for qreg in dag.qregs.values()]) }) experiment.config = QobjItem(**experiment_config) diff --git a/qiskit/unrollers/_jsonbackend.py b/qiskit/unrollers/_jsonbackend.py --- a/qiskit/unrollers/_jsonbackend.py +++ b/qiskit/unrollers/_jsonbackend.py @@ -12,11 +12,13 @@ The input is a AST and a basis set and returns a json memory object:: { - "header": { - "number_of_qubits": 2, // int - "number_of_clbits": 2, // int - "qubit_labels": [["q", 0], ["v", 0]], // list[list[string, int]] - "clbit_labels": [["c", 2]], // list[list[string, int]] + "header": { + "n_qubits": 2, // int + "memory_slots": 2, // int + "qubit_labels": [["q", 0], ["q", 1], null], // list[list[string, int] or null] + "clbit_labels": [["c", 0], ["c", 1]], // list[list[string, int]] + "qreg_sizes": [["q", 1], ["v", 1]], // list[list[string, int]] + "creg_sizes": [["c", 2]] // list[list[string, int]] } "instructions": // list[map] [ @@ -56,13 +58,17 @@ def __init__(self, basis=None): self.circuit = {} self.circuit['instructions'] = [] self.circuit['header'] = { - 'number_of_qubits': 0, - 'number_of_clbits': 0, + 'n_qubits': 0, + 'memory_slots': 0, 'qubit_labels': [], - 'clbit_labels': [] + 'clbit_labels': [], + 'qreg_sizes': [], + 'creg_sizes': [] } self._number_of_qubits = 0 - self._number_of_cbits = 0 + self._number_of_clbits = 0 + self._qreg_sizes = [] + self._creg_sizes = [] self._qubit_order = [] self._cbit_order = [] self._qubit_order_internal = OrderedDict() @@ -98,11 +104,16 @@ def new_qreg(self, qreg): qreg = QuantumRegister object """ + self._qreg_sizes.append([qreg.name, qreg.size]) + + # order qubits from lower to higher index. backends will do little endian. 
for j in range(qreg.size): self._qubit_order.append([qreg.name, j]) self._qubit_order_internal[(qreg.name, j)] = self._number_of_qubits + j self._number_of_qubits += qreg.size - self.circuit['header']['number_of_qubits'] = self._number_of_qubits + # TODO: avoid rewriting the same data over and over + self.circuit['header']['n_qubits'] = self._number_of_qubits + self.circuit['header']['qreg_sizes'] = self._qreg_sizes self.circuit['header']['qubit_labels'] = self._qubit_order def new_creg(self, creg): @@ -110,11 +121,15 @@ def new_creg(self, creg): creg = ClassicalRegister object """ - self._cbit_order.append([creg.name, creg.size]) + self._creg_sizes.append([creg.name, creg.size]) + # order clbits from lower to higher index. backends will do little endian. for j in range(creg.size): - self._cbit_order_internal[(creg.name, j)] = self._number_of_cbits + j - self._number_of_cbits += creg.size - self.circuit['header']['number_of_clbits'] = self._number_of_cbits + self._cbit_order.append([creg.name, j]) + self._cbit_order_internal[(creg.name, j)] = self._number_of_clbits + j + self._number_of_clbits += creg.size + # TODO: avoid rewriting the same data over and over + self.circuit['header']['memory_slots'] = self._number_of_clbits + self.circuit['header']['creg_sizes'] = self._creg_sizes self.circuit['header']['clbit_labels'] = self._cbit_order def define_gate(self, name, gatedata): @@ -190,8 +205,7 @@ def start_gate(self, op, qargs, cargs, extra_fields=None): 'params': list(map(lambda x: x.evalf(), op.param)), 'texparams': list(map(sympy.latex, op.param)), 'qubits': qubit_indices, - 'clbits': clbit_indices, - 'memory': clbit_indices.copy() + 'memory': clbit_indices } if extra_fields is not None: gate_instruction.update(extra_fields) </patch>
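Editor's note: the short Python sketch below is an illustration placed alongside this record, not part of the dataset row itself. It mirrors the round trip that the diff above sets up: `QobjItem._expand_item` encodes a complex value as a `[real, imag]` pair, and the statevector simulator rebuilds a complex NumPy array from such pairs. The helper names `expand_complex` and `rebuild_statevector` are invented for the sketch; only the encoding idea is taken from the patch.

```python
# Illustrative sketch only; these helper names are not from the qiskit codebase.
import numpy as np

def expand_complex(value):
    # Same idea as QobjItem._expand_item in the diff above: JSON has no complex
    # type, so a complex number is stored as a two-element [real, imag] list.
    if isinstance(value, complex):
        return [value.real, value.imag]
    return value

def rebuild_statevector(pairs):
    # Same idea as the reconstruction in statevector_simulator._run_job:
    # turn the stored [real, imag] pairs back into a complex ndarray.
    return np.array([v[0] + 1j * v[1] for v in pairs], dtype=complex)

amplitudes = [complex(2 ** -0.5, 0.0), 0j, 0j, complex(2 ** -0.5, 0.0)]
encoded = [expand_complex(a) for a in amplitudes]   # [[0.707..., 0.0], [0.0, 0.0], ...]
decoded = rebuild_statevector(encoded)
assert np.allclose(decoded, amplitudes)
```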
[]
[]
pandas-dev__pandas-11079
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Broken nunique on Series group by The following code works in 0.16.2 and not in latest master: ``` python data = pd.DataFrame( [[100, 1, 'Alice'], [200, 2, 'Bob'], [300, 3, 'Charlie'], [-400, 4, 'Dan'], [500, 5, 'Edith']], columns=['amount', 'id', 'name'] ) expected = data.groupby(['id', 'amount'])['name'].nunique() ``` Going to bisect this today unless someone beats me to it. </issue> <code> [start of README.md] 1 # pandas: powerful Python data analysis toolkit 2 3 <table> 4 <tr> 5 <td>Latest Release</td> 6 <td><img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /></td> 7 </tr> 8 <tr> 9 <td>Package Status</td> 10 <td><img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 11 </tr> 12 <tr> 13 <td>License</td> 14 <td><img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /></td> 15 </tr> 16 <tr> 17 <td>Build Status</td> 18 <td> 19 <a href="https://travis-ci.org/pydata/pandas"> 20 <img src="https://travis-ci.org/pydata/pandas.svg?branch=master" alt="build status" /> 21 </a> 22 </td> 23 </tr> 24 <tr> 25 <td>Conda</td> 26 <td> 27 <a href="http://pandas.pydata.org"> 28 <img src="http://pubbadges.s3-website-us-east-1.amazonaws.com/pkgs-downloads-pandas.png" alt="conda downloads" /> 29 </a> 30 </td> 31 </tr> 32 <tr> 33 <td>PyPI</td> 34 <td> 35 <a href="https://pypi.python.org/pypi/pandas/"> 36 <img src="https://img.shields.io/pypi/dm/pandas.svg" alt="pypi downloads" /> 37 </a> 38 </td> 39 </tr> 40 </table> 41 42 [![https://gitter.im/pydata/pandas](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 43 44 ## What is it 45 46 **pandas** is a Python package providing fast, flexible, and expressive data 47 structures designed to make working with "relational" or "labeled" data both 48 easy and intuitive. It aims to be the fundamental high-level building block for 49 doing practical, **real world** data analysis in Python. Additionally, it has 50 the broader goal of becoming **the most powerful and flexible open source data 51 analysis / manipulation tool available in any language**. It is already well on 52 its way toward this goal. 53 54 ## Main Features 55 Here are just a few of the things that pandas does well: 56 57 - Easy handling of [**missing data**][missing-data] (represented as 58 `NaN`) in floating point as well as non-floating point data 59 - Size mutability: columns can be [**inserted and 60 deleted**][insertion-deletion] from DataFrame and higher dimensional 61 objects 62 - Automatic and explicit [**data alignment**][alignment]: objects can 63 be explicitly aligned to a set of labels, or the user can simply 64 ignore the labels and let `Series`, `DataFrame`, etc. 
automatically 65 align the data for you in computations 66 - Powerful, flexible [**group by**][groupby] functionality to perform 67 split-apply-combine operations on data sets, for both aggregating 68 and transforming data 69 - Make it [**easy to convert**][conversion] ragged, 70 differently-indexed data in other Python and NumPy data structures 71 into DataFrame objects 72 - Intelligent label-based [**slicing**][slicing], [**fancy 73 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 74 large data sets 75 - Intuitive [**merging**][merging] and [**joining**][joining] data 76 sets 77 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 78 data sets 79 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 80 labels per tick) 81 - Robust IO tools for loading data from [**flat files**][flat-files] 82 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 83 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 84 - [**Time series**][timeseries]-specific functionality: date range 85 generation and frequency conversion, moving window statistics, 86 moving window linear regressions, date shifting and lagging, etc. 87 88 89 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 90 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 91 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 92 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 93 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 94 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 95 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 96 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 97 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 98 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 99 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 100 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 101 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 102 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 103 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 104 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 105 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 106 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 107 108 ## Where to get it 109 The source code is currently hosted on GitHub at: 110 http://github.com/pydata/pandas 111 112 Binary installers for the latest released version are available at the Python 113 package index 114 115 http://pypi.python.org/pypi/pandas/ 116 117 And via `easy_install`: 118 119 ```sh 120 easy_install pandas 121 ``` 122 123 or `pip`: 124 125 ```sh 126 pip install pandas 127 ``` 128 129 or `conda`: 130 131 ```sh 132 conda install pandas 133 ``` 134 135 ## Dependencies 136 - [NumPy](http://www.numpy.org): 1.7.0 or higher 137 - 
[python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 138 - [pytz](http://pytz.sourceforge.net) 139 - Needed for time zone support with ``pandas.date_range`` 140 141 ### Highly Recommended Dependencies 142 - [numexpr](https://github.com/pydata/numexpr) 143 - Needed to accelerate some expression evaluation operations 144 - Required by PyTables 145 - [bottleneck](http://berkeleyanalytics.com/bottleneck) 146 - Needed to accelerate certain numerical operations 147 148 ### Optional dependencies 149 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher. 150 - [SciPy](http://www.scipy.org): miscellaneous statistical functions 151 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage 152 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended. 153 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting 154 - [statsmodels](http://statsmodels.sourceforge.net/) 155 - Needed for parts of `pandas.stats` 156 - For Excel I/O: 157 - [xlrd/xlwt](http://www.python-excel.org/) 158 - Excel reading (xlrd) and writing (xlwt) 159 - [openpyxl](http://packages.python.org/openpyxl/) 160 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for 161 writing .xlsx files 162 - xlrd >= 0.9.0 163 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter) 164 - Alternative Excel writer. 165 - [Google bq Command Line Tool](https://cloud.google.com/bigquery/bq-command-line-tool) 166 - Needed for `pandas.io.gbq` 167 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access. 168 - One of the following combinations of libraries is needed to use the 169 top-level [`pandas.read_html`][read-html-docs] function: 170 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any 171 recent version of [html5lib][html5lib] is okay.) 172 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml] 173 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml] 174 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas] 175 for reasons as to why you should probably **not** take this approach. 176 177 #### Notes about HTML parsing libraries 178 - If you install [BeautifulSoup4][BeautifulSoup4] you must install 179 either [lxml][lxml] or [html5lib][html5lib] or both. 180 `pandas.read_html` will **not** work with *only* `BeautifulSoup4` 181 installed. 182 - You are strongly encouraged to read [HTML reading 183 gotchas][html-gotchas]. It explains issues surrounding the 184 installation and usage of the above three libraries. 185 - You may need to install an older version of 186 [BeautifulSoup4][BeautifulSoup4]: 187 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 188 32-bit Ubuntu/Debian 189 - Additionally, if you're using [Anaconda][Anaconda] you should 190 definitely read [the gotchas about HTML parsing][html-gotchas] 191 libraries 192 - If you're on a system with `apt-get` you can do 193 194 ```sh 195 sudo apt-get build-dep python-lxml 196 ``` 197 198 to get the necessary dependencies for installation of [lxml][lxml]. 199 This will prevent further headaches down the line. 
200 201 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib" 202 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4" 203 [lxml]: http://lxml.de 204 [Anaconda]: https://store.continuum.io/cshop/anaconda 205 [NumPy]: http://numpy.scipy.org/ 206 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing 207 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html 208 209 ## Installation from sources 210 To install pandas from source you need Cython in addition to the normal 211 dependencies above. Cython can be installed from pypi: 212 213 ```sh 214 pip install cython 215 ``` 216 217 In the `pandas` directory (same one where you found this file after 218 cloning the git repo), execute: 219 220 ```sh 221 python setup.py install 222 ``` 223 224 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 225 226 ```sh 227 python setup.py develop 228 ``` 229 230 Alternatively, you can use `pip` if you want all the dependencies pulled 231 in automatically (the `-e` option is for installing it in [development 232 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 233 234 ```sh 235 pip install -e . 236 ``` 237 238 On Windows, you will need to install MinGW and execute: 239 240 ```sh 241 python setup.py build --compiler=mingw32 242 python setup.py install 243 ``` 244 245 See http://pandas.pydata.org/ for more information. 246 247 ## License 248 BSD 249 250 ## Documentation 251 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 252 253 The Sphinx documentation should provide a good starting point for learning how 254 to use the library. Expect the docs to continue to expand as time goes on. 255 256 ## Background 257 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 258 has been under active development since then. 259 260 ## Discussion and Development 261 Since pandas development is related to a number of other scientific 262 Python projects, questions are welcome on the scipy-user mailing 263 list. 
Specialized discussions or design issues should take place on 264 the PyData mailing list / Google group: 265 266 https://groups.google.com/forum/#!forum/pydata 267 [end of README.md] [start of vb_suite/groupby.py] 1 from vbench.api import Benchmark 2 from datetime import datetime 3 4 common_setup = """from .pandas_vb_common import * 5 """ 6 7 setup = common_setup + """ 8 N = 100000 9 ngroups = 100 10 11 def get_test_data(ngroups=100, n=100000): 12 unique_groups = range(ngroups) 13 arr = np.asarray(np.tile(unique_groups, n / ngroups), dtype=object) 14 15 if len(arr) < n: 16 arr = np.asarray(list(arr) + unique_groups[:n - len(arr)], 17 dtype=object) 18 19 random.shuffle(arr) 20 return arr 21 22 # aggregate multiple columns 23 df = DataFrame({'key1' : get_test_data(ngroups=ngroups), 24 'key2' : get_test_data(ngroups=ngroups), 25 'data1' : np.random.randn(N), 26 'data2' : np.random.randn(N)}) 27 def f(): 28 df.groupby(['key1', 'key2']).agg(lambda x: x.values.sum()) 29 30 simple_series = Series(np.random.randn(N)) 31 key1 = df['key1'] 32 """ 33 34 stmt1 = "df.groupby(['key1', 'key2'])['data1'].agg(lambda x: x.values.sum())" 35 groupby_multi_python = Benchmark(stmt1, setup, 36 start_date=datetime(2011, 7, 1)) 37 38 stmt3 = "df.groupby(['key1', 'key2']).sum()" 39 groupby_multi_cython = Benchmark(stmt3, setup, 40 start_date=datetime(2011, 7, 1)) 41 42 stmt = "df.groupby(['key1', 'key2'])['data1'].agg(np.std)" 43 groupby_multi_series_op = Benchmark(stmt, setup, 44 start_date=datetime(2011, 8, 1)) 45 46 groupby_series_simple_cython = \ 47 Benchmark('simple_series.groupby(key1).sum()', setup, 48 start_date=datetime(2011, 3, 1)) 49 50 51 stmt4 = "df.groupby('key1').rank(pct=True)" 52 groupby_series_simple_cython = Benchmark(stmt4, setup, 53 start_date=datetime(2014, 1, 16)) 54 55 #---------------------------------------------------------------------- 56 # 2d grouping, aggregate many columns 57 58 setup = common_setup + """ 59 labels = np.random.randint(0, 100, size=1000) 60 df = DataFrame(randn(1000, 1000)) 61 """ 62 63 groupby_frame_cython_many_columns = Benchmark( 64 'df.groupby(labels).sum()', setup, 65 start_date=datetime(2011, 8, 1), 66 logy=True) 67 68 #---------------------------------------------------------------------- 69 # single key, long, integer key 70 71 setup = common_setup + """ 72 data = np.random.randn(100000, 1) 73 labels = np.random.randint(0, 1000, size=100000) 74 df = DataFrame(data) 75 """ 76 77 groupby_frame_singlekey_integer = \ 78 Benchmark('df.groupby(labels).sum()', setup, 79 start_date=datetime(2011, 8, 1), logy=True) 80 81 #---------------------------------------------------------------------- 82 # group with different functions per column 83 84 setup = common_setup + """ 85 fac1 = np.array(['A', 'B', 'C'], dtype='O') 86 fac2 = np.array(['one', 'two'], dtype='O') 87 88 df = DataFrame({'key1': fac1.take(np.random.randint(0, 3, size=100000)), 89 'key2': fac2.take(np.random.randint(0, 2, size=100000)), 90 'value1' : np.random.randn(100000), 91 'value2' : np.random.randn(100000), 92 'value3' : np.random.randn(100000)}) 93 """ 94 95 groupby_multi_different_functions = \ 96 Benchmark("""df.groupby(['key1', 'key2']).agg({'value1' : 'mean', 97 'value2' : 'var', 98 'value3' : 'sum'})""", 99 setup, start_date=datetime(2011, 9, 1)) 100 101 groupby_multi_different_numpy_functions = \ 102 Benchmark("""df.groupby(['key1', 'key2']).agg({'value1' : np.mean, 103 'value2' : np.var, 104 'value3' : np.sum})""", 105 setup, start_date=datetime(2011, 9, 1)) 106 107 
#---------------------------------------------------------------------- 108 # size() speed 109 110 setup = common_setup + """ 111 n = 100000 112 offsets = np.random.randint(n, size=n).astype('timedelta64[ns]') 113 dates = np.datetime64('now') + offsets 114 df = DataFrame({'key1': np.random.randint(0, 500, size=n), 115 'key2': np.random.randint(0, 100, size=n), 116 'value1' : np.random.randn(n), 117 'value2' : np.random.randn(n), 118 'value3' : np.random.randn(n), 119 'dates' : dates}) 120 """ 121 122 groupby_multi_size = Benchmark("df.groupby(['key1', 'key2']).size()", 123 setup, start_date=datetime(2011, 10, 1)) 124 125 groupby_dt_size = Benchmark("df.groupby(['dates']).size()", 126 setup, start_date=datetime(2011, 10, 1)) 127 128 groupby_dt_timegrouper_size = Benchmark("df.groupby(TimeGrouper(key='dates', freq='M')).size()", 129 setup, start_date=datetime(2011, 10, 1)) 130 131 #---------------------------------------------------------------------- 132 # count() speed 133 134 setup = common_setup + """ 135 n = 10000 136 offsets = np.random.randint(n, size=n).astype('timedelta64[ns]') 137 138 dates = np.datetime64('now') + offsets 139 dates[np.random.rand(n) > 0.5] = np.datetime64('nat') 140 141 offsets[np.random.rand(n) > 0.5] = np.timedelta64('nat') 142 143 value2 = np.random.randn(n) 144 value2[np.random.rand(n) > 0.5] = np.nan 145 146 obj = tm.choice(list('ab'), size=n).astype(object) 147 obj[np.random.randn(n) > 0.5] = np.nan 148 149 df = DataFrame({'key1': np.random.randint(0, 500, size=n), 150 'key2': np.random.randint(0, 100, size=n), 151 'dates': dates, 152 'value2' : value2, 153 'value3' : np.random.randn(n), 154 'ints': np.random.randint(0, 1000, size=n), 155 'obj': obj, 156 'offsets': offsets}) 157 """ 158 159 groupby_multi_count = Benchmark("df.groupby(['key1', 'key2']).count()", 160 setup, name='groupby_multi_count', 161 start_date=datetime(2014, 5, 5)) 162 163 setup = common_setup + """ 164 n = 10000 165 166 df = DataFrame({'key1': randint(0, 500, size=n), 167 'key2': randint(0, 100, size=n), 168 'ints': randint(0, 1000, size=n), 169 'ints2': randint(0, 1000, size=n)}) 170 """ 171 172 groupby_int_count = Benchmark("df.groupby(['key1', 'key2']).count()", 173 setup, name='groupby_int_count', 174 start_date=datetime(2014, 5, 6)) 175 #---------------------------------------------------------------------- 176 # Series.value_counts 177 178 setup = common_setup + """ 179 s = Series(np.random.randint(0, 1000, size=100000)) 180 """ 181 182 series_value_counts_int64 = Benchmark('s.value_counts()', setup, 183 start_date=datetime(2011, 10, 21)) 184 185 # value_counts on lots of strings 186 187 setup = common_setup + """ 188 K = 1000 189 N = 100000 190 uniques = tm.makeStringIndex(K).values 191 s = Series(np.tile(uniques, N // K)) 192 """ 193 194 series_value_counts_strings = Benchmark('s.value_counts()', setup, 195 start_date=datetime(2011, 10, 21)) 196 197 #value_counts on float dtype 198 199 setup = common_setup + """ 200 s = Series(np.random.randint(0, 1000, size=100000)).astype(float) 201 """ 202 203 series_value_counts_float64 = Benchmark('s.value_counts()', setup, 204 start_date=datetime(2015, 8, 17)) 205 206 #---------------------------------------------------------------------- 207 # pivot_table 208 209 setup = common_setup + """ 210 fac1 = np.array(['A', 'B', 'C'], dtype='O') 211 fac2 = np.array(['one', 'two'], dtype='O') 212 213 ind1 = np.random.randint(0, 3, size=100000) 214 ind2 = np.random.randint(0, 2, size=100000) 215 216 df = DataFrame({'key1': fac1.take(ind1), 217 
'key2': fac2.take(ind2), 218 'key3': fac2.take(ind2), 219 'value1' : np.random.randn(100000), 220 'value2' : np.random.randn(100000), 221 'value3' : np.random.randn(100000)}) 222 """ 223 224 stmt = "df.pivot_table(index='key1', columns=['key2', 'key3'])" 225 groupby_pivot_table = Benchmark(stmt, setup, start_date=datetime(2011, 12, 15)) 226 227 228 #---------------------------------------------------------------------- 229 # dict return values 230 231 setup = common_setup + """ 232 labels = np.arange(1000).repeat(10) 233 data = Series(randn(len(labels))) 234 f = lambda x: {'first': x.values[0], 'last': x.values[-1]} 235 """ 236 237 groupby_apply_dict_return = Benchmark('data.groupby(labels).apply(f)', 238 setup, start_date=datetime(2011, 12, 15)) 239 240 #---------------------------------------------------------------------- 241 # First / last functions 242 243 setup = common_setup + """ 244 labels = np.arange(10000).repeat(10) 245 data = Series(randn(len(labels))) 246 data[::3] = np.nan 247 data[1::3] = np.nan 248 data2 = Series(randn(len(labels)),dtype='float32') 249 data2[::3] = np.nan 250 data2[1::3] = np.nan 251 labels = labels.take(np.random.permutation(len(labels))) 252 """ 253 254 groupby_first_float64 = Benchmark('data.groupby(labels).first()', setup, 255 start_date=datetime(2012, 5, 1)) 256 257 groupby_first_float32 = Benchmark('data2.groupby(labels).first()', setup, 258 start_date=datetime(2013, 1, 1)) 259 260 groupby_last_float64 = Benchmark('data.groupby(labels).last()', setup, 261 start_date=datetime(2012, 5, 1)) 262 263 groupby_last_float32 = Benchmark('data2.groupby(labels).last()', setup, 264 start_date=datetime(2013, 1, 1)) 265 266 groupby_nth_float64_none = Benchmark('data.groupby(labels).nth(0)', setup, 267 start_date=datetime(2012, 5, 1)) 268 groupby_nth_float32_none = Benchmark('data2.groupby(labels).nth(0)', setup, 269 start_date=datetime(2013, 1, 1)) 270 groupby_nth_float64_any = Benchmark('data.groupby(labels).nth(0,dropna="all")', setup, 271 start_date=datetime(2012, 5, 1)) 272 groupby_nth_float32_any = Benchmark('data2.groupby(labels).nth(0,dropna="all")', setup, 273 start_date=datetime(2013, 1, 1)) 274 275 # with datetimes (GH7555) 276 setup = common_setup + """ 277 df = DataFrame({'a' : date_range('1/1/2011',periods=100000,freq='s'),'b' : range(100000)}) 278 """ 279 280 groupby_first_datetimes = Benchmark('df.groupby("b").first()', setup, 281 start_date=datetime(2013, 5, 1)) 282 groupby_last_datetimes = Benchmark('df.groupby("b").last()', setup, 283 start_date=datetime(2013, 5, 1)) 284 groupby_nth_datetimes_none = Benchmark('df.groupby("b").nth(0)', setup, 285 start_date=datetime(2013, 5, 1)) 286 groupby_nth_datetimes_any = Benchmark('df.groupby("b").nth(0,dropna="all")', setup, 287 start_date=datetime(2013, 5, 1)) 288 289 # with object 290 setup = common_setup + """ 291 df = DataFrame({'a' : ['foo']*100000,'b' : range(100000)}) 292 """ 293 294 groupby_first_object = Benchmark('df.groupby("b").first()', setup, 295 start_date=datetime(2013, 5, 1)) 296 groupby_last_object = Benchmark('df.groupby("b").last()', setup, 297 start_date=datetime(2013, 5, 1)) 298 groupby_nth_object_none = Benchmark('df.groupby("b").nth(0)', setup, 299 start_date=datetime(2013, 5, 1)) 300 groupby_nth_object_any = Benchmark('df.groupby("b").nth(0,dropna="any")', setup, 301 start_date=datetime(2013, 5, 1)) 302 303 #---------------------------------------------------------------------- 304 # groupby_indices replacement, chop up Series 305 306 setup = common_setup + """ 307 try: 308 rng = 
date_range('1/1/2000', '12/31/2005', freq='H') 309 year, month, day = rng.year, rng.month, rng.day 310 except: 311 rng = date_range('1/1/2000', '12/31/2000', offset=datetools.Hour()) 312 year = rng.map(lambda x: x.year) 313 month = rng.map(lambda x: x.month) 314 day = rng.map(lambda x: x.day) 315 316 ts = Series(np.random.randn(len(rng)), index=rng) 317 """ 318 319 groupby_indices = Benchmark('len(ts.groupby([year, month, day]))', 320 setup, start_date=datetime(2012, 1, 1)) 321 322 #---------------------------------------------------------------------- 323 # median 324 325 #---------------------------------------------------------------------- 326 # single key, long, integer key 327 328 setup = common_setup + """ 329 data = np.random.randn(100000, 2) 330 labels = np.random.randint(0, 1000, size=100000) 331 df = DataFrame(data) 332 """ 333 334 groupby_frame_median = \ 335 Benchmark('df.groupby(labels).median()', setup, 336 start_date=datetime(2011, 8, 1), logy=True) 337 338 339 setup = common_setup + """ 340 data = np.random.randn(1000000, 2) 341 labels = np.random.randint(0, 1000, size=1000000) 342 df = DataFrame(data) 343 """ 344 345 groupby_simple_compress_timing = \ 346 Benchmark('df.groupby(labels).mean()', setup, 347 start_date=datetime(2011, 8, 1)) 348 349 350 #---------------------------------------------------------------------- 351 # DataFrame Apply overhead 352 353 setup = common_setup + """ 354 N = 10000 355 labels = np.random.randint(0, 2000, size=N) 356 labels2 = np.random.randint(0, 3, size=N) 357 df = DataFrame({'key': labels, 358 'key2': labels2, 359 'value1': randn(N), 360 'value2': ['foo', 'bar', 'baz', 'qux'] * (N / 4)}) 361 def f(g): 362 return 1 363 """ 364 365 groupby_frame_apply_overhead = Benchmark("df.groupby('key').apply(f)", setup, 366 start_date=datetime(2011, 10, 1)) 367 368 groupby_frame_apply = Benchmark("df.groupby(['key', 'key2']).apply(f)", setup, 369 start_date=datetime(2011, 10, 1)) 370 371 372 #---------------------------------------------------------------------- 373 # DataFrame nth 374 375 setup = common_setup + """ 376 df = DataFrame(np.random.randint(1, 100, (10000, 2))) 377 """ 378 379 # Not really a fair test as behaviour has changed! 
380 groupby_frame_nth_none = Benchmark("df.groupby(0).nth(0)", setup, 381 start_date=datetime(2014, 3, 1)) 382 383 groupby_series_nth_none = Benchmark("df[1].groupby(df[0]).nth(0)", setup, 384 start_date=datetime(2014, 3, 1)) 385 groupby_frame_nth_any= Benchmark("df.groupby(0).nth(0,dropna='any')", setup, 386 start_date=datetime(2014, 3, 1)) 387 388 groupby_series_nth_any = Benchmark("df[1].groupby(df[0]).nth(0,dropna='any')", setup, 389 start_date=datetime(2014, 3, 1)) 390 391 392 #---------------------------------------------------------------------- 393 # Sum booleans #2692 394 395 setup = common_setup + """ 396 N = 500 397 df = DataFrame({'ii':range(N),'bb':[True for x in range(N)]}) 398 """ 399 400 groupby_sum_booleans = Benchmark("df.groupby('ii').sum()", setup) 401 402 403 #---------------------------------------------------------------------- 404 # multi-indexed group sum #9049 405 406 setup = common_setup + """ 407 N = 50 408 df = DataFrame({'A': range(N) * 2, 'B': range(N*2), 'C': 1}).set_index(["A", "B"]) 409 """ 410 411 groupby_sum_multiindex = Benchmark("df.groupby(level=[0, 1]).sum()", setup) 412 413 414 #---------------------------------------------------------------------- 415 # Transform testing 416 417 setup = common_setup + """ 418 n_dates = 400 419 n_securities = 250 420 n_columns = 3 421 share_na = 0.1 422 423 dates = date_range('1997-12-31', periods=n_dates, freq='B') 424 dates = Index(map(lambda x: x.year * 10000 + x.month * 100 + x.day, dates)) 425 426 secid_min = int('10000000', 16) 427 secid_max = int('F0000000', 16) 428 step = (secid_max - secid_min) // (n_securities - 1) 429 security_ids = map(lambda x: hex(x)[2:10].upper(), range(secid_min, secid_max + 1, step)) 430 431 data_index = MultiIndex(levels=[dates.values, security_ids], 432 labels=[[i for i in range(n_dates) for _ in xrange(n_securities)], range(n_securities) * n_dates], 433 names=['date', 'security_id']) 434 n_data = len(data_index) 435 436 columns = Index(['factor{}'.format(i) for i in range(1, n_columns + 1)]) 437 438 data = DataFrame(np.random.randn(n_data, n_columns), index=data_index, columns=columns) 439 440 step = int(n_data * share_na) 441 for column_index in range(n_columns): 442 index = column_index 443 while index < n_data: 444 data.set_value(data_index[index], columns[column_index], np.nan) 445 index += step 446 447 f_fillna = lambda x: x.fillna(method='pad') 448 """ 449 450 groupby_transform = Benchmark("data.groupby(level='security_id').transform(f_fillna)", setup) 451 groupby_transform_ufunc = Benchmark("data.groupby(level='date').transform(np.max)", setup) 452 453 setup = common_setup + """ 454 np.random.seed(0) 455 456 N = 120000 457 N_TRANSITIONS = 1400 458 459 # generate groups 460 transition_points = np.random.permutation(np.arange(N))[:N_TRANSITIONS] 461 transition_points.sort() 462 transitions = np.zeros((N,), dtype=np.bool) 463 transitions[transition_points] = True 464 g = transitions.cumsum() 465 466 df = DataFrame({ 'signal' : np.random.rand(N)}) 467 """ 468 groupby_transform_series = Benchmark("df['signal'].groupby(g).transform(np.mean)", setup) 469 470 setup = common_setup + """ 471 np.random.seed(0) 472 473 df=DataFrame( { 'id' : np.arange( 100000 ) / 3, 474 'val': np.random.randn( 100000) } ) 475 """ 476 477 groupby_transform_series2 = Benchmark("df.groupby('id')['val'].transform(np.mean)", setup) 478 479 setup = common_setup + ''' 480 np.random.seed(2718281) 481 n = 20000 482 df = DataFrame(np.random.randint(1, n, (n, 3)), 483 columns=['jim', 'joe', 'jolie']) 484 ''' 485 
486 stmt = "df.groupby(['jim', 'joe'])['jolie'].transform('max')"; 487 groupby_transform_multi_key1 = Benchmark(stmt, setup) 488 groupby_transform_multi_key2 = Benchmark(stmt, setup + "df['jim'] = df['joe']") 489 490 setup = common_setup + ''' 491 np.random.seed(2718281) 492 n = 200000 493 df = DataFrame(np.random.randint(1, n / 10, (n, 3)), 494 columns=['jim', 'joe', 'jolie']) 495 ''' 496 groupby_transform_multi_key3 = Benchmark(stmt, setup) 497 groupby_transform_multi_key4 = Benchmark(stmt, setup + "df['jim'] = df['joe']") 498 499 setup = common_setup + ''' 500 np.random.seed(27182) 501 n = 100000 502 df = DataFrame(np.random.randint(1, n / 100, (n, 3)), 503 columns=['jim', 'joe', 'jolie']) 504 ''' 505 506 groupby_agg_builtins1 = Benchmark("df.groupby('jim').agg([sum, min, max])", setup) 507 groupby_agg_builtins2 = Benchmark("df.groupby(['jim', 'joe']).agg([sum, min, max])", setup) 508 509 510 setup = common_setup + ''' 511 arr = np.random.randint(- 1 << 12, 1 << 12, (1 << 17, 5)) 512 i = np.random.choice(len(arr), len(arr) * 5) 513 arr = np.vstack((arr, arr[i])) # add sume duplicate rows 514 515 i = np.random.permutation(len(arr)) 516 arr = arr[i] # shuffle rows 517 518 df = DataFrame(arr, columns=list('abcde')) 519 df['jim'], df['joe'] = np.random.randn(2, len(df)) * 10 520 ''' 521 522 groupby_int64_overflow = Benchmark("df.groupby(list('abcde')).max()", setup, 523 name='groupby_int64_overflow') 524 525 526 setup = common_setup + ''' 527 from itertools import product 528 from string import ascii_letters, digits 529 530 n = 5 * 7 * 11 * (1 << 9) 531 alpha = list(map(''.join, product(ascii_letters + digits, repeat=4))) 532 f = lambda k: np.repeat(np.random.choice(alpha, n // k), k) 533 534 df = DataFrame({'a': f(11), 'b': f(7), 'c': f(5), 'd': f(1)}) 535 df['joe'] = (np.random.randn(len(df)) * 10).round(3) 536 537 i = np.random.permutation(len(df)) 538 df = df.iloc[i].reset_index(drop=True).copy() 539 ''' 540 541 groupby_multi_index = Benchmark("df.groupby(list('abcd')).max()", setup, 542 name='groupby_multi_index') 543 544 #---------------------------------------------------------------------- 545 # groupby with a variable value for ngroups 546 547 548 ngroups_list = [100, 10000] 549 no_arg_func_list = [ 550 'all', 551 'any', 552 'count', 553 'cumcount', 554 'cummax', 555 'cummin', 556 'cumprod', 557 'cumsum', 558 'describe', 559 'diff', 560 'first', 561 'head', 562 'last', 563 'mad', 564 'max', 565 'mean', 566 'median', 567 'min', 568 'nunique', 569 'pct_change', 570 'prod', 571 'rank', 572 'sem', 573 'size', 574 'skew', 575 'std', 576 'sum', 577 'tail', 578 'unique', 579 'var', 580 'value_counts', 581 ] 582 583 584 _stmt_template = "df.groupby('value')['timestamp'].%s" 585 _setup_template = common_setup + """ 586 np.random.seed(1234) 587 ngroups = %s 588 size = ngroups * 2 589 rng = np.arange(ngroups) 590 df = DataFrame(dict( 591 timestamp=rng.take(np.random.randint(0, ngroups, size=size)), 592 value=np.random.randint(0, size, size=size) 593 )) 594 """ 595 START_DATE = datetime(2011, 7, 1) 596 597 598 def make_large_ngroups_bmark(ngroups, func_name, func_args=''): 599 bmark_name = 'groupby_ngroups_%s_%s' % (ngroups, func_name) 600 stmt = _stmt_template % ('%s(%s)' % (func_name, func_args)) 601 setup = _setup_template % ngroups 602 bmark = Benchmark(stmt, setup, start_date=START_DATE) 603 # MUST set name 604 bmark.name = bmark_name 605 return bmark 606 607 608 def inject_bmark_into_globals(bmark): 609 if not bmark.name: 610 raise AssertionError('benchmark must have a name') 611 
globals()[bmark.name] = bmark 612 613 614 for ngroups in ngroups_list: 615 for func_name in no_arg_func_list: 616 bmark = make_large_ngroups_bmark(ngroups, func_name) 617 inject_bmark_into_globals(bmark) 618 619 # avoid bmark to be collected as Benchmark object 620 del bmark 621 [end of vb_suite/groupby.py] [start of vb_suite/perf_HEAD.py] 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 4 from __future__ import print_function 5 6 """Run all the vbenches in `suite`, and post the results as a json blob to gist 7 8 """ 9 10 import urllib2 11 from contextlib import closing 12 from urllib2 import urlopen 13 import json 14 15 import pandas as pd 16 17 WEB_TIMEOUT = 10 18 19 20 def get_travis_data(): 21 """figure out what worker we're running on, and the number of jobs it's running 22 """ 23 import os 24 jobid = os.environ.get("TRAVIS_JOB_ID") 25 if not jobid: 26 return None, None 27 28 with closing(urlopen("https://api.travis-ci.org/workers/")) as resp: 29 workers = json.loads(resp.read()) 30 31 host = njobs = None 32 for item in workers: 33 host = item.get("host") 34 id = ((item.get("payload") or {}).get("job") or {}).get("id") 35 if id and str(id) == str(jobid): 36 break 37 if host: 38 njobs = len( 39 [x for x in workers if host in x['host'] and x['payload']]) 40 41 return host, njobs 42 43 44 def get_utcdatetime(): 45 try: 46 from datetime import datetime 47 return datetime.utcnow().isoformat(" ") 48 except: 49 pass 50 51 52 def dump_as_gist(data, desc="The Commit", njobs=None): 53 host, njobs2 = get_travis_data()[:2] 54 55 if njobs: # be slightly more reliable 56 njobs = max(njobs, njobs2) 57 58 content = dict(version="0.1.1", 59 timings=data, 60 datetime=get_utcdatetime(), # added in 0.1.1 61 hostname=host, # added in 0.1.1 62 njobs=njobs # added in 0.1.1, a measure of load on the travis box 63 ) 64 65 payload = dict(description=desc, 66 public=True, 67 files={'results.json': dict(content=json.dumps(content))}) 68 try: 69 with closing(urlopen("https://api.github.com/gists", 70 json.dumps(payload), timeout=WEB_TIMEOUT)) as r: 71 if 200 <= r.getcode() < 300: 72 print("\n\n" + "-" * 80) 73 74 gist = json.loads(r.read()) 75 file_raw_url = gist['files'].items()[0][1]['raw_url'] 76 print("[vbench-gist-raw_url] %s" % file_raw_url) 77 print("[vbench-html-url] %s" % gist['html_url']) 78 print("[vbench-api-url] %s" % gist['url']) 79 80 print("-" * 80 + "\n\n") 81 else: 82 print("api.github.com returned status %d" % r.getcode()) 83 except: 84 print("Error occured while dumping to gist") 85 86 87 def main(): 88 import warnings 89 from suite import benchmarks 90 91 exit_code = 0 92 warnings.filterwarnings('ignore', category=FutureWarning) 93 94 host, njobs = get_travis_data()[:2] 95 results = [] 96 for b in benchmarks: 97 try: 98 d = b.run() 99 d.update(dict(name=b.name)) 100 results.append(d) 101 msg = "{name:<40}: {timing:> 10.4f} [ms]" 102 print(msg.format(name=results[-1]['name'], 103 timing=results[-1]['timing'])) 104 105 except Exception as e: 106 exit_code = 1 107 if (type(e) == KeyboardInterrupt or 108 'KeyboardInterrupt' in str(d)): 109 raise KeyboardInterrupt() 110 111 msg = "{name:<40}: ERROR:\n<-------" 112 print(msg.format(name=b.name)) 113 if isinstance(d, dict): 114 if d['succeeded']: 115 print("\nException:\n%s\n" % str(e)) 116 else: 117 for k, v in sorted(d.iteritems()): 118 print("{k}: {v}".format(k=k, v=v)) 119 120 print("------->\n") 121 122 dump_as_gist(results, "testing", njobs=njobs) 123 124 return exit_code 125 126 127 if __name__ == "__main__": 128 import sys 129 
sys.exit(main()) 130 131 ##################################################### 132 # functions for retrieving and processing the results 133 134 135 def get_vbench_log(build_url): 136 with closing(urllib2.urlopen(build_url)) as r: 137 if not (200 <= r.getcode() < 300): 138 return 139 140 s = json.loads(r.read()) 141 s = [x for x in s['matrix'] if "VBENCH" in ((x.get('config', {}) 142 or {}).get('env', {}) or {})] 143 # s=[x for x in s['matrix']] 144 if not s: 145 return 146 id = s[0]['id'] # should be just one for now 147 with closing(urllib2.urlopen("https://api.travis-ci.org/jobs/%s" % id)) as r2: 148 if not 200 <= r.getcode() < 300: 149 return 150 s2 = json.loads(r2.read()) 151 return s2.get('log') 152 153 154 def get_results_raw_url(build): 155 "Taks a Travis a build number, retrieves the build log and extracts the gist url" 156 import re 157 log = get_vbench_log("https://api.travis-ci.org/builds/%s" % build) 158 if not log: 159 return 160 l = [x.strip( 161 ) for x in log.split("\n") if re.match(".vbench-gist-raw_url", x)] 162 if l: 163 s = l[0] 164 m = re.search("(https://[^\s]+)", s) 165 if m: 166 return m.group(0) 167 168 169 def convert_json_to_df(results_url): 170 """retrieve json results file from url and return df 171 172 df contains timings for all successful vbenchmarks 173 """ 174 175 with closing(urlopen(results_url)) as resp: 176 res = json.loads(resp.read()) 177 timings = res.get("timings") 178 if not timings: 179 return 180 res = [x for x in timings if x.get('succeeded')] 181 df = pd.DataFrame(res) 182 df = df.set_index("name") 183 return df 184 185 186 def get_build_results(build): 187 "Returns a df with the results of the VBENCH job associated with the travis build" 188 r_url = get_results_raw_url(build) 189 if not r_url: 190 return 191 192 return convert_json_to_df(r_url) 193 194 195 def get_all_results(repo_id=53976): # travis pydata/pandas id 196 """Fetches the VBENCH results for all travis builds, and returns a list of result df 197 198 unsuccesful individual vbenches are dropped. 199 """ 200 from collections import OrderedDict 201 202 def get_results_from_builds(builds): 203 dfs = OrderedDict() 204 for build in builds: 205 build_id = build['id'] 206 build_number = build['number'] 207 print(build_number) 208 res = get_build_results(build_id) 209 if res is not None: 210 dfs[build_number] = res 211 return dfs 212 213 base_url = 'https://api.travis-ci.org/builds?url=%2Fbuilds&repository_id={repo_id}' 214 url = base_url.format(repo_id=repo_id) 215 url_after = url + '&after_number={after}' 216 dfs = OrderedDict() 217 218 while True: 219 with closing(urlopen(url)) as r: 220 if not (200 <= r.getcode() < 300): 221 break 222 builds = json.loads(r.read()) 223 res = get_results_from_builds(builds) 224 if not res: 225 break 226 last_build_number = min(res.keys()) 227 dfs.update(res) 228 url = url_after.format(after=last_build_number) 229 230 return dfs 231 232 233 def get_all_results_joined(repo_id=53976): 234 def mk_unique(df): 235 for dupe in df.index.get_duplicates(): 236 df = df.ix[df.index != dupe] 237 return df 238 dfs = get_all_results(repo_id) 239 for k in dfs: 240 dfs[k] = mk_unique(dfs[k]) 241 ss = [pd.Series(v.timing, name=k) for k, v in dfs.iteritems()] 242 results = pd.concat(reversed(ss), 1) 243 return results 244 [end of vb_suite/perf_HEAD.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
9be218045216c207b72545a9ea5338303598d89d
Broken nunique on Series group by The following code works in 0.16.2 and not in latest master: ``` python data = pd.DataFrame( [[100, 1, 'Alice'], [200, 2, 'Bob'], [300, 3, 'Charlie'], [-400, 4, 'Dan'], [500, 5, 'Edith']], columns=['amount', 'id', 'name'] ) expected = data.groupby(['id', 'amount'])['name'].nunique() ``` Going to bisect this today unless someone beats me to it.
https://github.com/pydata/pandas/pull/10894 Can we just revert that change and keep the test? The only other solution I see is to coerce `val` to `str` if it's an object dtype if its not `int64` factorize it ``` ipdb> p np.lexsort((pd.factorize(val)[0],ids)) array([0, 1, 2, 3, 4]) ``` ok or rather it has a `TypeError`, though this shouldn't have gotten this far if that was the case....
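Editor's note: an illustrative sketch, not part of this dataset row, of the idea discussed in the hints above — factorize object-dtype values into integer codes before handing them to `np.lexsort`, so the sort keys are always numeric. The sketch uses the public `pd.factorize`; the eventual patch uses the internal `algos.factorize` and wraps the plain `np.lexsort` call in `try/except TypeError`, since object-dtype keys could raise on the NumPy versions of that era.

```python
# Illustrative sketch of the "if it's not int64, factorize it" suggestion.
import numpy as np
import pandas as pd

ids = np.array([0, 1, 2, 3, 4])                                  # integer group ids
val = np.array(['Alice', 'Bob', 'Charlie', 'Dan', 'Edith'], dtype=object)

codes, _ = pd.factorize(val, sort=False)   # object values -> integer codes
sorter = np.lexsort((codes, ids))          # both keys numeric, safe to lexsort
print(sorter)                              # array([0, 1, 2, 3, 4]) for this input
```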
2015-09-12T18:31:35Z
<patch> diff --git a/doc/source/whatsnew/v0.17.0.txt b/doc/source/whatsnew/v0.17.0.txt --- a/doc/source/whatsnew/v0.17.0.txt +++ b/doc/source/whatsnew/v0.17.0.txt @@ -1014,7 +1014,7 @@ Performance Improvements - Development support for benchmarking with the `Air Speed Velocity library <https://github.com/spacetelescope/asv/>`_ (:issue:`8316`) - Added vbench benchmarks for alternative ExcelWriter engines and reading Excel files (:issue:`7171`) - Performance improvements in ``Categorical.value_counts`` (:issue:`10804`) -- Performance improvements in ``SeriesGroupBy.nunique`` and ``SeriesGroupBy.value_counts`` (:issue:`10820`) +- Performance improvements in ``SeriesGroupBy.nunique`` and ``SeriesGroupBy.value_counts`` (:issue:`10820`, :issue:`11077`) - Performance improvements in ``DataFrame.drop_duplicates`` with integer dtypes (:issue:`10917`) - 4x improvement in ``timedelta`` string parsing (:issue:`6755`, :issue:`10426`) - 8x improvement in ``timedelta64`` and ``datetime64`` ops (:issue:`6755`) diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py --- a/pandas/core/groupby.py +++ b/pandas/core/groupby.py @@ -2565,7 +2565,17 @@ def nunique(self, dropna=True): ids, _, _ = self.grouper.group_info val = self.obj.get_values() - sorter = np.lexsort((val, ids)) + try: + sorter = np.lexsort((val, ids)) + except TypeError: # catches object dtypes + assert val.dtype == object, \ + 'val.dtype must be object, got %s' % val.dtype + val, _ = algos.factorize(val, sort=False) + sorter = np.lexsort((val, ids)) + isnull = lambda a: a == -1 + else: + isnull = com.isnull + ids, val = ids[sorter], val[sorter] # group boundries are where group ids change </patch>
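Editor's note: a usage sketch added next to this row for illustration; it is not part of the row. It re-runs the reporter's example so the expected outcome of the fix above is explicit: every `(id, amount)` pair occurs exactly once in the frame, so each group should report one distinct name.

```python
# Re-running the reporter's snippet; with the fix applied this should return
# a Series of ones indexed by the (id, amount) MultiIndex.
import pandas as pd

data = pd.DataFrame(
    [[100, 1, 'Alice'],
     [200, 2, 'Bob'],
     [300, 3, 'Charlie'],
     [-400, 4, 'Dan'],
     [500, 5, 'Edith']],
    columns=['amount', 'id', 'name'])

result = data.groupby(['id', 'amount'])['name'].nunique()
print(result)        # expected: the value 1 for each of the five groups
assert (result == 1).all()
```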
[]
[]
mesonbuild__meson-3975
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> gtkdoc-scangobj fails to find the shared library it scans (W32) For example: `Error in gtkdoc helper script:` `'mingw\\bin\\python.EXE' failed with status 3221225781` `WARNING:root:Running scanner failed: -1073741515, command: ./gtk4-scan.exe` This is because the shared library being scanned is not in library search path. On *nix that is handled by adjusting `LD_LIBRARY_PATH`, but on W32 the variable that needs adjustment is `PATH`, and meson doesn't change it. Here's a patch to fix that. [0001-gtk-doc-Use-LD_LIBRARY_PATH-to-modify-PATH-on-W32.patch.txt](https://github.com/mesonbuild/meson/files/1843401/0001-gtk-doc-Use-LD_LIBRARY_PATH-to-modify-PATH-on-W32.patch.txt) </issue> <code> [start of README.md] 1 <p align="center"> 2 <img src="http://mesonbuild.com/assets/images/meson_logo.png"> 3 </p> 4 Meson® is a project to create the best possible next-generation 5 build system. 6 7 #### Status 8 9 [![PyPI](https://img.shields.io/pypi/v/meson.svg)](https://pypi.python.org/pypi/meson) 10 [![Travis](https://travis-ci.org/mesonbuild/meson.svg?branch=master)](https://travis-ci.org/mesonbuild/meson) 11 [![Appveyor](https://ci.appveyor.com/api/projects/status/7jfaotriu8d8ncov?svg=true)](https://ci.appveyor.com/project/mesonbuild/meson) 12 [![Codecov](https://codecov.io/gh/mesonbuild/meson/coverage.svg?branch=master)](https://codecov.io/gh/mesonbuild/meson/branch/master) 13 14 #### Dependencies 15 16 - [Python](http://python.org) (version 3.5 or newer) 17 - [Ninja](https://ninja-build.org) (version 1.5 or newer) 18 19 #### Installing from source 20 21 You can run Meson directly from a revision control checkout or an 22 extracted tarball. If you wish you can install it locally with the 23 standard Python distutils command `python3 setup.py install <your 24 options here>`. 25 26 Meson is also available from 27 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed 28 with `pip3 install meson` (this does not require a source checkout, 29 pip will download the package automatically). The exact command to 30 type to install with pip can vary between systems, be sure to use the 31 Python 3 version of pip. 32 33 #### Running 34 35 Meson requires that you have a source directory and a build directory 36 and that these two are different. In your source root must exist a file 37 called 'meson.build'. To generate the build system run this command: 38 39 `meson <source directory> <build directory>` 40 41 Depending on how you obtained Meson the command might also be called 42 `meson.py` instead of plain `meson`. In the rest of this document we 43 are going to use the latter form. 44 45 You can omit either of the two directories, and Meson will substitute 46 the current directory and autodetect what you mean. This allows you to 47 do things like this: 48 49 `cd source_root; mkdir builddir; cd builddir; meson ..` 50 51 or 52 53 `cd source_root; mkdir builddir; meson builddir` 54 55 To compile, cd into your build directory and type `ninja`. To run unit 56 tests, type `ninja test`. 57 58 Install is the same but it can take an extra argument: 59 60 `DESTDIR=/destdir/path ninja install` 61 62 `DESTDIR` can be omitted. If you are installing to system directories, 63 you may need to run this command with sudo. 64 65 66 #### Contributing 67 68 We love code contributions. See the contributing.txt file for 69 details. 70 71 72 #### IRC 73 74 The irc channel for Meson is `#mesonbuild` over at Freenode. 
75 76 77 #### Further info 78 79 More information about the Meson build system can be found at the 80 [project's home page](http://mesonbuild.com). 81 82 Meson is a registered trademark of Jussi Pakkanen 83 [end of README.md] [start of mesonbuild/dependencies/boost.py] 1 # Copyright 2013-2017 The Meson development team 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 7 # http://www.apache.org/licenses/LICENSE-2.0 8 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 # This file contains the detection logic for miscellaneous external dependencies. 16 17 import glob 18 import os 19 20 from .. import mlog 21 from .. import mesonlib 22 from ..environment import detect_cpu_family 23 24 from .base import (DependencyException, ExternalDependency) 25 26 # On windows 3 directory layouts are supported: 27 # * The default layout (versioned) installed: 28 # - $BOOST_ROOT/include/boost-x_x/boost/*.hpp 29 # - $BOOST_ROOT/lib/*.lib 30 # * The non-default layout (system) installed: 31 # - $BOOST_ROOT/include/boost/*.hpp 32 # - $BOOST_ROOT/lib/*.lib 33 # * The pre-built binaries from sf.net: 34 # - $BOOST_ROOT/boost/*.hpp 35 # - $BOOST_ROOT/lib<arch>-<compiler>/*.lib where arch=32/64 and compiler=msvc-14.1 36 # 37 # Note that we should also try to support: 38 # mingw-w64 / Windows : libboost_<module>-mt.a (location = <prefix>/mingw64/lib/) 39 # libboost_<module>-mt.dll.a 40 # 41 # Library names supported: 42 # - libboost_<module>-<compiler>-mt-gd-x_x.lib (static) 43 # - boost_<module>-<compiler>-mt-gd-x_x.lib|.dll (shared) 44 # - libboost_<module>.lib (static) 45 # - boost_<module>.lib|.dll (shared) 46 # where compiler is vc141 for example. 47 # 48 # NOTE: -gd means runtime and build time debugging is on 49 # -mt means threading=multi 50 # 51 # The `modules` argument accept library names. This is because every module that 52 # has libraries to link against also has multiple options regarding how to 53 # link. See for example: 54 # * http://www.boost.org/doc/libs/1_65_1/libs/test/doc/html/boost_test/usage_variants.html 55 # * http://www.boost.org/doc/libs/1_65_1/doc/html/stacktrace/configuration_and_build.html 56 # * http://www.boost.org/doc/libs/1_65_1/libs/math/doc/html/math_toolkit/main_tr1.html 57 58 # **On Unix**, official packaged versions of boost libraries follow the following schemes: 59 # 60 # Linux / Debian: libboost_<module>.so -> libboost_<module>.so.1.66.0 61 # Linux / Red Hat: libboost_<module>.so -> libboost_<module>.so.1.66.0 62 # Linux / OpenSuse: libboost_<module>.so -> libboost_<module>.so.1.66.0 63 # Win / Cygwin: libboost_<module>.dll.a (location = /usr/lib) 64 # libboost_<module>.a 65 # cygboost_<module>_1_64.dll (location = /usr/bin) 66 # Win / VS: boost_<module>-vc<ver>-mt[-gd]-<arch>-1_67.dll (location = C:/local/boost_1_67_0) 67 # Mac / homebrew: libboost_<module>.dylib + libboost_<module>-mt.dylib (location = /usr/local/lib) 68 # Mac / macports: libboost_<module>.dylib + libboost_<module>-mt.dylib (location = /opt/local/lib) 69 # 70 # Its not clear that any other abi tags (e.g. -gd) are used in official packages. 
71 # 72 # On Linux systems, boost libs have multithreading support enabled, but without the -mt tag. 73 # 74 # Boost documentation recommends using complex abi tags like "-lboost_regex-gcc34-mt-d-1_36". 75 # (See http://www.boost.org/doc/libs/1_66_0/more/getting_started/unix-variants.html#library-naming) 76 # However, its not clear that any Unix distribution follows this scheme. 77 # Furthermore, the boost documentation for unix above uses examples from windows like 78 # "libboost_regex-vc71-mt-d-x86-1_34.lib", so apparently the abi tags may be more aimed at windows. 79 # 80 # Probably we should use the linker search path to decide which libraries to use. This will 81 # make it possible to find the macports boost libraries without setting BOOST_ROOT, and will 82 # also mean that it would be possible to use user-installed boost libraries when official 83 # packages are installed. 84 # 85 # We thus follow the following strategy: 86 # 1. Look for libraries using compiler.find_library( ) 87 # 1.1 On Linux, just look for boost_<module> 88 # 1.2 On other systems (e.g. Mac) look for boost_<module>-mt if multithreading. 89 # 1.3 Otherwise look for boost_<module> 90 # 2. Fall back to previous approach 91 # 2.1. Search particular directories. 92 # 2.2. Find boost libraries with unknown suffixes using file-name globbing. 93 94 # TODO: Unix: Don't assume we know where the boost dir is, rely on -Idir and -Ldir being set. 95 # TODO: Allow user to specify suffix in BOOST_SUFFIX, or add specific options like BOOST_DEBUG for 'd' for debug. 96 97 class BoostDependency(ExternalDependency): 98 def __init__(self, environment, kwargs): 99 super().__init__('boost', environment, 'cpp', kwargs) 100 self.need_static_link = ['boost_exception', 'boost_test_exec_monitor'] 101 self.is_debug = environment.coredata.get_builtin_option('buildtype').startswith('debug') 102 threading = kwargs.get("threading", "multi") 103 self.is_multithreading = threading == "multi" 104 105 self.requested_modules = self.get_requested(kwargs) 106 107 self.boost_root = None 108 self.boost_roots = [] 109 self.incdir = None 110 self.libdir = None 111 112 if 'BOOST_ROOT' in os.environ: 113 self.boost_root = os.environ['BOOST_ROOT'] 114 self.boost_roots = [self.boost_root] 115 if not os.path.isabs(self.boost_root): 116 raise DependencyException('BOOST_ROOT must be an absolute path.') 117 if 'BOOST_INCLUDEDIR' in os.environ: 118 self.incdir = os.environ['BOOST_INCLUDEDIR'] 119 if 'BOOST_LIBRARYDIR' in os.environ: 120 self.libdir = os.environ['BOOST_LIBRARYDIR'] 121 122 if self.boost_root is None: 123 if mesonlib.for_windows(self.want_cross, self.env): 124 self.boost_roots = self.detect_win_roots() 125 else: 126 self.boost_roots = self.detect_nix_roots() 127 128 if self.incdir is None: 129 if mesonlib.for_windows(self.want_cross, self.env): 130 self.incdir = self.detect_win_incdir() 131 else: 132 self.incdir = self.detect_nix_incdir() 133 134 if self.check_invalid_modules(): 135 self.log_fail() 136 return 137 138 mlog.debug('Boost library root dir is', mlog.bold(self.boost_root)) 139 mlog.debug('Boost include directory is', mlog.bold(self.incdir)) 140 141 # 1. check if we can find BOOST headers. 142 self.detect_headers_and_version() 143 144 # 2. check if we can find BOOST libraries. 145 if self.is_found: 146 self.detect_lib_modules() 147 mlog.debug('Boost library directory is', mlog.bold(self.libdir)) 148 149 # 3. 
Report success or failure 150 if self.is_found: 151 self.log_success() 152 else: 153 self.log_fail() 154 155 def check_invalid_modules(self): 156 invalid_modules = [c for c in self.requested_modules if 'boost_' + c not in BOOST_LIBS] 157 158 # previous versions of meson allowed include dirs as modules 159 remove = [] 160 for m in invalid_modules: 161 if m in BOOST_DIRS: 162 mlog.warning('Requested boost library', mlog.bold(m), 'that doesn\'t exist. ' 163 'This will be an error in the future') 164 remove.append(m) 165 166 self.requested_modules = [x for x in self.requested_modules if x not in remove] 167 invalid_modules = [x for x in invalid_modules if x not in remove] 168 169 if invalid_modules: 170 mlog.error('Invalid Boost modules: ' + ', '.join(invalid_modules)) 171 return True 172 else: 173 return False 174 175 def log_fail(self): 176 module_str = ', '.join(self.requested_modules) 177 mlog.log("Dependency Boost (%s) found:" % module_str, mlog.red('NO')) 178 179 def log_success(self): 180 module_str = ', '.join(self.requested_modules) 181 if self.boost_root: 182 info = self.version + ', ' + self.boost_root 183 else: 184 info = self.version 185 mlog.log('Dependency Boost (%s) found:' % module_str, mlog.green('YES'), info) 186 187 def detect_nix_roots(self): 188 return [os.path.abspath(os.path.join(x, '..')) 189 for x in self.clib_compiler.get_default_include_dirs()] 190 191 def detect_win_roots(self): 192 res = [] 193 # Where boost documentation says it should be 194 globtext = 'C:\\Program Files\\boost\\boost_*' 195 files = glob.glob(globtext) 196 res.extend(files) 197 198 # Where boost built from source actually installs it 199 if os.path.isdir('C:\\Boost'): 200 res.append('C:\\Boost') 201 202 # Where boost prebuilt binaries are 203 globtext = 'C:\\local\\boost_*' 204 files = glob.glob(globtext) 205 res.extend(files) 206 return res 207 208 def detect_nix_incdir(self): 209 if self.boost_root: 210 return os.path.join(self.boost_root, 'include') 211 return None 212 213 # FIXME: Should pick a version that matches the requested version 214 # Returns the folder that contains the boost folder. 
215 def detect_win_incdir(self): 216 for root in self.boost_roots: 217 globtext = os.path.join(root, 'include', 'boost-*') 218 incdirs = glob.glob(globtext) 219 if len(incdirs) > 0: 220 return incdirs[0] 221 incboostdir = os.path.join(root, 'include', 'boost') 222 if os.path.isdir(incboostdir): 223 return os.path.join(root, 'include') 224 incboostdir = os.path.join(root, 'boost') 225 if os.path.isdir(incboostdir): 226 return root 227 return None 228 229 def get_compile_args(self): 230 args = [] 231 include_dir = self.incdir 232 233 # Use "-isystem" when including boost headers instead of "-I" 234 # to avoid compiler warnings/failures when "-Werror" is used 235 236 # Careful not to use "-isystem" on default include dirs as it 237 # breaks some of the headers for certain gcc versions 238 239 # For example, doing g++ -isystem /usr/include on a simple 240 # "int main()" source results in the error: 241 # "/usr/include/c++/6.3.1/cstdlib:75:25: fatal error: stdlib.h: No such file or directory" 242 243 # See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70129 244 # and http://stackoverflow.com/questions/37218953/isystem-on-a-system-include-directory-causes-errors 245 # for more details 246 247 if include_dir and include_dir not in self.clib_compiler.get_default_include_dirs(): 248 args.append("".join(self.clib_compiler.get_include_args(include_dir, True))) 249 return args 250 251 def get_requested(self, kwargs): 252 candidates = mesonlib.extract_as_list(kwargs, 'modules') 253 for c in candidates: 254 if not isinstance(c, str): 255 raise DependencyException('Boost module argument is not a string.') 256 return candidates 257 258 def detect_headers_and_version(self): 259 try: 260 version = self.clib_compiler.get_define('BOOST_LIB_VERSION', '#include <boost/version.hpp>', self.env, self.get_compile_args(), []) 261 except mesonlib.EnvironmentException: 262 return 263 except TypeError: 264 return 265 # Remove quotes 266 version = version[1:-1] 267 # Fix version string 268 self.version = version.replace('_', '.') 269 self.is_found = True 270 271 def detect_lib_modules(self): 272 self.lib_modules = {} 273 # 1. Try to find modules using compiler.find_library( ) 274 if self.find_libraries_with_abi_tags(self.abi_tags()): 275 pass 276 # 2. Fall back to the old method 277 else: 278 if mesonlib.for_windows(self.want_cross, self.env): 279 self.detect_lib_modules_win() 280 else: 281 self.detect_lib_modules_nix() 282 283 # 3. Check if we can find the modules 284 for m in self.requested_modules: 285 if 'boost_' + m not in self.lib_modules: 286 mlog.debug('Requested Boost library {!r} not found'.format(m)) 287 self.is_found = False 288 289 def modname_from_filename(self, filename): 290 modname = os.path.basename(filename) 291 modname = modname.split('.', 1)[0] 292 modname = modname.split('-', 1)[0] 293 if modname.startswith('libboost'): 294 modname = modname[3:] 295 return modname 296 297 def compiler_tag(self): 298 tag = None 299 compiler = self.env.detect_cpp_compiler(self.want_cross) 300 if mesonlib.for_windows(self.want_cross, self.env): 301 if compiler.get_id() == 'msvc': 302 comp_ts_version = compiler.get_toolset_version() 303 compiler_ts = comp_ts_version.split('.') 304 # FIXME - what about other compilers? 
305 tag = '-vc{}{}'.format(compiler_ts[0], compiler_ts[1]) 306 else: 307 tag = '' 308 return tag 309 310 def threading_tag(self): 311 if not self.is_multithreading: 312 return '' 313 314 if mesonlib.for_darwin(self.want_cross, self.env): 315 # - Mac: requires -mt for multithreading, so should not fall back to non-mt libraries. 316 return '-mt' 317 elif mesonlib.for_windows(self.want_cross, self.env): 318 # - Windows: requires -mt for multithreading, so should not fall back to non-mt libraries. 319 return '-mt' 320 else: 321 # - Linux: leaves off -mt but libraries are multithreading-aware. 322 # - Cygwin: leaves off -mt but libraries are multithreading-aware. 323 return '' 324 325 def version_tag(self): 326 return '-' + self.version.replace('.', '_') 327 328 def debug_tag(self): 329 return '-gd' if self.is_debug else '' 330 331 def arch_tag(self): 332 # currently only applies to windows msvc installed binaries 333 if self.env.detect_cpp_compiler(self.want_cross).get_id() != 'msvc': 334 return '' 335 # pre-compiled binaries only added arch tag for versions > 1.64 336 if float(self.version) < 1.65: 337 return '' 338 arch = detect_cpu_family(self.env.coredata.compilers) 339 if arch == 'x86': 340 return '-x32' 341 elif arch == 'x86_64': 342 return '-x64' 343 return '' 344 345 def versioned_abi_tag(self): 346 return self.compiler_tag() + self.threading_tag() + self.debug_tag() + self.arch_tag() + self.version_tag() 347 348 # FIXME - how to handle different distributions, e.g. for Mac? Currently we handle homebrew and macports, but not fink. 349 def abi_tags(self): 350 if mesonlib.for_windows(self.want_cross, self.env): 351 return [self.versioned_abi_tag(), self.threading_tag()] 352 else: 353 return [self.threading_tag()] 354 355 def sourceforge_dir(self): 356 if self.env.detect_cpp_compiler(self.want_cross).get_id() != 'msvc': 357 return None 358 comp_ts_version = self.env.detect_cpp_compiler(self.want_cross).get_toolset_version() 359 arch = detect_cpu_family(self.env.coredata.compilers) 360 if arch == 'x86': 361 return 'lib32-msvc-{}'.format(comp_ts_version) 362 elif arch == 'x86_64': 363 return 'lib64-msvc-{}'.format(comp_ts_version) 364 else: 365 # Does anyone do Boost cross-compiling to other archs on Windows? 
366 return None 367 368 def find_libraries_with_abi_tag(self, tag): 369 370 # All modules should have the same tag 371 self.lib_modules = {} 372 373 all_found = True 374 375 for module in self.requested_modules: 376 libname = 'boost_' + module + tag 377 378 args = self.clib_compiler.find_library(libname, self.env, self.extra_lib_dirs()) 379 if args is None: 380 mlog.debug("Couldn\'t find library '{}' for boost module '{}' (ABI tag = '{}')".format(libname, module, tag)) 381 all_found = False 382 else: 383 mlog.debug('Link args for boost module "{}" are {}'.format(module, args)) 384 self.lib_modules['boost_' + module] = args 385 386 return all_found 387 388 def find_libraries_with_abi_tags(self, tags): 389 for tag in tags: 390 if self.find_libraries_with_abi_tag(tag): 391 return True 392 return False 393 394 def detect_lib_modules_win(self): 395 if not self.libdir: 396 # The libdirs in the distributed binaries (from sf) 397 lib_sf = self.sourceforge_dir() 398 399 if self.boost_root: 400 roots = [self.boost_root] 401 else: 402 roots = self.boost_roots 403 for root in roots: 404 # The default libdir when building 405 libdir = os.path.join(root, 'lib') 406 if os.path.isdir(libdir): 407 self.libdir = libdir 408 break 409 if lib_sf: 410 full_path = os.path.join(root, lib_sf) 411 if os.path.isdir(full_path): 412 self.libdir = full_path 413 break 414 415 if not self.libdir: 416 return 417 418 for name in self.need_static_link: 419 # FIXME - why are we only looking for *.lib? Mingw provides *.dll.a and *.a 420 libname = 'lib' + name + self.versioned_abi_tag() + '.lib' 421 if os.path.isfile(os.path.join(self.libdir, libname)): 422 self.lib_modules[self.modname_from_filename(libname)] = [libname] 423 else: 424 libname = "lib{}.lib".format(name) 425 if os.path.isfile(os.path.join(self.libdir, libname)): 426 self.lib_modules[name[3:]] = [libname] 427 428 # globber1 applies to a layout=system installation 429 # globber2 applies to a layout=versioned installation 430 globber1 = 'libboost_*' if self.static else 'boost_*' 431 globber2 = globber1 + self.versioned_abi_tag() 432 # FIXME - why are we only looking for *.lib? Mingw provides *.dll.a and *.a 433 globber2_matches = glob.glob(os.path.join(self.libdir, globber2 + '.lib')) 434 for entry in globber2_matches: 435 fname = os.path.basename(entry) 436 self.lib_modules[self.modname_from_filename(fname)] = [fname] 437 if len(globber2_matches) == 0: 438 # FIXME - why are we only looking for *.lib? Mingw provides *.dll.a and *.a 439 for entry in glob.glob(os.path.join(self.libdir, globber1 + '.lib')): 440 if self.static: 441 fname = os.path.basename(entry) 442 self.lib_modules[self.modname_from_filename(fname)] = [fname] 443 444 def detect_lib_modules_nix(self): 445 if self.static: 446 libsuffix = 'a' 447 elif mesonlib.for_darwin(self.want_cross, self.env): 448 libsuffix = 'dylib' 449 else: 450 libsuffix = 'so' 451 452 globber = 'libboost_*.{}'.format(libsuffix) 453 if self.libdir: 454 libdirs = [self.libdir] 455 elif self.boost_root is None: 456 libdirs = mesonlib.get_library_dirs() 457 else: 458 libdirs = [os.path.join(self.boost_root, 'lib')] 459 for libdir in libdirs: 460 for name in self.need_static_link: 461 libname = 'lib{}.a'.format(name) 462 if os.path.isfile(os.path.join(libdir, libname)): 463 self.lib_modules[name] = [libname] 464 for entry in glob.glob(os.path.join(libdir, globber)): 465 # I'm not 100% sure what to do here. Some distros 466 # have modules such as thread only as -mt versions. 
467 # On debian all packages are built threading=multi 468 # but not suffixed with -mt. 469 # FIXME: implement detect_lib_modules_{debian, redhat, ...} 470 # FIXME: this wouldn't work with -mt-gd either. -BDR 471 if self.is_multithreading and mesonlib.is_debianlike(): 472 pass 473 elif self.is_multithreading and entry.endswith('-mt.{}'.format(libsuffix)): 474 pass 475 elif not entry.endswith('-mt.{}'.format(libsuffix)): 476 pass 477 else: 478 continue 479 modname = self.modname_from_filename(entry) 480 if modname not in self.lib_modules: 481 self.lib_modules[modname] = [entry] 482 483 def extra_lib_dirs(self): 484 if self.libdir: 485 return [self.libdir] 486 elif self.boost_root: 487 return [os.path.join(self.boost_root, 'lib')] 488 return [] 489 490 def get_link_args(self, **kwargs): 491 args = [] 492 for d in self.extra_lib_dirs(): 493 args += self.clib_compiler.get_linker_search_args(d) 494 for lib in self.requested_modules: 495 args += self.lib_modules['boost_' + lib] 496 return args 497 498 def get_sources(self): 499 return [] 500 501 def need_threads(self): 502 return 'thread' in self.requested_modules 503 504 505 # Generated with boost_names.py 506 BOOST_LIBS = [ 507 'boost_atomic', 508 'boost_chrono', 509 'boost_chrono', 510 'boost_container', 511 'boost_context', 512 'boost_coroutine', 513 'boost_date_time', 514 'boost_exception', 515 'boost_fiber', 516 'boost_filesystem', 517 'boost_graph', 518 'boost_iostreams', 519 'boost_locale', 520 'boost_log', 521 'boost_log_setup', 522 'boost_math_tr1', 523 'boost_math_tr1f', 524 'boost_math_tr1l', 525 'boost_math_c99', 526 'boost_math_c99f', 527 'boost_math_c99l', 528 'boost_math_tr1', 529 'boost_math_tr1f', 530 'boost_math_tr1l', 531 'boost_math_c99', 532 'boost_math_c99f', 533 'boost_math_c99l', 534 'boost_math_tr1', 535 'boost_math_tr1f', 536 'boost_math_tr1l', 537 'boost_math_c99', 538 'boost_math_c99f', 539 'boost_math_c99l', 540 'boost_math_tr1', 541 'boost_math_tr1f', 542 'boost_math_tr1l', 543 'boost_math_c99', 544 'boost_math_c99f', 545 'boost_math_c99l', 546 'boost_math_tr1', 547 'boost_math_tr1f', 548 'boost_math_tr1l', 549 'boost_math_c99', 550 'boost_math_c99f', 551 'boost_math_c99l', 552 'boost_math_tr1', 553 'boost_math_tr1f', 554 'boost_math_tr1l', 555 'boost_math_c99', 556 'boost_math_c99f', 557 'boost_math_c99l', 558 'boost_mpi', 559 'boost_program_options', 560 'boost_python', 561 'boost_python3', 562 'boost_numpy', 563 'boost_numpy3', 564 'boost_random', 565 'boost_regex', 566 'boost_serialization', 567 'boost_wserialization', 568 'boost_signals', 569 'boost_stacktrace_noop', 570 'boost_stacktrace_backtrace', 571 'boost_stacktrace_addr2line', 572 'boost_stacktrace_basic', 573 'boost_stacktrace_windbg', 574 'boost_stacktrace_windbg_cached', 575 'boost_system', 576 'boost_prg_exec_monitor', 577 'boost_test_exec_monitor', 578 'boost_unit_test_framework', 579 'boost_thread', 580 'boost_timer', 581 'boost_type_erasure', 582 'boost_wave' 583 ] 584 585 BOOST_DIRS = [ 586 'lambda', 587 'optional', 588 'convert', 589 'system', 590 'uuid', 591 'archive', 592 'align', 593 'timer', 594 'chrono', 595 'gil', 596 'logic', 597 'signals', 598 'predef', 599 'tr1', 600 'multi_index', 601 'property_map', 602 'multi_array', 603 'context', 604 'random', 605 'endian', 606 'circular_buffer', 607 'proto', 608 'assign', 609 'format', 610 'math', 611 'phoenix', 612 'graph', 613 'locale', 614 'mpl', 615 'pool', 616 'unordered', 617 'core', 618 'exception', 619 'ptr_container', 620 'flyweight', 621 'range', 622 'typeof', 623 'thread', 624 'move', 
625 'spirit', 626 'dll', 627 'compute', 628 'serialization', 629 'ratio', 630 'msm', 631 'config', 632 'metaparse', 633 'coroutine2', 634 'qvm', 635 'program_options', 636 'concept', 637 'detail', 638 'hana', 639 'concept_check', 640 'compatibility', 641 'variant', 642 'type_erasure', 643 'mpi', 644 'test', 645 'fusion', 646 'log', 647 'sort', 648 'local_function', 649 'units', 650 'functional', 651 'preprocessor', 652 'integer', 653 'container', 654 'polygon', 655 'interprocess', 656 'numeric', 657 'iterator', 658 'wave', 659 'lexical_cast', 660 'multiprecision', 661 'utility', 662 'tti', 663 'asio', 664 'dynamic_bitset', 665 'algorithm', 666 'xpressive', 667 'bimap', 668 'signals2', 669 'type_traits', 670 'regex', 671 'statechart', 672 'parameter', 673 'icl', 674 'python', 675 'lockfree', 676 'intrusive', 677 'io', 678 'pending', 679 'geometry', 680 'tuple', 681 'iostreams', 682 'heap', 683 'atomic', 684 'filesystem', 685 'smart_ptr', 686 'function', 687 'fiber', 688 'type_index', 689 'accumulators', 690 'function_types', 691 'coroutine', 692 'vmd', 693 'date_time', 694 'property_tree', 695 'bind' 696 ] 697 [end of mesonbuild/dependencies/boost.py] [start of mesonbuild/scripts/gtkdochelper.py] 1 # Copyright 2015-2016 The Meson development team 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 7 # http://www.apache.org/licenses/LICENSE-2.0 8 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import sys, os 16 import subprocess 17 import shlex 18 import shutil 19 import argparse 20 from ..mesonlib import MesonException, Popen_safe 21 from . 
import destdir_join 22 23 parser = argparse.ArgumentParser() 24 25 parser.add_argument('--sourcedir', dest='sourcedir') 26 parser.add_argument('--builddir', dest='builddir') 27 parser.add_argument('--subdir', dest='subdir') 28 parser.add_argument('--headerdirs', dest='headerdirs') 29 parser.add_argument('--mainfile', dest='mainfile') 30 parser.add_argument('--modulename', dest='modulename') 31 parser.add_argument('--htmlargs', dest='htmlargs', default='') 32 parser.add_argument('--scanargs', dest='scanargs', default='') 33 parser.add_argument('--scanobjsargs', dest='scanobjsargs', default='') 34 parser.add_argument('--gobjects-types-file', dest='gobject_typesfile', default='') 35 parser.add_argument('--fixxrefargs', dest='fixxrefargs', default='') 36 parser.add_argument('--mkdbargs', dest='mkdbargs', default='') 37 parser.add_argument('--ld', dest='ld', default='') 38 parser.add_argument('--cc', dest='cc', default='') 39 parser.add_argument('--ldflags', dest='ldflags', default='') 40 parser.add_argument('--cflags', dest='cflags', default='') 41 parser.add_argument('--content-files', dest='content_files', default='') 42 parser.add_argument('--expand-content-files', dest='expand_content_files', default='') 43 parser.add_argument('--html-assets', dest='html_assets', default='') 44 parser.add_argument('--ignore-headers', dest='ignore_headers', default='') 45 parser.add_argument('--namespace', dest='namespace', default='') 46 parser.add_argument('--mode', dest='mode', default='') 47 parser.add_argument('--installdir', dest='install_dir') 48 parser.add_argument('--run', dest='run', default='') 49 50 def gtkdoc_run_check(cmd, cwd, library_path=None): 51 env = dict(os.environ) 52 if library_path: 53 env['LD_LIBRARY_PATH'] = library_path 54 # Put stderr into stdout since we want to print it out anyway. 55 # This preserves the order of messages. 
56 p, out = Popen_safe(cmd, cwd=cwd, env=env, stderr=subprocess.STDOUT)[0:2] 57 if p.returncode != 0: 58 err_msg = ["{!r} failed with status {:d}".format(cmd[0], p.returncode)] 59 if out: 60 err_msg.append(out) 61 raise MesonException('\n'.join(err_msg)) 62 elif out: 63 print(out) 64 65 def build_gtkdoc(source_root, build_root, doc_subdir, src_subdirs, 66 main_file, module, 67 html_args, scan_args, fixxref_args, mkdb_args, 68 gobject_typesfile, scanobjs_args, run, ld, cc, ldflags, cflags, 69 html_assets, content_files, ignore_headers, namespace, 70 expand_content_files, mode): 71 print("Building documentation for %s" % module) 72 73 src_dir_args = [] 74 for src_dir in src_subdirs: 75 if not os.path.isabs(src_dir): 76 dirs = [os.path.join(source_root, src_dir), 77 os.path.join(build_root, src_dir)] 78 else: 79 dirs = [src_dir] 80 src_dir_args += ['--source-dir=' + d for d in dirs] 81 82 doc_src = os.path.join(source_root, doc_subdir) 83 abs_out = os.path.join(build_root, doc_subdir) 84 htmldir = os.path.join(abs_out, 'html') 85 86 content_files += [main_file] 87 sections = os.path.join(doc_src, module + "-sections.txt") 88 if os.path.exists(sections): 89 content_files.append(sections) 90 91 overrides = os.path.join(doc_src, module + "-overrides.txt") 92 if os.path.exists(overrides): 93 content_files.append(overrides) 94 95 # Copy files to build directory 96 for f in content_files: 97 # FIXME: Use mesonlib.File objects so we don't need to do this 98 if not os.path.isabs(f): 99 f = os.path.join(doc_src, f) 100 elif os.path.commonpath([f, build_root]) == build_root: 101 continue 102 shutil.copyfile(f, os.path.join(abs_out, os.path.basename(f))) 103 104 shutil.rmtree(htmldir, ignore_errors=True) 105 try: 106 os.mkdir(htmldir) 107 except Exception: 108 pass 109 110 for f in html_assets: 111 f_abs = os.path.join(doc_src, f) 112 shutil.copyfile(f_abs, os.path.join(htmldir, os.path.basename(f_abs))) 113 114 scan_cmd = ['gtkdoc-scan', '--module=' + module] + src_dir_args 115 if ignore_headers: 116 scan_cmd.append('--ignore-headers=' + ' '.join(ignore_headers)) 117 # Add user-specified arguments 118 scan_cmd += scan_args 119 gtkdoc_run_check(scan_cmd, abs_out) 120 121 # Use the generated types file when available, otherwise gobject_typesfile 122 # would often be a path to source dir instead of build dir. 
123 if '--rebuild-types' in scan_args: 124 gobject_typesfile = os.path.join(abs_out, module + '.types') 125 126 if gobject_typesfile: 127 scanobjs_cmd = ['gtkdoc-scangobj'] + scanobjs_args + ['--types=' + gobject_typesfile, 128 '--module=' + module, 129 '--run=' + run, 130 '--cflags=' + cflags, 131 '--ldflags=' + ldflags, 132 '--cc=' + cc, 133 '--ld=' + ld, 134 '--output-dir=' + abs_out] 135 136 library_paths = [] 137 for ldflag in shlex.split(ldflags): 138 if ldflag.startswith('-Wl,-rpath,'): 139 library_paths.append(ldflag[11:]) 140 if 'LD_LIBRARY_PATH' in os.environ: 141 library_paths.append(os.environ['LD_LIBRARY_PATH']) 142 library_path = ':'.join(library_paths) 143 144 gtkdoc_run_check(scanobjs_cmd, build_root, library_path) 145 146 # Make docbook files 147 if mode == 'auto': 148 # Guessing is probably a poor idea but these keeps compat 149 # with previous behavior 150 if main_file.endswith('sgml'): 151 modeflag = '--sgml-mode' 152 else: 153 modeflag = '--xml-mode' 154 elif mode == 'xml': 155 modeflag = '--xml-mode' 156 elif mode == 'sgml': 157 modeflag = '--sgml-mode' 158 else: # none 159 modeflag = None 160 161 mkdb_cmd = ['gtkdoc-mkdb', 162 '--module=' + module, 163 '--output-format=xml', 164 '--expand-content-files=' + ' '.join(expand_content_files), 165 ] + src_dir_args 166 if namespace: 167 mkdb_cmd.append('--name-space=' + namespace) 168 if modeflag: 169 mkdb_cmd.append(modeflag) 170 if len(main_file) > 0: 171 # Yes, this is the flag even if the file is in xml. 172 mkdb_cmd.append('--main-sgml-file=' + main_file) 173 # Add user-specified arguments 174 mkdb_cmd += mkdb_args 175 gtkdoc_run_check(mkdb_cmd, abs_out) 176 177 # Make HTML documentation 178 mkhtml_cmd = ['gtkdoc-mkhtml', 179 '--path=' + ':'.join((doc_src, abs_out)), 180 module, 181 ] + html_args 182 if len(main_file) > 0: 183 mkhtml_cmd.append('../' + main_file) 184 else: 185 mkhtml_cmd.append('%s-docs.xml' % module) 186 # html gen must be run in the HTML dir 187 gtkdoc_run_check(mkhtml_cmd, os.path.join(abs_out, 'html')) 188 189 # Fix cross-references in HTML files 190 fixref_cmd = ['gtkdoc-fixxref', 191 '--module=' + module, 192 '--module-dir=html'] + fixxref_args 193 gtkdoc_run_check(fixref_cmd, abs_out) 194 195 def install_gtkdoc(build_root, doc_subdir, install_prefix, datadir, module): 196 source = os.path.join(build_root, doc_subdir, 'html') 197 final_destination = os.path.join(install_prefix, datadir, module) 198 shutil.rmtree(final_destination, ignore_errors=True) 199 shutil.copytree(source, final_destination) 200 201 def run(args): 202 options = parser.parse_args(args) 203 if len(options.htmlargs) > 0: 204 htmlargs = options.htmlargs.split('@@') 205 else: 206 htmlargs = [] 207 if len(options.scanargs) > 0: 208 scanargs = options.scanargs.split('@@') 209 else: 210 scanargs = [] 211 if len(options.scanobjsargs) > 0: 212 scanobjsargs = options.scanobjsargs.split('@@') 213 else: 214 scanobjsargs = [] 215 if len(options.fixxrefargs) > 0: 216 fixxrefargs = options.fixxrefargs.split('@@') 217 else: 218 fixxrefargs = [] 219 if len(options.mkdbargs) > 0: 220 mkdbargs = options.mkdbargs.split('@@') 221 else: 222 mkdbargs = [] 223 build_gtkdoc( 224 options.sourcedir, 225 options.builddir, 226 options.subdir, 227 options.headerdirs.split('@@'), 228 options.mainfile, 229 options.modulename, 230 htmlargs, 231 scanargs, 232 fixxrefargs, 233 mkdbargs, 234 options.gobject_typesfile, 235 scanobjsargs, 236 options.run, 237 options.ld, 238 options.cc, 239 options.ldflags, 240 options.cflags, 241 options.html_assets.split('@@') 
if options.html_assets else [], 242 options.content_files.split('@@') if options.content_files else [], 243 options.ignore_headers.split('@@') if options.ignore_headers else [], 244 options.namespace, 245 options.expand_content_files.split('@@') if options.expand_content_files else [], 246 options.mode) 247 248 if 'MESON_INSTALL_PREFIX' in os.environ: 249 destdir = os.environ.get('DESTDIR', '') 250 install_prefix = destdir_join(destdir, os.environ['MESON_INSTALL_PREFIX']) 251 install_dir = options.install_dir if options.install_dir else options.modulename 252 if os.path.isabs(install_dir): 253 install_dir = destdir_join(destdir, install_dir) 254 install_gtkdoc(options.builddir, 255 options.subdir, 256 install_prefix, 257 'share/gtk-doc/html', 258 install_dir) 259 return 0 260 261 if __name__ == '__main__': 262 sys.exit(run(sys.argv[1:])) 263 [end of mesonbuild/scripts/gtkdochelper.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
mesonbuild/meson
c9aea4e11c648f1051454132626bcb4aef976d6d
gtkdoc-scangobj fails to find the shared library it scans (W32) For example: `Error in gtkdoc helper script:` `'mingw\\bin\\python.EXE' failed with status 3221225781` `WARNING:root:Running scanner failed: -1073741515, command: ./gtk4-scan.exe` This is because the shared library being scanned is not in library search path. On *nix that is handled by adjusting `LD_LIBRARY_PATH`, but on W32 the variable that needs adjustment is `PATH`, and meson doesn't change it. Here's a patch to fix that. [0001-gtk-doc-Use-LD_LIBRARY_PATH-to-modify-PATH-on-W32.patch.txt](https://github.com/mesonbuild/meson/files/1843401/0001-gtk-doc-Use-LD_LIBRARY_PATH-to-modify-PATH-on-W32.patch.txt)
Please file a MR with that instead. I can confirm that this fixes the docs build for pango
2018-08-04T08:11:57Z
<patch> diff --git a/mesonbuild/scripts/gtkdochelper.py b/mesonbuild/scripts/gtkdochelper.py --- a/mesonbuild/scripts/gtkdochelper.py +++ b/mesonbuild/scripts/gtkdochelper.py @@ -17,7 +17,7 @@ import shlex import shutil import argparse -from ..mesonlib import MesonException, Popen_safe +from ..mesonlib import MesonException, Popen_safe, is_windows from . import destdir_join parser = argparse.ArgumentParser() @@ -47,10 +47,20 @@ parser.add_argument('--installdir', dest='install_dir') parser.add_argument('--run', dest='run', default='') -def gtkdoc_run_check(cmd, cwd, library_path=None): +def gtkdoc_run_check(cmd, cwd, library_paths=None): + if library_paths is None: + library_paths = [] + env = dict(os.environ) - if library_path: - env['LD_LIBRARY_PATH'] = library_path + if is_windows(): + if 'PATH' in env: + library_paths.extend(env['PATH'].split(os.pathsep)) + env['PATH'] = os.pathsep.join(library_paths) + else: + if 'LD_LIBRARY_PATH' in env: + library_paths.extend(env['LD_LIBRARY_PATH'].split(os.pathsep)) + env['LD_LIBRARY_PATH'] = os.pathsep.join(library_paths) + # Put stderr into stdout since we want to print it out anyway. # This preserves the order of messages. p, out = Popen_safe(cmd, cwd=cwd, env=env, stderr=subprocess.STDOUT)[0:2] @@ -137,11 +147,8 @@ def build_gtkdoc(source_root, build_root, doc_subdir, src_subdirs, for ldflag in shlex.split(ldflags): if ldflag.startswith('-Wl,-rpath,'): library_paths.append(ldflag[11:]) - if 'LD_LIBRARY_PATH' in os.environ: - library_paths.append(os.environ['LD_LIBRARY_PATH']) - library_path = ':'.join(library_paths) - gtkdoc_run_check(scanobjs_cmd, build_root, library_path) + gtkdoc_run_check(scanobjs_cmd, build_root, library_paths) # Make docbook files if mode == 'auto': </patch>
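As a rough standalone sketch of the behaviour the patch above adds to `gtkdoc_run_check`: prepend the library directories to `PATH` on Windows and to `LD_LIBRARY_PATH` elsewhere, keeping any entries that were already set. The helper name `build_scanner_env`, the `sys.platform` check, and the example path are illustrative assumptions only; the actual patch uses meson's `mesonlib.is_windows()` and adjusts the environment inside `gtkdoc_run_check` itself.

```python
import os
import sys

def build_scanner_env(library_paths):
    # Copy the current environment so the caller's os.environ is untouched.
    env = dict(os.environ)
    # Windows resolves shared libraries via PATH; other platforms via LD_LIBRARY_PATH.
    var = 'PATH' if sys.platform == 'win32' else 'LD_LIBRARY_PATH'
    paths = list(library_paths)
    # Keep whatever search path was already set, after the new entries.
    existing = env.get(var)
    if existing:
        paths.extend(existing.split(os.pathsep))
    env[var] = os.pathsep.join(paths)
    return env

# Hypothetical usage: environment a gtkdoc scanner subprocess could be launched with.
scanner_env = build_scanner_env(['/path/to/builddir/gtk'])
```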
[]
[]
pandas-dev__pandas-4313
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Date formatting option in to_csv? http://stackoverflow.com/questions/13999850/how-to-specify-date-format-when-using-pandas-to-csv http://stackoverflow.com/questions/15651527/how-to-stop-python-pandas-from-adding-000000-to-every-date </issue> <code> [start of README.md] 1 # pandas: powerful Python data analysis toolkit 2 3 ![Travis-CI Build Status](https://travis-ci.org/pydata/pandas.png) 4 5 ## What is it 6 **pandas** is a Python package providing fast, flexible, and expressive data 7 structures designed to make working with "relational" or "labeled" data both 8 easy and intuitive. It aims to be the fundamental high-level building block for 9 doing practical, **real world** data analysis in Python. Additionally, it has 10 the broader goal of becoming **the most powerful and flexible open source data 11 analysis / manipulation tool available in any language**. It is already well on 12 its way toward this goal. 13 14 ## Main Features 15 Here are just a few of the things that pandas does well: 16 17 - Easy handling of [**missing data**][missing-data] (represented as 18 `NaN`) in floating point as well as non-floating point data 19 - Size mutability: columns can be [**inserted and 20 deleted**][insertion-deletion] from DataFrame and higher dimensional 21 objects 22 - Automatic and explicit [**data alignment**][alignment]: objects can 23 be explicitly aligned to a set of labels, or the user can simply 24 ignore the labels and let `Series`, `DataFrame`, etc. automatically 25 align the data for you in computations 26 - Powerful, flexible [**group by**][groupby] functionality to perform 27 split-apply-combine operations on data sets, for both aggregating 28 and transforming data 29 - Make it [**easy to convert**][conversion] ragged, 30 differently-indexed data in other Python and NumPy data structures 31 into DataFrame objects 32 - Intelligent label-based [**slicing**][slicing], [**fancy 33 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 34 large data sets 35 - Intuitive [**merging**][merging] and [**joining**][joining] data 36 sets 37 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 38 data sets 39 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 40 labels per tick) 41 - Robust IO tools for loading data from [**flat files**][flat-files] 42 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 43 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 44 - [**Time series**][timeseries]-specific functionality: date range 45 generation and frequency conversion, moving window statistics, 46 moving window linear regressions, date shifting and lagging, etc. 
47 48 49 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 50 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 51 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 52 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 53 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 54 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 55 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 56 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 57 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 58 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 59 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 60 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 61 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 62 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 63 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 64 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 65 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 66 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 67 68 ## Where to get it 69 The source code is currently hosted on GitHub at: 70 http://github.com/pydata/pandas 71 72 Binary installers for the latest released version are available at the Python 73 package index 74 75 http://pypi.python.org/pypi/pandas/ 76 77 And via `easy_install`: 78 79 ```sh 80 easy_install pandas 81 ``` 82 83 or `pip`: 84 85 ```sh 86 pip install pandas 87 ``` 88 89 ## Dependencies 90 - [NumPy](http://www.numpy.org): 1.6.1 or higher 91 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 92 - [pytz](http://pytz.sourceforge.net) 93 - Needed for time zone support with ``pandas.date_range`` 94 95 ### Highly Recommended Dependencies 96 - [numexpr](http://code.google.com/p/numexpr/) 97 - Needed to accelerate some expression evaluation operations 98 - Required by PyTables 99 - [bottleneck](http://berkeleyanalytics.com/bottleneck) 100 - Needed to accelerate certain numerical operations 101 102 ### Optional dependencies 103 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher. 104 - [SciPy](http://www.scipy.org): miscellaneous statistical functions 105 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage 106 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting 107 - [statsmodels](http://statsmodels.sourceforge.net/) 108 - Needed for parts of `pandas.stats` 109 - For Excel I/O: 110 - [xlrd/xlwt](http://www.python-excel.org/) 111 - Excel reading (xlrd) and writing (xlwt) 112 - [openpyxl](http://packages.python.org/openpyxl/) 113 - openpyxl version 1.6.1 or higher, for writing .xlsx files 114 - xlrd >= 0.9.0 115 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter) 116 - Alternative Excel writer. 
117 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/) 118 - Needed for `pandas.io.gbq` 119 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access. 120 - One of the following combinations of libraries is needed to use the 121 top-level [`pandas.read_html`][read-html-docs] function: 122 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any 123 recent version of [html5lib][html5lib] is okay.) 124 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml] 125 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml] 126 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas] 127 for reasons as to why you should probably **not** take this approach. 128 129 #### Notes about HTML parsing libraries 130 - If you install [BeautifulSoup4][BeautifulSoup4] you must install 131 either [lxml][lxml] or [html5lib][html5lib] or both. 132 `pandas.read_html` will **not** work with *only* `BeautifulSoup4` 133 installed. 134 - You are strongly encouraged to read [HTML reading 135 gotchas][html-gotchas]. It explains issues surrounding the 136 installation and usage of the above three libraries. 137 - You may need to install an older version of 138 [BeautifulSoup4][BeautifulSoup4]: 139 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 140 32-bit Ubuntu/Debian 141 - Additionally, if you're using [Anaconda][Anaconda] you should 142 definitely read [the gotchas about HTML parsing][html-gotchas] 143 libraries 144 - If you're on a system with `apt-get` you can do 145 146 ```sh 147 sudo apt-get build-dep python-lxml 148 ``` 149 150 to get the necessary dependencies for installation of [lxml][lxml]. 151 This will prevent further headaches down the line. 152 153 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib" 154 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4" 155 [lxml]: http://lxml.de 156 [Anaconda]: https://store.continuum.io/cshop/anaconda 157 [NumPy]: http://numpy.scipy.org/ 158 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing 159 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html 160 161 ## Installation from sources 162 To install pandas from source you need Cython in addition to the normal 163 dependencies above. Cython can be installed from pypi: 164 165 ```sh 166 pip install cython 167 ``` 168 169 In the `pandas` directory (same one where you found this file after 170 cloning the git repo), execute: 171 172 ```sh 173 python setup.py install 174 ``` 175 176 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html): 177 178 ```sh 179 python setup.py develop 180 ``` 181 182 Alternatively, you can use `pip` if you want all the dependencies pulled 183 in automatically (the `-e` option is for installing it in [development 184 mode](http://www.pip-installer.org/en/latest/usage.html)): 185 186 ```sh 187 pip install -e . 188 ``` 189 190 On Windows, you will need to install MinGW and execute: 191 192 ```sh 193 python setup.py build --compiler=mingw32 194 python setup.py install 195 ``` 196 197 See http://pandas.pydata.org/ for more information. 198 199 ## License 200 BSD 201 202 ## Documentation 203 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 204 205 The Sphinx documentation should provide a good starting point for learning how 206 to use the library. 
Expect the docs to continue to expand as time goes on. 207 208 ## Background 209 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 210 has been under active development since then. 211 212 ## Discussion and Development 213 Since pandas development is related to a number of other scientific 214 Python projects, questions are welcome on the scipy-user mailing 215 list. Specialized discussions or design issues should take place on 216 the pystatsmodels mailing list / Google group, where 217 ``scikits.statsmodels`` and other libraries will also be discussed: 218 219 http://groups.google.com/group/pystatsmodels 220 [end of README.md] [start of doc/source/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # pandas documentation build configuration file, created by 4 # 5 # This file is execfile()d with the current directory set to its containing dir. 6 # 7 # Note that not all possible configuration values are present in this 8 # autogenerated file. 9 # 10 # All configuration values have a default; values that are commented out 11 # serve to show the default. 12 13 import sys 14 import os 15 16 # If extensions (or modules to document with autodoc) are in another directory, 17 # add these directories to sys.path here. If the directory is relative to the 18 # documentation root, use os.path.abspath to make it absolute, like shown here. 19 # sys.path.append(os.path.abspath('.')) 20 sys.path.insert(0, os.path.abspath('../sphinxext')) 21 22 sys.path.extend([ 23 24 # numpy standard doc extensions 25 os.path.join(os.path.dirname(__file__), 26 '..', '../..', 27 'sphinxext') 28 29 ]) 30 31 # -- General configuration ----------------------------------------------- 32 33 # Add any Sphinx extension module names here, as strings. They can be extensions 34 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. sphinxext. 35 36 extensions = ['sphinx.ext.autodoc', 37 'sphinx.ext.doctest', 38 'sphinx.ext.extlinks', 39 'sphinx.ext.todo', 40 'numpydoc', # used to parse numpy-style docstrings for autodoc 41 'ipython_directive', 42 'ipython_console_highlighting', 43 'sphinx.ext.intersphinx', 44 'sphinx.ext.todo', 45 'sphinx.ext.coverage', 46 'sphinx.ext.pngmath', 47 'sphinx.ext.ifconfig', 48 'sphinx.ext.autosummary', 49 'matplotlib.sphinxext.only_directives', 50 'matplotlib.sphinxext.plot_directive', 51 ] 52 53 # Add any paths that contain templates here, relative to this directory. 54 templates_path = ['_templates', '_templates/autosummary'] 55 56 # The suffix of source filenames. 57 source_suffix = '.rst' 58 59 # The encoding of source files. 60 # source_encoding = 'utf-8' 61 62 # The master toctree document. 63 master_doc = 'index' 64 65 # General information about the project. 66 project = u'pandas' 67 copyright = u'2008-2012, the pandas development team' 68 69 # The version info for the project you're documenting, acts as replacement for 70 # |version| and |release|, also used in various other places throughout the 71 # built documents. 72 # 73 # The short X.Y version. 74 import pandas 75 76 # version = '%s r%s' % (pandas.__version__, svn_version()) 77 version = '%s' % (pandas.__version__) 78 79 # The full version, including alpha/beta/rc tags. 80 release = version 81 82 # JP: added from sphinxdocs 83 autosummary_generate = True 84 85 # The language for content autogenerated by Sphinx. Refer to documentation 86 # for a list of supported languages. 
87 # language = None 88 89 # There are two options for replacing |today|: either, you set today to some 90 # non-false value, then it is used: 91 # today = '' 92 # Else, today_fmt is used as the format for a strftime call. 93 # today_fmt = '%B %d, %Y' 94 95 # List of documents that shouldn't be included in the build. 96 # unused_docs = [] 97 98 # List of directories, relative to source directory, that shouldn't be searched 99 # for source files. 100 exclude_trees = [] 101 102 # The reST default role (used for this markup: `text`) to use for all documents. 103 # default_role = None 104 105 # If true, '()' will be appended to :func: etc. cross-reference text. 106 # add_function_parentheses = True 107 108 # If true, the current module name will be prepended to all description 109 # unit titles (such as .. function::). 110 # add_module_names = True 111 112 # If true, sectionauthor and moduleauthor directives will be shown in the 113 # output. They are ignored by default. 114 # show_authors = False 115 116 # The name of the Pygments (syntax highlighting) style to use. 117 pygments_style = 'sphinx' 118 119 # A list of ignored prefixes for module index sorting. 120 # modindex_common_prefix = [] 121 122 123 # -- Options for HTML output --------------------------------------------- 124 125 # The theme to use for HTML and HTML Help pages. Major themes that come with 126 # Sphinx are currently 'default' and 'sphinxdoc'. 127 html_theme = 'nature_with_gtoc' 128 129 # The style sheet to use for HTML and HTML Help pages. A file of that name 130 # must exist either in Sphinx' static/ path, or in one of the custom paths 131 # given in html_static_path. 132 # html_style = 'statsmodels.css' 133 134 # Theme options are theme-specific and customize the look and feel of a theme 135 # further. For a list of options available for each theme, see the 136 # documentation. 137 # html_theme_options = {} 138 139 # Add any paths that contain custom themes here, relative to this directory. 140 html_theme_path = ['themes'] 141 142 # The name for this set of Sphinx documents. If None, it defaults to 143 # "<project> v<release> documentation". 144 # html_title = None 145 146 # A shorter title for the navigation bar. Default is the same as html_title. 147 # html_short_title = None 148 149 # The name of an image file (relative to this directory) to place at the top 150 # of the sidebar. 151 # html_logo = None 152 153 # The name of an image file (within the static path) to use as favicon of the 154 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 155 # pixels large. 156 # html_favicon = None 157 158 # Add any paths that contain custom static files (such as style sheets) here, 159 # relative to this directory. They are copied after the builtin static files, 160 # so a file named "default.css" will overwrite the builtin "default.css". 161 html_static_path = ['_static'] 162 163 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 164 # using the given strftime format. 165 # html_last_updated_fmt = '%b %d, %Y' 166 167 # If true, SmartyPants will be used to convert quotes and dashes to 168 # typographically correct entities. 169 # html_use_smartypants = True 170 171 # Custom sidebar templates, maps document names to template names. 172 # html_sidebars = {} 173 174 # Additional templates that should be rendered to pages, maps page names to 175 # template names. 176 # html_additional_pages = {} 177 178 # If false, no module index is generated. 
179 html_use_modindex = True 180 181 # If false, no index is generated. 182 # html_use_index = True 183 184 # If true, the index is split into individual pages for each letter. 185 # html_split_index = False 186 187 # If true, links to the reST sources are added to the pages. 188 # html_show_sourcelink = True 189 190 # If true, an OpenSearch description file will be output, and all pages will 191 # contain a <link> tag referring to it. The value of this option must be the 192 # base URL from which the finished HTML is served. 193 # html_use_opensearch = '' 194 195 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). 196 # html_file_suffix = '' 197 198 # Output file base name for HTML help builder. 199 htmlhelp_basename = 'pandas' 200 201 202 # -- Options for LaTeX output -------------------------------------------- 203 204 # The paper size ('letter' or 'a4'). 205 # latex_paper_size = 'letter' 206 207 # The font size ('10pt', '11pt' or '12pt'). 208 # latex_font_size = '10pt' 209 210 # Grouping the document tree into LaTeX files. List of tuples 211 # (source start file, target name, title, author, documentclass [howto/manual]). 212 latex_documents = [ 213 ('index', 'pandas.tex', 214 u'pandas: powerful Python data analysis toolkit', 215 u'Wes McKinney\n\& PyData Development Team', 'manual'), 216 ] 217 218 # The name of an image file (relative to this directory) to place at the top of 219 # the title page. 220 # latex_logo = None 221 222 # For "manual" documents, if this is true, then toplevel headings are parts, 223 # not chapters. 224 # latex_use_parts = False 225 226 # Additional stuff for the LaTeX preamble. 227 # latex_preamble = '' 228 229 # Documents to append as an appendix to all manuals. 230 # latex_appendices = [] 231 232 # If false, no module index is generated. 233 # latex_use_modindex = True 234 235 236 # Example configuration for intersphinx: refer to the Python standard library. 237 intersphinx_mapping = { 238 'statsmodels': ('http://statsmodels.sourceforge.net/devel/', None), 239 'python': ('http://docs.python.org/', None) 240 } 241 import glob 242 autosummary_generate = glob.glob("*.rst") 243 244 # extlinks alias 245 extlinks = {'issue': ('https://github.com/pydata/pandas/issues/%s', 246 'GH'), 247 'wiki': ('https://github.com/pydata/pandas/wiki/%s', 248 'wiki ')} 249 [end of doc/source/conf.py] [start of pandas/io/gbq.py] 1 """ 2 Pandas module to interface with Google BigQuery. 3 """ 4 import os 5 import sys 6 import tempfile 7 import csv 8 import logging 9 from datetime import datetime 10 11 import pandas as pd 12 import numpy as np 13 14 from pandas import DataFrame, concat 15 from pandas.core.common import PandasError 16 17 try: 18 import bq 19 import bigquery_client 20 import gflags as flags 21 _BQ_INSTALLED = True 22 except ImportError: 23 _BQ_INSTALLED = False 24 25 26 # Setup the logger 27 logger = logging.getLogger('pandas.io.gbq') 28 29 # These are some custom exceptions that the 30 # to_gbq() method can throw 31 32 class SchemaMissing(PandasError,IOError): 33 """ 34 Raised when attempting to write a DataFrame to 35 a new table in Google BigQuery without specifying 36 a schema describing the DataFrame. 37 """ 38 pass 39 40 class InvalidSchema(PandasError,IOError): 41 """ 42 Raised when attempting to write a DataFrame to 43 Google BigQuery with an invalid table schema. 
44 """ 45 pass 46 47 class TableExistsFail(PandasError,IOError): 48 """ 49 Raised when attempting to write a DataFrame to 50 an existing Google BigQuery table without specifying 51 that a replace/update action be taken. 52 """ 53 pass 54 55 class InvalidColumnOrder(PandasError,IOError): 56 """ 57 Raised when the provided column order for output 58 results DataFrame does not match the schema 59 returned by BigQuery. 60 """ 61 pass 62 63 64 def _authenticate(): 65 """ 66 For testing, we abstract the authentication to BigQuery API. 67 Presently this is implemented using the bq.py Client.Get() 68 method. Any exceptions raised are considered fatal, so we 69 do not process them. 70 71 Returns 72 ------- 73 BigqueryClient : Configured connection to Google BigQuery 74 """ 75 return bq.Client.Get() 76 77 def _parse_entry(field_value, field_type): 78 """ 79 Given a value and the corresponding BigQuery data type, 80 perform any operations needed and return in a format 81 appropriate for a numpy record dictionary 82 83 Parameters 84 ---------- 85 field_value : Source object to be transformed 86 field_type : String representation of Google BigQuery 87 data type (per schema) 88 89 Returns 90 ------- 91 field_value : object or primitive of type corresponding 92 to field_type 93 """ 94 95 # Avoid any casting problems 96 if field_value is None or field_value == 'null': 97 return None 98 if field_type == 'INTEGER' or field_type == 'FLOAT': 99 field_value = float(field_value) 100 elif field_type == 'TIMESTAMP': 101 timestamp = datetime.utcfromtimestamp(float(field_value)) 102 field_value = np.datetime64(timestamp) 103 elif field_type == 'BOOLEAN': 104 field_value = field_value == 'true' 105 else: 106 field_value = str(field_value) 107 return field_value 108 109 110 def _parse_page(raw_page, col_names, col_types, col_dtypes): 111 """ 112 Given a list of rows produced by the client.apiclient.tabledata().list(), 113 build a numpy array with proper dtypes and column names as specified 114 by the arguments. 115 116 Parameters 117 ---------- 118 raw_page : Resulting list of rows from a page retrieved via 119 bigquery API 120 client.apiclient.tabledata().list().execute()['rows'] 121 col_names: An ordered list of names for the columns 122 col_types: String representation of the BigQuery DataType for that 123 column 124 col_dtypes: Target numpy.dtype for the column 125 126 Returns 127 ------- 128 page_array : numpy record array corresponding 129 to the page data 130 """ 131 132 # Should be at most 100,000 per the API, but this could 133 # be increased in the future. Should only be less than 134 # this for the last page to reduce API calls 135 page_row_count = len(raw_page) 136 137 # Place to hold the results for a page of data 138 page_array = np.zeros( 139 (page_row_count,), 140 dtype=zip(col_names,col_dtypes) 141 ) 142 for row_num, raw_row in enumerate(raw_page): 143 entries = raw_row.get('f', []) 144 # Iterate over each entry - setting proper field types 145 for col_num, field_type in enumerate(col_types): 146 # Process the field's types using schema 147 field_value = _parse_entry(entries[col_num].get('v', ''), 148 field_type) 149 # Fill the value into the final array 150 page_array[row_num][col_num] = field_value 151 152 return page_array 153 154 def _parse_data(client, job, index_col=None, col_order=None): 155 """ 156 Iterate through the query results and piece together the 157 final DataFrame. Builds a DataFrame for each page of 158 results, then concatenates them together when finished. 
159 To save memory, we use numpy record arrays to build these 160 DataFrames. 161 162 Parameters 163 ---------- 164 client: An instance of bq.Client 165 job: An array containing the job info for a completed query 166 index_col: str (optional) 167 Name of result column to use for index in results DataFrame 168 col_order: list() (optional) 169 List of BigQuery column names in the desired order for results 170 DataFrame 171 172 Returns 173 ------- 174 df: pandas DataFrame 175 DataFrame representing results of query 176 177 Raises: 178 ------ 179 InvalidColumnOrder: 180 Raised if 'col_order' parameter doesn't match returned DataFrame 181 BigqueryError: 182 Raised by bigquery_client if a Google API error is encountered 183 184 185 Notes: 186 ----- 187 This script relies on Google being consistent with their 188 pagination API. We are using the most flexible iteration method 189 that we could find in the bq.py/bigquery_client.py API's, but 190 these have undergone large amounts of change recently. 191 192 We have encountered bugs with this functionality, see: 193 http://stackoverflow.com/questions/19145587/bq-py-not-paging-results 194 """ 195 196 # dtype Map - 197 # see: http://pandas.pydata.org/pandas-docs/dev/missing_data.html#missing-data-casting-rules-and-indexing 198 dtype_map = {'INTEGER': np.dtype(float), 199 'FLOAT': np.dtype(float), 200 'TIMESTAMP': 'M8[ns]'} # This seems to be buggy without nanosecond indicator 201 202 # We first need the schema to get information about the columns of 203 # our dataframe. 204 205 table_dict = job['configuration']['query']['destinationTable'] 206 fields = client.GetTableSchema(table_dict)['fields'] 207 208 # Get the schema into a format useable to create our 209 # dataframe 210 col_dtypes = [] 211 col_types = [] 212 col_names = [] 213 214 # TODO: Do this in one clean step 215 for field in fields: 216 col_types.append(field['type']) 217 # Note the encoding... numpy doesn't like titles that are UTF8, which is the return 218 # type from the API 219 col_names.append(field['name'].encode('ascii', 'ignore')) 220 # Note, it would be nice to use 'str' types, but BigQuery doesn't have a fixed length 221 # in mind - just maxes out at 64k 222 col_dtypes.append(dtype_map.get(field['type'],object)) 223 224 225 # How many columns are there 226 num_columns = len(col_names) 227 228 # Iterate over the result rows. 229 # Since Google's API now requires pagination of results, 230 # we do that here. The following is repurposed from 231 # bigquery_client.py :: Client.ReadTableRows() 232 233 # Initially, no page token is set 234 page_token = None 235 236 # Most of Google's client API's allow one to set total_rows in case 237 # the user only wants the first 'n' results from a query. Typically 238 # they set this to sys.maxint by default, but this caused problems 239 # during testing - specifically on OS X. It appears that at some 240 # point in bigquery_client.py, there is an attempt to cast this value 241 # to an unsigned integer. Depending on the python install, 242 # sys.maxint may exceed the limitations of unsigned integers. 243 # 244 # See: 245 # https://code.google.com/p/google-bigquery-tools/issues/detail?id=14 246 247 # This is hardcoded value for 32bit sys.maxint per 248 # the above note. 
Theoretically, we could simply use 249 # 100,000 (or whatever the current max page size is), 250 # but this is more flexible in the event of an API change 251 total_rows = 2147483647 252 253 # Keep track of rows read 254 row_count = 0 255 256 # Keep our page DataFrames until the end when we 257 # concatentate them 258 dataframe_list = list() 259 260 # Iterate over all rows 261 while row_count < total_rows: 262 data = client.apiclient.tabledata().list(maxResults=total_rows - row_count, 263 pageToken=page_token, 264 **table_dict).execute() 265 266 # If there are more results than will fit on a page, 267 # you will recieve a token for the next page. 268 page_token = data.get('pageToken', None) 269 270 # How many rows are there across all pages? 271 total_rows = min(total_rows, int(data['totalRows'])) # Changed to use get(data[rows],0) 272 raw_page = data.get('rows', []) 273 page_array = _parse_page(raw_page, col_names, col_types, col_dtypes) 274 275 row_count += len(page_array) 276 if total_rows > 0: 277 completed = (100 * row_count) / total_rows 278 logger.info('Remaining Rows: ' + str(total_rows - row_count) + '(' + str(completed) + '% Complete)') 279 else: 280 logger.info('No Rows') 281 282 dataframe_list.append(DataFrame(page_array)) 283 284 # Handle any exceptions that might have occured 285 if not page_token and row_count != total_rows: 286 raise bigquery_client.BigqueryInterfaceError( 287 'PageToken missing for %r' % ( 288 bigquery_client.ApiClientHelper.TableReference.Create(**table_dict),)) 289 if not raw_page and row_count != total_rows: 290 raise bigquery_client.BigqueryInterfaceError( 291 'Not enough rows returned by server for %r' % ( 292 bigquery_client.ApiClientHelper.TableReference.Create(**table_dict),)) 293 294 # Build final dataframe 295 final_df = concat(dataframe_list, ignore_index=True) 296 297 # Reindex the DataFrame on the provided column 298 if index_col is not None: 299 if index_col in col_names: 300 final_df.set_index(index_col, inplace=True) 301 col_names.remove(index_col) 302 else: 303 raise InvalidColumnOrder('Index column "{0}" does not exist in DataFrame.'.format(index_col)) 304 305 # Change the order of columns in the DataFrame based on provided list 306 if col_order is not None: 307 if sorted(col_order) == sorted(col_names): 308 final_df = final_df[col_order] 309 else: 310 raise InvalidColumnOrder('Column order does not match this DataFrame.') 311 312 # Downcast floats to integers and objects to booleans 313 # if there are no NaN's. This is presently due to a 314 # limitation of numpy in handling missing data. 315 final_df._data = final_df._data.downcast(dtypes='infer') 316 return final_df 317 318 def to_gbq(dataframe, destination_table, schema=None, col_order=None, if_exists='fail', **kwargs): 319 """ 320 Write a DataFrame to a Google BigQuery table. If the table exists, 321 the DataFrame will be appended. If not, a new table will be created, 322 in which case the schema will have to be specified. By default, 323 rows will be written in the order they appear in the DataFrame, though 324 the user may specify an alternative order. 325 326 Parameters 327 --------------- 328 dataframe: DataFrame 329 DataFrame to be written 330 destination_table: string 331 name of table to be written, in the form 'dataset.tablename' 332 schema : sequence (optional) 333 list of column types in order for data to be inserted, e.g. ['INTEGER', 'TIMESTAMP', 'BOOLEAN'] 334 col_order: sequence (optional) 335 order which columns are to be inserted, e.g. 
['primary_key', 'birthday', 'username'] 336 if_exists: {'fail', 'replace', 'append'} (optional) 337 fail: If table exists, do nothing. 338 replace: If table exists, drop it, recreate it, and insert data. 339 append: If table exists, insert data. Create if does not exist. 340 kwargs are passed to the Client constructor 341 342 Raises: 343 ------ 344 SchemaMissing: 345 Raised if the 'if_exists' parameter is set to 'replace', but no schema is specified 346 TableExists: 347 Raised if the specified 'destination_table' exists but the 'if_exists' parameter is set to 'fail' (the default) 348 InvalidSchema: 349 Raised if the 'schema' parameter does not match the provided DataFrame 350 """ 351 352 if not _BQ_INSTALLED: 353 if sys.version_info >= (3, 0): 354 raise NotImplementedError('gbq module does not support Python 3 yet') 355 else: 356 raise ImportError('Could not import Google BigQuery Client.') 357 358 ALLOWED_TYPES = ['STRING', 'INTEGER', 'FLOAT', 'BOOLEAN', 'TIMESTAMP', 'RECORD'] 359 360 if if_exists == 'replace' and schema is None: 361 raise SchemaMissing('Cannot replace a table without specifying the data schema') 362 else: 363 client = _authenticate() 364 table_reference = client.GetTableReference(destination_table) 365 if client.TableExists(table_reference): 366 if if_exists == 'fail': 367 raise TableExistsFail('Cannot overwrite existing tables if \'if_exists="fail"\'') 368 else: 369 # Build up a string representation of the 370 # table's schema. Since the table already 371 # exists, we ask ask the API for it, which 372 # is returned in a list of dictionaries 373 # describing column data. Iterate over these 374 # and build up a string of form: 375 # "col_name1 : col_type1, col_name2 : col_type2..." 376 schema_full = client.GetTableSchema(dict(table_reference))['fields'] 377 schema = '' 378 for count, row in enumerate(schema_full): 379 if count > 0: 380 schema += ', ' 381 schema += row['name'] + ':' + row['type'] 382 else: 383 logger.info('Creating New Table') 384 if schema is None: 385 raise SchemaMissing('Cannot create a new table without specifying the data schema') 386 else: 387 columns = dataframe.columns 388 if len(schema) != len(columns): 389 raise InvalidSchema('Incorrect number of columns in schema') 390 else: 391 schema_string = '' 392 for count, name in enumerate(columns): 393 if count > 0: 394 schema_string += ', ' 395 column_type = schema[count].upper() 396 if column_type in ALLOWED_TYPES: 397 schema_string += name + ':' + schema[count].lower() 398 else: 399 raise InvalidSchema('Invalid Type: ' + column_type + ". Must be one of: " + str(ALLOWED_TYPES)) 400 schema = schema_string 401 402 opts = kwargs 403 opts['sync'] = True 404 opts['skip_leading_rows'] = 1 405 opts['encoding'] = 'UTF-8' 406 opts['max_bad_records'] = 0 407 408 # See: https://developers.google.com/bigquery/docs/reference/v2/jobs 409 if if_exists == 'replace': 410 opts['write_disposition'] = 'WRITE_TRUNCATE' 411 elif if_exists == 'append': 412 opts['write_disposition'] = 'WRITE_APPEND' 413 414 with tempfile.NamedTemporaryFile() as csv_file: 415 dataframe.to_csv(csv_file.name, index=False, encoding='utf-8') 416 job = client.Load(table_reference, csv_file.name, schema=schema, **opts) 417 418 def read_gbq(query, project_id = None, destination_table = None, index_col=None, col_order=None, **kwargs): 419 """ 420 The main method a user calls to load data from Google BigQuery into a pandas DataFrame. 421 This is a simple wrapper for Google's bq.py and bigquery_client.py, which we use 422 to get the source data. 
Because of this, this script respects the user's bq settings 423 file, '~/.bigqueryrc', if it exists. Such a file can be generated using 'bq init'. Further, 424 additional parameters for the query can be specified as either **kwds in the command, 425 or using FLAGS provided in the 'gflags' module. Particular options can be found in 426 bigquery_client.py. 427 428 Parameters 429 ---------- 430 query: str 431 SQL-Like Query to return data values 432 project_id: str (optional) 433 Google BigQuery Account project ID. Optional, since it may be 434 located in ~/.bigqueryrc 435 index_col: str (optional) 436 Name of result column to use for index in results DataFrame 437 col_order: list(str) (optional) 438 List of BigQuery column names in the desired order for results 439 DataFrame 440 destination_table: string (optional) 441 If provided, send the results to the given table. 442 **kwargs: to be passed to bq.Client.Create(). Particularly: 'trace', 'sync', 443 'api', 'api_version' 444 445 Returns 446 ------- 447 df: pandas DataFrame 448 DataFrame representing results of query 449 450 """ 451 if not _BQ_INSTALLED: 452 if sys.version_info >= (3, 0): 453 raise NotImplementedError('gbq module does not support Python 3 yet') 454 else: 455 raise ImportError('Could not import Google BigQuery Client.') 456 457 query_args = kwargs 458 query_args['project_id'] = project_id 459 query_args['query'] = query 460 query_args['destination_table'] = destination_table 461 query_args['sync'] = True 462 463 client = _authenticate() 464 465 job = client.Query(**query_args) 466 467 return _parse_data(client, job, index_col=index_col, col_order=col_order) 468 [end of pandas/io/gbq.py] [start of pandas/util/terminal.py] 1 """ 2 get_terminal_size() -- return width and height of terminal as a tuple 3 4 code from: 5 http://stackoverflow.com/questions/566746/how-to-get-console- window-width-in- 6 python 7 8 written by 9 Harco Kuppens (http://stackoverflow.com/users/825214/harco-kuppens) 10 11 It is mentioned in the stackoverflow response that this code works 12 on linux, os x, windows and cygwin (windows). 13 """ 14 from __future__ import print_function 15 16 import os 17 18 __all__ = ['get_terminal_size'] 19 20 21 def get_terminal_size(): 22 """ 23 Detect terminal size and return tuple = (width, height). 24 25 Only to be used when running in a terminal. Note that the IPython notebook, 26 IPython zmq frontends, or IDLE do not run in a terminal, 27 """ 28 import platform 29 current_os = platform.system() 30 tuple_xy = None 31 if current_os == 'Windows': 32 tuple_xy = _get_terminal_size_windows() 33 if tuple_xy is None: 34 tuple_xy = _get_terminal_size_tput() 35 # needed for window's python in cygwin's xterm! 
36 if current_os == 'Linux' or \ 37 current_os == 'Darwin' or \ 38 current_os.startswith('CYGWIN'): 39 tuple_xy = _get_terminal_size_linux() 40 if tuple_xy is None: 41 tuple_xy = (80, 25) # default value 42 return tuple_xy 43 44 45 def _get_terminal_size_windows(): 46 res = None 47 try: 48 from ctypes import windll, create_string_buffer 49 50 # stdin handle is -10 51 # stdout handle is -11 52 # stderr handle is -12 53 54 h = windll.kernel32.GetStdHandle(-12) 55 csbi = create_string_buffer(22) 56 res = windll.kernel32.GetConsoleScreenBufferInfo(h, csbi) 57 except: 58 return None 59 if res: 60 import struct 61 (bufx, bufy, curx, cury, wattr, left, top, right, bottom, maxx, 62 maxy) = struct.unpack("hhhhHhhhhhh", csbi.raw) 63 sizex = right - left + 1 64 sizey = bottom - top + 1 65 return sizex, sizey 66 else: 67 return None 68 69 70 def _get_terminal_size_tput(): 71 # get terminal width 72 # src: http://stackoverflow.com/questions/263890/how-do-i-find-the-width 73 # -height-of-a-terminal-window 74 try: 75 import subprocess 76 proc = subprocess.Popen(["tput", "cols"], 77 stdin=subprocess.PIPE, 78 stdout=subprocess.PIPE) 79 output = proc.communicate(input=None) 80 cols = int(output[0]) 81 proc = subprocess.Popen(["tput", "lines"], 82 stdin=subprocess.PIPE, 83 stdout=subprocess.PIPE) 84 output = proc.communicate(input=None) 85 rows = int(output[0]) 86 return (cols, rows) 87 except: 88 return None 89 90 91 def _get_terminal_size_linux(): 92 def ioctl_GWINSZ(fd): 93 try: 94 import fcntl 95 import termios 96 import struct 97 import os 98 cr = struct.unpack( 99 'hh', fcntl.ioctl(fd, termios.TIOCGWINSZ, '1234')) 100 except: 101 return None 102 return cr 103 cr = ioctl_GWINSZ(0) or ioctl_GWINSZ(1) or ioctl_GWINSZ(2) 104 if not cr: 105 try: 106 fd = os.open(os.ctermid(), os.O_RDONLY) 107 cr = ioctl_GWINSZ(fd) 108 os.close(fd) 109 except: 110 pass 111 if not cr or cr == (0, 0): 112 try: 113 from os import environ as env 114 cr = (env['LINES'], env['COLUMNS']) 115 except: 116 return None 117 return int(cr[1]), int(cr[0]) 118 119 if __name__ == "__main__": 120 sizex, sizey = get_terminal_size() 121 print('width = %s height = %s' % (sizex, sizey)) 122 [end of pandas/util/terminal.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
bdb43eba44fca27afa63f1e70c4b3e48d267c749
Date formatting option in to_csv?
http://stackoverflow.com/questions/13999850/how-to-specify-date-format-when-using-pandas-to-csv
http://stackoverflow.com/questions/15651527/how-to-stop-python-pandas-from-adding-000000-to-every-date
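For context, a minimal sketch of the behaviour those two questions describe (column and file names are illustrative): datetime values written by `to_csv` come out in the default timestamp rendering, and at the time there was no keyword to control it.

```python
import pandas as pd

df = pd.DataFrame({"when": pd.to_datetime(["2013-01-01", "2013-01-02"]),
                   "value": [1, 2]})

# With no formatting hook, the 'when' column is written in the default
# timestamp rendering -- typically '2013-01-01 00:00:00' rather than the
# bare date -- and there is no keyword argument to change that.
df.to_csv("default_render.csv", index=False)
```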
can easily support in new `to_csv` via say `date_format` (the DatetimeBlock can handle it analogously to how float_format is handled by the FloatBlock), pushing to 0.12

Also should add a couple of methods (in format.py?) to handle the various cases for passed formatters, e.g. `%.2f` or `lambda x: "mycool float: %.2f" % x`

This needs to be uniform across all IO APIs, and flexible enough. For example, float_format doesn't support new-style Python formatting (no thousands separator, for example). Formatters are a better concept, but not as concise. We need to find a better way to do this.

fwiw, modifying the frame and then exporting is actually a nicer separation of concerns. Formatting arguably doesn't belong in to_csv.

Easy enough to have a class that can accept multiple input types (new-style formatting, old-style strings, lambdas), then provide an API to the funcs that need it.

Yeah, I'm for doing the formatting as a separate stage using a delegate class, and then having IOs use that across the lib. Not sure how that would work. 0.13?

Also need a top-level option for `nat_rep='NaT'` (for putting NaT) instead of an empty string.
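The "modify the frame, then export" route mentioned above can be sketched like this (column name, format string, and output path are illustrative); it keeps the formatting concern entirely outside `to_csv`:

```python
import pandas as pd

df = pd.DataFrame({"when": pd.date_range("2013-01-01", periods=3),
                   "value": [1.0, 2.5, 3.25]})

# Pre-render the datetime column as strings; to_csv then just writes text,
# so no date-formatting support is needed in the writer itself.
out = df.copy()
out["when"] = out["when"].map(lambda ts: ts.strftime("%Y-%m-%d"))
out.to_csv("pre_formatted.csv", index=False)
```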
2013-07-22T05:09:16Z
<patch> diff --git a/doc/source/release.rst b/doc/source/release.rst --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -60,6 +60,9 @@ New features - Clipboard functionality now works with PySide (:issue:`4282`) - New ``extract`` string method returns regex matches more conveniently (:issue:`4685`) - Auto-detect field widths in read_fwf when unspecified (:issue:`4488`) + - ``to_csv()`` now outputs datetime objects according to a specified format string + via the ``date_format`` keyword (:issue:`4313`) + Experimental Features ~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt --- a/doc/source/v0.13.0.txt +++ b/doc/source/v0.13.0.txt @@ -87,6 +87,9 @@ API changes and arithmetic flex methods (add, sub, mul, etc.). ``SparsePanel`` does not support ``pow`` or ``mod`` with non-scalars. (:issue:`3765`) + - ``to_csv`` now takes a ``date_format`` keyword argument that specifies how + output datetime objects should be formatted. Datetimes encountered in the + index, columns, and values will all have this formatting applied. (:issue:`4313`) Prior Version Deprecations/Changes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/pandas/core/format.py b/pandas/core/format.py --- a/pandas/core/format.py +++ b/pandas/core/format.py @@ -18,7 +18,7 @@ import itertools import csv -from pandas.tseries.period import PeriodIndex +from pandas.tseries.period import PeriodIndex, DatetimeIndex docstring_to_string = """ Parameters @@ -850,7 +850,7 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None, cols=None, header=True, index=True, index_label=None, mode='w', nanRep=None, encoding=None, quoting=None, line_terminator='\n', chunksize=None, engine=None, - tupleize_cols=False, quotechar='"'): + tupleize_cols=False, quotechar='"', date_format=None): self.engine = engine # remove for 0.13 self.obj = obj @@ -877,6 +877,8 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None, self.line_terminator = line_terminator + self.date_format = date_format + #GH3457 if not self.obj.columns.is_unique and engine == 'python': msg= "columns.is_unique == False not supported with engine='python'" @@ -893,7 +895,8 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None, if cols is not None: if isinstance(cols,Index): - cols = cols.to_native_types(na_rep=na_rep,float_format=float_format) + cols = cols.to_native_types(na_rep=na_rep,float_format=float_format, + date_format=date_format) else: cols=list(cols) self.obj = self.obj.loc[:,cols] @@ -902,7 +905,8 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None, # and make sure sure cols is just a list of labels cols = self.obj.columns if isinstance(cols,Index): - cols = cols.to_native_types(na_rep=na_rep,float_format=float_format) + cols = cols.to_native_types(na_rep=na_rep,float_format=float_format, + date_format=date_format) else: cols=list(cols) @@ -923,6 +927,9 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None, if isinstance(obj.index, PeriodIndex): self.data_index = obj.index.to_timestamp() + if isinstance(self.data_index, DatetimeIndex) and date_format is not None: + self.data_index = Index([x.strftime(date_format) if notnull(x) else '' for x in self.data_index]) + self.nlevels = getattr(self.data_index, 'nlevels', 1) if not index: self.nlevels = 0 @@ -931,15 +938,10 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None, # invoked by df.to_csv(engine=python) def _helper_csv(self, writer, na_rep=None, 
cols=None, header=True, index=True, - index_label=None, float_format=None): + index_label=None, float_format=None, date_format=None): if cols is None: cols = self.columns - series = {} - for k, v in compat.iteritems(self.obj._series): - series[k] = v.values - - has_aliases = isinstance(header, (tuple, list, np.ndarray)) if has_aliases or header: if index: @@ -981,10 +983,34 @@ def _helper_csv(self, writer, na_rep=None, cols=None, encoded_cols = list(cols) writer.writerow(encoded_cols) + if date_format is None: + date_formatter = lambda x: lib.Timestamp(x)._repr_base + else: + def strftime_with_nulls(x): + x = lib.Timestamp(x) + if notnull(x): + return x.strftime(date_format) + + date_formatter = lambda x: strftime_with_nulls(x) + data_index = self.obj.index + if isinstance(self.obj.index, PeriodIndex): data_index = self.obj.index.to_timestamp() + if isinstance(data_index, DatetimeIndex) and date_format is not None: + data_index = Index([date_formatter(x) for x in data_index]) + + values = self.obj.copy() + values.index = data_index + values.columns = values.columns.to_native_types(na_rep=na_rep,float_format=float_format, + date_format=date_format) + values = values[cols] + + series = {} + for k, v in compat.iteritems(values._series): + series[k] = v.values + nlevels = getattr(data_index, 'nlevels', 1) for j, idx in enumerate(data_index): row_fields = [] @@ -1000,8 +1026,8 @@ def _helper_csv(self, writer, na_rep=None, cols=None, if float_format is not None and com.is_float(val): val = float_format % val - elif isinstance(val, np.datetime64): - val = lib.Timestamp(val)._repr_base + elif isinstance(val, (np.datetime64, lib.Timestamp)): + val = date_formatter(val) row_fields.append(val) @@ -1031,7 +1057,7 @@ def save(self): self._helper_csv(self.writer, na_rep=self.na_rep, float_format=self.float_format, cols=self.cols, header=self.header, index=self.index, - index_label=self.index_label) + index_label=self.index_label, date_format=self.date_format) else: self._save() @@ -1150,13 +1176,16 @@ def _save_chunk(self, start_i, end_i): slicer = slice(start_i,end_i) for i in range(len(self.blocks)): b = self.blocks[i] - d = b.to_native_types(slicer=slicer, na_rep=self.na_rep, float_format=self.float_format) + d = b.to_native_types(slicer=slicer, na_rep=self.na_rep, + float_format=self.float_format, date_format=self.date_format) + for i, item in enumerate(b.items): # self.data is a preallocated list self.data[self.column_map[b][i]] = d[i] - ix = data_index.to_native_types(slicer=slicer, na_rep=self.na_rep, float_format=self.float_format) + ix = data_index.to_native_types(slicer=slicer, na_rep=self.na_rep, + float_format=self.float_format, date_format=self.date_format) lib.write_csv_rows(self.data, ix, self.nlevels, self.cols, self.writer) diff --git a/pandas/core/frame.py b/pandas/core/frame.py --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1030,7 +1030,7 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None, cols=None, header=True, index=True, index_label=None, mode='w', nanRep=None, encoding=None, quoting=None, line_terminator='\n', chunksize=None, - tupleize_cols=False, **kwds): + tupleize_cols=False, date_format=None, **kwds): r"""Write DataFrame to a comma-separated values (csv) file Parameters @@ -1073,6 +1073,8 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None, tupleize_cols : boolean, default False write multi_index columns as a list of tuples (if True) or new (expanded format) if False) + date_format : string, default None + Format string for 
datetime objects. """ if nanRep is not None: # pragma: no cover warnings.warn("nanRep is deprecated, use na_rep", @@ -1088,7 +1090,8 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None, index_label=index_label, mode=mode, chunksize=chunksize, engine=kwds.get( "engine"), - tupleize_cols=tupleize_cols) + tupleize_cols=tupleize_cols, + date_format=date_format) formatter.save() def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='', diff --git a/pandas/core/internals.py b/pandas/core/internals.py --- a/pandas/core/internals.py +++ b/pandas/core/internals.py @@ -22,7 +22,7 @@ from pandas.tslib import Timestamp from pandas import compat -from pandas.compat import range, lrange, lmap, callable, map, zip +from pandas.compat import range, lrange, lmap, callable, map, zip, u from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type class Block(PandasObject): @@ -1396,7 +1396,7 @@ def fillna(self, value, inplace=False, downcast=None): return [self if inplace else make_block(values, self.items, self.ref_items, fastpath=True)] - def to_native_types(self, slicer=None, na_rep=None, **kwargs): + def to_native_types(self, slicer=None, na_rep=None, date_format=None, **kwargs): """ convert to our native types format, slicing if desired """ values = self.values @@ -1409,8 +1409,14 @@ def to_native_types(self, slicer=None, na_rep=None, **kwargs): na_rep = 'NaT' rvalues[mask] = na_rep imask = (-mask).ravel() - rvalues.flat[imask] = np.array( - [Timestamp(val)._repr_base for val in values.ravel()[imask]], dtype=object) + + if date_format is None: + date_formatter = lambda x: Timestamp(x)._repr_base + else: + date_formatter = lambda x: Timestamp(x).strftime(date_format) + + rvalues.flat[imask] = np.array([date_formatter(val) for val in + values.ravel()[imask]], dtype=object) return rvalues.tolist() diff --git a/pandas/core/series.py b/pandas/core/series.py --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -2129,7 +2129,8 @@ def from_csv(cls, path, sep=',', parse_dates=True, header=None, def to_csv(self, path, index=True, sep=",", na_rep='', float_format=None, header=False, - index_label=None, mode='w', nanRep=None, encoding=None): + index_label=None, mode='w', nanRep=None, encoding=None, + date_format=None): """ Write Series to a comma-separated values (csv) file @@ -2154,13 +2155,15 @@ def to_csv(self, path, index=True, sep=",", na_rep='', encoding : string, optional a string representing the encoding to use if the contents are non-ascii, for python versions prior to 3 + date_format: string, default None + Format string for datetime objects. 
""" from pandas.core.frame import DataFrame df = DataFrame(self) df.to_csv(path, index=index, sep=sep, na_rep=na_rep, float_format=float_format, header=header, index_label=index_label, mode=mode, nanRep=nanRep, - encoding=encoding) + encoding=encoding, date_format=date_format) def dropna(self): """ diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -7,7 +7,8 @@ import numpy as np from pandas.core.common import (isnull, _NS_DTYPE, _INT64_DTYPE, - is_list_like,_values_from_object, _maybe_box) + is_list_like,_values_from_object, _maybe_box, + notnull) from pandas.core.index import Index, Int64Index, _Identity import pandas.compat as compat from pandas.compat import u @@ -599,23 +600,29 @@ def __contains__(self, key): def _format_with_header(self, header, **kwargs): return header + self._format_native_types(**kwargs) - def _format_native_types(self, na_rep=u('NaT'), **kwargs): + def _format_native_types(self, na_rep=u('NaT'), date_format=None, **kwargs): data = list(self) # tz formatter or time formatter zero_time = time(0, 0) - for d in data: - if d.time() != zero_time or d.tzinfo is not None: - return [u('%s') % x for x in data] + if date_format is None: + for d in data: + if d.time() != zero_time or d.tzinfo is not None: + return [u('%s') % x for x in data] values = np.array(data, dtype=object) mask = isnull(self.values) values[mask] = na_rep imask = -mask - values[imask] = np.array([u('%d-%.2d-%.2d') % (dt.year, dt.month, - dt.day) - for dt in values[imask]]) + + if date_format is None: + date_formatter = lambda x: u('%d-%.2d-%.2d' % (x.year, x.month, x.day)) + else: + date_formatter = lambda x: u(x.strftime(date_format)) + + values[imask] = np.array([date_formatter(dt) for dt in values[imask]]) + return values.tolist() def isin(self, values): diff --git a/vb_suite/io_bench.py b/vb_suite/io_bench.py --- a/vb_suite/io_bench.py +++ b/vb_suite/io_bench.py @@ -88,3 +88,13 @@ def create_cols(name): " parse_dates=['foo'])") read_parse_dates_iso8601 = Benchmark(stmt, setup, start_date=datetime(2012, 3, 1)) + +setup = common_setup + """ +rng = date_range('1/1/2000', periods=1000) +data = DataFrame(rng, index=rng) +""" + +stmt = ("data.to_csv('__test__.csv', date_format='%Y%m%d')") + +frame_to_csv_date_formatting = Benchmark(stmt, setup, + start_date=datetime(2013, 9, 1)) </patch>
[]
[]
ipython__ipython-9833
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> AttributeError: 'StreamLogger' object has no attribute 'isatty' OS: macOS 10.11.6 Python version: Python 2.7.10 ipython version: IPython 5.0.0 Scrapy project with settings: `LOG_STDOUT = True` which redirects stdout to log ``` Traceback (most recent call last): File "/Library/Python/2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback yield next(it) File "/Library/Python/2.7/site-packages/scrapy_splash/middleware.py", line 156, in process_spider_output for el in result: File "/Library/Python/2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output for x in result: File "/Library/Python/2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr> return (_set_referer(r) for r in result or ()) File "/Library/Python/2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr> return (r for r in result or () if _filter(r)) File "/Library/Python/2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr> return (r for r in result or () if _filter(r)) File "/Users/mini/PycharmProjects/spider_demo/spider_demo/spiders/baidu_news.py", line 42, in parse inspect_response(response, self) File "/Library/Python/2.7/site-packages/scrapy/shell.py", line 157, in inspect_response Shell(spider.crawler).start(response=response) File "/Library/Python/2.7/site-packages/scrapy/shell.py", line 80, in start banner=self.vars.pop('banner', '')) File "/Library/Python/2.7/site-packages/scrapy/utils/console.py", line 75, in start_python_console shell = get_shell_embed_func(shells) File "/Library/Python/2.7/site-packages/scrapy/utils/console.py", line 63, in get_shell_embed_func return known_shells[shell]() File "/Library/Python/2.7/site-packages/scrapy/utils/console.py", line 7, in _embed_ipython_shell from IPython.terminal.embed import InteractiveShellEmbed File "/Library/Python/2.7/site-packages/IPython/__init__.py", line 49, in <module> from .terminal.embed import embed File "/Library/Python/2.7/site-packages/IPython/terminal/embed.py", line 17, in <module> from IPython.terminal.interactiveshell import TerminalInteractiveShell File "/Library/Python/2.7/site-packages/IPython/terminal/interactiveshell.py", line 77, in <module> _is_tty = (sys.stdin.isatty()) and (sys.stdout.isatty()) and (sys.stderr.isatty()) AttributeError: 'StreamLogger' object has no attribute 'isatty' ``` </issue> <code> [start of README.rst] 1 .. image:: https://codecov.io/github/ipython/ipython/coverage.svg?branch=master 2 :target: https://codecov.io/github/ipython/ipython?branch=master 3 4 .. image:: https://img.shields.io/pypi/dm/IPython.svg 5 :target: https://pypi.python.org/pypi/ipython 6 7 .. image:: https://img.shields.io/pypi/v/IPython.svg 8 :target: https://pypi.python.org/pypi/ipython 9 10 .. image:: https://img.shields.io/travis/ipython/ipython.svg 11 :target: https://travis-ci.org/ipython/ipython 12 13 14 =========================================== 15 IPython: Productive Interactive Computing 16 =========================================== 17 18 Overview 19 ======== 20 21 Welcome to IPython. Our full documentation is available on `ipython.readthedocs.io 22 <https://ipython.readthedocs.io/en/stable/>`_ and contain information on how to install, use 23 contribute to the project. 24 25 Officially, IPython requires Python version 2.7, or 3.3 and above. 26 IPython 1.x is the last IPython version to support Python 2.6 and 3.2. 
27 28 The Notebook, Qt console and a number of other pieces are now parts of *Jupyter*. 29 See the `Jupyter installation docs <http://jupyter.readthedocs.io/en/latest/install.html>`__ 30 if you want to use these. 31 32 33 34 35 Developement and Instant runnimg 36 ================================ 37 38 You can find the latest version of the development documentation on `readthedocs 39 <http://ipython.readthedocs.io/en/latest/>`_. 40 41 You can run IPython from this directory without even installing it system-wide 42 by typing at the terminal:: 43 44 $ python -m IPython 45 46 Or see the `developement installation docs 47 <http://ipython.readthedocs.io/en/latest/install/install.html#installing-the-development-version>`_ 48 for the latest revision on read the docs. 49 50 Documentation and installation instructions for older version of IPython can be 51 found on the `IPython website <http://ipython.org/documentation.html>`_ 52 [end of README.rst] [start of IPython/core/magics/code.py] 1 """Implementation of code management magic functions. 2 """ 3 from __future__ import print_function 4 from __future__ import absolute_import 5 #----------------------------------------------------------------------------- 6 # Copyright (c) 2012 The IPython Development Team. 7 # 8 # Distributed under the terms of the Modified BSD License. 9 # 10 # The full license is in the file COPYING.txt, distributed with this software. 11 #----------------------------------------------------------------------------- 12 13 #----------------------------------------------------------------------------- 14 # Imports 15 #----------------------------------------------------------------------------- 16 17 # Stdlib 18 import inspect 19 import io 20 import os 21 import re 22 import sys 23 import ast 24 from itertools import chain 25 26 # Our own packages 27 from IPython.core.error import TryNext, StdinNotImplementedError, UsageError 28 from IPython.core.macro import Macro 29 from IPython.core.magic import Magics, magics_class, line_magic 30 from IPython.core.oinspect import find_file, find_source_lines 31 from IPython.testing.skipdoctest import skip_doctest 32 from IPython.utils import py3compat 33 from IPython.utils.py3compat import string_types 34 from IPython.utils.contexts import preserve_keys 35 from IPython.utils.path import get_py_filename 36 from warnings import warn 37 from logging import error 38 from IPython.utils.text import get_text_list 39 40 #----------------------------------------------------------------------------- 41 # Magic implementation classes 42 #----------------------------------------------------------------------------- 43 44 # Used for exception handling in magic_edit 45 class MacroToEdit(ValueError): pass 46 47 ipython_input_pat = re.compile(r"<ipython\-input\-(\d+)-[a-z\d]+>$") 48 49 # To match, e.g. 8-10 1:5 :10 3- 50 range_re = re.compile(r""" 51 (?P<start>\d+)? 52 ((?P<sep>[\-:]) 53 (?P<end>\d+)?)? 54 $""", re.VERBOSE) 55 56 57 def extract_code_ranges(ranges_str): 58 """Turn a string of range for %%load into 2-tuples of (start, stop) 59 ready to use as a slice of the content splitted by lines. 
60 61 Examples 62 -------- 63 list(extract_input_ranges("5-10 2")) 64 [(4, 10), (1, 2)] 65 """ 66 for range_str in ranges_str.split(): 67 rmatch = range_re.match(range_str) 68 if not rmatch: 69 continue 70 sep = rmatch.group("sep") 71 start = rmatch.group("start") 72 end = rmatch.group("end") 73 74 if sep == '-': 75 start = int(start) - 1 if start else None 76 end = int(end) if end else None 77 elif sep == ':': 78 start = int(start) - 1 if start else None 79 end = int(end) - 1 if end else None 80 else: 81 end = int(start) 82 start = int(start) - 1 83 yield (start, end) 84 85 86 @skip_doctest 87 def extract_symbols(code, symbols): 88 """ 89 Return a tuple (blocks, not_found) 90 where ``blocks`` is a list of code fragments 91 for each symbol parsed from code, and ``not_found`` are 92 symbols not found in the code. 93 94 For example:: 95 96 >>> code = '''a = 10 97 98 def b(): return 42 99 100 class A: pass''' 101 102 >>> extract_symbols(code, 'A,b,z') 103 (["class A: pass", "def b(): return 42"], ['z']) 104 """ 105 symbols = symbols.split(',') 106 107 # this will raise SyntaxError if code isn't valid Python 108 py_code = ast.parse(code) 109 110 marks = [(getattr(s, 'name', None), s.lineno) for s in py_code.body] 111 code = code.split('\n') 112 113 symbols_lines = {} 114 115 # we already know the start_lineno of each symbol (marks). 116 # To find each end_lineno, we traverse in reverse order until each 117 # non-blank line 118 end = len(code) 119 for name, start in reversed(marks): 120 while not code[end - 1].strip(): 121 end -= 1 122 if name: 123 symbols_lines[name] = (start - 1, end) 124 end = start - 1 125 126 # Now symbols_lines is a map 127 # {'symbol_name': (start_lineno, end_lineno), ...} 128 129 # fill a list with chunks of codes for each requested symbol 130 blocks = [] 131 not_found = [] 132 for symbol in symbols: 133 if symbol in symbols_lines: 134 start, end = symbols_lines[symbol] 135 blocks.append('\n'.join(code[start:end]) + '\n') 136 else: 137 not_found.append(symbol) 138 139 return blocks, not_found 140 141 def strip_initial_indent(lines): 142 """For %load, strip indent from lines until finding an unindented line. 143 144 https://github.com/ipython/ipython/issues/9775 145 """ 146 indent_re = re.compile(r'\s+') 147 148 it = iter(lines) 149 first_line = next(it) 150 indent_match = indent_re.match(first_line) 151 152 if indent_match: 153 # First line was indented 154 indent = indent_match.group() 155 yield first_line[len(indent):] 156 157 for line in it: 158 if line.startswith(indent): 159 yield line[len(indent):] 160 else: 161 # Less indented than the first line - stop dedenting 162 yield line 163 break 164 else: 165 yield first_line 166 167 # Pass the remaining lines through without dedenting 168 for line in it: 169 yield line 170 171 172 class InteractivelyDefined(Exception): 173 """Exception for interactively defined variable in magic_edit""" 174 def __init__(self, index): 175 self.index = index 176 177 178 @magics_class 179 class CodeMagics(Magics): 180 """Magics related to code management (loading, saving, editing, ...).""" 181 182 def __init__(self, *args, **kwargs): 183 self._knowntemps = set() 184 super(CodeMagics, self).__init__(*args, **kwargs) 185 186 @line_magic 187 def save(self, parameter_s=''): 188 """Save a set of lines or a macro to a given filename. 189 190 Usage:\\ 191 %save [options] filename n1-n2 n3-n4 ... n5 .. n6 ... 192 193 Options: 194 195 -r: use 'raw' input. 
By default, the 'processed' history is used, 196 so that magics are loaded in their transformed version to valid 197 Python. If this option is given, the raw input as typed as the 198 command line is used instead. 199 200 -f: force overwrite. If file exists, %save will prompt for overwrite 201 unless -f is given. 202 203 -a: append to the file instead of overwriting it. 204 205 This function uses the same syntax as %history for input ranges, 206 then saves the lines to the filename you specify. 207 208 It adds a '.py' extension to the file if you don't do so yourself, and 209 it asks for confirmation before overwriting existing files. 210 211 If `-r` option is used, the default extension is `.ipy`. 212 """ 213 214 opts,args = self.parse_options(parameter_s,'fra',mode='list') 215 if not args: 216 raise UsageError('Missing filename.') 217 raw = 'r' in opts 218 force = 'f' in opts 219 append = 'a' in opts 220 mode = 'a' if append else 'w' 221 ext = u'.ipy' if raw else u'.py' 222 fname, codefrom = args[0], " ".join(args[1:]) 223 if not fname.endswith((u'.py',u'.ipy')): 224 fname += ext 225 file_exists = os.path.isfile(fname) 226 if file_exists and not force and not append: 227 try: 228 overwrite = self.shell.ask_yes_no('File `%s` exists. Overwrite (y/[N])? ' % fname, default='n') 229 except StdinNotImplementedError: 230 print("File `%s` exists. Use `%%save -f %s` to force overwrite" % (fname, parameter_s)) 231 return 232 if not overwrite : 233 print('Operation cancelled.') 234 return 235 try: 236 cmds = self.shell.find_user_code(codefrom,raw) 237 except (TypeError, ValueError) as e: 238 print(e.args[0]) 239 return 240 out = py3compat.cast_unicode(cmds) 241 with io.open(fname, mode, encoding="utf-8") as f: 242 if not file_exists or not append: 243 f.write(u"# coding: utf-8\n") 244 f.write(out) 245 # make sure we end on a newline 246 if not out.endswith(u'\n'): 247 f.write(u'\n') 248 print('The following commands were written to file `%s`:' % fname) 249 print(cmds) 250 251 @line_magic 252 def pastebin(self, parameter_s=''): 253 """Upload code to Github's Gist paste bin, returning the URL. 254 255 Usage:\\ 256 %pastebin [-d "Custom description"] 1-7 257 258 The argument can be an input history range, a filename, or the name of a 259 string or macro. 260 261 Options: 262 263 -d: Pass a custom description for the gist. The default will say 264 "Pasted from IPython". 265 """ 266 opts, args = self.parse_options(parameter_s, 'd:') 267 268 try: 269 code = self.shell.find_user_code(args) 270 except (ValueError, TypeError) as e: 271 print(e.args[0]) 272 return 273 274 # Deferred import 275 try: 276 from urllib.request import urlopen # Py 3 277 except ImportError: 278 from urllib2 import urlopen 279 import json 280 post_data = json.dumps({ 281 "description": opts.get('d', "Pasted from IPython"), 282 "public": True, 283 "files": { 284 "file1.py": { 285 "content": code 286 } 287 } 288 }).encode('utf-8') 289 290 response = urlopen("https://api.github.com/gists", post_data) 291 response_data = json.loads(response.read().decode('utf-8')) 292 return response_data['html_url'] 293 294 @line_magic 295 def loadpy(self, arg_s): 296 """Alias of `%load` 297 298 `%loadpy` has gained some flexibility and dropped the requirement of a `.py` 299 extension. So it has been renamed simply into %load. You can look at 300 `%load`'s docstring for more info. 301 """ 302 self.load(arg_s) 303 304 @line_magic 305 def load(self, arg_s): 306 """Load code into the current frontend. 
307 308 Usage:\\ 309 %load [options] source 310 311 where source can be a filename, URL, input history range, macro, or 312 element in the user namespace 313 314 Options: 315 316 -r <lines>: Specify lines or ranges of lines to load from the source. 317 Ranges could be specified as x-y (x..y) or in python-style x:y 318 (x..(y-1)). Both limits x and y can be left blank (meaning the 319 beginning and end of the file, respectively). 320 321 -s <symbols>: Specify function or classes to load from python source. 322 323 -y : Don't ask confirmation for loading source above 200 000 characters. 324 325 -n : Include the user's namespace when searching for source code. 326 327 This magic command can either take a local filename, a URL, an history 328 range (see %history) or a macro as argument, it will prompt for 329 confirmation before loading source with more than 200 000 characters, unless 330 -y flag is passed or if the frontend does not support raw_input:: 331 332 %load myscript.py 333 %load 7-27 334 %load myMacro 335 %load http://www.example.com/myscript.py 336 %load -r 5-10 myscript.py 337 %load -r 10-20,30,40: foo.py 338 %load -s MyClass,wonder_function myscript.py 339 %load -n MyClass 340 %load -n my_module.wonder_function 341 """ 342 opts,args = self.parse_options(arg_s,'yns:r:') 343 344 if not args: 345 raise UsageError('Missing filename, URL, input history range, ' 346 'macro, or element in the user namespace.') 347 348 search_ns = 'n' in opts 349 350 contents = self.shell.find_user_code(args, search_ns=search_ns) 351 352 if 's' in opts: 353 try: 354 blocks, not_found = extract_symbols(contents, opts['s']) 355 except SyntaxError: 356 # non python code 357 error("Unable to parse the input as valid Python code") 358 return 359 360 if len(not_found) == 1: 361 warn('The symbol `%s` was not found' % not_found[0]) 362 elif len(not_found) > 1: 363 warn('The symbols %s were not found' % get_text_list(not_found, 364 wrap_item_with='`') 365 ) 366 367 contents = '\n'.join(blocks) 368 369 if 'r' in opts: 370 ranges = opts['r'].replace(',', ' ') 371 lines = contents.split('\n') 372 slices = extract_code_ranges(ranges) 373 contents = [lines[slice(*slc)] for slc in slices] 374 contents = '\n'.join(strip_initial_indent(chain.from_iterable(contents))) 375 376 l = len(contents) 377 378 # 200 000 is ~ 2500 full 80 caracter lines 379 # so in average, more than 5000 lines 380 if l > 200000 and 'y' not in opts: 381 try: 382 ans = self.shell.ask_yes_no(("The text you're trying to load seems pretty big"\ 383 " (%d characters). Continue (y/[N]) ?" % l), default='n' ) 384 except StdinNotImplementedError: 385 #asume yes if raw input not implemented 386 ans = True 387 388 if ans is False : 389 print('Operation cancelled.') 390 return 391 392 contents = "# %load {}\n".format(arg_s) + contents 393 394 self.shell.set_next_input(contents, replace=True) 395 396 @staticmethod 397 def _find_edit_target(shell, args, opts, last_call): 398 """Utility method used by magic_edit to find what to edit.""" 399 400 def make_filename(arg): 401 "Make a filename from the given args" 402 try: 403 filename = get_py_filename(arg) 404 except IOError: 405 # If it ends with .py but doesn't already exist, assume we want 406 # a new file. 
407 if arg.endswith('.py'): 408 filename = arg 409 else: 410 filename = None 411 return filename 412 413 # Set a few locals from the options for convenience: 414 opts_prev = 'p' in opts 415 opts_raw = 'r' in opts 416 417 # custom exceptions 418 class DataIsObject(Exception): pass 419 420 # Default line number value 421 lineno = opts.get('n',None) 422 423 if opts_prev: 424 args = '_%s' % last_call[0] 425 if args not in shell.user_ns: 426 args = last_call[1] 427 428 # by default this is done with temp files, except when the given 429 # arg is a filename 430 use_temp = True 431 432 data = '' 433 434 # First, see if the arguments should be a filename. 435 filename = make_filename(args) 436 if filename: 437 use_temp = False 438 elif args: 439 # Mode where user specifies ranges of lines, like in %macro. 440 data = shell.extract_input_lines(args, opts_raw) 441 if not data: 442 try: 443 # Load the parameter given as a variable. If not a string, 444 # process it as an object instead (below) 445 446 #print '*** args',args,'type',type(args) # dbg 447 data = eval(args, shell.user_ns) 448 if not isinstance(data, string_types): 449 raise DataIsObject 450 451 except (NameError,SyntaxError): 452 # given argument is not a variable, try as a filename 453 filename = make_filename(args) 454 if filename is None: 455 warn("Argument given (%s) can't be found as a variable " 456 "or as a filename." % args) 457 return (None, None, None) 458 use_temp = False 459 460 except DataIsObject: 461 # macros have a special edit function 462 if isinstance(data, Macro): 463 raise MacroToEdit(data) 464 465 # For objects, try to edit the file where they are defined 466 filename = find_file(data) 467 if filename: 468 if 'fakemodule' in filename.lower() and \ 469 inspect.isclass(data): 470 # class created by %edit? Try to find source 471 # by looking for method definitions instead, the 472 # __module__ in those classes is FakeModule. 473 attrs = [getattr(data, aname) for aname in dir(data)] 474 for attr in attrs: 475 if not inspect.ismethod(attr): 476 continue 477 filename = find_file(attr) 478 if filename and \ 479 'fakemodule' not in filename.lower(): 480 # change the attribute to be the edit 481 # target instead 482 data = attr 483 break 484 485 m = ipython_input_pat.match(os.path.basename(filename)) 486 if m: 487 raise InteractivelyDefined(int(m.groups()[0])) 488 489 datafile = 1 490 if filename is None: 491 filename = make_filename(args) 492 datafile = 1 493 if filename is not None: 494 # only warn about this if we get a real name 495 warn('Could not find file where `%s` is defined.\n' 496 'Opening a file named `%s`' % (args, filename)) 497 # Now, make sure we can actually read the source (if it was 498 # in a temp file it's gone by now). 499 if datafile: 500 if lineno is None: 501 lineno = find_source_lines(data) 502 if lineno is None: 503 filename = make_filename(args) 504 if filename is None: 505 warn('The file where `%s` was defined ' 506 'cannot be read or found.' % data) 507 return (None, None, None) 508 use_temp = False 509 510 if use_temp: 511 filename = shell.mktempfile(data) 512 print('IPython will make a temporary file named:',filename) 513 514 # use last_call to remember the state of the previous call, but don't 515 # let it be clobbered by successive '-p' calls. 
516 try: 517 last_call[0] = shell.displayhook.prompt_count 518 if not opts_prev: 519 last_call[1] = args 520 except: 521 pass 522 523 524 return filename, lineno, use_temp 525 526 def _edit_macro(self,mname,macro): 527 """open an editor with the macro data in a file""" 528 filename = self.shell.mktempfile(macro.value) 529 self.shell.hooks.editor(filename) 530 531 # and make a new macro object, to replace the old one 532 with open(filename) as mfile: 533 mvalue = mfile.read() 534 self.shell.user_ns[mname] = Macro(mvalue) 535 536 @skip_doctest 537 @line_magic 538 def edit(self, parameter_s='',last_call=['','']): 539 """Bring up an editor and execute the resulting code. 540 541 Usage: 542 %edit [options] [args] 543 544 %edit runs IPython's editor hook. The default version of this hook is 545 set to call the editor specified by your $EDITOR environment variable. 546 If this isn't found, it will default to vi under Linux/Unix and to 547 notepad under Windows. See the end of this docstring for how to change 548 the editor hook. 549 550 You can also set the value of this editor via the 551 ``TerminalInteractiveShell.editor`` option in your configuration file. 552 This is useful if you wish to use a different editor from your typical 553 default with IPython (and for Windows users who typically don't set 554 environment variables). 555 556 This command allows you to conveniently edit multi-line code right in 557 your IPython session. 558 559 If called without arguments, %edit opens up an empty editor with a 560 temporary file and will execute the contents of this file when you 561 close it (don't forget to save it!). 562 563 564 Options: 565 566 -n <number>: open the editor at a specified line number. By default, 567 the IPython editor hook uses the unix syntax 'editor +N filename', but 568 you can configure this by providing your own modified hook if your 569 favorite editor supports line-number specifications with a different 570 syntax. 571 572 -p: this will call the editor with the same data as the previous time 573 it was used, regardless of how long ago (in your current session) it 574 was. 575 576 -r: use 'raw' input. This option only applies to input taken from the 577 user's history. By default, the 'processed' history is used, so that 578 magics are loaded in their transformed version to valid Python. If 579 this option is given, the raw input as typed as the command line is 580 used instead. When you exit the editor, it will be executed by 581 IPython's own processor. 582 583 -x: do not execute the edited code immediately upon exit. This is 584 mainly useful if you are editing programs which need to be called with 585 command line arguments, which you can then do using %run. 586 587 588 Arguments: 589 590 If arguments are given, the following possibilities exist: 591 592 - If the argument is a filename, IPython will load that into the 593 editor. It will execute its contents with execfile() when you exit, 594 loading any code in the file into your interactive namespace. 595 596 - The arguments are ranges of input history, e.g. "7 ~1/4-6". 597 The syntax is the same as in the %history magic. 598 599 - If the argument is a string variable, its contents are loaded 600 into the editor. You can thus edit any string which contains 601 python code (including the result of previous edits). 602 603 - If the argument is the name of an object (other than a string), 604 IPython will try to locate the file where it was defined and open the 605 editor at the point where it is defined. 
You can use `%edit function` 606 to load an editor exactly at the point where 'function' is defined, 607 edit it and have the file be executed automatically. 608 609 - If the object is a macro (see %macro for details), this opens up your 610 specified editor with a temporary file containing the macro's data. 611 Upon exit, the macro is reloaded with the contents of the file. 612 613 Note: opening at an exact line is only supported under Unix, and some 614 editors (like kedit and gedit up to Gnome 2.8) do not understand the 615 '+NUMBER' parameter necessary for this feature. Good editors like 616 (X)Emacs, vi, jed, pico and joe all do. 617 618 After executing your code, %edit will return as output the code you 619 typed in the editor (except when it was an existing file). This way 620 you can reload the code in further invocations of %edit as a variable, 621 via _<NUMBER> or Out[<NUMBER>], where <NUMBER> is the prompt number of 622 the output. 623 624 Note that %edit is also available through the alias %ed. 625 626 This is an example of creating a simple function inside the editor and 627 then modifying it. First, start up the editor:: 628 629 In [1]: edit 630 Editing... done. Executing edited code... 631 Out[1]: 'def foo():\\n print "foo() was defined in an editing 632 session"\\n' 633 634 We can then call the function foo():: 635 636 In [2]: foo() 637 foo() was defined in an editing session 638 639 Now we edit foo. IPython automatically loads the editor with the 640 (temporary) file where foo() was previously defined:: 641 642 In [3]: edit foo 643 Editing... done. Executing edited code... 644 645 And if we call foo() again we get the modified version:: 646 647 In [4]: foo() 648 foo() has now been changed! 649 650 Here is an example of how to edit a code snippet successive 651 times. First we call the editor:: 652 653 In [5]: edit 654 Editing... done. Executing edited code... 655 hello 656 Out[5]: "print 'hello'\\n" 657 658 Now we call it again with the previous output (stored in _):: 659 660 In [6]: edit _ 661 Editing... done. Executing edited code... 662 hello world 663 Out[6]: "print 'hello world'\\n" 664 665 Now we call it with the output #8 (stored in _8, also as Out[8]):: 666 667 In [7]: edit _8 668 Editing... done. Executing edited code... 669 hello again 670 Out[7]: "print 'hello again'\\n" 671 672 673 Changing the default editor hook: 674 675 If you wish to write your own editor hook, you can put it in a 676 configuration file which you load at startup time. The default hook 677 is defined in the IPython.core.hooks module, and you can use that as a 678 starting example for further modifications. That file also has 679 general instructions on how to set a new hook for use once you've 680 defined it.""" 681 opts,args = self.parse_options(parameter_s,'prxn:') 682 683 try: 684 filename, lineno, is_temp = self._find_edit_target(self.shell, 685 args, opts, last_call) 686 except MacroToEdit as e: 687 self._edit_macro(args, e.args[0]) 688 return 689 except InteractivelyDefined as e: 690 print("Editing In[%i]" % e.index) 691 args = str(e.index) 692 filename, lineno, is_temp = self._find_edit_target(self.shell, 693 args, opts, last_call) 694 if filename is None: 695 # nothing was found, warnings have already been issued, 696 # just give up. 
697 return 698 699 if is_temp: 700 self._knowntemps.add(filename) 701 elif (filename in self._knowntemps): 702 is_temp = True 703 704 705 # do actual editing here 706 print('Editing...', end=' ') 707 sys.stdout.flush() 708 try: 709 # Quote filenames that may have spaces in them 710 if ' ' in filename: 711 filename = "'%s'" % filename 712 self.shell.hooks.editor(filename,lineno) 713 except TryNext: 714 warn('Could not open editor') 715 return 716 717 # XXX TODO: should this be generalized for all string vars? 718 # For now, this is special-cased to blocks created by cpaste 719 if args.strip() == 'pasted_block': 720 with open(filename, 'r') as f: 721 self.shell.user_ns['pasted_block'] = f.read() 722 723 if 'x' in opts: # -x prevents actual execution 724 print() 725 else: 726 print('done. Executing edited code...') 727 with preserve_keys(self.shell.user_ns, '__file__'): 728 if not is_temp: 729 self.shell.user_ns['__file__'] = filename 730 if 'r' in opts: # Untranslated IPython code 731 with open(filename, 'r') as f: 732 source = f.read() 733 self.shell.run_cell(source, store_history=False) 734 else: 735 self.shell.safe_execfile(filename, self.shell.user_ns, 736 self.shell.user_ns) 737 738 if is_temp: 739 try: 740 return open(filename).read() 741 except IOError as msg: 742 if msg.filename == filename: 743 warn('File not found. Did you forget to save?') 744 return 745 else: 746 self.shell.showtraceback() 747 [end of IPython/core/magics/code.py] [start of IPython/core/release.py] 1 # -*- coding: utf-8 -*- 2 """Release data for the IPython project.""" 3 4 #----------------------------------------------------------------------------- 5 # Copyright (c) 2008, IPython Development Team. 6 # Copyright (c) 2001, Fernando Perez <[email protected]> 7 # Copyright (c) 2001, Janko Hauser <[email protected]> 8 # Copyright (c) 2001, Nathaniel Gray <[email protected]> 9 # 10 # Distributed under the terms of the Modified BSD License. 11 # 12 # The full license is in the file COPYING.txt, distributed with this software. 13 #----------------------------------------------------------------------------- 14 15 # Name of the package for release purposes. This is the name which labels 16 # the tarballs and RPMs made by distutils, so it's best to lowercase it. 17 name = 'ipython' 18 19 # IPython version information. An empty _version_extra corresponds to a full 20 # release. 'dev' as a _version_extra string means this is a development 21 # version 22 _version_major = 5 23 _version_minor = 1 24 _version_patch = 0 25 _version_extra = '.dev' 26 # _version_extra = 'rc1' 27 #_version_extra = '' # Uncomment this for full releases 28 29 # release.codename is deprecated in 2.0, will be removed in 3.0 30 codename = '' 31 32 # Construct full version string from these. 33 _ver = [_version_major, _version_minor, _version_patch] 34 35 __version__ = '.'.join(map(str, _ver)) 36 if _version_extra: 37 __version__ = __version__ + _version_extra 38 39 version = __version__ # backwards compatibility name 40 version_info = (_version_major, _version_minor, _version_patch, _version_extra) 41 42 # Change this when incrementing the kernel protocol version 43 kernel_protocol_version_info = (5, 0) 44 kernel_protocol_version = "%i.%i" % kernel_protocol_version_info 45 46 description = "IPython: Productive Interactive Computing" 47 48 long_description = \ 49 """ 50 IPython provides a rich toolkit to help you make the most out of using Python 51 interactively. 
Its main components are: 52 53 * A powerful interactive Python shell 54 * A `Jupyter <http://jupyter.org/>`_ kernel to work with Python code in Jupyter 55 notebooks and other interactive frontends. 56 57 The enhanced interactive Python shells have the following main features: 58 59 * Comprehensive object introspection. 60 61 * Input history, persistent across sessions. 62 63 * Caching of output results during a session with automatically generated 64 references. 65 66 * Extensible tab completion, with support by default for completion of python 67 variables and keywords, filenames and function keywords. 68 69 * Extensible system of 'magic' commands for controlling the environment and 70 performing many tasks related either to IPython or the operating system. 71 72 * A rich configuration system with easy switching between different setups 73 (simpler than changing $PYTHONSTARTUP environment variables every time). 74 75 * Session logging and reloading. 76 77 * Extensible syntax processing for special purpose situations. 78 79 * Access to the system shell with user-extensible alias system. 80 81 * Easily embeddable in other Python programs and GUIs. 82 83 * Integrated access to the pdb debugger and the Python profiler. 84 85 The latest development version is always available from IPython's `GitHub 86 site <http://github.com/ipython>`_. 87 """ 88 89 license = 'BSD' 90 91 authors = {'Fernando' : ('Fernando Perez','[email protected]'), 92 'Janko' : ('Janko Hauser','[email protected]'), 93 'Nathan' : ('Nathaniel Gray','[email protected]'), 94 'Ville' : ('Ville Vainio','[email protected]'), 95 'Brian' : ('Brian E Granger', '[email protected]'), 96 'Min' : ('Min Ragan-Kelley', '[email protected]'), 97 'Thomas' : ('Thomas A. Kluyver', '[email protected]'), 98 'Jorgen' : ('Jorgen Stenarson', '[email protected]'), 99 'Matthias' : ('Matthias Bussonnier', '[email protected]'), 100 } 101 102 author = 'The IPython Development Team' 103 104 author_email = '[email protected]' 105 106 url = 'http://ipython.org' 107 108 109 platforms = ['Linux','Mac OSX','Windows'] 110 111 keywords = ['Interactive','Interpreter','Shell', 'Embedding'] 112 113 classifiers = [ 114 'Framework :: IPython', 115 'Intended Audience :: Developers', 116 'Intended Audience :: Science/Research', 117 'License :: OSI Approved :: BSD License', 118 'Programming Language :: Python', 119 'Programming Language :: Python :: 2', 120 'Programming Language :: Python :: 2.7', 121 'Programming Language :: Python :: 3', 122 'Topic :: System :: Shells' 123 ] 124 [end of IPython/core/release.py] [start of setup.py] 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 """Setup script for IPython. 4 5 Under Posix environments it works like a typical setup.py script. 6 Under Windows, the command sdist is not supported, since IPython 7 requires utilities which are not available under Windows.""" 8 9 #----------------------------------------------------------------------------- 10 # Copyright (c) 2008-2011, IPython Development Team. 11 # Copyright (c) 2001-2007, Fernando Perez <[email protected]> 12 # Copyright (c) 2001, Janko Hauser <[email protected]> 13 # Copyright (c) 2001, Nathaniel Gray <[email protected]> 14 # 15 # Distributed under the terms of the Modified BSD License. 16 # 17 # The full license is in the file COPYING.rst, distributed with this software. 
18 #----------------------------------------------------------------------------- 19 20 #----------------------------------------------------------------------------- 21 # Minimal Python version sanity check 22 #----------------------------------------------------------------------------- 23 from __future__ import print_function 24 25 import sys 26 27 # This check is also made in IPython/__init__, don't forget to update both when 28 # changing Python version requirements. 29 v = sys.version_info 30 if v[:2] < (2,7) or (v[0] >= 3 and v[:2] < (3,3)): 31 error = "ERROR: IPython requires Python version 2.7 or 3.3 or above." 32 print(error, file=sys.stderr) 33 sys.exit(1) 34 35 PY3 = (sys.version_info[0] >= 3) 36 37 # At least we're on the python version we need, move on. 38 39 #------------------------------------------------------------------------------- 40 # Imports 41 #------------------------------------------------------------------------------- 42 43 # Stdlib imports 44 import os 45 46 from glob import glob 47 48 # BEFORE importing distutils, remove MANIFEST. distutils doesn't properly 49 # update it when the contents of directories change. 50 if os.path.exists('MANIFEST'): os.remove('MANIFEST') 51 52 from distutils.core import setup 53 54 # Our own imports 55 from setupbase import target_update 56 57 from setupbase import ( 58 setup_args, 59 find_packages, 60 find_package_data, 61 check_package_data_first, 62 find_entry_points, 63 build_scripts_entrypt, 64 find_data_files, 65 git_prebuild, 66 install_symlinked, 67 install_lib_symlink, 68 install_scripts_for_symlink, 69 unsymlink, 70 ) 71 72 isfile = os.path.isfile 73 pjoin = os.path.join 74 75 #------------------------------------------------------------------------------- 76 # Handle OS specific things 77 #------------------------------------------------------------------------------- 78 79 if os.name in ('nt','dos'): 80 os_name = 'windows' 81 else: 82 os_name = os.name 83 84 # Under Windows, 'sdist' has not been supported. Now that the docs build with 85 # Sphinx it might work, but let's not turn it on until someone confirms that it 86 # actually works. 87 if os_name == 'windows' and 'sdist' in sys.argv: 88 print('The sdist command is not available under Windows. Exiting.') 89 sys.exit(1) 90 91 92 #------------------------------------------------------------------------------- 93 # Things related to the IPython documentation 94 #------------------------------------------------------------------------------- 95 96 # update the manuals when building a source dist 97 if len(sys.argv) >= 2 and sys.argv[1] in ('sdist','bdist_rpm'): 98 99 # List of things to be updated. 
Each entry is a triplet of args for 100 # target_update() 101 to_update = [ 102 ('docs/man/ipython.1.gz', 103 ['docs/man/ipython.1'], 104 'cd docs/man && gzip -9c ipython.1 > ipython.1.gz'), 105 ] 106 107 108 [ target_update(*t) for t in to_update ] 109 110 #--------------------------------------------------------------------------- 111 # Find all the packages, package data, and data_files 112 #--------------------------------------------------------------------------- 113 114 packages = find_packages() 115 package_data = find_package_data() 116 117 data_files = find_data_files() 118 119 setup_args['packages'] = packages 120 setup_args['package_data'] = package_data 121 setup_args['data_files'] = data_files 122 123 #--------------------------------------------------------------------------- 124 # custom distutils commands 125 #--------------------------------------------------------------------------- 126 # imports here, so they are after setuptools import if there was one 127 from distutils.command.sdist import sdist 128 from distutils.command.upload import upload 129 130 class UploadWindowsInstallers(upload): 131 132 description = "Upload Windows installers to PyPI (only used from tools/release_windows.py)" 133 user_options = upload.user_options + [ 134 ('files=', 'f', 'exe file (or glob) to upload') 135 ] 136 def initialize_options(self): 137 upload.initialize_options(self) 138 meta = self.distribution.metadata 139 base = '{name}-{version}'.format( 140 name=meta.get_name(), 141 version=meta.get_version() 142 ) 143 self.files = os.path.join('dist', '%s.*.exe' % base) 144 145 def run(self): 146 for dist_file in glob(self.files): 147 self.upload_file('bdist_wininst', 'any', dist_file) 148 149 setup_args['cmdclass'] = { 150 'build_py': \ 151 check_package_data_first(git_prebuild('IPython')), 152 'sdist' : git_prebuild('IPython', sdist), 153 'upload_wininst' : UploadWindowsInstallers, 154 'symlink': install_symlinked, 155 'install_lib_symlink': install_lib_symlink, 156 'install_scripts_sym': install_scripts_for_symlink, 157 'unsymlink': unsymlink, 158 } 159 160 161 #--------------------------------------------------------------------------- 162 # Handle scripts, dependencies, and setuptools specific things 163 #--------------------------------------------------------------------------- 164 165 # For some commands, use setuptools. Note that we do NOT list install here! 
166 # If you want a setuptools-enhanced install, just run 'setupegg.py install' 167 needs_setuptools = set(('develop', 'release', 'bdist_egg', 'bdist_rpm', 168 'bdist', 'bdist_dumb', 'bdist_wininst', 'bdist_wheel', 169 'egg_info', 'easy_install', 'upload', 'install_egg_info', 170 )) 171 172 if len(needs_setuptools.intersection(sys.argv)) > 0: 173 import setuptools 174 175 # This dict is used for passing extra arguments that are setuptools 176 # specific to setup 177 setuptools_extra_args = {} 178 179 # setuptools requirements 180 181 extras_require = dict( 182 parallel = ['ipyparallel'], 183 qtconsole = ['qtconsole'], 184 doc = ['Sphinx>=1.3'], 185 test = ['nose>=0.10.1', 'requests', 'testpath', 'pygments', 'nbformat', 'ipykernel', 'numpy'], 186 terminal = [], 187 kernel = ['ipykernel'], 188 nbformat = ['nbformat'], 189 notebook = ['notebook', 'ipywidgets'], 190 nbconvert = ['nbconvert'], 191 ) 192 193 install_requires = [ 194 'setuptools>=18.5', 195 'decorator', 196 'pickleshare', 197 'simplegeneric>0.8', 198 'traitlets>=4.2', 199 'prompt_toolkit>=1.0.3,<2.0.0', 200 'pygments', 201 ] 202 203 # Platform-specific dependencies: 204 # This is the correct way to specify these, 205 # but requires pip >= 6. pip < 6 ignores these. 206 207 extras_require.update({ 208 ':python_version == "2.7"': ['backports.shutil_get_terminal_size'], 209 ':python_version == "2.7" or python_version == "3.3"': ['pathlib2'], 210 ':sys_platform != "win32"': ['pexpect'], 211 ':sys_platform == "darwin"': ['appnope'], 212 ':sys_platform == "win32"': ['colorama', 'win_unicode_console>=0.5'], 213 'test:python_version == "2.7"': ['mock'], 214 }) 215 # FIXME: re-specify above platform dependencies for pip < 6 216 # These would result in non-portable bdists. 217 if not any(arg.startswith('bdist') for arg in sys.argv): 218 if sys.version_info < (3, 3): 219 extras_require['test'].append('mock') 220 221 if sys.platform == 'darwin': 222 install_requires.extend(['appnope']) 223 224 if not sys.platform.startswith('win'): 225 install_requires.append('pexpect') 226 227 # workaround pypa/setuptools#147, where setuptools misspells 228 # platform_python_implementation as python_implementation 229 if 'setuptools' in sys.modules: 230 for key in list(extras_require): 231 if 'platform_python_implementation' in key: 232 new_key = key.replace('platform_python_implementation', 'python_implementation') 233 extras_require[new_key] = extras_require.pop(key) 234 235 everything = set() 236 for key, deps in extras_require.items(): 237 if ':' not in key: 238 everything.update(deps) 239 extras_require['all'] = everything 240 241 if 'setuptools' in sys.modules: 242 setuptools_extra_args['zip_safe'] = False 243 setuptools_extra_args['entry_points'] = { 244 'console_scripts': find_entry_points(), 245 'pygments.lexers': [ 246 'ipythonconsole = IPython.lib.lexers:IPythonConsoleLexer', 247 'ipython = IPython.lib.lexers:IPythonLexer', 248 'ipython3 = IPython.lib.lexers:IPython3Lexer', 249 ], 250 } 251 setup_args['extras_require'] = extras_require 252 requires = setup_args['install_requires'] = install_requires 253 254 # Script to be run by the windows binary installer after the default setup 255 # routine, to add shortcuts and similar windows-only things. Windows 256 # post-install scripts MUST reside in the scripts/ dir, otherwise distutils 257 # doesn't find them. 258 if 'bdist_wininst' in sys.argv: 259 if len(sys.argv) > 2 and \ 260 ('sdist' in sys.argv or 'bdist_rpm' in sys.argv): 261 print("ERROR: bdist_wininst must be run alone. 
Exiting.", file=sys.stderr) 262 sys.exit(1) 263 setup_args['data_files'].append( 264 ['Scripts', ('scripts/ipython.ico', 'scripts/ipython_nb.ico')]) 265 setup_args['scripts'] = [pjoin('scripts','ipython_win_post_install.py')] 266 setup_args['options'] = {"bdist_wininst": 267 {"install_script": 268 "ipython_win_post_install.py"}} 269 270 else: 271 # scripts has to be a non-empty list, or install_scripts isn't called 272 setup_args['scripts'] = [e.split('=')[0].strip() for e in find_entry_points()] 273 274 setup_args['cmdclass']['build_scripts'] = build_scripts_entrypt 275 276 #--------------------------------------------------------------------------- 277 # Do the actual setup now 278 #--------------------------------------------------------------------------- 279 280 setup_args.update(setuptools_extra_args) 281 282 283 284 def main(): 285 setup(**setup_args) 286 287 if __name__ == '__main__': 288 main() 289 [end of setup.py] [start of tools/toollib.py] 1 """Various utilities common to IPython release and maintenance tools. 2 """ 3 from __future__ import print_function 4 5 # Library imports 6 import os 7 8 # Useful shorthands 9 pjoin = os.path.join 10 cd = os.chdir 11 12 # Constants 13 14 # SSH root address of the archive site 15 archive_user = '[email protected]' 16 archive_dir = 'archive.ipython.org' 17 archive = '%s:%s' % (archive_user, archive_dir) 18 19 # Build commands 20 # Source dists 21 sdists = './setup.py sdist --formats=gztar,zip' 22 # Binary dists 23 def buildwheels(): 24 sh('python setupegg.py bdist_wheel') 25 26 # Utility functions 27 def sh(cmd): 28 """Run system command in shell, raise SystemExit if it returns an error.""" 29 print("$", cmd) 30 stat = os.system(cmd) 31 #stat = 0 # Uncomment this and comment previous to run in debug mode 32 if stat: 33 raise SystemExit("Command %s failed with code: %s" % (cmd, stat)) 34 35 # Backwards compatibility 36 c = sh 37 38 def get_ipdir(): 39 """Get IPython directory from command line, or assume it's the one above.""" 40 41 # Initialize arguments and check location 42 ipdir = pjoin(os.path.dirname(__file__), os.pardir) 43 44 ipdir = os.path.abspath(ipdir) 45 46 cd(ipdir) 47 if not os.path.isdir('IPython') and os.path.isfile('setup.py'): 48 raise SystemExit('Invalid ipython directory: %s' % ipdir) 49 return ipdir 50 51 try: 52 execfile = execfile 53 except NameError: 54 def execfile(fname, globs, locs=None): 55 locs = locs or globs 56 exec(compile(open(fname).read(), fname, "exec"), globs, locs) 57 [end of tools/toollib.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
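Since the instructions above ask for a diff that can be applied with `git apply`, it can be worth dry-running a generated patch before submitting it. A minimal sketch of that check is below; the `fix.patch` filename is an assumption for illustration, not part of the instance data.

```python
import subprocess


def patch_applies_cleanly(patch_path="fix.patch"):
    """Dry-run the patch against the current working tree; returns True if
    `git apply --check` reports no conflicts."""
    result = subprocess.run(
        ["git", "apply", "--check", patch_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Surface git's conflict report so the patch can be regenerated.
        print(result.stderr.strip())
    return result.returncode == 0


if __name__ == "__main__":
    print("patch applies cleanly:", patch_applies_cleanly())
```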
ipython/ipython
99d29c2d556b9889b5040874e5e673ae2e5a032a
AttributeError: 'StreamLogger' object has no attribute 'isatty' OS: macOS 10.11.6 Python version: Python 2.7.10 ipython version: IPython 5.0.0 Scrapy project with settings: `LOG_STDOUT = True` which redirects stdout to log ``` Traceback (most recent call last): File "/Library/Python/2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback yield next(it) File "/Library/Python/2.7/site-packages/scrapy_splash/middleware.py", line 156, in process_spider_output for el in result: File "/Library/Python/2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output for x in result: File "/Library/Python/2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr> return (_set_referer(r) for r in result or ()) File "/Library/Python/2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr> return (r for r in result or () if _filter(r)) File "/Library/Python/2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr> return (r for r in result or () if _filter(r)) File "/Users/mini/PycharmProjects/spider_demo/spider_demo/spiders/baidu_news.py", line 42, in parse inspect_response(response, self) File "/Library/Python/2.7/site-packages/scrapy/shell.py", line 157, in inspect_response Shell(spider.crawler).start(response=response) File "/Library/Python/2.7/site-packages/scrapy/shell.py", line 80, in start banner=self.vars.pop('banner', '')) File "/Library/Python/2.7/site-packages/scrapy/utils/console.py", line 75, in start_python_console shell = get_shell_embed_func(shells) File "/Library/Python/2.7/site-packages/scrapy/utils/console.py", line 63, in get_shell_embed_func return known_shells[shell]() File "/Library/Python/2.7/site-packages/scrapy/utils/console.py", line 7, in _embed_ipython_shell from IPython.terminal.embed import InteractiveShellEmbed File "/Library/Python/2.7/site-packages/IPython/__init__.py", line 49, in <module> from .terminal.embed import embed File "/Library/Python/2.7/site-packages/IPython/terminal/embed.py", line 17, in <module> from IPython.terminal.interactiveshell import TerminalInteractiveShell File "/Library/Python/2.7/site-packages/IPython/terminal/interactiveshell.py", line 77, in <module> _is_tty = (sys.stdin.isatty()) and (sys.stdout.isatty()) and (sys.stderr.isatty()) AttributeError: 'StreamLogger' object has no attribute 'isatty' ```
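The root cause reported above is that Scrapy's `LOG_STDOUT = True` replaces `sys.stdout` with a logger object that implements `write()` but not `isatty()`, so IPython's import-time tty check blows up. A minimal, self-contained reproduction of that failure mode follows; the stand-in class is hypothetical, not Scrapy's actual implementation.

```python
import sys


class FakeStreamLogger:
    """Hypothetical stand-in for a stdout-to-log redirector: it has write()
    but deliberately no isatty(), mimicking the object in the traceback."""

    def write(self, msg):
        pass  # a real redirector would forward msg to the logging module


sys.stdout = FakeStreamLogger()

try:
    # Mirrors the failing import-time check: calling isatty() on the
    # replaced stream raises because the attribute simply does not exist.
    sys.stdout.isatty()
except AttributeError as exc:
    print("reproduced: %s" % exc, file=sys.stderr)
```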
2016-08-03T09:36:07Z
<patch> diff --git a/IPython/terminal/interactiveshell.py b/IPython/terminal/interactiveshell.py --- a/IPython/terminal/interactiveshell.py +++ b/IPython/terminal/interactiveshell.py @@ -74,11 +74,17 @@ def get_default_editor(): else: return 'notepad' # same in Windows! - -if sys.stdin and sys.stdout and sys.stderr: - _is_tty = (sys.stdin.isatty()) and (sys.stdout.isatty()) and (sys.stderr.isatty()) +# conservatively check for tty +# overridden streams can result in things like: +# - sys.stdin = None +# - no isatty method +for _name in ('stdin', 'stdout', 'stderr'): + _stream = getattr(sys, _name) + if not _stream or not hasattr(_stream, 'isatty') or not _stream.isatty(): + _is_tty = False + break else: - _is_tty = False + _is_tty = True _use_simple_prompt = ('IPY_TEST_SIMPLE_PROMPT' in os.environ) or (not _is_tty) </patch>
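The fix above leans on Python's for/else construct: the `else` branch runs only when the loop completes without hitting `break`, so `_is_tty` ends up `True` only if every stream passed the check. A small sketch of the same pattern with hypothetical stand-in streams (not IPython's actual runtime state):

```python
import io

# Hypothetical stand-in streams: one is None (e.g. stdin detached), the
# others are in-memory buffers whose isatty() returns False.
streams = {"stdin": None, "stdout": io.StringIO(), "stderr": io.StringIO()}

for name, stream in streams.items():
    if not stream or not hasattr(stream, "isatty") or not stream.isatty():
        is_tty = False
        break  # one unusable stream is enough to give up on tty features
else:
    # Only reached when the loop finished without break, i.e. every stream
    # existed, had isatty(), and reported being a terminal.
    is_tty = True

print(is_tty)  # False: "stdin" is None, so the loop breaks immediately
```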
[]
[]
conan-io__conan-8167
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [bug] YCM generator uses deprecated FlagsForFile method instead of Settings <!-- Please don't forget to update the issue title. Include all applicable information to help us reproduce your problem. To help us debug your issue please explain: --> ### Environment Details (include every applicable attribute) * Operating System+version: macOS 10.14.5 * Compiler+version: clang 10.0.1 * Conan version: 1.31.4 * Python version: 3.9.0 ### Steps to reproduce (Include if Applicable) Follow instructions at https://docs.conan.io/en/latest/integrations/ide/youcompleteme.html#youcompleteme-integration to configure `.ycm_extra_conf` and `conan_ycm_flags.json`: conanfile.txt ``` [generators] ycm ``` ```bash # from your base folder $ cp build/conan_ycm_extra_conf.py .ycm_extra_conf.py $ ln -s build/conan_ycm_flags.json conan_ycm_flags.json ``` Install `gtest` as a package, and then import it in a source file. ### Logs (Executed commands with output) (Include/Attach if Applicable) <!-- Your log content should be related to the bug description, it can be: - Conan command output - Server output (Artifactory, conan_server) --> YCM was unable to find the gtest package as installed by conan. YCM Debug Info: ``` Printing YouCompleteMe debug information... -- Resolve completions: Up front -- Client logfile: /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycm_x9dk66na.log -- Server Python interpreter: /usr/local/opt/[email protected]/bin/python3.9 -- Server Python version: 3.9.0 -- Server has Clang support compiled in: True -- Clang version: clang version 10.0.0 -- Extra configuration file found and loaded -- Extra configuration path: /Users/username/home/projects/project/.ycm_extra_conf.py -- C-family completer debug information: -- Clangd running -- Clangd process ID: 56305 -- Clangd executable: ['/Users/username/.vim/plugged/YouCompleteMe/third_party/ycmd/third_party/clangd/output/bin/clangd', '-header-insertion-decorators=0', '-resource-dir=/Users/ username/.vim/plugged/YouCompleteMe/third_party/ycmd/third_party/clang/lib/clang/10.0.0', '-limit-results=500', '-log=verbose'] -- Clangd logfiles: -- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/clangd_stderr615mhccn.log -- Clangd Server State: Initialized -- Clangd Project Directory: /Users/username/home/projects/project -- Clangd Settings: {} -- Clangd Compilation Command: False -- Server running at: http://127.0.0.1:50225 -- Server process ID: 56303 -- Server logfiles: -- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycmd_50225_stdout_nstboyjy.log -- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycmd_50225_stderr_ey11rfes.log ``` As can be seen, `clangd` is not using the flags `'-x', 'c++'` as defined in the default `flags` list in the generated `.ycm_extra_conf.py`, or the `gtest` package as installed by conan. The generated `conan_ycm_flags.json` file contains the following: ``` { "includes": [ "-isystem/Users/username/.conan/data/gtest/1.10.0/_/_/package/03ad53d73db1da068548d1d6a87ac3219077b5c0/include", "-isystem/Users/username/.conan/data/rapidjson/1.1.0/_/_/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/include" ], "defines": [], "flags": [] } ``` These flags are also not included in the compilation arguments. The issue appears to be caused by the fact that the [generator](https://github.com/conan-io/conan/blob/develop/conans/client/generators/ycm.py) uses the deprecated `FlagsForFile` method instead of it's replacement, `Settings`. 
This can be resolved by modifying line 143 from: ```python def FlagsForFile( filename, **kwargs ): ``` to ```python def Settings( filename, **kwargs): ``` As a new user of YCM and conan, this took an inordinate amount of time to troubleshoot, though it is relatively trivial. </issue> <code> [start of README.rst] 1 |Logo| 2 3 Conan 4 ===== 5 6 Decentralized, open-source (MIT), C/C++ package manager. 7 8 - Homepage: https://conan.io/ 9 - Github: https://github.com/conan-io/conan 10 - Docs: https://docs.conan.io/en/latest/ 11 - Slack: https://cpplang-inviter.cppalliance.org/ (#conan channel) 12 - Twitter: https://twitter.com/conan_io 13 14 15 Conan is a package manager for C and C++ developers: 16 17 - It is fully decentralized. Users can host their packages in their servers, privately. Integrates with Artifactory and Bintray. 18 - Portable. Works across all platforms, including Linux, OSX, Windows (with native and first-class support, WSL, MinGW), 19 Solaris, FreeBSD, embedded and cross-compiling, docker, WSL 20 - Manage binaries. It can create, upload and download binaries for any configuration and platform, 21 even cross-compiling, saving lots of time in development and continuous integration. The binary compatibility 22 can be configured and customized. Manage all your artifacts in the same way on all platforms. 23 - Integrates with any build system, including any proprietary and custom one. Provides tested support for major build systems 24 (CMake, MSBuild, Makefiles, Meson, etc). 25 - Extensible: Its python based recipes, together with extensions points allows for great power and flexibility. 26 - Large and active community, especially in Github (https://github.com/conan-io/conan) and Slack (https://cpplang-inviter.cppalliance.org/ #conan channel). 27 This community also creates and maintains packages in ConanCenter and Bincrafters repositories in Bintray. 28 - Stable. Used in production by many companies, since 1.0 there is a commitment not to break package recipes and documented behavior. 29 30 31 32 +-------------------------+-------------------------+ 33 | **develop** | **Code Climate** | 34 +=========================+=========================+ 35 | |Build Status Develop| | |Develop climate| | 36 +-------------------------+-------------------------+ 37 38 39 Setup 40 ===== 41 42 Please read https://docs.conan.io/en/latest/installation.html to know how to 43 install and start using Conan. TL;DR: 44 45 .. code-block:: 46 47 $ pip install conan 48 49 50 Install a development version 51 ----------------------------- 52 53 You can run **Conan** client and server in Windows, MacOS, and Linux. 54 55 - **Install pip following** `pip docs`_. 56 57 - **Clone Conan repository:** 58 59 .. code-block:: bash 60 61 $ git clone https://github.com/conan-io/conan.git conan-io 62 63 NOTE: repository directory name matters, some directories are known to be problematic to run tests (e.g. `conan`). `conan-io` directory name was tested and guaranteed to be working. 64 65 - **Install in editable mode** 66 67 .. code-block:: bash 68 69 $ cd conan && sudo pip install -e . 70 71 If you are in Windows, using ``sudo`` is not required. 72 73 - **You are ready, try to run Conan:** 74 75 .. code-block:: 76 77 $ conan --help 78 79 Consumer commands 80 install Installs the requirements specified in a conanfile (.py or .txt). 81 config Manages configuration. Edits the conan.conf or installs config files. 82 get Gets a file or list a directory of a given reference or package. 
83 info Gets information about the dependency graph of a recipe. 84 search Searches package recipes and binaries in the local cache or in a remote. 85 Creator commands 86 new Creates a new package recipe template with a 'conanfile.py'. 87 create Builds a binary package for a recipe (conanfile.py) located in the current dir. 88 upload Uploads a recipe and binary packages to a remote. 89 export Copies the recipe (conanfile.py & associated files) to your local cache. 90 export-pkg Exports a recipe & creates a package with given files calling 'package'. 91 test Test a package, consuming it with a conanfile recipe with a test() method. 92 Package development commands 93 source Calls your local conanfile.py 'source()' method. 94 build Calls your local conanfile.py 'build()' method. 95 package Calls your local conanfile.py 'package()' method. 96 Misc commands 97 profile Lists profiles in the '.conan/profiles' folder, or shows profile details. 98 remote Manages the remote list and the package recipes associated with a remote. 99 user Authenticates against a remote with user/pass, caching the auth token. 100 imports Calls your local conanfile.py or conanfile.txt 'imports' method. 101 copy Copies conan recipes and packages to another user/channel. 102 remove Removes packages or binaries matching pattern from local cache or remote. 103 alias Creates and exports an 'alias recipe'. 104 download Downloads recipe and binaries to the local cache, without using settings. 105 106 Conan commands. Type "conan <command> -h" for help 107 108 Contributing to the project 109 =========================== 110 111 Feedback and contribution are always welcome in this project. 112 Please read our `contributing guide <https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md>`_. 113 114 Running the tests 115 ================= 116 117 Using tox 118 --------- 119 120 .. code-block:: bash 121 122 $ python -m tox 123 124 It will install the needed requirements and launch `pytest` skipping some heavy and slow tests. 125 If you want to run the full test suite: 126 127 .. code-block:: bash 128 129 $ python -m tox -e full 130 131 Without tox 132 ----------- 133 134 **Install python requirements** 135 136 .. code-block:: bash 137 138 $ python -m pip install -r conans/requirements.txt 139 $ python -m pip install -r conans/requirements_server.txt 140 $ python -m pip install -r conans/requirements_dev.txt 141 142 If you are not Windows and you are not using a python virtual environment, you will need to run these 143 commands using `sudo`. 144 145 Before you can run the tests, you need to set a few environment variables first. 146 147 .. code-block:: bash 148 149 $ export PYTHONPATH=$PYTHONPATH:$(pwd) 150 151 On Windows it would be (while being in the Conan root directory): 152 153 .. code-block:: bash 154 155 $ set PYTHONPATH=. 156 157 Ensure that your ``cmake`` has version 2.8 or later. You can see the 158 version with the following command: 159 160 .. code-block:: bash 161 162 $ cmake --version 163 164 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your 165 operating system and your requirements. 166 167 These should work for the GCC from ``build-essential`` on Ubuntu 14.04: 168 169 .. code-block:: bash 170 171 $ export CONAN_COMPILER=gcc 172 $ export CONAN_COMPILER_VERSION=4.8 173 174 These should work for OS X: 175 176 .. code-block:: bash 177 178 $ export CONAN_COMPILER=clang 179 $ export CONAN_COMPILER_VERSION=3.5 180 181 You can run the actual tests like this: 182 183 .. 
code-block:: bash 184 185 $ python -m pytest . 186 187 188 There are a couple of test attributes defined, as ``slow`` that you can use 189 to filter the tests, and do not execute them: 190 191 .. code-block:: bash 192 193 $ python -m pytest . -m "not slow" 194 195 A few minutes later it should print ``OK``: 196 197 .. code-block:: bash 198 199 ............................................................................................ 200 ---------------------------------------------------------------------- 201 Ran 146 tests in 50.993s 202 203 OK 204 205 To run specific tests, you can specify the test name too, something like: 206 207 .. code-block:: bash 208 209 $ python -m pytest conans/test/unittests/client/cmd/export_test.py::ExportTest::test_export_warning -s 210 211 The ``-s`` argument can be useful to see some output that otherwise is captured by pytest. 212 213 Also, you can run tests against an instance of Artifactory. Those tests should add the attribute 214 ``artifactory_ready``. 215 216 .. code-block:: bash 217 218 $ python -m pytest . -m artifactory_ready 219 220 Some environment variables have to be defined to run them. For example, for an 221 Artifactory instance that is running on the localhost with default user and password configured, the 222 variables could take the values: 223 224 .. code-block:: bash 225 226 $ export CONAN_TEST_WITH_ARTIFACTORY=1 227 $ export ARTIFACTORY_DEFAULT_URL=http://localhost:8081/artifactory 228 $ export ARTIFACTORY_DEFAULT_USER=admin 229 $ export ARTIFACTORY_DEFAULT_PASSWORD=password 230 231 ``ARTIFACTORY_DEFAULT_URL`` is the base url for the Artifactory repo, not one for an specific 232 repository. Running the tests with a real Artifactory instance will create repos on the fly so please 233 use a separate server for testing purposes. 234 235 License 236 ------- 237 238 `MIT LICENSE <./LICENSE.md>`__ 239 240 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop 241 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop 242 243 .. |Develop climate| image:: https://api.codeclimate.com/v1/badges/081b53e570d5220b34e4/maintainability.svg 244 :target: https://codeclimate.com/github/conan-io/conan/maintainability 245 246 .. |Logo| image:: https://conan.io/img/jfrog_conan_logo.png 247 248 249 .. 
_`pip docs`: https://pip.pypa.io/en/stable/installing/ 250 251 [end of README.rst] [start of conans/client/build/cmake.py] 1 import os 2 import platform 3 import re 4 import warnings 5 from itertools import chain 6 7 from six import StringIO # Python 2 and 3 compatible 8 9 10 from conans.client import tools 11 from conans.client.build import defs_to_string, join_arguments 12 from conans.client.build.cmake_flags import CMakeDefinitionsBuilder, \ 13 get_generator, is_multi_configuration, verbose_definition, verbose_definition_name, \ 14 cmake_install_prefix_var_name, get_toolset, build_type_definition, \ 15 cmake_in_local_cache_var_name, runtime_definition_var_name, get_generator_platform, \ 16 is_generator_platform_supported, is_toolset_supported 17 from conans.client.output import ConanOutput, Color 18 from conans.client.tools.env import environment_append, _environment_add 19 from conans.client.tools.oss import cpu_count, args_to_string 20 from conans.errors import ConanException 21 from conans.model.version import Version 22 from conans.util.conan_v2_mode import conan_v2_behavior 23 from conans.util.config_parser import get_bool_from_text 24 from conans.util.files import mkdir, get_abs_path, walk, decode_text 25 from conans.util.runners import version_runner 26 27 28 class CMake(object): 29 def __new__(cls, conanfile, *args, **kwargs): 30 """ Inject the proper CMake base class in the hierarchy """ 31 from conans import ConanFile 32 if not isinstance(conanfile, ConanFile): 33 raise ConanException("First argument of CMake() has to be ConanFile. Use CMake(self)") 34 35 # If already injected, create and return 36 from conan.tools.cmake.cmake import CMake as CMakeToolchainBuildHelper 37 if CMakeToolchainBuildHelper in cls.__bases__ or CMakeBuildHelper in cls.__bases__: 38 return super(CMake, cls).__new__(cls) 39 40 # If not, add the proper CMake implementation 41 if hasattr(conanfile, "toolchain") or hasattr(conanfile, "generate"): 42 # Warning 43 msg = ("\n*****************************************************************\n" 44 "******************************************************************\n" 45 "This 'CMake' build helper has been deprecated and moved.\n" 46 "It will be removed in next Conan release.\n" 47 "Use 'from conan.tools.cmake import CMake' instead.\n" 48 "********************************************************************\n" 49 "********************************************************************\n") 50 ConanOutput(conanfile.output._stream, 51 color=conanfile.output._color).writeln(msg, front=Color.BRIGHT_RED) 52 warnings.warn(msg) 53 CustomCMakeClass = type("CustomCMakeClass", (cls, CMakeToolchainBuildHelper), {}) 54 else: 55 CustomCMakeClass = type("CustomCMakeClass", (cls, CMakeBuildHelper), {}) 56 57 return CustomCMakeClass.__new__(CustomCMakeClass, conanfile, *args, **kwargs) 58 59 def __init__(self, *args, **kwargs): 60 super(CMake, self).__init__(*args, **kwargs) 61 62 @staticmethod 63 def get_version(): 64 # FIXME: Conan 2.0 This function is require for python2 65 return CMakeBuildHelper.get_version() 66 67 68 class CMakeBuildHelper(object): 69 70 def __init__(self, conanfile, generator=None, cmake_system_name=True, 71 parallel=True, build_type=None, toolset=None, make_program=None, 72 set_cmake_flags=False, msbuild_verbosity="minimal", cmake_program=None, 73 generator_platform=None, append_vcvars=False): 74 """ 75 :param conanfile: Conanfile instance 76 :param generator: Generator name to use or none to autodetect 77 :param cmake_system_name: False to not use 
CMAKE_SYSTEM_NAME variable, 78 True for auto-detect or directly a string with the system name 79 :param parallel: Try to build with multiple cores if available 80 :param build_type: Overrides default build type coming from settings 81 :param toolset: Toolset name to use (such as llvm-vs2014) or none for default one, 82 applies only to certain generators (e.g. Visual Studio) 83 :param set_cmake_flags: whether or not to set CMake flags like CMAKE_CXX_FLAGS, 84 CMAKE_C_FLAGS, etc. it's vital to set for certain projects 85 (e.g. using CMAKE_SIZEOF_VOID_P or CMAKE_LIBRARY_ARCHITECTURE) 86 :param msbuild_verbosity: verbosity level for MSBuild (in case of Visual Studio generator) 87 :param cmake_program: Path to the custom cmake executable 88 :param generator_platform: Generator platform name or none to autodetect (-A cmake option) 89 """ 90 self._append_vcvars = append_vcvars 91 self._conanfile = conanfile 92 self._settings = conanfile.settings 93 self._build_type = build_type or conanfile.settings.get_safe("build_type") 94 self._cmake_program = os.getenv("CONAN_CMAKE_PROGRAM") or cmake_program or "cmake" 95 96 self.generator_platform = generator_platform 97 self.generator = generator or get_generator(conanfile) 98 99 if not self.generator: 100 self._conanfile.output.warn("CMake generator could not be deduced from settings") 101 self.parallel = parallel 102 # Initialize definitions (won't be updated if conanfile or any of these variables change) 103 builder = CMakeDefinitionsBuilder(self._conanfile, 104 cmake_system_name=cmake_system_name, 105 make_program=make_program, parallel=parallel, 106 generator=self.generator, 107 set_cmake_flags=set_cmake_flags, 108 forced_build_type=build_type, 109 output=self._conanfile.output) 110 # FIXME CONAN 2.0: CMake() interface should be always the constructor and self.definitions. 
111 # FIXME CONAN 2.0: Avoid properties and attributes to make the user interface more clear 112 113 try: 114 cmake_version = self.get_version() 115 self.definitions = builder.get_definitions(cmake_version) 116 except ConanException: 117 self.definitions = builder.get_definitions(None) 118 119 self.definitions["CONAN_EXPORTED"] = "1" 120 121 self.toolset = toolset or get_toolset(self._settings, self.generator) 122 self.build_dir = None 123 self.msbuild_verbosity = os.getenv("CONAN_MSBUILD_VERBOSITY") or msbuild_verbosity 124 125 @property 126 def generator(self): 127 return self._generator 128 129 @generator.setter 130 def generator(self, value): 131 self._generator = value 132 if not self._generator_platform_is_assigned: 133 self._generator_platform = get_generator_platform(self._settings, self._generator) 134 135 @property 136 def generator_platform(self): 137 return self._generator_platform 138 139 @generator_platform.setter 140 def generator_platform(self, value): 141 self._generator_platform = value 142 self._generator_platform_is_assigned = bool(value is not None) 143 144 @property 145 def build_folder(self): 146 return self.build_dir 147 148 @build_folder.setter 149 def build_folder(self, value): 150 self.build_dir = value 151 152 @property 153 def build_type(self): 154 return self._build_type 155 156 @build_type.setter 157 def build_type(self, build_type): 158 settings_build_type = self._settings.get_safe("build_type") 159 self.definitions.pop("CMAKE_BUILD_TYPE", None) 160 self.definitions.update(build_type_definition(build_type, settings_build_type, 161 self.generator, self._conanfile.output)) 162 self._build_type = build_type 163 164 @property 165 def in_local_cache(self): 166 try: 167 in_local_cache = self.definitions[cmake_in_local_cache_var_name] 168 return get_bool_from_text(str(in_local_cache)) 169 except KeyError: 170 return False 171 172 @property 173 def runtime(self): 174 return defs_to_string(self.definitions.get(runtime_definition_var_name)) 175 176 @property 177 def flags(self): 178 return defs_to_string(self.definitions) 179 180 @property 181 def is_multi_configuration(self): 182 return is_multi_configuration(self.generator) 183 184 @property 185 def command_line(self): 186 if self.generator_platform and not is_generator_platform_supported(self.generator): 187 raise ConanException('CMake does not support generator platform with generator ' 188 '"%s:. Please check your conan profile to either remove the ' 189 'generator platform, or change the CMake generator.' 190 % self.generator) 191 192 if self.toolset and not is_toolset_supported(self.generator): 193 raise ConanException('CMake does not support toolsets with generator "%s:.' 194 'Please check your conan profile to either remove the toolset,' 195 ' or change the CMake generator.' % self.generator) 196 197 generator = self.generator 198 generator_platform = self.generator_platform 199 200 if self.generator_platform and 'Visual Studio' in generator: 201 # FIXME: Conan 2.0 We are adding the platform to the generator instead of using 202 # the -A argument to keep previous implementation, but any modern CMake will support 203 # (and recommend) passing the platform in its own argument. 
204 # Get the version from the generator, as it could have been defined by user argument 205 compiler_version = re.search("Visual Studio ([0-9]*)", generator).group(1) 206 if Version(compiler_version) < "16" and self._settings.get_safe("os") != "WindowsCE": 207 if self.generator_platform == "x64": 208 generator += " Win64" if not generator.endswith(" Win64") else "" 209 generator_platform = None 210 elif self.generator_platform == "ARM": 211 generator += " ARM" if not generator.endswith(" ARM") else "" 212 generator_platform = None 213 elif self.generator_platform == "Win32": 214 generator_platform = None 215 216 args = ['-G "{}"'.format(generator)] if generator else [] 217 if generator_platform: 218 args.append('-A "{}"'.format(generator_platform)) 219 220 args.append(self.flags) 221 args.append('-Wno-dev') 222 223 if self.toolset: 224 args.append('-T "%s"' % self.toolset) 225 226 return join_arguments(args) 227 228 @property 229 def build_config(self): 230 """ cmake --build tool have a --config option for Multi-configuration IDEs 231 """ 232 if self._build_type and self.is_multi_configuration: 233 return "--config %s" % self._build_type 234 return "" 235 236 def _get_dirs(self, source_folder, build_folder, source_dir, build_dir, cache_build_folder): 237 if (source_folder or build_folder) and (source_dir or build_dir): 238 raise ConanException("Use 'build_folder'/'source_folder' arguments") 239 240 def get_dir(folder, origin): 241 if folder: 242 if os.path.isabs(folder): 243 return folder 244 return os.path.join(origin, folder) 245 return origin 246 247 if source_dir or build_dir: # OLD MODE 248 build_ret = build_dir or self.build_dir or self._conanfile.build_folder 249 source_ret = source_dir or self._conanfile.source_folder 250 else: 251 build_ret = get_dir(build_folder, self._conanfile.build_folder) 252 source_ret = get_dir(source_folder, self._conanfile.source_folder) 253 254 if self._conanfile.in_local_cache and cache_build_folder: 255 build_ret = get_dir(cache_build_folder, self._conanfile.build_folder) 256 257 return source_ret, build_ret 258 259 def _run(self, command): 260 compiler = self._settings.get_safe("compiler") 261 if not compiler: 262 conan_v2_behavior("compiler setting should be defined.", 263 v1_behavior=self._conanfile.output.warn) 264 the_os = self._settings.get_safe("os") 265 is_clangcl = the_os == "Windows" and compiler == "clang" 266 is_msvc = compiler == "Visual Studio" 267 is_intel = compiler == "intel" 268 context = tools.no_op() 269 270 if (is_msvc or is_clangcl) and platform.system() == "Windows": 271 if self.generator in ["Ninja", "NMake Makefiles", "NMake Makefiles JOM"]: 272 vcvars_dict = tools.vcvars_dict(self._settings, force=True, filter_known_paths=False, 273 output=self._conanfile.output) 274 context = _environment_add(vcvars_dict, post=self._append_vcvars) 275 elif is_intel: 276 if self.generator in ["Ninja", "NMake Makefiles", "NMake Makefiles JOM", 277 "Unix Makefiles"]: 278 intel_compilervars_dict = tools.intel_compilervars_dict(self._conanfile, force=True) 279 context = _environment_add(intel_compilervars_dict, post=self._append_vcvars) 280 with context: 281 self._conanfile.run(command) 282 283 def configure(self, args=None, defs=None, source_dir=None, build_dir=None, 284 source_folder=None, build_folder=None, cache_build_folder=None, 285 pkg_config_paths=None): 286 287 # TODO: Deprecate source_dir and build_dir in favor of xxx_folder 288 if not self._conanfile.should_configure: 289 return 290 args = args or [] 291 defs = defs or {} 292 
source_dir, self.build_dir = self._get_dirs(source_folder, build_folder, 293 source_dir, build_dir, 294 cache_build_folder) 295 mkdir(self.build_dir) 296 arg_list = join_arguments([ 297 self.command_line, 298 args_to_string(args), 299 defs_to_string(defs), 300 args_to_string([source_dir]) 301 ]) 302 303 if pkg_config_paths: 304 pkg_env = {"PKG_CONFIG_PATH": 305 os.pathsep.join(get_abs_path(f, self._conanfile.install_folder) 306 for f in pkg_config_paths)} 307 else: 308 # If we are using pkg_config generator automate the pcs location, otherwise it could 309 # read wrong files 310 set_env = "pkg_config" in self._conanfile.generators \ 311 and "PKG_CONFIG_PATH" not in os.environ 312 pkg_env = {"PKG_CONFIG_PATH": self._conanfile.install_folder} if set_env else None 313 314 with environment_append(pkg_env): 315 command = "cd %s && %s %s" % (args_to_string([self.build_dir]), self._cmake_program, 316 arg_list) 317 if platform.system() == "Windows" and self.generator == "MinGW Makefiles": 318 with tools.remove_from_path("sh"): 319 self._run(command) 320 else: 321 self._run(command) 322 323 def build(self, args=None, build_dir=None, target=None): 324 if not self._conanfile.should_build: 325 return 326 if not self._build_type: 327 conan_v2_behavior("build_type setting should be defined.", 328 v1_behavior=self._conanfile.output.warn) 329 self._build(args, build_dir, target) 330 331 def _build(self, args=None, build_dir=None, target=None): 332 args = args or [] 333 build_dir = build_dir or self.build_dir or self._conanfile.build_folder 334 if target is not None: 335 args = ["--target", target] + args 336 337 if self.generator and self.parallel: 338 if ("Makefiles" in self.generator or "Ninja" in self.generator) and \ 339 "NMake" not in self.generator: 340 if "--" not in args: 341 args.append("--") 342 args.append("-j%i" % cpu_count(self._conanfile.output)) 343 elif "Visual Studio" in self.generator: 344 compiler_version = re.search("Visual Studio ([0-9]*)", self.generator).group(1) 345 if Version(compiler_version) >= "10": 346 if "--" not in args: 347 args.append("--") 348 # Parallel for building projects in the solution 349 args.append("/m:%i" % cpu_count(output=self._conanfile.output)) 350 351 if self.generator and self.msbuild_verbosity: 352 if "Visual Studio" in self.generator: 353 compiler_version = re.search("Visual Studio ([0-9]*)", self.generator).group(1) 354 if Version(compiler_version) >= "10": 355 if "--" not in args: 356 args.append("--") 357 args.append("/verbosity:%s" % self.msbuild_verbosity) 358 359 arg_list = join_arguments([ 360 args_to_string([build_dir]), 361 self.build_config, 362 args_to_string(args) 363 ]) 364 command = "%s --build %s" % (self._cmake_program, arg_list) 365 self._run(command) 366 367 def install(self, args=None, build_dir=None): 368 if not self._conanfile.should_install: 369 return 370 mkdir(self._conanfile.package_folder) 371 if not self.definitions.get(cmake_install_prefix_var_name): 372 raise ConanException("%s not defined for 'cmake.install()'\n" 373 "Make sure 'package_folder' is " 374 "defined" % cmake_install_prefix_var_name) 375 self._build(args=args, build_dir=build_dir, target="install") 376 377 def test(self, args=None, build_dir=None, target=None, output_on_failure=False): 378 if not self._conanfile.should_test: 379 return 380 if not target: 381 target = "RUN_TESTS" if self.is_multi_configuration else "test" 382 383 test_env = {'CTEST_OUTPUT_ON_FAILURE': '1' if output_on_failure else '0'} 384 if self.parallel: 385 test_env['CTEST_PARALLEL_LEVEL'] = 
str(cpu_count(self._conanfile.output)) 386 with environment_append(test_env): 387 self._build(args=args, build_dir=build_dir, target=target) 388 389 @property 390 def verbose(self): 391 try: 392 verbose = self.definitions[verbose_definition_name] 393 return get_bool_from_text(str(verbose)) 394 except KeyError: 395 return False 396 397 @verbose.setter 398 def verbose(self, value): 399 self.definitions.update(verbose_definition(value)) 400 401 def patch_config_paths(self): 402 """ 403 changes references to the absolute path of the installed package and its dependencies in 404 exported cmake config files to the appropriate conan variable. This makes 405 most (sensible) cmake config files portable. 406 407 For example, if a package foo installs a file called "fooConfig.cmake" to 408 be used by cmake's find_package method, normally this file will contain 409 absolute paths to the installed package folder, for example it will contain 410 a line such as: 411 412 SET(Foo_INSTALL_DIR /home/developer/.conan/data/Foo/1.0.0/...) 413 414 This will cause cmake find_package() method to fail when someone else 415 installs the package via conan. 416 417 This function will replace such mentions to 418 419 SET(Foo_INSTALL_DIR ${CONAN_FOO_ROOT}) 420 421 which is a variable that is set by conanbuildinfo.cmake, so that find_package() 422 now correctly works on this conan package. 423 424 For dependent packages, if a package foo installs a file called "fooConfig.cmake" to 425 be used by cmake's find_package method and if it depends to a package bar, 426 normally this file will contain absolute paths to the bar package folder, 427 for example it will contain a line such as: 428 429 SET_TARGET_PROPERTIES(foo PROPERTIES 430 INTERFACE_INCLUDE_DIRECTORIES 431 "/home/developer/.conan/data/Bar/1.0.0/user/channel/id/include") 432 433 This function will replace such mentions to 434 435 SET_TARGET_PROPERTIES(foo PROPERTIES 436 INTERFACE_INCLUDE_DIRECTORIES 437 "${CONAN_BAR_ROOT}/include") 438 439 If the install() method of the CMake object in the conan file is used, this 440 function should be called _after_ that invocation. For example: 441 442 def build(self): 443 cmake = CMake(self) 444 cmake.configure() 445 cmake.build() 446 cmake.install() 447 cmake.patch_config_paths() 448 """ 449 450 if not self._conanfile.should_install: 451 return 452 if not self._conanfile.name: 453 raise ConanException("cmake.patch_config_paths() can't work without package name. " 454 "Define name in your recipe") 455 pf = self.definitions.get(cmake_install_prefix_var_name) 456 replstr = "${CONAN_%s_ROOT}" % self._conanfile.name.upper() 457 allwalk = chain(walk(self._conanfile.build_folder), walk(self._conanfile.package_folder)) 458 459 # We don't want warnings printed because there is no replacement of the abs path. 
460 # there could be MANY cmake files in the package and the normal thing is to not find 461 # the abs paths 462 _null_out = ConanOutput(StringIO()) 463 for root, _, files in allwalk: 464 for f in files: 465 if f.endswith(".cmake") and not f.startswith("conan"): 466 path = os.path.join(root, f) 467 468 tools.replace_path_in_file(path, pf, replstr, strict=False, 469 output=_null_out) 470 471 # patch paths of dependent packages that are found in any cmake files of the 472 # current package 473 for dep in self._conanfile.deps_cpp_info.deps: 474 from_str = self._conanfile.deps_cpp_info[dep].rootpath 475 dep_str = "${CONAN_%s_ROOT}" % dep.upper() 476 ret = tools.replace_path_in_file(path, from_str, dep_str, strict=False, 477 output=_null_out) 478 if ret: 479 self._conanfile.output.info("Patched paths for %s: %s to %s" 480 % (dep, from_str, dep_str)) 481 482 @staticmethod 483 def get_version(): 484 try: 485 out = version_runner(["cmake", "--version"]) 486 version_line = decode_text(out).split('\n', 1)[0] 487 version_str = version_line.rsplit(' ', 1)[-1] 488 return Version(version_str) 489 except Exception as e: 490 raise ConanException("Error retrieving CMake version: '{}'".format(e)) 491 [end of conans/client/build/cmake.py] [start of conans/client/generators/ycm.py] 1 import json 2 3 from conans.model import Generator 4 5 6 class YouCompleteMeGenerator(Generator): 7 template = ''' 8 # This file is NOT licensed under the GPLv3, which is the license for the rest 9 # of YouCompleteMe. 10 # 11 # Here's the license text for this file: 12 # 13 # This is free and unencumbered software released into the public domain. 14 # 15 # Anyone is free to copy, modify, publish, use, compile, sell, or 16 # distribute this software, either in source code form or as a compiled 17 # binary, for any purpose, commercial or non-commercial, and by any 18 # means. 19 # 20 # In jurisdictions that recognize copyright laws, the author or authors 21 # of this software dedicate any and all copyright interest in the 22 # software to the public domain. We make this dedication for the benefit 23 # of the public at large and to the detriment of our heirs and 24 # successors. We intend this dedication to be an overt act of 25 # relinquishment in perpetuity of all present and future rights to this 26 # software under copyright law. 27 # 28 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 29 # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 30 # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 31 # IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR 32 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 33 # ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 34 # OTHER DEALINGS IN THE SOFTWARE. 35 # 36 # For more information, please refer to <http://unlicense.org/> 37 38 import os 39 import json 40 import ycm_core 41 import logging 42 43 44 _logger = logging.getLogger(__name__) 45 46 47 def DirectoryOfThisScript(): 48 return os.path.dirname( os.path.abspath( __file__ ) ) 49 50 51 # These are the compilation flags that will be used in case there's no 52 # compilation database set (by default, one is not set). 53 # CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR. 
54 flags = [ 55 '-x', 'c++' 56 ] 57 58 conan_flags = json.loads(open("conan_ycm_flags.json", "r").read()) 59 60 flags.extend(conan_flags["flags"]) 61 flags.extend(conan_flags["defines"]) 62 flags.extend(conan_flags["includes"]) 63 64 65 # Set this to the absolute path to the folder (NOT the file!) containing the 66 # compile_commands.json file to use that instead of 'flags'. See here for 67 # more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html 68 # 69 # You can get CMake to generate this file for you by adding: 70 # set( CMAKE_EXPORT_COMPILE_COMMANDS 1 ) 71 # to your CMakeLists.txt file. 72 # 73 # Most projects will NOT need to set this to anything; you can just change the 74 # 'flags' list of compilation flags. Notice that YCM itself uses that approach. 75 compilation_database_folder = os.path.join(DirectoryOfThisScript(), 'Debug') 76 77 if os.path.exists( compilation_database_folder ): 78 database = ycm_core.CompilationDatabase( compilation_database_folder ) 79 if not database.DatabaseSuccessfullyLoaded(): 80 _logger.warn("Failed to load database") 81 database = None 82 else: 83 database = None 84 85 SOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm' ] 86 87 def GetAbsolutePath(include_path, working_directory): 88 if os.path.isabs(include_path): 89 return include_path 90 return os.path.join(working_directory, include_path) 91 92 93 def MakeRelativePathsInFlagsAbsolute( flags, working_directory ): 94 if not working_directory: 95 return list( flags ) 96 new_flags = [] 97 make_next_absolute = False 98 path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ] 99 for flag in flags: 100 new_flag = flag 101 102 if make_next_absolute: 103 make_next_absolute = False 104 new_flag = GetAbsolutePath(flag, working_directory) 105 106 for path_flag in path_flags: 107 if flag == path_flag: 108 make_next_absolute = True 109 break 110 111 if flag.startswith( path_flag ): 112 path = flag[ len( path_flag ): ] 113 new_flag = flag[:len(path_flag)] + GetAbsolutePath(path, working_directory) 114 break 115 116 if new_flag: 117 new_flags.append( new_flag ) 118 return new_flags 119 120 121 def IsHeaderFile( filename ): 122 extension = os.path.splitext( filename )[ 1 ] 123 return extension.lower() in [ '.h', '.hxx', '.hpp', '.hh' ] 124 125 126 def GetCompilationInfoForFile( filename ): 127 # The compilation_commands.json file generated by CMake does not have entries 128 # for header files. So we do our best by asking the db for flags for a 129 # corresponding source file, if any. If one exists, the flags for that file 130 # should be good enough. 
131 if IsHeaderFile( filename ): 132 basename = os.path.splitext( filename )[ 0 ] 133 for extension in SOURCE_EXTENSIONS: 134 replacement_file = basename + extension 135 if os.path.exists( replacement_file ): 136 compilation_info = database.GetCompilationInfoForFile( replacement_file ) 137 if compilation_info.compiler_flags_: 138 return compilation_info 139 return None 140 return database.GetCompilationInfoForFile( filename ) 141 142 143 def FlagsForFile( filename, **kwargs ): 144 relative_to = None 145 compiler_flags = None 146 147 if database: 148 # Bear in mind that compilation_info.compiler_flags_ does NOT return a 149 # python list, but a "list-like" StringVec object 150 compilation_info = GetCompilationInfoForFile( filename ) 151 if compilation_info is None: 152 relative_to = DirectoryOfThisScript() 153 compiler_flags = flags 154 else: 155 relative_to = compilation_info.compiler_working_dir_ 156 compiler_flags = compilation_info.compiler_flags_ 157 158 else: 159 relative_to = DirectoryOfThisScript() 160 compiler_flags = flags 161 162 final_flags = MakeRelativePathsInFlagsAbsolute( compiler_flags, relative_to ) 163 for flag in final_flags: 164 if flag.startswith("-W"): 165 final_flags.remove(flag) 166 _logger.info("Final flags for %s are %s" % (filename, ' '.join(final_flags))) 167 168 return {{ 169 'flags': final_flags + ["-I/usr/include", "-I/usr/include/c++/{cxx_version}"], 170 'do_cache': True 171 }} 172 ''' 173 174 @property 175 def filename(self): 176 pass 177 178 @property 179 def content(self): 180 def prefixed(prefix, values): 181 return [prefix + x for x in values] 182 183 conan_flags = { 184 "includes": prefixed("-isystem", self.deps_build_info.include_paths), 185 "defines": prefixed("-D", self.deps_build_info.defines), 186 "flags": self.deps_build_info.cxxflags 187 } 188 189 cxx_version = '' 190 try: 191 cxx_version = str(self.settings.compiler.version).split('.')[0] 192 except Exception: 193 pass 194 195 ycm_data = self.template.format(cxx_version=cxx_version) 196 return {"conan_ycm_extra_conf.py": ycm_data, 197 "conan_ycm_flags.json": json.dumps(conan_flags, indent=2)} 198 [end of conans/client/generators/ycm.py] [start of setup.py] 1 """A setuptools based setup module. 
2 See: 3 https://packaging.python.org/en/latest/distributing.html 4 https://github.com/pypa/sampleproject 5 """ 6 7 import os 8 import re 9 # To use a consistent encoding 10 from codecs import open 11 from os import path 12 13 # Always prefer setuptools over distutils 14 from setuptools import find_packages, setup 15 16 here = path.abspath(path.dirname(__file__)) 17 18 19 def get_requires(filename): 20 requirements = [] 21 with open(filename, "rt") as req_file: 22 for line in req_file.read().splitlines(): 23 if not line.strip().startswith("#"): 24 requirements.append(line) 25 return requirements 26 27 28 project_requirements = get_requires("conans/requirements.txt") 29 project_requirements.extend(get_requires("conans/requirements_server.txt")) 30 dev_requirements = get_requires("conans/requirements_dev.txt") 31 # The tests utils are used by conan-package-tools 32 exclude_test_packages = ["conans.test.{}*".format(d) 33 for d in os.listdir(os.path.join(here, "conans/test")) 34 if os.path.isdir(os.path.join(here, "conans/test", d)) and d != "utils"] 35 36 37 def load_version(): 38 """ Loads a file content """ 39 filename = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), 40 "conans", "__init__.py")) 41 with open(filename, "rt") as version_file: 42 conan_init = version_file.read() 43 version = re.search(r"__version__ = '([0-9a-z.-]+)'", conan_init).group(1) 44 return version 45 46 47 def generate_long_description_file(): 48 this_directory = path.abspath(path.dirname(__file__)) 49 with open(path.join(this_directory, 'README.rst'), encoding='utf-8') as f: 50 long_description = f.read() 51 return long_description 52 53 54 setup( 55 name='conan', 56 # Versions should comply with PEP440. For a discussion on single-sourcing 57 # the version across setup.py and the project code, see 58 # https://packaging.python.org/en/latest/single_source_version.html 59 version=load_version(), # + ".rc1", 60 61 description='Conan C/C++ package manager', 62 long_description=generate_long_description_file(), 63 long_description_content_type='text/x-rst', 64 65 # The project's main homepage. 66 url='https://conan.io', 67 68 # Author details 69 author='JFrog LTD', 70 author_email='[email protected]', 71 72 # Choose your license 73 license='MIT', 74 75 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers 76 classifiers=[ 77 'Development Status :: 5 - Production/Stable', 78 'Intended Audience :: Developers', 79 'Topic :: Software Development :: Build Tools', 80 'License :: OSI Approved :: MIT License', 81 'Programming Language :: Python :: 3', 82 'Programming Language :: Python :: 3.6', 83 'Programming Language :: Python :: 3.7', 84 'Programming Language :: Python :: 3.8' 85 ], 86 87 # What does your project relate to? 88 keywords=['C/C++', 'package', 'libraries', 'developer', 'manager', 89 'dependency', 'tool', 'c', 'c++', 'cpp'], 90 91 # You can just specify the packages manually here if your project is 92 # simple. Or you can use find_packages(). 93 packages=find_packages(exclude=exclude_test_packages), 94 95 # Alternatively, if you want to distribute just a my_module.py, uncomment 96 # this: 97 # py_modules=["my_module"], 98 99 # List run-time dependencies here. These will be installed by pip when 100 # your project is installed. For an analysis of "install_requires" vs pip's 101 # requirements files see: 102 # https://packaging.python.org/en/latest/requirements.html 103 install_requires=project_requirements, 104 105 # List additional groups of dependencies here (e.g. 
development 106 # dependencies). You can install these using the following syntax, 107 # for example: 108 # $ pip install -e .[dev,test] 109 extras_require={ 110 'dev': dev_requirements, 111 'test': dev_requirements, 112 }, 113 114 # If there are data files included in your packages that need to be 115 # installed, specify them here. If using Python 2.6 or less, then these 116 # have to be included in MANIFEST.in as well. 117 package_data={ 118 'conans': ['*.txt'], 119 }, 120 121 # Although 'package_data' is the preferred approach, in some case you may 122 # need to place data files outside of your packages. See: 123 # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa 124 # In this case, 'data_file' will be installed into '<sys.prefix>/my_data' 125 # data_files=[('my_data', ['data/data_file'])], 126 127 # To provide executable scripts, use entry points in preference to the 128 # "scripts" keyword. Entry points provide cross-platform support and allow 129 # pip to create the appropriate form of executable for the target platform. 130 entry_points={ 131 'console_scripts': [ 132 'conan=conans.conan:run', 133 'conan_server=conans.conan_server:run', 134 'conan_build_info=conans.build_info.command:run' 135 ], 136 }, 137 ) 138 [end of setup.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conan-io/conan
8b138923eef2fc35e23f20f17222d8aa6e8729fd
[bug] YCM generator uses deprecated FlagsForFile method instead of Settings <!-- Please don't forget to update the issue title. Include all applicable information to help us reproduce your problem. To help us debug your issue please explain: --> ### Environment Details (include every applicable attribute) * Operating System+version: macOS 10.14.5 * Compiler+version: clang 10.0.1 * Conan version: 1.31.4 * Python version: 3.9.0 ### Steps to reproduce (Include if Applicable) Follow instructions at https://docs.conan.io/en/latest/integrations/ide/youcompleteme.html#youcompleteme-integration to configure `.ycm_extra_conf` and `conan_ycm_flags.json`: conanfile.txt ``` [generators] ycm ``` ```bash # from your base folder $ cp build/conan_ycm_extra_conf.py .ycm_extra_conf.py $ ln -s build/conan_ycm_flags.json conan_ycm_flags.json ``` Install `gtest` as a package, and then import it in a source file. ### Logs (Executed commands with output) (Include/Attach if Applicable) <!-- Your log content should be related to the bug description, it can be: - Conan command output - Server output (Artifactory, conan_server) --> YCM was unable to find the gtest package as installed by conan. YCM Debug Info: ``` Printing YouCompleteMe debug information... -- Resolve completions: Up front -- Client logfile: /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycm_x9dk66na.log -- Server Python interpreter: /usr/local/opt/[email protected]/bin/python3.9 -- Server Python version: 3.9.0 -- Server has Clang support compiled in: True -- Clang version: clang version 10.0.0 -- Extra configuration file found and loaded -- Extra configuration path: /Users/username/home/projects/project/.ycm_extra_conf.py -- C-family completer debug information: -- Clangd running -- Clangd process ID: 56305 -- Clangd executable: ['/Users/username/.vim/plugged/YouCompleteMe/third_party/ycmd/third_party/clangd/output/bin/clangd', '-header-insertion-decorators=0', '-resource-dir=/Users/ username/.vim/plugged/YouCompleteMe/third_party/ycmd/third_party/clang/lib/clang/10.0.0', '-limit-results=500', '-log=verbose'] -- Clangd logfiles: -- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/clangd_stderr615mhccn.log -- Clangd Server State: Initialized -- Clangd Project Directory: /Users/username/home/projects/project -- Clangd Settings: {} -- Clangd Compilation Command: False -- Server running at: http://127.0.0.1:50225 -- Server process ID: 56303 -- Server logfiles: -- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycmd_50225_stdout_nstboyjy.log -- /var/folders/_2/cyfwx31x0y1dh06whkrkrmh00000gn/T/ycmd_50225_stderr_ey11rfes.log ``` As can be seen, `clangd` is not using the flags `'-x', 'c++'` as defined in the default `flags` list in the generated `.ycm_extra_conf.py`, or the `gtest` package as installed by conan. The generated `conan_ycm_flags.json` file contains the following: ``` { "includes": [ "-isystem/Users/username/.conan/data/gtest/1.10.0/_/_/package/03ad53d73db1da068548d1d6a87ac3219077b5c0/include", "-isystem/Users/username/.conan/data/rapidjson/1.1.0/_/_/package/5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9/include" ], "defines": [], "flags": [] } ``` These flags are also not included in the compilation arguments. The issue appears to be caused by the fact that the [generator](https://github.com/conan-io/conan/blob/develop/conans/client/generators/ycm.py) uses the deprecated `FlagsForFile` method instead of it's replacement, `Settings`. 
This can be resolved by modifying line 143 from: ```python def FlagsForFile( filename, **kwargs ): ``` to ```python def Settings( filename, **kwargs): ``` As a new user of YCM and conan, this took an inordinate amount of time to troubleshoot, though it is relatively trivial.
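For reference, newer ycmd releases document `Settings` as the entry point an extra-conf file should expose, which is why the legacy `FlagsForFile` name stops being picked up. The sketch below shows what a minimal extra conf of that shape can look like; it is illustrative only — not the template emitted by Conan's ycm generator — and the `language == "cfamily"` check and the file layout are assumptions.

```python
# Minimal .ycm_extra_conf.py sketch using the newer ycmd "Settings" entry point.
# Illustrative only -- not the template emitted by Conan's ycm generator.
# Assumptions: conan_ycm_flags.json sits next to this file, and ycmd passes a
# "language" kwarg ("cfamily" is assumed here for C/C++ buffers).
import json
import os

_DIR = os.path.dirname(os.path.abspath(__file__))


def _conan_flags():
    path = os.path.join(_DIR, "conan_ycm_flags.json")
    if not os.path.exists(path):
        return []
    with open(path, "r") as f:
        data = json.load(f)
    # The ycm generator writes three lists: raw flags, -D defines and -isystem includes
    return data.get("flags", []) + data.get("defines", []) + data.get("includes", [])


def Settings(**kwargs):
    if kwargs.get("language") != "cfamily":
        return {}
    return {"flags": ["-x", "c++"] + _conan_flags()}
```

This is consistent with the one-line rename suggested above: the generated template keeps its body and only the hook name changes.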
2020-12-07T13:32:50Z
<patch> diff --git a/conans/client/generators/ycm.py b/conans/client/generators/ycm.py --- a/conans/client/generators/ycm.py +++ b/conans/client/generators/ycm.py @@ -140,7 +140,7 @@ def GetCompilationInfoForFile( filename ): return database.GetCompilationInfoForFile( filename ) -def FlagsForFile( filename, **kwargs ): +def Settings( filename, **kwargs ): relative_to = None compiler_flags = None </patch>
[]
[]
conan-io__conan-4317
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Hard to debug error ERROR: Expecting value: line 1 column 1 (char 0) Hi, first of all i want to congratulate you guys on this great project that is conan. I performed some conan copy to rename the owner of the package on custom packages that some have sources other get the sources from git and so on, they all are different but now i'm getting this anoying error when i try to do a conan install on some packages: ``` ERROR: Expecting value: line 1 column 1 (char 0) ``` The problem is that it is very hard to know what package is creating this error. I kind understand that the problem was that the conan copy does not copy sources or it seems... something like that i can't really explain, but doing the conan create again on the package seems to fix this... (when i find the package with error... but i got many and is hard to find the one in error or ones) It will be nice to add the package name on the error to easily find the package that is creating the problem (many package dependencies create this hard to find package in error) </issue> <code> [start of README.rst] 1 Conan 2 ===== 3 4 A distributed, open-source, C/C++ package manager. 5 6 +------------------------+-------------------------+ 7 | **master** | **develop** | 8 +========================+=========================+ 9 | |Build Status Master| | |Build Status Develop| | 10 +------------------------+-------------------------+ 11 12 13 +------------------------+---------------------------+---------------------------------------------+ 14 | **Coverage master** | **Coverage develop** | **Coverage graph** | 15 +========================+===========================+=============================================+ 16 | |Master coverage| | |Develop coverage| | |Coverage graph| | 17 +------------------------+---------------------------+---------------------------------------------+ 18 19 20 Setup 21 ===== 22 23 From binaries 24 ------------- 25 26 We have installers for `most platforms here <http://conan.io>`__ but you 27 can run **conan** from sources if you want. 28 29 From pip 30 -------- 31 32 Conan is compatible with Python 2 and Python 3. 33 34 - Install pip following `pip docs`_. 35 - Install conan: 36 37 .. code-block:: bash 38 39 $ pip install conan 40 41 You can also use `test.pypi.org <https://test.pypi.org/project/conan/#history>`_ repository to install development (non-stable) Conan versions: 42 43 44 .. code-block:: bash 45 46 $ pip install --index-url https://test.pypi.org/simple/ conan 47 48 49 From Homebrew (OSx) 50 ------------------- 51 52 - Install Homebrew following `brew homepage`_. 53 54 .. code-block:: bash 55 56 $ brew update 57 $ brew install conan 58 59 From source 60 ----------- 61 62 You can run **conan** client and server in Windows, MacOS, and Linux. 63 64 - **Install pip following** `pip docs`_. 65 66 - **Clone conan repository:** 67 68 .. code-block:: bash 69 70 $ git clone https://github.com/conan-io/conan.git 71 72 - **Install in editable mode** 73 74 .. code-block:: bash 75 76 $ cd conan && sudo pip install -e . 77 78 If you are in Windows, using ``sudo`` is not required. 79 80 - **You are ready, try to run conan:** 81 82 .. code-block:: 83 84 $ conan --help 85 86 Consumer commands 87 install Installs the requirements specified in a conanfile (.py or .txt). 88 config Manages configuration. Edits the conan.conf or installs config files. 
89 get Gets a file or list a directory of a given reference or package. 90 info Gets information about the dependency graph of a recipe. 91 search Searches package recipes and binaries in the local cache or in a remote. 92 Creator commands 93 new Creates a new package recipe template with a 'conanfile.py'. 94 create Builds a binary package for recipe (conanfile.py) located in current dir. 95 upload Uploads a recipe and binary packages to a remote. 96 export Copies the recipe (conanfile.py & associated files) to your local cache. 97 export-pkg Exports a recipe & creates a package with given files calling 'package'. 98 test Test a package, consuming it with a conanfile recipe with a test() method. 99 Package development commands 100 source Calls your local conanfile.py 'source()' method. 101 build Calls your local conanfile.py 'build()' method. 102 package Calls your local conanfile.py 'package()' method. 103 Misc commands 104 profile Lists profiles in the '.conan/profiles' folder, or shows profile details. 105 remote Manages the remote list and the package recipes associated to a remote. 106 user Authenticates against a remote with user/pass, caching the auth token. 107 imports Calls your local conanfile.py or conanfile.txt 'imports' method. 108 copy Copies conan recipes and packages to another user/channel. 109 remove Removes packages or binaries matching pattern from local cache or remote. 110 alias Creates and exports an 'alias recipe'. 111 download Downloads recipe and binaries to the local cache, without using settings. 112 113 Conan commands. Type "conan <command> -h" for help 114 115 Contributing to the project 116 =========================== 117 118 Feedback and contribution is always welcome in this project. 119 Please read our [contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md). 120 121 Running the tests 122 ================= 123 124 Using tox 125 --------- 126 127 .. code-block:: bash 128 129 $ tox 130 131 It will install the needed requirements and launch `nose` skipping some heavy and slow test. 132 If you want to run the full test suite: 133 134 .. code-block:: bash 135 136 $ tox -e full 137 138 Without tox 139 ----------- 140 141 **Install python requirements** 142 143 .. code-block:: bash 144 145 $ pip install -r conans/requirements.txt 146 $ pip install -r conans/requirements_server.txt 147 $ pip install -r conans/requirements_dev.txt 148 149 150 Only in OSX: 151 152 .. code-block:: bash 153 154 $ pip install -r conans/requirements_osx.txt # You can omit this one if not running OSX 155 156 157 If you are not Windows and you are not using a python virtual environment, you will need to run these 158 commands using `sudo`. 159 160 Before you can run the tests, you need to set a few environment variables first. 161 162 .. code-block:: bash 163 164 $ export PYTHONPATH=$PYTHONPATH:$(pwd) 165 166 On Windows it would be (while being in the conan root directory): 167 168 .. code-block:: bash 169 170 $ set PYTHONPATH=. 171 172 Ensure that your ``cmake`` has version 2.8 or later. You can see the 173 version with the following command: 174 175 .. code-block:: bash 176 177 $ cmake --version 178 179 The appropriate values of ``CONAN_COMPILER`` and ``CONAN_COMPILER_VERSION`` depend on your 180 operating system and your requirements. 181 182 These should work for the GCC from ``build-essential`` on Ubuntu 14.04: 183 184 .. 
code-block:: bash 185 186 $ export CONAN_COMPILER=gcc 187 $ export CONAN_COMPILER_VERSION=4.8 188 189 These should work for OS X: 190 191 .. code-block:: bash 192 193 $ export CONAN_COMPILER=clang 194 $ export CONAN_COMPILER_VERSION=3.5 195 196 Finally, there are some tests that use conan to package Go-lang 197 libraries, so you might **need to install go-lang** in your computer and 198 add it to the path. 199 200 You can run the actual tests like this: 201 202 .. code-block:: bash 203 204 $ nosetests . 205 206 207 There are a couple of test attributes defined, as ``slow``, or ``golang`` that you can use 208 to filter the tests, and do not execute them: 209 210 .. code-block:: bash 211 212 $ nosetests . -a !golang 213 214 A few minutes later it should print ``OK``: 215 216 .. code-block:: bash 217 218 ............................................................................................ 219 ---------------------------------------------------------------------- 220 Ran 146 tests in 50.993s 221 222 OK 223 224 To run specific tests, you can specify the test name too, something like: 225 226 .. code-block:: bash 227 228 $ nosetests conans.test.command.config_install_test:ConfigInstallTest.install_file_test --nocapture 229 230 The ``--nocapture`` argument can be useful to see some output that otherwise is captured by nosetests. 231 232 License 233 ------- 234 235 `MIT LICENSE <./LICENSE.md>`__ 236 237 .. |Build Status Master| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/master 238 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/master 239 240 .. |Build Status Develop| image:: https://conan-ci.jfrog.info/buildStatus/icon?job=ConanTestSuite/develop 241 :target: https://conan-ci.jfrog.info/job/ConanTestSuite/job/develop 242 243 .. |Master coverage| image:: https://codecov.io/gh/conan-io/conan/branch/master/graph/badge.svg 244 :target: https://codecov.io/gh/conan-io/conan/branch/master 245 246 .. |Develop coverage| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graph/badge.svg 247 :target: https://codecov.io/gh/conan-io/conan/branch/develop 248 249 .. |Coverage graph| image:: https://codecov.io/gh/conan-io/conan/branch/develop/graphs/tree.svg 250 :height: 50px 251 :width: 50 px 252 :alt: Conan develop coverage 253 254 .. _`pip docs`: https://pip.pypa.io/en/stable/installing/ 255 256 .. 
_`brew homepage`: http://brew.sh/ 257 [end of README.rst] [start of conans/client/installer.py] 1 import os 2 import platform 3 import shutil 4 import time 5 6 from conans.client import tools 7 from conans.client.file_copier import report_copied_files 8 from conans.client.generators import TXTGenerator, write_generators 9 from conans.client.graph.graph import BINARY_BUILD, BINARY_CACHE, BINARY_DOWNLOAD, BINARY_MISSING, \ 10 BINARY_SKIP, BINARY_UPDATE, BINARY_EDITABLE 11 from conans.client.importer import remove_imports 12 from conans.client.output import ScopedOutput 13 from conans.client.packager import create_package 14 from conans.client.recorder.action_recorder import INSTALL_ERROR_BUILDING, INSTALL_ERROR_MISSING, \ 15 INSTALL_ERROR_MISSING_BUILD_FOLDER 16 from conans.client.source import complete_recipe_sources, config_source 17 from conans.client.tools.env import pythonpath 18 from conans.errors import (ConanException, ConanExceptionInUserConanfileMethod, 19 conanfile_exception_formatter) 20 from conans.model.build_info import CppInfo 21 from conans.model.conan_file import get_env_context_manager 22 from conans.model.editable_cpp_info import EditableCppInfo 23 from conans.model.env_info import EnvInfo 24 from conans.model.manifest import FileTreeManifest 25 from conans.model.ref import PackageReference 26 from conans.model.user_info import UserInfo 27 from conans.paths import BUILD_INFO, CONANINFO, RUN_LOG_NAME 28 from conans.util.env_reader import get_env 29 from conans.util.files import (clean_dirty, is_dirty, make_read_only, mkdir, rmdir, save, set_dirty) 30 from conans.util.log import logger 31 from conans.util.tracer import log_package_built, \ 32 log_package_got_from_local_cache 33 34 35 def build_id(conan_file): 36 if hasattr(conan_file, "build_id"): 37 # construct new ConanInfo 38 build_id_info = conan_file.info.copy() 39 conan_file.info_build = build_id_info 40 # effectively call the user function to change the package values 41 with conanfile_exception_formatter(str(conan_file), "build_id"): 42 conan_file.build_id() 43 # compute modified ID 44 return build_id_info.package_id() 45 return None 46 47 48 class _ConanPackageBuilder(object): 49 """Builds and packages a single conan_file binary package""" 50 51 def __init__(self, conan_file, pref, cache, output, hook_manager): 52 self._cache = cache 53 self._conan_file = conan_file 54 self._out = output 55 self._pref = pref 56 self._ref = self._pref.ref 57 self._skip_build = False # If build_id() 58 self._hook_manager = hook_manager 59 60 new_id = build_id(self._conan_file) 61 self.build_pref = PackageReference(self._ref, new_id) if new_id else pref 62 self.build_folder = self._cache.build(self.build_pref, self._conan_file.short_paths) 63 self.package_folder = self._cache.package(self._pref, self._conan_file.short_paths) 64 self.source_folder = self._cache.source(self._ref, self._conan_file.short_paths) 65 66 def prepare_build(self): 67 if self.build_pref != self._pref and \ 68 os.path.exists(self.build_folder) and hasattr(self._conan_file, "build_id"): 69 self._skip_build = True 70 return 71 72 # build_id is not caching the build folder, so actually rebuild the package 73 export_folder = self._cache.export(self._ref) 74 export_source_folder = self._cache.export_sources(self._ref, 75 self._conan_file.short_paths) 76 conanfile_path = self._cache.conanfile(self._ref) 77 78 try: 79 rmdir(self.build_folder) 80 rmdir(self.package_folder) 81 except OSError as e: 82 raise ConanException("%s\n\nCouldn't remove folder, might be busy or 
open\n" 83 "Close any app using it, and retry" % str(e)) 84 85 self._out.info('Building your package in %s' % self.build_folder) 86 config_source(export_folder, export_source_folder, self.source_folder, 87 self._conan_file, self._out, conanfile_path, self._ref, 88 self._hook_manager, self._cache) 89 self._out.info('Copying sources to build folder') 90 91 if getattr(self._conan_file, 'no_copy_source', False): 92 mkdir(self.build_folder) 93 self._conan_file.source_folder = self.source_folder 94 else: 95 if platform.system() == "Windows" and os.getenv("CONAN_USER_HOME_SHORT") != "None": 96 from conans.util.windows import ignore_long_path_files 97 ignore = ignore_long_path_files(self.source_folder, self.build_folder, self._out) 98 else: 99 ignore = None 100 101 shutil.copytree(self.source_folder, self.build_folder, symlinks=True, ignore=ignore) 102 logger.debug("BUILD: Copied to %s", self.build_folder) 103 logger.debug("BUILD: Files copied %s", ",".join(os.listdir(self.build_folder))) 104 self._conan_file.source_folder = self.build_folder 105 106 def build(self): 107 """Calls the conanfile's build method""" 108 if self._skip_build: 109 return 110 with get_env_context_manager(self._conan_file): 111 self._build_package() 112 113 def package(self): 114 """Generate the info txt files and calls the conanfile package method. 115 """ 116 117 # FIXME: Is weak to assign here the recipe_hash 118 manifest = self._cache.package_layout(self._ref).load_manifest() 119 self._conan_file.info.recipe_hash = manifest.summary_hash 120 121 # Creating ***info.txt files 122 save(os.path.join(self.build_folder, CONANINFO), self._conan_file.info.dumps()) 123 self._out.info("Generated %s" % CONANINFO) 124 save(os.path.join(self.build_folder, BUILD_INFO), TXTGenerator(self._conan_file).content) 125 self._out.info("Generated %s" % BUILD_INFO) 126 127 os.chdir(self.build_folder) 128 129 if getattr(self._conan_file, 'no_copy_source', False): 130 source_folder = self.source_folder 131 else: 132 source_folder = self.build_folder 133 with get_env_context_manager(self._conan_file): 134 install_folder = self.build_folder # While installing, the infos goes to build folder 135 pkg_id = self._conan_file.info.package_id() 136 conanfile_path = self._cache.conanfile(self._ref) 137 138 create_package(self._conan_file, pkg_id, source_folder, self.build_folder, 139 self.package_folder, install_folder, self._hook_manager, 140 conanfile_path, self._ref) 141 142 package_hash = self._cache.package_layout(self._pref.ref, 143 self._conan_file.short_paths).package_summary_hash(self._pref) 144 package_id = self._pref.id 145 146 with self._cache.package_layout(self._ref).update_metadata() as metadata: 147 metadata.packages[package_id].revision = package_hash 148 metadata.packages[package_id].recipe_revision = self._ref.revision 149 150 if get_env("CONAN_READ_ONLY_CACHE", False): 151 make_read_only(self.package_folder) 152 153 def _build_package(self): 154 """ calls the imports + conanfile.build() method 155 """ 156 os.chdir(self.build_folder) 157 self._conan_file.build_folder = self.build_folder 158 self._conan_file.package_folder = self.package_folder 159 # In local cache, install folder always is build_folder 160 self._conan_file.install_folder = self.build_folder 161 162 # Read generators from conanfile and generate the needed files 163 logger.info("GENERATORS: Writing generators") 164 write_generators(self._conan_file, self.build_folder, self._out) 165 166 # Build step might need DLLs, binaries as protoc to generate source files 167 # So 
execute imports() before build, storing the list of copied_files 168 from conans.client.importer import run_imports 169 copied_files = run_imports(self._conan_file, self.build_folder) 170 171 try: 172 # This is necessary because it is different for user projects 173 # than for packages 174 self._hook_manager.execute("pre_build", conanfile=self._conan_file, 175 reference=self._ref, 176 package_id=self._pref.id) 177 logger.debug("Call conanfile.build() with files in build folder: %s", 178 os.listdir(self.build_folder)) 179 self._out.highlight("Calling build()") 180 with conanfile_exception_formatter(str(self._conan_file), "build"): 181 self._conan_file.build() 182 183 self._out.success("Package '%s' built" % self._conan_file.info.package_id()) 184 self._out.info("Build folder %s" % self.build_folder) 185 self._hook_manager.execute("post_build", conanfile=self._conan_file, 186 reference=self._ref, 187 package_id=self._pref.id) 188 except Exception as exc: 189 self._out.writeln("") 190 self._out.error("Package '%s' build failed" % self._conan_file.info.package_id()) 191 self._out.warn("Build folder %s" % self.build_folder) 192 if isinstance(exc, ConanExceptionInUserConanfileMethod): 193 raise exc 194 raise ConanException(exc) 195 finally: 196 # Now remove all files that were imported with imports() 197 remove_imports(self._conan_file, copied_files, self._out) 198 199 200 def _handle_system_requirements(conan_file, pref, cache, out): 201 """ check first the system_reqs/system_requirements.txt existence, if not existing 202 check package/sha1/ 203 204 Used after remote package retrieving and before package building 205 """ 206 if "system_requirements" not in type(conan_file).__dict__: 207 return 208 209 system_reqs_path = cache.system_reqs(pref.ref) 210 system_reqs_package_path = cache.system_reqs_package(pref) 211 if os.path.exists(system_reqs_path) or os.path.exists(system_reqs_package_path): 212 return 213 214 ret = call_system_requirements(conan_file, out) 215 216 try: 217 ret = str(ret or "") 218 except Exception: 219 out.warn("System requirements didn't return a string") 220 ret = "" 221 if getattr(conan_file, "global_system_requirements", None): 222 save(system_reqs_path, ret) 223 else: 224 save(system_reqs_package_path, ret) 225 226 227 def call_system_requirements(conanfile, output): 228 try: 229 return conanfile.system_requirements() 230 except Exception as e: 231 output.error("while executing system_requirements(): %s" % str(e)) 232 raise ConanException("Error in system requirements") 233 234 235 def raise_package_not_found_error(conan_file, ref, package_id, dependencies, out, recorder): 236 settings_text = ", ".join(conan_file.info.full_settings.dumps().splitlines()) 237 options_text = ", ".join(conan_file.info.full_options.dumps().splitlines()) 238 dependencies_text = ', '.join(dependencies) 239 240 msg = '''Can't find a '%s' package for the specified settings, options and dependencies: 241 - Settings: %s 242 - Options: %s 243 - Dependencies: %s 244 - Package ID: %s 245 ''' % (ref, settings_text, options_text, dependencies_text, package_id) 246 out.warn(msg) 247 recorder.package_install_error(PackageReference(ref, package_id), INSTALL_ERROR_MISSING, msg) 248 raise ConanException('''Missing prebuilt package for '%s' 249 Try to build it from sources with "--build %s" 250 Or read "http://docs.conan.io/en/latest/faq/troubleshooting.html#error-missing-prebuilt-package" 251 ''' % (ref, ref.name)) 252 253 254 class BinaryInstaller(object): 255 """ main responsible of retrieving binary 
packages or building them from source 256 locally in case they are not found in remotes 257 """ 258 def __init__(self, cache, output, remote_manager, recorder, workspace, hook_manager): 259 self._cache = cache 260 self._out = output 261 self._remote_manager = remote_manager 262 self._registry = cache.registry 263 self._recorder = recorder 264 self._workspace = workspace 265 self._hook_manager = hook_manager 266 self._editable_cpp_info = self._load_editables_cpp_info() 267 268 def install(self, deps_graph, keep_build=False, graph_info=None): 269 # order by levels and separate the root node (ref=None) from the rest 270 nodes_by_level = deps_graph.by_levels() 271 root_level = nodes_by_level.pop() 272 root_node = root_level[0] 273 # Get the nodes in order and if we have to build them 274 self._build(nodes_by_level, deps_graph, keep_build, root_node, graph_info) 275 276 def _build(self, nodes_by_level, deps_graph, keep_build, root_node, graph_info): 277 inverse_levels = {n: i for i, level in enumerate(deps_graph.inverse_levels()) for n in level} 278 279 processed_package_refs = set() 280 for level in nodes_by_level: 281 for node in level: 282 ref, conan_file = node.ref, node.conanfile 283 output = conan_file.output 284 package_id = conan_file.info.package_id() 285 if node.binary == BINARY_MISSING: 286 dependencies = [str(dep.dst) for dep in node.dependencies] 287 raise_package_not_found_error(conan_file, ref, package_id, dependencies, 288 out=output, recorder=self._recorder) 289 290 if node.binary == BINARY_EDITABLE: 291 self._handle_node_editable(node) 292 continue 293 294 workspace_package = self._workspace[node.ref] if self._workspace else None 295 if workspace_package: 296 self._handle_node_workspace(node, workspace_package, inverse_levels, deps_graph, 297 graph_info) 298 else: 299 self._propagate_info(node, inverse_levels, deps_graph) 300 if node.binary == BINARY_SKIP: # Privates not necessary 301 continue 302 pref = PackageReference(ref, package_id) 303 _handle_system_requirements(conan_file, pref, self._cache, output) 304 self._handle_node_cache(node, pref, keep_build, processed_package_refs) 305 306 # Finally, propagate information to root node (ref=None) 307 self._propagate_info(root_node, inverse_levels, deps_graph) 308 309 def _node_concurrently_installed(self, node, package_folder): 310 if node.binary == BINARY_DOWNLOAD and os.path.exists(package_folder): 311 return True 312 elif node.binary == BINARY_UPDATE: 313 read_manifest = FileTreeManifest.load(package_folder) 314 if node.update_manifest == read_manifest: 315 return True 316 317 def _load_editables_cpp_info(self): 318 editables_path = self._cache.default_editable_path 319 if os.path.exists(editables_path): 320 return EditableCppInfo.load(editables_path, require_namespace=True) 321 return None 322 323 def _handle_node_editable(self, node): 324 # Get source of information 325 package_layout = self._cache.package_layout(node.ref) 326 base_path = package_layout.conan() 327 self._call_package_info(node.conanfile, package_folder=base_path) 328 329 # Try with package-provided file 330 package_layout_file = package_layout.editable_package_layout_file() 331 if os.path.exists(package_layout_file): 332 editable_cpp_info = EditableCppInfo.load(package_layout_file, 333 require_namespace=False) 334 editable_cpp_info.apply_to(node.conanfile.name, 335 node.conanfile.cpp_info, 336 base_path=base_path, 337 settings=node.conanfile.settings, 338 options=node.conanfile.options) 339 340 # Try with the profile-like file 341 elif 
self._editable_cpp_info and self._editable_cpp_info.has_info_for(node.conanfile.name): 342 self._editable_cpp_info.apply_to(node.conanfile.name, 343 node.conanfile.cpp_info, 344 base_path=base_path, 345 settings=node.conanfile.settings, 346 options=node.conanfile.options) 347 348 # Use `package_info()` data 349 else: 350 pass # It will use `package_info()` data relative to path used as 'package_folder' 351 352 def _handle_node_cache(self, node, pref, keep_build, processed_package_references): 353 conan_file = node.conanfile 354 output = conan_file.output 355 package_folder = self._cache.package(pref, conan_file.short_paths) 356 357 with self._cache.package_lock(pref): 358 if pref not in processed_package_references: 359 processed_package_references.add(pref) 360 set_dirty(package_folder) 361 if node.binary == BINARY_BUILD: 362 self._build_package(node, pref, output, keep_build) 363 elif node.binary in (BINARY_UPDATE, BINARY_DOWNLOAD): 364 if not self._node_concurrently_installed(node, package_folder): 365 new_ref = self._remote_manager.get_package(pref, package_folder, 366 node.binary_remote, output, 367 self._recorder) 368 self._registry.prefs.set(new_ref, node.binary_remote.name) 369 else: 370 output.success('Download skipped. Probable concurrent download') 371 log_package_got_from_local_cache(pref) 372 self._recorder.package_fetched_from_cache(pref) 373 elif node.binary == BINARY_CACHE: 374 output.success('Already installed!') 375 log_package_got_from_local_cache(pref) 376 self._recorder.package_fetched_from_cache(pref) 377 clean_dirty(package_folder) 378 # Call the info method 379 self._call_package_info(conan_file, package_folder) 380 self._recorder.package_cpp_info(pref, conan_file.cpp_info) 381 382 def _handle_node_workspace(self, node, workspace_package, inverse_levels, deps_graph, 383 graph_info): 384 conan_file = node.conanfile 385 output = ScopedOutput("Workspace %s" % conan_file.display_name, self._out) 386 include_dirs = workspace_package.includedirs 387 lib_dirs = workspace_package.libdirs 388 self._call_package_info(conan_file, workspace_package.package_folder) 389 if include_dirs: 390 conan_file.cpp_info.includedirs = include_dirs 391 if lib_dirs: 392 conan_file.cpp_info.libdirs = lib_dirs 393 # Make sure the folders exists, otherwise they will be filtered out 394 lib_paths = [os.path.join(conan_file.cpp_info.rootpath, p) 395 if not os.path.isabs(p) else p for p in lib_dirs] 396 for p in lib_paths: 397 mkdir(p) 398 399 self._propagate_info(node, inverse_levels, deps_graph) 400 401 build_folder = workspace_package.build_folder 402 write_generators(conan_file, build_folder, output) 403 save(os.path.join(build_folder, CONANINFO), conan_file.info.dumps()) 404 output.info("Generated %s" % CONANINFO) 405 graph_info.save(build_folder) 406 output.info("Generated graphinfo") 407 save(os.path.join(build_folder, BUILD_INFO), TXTGenerator(conan_file).content) 408 output.info("Generated %s" % BUILD_INFO) 409 # Build step might need DLLs, binaries as protoc to generate source files 410 # So execute imports() before build, storing the list of copied_files 411 from conans.client.importer import run_imports 412 copied_files = run_imports(conan_file, build_folder) 413 report_copied_files(copied_files, output) 414 415 def _build_package(self, node, pref, output, keep_build): 416 ref, conan_file = node.ref, node.conanfile 417 418 t1 = time.time() 419 # It is necessary to complete the sources of python requires, which might be used 420 for python_require in conan_file.python_requires: 421 
complete_recipe_sources(self._remote_manager, self._cache, 422 conan_file, python_require.ref) 423 424 builder = _ConanPackageBuilder(conan_file, pref, self._cache, output, self._hook_manager) 425 426 if is_dirty(builder.build_folder): 427 output.warn("Build folder is dirty, removing it: %s" % builder.build_folder) 428 rmdir(builder.build_folder) 429 430 skip_build = conan_file.develop and keep_build 431 if skip_build: 432 output.info("Won't be built as specified by --keep-build") 433 if skip_build: 434 if not os.path.exists(builder.build_folder): 435 msg = "--keep-build specified, but build folder not found" 436 self._recorder.package_install_error(pref, 437 INSTALL_ERROR_MISSING_BUILD_FOLDER, 438 msg, remote_name=None) 439 raise ConanException(msg) 440 else: 441 with self._cache.conanfile_write_lock(ref): 442 set_dirty(builder.build_folder) 443 complete_recipe_sources(self._remote_manager, self._cache, 444 conan_file, ref) 445 builder.prepare_build() 446 447 with self._cache.conanfile_read_lock(ref): 448 try: 449 if not skip_build: 450 builder.build() 451 clean_dirty(builder.build_folder) 452 builder.package() 453 except ConanException as exc: 454 self._recorder.package_install_error(pref, INSTALL_ERROR_BUILDING, 455 str(exc), remote_name=None) 456 raise exc 457 else: 458 # Log build 459 self._log_built_package(builder.build_folder, pref.copy_clear_rev(), 460 time.time() - t1) 461 # FIXME: Conan 2.0 Clear the registry entry (package ref) 462 463 def _log_built_package(self, build_folder, pref, duration): 464 log_file = os.path.join(build_folder, RUN_LOG_NAME) 465 log_file = log_file if os.path.exists(log_file) else None 466 log_package_built(pref, duration, log_file) 467 self._recorder.package_built(pref) 468 469 @staticmethod 470 def _propagate_info(node, inverse_levels, deps_graph): 471 # Get deps_cpp_info from upstream nodes 472 closure = deps_graph.full_closure(node) 473 node_order = [n for n in closure.values() if n.binary != BINARY_SKIP] 474 # List sort is stable, will keep the original order of the closure, but prioritize levels 475 node_order.sort(key=lambda n: inverse_levels[n]) 476 477 conan_file = node.conanfile 478 for n in node_order: 479 if n.build_require: 480 conan_file.output.info("Applying build-requirement: %s" % str(n.ref)) 481 conan_file.deps_cpp_info.update(n.conanfile.cpp_info, n.ref.name) 482 conan_file.deps_env_info.update(n.conanfile.env_info, n.ref.name) 483 conan_file.deps_user_info[n.ref.name] = n.conanfile.user_info 484 485 # Update the info but filtering the package values that not apply to the subtree 486 # of this current node and its dependencies. 
487 subtree_libnames = [node.ref.name for node in node_order] 488 for package_name, env_vars in conan_file._conan_env_values.data.items(): 489 for name, value in env_vars.items(): 490 if not package_name or package_name in subtree_libnames or \ 491 package_name == conan_file.name: 492 conan_file.info.env_values.add(name, value, package_name) 493 494 @staticmethod 495 def _call_package_info(conanfile, package_folder): 496 conanfile.cpp_info = CppInfo(package_folder) 497 conanfile.cpp_info.version = conanfile.version 498 conanfile.cpp_info.description = conanfile.description 499 conanfile.env_info = EnvInfo() 500 conanfile.user_info = UserInfo() 501 502 # Get deps_cpp_info from upstream nodes 503 public_deps = [name for name, req in conanfile.requires.items() if not req.private] 504 conanfile.cpp_info.public_deps = public_deps 505 # Once the node is build, execute package info, so it has access to the 506 # package folder and artifacts 507 with pythonpath(conanfile): # Minimal pythonpath, not the whole context, make it 50% slower 508 with tools.chdir(package_folder): 509 with conanfile_exception_formatter(str(conanfile), "package_info"): 510 conanfile.package_folder = package_folder 511 conanfile.source_folder = None 512 conanfile.build_folder = None 513 conanfile.install_folder = None 514 conanfile.package_info() 515 [end of conans/client/installer.py] [start of conans/client/recorder/action_recorder.py] 1 2 # FIXME: The functions from the tracer.py module should be called here, I removed from there some 3 # of them because it has to be called in the remote manager, not in the proxy, where we have info 4 # about the downloaded files prior to unzip them 5 6 from collections import OrderedDict, defaultdict, namedtuple 7 from datetime import datetime 8 9 # Install actions 10 from conans.model.ref import ConanFileReference, PackageReference 11 12 INSTALL_CACHE = 0 13 INSTALL_DOWNLOADED = 1 14 INSTALL_BUILT = 2 15 INSTALL_EXPORTED = 3 16 INSTALL_ERROR = -1 17 18 # Actions errors 19 INSTALL_ERROR_MISSING = "missing" 20 INSTALL_ERROR_NETWORK = "network" 21 INSTALL_ERROR_MISSING_BUILD_FOLDER = "missing_build_folder" 22 INSTALL_ERROR_BUILDING = "building" 23 24 25 class Action(namedtuple("Action", "type, doc, time")): 26 27 def __new__(cls, the_type, doc=None): 28 doc = doc or {} 29 the_time = datetime.utcnow() 30 return super(cls, Action).__new__(cls, the_type, doc, the_time) 31 32 33 class ActionRecorder(object): 34 35 def __init__(self): 36 self.error = False 37 self._inst_recipes_actions = OrderedDict() 38 self._inst_packages_actions = OrderedDict() 39 self._inst_recipes_develop = set() # Recipes being created (to set dependency=False) 40 self._inst_packages_info = defaultdict(dict) 41 42 # ###### INSTALL METHODS ############ 43 def add_recipe_being_developed(self, ref): 44 assert(isinstance(ref, ConanFileReference)) 45 self._inst_recipes_develop.add(ref) 46 47 def _add_recipe_action(self, ref, action): 48 assert(isinstance(ref, ConanFileReference)) 49 ref = ref.copy_clear_rev() 50 if ref not in self._inst_recipes_actions: 51 self._inst_recipes_actions[ref] = [] 52 self._inst_recipes_actions[ref].append(action) 53 54 def _add_package_action(self, pref, action): 55 assert(isinstance(pref, PackageReference)) 56 pref = pref.copy_clear_rev() 57 if pref not in self._inst_packages_actions: 58 self._inst_packages_actions[pref] = [] 59 self._inst_packages_actions[pref].append(action) 60 61 # RECIPE METHODS 62 def recipe_exported(self, ref): 63 self._add_recipe_action(ref, Action(INSTALL_EXPORTED)) 64 65 
def recipe_fetched_from_cache(self, ref): 66 self._add_recipe_action(ref, Action(INSTALL_CACHE)) 67 68 def recipe_downloaded(self, ref, remote_name): 69 self._add_recipe_action(ref, Action(INSTALL_DOWNLOADED, {"remote": remote_name})) 70 71 def recipe_install_error(self, ref, error_type, description, remote_name): 72 doc = {"type": error_type, "description": description, "remote": remote_name} 73 self._add_recipe_action(ref, Action(INSTALL_ERROR, doc)) 74 75 # PACKAGE METHODS 76 def package_exported(self, pref): 77 self._add_package_action(pref, Action(INSTALL_EXPORTED)) 78 79 def package_built(self, pref): 80 self._add_package_action(pref, Action(INSTALL_BUILT)) 81 82 def package_fetched_from_cache(self, pref): 83 self._add_package_action(pref, Action(INSTALL_CACHE)) 84 85 def package_downloaded(self, pref, remote_name): 86 self._add_package_action(pref, Action(INSTALL_DOWNLOADED, {"remote": remote_name})) 87 88 def package_install_error(self, pref, error_type, description, remote_name=None): 89 assert(isinstance(pref, PackageReference)) 90 pref = pref.copy_clear_rev() 91 if pref not in self._inst_packages_actions: 92 self._inst_packages_actions[pref] = [] 93 doc = {"type": error_type, "description": description, "remote": remote_name} 94 self._inst_packages_actions[pref].append(Action(INSTALL_ERROR, doc)) 95 96 def package_cpp_info(self, pref, cpp_info): 97 assert isinstance(pref, PackageReference) 98 pref = pref.copy_clear_rev() 99 # assert isinstance(cpp_info, CppInfo) 100 doc = {} 101 for it, value in vars(cpp_info).items(): 102 if it.startswith("_") or not value: 103 continue 104 doc[it] = value 105 self._inst_packages_info[pref]['cpp_info'] = doc 106 107 @property 108 def install_errored(self): 109 all_values = list(self._inst_recipes_actions.values()) + list(self._inst_packages_actions.values()) 110 for acts in all_values: 111 for act in acts: 112 if act.type == INSTALL_ERROR: 113 return True 114 return False 115 116 def _get_installed_packages(self, ref): 117 assert(isinstance(ref, ConanFileReference)) 118 ret = [] 119 for _pref, _package_actions in self._inst_packages_actions.items(): 120 # Could be a download and then an access to cache, we want the first one 121 _package_action = _package_actions[0] 122 if _pref.ref == ref: 123 ret.append((_pref, _package_action)) 124 return ret 125 126 def in_development_recipe(self, ref): 127 return ref in self._inst_recipes_develop 128 129 def get_info(self): 130 return self.get_install_info() 131 132 def get_install_info(self): 133 ret = {"error": self.install_errored or self.error, 134 "installed": []} 135 136 def get_doc_for_ref(the_ref, the_actions): 137 errors = [action.doc for action in the_actions if action.type == INSTALL_ERROR] 138 error = None if not errors else errors[0] 139 remotes = [action.doc.get("remote") for action in the_actions 140 if action.doc.get("remote", None) is not None] 141 remote = None if not remotes else remotes[0] 142 action_types = [action.type for action in the_actions] 143 time = the_actions[0].time 144 145 doc = {"id": str(the_ref), 146 "downloaded": INSTALL_DOWNLOADED in action_types, 147 "exported": INSTALL_EXPORTED in action_types, 148 "error": error, 149 "remote": remote, 150 "time": time} 151 if isinstance(the_ref, ConanFileReference): 152 doc["dependency"] = not self.in_development_recipe(the_ref.copy_clear_rev()) 153 doc["name"] = the_ref.name 154 doc["version"] = the_ref.version 155 doc["user"] = the_ref.user 156 doc["channel"] = the_ref.channel 157 if the_ref.revision: 158 doc["revision"] = 
the_ref.revision 159 else: 160 doc["built"] = INSTALL_BUILT in action_types 161 162 if doc["remote"] is None and error: 163 doc["remote"] = error.get("remote", None) 164 return doc 165 166 for ref, actions in self._inst_recipes_actions.items(): 167 tmp = {"recipe": get_doc_for_ref(ref, actions), 168 "packages": []} 169 170 packages = self._get_installed_packages(ref) 171 for pref, p_action in packages: 172 p_doc = get_doc_for_ref(pref.id, [p_action]) 173 package_data = self._inst_packages_info.get(pref, {}) 174 p_doc.update(package_data) 175 tmp["packages"].append(p_doc) 176 177 ret["installed"].append(tmp) 178 179 return ret 180 [end of conans/client/recorder/action_recorder.py] [start of conans/model/conan_file.py] 1 import os 2 from contextlib import contextmanager 3 4 from conans.client import tools 5 from conans.client.output import Color, ScopedOutput 6 from conans.client.tools.env import environment_append, no_op, pythonpath 7 from conans.client.tools.oss import OSInfo 8 from conans.errors import ConanException 9 from conans.model.build_info import DepsCppInfo 10 from conans.model.env_info import DepsEnvInfo 11 from conans.model.options import Options, OptionsValues, PackageOptions 12 from conans.model.requires import Requirements 13 from conans.model.user_info import DepsUserInfo 14 from conans.paths import RUN_LOG_NAME 15 16 17 def create_options(conanfile): 18 try: 19 package_options = PackageOptions(getattr(conanfile, "options", None)) 20 options = Options(package_options) 21 22 default_options = getattr(conanfile, "default_options", None) 23 if default_options: 24 if isinstance(default_options, (list, tuple, dict)): 25 default_values = OptionsValues(default_options) 26 elif isinstance(default_options, str): 27 default_values = OptionsValues.loads(default_options) 28 else: 29 raise ConanException("Please define your default_options as list, " 30 "multiline string or dictionary") 31 options.values = default_values 32 return options 33 except Exception as e: 34 raise ConanException("Error while initializing options. %s" % str(e)) 35 36 37 def create_requirements(conanfile): 38 try: 39 # Actual requirements of this package 40 if not hasattr(conanfile, "requires"): 41 return Requirements() 42 else: 43 if not conanfile.requires: 44 return Requirements() 45 if isinstance(conanfile.requires, tuple): 46 return Requirements(*conanfile.requires) 47 else: 48 return Requirements(conanfile.requires, ) 49 except Exception as e: 50 raise ConanException("Error while initializing requirements. %s" % str(e)) 51 52 53 def create_settings(conanfile, settings): 54 try: 55 defined_settings = getattr(conanfile, "settings", None) 56 if isinstance(defined_settings, str): 57 defined_settings = [defined_settings] 58 current = defined_settings or {} 59 settings.constraint(current) 60 return settings 61 except Exception as e: 62 raise ConanException("Error while initializing settings. 
%s" % str(e)) 63 64 65 @contextmanager 66 def _env_and_python(conanfile): 67 with environment_append(conanfile.env): 68 with pythonpath(conanfile): 69 yield 70 71 72 def get_env_context_manager(conanfile, without_python=False): 73 if not conanfile.apply_env: 74 return no_op() 75 if without_python: 76 return environment_append(conanfile.env) 77 return _env_and_python(conanfile) 78 79 80 class ConanFile(object): 81 """ The base class for all package recipes 82 """ 83 84 name = None 85 version = None # Any str, can be "1.1" or whatever 86 url = None # The URL where this File is located, as github, to collaborate in package 87 # The license of the PACKAGE, just a shortcut, does not replace or 88 # change the actual license of the source code 89 license = None 90 author = None # Main maintainer/responsible for the package, any format 91 description = None 92 topics = None 93 homepage = None 94 build_policy = None 95 short_paths = False 96 apply_env = True # Apply environment variables from requires deps_env_info and profiles 97 exports = None 98 exports_sources = None 99 generators = ["txt"] 100 101 # Vars to control the build steps (build(), package()) 102 should_configure = True 103 should_build = True 104 should_install = True 105 should_test = True 106 in_local_cache = True 107 develop = False 108 109 # Defaulting the reference fields 110 default_channel = None 111 default_user = None 112 113 # Settings and Options 114 settings = None 115 options = None 116 default_options = None 117 118 def __init__(self, output, runner, display_name="", user=None, channel=None): 119 # an output stream (writeln, info, warn error) 120 self.output = ScopedOutput(display_name, output) 121 self.display_name = display_name 122 # something that can run commands, as os.sytem 123 self._conan_runner = runner 124 self._conan_user = user 125 self._conan_channel = channel 126 127 def initialize(self, settings, env): 128 if isinstance(self.generators, str): 129 self.generators = [self.generators] 130 # User defined options 131 self.options = create_options(self) 132 self.requires = create_requirements(self) 133 self.settings = create_settings(self, settings) 134 try: 135 if self.settings.os_build and self.settings.os: 136 self.output.writeln("*"*60, front=Color.BRIGHT_RED) 137 self.output.writeln(" This package defines both 'os' and 'os_build' ", 138 front=Color.BRIGHT_RED) 139 self.output.writeln(" Please use 'os' for libraries and 'os_build'", 140 front=Color.BRIGHT_RED) 141 self.output.writeln(" only for build-requires used for cross-building", 142 front=Color.BRIGHT_RED) 143 self.output.writeln("*"*60, front=Color.BRIGHT_RED) 144 except ConanException: 145 pass 146 147 # needed variables to pack the project 148 self.cpp_info = None # Will be initialized at processing time 149 self.deps_cpp_info = DepsCppInfo() 150 151 # environment variables declared in the package_info 152 self.env_info = None # Will be initialized at processing time 153 self.deps_env_info = DepsEnvInfo() 154 155 # user declared variables 156 self.user_info = None 157 # Keys are the package names, and the values a dict with the vars 158 self.deps_user_info = DepsUserInfo() 159 160 # user specified env variables 161 self._conan_env_values = env.copy() # user specified -e 162 163 @property 164 def env(self): 165 """Apply the self.deps_env_info into a copy of self._conan_env_values (will prioritize the 166 self._conan_env_values, user specified from profiles or -e first, then inherited)""" 167 # Cannot be lazy cached, because it's called in configure 
node, and we still don't have 168 # the deps_env_info objects available 169 tmp_env_values = self._conan_env_values.copy() 170 tmp_env_values.update(self.deps_env_info) 171 172 ret, multiple = tmp_env_values.env_dicts(self.name) 173 ret.update(multiple) 174 return ret 175 176 @property 177 def channel(self): 178 if not self._conan_channel: 179 self._conan_channel = os.getenv("CONAN_CHANNEL") or self.default_channel 180 if not self._conan_channel: 181 raise ConanException("CONAN_CHANNEL environment variable not defined, " 182 "but self.channel is used in conanfile") 183 return self._conan_channel 184 185 @property 186 def user(self): 187 if not self._conan_user: 188 self._conan_user = os.getenv("CONAN_USERNAME") or self.default_user 189 if not self._conan_user: 190 raise ConanException("CONAN_USERNAME environment variable not defined, " 191 "but self.user is used in conanfile") 192 return self._conan_user 193 194 def collect_libs(self, folder=None): 195 self.output.warn("'self.collect_libs' is deprecated, " 196 "use 'tools.collect_libs(self)' instead") 197 return tools.collect_libs(self, folder=folder) 198 199 @property 200 def build_policy_missing(self): 201 return self.build_policy == "missing" 202 203 @property 204 def build_policy_always(self): 205 return self.build_policy == "always" 206 207 def source(self): 208 pass 209 210 def system_requirements(self): 211 """ this method can be overwritten to implement logic for system package 212 managers, as apt-get 213 214 You can define self.global_system_requirements = True, if you want the installation 215 to be for all packages (not depending on settings/options/requirements) 216 """ 217 218 def config_options(self): 219 """ modify options, probably conditioned to some settings. This call is executed 220 before config_settings. E.g. 221 if self.settings.os == "Windows": 222 del self.options.shared # shared/static not supported in win 223 """ 224 225 def configure(self): 226 """ modify settings, probably conditioned to some options. This call is executed 227 after config_options. E.g. 228 if self.options.header_only: 229 self.settings.clear() 230 This is also the place for conditional requirements 231 """ 232 233 def build(self): 234 """ build your project calling the desired build tools as done in the command line. 235 E.g. self.run("cmake --build .") Or use the provided build helpers. E.g. cmake.build() 236 """ 237 self.output.warn("This conanfile has no build step") 238 239 def package(self): 240 """ package the needed files from source and build folders. 241 E.g. 
self.copy("*.h", src="src/includes", dst="includes") 242 """ 243 self.output.warn("This conanfile has no package step") 244 245 def package_info(self): 246 """ define cpp_build_info, flags, etc 247 """ 248 249 def run(self, command, output=True, cwd=None, win_bash=False, subsystem=None, msys_mingw=True, 250 ignore_errors=False, run_environment=False): 251 def _run(): 252 if not win_bash: 253 return self._conan_runner(command, output, os.path.abspath(RUN_LOG_NAME), cwd) 254 # FIXME: run in windows bash is not using output 255 return tools.run_in_windows_bash(self, bashcmd=command, cwd=cwd, subsystem=subsystem, 256 msys_mingw=msys_mingw) 257 if run_environment: 258 with tools.run_environment(self): 259 if OSInfo().is_macos: 260 command = 'DYLD_LIBRARY_PATH="%s" %s' % (os.environ.get('DYLD_LIBRARY_PATH', ''), 261 command) 262 retcode = _run() 263 else: 264 retcode = _run() 265 266 if not ignore_errors and retcode != 0: 267 raise ConanException("Error %d while executing %s" % (retcode, command)) 268 269 return retcode 270 271 def package_id(self): 272 """ modify the conans info, typically to narrow values 273 eg.: conaninfo.package_references = [] 274 """ 275 276 def test(self): 277 """ test the generated executable. 278 E.g. self.run("./example") 279 """ 280 raise ConanException("You need to create a method 'test' in your test/conanfile.py") 281 282 def __repr__(self): 283 return self.display_name 284 [end of conans/model/conan_file.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conan-io/conan
606020b55df9b9523ef04a4af798db2092605630
Hard to debug error ERROR: Expecting value: line 1 column 1 (char 0)

Hi, first of all I want to congratulate you on this great project that is Conan. I performed some `conan copy` commands to rename the owner of several custom packages (some have their own sources, others fetch them from git, and so on; they are all different), but now I am getting this annoying error when I try to run `conan install` on some of the packages:

```
ERROR: Expecting value: line 1 column 1 (char 0)
```

The problem is that it is very hard to know which package is producing the error. I half understand that the cause is that `conan copy` does not copy sources, or something like that, I can't really explain it, but running `conan create` again on the affected package seems to fix it (once I find the package with the error, but I have many and it is hard to find the one, or ones, in error).

It would be nice to add the package name to the error message, to make it easy to find the package that is causing the problem (with many package dependencies it is hard to track down the one in error).
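For context on the error itself: `Expecting value: line 1 column 1 (char 0)` is the message Python's `json` module produces when it is asked to parse a body that is empty or is not JSON at all (for example an HTML error page returned by a proxy or a misconfigured remote). A minimal sketch of the kind of guard the report is asking for, written independently of Conan's actual code (the function name, exception class and message wording are illustrative):

```python
import json


def parse_json_response(body, context):
    """Parse *body* as JSON, re-raising decode errors with *context*
    (e.g. the remote URL or package reference) so the failure is traceable."""
    try:
        return json.loads(body)
    except ValueError as exc:  # json.JSONDecodeError is a ValueError subclass
        raise RuntimeError(
            "Remote did not return valid JSON while processing %s: %s\n"
            "First bytes of response: %r" % (context, exc, body[:80])
        )


# The bare error the report complains about is easy to reproduce:
# json.loads("")        -> "Expecting value: line 1 column 1 (char 0)"
# json.loads("<html>")  -> same message, because '<' does not start a JSON value
```

This mirrors the direction the patch further down takes: check the response's Content-Type and wrap `json.loads`, so that a broken remote produces a readable error instead of the bare decoder message.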
Hi @nesc1, thanks for your kind words :) Do you have some steps that you could repeat to reproduce the issue? Maybe with some specific package from conan center? Something like:

```
conan install zlib/....
conan copy zlib/... otheruser/channel
conan install zlib/..@otheruser/channel
```

Also, a more complete trace, with the output, would help.

Hi @memsharded, yes, the steps are like that, but more accurately something like:

```
conan create zlib/....
conan upload zlib/....
conan remove zlib/....
conan install zlib/....
conan copy zlib/... otheruser/channel
conan install zlib/..@otheruser/channel
```

and this for several packages... I cannot reproduce it exactly because I'm in the middle of migrating packages from one server to another, but are you talking about some log that Conan produces? If so, tell me where it is and I will send it to you.

I have tried that and it seems to be working fine. Yes, please, when you finish doing that, try to reproduce the failure with a specific package, and when it fails, copy the terminal output here too. Many thanks!

I came to this issue after googling the same error message. Although my problem may not have been the same, what I experienced could help someone with a similar problem: I had added an Artifactory instance as a Conan remote, to which I could not connect (an error in the network setup by my company). This remote was higher up in the list of remotes than another, working remote. When doing `conan install somepackage/1.0.0@package/stable`, it failed when searching for the package at the faulty remote, giving me `ERROR: Failed requirement 'somepackage/1.0.0@package/stable' from 'PROJECT'` and `ERROR: Expecting value: line 1 column 1 (char 0)` as error output. Incidentally, I saw that doing `conan install somepackage/1.0.0@package/stable -r my_remote` worked, so removing the faulty remote solved my problem.
2019-01-15T23:02:37Z
<patch> diff --git a/conans/client/rest/rest_client_common.py b/conans/client/rest/rest_client_common.py --- a/conans/client/rest/rest_client_common.py +++ b/conans/client/rest/rest_client_common.py @@ -156,7 +156,16 @@ def get_json(self, url, data=None): response.charset = "utf-8" # To be able to access ret.text (ret.content are bytes) raise get_exception_from_error(response.status_code)(response.text) - result = json.loads(decode_text(response.content)) + content = decode_text(response.content) + content_type = response.headers.get("Content-Type") + if content_type != 'application/json': + raise ConanException("%s\n\nResponse from remote is not json, but '%s'" + % (content, content_type)) + + try: # This can fail, if some proxy returns 200 and an html message + result = json.loads(content) + except Exception: + raise ConanException("Remote responded with broken json: %s" % content) if not isinstance(result, dict): raise ConanException("Unexpected server response %s" % result) return result diff --git a/conans/server/rest/multipart_encoder.py b/conans/server/rest/multipart_encoder.py --- a/conans/server/rest/multipart_encoder.py +++ b/conans/server/rest/multipart_encoder.py @@ -77,7 +77,7 @@ def read_in_chunks(file_object, chunk_size=1024): # yield buf1 # else: yield buf1 - #yield buf2 + # yield buf2 if __name__ == "__main__": diff --git a/conans/server/rest/server.py b/conans/server/rest/server.py --- a/conans/server/rest/server.py +++ b/conans/server/rest/server.py @@ -1,6 +1,5 @@ import bottle -from conans.model.version import Version from conans.server.rest.api_v1 import ApiV1 from conans.server.rest.api_v2 import ApiV2 diff --git a/conans/server/service/mime.py b/conans/server/service/mime.py --- a/conans/server/service/mime.py +++ b/conans/server/service/mime.py @@ -6,4 +6,4 @@ def get_mime_type(filepath): else: mimetype = "auto" - return mimetype \ No newline at end of file + return mimetype </patch>
[]
[]
pandas-dev__pandas-5849
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: duplicate selection with missing values raises Hard to reprod. Select from a duplicate indexed axis in a frame with ix where part of the selection set is missing ``` In [22]: df = DataFrame(np.random.randn(5,5),columns=['A.1','B.1','B.2','B.3','A.2'],index=date_range('20130101',periods=5)) In [23]: df2 = df.rename(columns=lambda x: x.split('.')[0]) In [24]: df2 Out[24]: A B B B A 2013-01-01 -1.029245 -0.782139 0.584956 1.097301 -0.150675 2013-01-02 -0.723246 -0.356150 -0.441952 0.027012 -1.851583 2013-01-03 -1.001412 0.129464 0.093433 0.952615 -1.338390 2013-01-04 0.165987 0.227918 0.557940 -0.102501 -1.194053 2013-01-05 0.249493 -1.102096 -0.977755 -0.529540 0.783277 [5 rows x 5 columns] In [25]: df2.ix[:,['A','B','C']] AssertionError: Number of manager items must equal union of block items # manager items: 6, # tot_items: 14 ``` </issue> <code> [start of README.md] 1 # pandas: powerful Python data analysis toolkit 2 3 ![Travis-CI Build Status](https://travis-ci.org/pydata/pandas.png) 4 5 ## What is it 6 **pandas** is a Python package providing fast, flexible, and expressive data 7 structures designed to make working with "relational" or "labeled" data both 8 easy and intuitive. It aims to be the fundamental high-level building block for 9 doing practical, **real world** data analysis in Python. Additionally, it has 10 the broader goal of becoming **the most powerful and flexible open source data 11 analysis / manipulation tool available in any language**. It is already well on 12 its way toward this goal. 13 14 ## Main Features 15 Here are just a few of the things that pandas does well: 16 17 - Easy handling of [**missing data**][missing-data] (represented as 18 `NaN`) in floating point as well as non-floating point data 19 - Size mutability: columns can be [**inserted and 20 deleted**][insertion-deletion] from DataFrame and higher dimensional 21 objects 22 - Automatic and explicit [**data alignment**][alignment]: objects can 23 be explicitly aligned to a set of labels, or the user can simply 24 ignore the labels and let `Series`, `DataFrame`, etc. automatically 25 align the data for you in computations 26 - Powerful, flexible [**group by**][groupby] functionality to perform 27 split-apply-combine operations on data sets, for both aggregating 28 and transforming data 29 - Make it [**easy to convert**][conversion] ragged, 30 differently-indexed data in other Python and NumPy data structures 31 into DataFrame objects 32 - Intelligent label-based [**slicing**][slicing], [**fancy 33 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 34 large data sets 35 - Intuitive [**merging**][merging] and [**joining**][joining] data 36 sets 37 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 38 data sets 39 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 40 labels per tick) 41 - Robust IO tools for loading data from [**flat files**][flat-files] 42 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 43 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 44 - [**Time series**][timeseries]-specific functionality: date range 45 generation and frequency conversion, moving window statistics, 46 moving window linear regressions, date shifting and lagging, etc. 
47 48 49 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 50 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 51 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 52 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 53 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 54 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 55 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 56 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 57 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 58 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 59 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 60 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 61 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 62 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 63 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 64 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 65 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 66 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 67 68 ## Where to get it 69 The source code is currently hosted on GitHub at: 70 http://github.com/pydata/pandas 71 72 Binary installers for the latest released version are available at the Python 73 package index 74 75 http://pypi.python.org/pypi/pandas/ 76 77 And via `easy_install`: 78 79 ```sh 80 easy_install pandas 81 ``` 82 83 or `pip`: 84 85 ```sh 86 pip install pandas 87 ``` 88 89 ## Dependencies 90 - [NumPy](http://www.numpy.org): 1.6.1 or higher 91 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 92 - [pytz](http://pytz.sourceforge.net) 93 - Needed for time zone support with ``pandas.date_range`` 94 95 ### Highly Recommended Dependencies 96 - [numexpr](http://code.google.com/p/numexpr/) 97 - Needed to accelerate some expression evaluation operations 98 - Required by PyTables 99 - [bottleneck](http://berkeleyanalytics.com/bottleneck) 100 - Needed to accelerate certain numerical operations 101 102 ### Optional dependencies 103 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher. 104 - [SciPy](http://www.scipy.org): miscellaneous statistical functions 105 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage 106 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting 107 - [statsmodels](http://statsmodels.sourceforge.net/) 108 - Needed for parts of `pandas.stats` 109 - For Excel I/O: 110 - [xlrd/xlwt](http://www.python-excel.org/) 111 - Excel reading (xlrd) and writing (xlwt) 112 - [openpyxl](http://packages.python.org/openpyxl/) 113 - openpyxl version 1.6.1 or higher, for writing .xlsx files 114 - xlrd >= 0.9.0 115 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter) 116 - Alternative Excel writer. 
117 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/) 118 - Needed for `pandas.io.gbq` 119 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access. 120 - One of the following combinations of libraries is needed to use the 121 top-level [`pandas.read_html`][read-html-docs] function: 122 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any 123 recent version of [html5lib][html5lib] is okay.) 124 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml] 125 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml] 126 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas] 127 for reasons as to why you should probably **not** take this approach. 128 129 #### Notes about HTML parsing libraries 130 - If you install [BeautifulSoup4][BeautifulSoup4] you must install 131 either [lxml][lxml] or [html5lib][html5lib] or both. 132 `pandas.read_html` will **not** work with *only* `BeautifulSoup4` 133 installed. 134 - You are strongly encouraged to read [HTML reading 135 gotchas][html-gotchas]. It explains issues surrounding the 136 installation and usage of the above three libraries. 137 - You may need to install an older version of 138 [BeautifulSoup4][BeautifulSoup4]: 139 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 140 32-bit Ubuntu/Debian 141 - Additionally, if you're using [Anaconda][Anaconda] you should 142 definitely read [the gotchas about HTML parsing][html-gotchas] 143 libraries 144 - If you're on a system with `apt-get` you can do 145 146 ```sh 147 sudo apt-get build-dep python-lxml 148 ``` 149 150 to get the necessary dependencies for installation of [lxml][lxml]. 151 This will prevent further headaches down the line. 152 153 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib" 154 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4" 155 [lxml]: http://lxml.de 156 [Anaconda]: https://store.continuum.io/cshop/anaconda 157 [NumPy]: http://numpy.scipy.org/ 158 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing 159 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html 160 161 ## Installation from sources 162 To install pandas from source you need Cython in addition to the normal 163 dependencies above. Cython can be installed from pypi: 164 165 ```sh 166 pip install cython 167 ``` 168 169 In the `pandas` directory (same one where you found this file after 170 cloning the git repo), execute: 171 172 ```sh 173 python setup.py install 174 ``` 175 176 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html): 177 178 ```sh 179 python setup.py develop 180 ``` 181 182 Alternatively, you can use `pip` if you want all the dependencies pulled 183 in automatically (the `-e` option is for installing it in [development 184 mode](http://www.pip-installer.org/en/latest/usage.html)): 185 186 ```sh 187 pip install -e . 188 ``` 189 190 On Windows, you will need to install MinGW and execute: 191 192 ```sh 193 python setup.py build --compiler=mingw32 194 python setup.py install 195 ``` 196 197 See http://pandas.pydata.org/ for more information. 198 199 ## License 200 BSD 201 202 ## Documentation 203 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 204 205 The Sphinx documentation should provide a good starting point for learning how 206 to use the library. 
Expect the docs to continue to expand as time goes on. 207 208 ## Background 209 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 210 has been under active development since then. 211 212 ## Discussion and Development 213 Since pandas development is related to a number of other scientific 214 Python projects, questions are welcome on the scipy-user mailing 215 list. Specialized discussions or design issues should take place on 216 the pystatsmodels mailing list / Google group, where 217 ``scikits.statsmodels`` and other libraries will also be discussed: 218 219 http://groups.google.com/group/pystatsmodels 220 [end of README.md] [start of pandas/io/data.py] 1 """ 2 Module contains tools for collecting data from various remote sources 3 4 5 """ 6 import warnings 7 import tempfile 8 import datetime as dt 9 import time 10 11 from collections import defaultdict 12 13 import numpy as np 14 15 from pandas.compat import( 16 StringIO, bytes_to_str, range, lrange, lmap, zip 17 ) 18 import pandas.compat as compat 19 from pandas import Panel, DataFrame, Series, read_csv, concat 20 from pandas.core.common import is_list_like, PandasError 21 from pandas.io.parsers import TextParser 22 from pandas.io.common import urlopen, ZipFile, urlencode 23 from pandas.util.testing import _network_error_classes 24 25 26 class SymbolWarning(UserWarning): 27 pass 28 29 30 class RemoteDataError(PandasError, IOError): 31 pass 32 33 34 def DataReader(name, data_source=None, start=None, end=None, 35 retry_count=3, pause=0.001): 36 """ 37 Imports data from a number of online sources. 38 39 Currently supports Yahoo! Finance, Google Finance, St. Louis FED (FRED) 40 and Kenneth French's data library. 41 42 Parameters 43 ---------- 44 name : str or list of strs 45 the name of the dataset. Some data sources (yahoo, google, fred) will 46 accept a list of names. 47 data_source: str 48 the data source ("yahoo", "google", "fred", or "ff") 49 start : {datetime, None} 50 left boundary for range (defaults to 1/1/2010) 51 end : {datetime, None} 52 right boundary for range (defaults to today) 53 54 Examples 55 ---------- 56 57 # Data from Yahoo! 
Finance 58 gs = DataReader("GS", "yahoo") 59 60 # Data from Google Finance 61 aapl = DataReader("AAPL", "google") 62 63 # Data from FRED 64 vix = DataReader("VIXCLS", "fred") 65 66 # Data from Fama/French 67 ff = DataReader("F-F_Research_Data_Factors", "famafrench") 68 ff = DataReader("F-F_Research_Data_Factors_weekly", "famafrench") 69 ff = DataReader("6_Portfolios_2x3", "famafrench") 70 ff = DataReader("F-F_ST_Reversal_Factor", "famafrench") 71 """ 72 start, end = _sanitize_dates(start, end) 73 74 if data_source == "yahoo": 75 return get_data_yahoo(symbols=name, start=start, end=end, 76 adjust_price=False, chunksize=25, 77 retry_count=retry_count, pause=pause) 78 elif data_source == "google": 79 return get_data_google(symbols=name, start=start, end=end, 80 adjust_price=False, chunksize=25, 81 retry_count=retry_count, pause=pause) 82 elif data_source == "fred": 83 return get_data_fred(name, start, end) 84 elif data_source == "famafrench": 85 return get_data_famafrench(name) 86 87 88 def _sanitize_dates(start, end): 89 from pandas.core.datetools import to_datetime 90 start = to_datetime(start) 91 end = to_datetime(end) 92 if start is None: 93 start = dt.datetime(2010, 1, 1) 94 if end is None: 95 end = dt.datetime.today() 96 return start, end 97 98 99 def _in_chunks(seq, size): 100 """ 101 Return sequence in 'chunks' of size defined by size 102 """ 103 return (seq[pos:pos + size] for pos in range(0, len(seq), size)) 104 105 106 _yahoo_codes = {'symbol': 's', 'last': 'l1', 'change_pct': 'p2', 'PE': 'r', 107 'time': 't1', 'short_ratio': 's7'} 108 109 110 _YAHOO_QUOTE_URL = 'http://finance.yahoo.com/d/quotes.csv?' 111 112 113 def get_quote_yahoo(symbols): 114 """ 115 Get current yahoo quote 116 117 Returns a DataFrame 118 """ 119 if isinstance(symbols, compat.string_types): 120 sym_list = symbols 121 else: 122 sym_list = '+'.join(symbols) 123 124 # for codes see: http://www.gummy-stuff.org/Yahoo-data.htm 125 request = ''.join(compat.itervalues(_yahoo_codes)) # code request string 126 header = list(_yahoo_codes.keys()) 127 128 data = defaultdict(list) 129 130 url_str = _YAHOO_QUOTE_URL + 's=%s&f=%s' % (sym_list, request) 131 132 with urlopen(url_str) as url: 133 lines = url.readlines() 134 135 for line in lines: 136 fields = line.decode('utf-8').strip().split(',') 137 for i, field in enumerate(fields): 138 if field[-2:] == '%"': 139 v = float(field.strip('"%')) 140 elif field[0] == '"': 141 v = field.strip('"') 142 else: 143 try: 144 v = float(field) 145 except ValueError: 146 v = np.nan 147 data[header[i]].append(v) 148 149 idx = data.pop('symbol') 150 return DataFrame(data, index=idx) 151 152 153 def get_quote_google(symbols): 154 raise NotImplementedError("Google Finance doesn't have this functionality") 155 156 157 def _retry_read_url(url, retry_count, pause, name): 158 for _ in range(retry_count): 159 time.sleep(pause) 160 161 # kludge to close the socket ASAP 162 try: 163 with urlopen(url) as resp: 164 lines = resp.read() 165 except _network_error_classes: 166 pass 167 else: 168 rs = read_csv(StringIO(bytes_to_str(lines)), index_col=0, 169 parse_dates=True)[::-1] 170 # Yahoo! Finance sometimes does this awesome thing where they 171 # return 2 rows for the most recent business day 172 if len(rs) > 2 and rs.index[-1] == rs.index[-2]: # pragma: no cover 173 rs = rs[:-1] 174 return rs 175 176 raise IOError("after %d tries, %s did not " 177 "return a 200 for url %r" % (retry_count, name, url)) 178 179 180 _HISTORICAL_YAHOO_URL = 'http://ichart.finance.yahoo.com/table.csv?' 
181 182 183 def _get_hist_yahoo(sym, start, end, retry_count, pause): 184 """ 185 Get historical data for the given name from yahoo. 186 Date format is datetime 187 188 Returns a DataFrame. 189 """ 190 start, end = _sanitize_dates(start, end) 191 url = (_HISTORICAL_YAHOO_URL + 's=%s' % sym + 192 '&a=%s' % (start.month - 1) + 193 '&b=%s' % start.day + 194 '&c=%s' % start.year + 195 '&d=%s' % (end.month - 1) + 196 '&e=%s' % end.day + 197 '&f=%s' % end.year + 198 '&g=d' + 199 '&ignore=.csv') 200 return _retry_read_url(url, retry_count, pause, 'Yahoo!') 201 202 203 _HISTORICAL_GOOGLE_URL = 'http://www.google.com/finance/historical?' 204 205 206 def _get_hist_google(sym, start, end, retry_count, pause): 207 """ 208 Get historical data for the given name from google. 209 Date format is datetime 210 211 Returns a DataFrame. 212 """ 213 start, end = _sanitize_dates(start, end) 214 215 # www.google.com/finance/historical?q=GOOG&startdate=Jun+9%2C+2011&enddate=Jun+8%2C+2013&output=csv 216 url = "%s%s" % (_HISTORICAL_GOOGLE_URL, 217 urlencode({"q": sym, 218 "startdate": start.strftime('%b %d, ' '%Y'), 219 "enddate": end.strftime('%b %d, %Y'), 220 "output": "csv"})) 221 return _retry_read_url(url, retry_count, pause, 'Google') 222 223 224 def _adjust_prices(hist_data, price_list=None): 225 """ 226 Return modifed DataFrame or Panel with adjusted prices based on 227 'Adj Close' price. Adds 'Adj_Ratio' column. 228 """ 229 if price_list is None: 230 price_list = 'Open', 'High', 'Low', 'Close' 231 adj_ratio = hist_data['Adj Close'] / hist_data['Close'] 232 233 data = hist_data.copy() 234 for item in price_list: 235 data[item] = hist_data[item] * adj_ratio 236 data['Adj_Ratio'] = adj_ratio 237 del data['Adj Close'] 238 return data 239 240 241 def _calc_return_index(price_df): 242 """ 243 Return a returns index from a input price df or series. Initial value 244 (typically NaN) is set to 1. 245 """ 246 df = price_df.pct_change().add(1).cumprod() 247 mask = df.ix[1].notnull() & df.ix[0].isnull() 248 df.ix[0][mask] = 1 249 250 # Check for first stock listings after starting date of index in ret_index 251 # If True, find first_valid_index and set previous entry to 1. 252 if (~mask).any(): 253 for sym in mask.index[~mask]: 254 tstamp = df[sym].first_valid_index() 255 t_idx = df.index.get_loc(tstamp) - 1 256 df[sym].ix[t_idx] = 1 257 258 return df 259 260 261 _YAHOO_COMPONENTS_URL = 'http://download.finance.yahoo.com/d/quotes.csv?' 262 263 264 def get_components_yahoo(idx_sym): 265 """ 266 Returns DataFrame containing list of component information for 267 index represented in idx_sym from yahoo. Includes component symbol 268 (ticker), exchange, and name. 
269 270 Parameters 271 ---------- 272 idx_sym : str 273 Stock index symbol 274 Examples: 275 '^DJI' (Dow Jones Industrial Average) 276 '^NYA' (NYSE Composite) 277 '^IXIC' (NASDAQ Composite) 278 279 See: http://finance.yahoo.com/indices for other index symbols 280 281 Returns 282 ------- 283 idx_df : DataFrame 284 """ 285 stats = 'snx' 286 # URL of form: 287 # http://download.finance.yahoo.com/d/quotes.csv?s=@%5EIXIC&f=snxl1d1t1c1ohgv 288 url = _YAHOO_COMPONENTS_URL + 's={0}&f={1}&e=.csv&h={2}' 289 290 idx_mod = idx_sym.replace('^', '@%5E') 291 url_str = url.format(idx_mod, stats, 1) 292 293 idx_df = DataFrame() 294 mask = [True] 295 comp_idx = 1 296 297 # LOOP across component index structure, 298 # break when no new components are found 299 while True in mask: 300 url_str = url.format(idx_mod, stats, comp_idx) 301 with urlopen(url_str) as resp: 302 raw = resp.read() 303 lines = raw.decode('utf-8').strip().strip('"').split('"\r\n"') 304 lines = [line.strip().split('","') for line in lines] 305 306 temp_df = DataFrame(lines, columns=['ticker', 'name', 'exchange']) 307 temp_df = temp_df.drop_duplicates() 308 temp_df = temp_df.set_index('ticker') 309 mask = ~temp_df.index.isin(idx_df.index) 310 311 comp_idx = comp_idx + 50 312 idx_df = idx_df.append(temp_df[mask]) 313 314 return idx_df 315 316 317 def _dl_mult_symbols(symbols, start, end, chunksize, retry_count, pause, 318 method): 319 stocks = {} 320 for sym_group in _in_chunks(symbols, chunksize): 321 for sym in sym_group: 322 try: 323 stocks[sym] = method(sym, start, end, retry_count, pause) 324 except IOError: 325 warnings.warn('Failed to read symbol: {0!r}, replacing with ' 326 'NaN.'.format(sym), SymbolWarning) 327 stocks[sym] = np.nan 328 329 try: 330 return Panel(stocks).swapaxes('items', 'minor') 331 except AttributeError: 332 # cannot construct a panel with just 1D nans indicating no data 333 raise RemoteDataError("No data fetched using " 334 "{0!r}".format(method.__name__)) 335 336 337 _source_functions = {'google': _get_hist_google, 'yahoo': _get_hist_yahoo} 338 339 340 def _get_data_from(symbols, start, end, retry_count, pause, adjust_price, 341 ret_index, chunksize, source, name): 342 if name is not None: 343 warnings.warn("Arg 'name' is deprecated, please use 'symbols' " 344 "instead.", FutureWarning) 345 symbols = name 346 347 src_fn = _source_functions[source] 348 349 # If a single symbol, (e.g., 'GOOG') 350 if isinstance(symbols, (compat.string_types, int)): 351 hist_data = src_fn(symbols, start, end, retry_count, pause) 352 # Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT']) 353 elif isinstance(symbols, DataFrame): 354 hist_data = _dl_mult_symbols(symbols.index, start, end, chunksize, 355 retry_count, pause, src_fn) 356 else: 357 hist_data = _dl_mult_symbols(symbols, start, end, chunksize, 358 retry_count, pause, src_fn) 359 if source.lower() == 'yahoo': 360 if ret_index: 361 hist_data['Ret_Index'] = _calc_return_index(hist_data['Adj Close']) 362 if adjust_price: 363 hist_data = _adjust_prices(hist_data) 364 365 return hist_data 366 367 368 def get_data_yahoo(symbols=None, start=None, end=None, retry_count=3, 369 pause=0.001, adjust_price=False, ret_index=False, 370 chunksize=25, name=None): 371 """ 372 Returns DataFrame/Panel of historical stock prices from symbols, over date 373 range, start to end. To avoid being penalized by Yahoo! Finance servers, 374 pauses between downloading 'chunks' of symbols can be specified. 
375 376 Parameters 377 ---------- 378 symbols : string, array-like object (list, tuple, Series), or DataFrame 379 Single stock symbol (ticker), array-like object of symbols or 380 DataFrame with index containing stock symbols. 381 start : string, (defaults to '1/1/2010') 382 Starting date, timestamp. Parses many different kind of date 383 representations (e.g., 'JAN-01-2010', '1/1/10', 'Jan, 1, 1980') 384 end : string, (defaults to today) 385 Ending date, timestamp. Same format as starting date. 386 retry_count : int, default 3 387 Number of times to retry query request. 388 pause : int, default 0 389 Time, in seconds, to pause between consecutive queries of chunks. If 390 single value given for symbol, represents the pause between retries. 391 adjust_price : bool, default False 392 If True, adjusts all prices in hist_data ('Open', 'High', 'Low', 393 'Close') based on 'Adj Close' price. Adds 'Adj_Ratio' column and drops 394 'Adj Close'. 395 ret_index : bool, default False 396 If True, includes a simple return index 'Ret_Index' in hist_data. 397 chunksize : int, default 25 398 Number of symbols to download consecutively before intiating pause. 399 400 Returns 401 ------- 402 hist_data : DataFrame (str) or Panel (array-like object, DataFrame) 403 """ 404 return _get_data_from(symbols, start, end, retry_count, pause, 405 adjust_price, ret_index, chunksize, 'yahoo', name) 406 407 408 def get_data_google(symbols=None, start=None, end=None, retry_count=3, 409 pause=0.001, adjust_price=False, ret_index=False, 410 chunksize=25, name=None): 411 """ 412 Returns DataFrame/Panel of historical stock prices from symbols, over date 413 range, start to end. To avoid being penalized by Google Finance servers, 414 pauses between downloading 'chunks' of symbols can be specified. 415 416 Parameters 417 ---------- 418 symbols : string, array-like object (list, tuple, Series), or DataFrame 419 Single stock symbol (ticker), array-like object of symbols or 420 DataFrame with index containing stock symbols. 421 start : string, (defaults to '1/1/2010') 422 Starting date, timestamp. Parses many different kind of date 423 representations (e.g., 'JAN-01-2010', '1/1/10', 'Jan, 1, 1980') 424 end : string, (defaults to today) 425 Ending date, timestamp. Same format as starting date. 426 retry_count : int, default 3 427 Number of times to retry query request. 428 pause : int, default 0 429 Time, in seconds, to pause between consecutive queries of chunks. If 430 single value given for symbol, represents the pause between retries. 431 chunksize : int, default 25 432 Number of symbols to download consecutively before intiating pause. 433 434 Returns 435 ------- 436 hist_data : DataFrame (str) or Panel (array-like object, DataFrame) 437 """ 438 return _get_data_from(symbols, start, end, retry_count, pause, 439 adjust_price, ret_index, chunksize, 'google', name) 440 441 442 _FRED_URL = "http://research.stlouisfed.org/fred2/series/" 443 444 445 def get_data_fred(name, start=dt.datetime(2010, 1, 1), 446 end=dt.datetime.today()): 447 """ 448 Get data for the given name from the St. Louis FED (FRED). 449 Date format is datetime 450 451 Returns a DataFrame. 452 453 If multiple names are passed for "series" then the index of the 454 DataFrame is the outer join of the indicies of each series. 
455 """ 456 start, end = _sanitize_dates(start, end) 457 458 if not is_list_like(name): 459 names = [name] 460 else: 461 names = name 462 463 urls = [_FRED_URL + '%s' % n + '/downloaddata/%s' % n + '.csv' for 464 n in names] 465 466 def fetch_data(url, name): 467 with urlopen(url) as resp: 468 data = read_csv(resp, index_col=0, parse_dates=True, 469 header=None, skiprows=1, names=["DATE", name], 470 na_values='.') 471 try: 472 return data.truncate(start, end) 473 except KeyError: 474 if data.ix[3].name[7:12] == 'Error': 475 raise IOError("Failed to get the data. Check that {0!r} is " 476 "a valid FRED series.".format(name)) 477 raise 478 df = concat([fetch_data(url, n) for url, n in zip(urls, names)], 479 axis=1, join='outer') 480 return df 481 482 483 _FAMAFRENCH_URL = 'http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp' 484 485 486 def get_data_famafrench(name): 487 # path of zip files 488 zip_file_path = '{0}/{1}.zip'.format(_FAMAFRENCH_URL, name) 489 490 with urlopen(zip_file_path) as url: 491 raw = url.read() 492 493 with tempfile.TemporaryFile() as tmpf: 494 tmpf.write(raw) 495 496 with ZipFile(tmpf, 'r') as zf: 497 data = zf.open(name + '.txt').readlines() 498 499 line_lengths = np.array(lmap(len, data)) 500 file_edges = np.where(line_lengths == 2)[0] 501 502 datasets = {} 503 edges = zip(file_edges + 1, file_edges[1:]) 504 for i, (left_edge, right_edge) in enumerate(edges): 505 dataset = [d.split() for d in data[left_edge:right_edge]] 506 if len(dataset) > 10: 507 ncol_raw = np.array(lmap(len, dataset)) 508 ncol = np.median(ncol_raw) 509 header_index = np.where(ncol_raw == ncol - 1)[0][-1] 510 header = dataset[header_index] 511 ds_header = dataset[header_index + 1:] 512 # to ensure the header is unique 513 header = ['{0} {1}'.format(j, hj) for j, hj in enumerate(header, 514 start=1)] 515 index = np.array([d[0] for d in ds_header], dtype=int) 516 dataset = np.array([d[1:] for d in ds_header], dtype=float) 517 datasets[i] = DataFrame(dataset, index, columns=header) 518 519 return datasets 520 521 522 # Items needed for options class 523 CUR_MONTH = dt.datetime.now().month 524 CUR_YEAR = dt.datetime.now().year 525 526 527 def _unpack(row, kind): 528 els = row.xpath('.//%s' % kind) 529 return [val.text_content() for val in els] 530 531 532 def _parse_options_data(table): 533 rows = table.xpath('.//tr') 534 header = _unpack(rows[0], kind='th') 535 data = [_unpack(row, kind='td') for row in rows[1:]] 536 # Use ',' as a thousands separator as we're pulling from the US site. 537 return TextParser(data, names=header, na_values=['N/A'], 538 thousands=',').get_chunk() 539 540 541 def _two_char_month(s): 542 return '{0:0>2}'.format(s) 543 544 545 class Options(object): 546 """ 547 This class fetches call/put data for a given stock/expiry month. 548 549 It is instantiated with a string representing the ticker symbol. 
550 551 The class has the following methods: 552 get_options:(month, year) 553 get_calls:(month, year) 554 get_puts: (month, year) 555 get_near_stock_price(opt_frame, above_below) 556 get_forward(months, call, put) 557 558 Examples 559 -------- 560 # Instantiate object with ticker 561 >>> aapl = Options('aapl', 'yahoo') 562 563 # Fetch September 2012 call data 564 >>> calls = aapl.get_calls(9, 2012) 565 566 # Can now access aapl.calls instance variable 567 >>> aapl.calls 568 569 # Fetch September 2012 put data 570 >>> puts = aapl.get_puts(9, 2012) 571 572 # Can now access aapl.puts instance variable 573 >>> aapl.puts 574 575 # cut down the call data to be 3 below and 3 above the stock price. 576 >>> cut_calls = aapl.get_near_stock_price(calls, above_below=3) 577 578 # Fetch call and put data with expiry from now to 8 months out 579 >>> forward_calls, forward_puts = aapl.get_forward_data(8, 580 ... call=True, put=True) 581 582 """ 583 def __init__(self, symbol, data_source=None): 584 """ Instantiates options_data with a ticker saved as symbol """ 585 self.symbol = symbol.upper() 586 if data_source is None: 587 warnings.warn("Options(symbol) is deprecated, use Options(symbol," 588 " data_source) instead", FutureWarning) 589 data_source = "yahoo" 590 if data_source != "yahoo": 591 raise NotImplementedError("currently only yahoo supported") 592 593 def get_options_data(self, month=None, year=None, expiry=None): 594 """ 595 Gets call/put data for the stock with the expiration data in the 596 given month and year 597 598 Parameters 599 ---------- 600 expiry: datetime.date, optional(default=None) 601 The date when options expire (defaults to current month) 602 603 Returns 604 ------- 605 call_data: pandas.DataFrame 606 A DataFrame with call options data. 607 608 put_data: pandas.DataFrame 609 A DataFrame with call options data. 610 611 612 Notes 613 ----- 614 When called, this function will add instance variables named 615 calls and puts. See the following example: 616 617 >>> aapl = Options('aapl', 'yahoo') # Create object 618 >>> aapl.calls # will give an AttributeError 619 >>> aapl.get_options() # Get data and set ivars 620 >>> aapl.calls # Doesn't throw AttributeError 621 622 Also note that aapl.calls and appl.puts will always be the calls 623 and puts for the next expiry. If the user calls this method with 624 a different month or year, the ivar will be named callsMMYY or 625 putsMMYY where MM and YY are, repsectively, two digit 626 representations of the month and year for the expiry of the 627 options. 
628 """ 629 return [f(month, year, expiry) for f in (self.get_put_data, 630 self.get_call_data)] 631 632 _OPTIONS_BASE_URL = 'http://finance.yahoo.com/q/op?s={sym}' 633 634 def _get_option_data(self, month, year, expiry, table_loc, name): 635 year, month = self._try_parse_dates(year, month, expiry) 636 637 url = self._OPTIONS_BASE_URL.format(sym=self.symbol) 638 639 if month and year: # try to get specified month from yahoo finance 640 m1, m2 = _two_char_month(month), month 641 642 # if this month use other url 643 if m1 != CUR_MONTH and m2 != CUR_MONTH: 644 url += '&m={year}-{m1}'.format(year=year, m1=m1) 645 else: 646 url += '+Options' 647 else: # Default to current month 648 url += '+Options' 649 650 try: 651 from lxml.html import parse 652 except ImportError: 653 raise ImportError("Please install lxml if you want to use the " 654 "{0!r} class".format(self.__class__.__name__)) 655 try: 656 doc = parse(url) 657 except _network_error_classes: 658 raise RemoteDataError("Unable to parse tables from URL " 659 "{0!r}".format(url)) 660 else: 661 root = doc.getroot() 662 if root is None: 663 raise RemoteDataError("Parsed URL {0!r} has no root" 664 "element".format(url)) 665 tables = root.xpath('.//table') 666 ntables = len(tables) 667 if table_loc - 1 > ntables: 668 raise IndexError("Table location {0} invalid, {1} tables" 669 " found".format(table_loc, ntables)) 670 671 option_data = _parse_options_data(tables[table_loc]) 672 673 if month: 674 name += m1 + str(year)[-2:] 675 setattr(self, name, option_data) 676 return option_data 677 678 def get_call_data(self, month=None, year=None, expiry=None): 679 """ 680 Gets call/put data for the stock with the expiration data in the 681 given month and year 682 683 Parameters 684 ---------- 685 expiry: datetime.date, optional(default=None) 686 The date when options expire (defaults to current month) 687 688 Returns 689 ------- 690 call_data: pandas.DataFrame 691 A DataFrame with call options data. 692 693 Notes 694 ----- 695 When called, this function will add instance variables named 696 calls and puts. See the following example: 697 698 >>> aapl = Options('aapl', 'yahoo') # Create object 699 >>> aapl.calls # will give an AttributeError 700 >>> aapl.get_call_data() # Get data and set ivars 701 >>> aapl.calls # Doesn't throw AttributeError 702 703 Also note that aapl.calls will always be the calls for the next 704 expiry. If the user calls this method with a different month 705 or year, the ivar will be named callsMMYY where MM and YY are, 706 repsectively, two digit representations of the month and year 707 for the expiry of the options. 708 """ 709 return self._get_option_data(month, year, expiry, 9, 'calls') 710 711 def get_put_data(self, month=None, year=None, expiry=None): 712 """ 713 Gets put data for the stock with the expiration data in the 714 given month and year 715 716 Parameters 717 ---------- 718 expiry: datetime.date, optional(default=None) 719 The date when options expire (defaults to current month) 720 721 Returns 722 ------- 723 put_data: pandas.DataFrame 724 A DataFrame with call options data. 725 726 Notes 727 ----- 728 When called, this function will add instance variables named 729 puts. 
See the following example: 730 731 >>> aapl = Options('aapl') # Create object 732 >>> aapl.puts # will give an AttributeError 733 >>> aapl.get_put_data() # Get data and set ivars 734 >>> aapl.puts # Doesn't throw AttributeError 735 736 return self.__setattr__(self, str(str(x) + str(y))) 737 738 Also note that aapl.puts will always be the puts for the next 739 expiry. If the user calls this method with a different month 740 or year, the ivar will be named putsMMYY where MM and YY are, 741 repsectively, two digit representations of the month and year 742 for the expiry of the options. 743 """ 744 return self._get_option_data(month, year, expiry, 13, 'puts') 745 746 def get_near_stock_price(self, above_below=2, call=True, put=False, 747 month=None, year=None, expiry=None): 748 """ 749 Cuts the data frame opt_df that is passed in to only take 750 options that are near the current stock price. 751 752 Parameters 753 ---------- 754 above_below: number, int, optional (default=2) 755 The number of strike prices above and below the stock price that 756 should be taken 757 758 call: bool 759 Tells the function whether or not it should be using 760 self.calls 761 762 put: bool 763 Tells the function weather or not it should be using 764 self.puts 765 766 expiry: datetime.date, optional(default=None) 767 The date when options expire (defaults to current month) 768 769 Returns 770 ------- 771 chopped: DataFrame 772 The resultant DataFrame chopped down to be 2 * above_below + 1 rows 773 desired. If there isn't data as far out as the user has asked for 774 then 775 """ 776 year, month = self._try_parse_dates(year, month, expiry) 777 price = float(get_quote_yahoo([self.symbol])['last']) 778 779 to_ret = Series({'calls': call, 'puts': put}) 780 to_ret = to_ret[to_ret].index 781 782 data = {} 783 784 for nam in to_ret: 785 if month: 786 m1 = _two_char_month(month) 787 name = nam + m1 + str(year)[2:] 788 789 try: 790 df = getattr(self, name) 791 except AttributeError: 792 meth_name = 'get_{0}_data'.format(nam[:-1]) 793 df = getattr(self, meth_name)(month, year) 794 795 start_index = np.where(df['Strike'] > price)[0][0] 796 797 get_range = slice(start_index - above_below, 798 start_index + above_below + 1) 799 chop = df[get_range].dropna(how='all') 800 chop.reset_index(inplace=True) 801 data[nam] = chop 802 return [data[nam] for nam in to_ret] 803 804 def _try_parse_dates(self, year, month, expiry): 805 if year is not None or month is not None: 806 warnings.warn("month, year arguments are deprecated, use expiry" 807 " instead", FutureWarning) 808 809 if expiry is not None: 810 year = expiry.year 811 month = expiry.month 812 return year, month 813 814 def get_forward_data(self, months, call=True, put=False, near=False, 815 above_below=2): 816 """ 817 Gets either call, put, or both data for months starting in the current 818 month and going out in the future a specified amount of time. 819 820 Parameters 821 ---------- 822 months: number, int 823 How many months to go out in the collection of the data. This is 824 inclusive. 825 826 call: bool, optional (default=True) 827 Whether or not to collect data for call options 828 829 put: bool, optional (default=False) 830 Whether or not to collect data for put options. 831 832 near: bool, optional (default=False) 833 Whether this function should get only the data near the 834 current stock price. 
Uses Options.get_near_stock_price 835 836 above_below: number, int, optional (default=2) 837 The number of strike prices above and below the stock price that 838 should be taken if the near option is set to True 839 840 Returns 841 ------- 842 data : dict of str, DataFrame 843 """ 844 warnings.warn("get_forward_data() is deprecated", FutureWarning) 845 in_months = lrange(CUR_MONTH, CUR_MONTH + months + 1) 846 in_years = [CUR_YEAR] * (months + 1) 847 848 # Figure out how many items in in_months go past 12 849 to_change = 0 850 for i in range(months): 851 if in_months[i] > 12: 852 in_months[i] -= 12 853 to_change += 1 854 855 # Change the corresponding items in the in_years list. 856 for i in range(1, to_change + 1): 857 in_years[-i] += 1 858 859 to_ret = Series({'calls': call, 'puts': put}) 860 to_ret = to_ret[to_ret].index 861 data = {} 862 863 for name in to_ret: 864 all_data = DataFrame() 865 866 for mon in range(months): 867 m2 = in_months[mon] 868 y2 = in_years[mon] 869 870 if not near: 871 m1 = _two_char_month(m2) 872 nam = name + str(m1) + str(y2)[2:] 873 874 try: # Try to access on the instance 875 frame = getattr(self, nam) 876 except AttributeError: 877 meth_name = 'get_{0}_data'.format(name[:-1]) 878 frame = getattr(self, meth_name)(m2, y2) 879 else: 880 frame = self.get_near_stock_price(call=call, put=put, 881 above_below=above_below, 882 month=m2, year=y2) 883 tick = str(frame.Symbol[0]) 884 start = len(self.symbol) 885 year = tick[start:start + 2] 886 month = tick[start + 2:start + 4] 887 day = tick[start + 4:start + 6] 888 expiry = month + '-' + day + '-' + year 889 frame['Expiry'] = expiry 890 891 if not mon: 892 all_data = all_data.join(frame, how='right') 893 else: 894 all_data = concat([all_data, frame]) 895 data[name] = all_data 896 ret = [data[k] for k in to_ret] 897 if len(ret) == 1: 898 return ret.pop() 899 if len(ret) != 2: 900 raise AssertionError("should be len 2") 901 return ret 902 [end of pandas/io/data.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
f7bdd349ef3d5d5c89c2c3ad53f1217f823a5711
BUG: duplicate selection with missing values raises Hard to reprod. Select from a duplicate indexed axis in a frame with ix where part of the selection set is missing ``` In [22]: df = DataFrame(np.random.randn(5,5),columns=['A.1','B.1','B.2','B.3','A.2'],index=date_range('20130101',periods=5)) In [23]: df2 = df.rename(columns=lambda x: x.split('.')[0]) In [24]: df2 Out[24]: A B B B A 2013-01-01 -1.029245 -0.782139 0.584956 1.097301 -0.150675 2013-01-02 -0.723246 -0.356150 -0.441952 0.027012 -1.851583 2013-01-03 -1.001412 0.129464 0.093433 0.952615 -1.338390 2013-01-04 0.165987 0.227918 0.557940 -0.102501 -1.194053 2013-01-05 0.249493 -1.102096 -0.977755 -0.529540 0.783277 [5 rows x 5 columns] In [25]: df2.ix[:,['A','B','C']] AssertionError: Number of manager items must equal union of block items # manager items: 6, # tot_items: 14 ```
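To make the intended behaviour concrete: selecting `['A', 'B', 'C']` from this frame should return every column labelled `A` or `B` (duplicates included) plus an all-NaN column for the missing label `C`, six columns in total, which is exactly the `manager items: 6` that the assertion message mentions. A sketch of that expected result, built by hand (note that `.ix` has since been removed from pandas, and the column order of the real indexing result may differ):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 5),
                  columns=['A.1', 'B.1', 'B.2', 'B.3', 'A.2'],
                  index=pd.date_range('20130101', periods=5))
df2 = df.rename(columns=lambda x: x.split('.')[0])  # columns: A B B B A

# What df2.ix[:, ['A', 'B', 'C']] should yield once the bug is fixed:
# all 'A' and 'B' columns, plus a NaN-filled column for the missing label 'C'.
expected = pd.concat(
    [df2.loc[:, ['A', 'B']],
     pd.DataFrame(np.nan, index=df2.index, columns=['C'])],
    axis=1,
)
print(expected.shape)  # (5, 6): two 'A', three 'B', one all-NaN 'C'
```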
2014-01-04T17:20:44Z
<patch> diff --git a/doc/source/release.rst b/doc/source/release.rst --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -73,6 +73,7 @@ Bug Fixes ~~~~~~~~~ - Bug in Series replace with timestamp dict (:issue:`5797`) - read_csv/read_table now respects the `prefix` kwarg (:issue:`5732`). + - Bug in selection with missing values via ``.ix`` from a duplicate indexed DataFrame failing (:issue:`5835`) pandas 0.13.0 ------------- diff --git a/pandas/core/internals.py b/pandas/core/internals.py --- a/pandas/core/internals.py +++ b/pandas/core/internals.py @@ -3119,6 +3119,9 @@ def reindex_indexer(self, new_axis, indexer, axis=1, fill_value=None, if not allow_dups and not self.axes[axis].is_unique: raise ValueError("cannot reindex from a duplicate axis") + if not self.is_consolidated(): + self = self.consolidate() + if axis == 0: return self._reindex_indexer_items(new_axis, indexer, fill_value) @@ -3140,38 +3143,62 @@ def _reindex_indexer_items(self, new_items, indexer, fill_value): new_blocks = [] is_unique = new_items.is_unique + # we have duplicates in the items and what we are reindexing + if not is_unique and not self.items.is_unique: + + rl = self._set_ref_locs(do_refs='force') + for i, idx in enumerate(indexer): + item = new_items.take([i]) + if idx >= 0: + blk, lidx = rl[idx] + blk = make_block(_block_shape(blk.iget(lidx)), item, + new_items, ndim=self.ndim, fastpath=True, + placement=[i]) + + # a missing value + else: + blk = self._make_na_block(item, + new_items, + placement=[i], + fill_value=fill_value) + new_blocks.append(blk) + new_blocks = _consolidate(new_blocks, new_items) + + # keep track of what items aren't found anywhere - l = np.arange(len(item_order)) - mask = np.zeros(len(item_order), dtype=bool) - for blk in self.blocks: - blk_indexer = blk.items.get_indexer(item_order) - selector = blk_indexer != -1 + else: + l = np.arange(len(item_order)) + mask = np.zeros(len(item_order), dtype=bool) - # update with observed items - mask |= selector + for blk in self.blocks: + blk_indexer = blk.items.get_indexer(item_order) + selector = blk_indexer != -1 + + # update with observed items + mask |= selector - if not selector.any(): - continue + if not selector.any(): + continue - new_block_items = new_items.take(selector.nonzero()[0]) - new_values = com.take_nd(blk.values, blk_indexer[selector], axis=0, - allow_fill=False) - placement = l[selector] if not is_unique else None - new_blocks.append(make_block(new_values, - new_block_items, + new_block_items = new_items.take(selector.nonzero()[0]) + new_values = com.take_nd(blk.values, blk_indexer[selector], axis=0, + allow_fill=False) + placement = l[selector] if not is_unique else None + new_blocks.append(make_block(new_values, + new_block_items, new_items, - placement=placement, - fastpath=True)) - - if not mask.all(): - na_items = new_items[-mask] - placement = l[-mask] if not is_unique else None - na_block = self._make_na_block(na_items, - new_items, - placement=placement, - fill_value=fill_value) - new_blocks.append(na_block) - new_blocks = _consolidate(new_blocks, new_items) + placement=placement, + fastpath=True)) + + if not mask.all(): + na_items = new_items[-mask] + placement = l[-mask] if not is_unique else None + na_block = self._make_na_block(na_items, + new_items, + placement=placement, + fill_value=fill_value) + new_blocks.append(na_block) + new_blocks = _consolidate(new_blocks, new_items) return self.__class__(new_blocks, new_axes) </patch>
[]
[]
pandas-dev__pandas-8170
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> newey-west adjustment not working properly in OLS It looks newey-west adjustment is not working properly in OLS when 'cluster' is set to 'time' or 'entity'. Specifically, pandas.stats.plm.py lines 791-794 don't have any effect. Should that be replaced with: xox = math.newey_west(m, nw_lags, nobs, df, nw_overlap)? Here is some code to reproduce the issue. import numpy from pylab import * from pandas import * T = 100 panel_size = 3 data_dimensions = [T, panel_size] xs_per_y = WidePanel({ 'predictor a' : numpy.random.normal(size=data_dimensions), 'predictor b' : numpy.random.normal(size=data_dimensions) }) # y = B_a + B_b + noise ys = xs_per_y['predictor a'] + xs_per_y['predictor b'] + numpy.random.normal(size=data_dimensions) print ols(y=ys, x=xs_per_y, pool=True, cluster = 'time') # we expect the following t-stats to be smaller, but they are the same as the previous OLS print ols(y=ys, x=xs_per_y, pool=True, cluster = 'time', nw_lags=10) </issue> <code> [start of README.md] 1 # pandas: powerful Python data analysis toolkit 2 3 ![Travis-CI Build Status](https://travis-ci.org/pydata/pandas.svg) 4 5 [![Scatter-CI Status page](http://scatterci.github.io/scatterci48.jpg)](http://scatterci.github.io/pydata/pandas) 6 7 ## What is it 8 9 **pandas** is a Python package providing fast, flexible, and expressive data 10 structures designed to make working with "relational" or "labeled" data both 11 easy and intuitive. It aims to be the fundamental high-level building block for 12 doing practical, **real world** data analysis in Python. Additionally, it has 13 the broader goal of becoming **the most powerful and flexible open source data 14 analysis / manipulation tool available in any language**. It is already well on 15 its way toward this goal. 16 17 ## Main Features 18 Here are just a few of the things that pandas does well: 19 20 - Easy handling of [**missing data**][missing-data] (represented as 21 `NaN`) in floating point as well as non-floating point data 22 - Size mutability: columns can be [**inserted and 23 deleted**][insertion-deletion] from DataFrame and higher dimensional 24 objects 25 - Automatic and explicit [**data alignment**][alignment]: objects can 26 be explicitly aligned to a set of labels, or the user can simply 27 ignore the labels and let `Series`, `DataFrame`, etc. 
automatically 28 align the data for you in computations 29 - Powerful, flexible [**group by**][groupby] functionality to perform 30 split-apply-combine operations on data sets, for both aggregating 31 and transforming data 32 - Make it [**easy to convert**][conversion] ragged, 33 differently-indexed data in other Python and NumPy data structures 34 into DataFrame objects 35 - Intelligent label-based [**slicing**][slicing], [**fancy 36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 37 large data sets 38 - Intuitive [**merging**][merging] and [**joining**][joining] data 39 sets 40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 41 data sets 42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 43 labels per tick) 44 - Robust IO tools for loading data from [**flat files**][flat-files] 45 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 47 - [**Time series**][timeseries]-specific functionality: date range 48 generation and frequency conversion, moving window statistics, 49 moving window linear regressions, date shifting and lagging, etc. 50 51 52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 70 71 ## Where to get it 72 The source code is currently hosted on GitHub at: 73 http://github.com/pydata/pandas 74 75 Binary installers for the latest released version are available at the Python 76 package index 77 78 http://pypi.python.org/pypi/pandas/ 79 80 And via `easy_install`: 81 82 ```sh 83 easy_install pandas 84 ``` 85 86 or `pip`: 87 88 ```sh 89 pip install pandas 90 ``` 91 92 ## Dependencies 93 - [NumPy](http://www.numpy.org): 1.7.0 or higher 94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 95 - [pytz](http://pytz.sourceforge.net) 96 - Needed for time zone 
support with ``pandas.date_range`` 97 98 ### Highly Recommended Dependencies 99 - [numexpr](https://github.com/pydata/numexpr) 100 - Needed to accelerate some expression evaluation operations 101 - Required by PyTables 102 - [bottleneck](http://berkeleyanalytics.com/bottleneck) 103 - Needed to accelerate certain numerical operations 104 105 ### Optional dependencies 106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher. 107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions 108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage 109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended. 110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting 111 - [statsmodels](http://statsmodels.sourceforge.net/) 112 - Needed for parts of `pandas.stats` 113 - For Excel I/O: 114 - [xlrd/xlwt](http://www.python-excel.org/) 115 - Excel reading (xlrd) and writing (xlwt) 116 - [openpyxl](http://packages.python.org/openpyxl/) 117 - openpyxl version 1.6.1 or higher, but lower than 2.0.0, for 118 writing .xlsx files 119 - xlrd >= 0.9.0 120 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter) 121 - Alternative Excel writer. 122 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/) 123 - Needed for `pandas.io.gbq` 124 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access. 125 - One of the following combinations of libraries is needed to use the 126 top-level [`pandas.read_html`][read-html-docs] function: 127 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any 128 recent version of [html5lib][html5lib] is okay.) 129 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml] 130 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml] 131 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas] 132 for reasons as to why you should probably **not** take this approach. 133 134 #### Notes about HTML parsing libraries 135 - If you install [BeautifulSoup4][BeautifulSoup4] you must install 136 either [lxml][lxml] or [html5lib][html5lib] or both. 137 `pandas.read_html` will **not** work with *only* `BeautifulSoup4` 138 installed. 139 - You are strongly encouraged to read [HTML reading 140 gotchas][html-gotchas]. It explains issues surrounding the 141 installation and usage of the above three libraries. 142 - You may need to install an older version of 143 [BeautifulSoup4][BeautifulSoup4]: 144 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 145 32-bit Ubuntu/Debian 146 - Additionally, if you're using [Anaconda][Anaconda] you should 147 definitely read [the gotchas about HTML parsing][html-gotchas] 148 libraries 149 - If you're on a system with `apt-get` you can do 150 151 ```sh 152 sudo apt-get build-dep python-lxml 153 ``` 154 155 to get the necessary dependencies for installation of [lxml][lxml]. 156 This will prevent further headaches down the line. 
157 158 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib" 159 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4" 160 [lxml]: http://lxml.de 161 [Anaconda]: https://store.continuum.io/cshop/anaconda 162 [NumPy]: http://numpy.scipy.org/ 163 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing 164 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html 165 166 ## Installation from sources 167 To install pandas from source you need Cython in addition to the normal 168 dependencies above. Cython can be installed from pypi: 169 170 ```sh 171 pip install cython 172 ``` 173 174 In the `pandas` directory (same one where you found this file after 175 cloning the git repo), execute: 176 177 ```sh 178 python setup.py install 179 ``` 180 181 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html): 182 183 ```sh 184 python setup.py develop 185 ``` 186 187 Alternatively, you can use `pip` if you want all the dependencies pulled 188 in automatically (the `-e` option is for installing it in [development 189 mode](http://www.pip-installer.org/en/latest/usage.html)): 190 191 ```sh 192 pip install -e . 193 ``` 194 195 On Windows, you will need to install MinGW and execute: 196 197 ```sh 198 python setup.py build --compiler=mingw32 199 python setup.py install 200 ``` 201 202 See http://pandas.pydata.org/ for more information. 203 204 ## License 205 BSD 206 207 ## Documentation 208 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 209 210 The Sphinx documentation should provide a good starting point for learning how 211 to use the library. Expect the docs to continue to expand as time goes on. 212 213 ## Background 214 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 215 has been under active development since then. 216 217 ## Discussion and Development 218 Since pandas development is related to a number of other scientific 219 Python projects, questions are welcome on the scipy-user mailing 220 list. Specialized discussions or design issues should take place on 221 the PyData mailing list / Google group: 222 223 https://groups.google.com/forum/#!forum/pydata 224 [end of README.md] [start of pandas/stats/interface.py] 1 from pandas.core.api import Series, DataFrame, Panel, MultiIndex 2 from pandas.stats.ols import OLS, MovingOLS 3 from pandas.stats.plm import PanelOLS, MovingPanelOLS, NonPooledPanelOLS 4 import pandas.stats.common as common 5 6 7 def ols(**kwargs): 8 """Returns the appropriate OLS object depending on whether you need 9 simple or panel OLS, and a full-sample or rolling/expanding OLS. 10 11 Will be a normal linear regression or a (pooled) panel regression depending 12 on the type of the inputs: 13 14 y : Series, x : DataFrame -> OLS 15 y : Series, x : dict of DataFrame -> OLS 16 y : DataFrame, x : DataFrame -> PanelOLS 17 y : DataFrame, x : dict of DataFrame/Panel -> PanelOLS 18 y : Series with MultiIndex, x : Panel/DataFrame + MultiIndex -> PanelOLS 19 20 Parameters 21 ---------- 22 y: Series or DataFrame 23 See above for types 24 x: Series, DataFrame, dict of Series, dict of DataFrame, Panel 25 weights : Series or ndarray 26 The weights are presumed to be (proportional to) the inverse of the 27 variance of the observations. 
That is, if the variables are to be 28 transformed by 1/sqrt(W) you must supply weights = 1/W 29 intercept: bool 30 True if you want an intercept. Defaults to True. 31 nw_lags: None or int 32 Number of Newey-West lags. Defaults to None. 33 nw_overlap: bool 34 Whether there are overlaps in the NW lags. Defaults to False. 35 window_type: {'full sample', 'rolling', 'expanding'} 36 'full sample' by default 37 window: int 38 size of window (for rolling/expanding OLS). If window passed and no 39 explicit window_type, 'rolling" will be used as the window_type 40 41 Panel OLS options: 42 pool: bool 43 Whether to run pooled panel regression. Defaults to true. 44 entity_effects: bool 45 Whether to account for entity fixed effects. Defaults to false. 46 time_effects: bool 47 Whether to account for time fixed effects. Defaults to false. 48 x_effects: list 49 List of x's to account for fixed effects. Defaults to none. 50 dropped_dummies: dict 51 Key is the name of the variable for the fixed effect. 52 Value is the value of that variable for which we drop the dummy. 53 54 For entity fixed effects, key equals 'entity'. 55 56 By default, the first dummy is dropped if no dummy is specified. 57 cluster: {'time', 'entity'} 58 cluster variances 59 60 Examples 61 -------- 62 # Run simple OLS. 63 result = ols(y=y, x=x) 64 65 # Run rolling simple OLS with window of size 10. 66 result = ols(y=y, x=x, window_type='rolling', window=10) 67 print(result.beta) 68 69 result = ols(y=y, x=x, nw_lags=1) 70 71 # Set up LHS and RHS for data across all items 72 y = A 73 x = {'B' : B, 'C' : C} 74 75 # Run panel OLS. 76 result = ols(y=y, x=x) 77 78 # Run expanding panel OLS with window 10 and entity clustering. 79 result = ols(y=y, x=x, cluster='entity', window_type='expanding', window=10) 80 81 Returns 82 ------- 83 The appropriate OLS object, which allows you to obtain betas and various 84 statistics, such as std err, t-stat, etc. 85 """ 86 pool = kwargs.get('pool') 87 if 'pool' in kwargs: 88 del kwargs['pool'] 89 90 window_type = kwargs.get('window_type') 91 window = kwargs.get('window') 92 93 if window_type is None: 94 if window is None: 95 window_type = 'full_sample' 96 else: 97 window_type = 'rolling' 98 else: 99 window_type = common._get_window_type(window_type) 100 101 if window_type != 'full_sample': 102 kwargs['window_type'] = common._get_window_type(window_type) 103 104 y = kwargs.get('y') 105 x = kwargs.get('x') 106 107 panel = False 108 if isinstance(y, DataFrame) or (isinstance(y, Series) and 109 isinstance(y.index, MultiIndex)): 110 panel = True 111 if isinstance(x, Panel): 112 panel = True 113 114 if window_type == 'full_sample': 115 for rolling_field in ('window_type', 'window', 'min_periods'): 116 if rolling_field in kwargs: 117 del kwargs[rolling_field] 118 119 if panel: 120 if pool is False: 121 klass = NonPooledPanelOLS 122 else: 123 klass = PanelOLS 124 else: 125 klass = OLS 126 else: 127 if panel: 128 if pool is False: 129 klass = NonPooledPanelOLS 130 else: 131 klass = MovingPanelOLS 132 else: 133 klass = MovingOLS 134 135 return klass(**kwargs) 136 [end of pandas/stats/interface.py] [start of pandas/stats/math.py] 1 # pylint: disable-msg=E1103 2 # pylint: disable-msg=W0212 3 4 from __future__ import division 5 6 from pandas.compat import range 7 import numpy as np 8 import numpy.linalg as linalg 9 10 11 def rank(X, cond=1.0e-12): 12 """ 13 Return the rank of a matrix X based on its generalized inverse, 14 not the SVD. 
15 """ 16 X = np.asarray(X) 17 if len(X.shape) == 2: 18 import scipy.linalg as SL 19 D = SL.svdvals(X) 20 result = np.add.reduce(np.greater(D / D.max(), cond)) 21 return int(result.astype(np.int32)) 22 else: 23 return int(not np.alltrue(np.equal(X, 0.))) 24 25 26 def solve(a, b): 27 """Returns the solution of A X = B.""" 28 try: 29 return linalg.solve(a, b) 30 except linalg.LinAlgError: 31 return np.dot(linalg.pinv(a), b) 32 33 34 def inv(a): 35 """Returns the inverse of A.""" 36 try: 37 return np.linalg.inv(a) 38 except linalg.LinAlgError: 39 return np.linalg.pinv(a) 40 41 42 def is_psd(m): 43 eigvals = linalg.eigvals(m) 44 return np.isreal(eigvals).all() and (eigvals >= 0).all() 45 46 47 def newey_west(m, max_lags, nobs, df, nw_overlap=False): 48 """ 49 Compute Newey-West adjusted covariance matrix, taking into account 50 specified number of leads / lags 51 52 Parameters 53 ---------- 54 m : (N x K) 55 max_lags : int 56 nobs : int 57 Number of observations in model 58 df : int 59 Degrees of freedom in explanatory variables 60 nw_overlap : boolean, default False 61 Assume data is overlapping 62 63 Returns 64 ------- 65 ndarray (K x K) 66 67 Reference 68 --------- 69 Newey, W. K. & West, K. D. (1987) A Simple, Positive 70 Semi-definite, Heteroskedasticity and Autocorrelation Consistent 71 Covariance Matrix, Econometrica, vol. 55(3), 703-708 72 """ 73 Xeps = np.dot(m.T, m) 74 for lag in range(1, max_lags + 1): 75 auto_cov = np.dot(m[:-lag].T, m[lag:]) 76 weight = lag / (max_lags + 1) 77 if nw_overlap: 78 weight = 0 79 bb = auto_cov + auto_cov.T 80 dd = (1 - weight) * bb 81 Xeps += dd 82 83 Xeps *= nobs / (nobs - df) 84 85 if nw_overlap and not is_psd(Xeps): 86 new_max_lags = int(np.ceil(max_lags * 1.5)) 87 # print('nw_overlap is True and newey_west generated a non positive ' 88 # 'semidefinite matrix, so using newey_west with max_lags of %d.' 
89 # % new_max_lags) 90 return newey_west(m, new_max_lags, nobs, df) 91 92 return Xeps 93 94 95 def calc_F(R, r, beta, var_beta, nobs, df): 96 """ 97 Computes the standard F-test statistic for linear restriction 98 hypothesis testing 99 100 Parameters 101 ---------- 102 R: ndarray (N x N) 103 Restriction matrix 104 r: ndarray (N x 1) 105 Restriction vector 106 beta: ndarray (N x 1) 107 Estimated model coefficients 108 var_beta: ndarray (N x N) 109 Variance covariance matrix of regressors 110 nobs: int 111 Number of observations in model 112 df: int 113 Model degrees of freedom 114 115 Returns 116 ------- 117 F value, (q, df_resid), p value 118 """ 119 from scipy.stats import f 120 121 hyp = np.dot(R, beta.reshape(len(beta), 1)) - r 122 RSR = np.dot(R, np.dot(var_beta, R.T)) 123 124 q = len(r) 125 126 F = np.dot(hyp.T, np.dot(inv(RSR), hyp)).squeeze() / q 127 128 p_value = 1 - f.cdf(F, q, nobs - df) 129 130 return F, (q, nobs - df), p_value 131 [end of pandas/stats/math.py] [start of pandas/stats/plm.py] 1 """ 2 Linear regression objects for panel data 3 """ 4 5 # pylint: disable-msg=W0231 6 # pylint: disable-msg=E1101,E1103 7 8 from __future__ import division 9 from pandas.compat import range 10 from pandas import compat 11 import warnings 12 13 import numpy as np 14 15 from pandas.core.panel import Panel 16 from pandas.core.frame import DataFrame 17 from pandas.core.reshape import get_dummies 18 from pandas.core.series import Series 19 from pandas.core.sparse import SparsePanel 20 from pandas.stats.ols import OLS, MovingOLS 21 import pandas.stats.common as com 22 import pandas.stats.math as math 23 from pandas.util.decorators import cache_readonly 24 25 26 class PanelOLS(OLS): 27 """Implements panel OLS. 28 29 See ols function docs 30 """ 31 _panel_model = True 32 33 def __init__(self, y, x, weights=None, intercept=True, nw_lags=None, 34 entity_effects=False, time_effects=False, x_effects=None, 35 cluster=None, dropped_dummies=None, verbose=False, 36 nw_overlap=False): 37 self._x_orig = x 38 self._y_orig = y 39 self._weights = weights 40 41 self._intercept = intercept 42 self._nw_lags = nw_lags 43 self._nw_overlap = nw_overlap 44 self._entity_effects = entity_effects 45 self._time_effects = time_effects 46 self._x_effects = x_effects 47 self._dropped_dummies = dropped_dummies or {} 48 self._cluster = com._get_cluster_type(cluster) 49 self._verbose = verbose 50 51 (self._x, self._x_trans, 52 self._x_filtered, self._y, 53 self._y_trans) = self._prepare_data() 54 55 self._index = self._x.index.levels[0] 56 57 self._T = len(self._index) 58 59 def log(self, msg): 60 if self._verbose: # pragma: no cover 61 print(msg) 62 63 def _prepare_data(self): 64 """Cleans and stacks input data into DataFrame objects 65 66 If time effects is True, then we turn off intercepts and omit an item 67 from every (entity and x) fixed effect. 68 69 Otherwise: 70 - If we have an intercept, we omit an item from every fixed effect. 71 - Else, we omit an item from every fixed effect except one of them. 72 73 The categorical variables will get dropped from x. 
74 """ 75 (x, x_filtered, y, weights, cat_mapping) = self._filter_data() 76 77 self.log('Adding dummies to X variables') 78 x = self._add_dummies(x, cat_mapping) 79 80 self.log('Adding dummies to filtered X variables') 81 x_filtered = self._add_dummies(x_filtered, cat_mapping) 82 83 if self._x_effects: 84 x = x.drop(self._x_effects, axis=1) 85 x_filtered = x_filtered.drop(self._x_effects, axis=1) 86 87 if self._time_effects: 88 x_regressor = x.sub(x.mean(level=0), level=0) 89 90 unstacked_y = y.unstack() 91 y_regressor = unstacked_y.sub(unstacked_y.mean(1), axis=0).stack() 92 y_regressor.index = y.index 93 94 elif self._intercept: 95 # only add intercept when no time effects 96 self.log('Adding intercept') 97 x = x_regressor = add_intercept(x) 98 x_filtered = add_intercept(x_filtered) 99 y_regressor = y 100 else: 101 self.log('No intercept added') 102 x_regressor = x 103 y_regressor = y 104 105 if weights is not None: 106 if not y_regressor.index.equals(weights.index): 107 raise AssertionError("y_regressor and weights must have the " 108 "same index") 109 if not x_regressor.index.equals(weights.index): 110 raise AssertionError("x_regressor and weights must have the " 111 "same index") 112 113 rt_weights = np.sqrt(weights) 114 y_regressor = y_regressor * rt_weights 115 x_regressor = x_regressor.mul(rt_weights, axis=0) 116 117 return x, x_regressor, x_filtered, y, y_regressor 118 119 def _filter_data(self): 120 """ 121 122 """ 123 data = self._x_orig 124 cat_mapping = {} 125 126 if isinstance(data, DataFrame): 127 data = data.to_panel() 128 else: 129 if isinstance(data, Panel): 130 data = data.copy() 131 132 if not isinstance(data, SparsePanel): 133 data, cat_mapping = self._convert_x(data) 134 135 if not isinstance(data, Panel): 136 data = Panel.from_dict(data, intersect=True) 137 138 x_names = data.items 139 140 if self._weights is not None: 141 data['__weights__'] = self._weights 142 143 # Filter x's without y (so we can make a prediction) 144 filtered = data.to_frame() 145 146 # Filter all data together using to_frame 147 148 # convert to DataFrame 149 y = self._y_orig 150 if isinstance(y, Series): 151 y = y.unstack() 152 153 data['__y__'] = y 154 data_long = data.to_frame() 155 156 x_filt = filtered.filter(x_names) 157 x = data_long.filter(x_names) 158 y = data_long['__y__'] 159 160 if self._weights is not None and not self._weights.empty: 161 weights = data_long['__weights__'] 162 else: 163 weights = None 164 165 return x, x_filt, y, weights, cat_mapping 166 167 def _convert_x(self, x): 168 # Converts non-numeric data in x to floats. x_converted is the 169 # DataFrame with converted values, and x_conversion is a dict that 170 # provides the reverse mapping. For example, if 'A' was converted to 0 171 # for x named 'variety', then x_conversion['variety'][0] is 'A'. 
172 x_converted = {} 173 cat_mapping = {} 174 # x can be either a dict or a Panel, but in Python 3, dicts don't have 175 # .iteritems 176 iteritems = getattr(x, 'iteritems', x.items) 177 for key, df in iteritems(): 178 if not isinstance(df, DataFrame): 179 raise AssertionError("all input items must be DataFrames, " 180 "at least one is of " 181 "type {0}".format(type(df))) 182 183 if _is_numeric(df): 184 x_converted[key] = df 185 else: 186 try: 187 df = df.astype(float) 188 except (TypeError, ValueError): 189 values = df.values 190 distinct_values = sorted(set(values.flat)) 191 cat_mapping[key] = dict(enumerate(distinct_values)) 192 new_values = np.searchsorted(distinct_values, values) 193 x_converted[key] = DataFrame(new_values, index=df.index, 194 columns=df.columns) 195 196 if len(cat_mapping) == 0: 197 x_converted = x 198 199 return x_converted, cat_mapping 200 201 def _add_dummies(self, panel, mapping): 202 """ 203 Add entity and / or categorical dummies to input X DataFrame 204 205 Returns 206 ------- 207 DataFrame 208 """ 209 panel = self._add_entity_effects(panel) 210 panel = self._add_categorical_dummies(panel, mapping) 211 212 return panel 213 214 def _add_entity_effects(self, panel): 215 """ 216 Add entity dummies to panel 217 218 Returns 219 ------- 220 DataFrame 221 """ 222 from pandas.core.reshape import make_axis_dummies 223 224 if not self._entity_effects: 225 return panel 226 227 self.log('-- Adding entity fixed effect dummies') 228 229 dummies = make_axis_dummies(panel, 'minor') 230 231 if not self._use_all_dummies: 232 if 'entity' in self._dropped_dummies: 233 to_exclude = str(self._dropped_dummies.get('entity')) 234 else: 235 to_exclude = dummies.columns[0] 236 237 if to_exclude not in dummies.columns: 238 raise Exception('%s not in %s' % (to_exclude, 239 dummies.columns)) 240 241 self.log('-- Excluding dummy for entity: %s' % to_exclude) 242 243 dummies = dummies.filter(dummies.columns - [to_exclude]) 244 245 dummies = dummies.add_prefix('FE_') 246 panel = panel.join(dummies) 247 248 return panel 249 250 def _add_categorical_dummies(self, panel, cat_mappings): 251 """ 252 Add categorical dummies to panel 253 254 Returns 255 ------- 256 DataFrame 257 """ 258 if not self._x_effects: 259 return panel 260 261 dropped_dummy = (self._entity_effects and not self._use_all_dummies) 262 263 for effect in self._x_effects: 264 self.log('-- Adding fixed effect dummies for %s' % effect) 265 266 dummies = get_dummies(panel[effect]) 267 268 val_map = cat_mappings.get(effect) 269 if val_map: 270 val_map = dict((v, k) for k, v in compat.iteritems(val_map)) 271 272 if dropped_dummy or not self._use_all_dummies: 273 if effect in self._dropped_dummies: 274 to_exclude = mapped_name = self._dropped_dummies.get( 275 effect) 276 277 if val_map: 278 mapped_name = val_map[to_exclude] 279 else: 280 to_exclude = mapped_name = dummies.columns[0] 281 282 if mapped_name not in dummies.columns: # pragma: no cover 283 raise Exception('%s not in %s' % (to_exclude, 284 dummies.columns)) 285 286 self.log( 287 '-- Excluding dummy for %s: %s' % (effect, to_exclude)) 288 289 dummies = dummies.filter(dummies.columns - [mapped_name]) 290 dropped_dummy = True 291 292 dummies = _convertDummies(dummies, cat_mappings.get(effect)) 293 dummies = dummies.add_prefix('%s_' % effect) 294 panel = panel.join(dummies) 295 296 return panel 297 298 @property 299 def _use_all_dummies(self): 300 """ 301 In the case of using an intercept or including time fixed 302 effects, completely partitioning the sample would make the X 
303 not full rank. 304 """ 305 return (not self._intercept and not self._time_effects) 306 307 @cache_readonly 308 def _beta_raw(self): 309 """Runs the regression and returns the beta.""" 310 X = self._x_trans.values 311 Y = self._y_trans.values.squeeze() 312 313 beta, _, _, _ = np.linalg.lstsq(X, Y) 314 315 return beta 316 317 @cache_readonly 318 def beta(self): 319 return Series(self._beta_raw, index=self._x.columns) 320 321 @cache_readonly 322 def _df_model_raw(self): 323 """Returns the raw model degrees of freedom.""" 324 return self._df_raw - 1 325 326 @cache_readonly 327 def _df_resid_raw(self): 328 """Returns the raw residual degrees of freedom.""" 329 return self._nobs - self._df_raw 330 331 @cache_readonly 332 def _df_raw(self): 333 """Returns the degrees of freedom.""" 334 df = math.rank(self._x_trans.values) 335 if self._time_effects: 336 df += self._total_times 337 338 return df 339 340 @cache_readonly 341 def _r2_raw(self): 342 Y = self._y_trans.values.squeeze() 343 X = self._x_trans.values 344 345 resid = Y - np.dot(X, self._beta_raw) 346 347 SSE = (resid ** 2).sum() 348 349 if self._use_centered_tss: 350 SST = ((Y - np.mean(Y)) ** 2).sum() 351 else: 352 SST = (Y ** 2).sum() 353 354 return 1 - SSE / SST 355 356 @property 357 def _use_centered_tss(self): 358 # has_intercept = np.abs(self._resid_raw.sum()) < _FP_ERR 359 return self._intercept or self._entity_effects or self._time_effects 360 361 @cache_readonly 362 def _r2_adj_raw(self): 363 """Returns the raw r-squared adjusted values.""" 364 nobs = self._nobs 365 factors = (nobs - 1) / (nobs - self._df_raw) 366 return 1 - (1 - self._r2_raw) * factors 367 368 @cache_readonly 369 def _resid_raw(self): 370 Y = self._y.values.squeeze() 371 X = self._x.values 372 return Y - np.dot(X, self._beta_raw) 373 374 @cache_readonly 375 def resid(self): 376 return self._unstack_vector(self._resid_raw) 377 378 @cache_readonly 379 def _rmse_raw(self): 380 """Returns the raw rmse values.""" 381 # X = self._x.values 382 # Y = self._y.values.squeeze() 383 384 X = self._x_trans.values 385 Y = self._y_trans.values.squeeze() 386 387 resid = Y - np.dot(X, self._beta_raw) 388 ss = (resid ** 2).sum() 389 return np.sqrt(ss / (self._nobs - self._df_raw)) 390 391 @cache_readonly 392 def _var_beta_raw(self): 393 cluster_axis = None 394 if self._cluster == 'time': 395 cluster_axis = 0 396 elif self._cluster == 'entity': 397 cluster_axis = 1 398 399 x = self._x 400 y = self._y 401 402 if self._time_effects: 403 xx = _xx_time_effects(x, y) 404 else: 405 xx = np.dot(x.values.T, x.values) 406 407 return _var_beta_panel(y, x, self._beta_raw, xx, 408 self._rmse_raw, cluster_axis, self._nw_lags, 409 self._nobs, self._df_raw, self._nw_overlap) 410 411 @cache_readonly 412 def _y_fitted_raw(self): 413 """Returns the raw fitted y values.""" 414 return np.dot(self._x.values, self._beta_raw) 415 416 @cache_readonly 417 def y_fitted(self): 418 return self._unstack_vector(self._y_fitted_raw, index=self._x.index) 419 420 def _unstack_vector(self, vec, index=None): 421 if index is None: 422 index = self._y_trans.index 423 panel = DataFrame(vec, index=index, columns=['dummy']) 424 return panel.to_panel()['dummy'] 425 426 def _unstack_y(self, vec): 427 unstacked = self._unstack_vector(vec) 428 return unstacked.reindex(self.beta.index) 429 430 @cache_readonly 431 def _time_obs_count(self): 432 return self._y_trans.count(level=0).values 433 434 @cache_readonly 435 def _time_has_obs(self): 436 return self._time_obs_count > 0 437 438 @property 439 def _nobs(self): 440 return 
len(self._y) 441 442 443 def _convertDummies(dummies, mapping): 444 # cleans up the names of the generated dummies 445 new_items = [] 446 for item in dummies.columns: 447 if not mapping: 448 var = str(item) 449 if isinstance(item, float): 450 var = '%g' % item 451 452 new_items.append(var) 453 else: 454 # renames the dummies if a conversion dict is provided 455 new_items.append(mapping[int(item)]) 456 457 dummies = DataFrame(dummies.values, index=dummies.index, 458 columns=new_items) 459 460 return dummies 461 462 463 def _is_numeric(df): 464 for col in df: 465 if df[col].dtype.name == 'object': 466 return False 467 468 return True 469 470 471 def add_intercept(panel, name='intercept'): 472 """ 473 Add column of ones to input panel 474 475 Parameters 476 ---------- 477 panel: Panel / DataFrame 478 name: string, default 'intercept'] 479 480 Returns 481 ------- 482 New object (same type as input) 483 """ 484 panel = panel.copy() 485 panel[name] = 1. 486 487 return panel.consolidate() 488 489 490 class MovingPanelOLS(MovingOLS, PanelOLS): 491 """Implements rolling/expanding panel OLS. 492 493 See ols function docs 494 """ 495 _panel_model = True 496 497 def __init__(self, y, x, weights=None, 498 window_type='expanding', window=None, 499 min_periods=None, 500 min_obs=None, 501 intercept=True, 502 nw_lags=None, nw_overlap=False, 503 entity_effects=False, 504 time_effects=False, 505 x_effects=None, 506 cluster=None, 507 dropped_dummies=None, 508 verbose=False): 509 510 self._args = dict(intercept=intercept, 511 nw_lags=nw_lags, 512 nw_overlap=nw_overlap, 513 entity_effects=entity_effects, 514 time_effects=time_effects, 515 x_effects=x_effects, 516 cluster=cluster, 517 dropped_dummies=dropped_dummies, 518 verbose=verbose) 519 520 PanelOLS.__init__(self, y=y, x=x, weights=weights, 521 **self._args) 522 523 self._set_window(window_type, window, min_periods) 524 525 if min_obs is None: 526 min_obs = len(self._x.columns) + 1 527 528 self._min_obs = min_obs 529 530 @cache_readonly 531 def resid(self): 532 return self._unstack_y(self._resid_raw) 533 534 @cache_readonly 535 def y_fitted(self): 536 return self._unstack_y(self._y_fitted_raw) 537 538 @cache_readonly 539 def y_predict(self): 540 """Returns the predicted y values.""" 541 return self._unstack_y(self._y_predict_raw) 542 543 def lagged_y_predict(self, lag=1): 544 """ 545 Compute forecast Y value lagging coefficient by input number 546 of time periods 547 548 Parameters 549 ---------- 550 lag : int 551 552 Returns 553 ------- 554 DataFrame 555 """ 556 x = self._x.values 557 betas = self._beta_matrix(lag=lag) 558 return self._unstack_y((betas * x).sum(1)) 559 560 @cache_readonly 561 def _rolling_ols_call(self): 562 return self._calc_betas(self._x_trans, self._y_trans) 563 564 @cache_readonly 565 def _df_raw(self): 566 """Returns the degrees of freedom.""" 567 df = self._rolling_rank() 568 569 if self._time_effects: 570 df += self._window_time_obs 571 572 return df[self._valid_indices] 573 574 @cache_readonly 575 def _var_beta_raw(self): 576 """Returns the raw covariance of beta.""" 577 x = self._x 578 y = self._y 579 580 dates = x.index.levels[0] 581 582 cluster_axis = None 583 if self._cluster == 'time': 584 cluster_axis = 0 585 elif self._cluster == 'entity': 586 cluster_axis = 1 587 588 nobs = self._nobs 589 rmse = self._rmse_raw 590 beta = self._beta_raw 591 df = self._df_raw 592 window = self._window 593 594 if not self._time_effects: 595 # Non-transformed X 596 cum_xx = self._cum_xx(x) 597 598 results = [] 599 for n, i in 
enumerate(self._valid_indices): 600 if self._is_rolling and i >= window: 601 prior_date = dates[i - window + 1] 602 else: 603 prior_date = dates[0] 604 605 date = dates[i] 606 607 x_slice = x.truncate(prior_date, date) 608 y_slice = y.truncate(prior_date, date) 609 610 if self._time_effects: 611 xx = _xx_time_effects(x_slice, y_slice) 612 else: 613 xx = cum_xx[i] 614 if self._is_rolling and i >= window: 615 xx = xx - cum_xx[i - window] 616 617 result = _var_beta_panel(y_slice, x_slice, beta[n], xx, rmse[n], 618 cluster_axis, self._nw_lags, 619 nobs[n], df[n], self._nw_overlap) 620 621 results.append(result) 622 623 return np.array(results) 624 625 @cache_readonly 626 def _resid_raw(self): 627 beta_matrix = self._beta_matrix(lag=0) 628 629 Y = self._y.values.squeeze() 630 X = self._x.values 631 resid = Y - (X * beta_matrix).sum(1) 632 633 return resid 634 635 @cache_readonly 636 def _y_fitted_raw(self): 637 x = self._x.values 638 betas = self._beta_matrix(lag=0) 639 return (betas * x).sum(1) 640 641 @cache_readonly 642 def _y_predict_raw(self): 643 """Returns the raw predicted y values.""" 644 x = self._x.values 645 betas = self._beta_matrix(lag=1) 646 return (betas * x).sum(1) 647 648 def _beta_matrix(self, lag=0): 649 if lag < 0: 650 raise AssertionError("'lag' must be greater than or equal to 0, " 651 "input was {0}".format(lag)) 652 653 index = self._y_trans.index 654 major_labels = index.labels[0] 655 labels = major_labels - lag 656 indexer = self._valid_indices.searchsorted(labels, side='left') 657 658 beta_matrix = self._beta_raw[indexer] 659 beta_matrix[labels < self._valid_indices[0]] = np.NaN 660 661 return beta_matrix 662 663 @cache_readonly 664 def _enough_obs(self): 665 # XXX: what's the best way to determine where to start? 666 # TODO: write unit tests for this 667 668 rank_threshold = len(self._x.columns) + 1 669 if self._min_obs < rank_threshold: # pragma: no cover 670 warnings.warn('min_obs is smaller than rank of X matrix') 671 672 enough_observations = self._nobs_raw >= self._min_obs 673 enough_time_periods = self._window_time_obs >= self._min_periods 674 return enough_time_periods & enough_observations 675 676 677 def create_ols_dict(attr): 678 def attr_getter(self): 679 d = {} 680 for k, v in compat.iteritems(self.results): 681 result = getattr(v, attr) 682 d[k] = result 683 684 return d 685 686 return attr_getter 687 688 689 def create_ols_attr(attr): 690 return property(create_ols_dict(attr)) 691 692 693 class NonPooledPanelOLS(object): 694 """Implements non-pooled panel OLS. 695 696 Parameters 697 ---------- 698 y : DataFrame 699 x : Series, DataFrame, or dict of Series 700 intercept : bool 701 True if you want an intercept. 702 nw_lags : None or int 703 Number of Newey-West lags. 
704 window_type : {'full_sample', 'rolling', 'expanding'} 705 'full_sample' by default 706 window : int 707 size of window (for rolling/expanding OLS) 708 """ 709 710 ATTRIBUTES = [ 711 'beta', 712 'df', 713 'df_model', 714 'df_resid', 715 'f_stat', 716 'p_value', 717 'r2', 718 'r2_adj', 719 'resid', 720 'rmse', 721 'std_err', 722 'summary_as_matrix', 723 't_stat', 724 'var_beta', 725 'x', 726 'y', 727 'y_fitted', 728 'y_predict' 729 ] 730 731 def __init__(self, y, x, window_type='full_sample', window=None, 732 min_periods=None, intercept=True, nw_lags=None, 733 nw_overlap=False): 734 735 for attr in self.ATTRIBUTES: 736 setattr(self.__class__, attr, create_ols_attr(attr)) 737 738 results = {} 739 740 for entity in y: 741 entity_y = y[entity] 742 743 entity_x = {} 744 for x_var in x: 745 entity_x[x_var] = x[x_var][entity] 746 747 from pandas.stats.interface import ols 748 results[entity] = ols(y=entity_y, 749 x=entity_x, 750 window_type=window_type, 751 window=window, 752 min_periods=min_periods, 753 intercept=intercept, 754 nw_lags=nw_lags, 755 nw_overlap=nw_overlap) 756 757 self.results = results 758 759 760 def _var_beta_panel(y, x, beta, xx, rmse, cluster_axis, 761 nw_lags, nobs, df, nw_overlap): 762 xx_inv = math.inv(xx) 763 764 yv = y.values 765 766 if cluster_axis is None: 767 if nw_lags is None: 768 return xx_inv * (rmse ** 2) 769 else: 770 resid = yv - np.dot(x.values, beta) 771 m = (x.values.T * resid).T 772 773 xeps = math.newey_west(m, nw_lags, nobs, df, nw_overlap) 774 775 return np.dot(xx_inv, np.dot(xeps, xx_inv)) 776 else: 777 Xb = np.dot(x.values, beta).reshape((len(x.values), 1)) 778 resid = DataFrame(yv[:, None] - Xb, index=y.index, columns=['resid']) 779 780 if cluster_axis == 1: 781 x = x.swaplevel(0, 1).sortlevel(0) 782 resid = resid.swaplevel(0, 1).sortlevel(0) 783 784 m = _group_agg(x.values * resid.values, x.index._bounds, 785 lambda x: np.sum(x, axis=0)) 786 787 if nw_lags is None: 788 nw_lags = 0 789 790 xox = 0 791 for i in range(len(x.index.levels[0])): 792 xox += math.newey_west(m[i: i + 1], nw_lags, 793 nobs, df, nw_overlap) 794 795 return np.dot(xx_inv, np.dot(xox, xx_inv)) 796 797 def _group_agg(values, bounds, f): 798 """ 799 R-style aggregator 800 801 Parameters 802 ---------- 803 values : N-length or N x K ndarray 804 bounds : B-length ndarray 805 f : ndarray aggregation function 806 807 Returns 808 ------- 809 ndarray with same length as bounds array 810 """ 811 if values.ndim == 1: 812 N = len(values) 813 result = np.empty(len(bounds), dtype=float) 814 elif values.ndim == 2: 815 N, K = values.shape 816 result = np.empty((len(bounds), K), dtype=float) 817 818 testagg = f(values[:min(1, len(values))]) 819 if isinstance(testagg, np.ndarray) and testagg.ndim == 2: 820 raise AssertionError('Function must reduce') 821 822 for i, left_bound in enumerate(bounds): 823 if i == len(bounds) - 1: 824 right_bound = N 825 else: 826 right_bound = bounds[i + 1] 827 828 result[i] = f(values[left_bound:right_bound]) 829 830 return result 831 832 def _xx_time_effects(x, y): 833 """ 834 Returns X'X - (X'T) (T'T)^-1 (T'X) 835 """ 836 # X'X 837 xx = np.dot(x.values.T, x.values) 838 xt = x.sum(level=0).values 839 840 count = y.unstack().count(1).values 841 selector = count > 0 842 843 # X'X - (T'T)^-1 (T'X) 844 xt = xt[selector] 845 count = count[selector] 846 847 return xx - np.dot(xt.T / count, xt) 848 [end of pandas/stats/plm.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. 
Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
abd5333e7a3332921707888de9621c52dd3408e6
newey-west adjustment not working properly in OLS

It looks newey-west adjustment is not working properly in OLS when 'cluster' is set to 'time' or 'entity'. Specifically, pandas.stats.plm.py lines 791-794 don't have any effect. Should that be replaced with: xox = math.newey_west(m, nw_lags, nobs, df, nw_overlap)?

Here is some code to reproduce the issue.

import numpy
from pylab import *
from pandas import *

T = 100
panel_size = 3
data_dimensions = [T, panel_size]

xs_per_y = WidePanel({
    'predictor a' : numpy.random.normal(size=data_dimensions),
    'predictor b' : numpy.random.normal(size=data_dimensions)
})

# y = B_a + B_b + noise
ys = xs_per_y['predictor a'] + xs_per_y['predictor b'] + numpy.random.normal(size=data_dimensions)

print ols(y=ys, x=xs_per_y, pool=True, cluster = 'time')

# we expect the following t-stats to be smaller, but they are the same as the previous OLS
print ols(y=ys, x=xs_per_y, pool=True, cluster = 'time', nw_lags=10)
2014-09-03T18:47:48Z
<patch> </patch>
[]
[]
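The record above (pandas-dev__pandas-8170) reports that the per-group loop in ``_var_beta_panel`` leaves ``nw_lags`` without effect when clustering, and the reporter suggests a single call of the form ``xox = math.newey_west(m, nw_lags, nobs, df, nw_overlap)``. Below is a minimal standalone NumPy sketch of the Bartlett-weighted estimator that the record's ``math.newey_west`` implements — an editor's illustration under that reading, not the pandas implementation:

```python
import numpy as np

def newey_west(m, max_lags, nobs, df):
    """Bartlett-weighted Newey-West covariance of the K score columns of m (N x K)."""
    xeps = m.T @ m
    for lag in range(1, max_lags + 1):
        weight = 1.0 - lag / (max_lags + 1)      # Bartlett kernel weight
        auto_cov = m[:-lag].T @ m[lag:]          # lag-th autocovariance of the scores
        xeps += weight * (auto_cov + auto_cov.T)
    return xeps * nobs / (nobs - df)

# Toy scores m = X * resid for a 2-regressor model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
resid = rng.normal(size=200)
m = X * resid[:, None]

# With nw_lags > 0 the adjusted covariance differs from the lag-0 one, which is
# the change the reporter expected to see in the clustered t-statistics.
print(newey_west(m, 0, 200, 2))
print(newey_west(m, 10, 200, 2))
```

Calling the estimator on a single-row slice, as the clustered branch in the record does, reduces to the plain outer product for any lag count, which is why the reported t-statistics do not move.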
ipython__ipython-4541
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> DOC: document how to change ipcontroller-engine.json in case controller was started with --ip="*" When the controller was started and bound to "*" (as the documentation of the config file says), the ipcontroller-engine.json file will include the following line: ``` "interface": "tcp://*", "location": "127.0.0.1" ``` At least in my case it was not enough to change the location to the controller IP, but I had to include the controller IP via the interface line: ``` "interface": "tcp://xx.xx.xx.xx", "location": "xx.xx.xx.xx" ``` It would both be nice to document how to specify the IP of the controller in the tutorial (http://ipython.org/ipython-doc/dev/parallel/parallel_process.html#starting-the-controller-and-engines-on-different-hosts) and/or change the logic to find the IP to bind to in the engine to take the location argument into account when the interface specifies "*". </issue> <code> [start of README.rst] 1 =========================================== 2 IPython: Productive Interactive Computing 3 =========================================== 4 5 Overview 6 ======== 7 8 Welcome to IPython. Our full documentation is available on `our website 9 <http://ipython.org/documentation.html>`_; if you downloaded a built source 10 distribution the ``docs/source`` directory contains the plaintext version of 11 these manuals. If you have Sphinx installed, you can build them by typing 12 ``cd docs; make html`` for local browsing. 13 14 15 Dependencies and supported Python versions 16 ========================================== 17 18 For full details, see the installation section of the manual. The basic parts 19 of IPython only need the Python standard library, but much of its more advanced 20 functionality requires extra packages. 21 22 Officially, IPython requires Python version 2.7, or 3.3 and above. 23 IPython 1.x is the last IPython version to support Python 2.6 and 3.2. 24 25 26 Instant running 27 =============== 28 29 You can run IPython from this directory without even installing it system-wide 30 by typing at the terminal:: 31 32 $ python -m IPython 33 34 35 Development installation 36 ======================== 37 38 If you want to hack on certain parts, e.g. the IPython notebook, in a clean 39 environment (such as a virtualenv) you can use ``pip`` to grab the necessary 40 dependencies quickly:: 41 42 $ git clone --recursive https://github.com/ipython/ipython.git 43 $ cd ipython 44 $ pip install -e ".[notebook]" 45 46 This installs the necessary packages and symlinks IPython into your current 47 environment so that you can work on your local repo copy and run it from anywhere:: 48 49 $ ipython notebook 50 51 The same process applies for other parts, such as the qtconsole (the 52 ``extras_require`` attribute in the setup.py file lists all the possibilities). 53 54 Git Hooks and Submodules 55 ************************ 56 57 IPython now uses git submodules to ship its javascript dependencies. 58 If you run IPython from git master, you may need to update submodules once in a while with:: 59 60 $ git submodule update 61 62 or:: 63 64 $ python setup.py submodule 65 66 We have some git hooks for helping keep your submodules always in sync, 67 see our ``git-hooks`` directory for more info. 68 [end of README.rst] [start of IPython/parallel/apps/ipcontrollerapp.py] 1 #!/usr/bin/env python 2 # encoding: utf-8 3 """ 4 The IPython controller application. 
5 6 Authors: 7 8 * Brian Granger 9 * MinRK 10 11 """ 12 13 #----------------------------------------------------------------------------- 14 # Copyright (C) 2008 The IPython Development Team 15 # 16 # Distributed under the terms of the BSD License. The full license is in 17 # the file COPYING, distributed as part of this software. 18 #----------------------------------------------------------------------------- 19 20 #----------------------------------------------------------------------------- 21 # Imports 22 #----------------------------------------------------------------------------- 23 24 from __future__ import with_statement 25 26 import json 27 import os 28 import stat 29 import sys 30 31 from multiprocessing import Process 32 from signal import signal, SIGINT, SIGABRT, SIGTERM 33 34 import zmq 35 from zmq.devices import ProcessMonitoredQueue 36 from zmq.log.handlers import PUBHandler 37 38 from IPython.core.profiledir import ProfileDir 39 40 from IPython.parallel.apps.baseapp import ( 41 BaseParallelApplication, 42 base_aliases, 43 base_flags, 44 catch_config_error, 45 ) 46 from IPython.utils.importstring import import_item 47 from IPython.utils.localinterfaces import localhost, public_ips 48 from IPython.utils.traitlets import Instance, Unicode, Bool, List, Dict, TraitError 49 50 from IPython.kernel.zmq.session import ( 51 Session, session_aliases, session_flags, default_secure 52 ) 53 54 from IPython.parallel.controller.heartmonitor import HeartMonitor 55 from IPython.parallel.controller.hub import HubFactory 56 from IPython.parallel.controller.scheduler import TaskScheduler,launch_scheduler 57 from IPython.parallel.controller.dictdb import DictDB 58 59 from IPython.parallel.util import split_url, disambiguate_url, set_hwm 60 61 # conditional import of SQLiteDB / MongoDB backend class 62 real_dbs = [] 63 64 try: 65 from IPython.parallel.controller.sqlitedb import SQLiteDB 66 except ImportError: 67 pass 68 else: 69 real_dbs.append(SQLiteDB) 70 71 try: 72 from IPython.parallel.controller.mongodb import MongoDB 73 except ImportError: 74 pass 75 else: 76 real_dbs.append(MongoDB) 77 78 79 80 #----------------------------------------------------------------------------- 81 # Module level variables 82 #----------------------------------------------------------------------------- 83 84 85 _description = """Start the IPython controller for parallel computing. 86 87 The IPython controller provides a gateway between the IPython engines and 88 clients. The controller needs to be started before the engines and can be 89 configured using command line options or using a cluster directory. Cluster 90 directories contain config, log and security files and are usually located in 91 your ipython directory and named as "profile_name". See the `profile` 92 and `profile-dir` options for details. 
93 """ 94 95 _examples = """ 96 ipcontroller --ip=192.168.0.1 --port=1000 # listen on ip, port for engines 97 ipcontroller --scheme=pure # use the pure zeromq scheduler 98 """ 99 100 101 #----------------------------------------------------------------------------- 102 # The main application 103 #----------------------------------------------------------------------------- 104 flags = {} 105 flags.update(base_flags) 106 flags.update({ 107 'usethreads' : ( {'IPControllerApp' : {'use_threads' : True}}, 108 'Use threads instead of processes for the schedulers'), 109 'sqlitedb' : ({'HubFactory' : {'db_class' : 'IPython.parallel.controller.sqlitedb.SQLiteDB'}}, 110 'use the SQLiteDB backend'), 111 'mongodb' : ({'HubFactory' : {'db_class' : 'IPython.parallel.controller.mongodb.MongoDB'}}, 112 'use the MongoDB backend'), 113 'dictdb' : ({'HubFactory' : {'db_class' : 'IPython.parallel.controller.dictdb.DictDB'}}, 114 'use the in-memory DictDB backend'), 115 'nodb' : ({'HubFactory' : {'db_class' : 'IPython.parallel.controller.dictdb.NoDB'}}, 116 """use dummy DB backend, which doesn't store any information. 117 118 This is the default as of IPython 0.13. 119 120 To enable delayed or repeated retrieval of results from the Hub, 121 select one of the true db backends. 122 """), 123 'reuse' : ({'IPControllerApp' : {'reuse_files' : True}}, 124 'reuse existing json connection files'), 125 'restore' : ({'IPControllerApp' : {'restore_engines' : True, 'reuse_files' : True}}, 126 'Attempt to restore engines from a JSON file. ' 127 'For use when resuming a crashed controller'), 128 }) 129 130 flags.update(session_flags) 131 132 aliases = dict( 133 ssh = 'IPControllerApp.ssh_server', 134 enginessh = 'IPControllerApp.engine_ssh_server', 135 location = 'IPControllerApp.location', 136 137 url = 'HubFactory.url', 138 ip = 'HubFactory.ip', 139 transport = 'HubFactory.transport', 140 port = 'HubFactory.regport', 141 142 ping = 'HeartMonitor.period', 143 144 scheme = 'TaskScheduler.scheme_name', 145 hwm = 'TaskScheduler.hwm', 146 ) 147 aliases.update(base_aliases) 148 aliases.update(session_aliases) 149 150 class IPControllerApp(BaseParallelApplication): 151 152 name = u'ipcontroller' 153 description = _description 154 examples = _examples 155 classes = [ProfileDir, Session, HubFactory, TaskScheduler, HeartMonitor, DictDB] + real_dbs 156 157 # change default to True 158 auto_create = Bool(True, config=True, 159 help="""Whether to create profile dir if it doesn't exist.""") 160 161 reuse_files = Bool(False, config=True, 162 help="""Whether to reuse existing json connection files. 163 If False, connection files will be removed on a clean exit. 164 """ 165 ) 166 restore_engines = Bool(False, config=True, 167 help="""Reload engine state from JSON file 168 """ 169 ) 170 ssh_server = Unicode(u'', config=True, 171 help="""ssh url for clients to use when connecting to the Controller 172 processes. It should be of the form: [user@]server[:port]. The 173 Controller's listening addresses must be accessible from the ssh server""", 174 ) 175 engine_ssh_server = Unicode(u'', config=True, 176 help="""ssh url for engines to use when connecting to the Controller 177 processes. It should be of the form: [user@]server[:port]. 
The 178 Controller's listening addresses must be accessible from the ssh server""", 179 ) 180 location = Unicode(u'', config=True, 181 help="""The external IP or domain name of the Controller, used for disambiguating 182 engine and client connections.""", 183 ) 184 import_statements = List([], config=True, 185 help="import statements to be run at startup. Necessary in some environments" 186 ) 187 188 use_threads = Bool(False, config=True, 189 help='Use threads instead of processes for the schedulers', 190 ) 191 192 engine_json_file = Unicode('ipcontroller-engine.json', config=True, 193 help="JSON filename where engine connection info will be stored.") 194 client_json_file = Unicode('ipcontroller-client.json', config=True, 195 help="JSON filename where client connection info will be stored.") 196 197 def _cluster_id_changed(self, name, old, new): 198 super(IPControllerApp, self)._cluster_id_changed(name, old, new) 199 self.engine_json_file = "%s-engine.json" % self.name 200 self.client_json_file = "%s-client.json" % self.name 201 202 203 # internal 204 children = List() 205 mq_class = Unicode('zmq.devices.ProcessMonitoredQueue') 206 207 def _use_threads_changed(self, name, old, new): 208 self.mq_class = 'zmq.devices.%sMonitoredQueue'%('Thread' if new else 'Process') 209 210 write_connection_files = Bool(True, 211 help="""Whether to write connection files to disk. 212 True in all cases other than runs with `reuse_files=True` *after the first* 213 """ 214 ) 215 216 aliases = Dict(aliases) 217 flags = Dict(flags) 218 219 220 def save_connection_dict(self, fname, cdict): 221 """save a connection dict to json file.""" 222 c = self.config 223 url = cdict['registration'] 224 location = cdict['location'] 225 226 if not location: 227 if public_ips(): 228 location = public_ips()[-1] 229 else: 230 self.log.warn("Could not identify this machine's IP, assuming %s." 231 " You may need to specify '--location=<external_ip_address>' to help" 232 " IPython decide when to connect via loopback." 
% localhost() ) 233 location = localhost() 234 cdict['location'] = location 235 fname = os.path.join(self.profile_dir.security_dir, fname) 236 self.log.info("writing connection info to %s", fname) 237 with open(fname, 'w') as f: 238 f.write(json.dumps(cdict, indent=2)) 239 os.chmod(fname, stat.S_IRUSR|stat.S_IWUSR) 240 241 def load_config_from_json(self): 242 """load config from existing json connector files.""" 243 c = self.config 244 self.log.debug("loading config from JSON") 245 246 # load engine config 247 248 fname = os.path.join(self.profile_dir.security_dir, self.engine_json_file) 249 self.log.info("loading connection info from %s", fname) 250 with open(fname) as f: 251 ecfg = json.loads(f.read()) 252 253 # json gives unicode, Session.key wants bytes 254 c.Session.key = ecfg['key'].encode('ascii') 255 256 xport,ip = ecfg['interface'].split('://') 257 258 c.HubFactory.engine_ip = ip 259 c.HubFactory.engine_transport = xport 260 261 self.location = ecfg['location'] 262 if not self.engine_ssh_server: 263 self.engine_ssh_server = ecfg['ssh'] 264 265 # load client config 266 267 fname = os.path.join(self.profile_dir.security_dir, self.client_json_file) 268 self.log.info("loading connection info from %s", fname) 269 with open(fname) as f: 270 ccfg = json.loads(f.read()) 271 272 for key in ('key', 'registration', 'pack', 'unpack', 'signature_scheme'): 273 assert ccfg[key] == ecfg[key], "mismatch between engine and client info: %r" % key 274 275 xport,addr = ccfg['interface'].split('://') 276 277 c.HubFactory.client_transport = xport 278 c.HubFactory.client_ip = ip 279 if not self.ssh_server: 280 self.ssh_server = ccfg['ssh'] 281 282 # load port config: 283 c.HubFactory.regport = ecfg['registration'] 284 c.HubFactory.hb = (ecfg['hb_ping'], ecfg['hb_pong']) 285 c.HubFactory.control = (ccfg['control'], ecfg['control']) 286 c.HubFactory.mux = (ccfg['mux'], ecfg['mux']) 287 c.HubFactory.task = (ccfg['task'], ecfg['task']) 288 c.HubFactory.iopub = (ccfg['iopub'], ecfg['iopub']) 289 c.HubFactory.notifier_port = ccfg['notification'] 290 291 def cleanup_connection_files(self): 292 if self.reuse_files: 293 self.log.debug("leaving JSON connection files for reuse") 294 return 295 self.log.debug("cleaning up JSON connection files") 296 for f in (self.client_json_file, self.engine_json_file): 297 f = os.path.join(self.profile_dir.security_dir, f) 298 try: 299 os.remove(f) 300 except Exception as e: 301 self.log.error("Failed to cleanup connection file: %s", e) 302 else: 303 self.log.debug(u"removed %s", f) 304 305 def load_secondary_config(self): 306 """secondary config, loading from JSON and setting defaults""" 307 if self.reuse_files: 308 try: 309 self.load_config_from_json() 310 except (AssertionError,IOError) as e: 311 self.log.error("Could not load config from JSON: %s" % e) 312 else: 313 # successfully loaded config from JSON, and reuse=True 314 # no need to wite back the same file 315 self.write_connection_files = False 316 317 # switch Session.key default to secure 318 default_secure(self.config) 319 self.log.debug("Config changed") 320 self.log.debug(repr(self.config)) 321 322 def init_hub(self): 323 c = self.config 324 325 self.do_import_statements() 326 327 try: 328 self.factory = HubFactory(config=c, log=self.log) 329 # self.start_logging() 330 self.factory.init_hub() 331 except TraitError: 332 raise 333 except Exception: 334 self.log.error("Couldn't construct the Controller", exc_info=True) 335 self.exit(1) 336 337 if self.write_connection_files: 338 # save to new json config files 339 f = 
self.factory 340 base = { 341 'key' : f.session.key.decode('ascii'), 342 'location' : self.location, 343 'pack' : f.session.packer, 344 'unpack' : f.session.unpacker, 345 'signature_scheme' : f.session.signature_scheme, 346 } 347 348 cdict = {'ssh' : self.ssh_server} 349 cdict.update(f.client_info) 350 cdict.update(base) 351 self.save_connection_dict(self.client_json_file, cdict) 352 353 edict = {'ssh' : self.engine_ssh_server} 354 edict.update(f.engine_info) 355 edict.update(base) 356 self.save_connection_dict(self.engine_json_file, edict) 357 358 fname = "engines%s.json" % self.cluster_id 359 self.factory.hub.engine_state_file = os.path.join(self.profile_dir.log_dir, fname) 360 if self.restore_engines: 361 self.factory.hub._load_engine_state() 362 363 def init_schedulers(self): 364 children = self.children 365 mq = import_item(str(self.mq_class)) 366 367 f = self.factory 368 ident = f.session.bsession 369 # disambiguate url, in case of * 370 monitor_url = disambiguate_url(f.monitor_url) 371 # maybe_inproc = 'inproc://monitor' if self.use_threads else monitor_url 372 # IOPub relay (in a Process) 373 q = mq(zmq.PUB, zmq.SUB, zmq.PUB, b'N/A',b'iopub') 374 q.bind_in(f.client_url('iopub')) 375 q.setsockopt_in(zmq.IDENTITY, ident + b"_iopub") 376 q.bind_out(f.engine_url('iopub')) 377 q.setsockopt_out(zmq.SUBSCRIBE, b'') 378 q.connect_mon(monitor_url) 379 q.daemon=True 380 children.append(q) 381 382 # Multiplexer Queue (in a Process) 383 q = mq(zmq.ROUTER, zmq.ROUTER, zmq.PUB, b'in', b'out') 384 385 q.bind_in(f.client_url('mux')) 386 q.setsockopt_in(zmq.IDENTITY, b'mux_in') 387 q.bind_out(f.engine_url('mux')) 388 q.setsockopt_out(zmq.IDENTITY, b'mux_out') 389 q.connect_mon(monitor_url) 390 q.daemon=True 391 children.append(q) 392 393 # Control Queue (in a Process) 394 q = mq(zmq.ROUTER, zmq.ROUTER, zmq.PUB, b'incontrol', b'outcontrol') 395 q.bind_in(f.client_url('control')) 396 q.setsockopt_in(zmq.IDENTITY, b'control_in') 397 q.bind_out(f.engine_url('control')) 398 q.setsockopt_out(zmq.IDENTITY, b'control_out') 399 q.connect_mon(monitor_url) 400 q.daemon=True 401 children.append(q) 402 if 'TaskScheduler.scheme_name' in self.config: 403 scheme = self.config.TaskScheduler.scheme_name 404 else: 405 scheme = TaskScheduler.scheme_name.get_default_value() 406 # Task Queue (in a Process) 407 if scheme == 'pure': 408 self.log.warn("task::using pure DEALER Task scheduler") 409 q = mq(zmq.ROUTER, zmq.DEALER, zmq.PUB, b'intask', b'outtask') 410 # q.setsockopt_out(zmq.HWM, hub.hwm) 411 q.bind_in(f.client_url('task')) 412 q.setsockopt_in(zmq.IDENTITY, b'task_in') 413 q.bind_out(f.engine_url('task')) 414 q.setsockopt_out(zmq.IDENTITY, b'task_out') 415 q.connect_mon(monitor_url) 416 q.daemon=True 417 children.append(q) 418 elif scheme == 'none': 419 self.log.warn("task::using no Task scheduler") 420 421 else: 422 self.log.info("task::using Python %s Task scheduler"%scheme) 423 sargs = (f.client_url('task'), f.engine_url('task'), 424 monitor_url, disambiguate_url(f.client_url('notification')), 425 disambiguate_url(f.client_url('registration')), 426 ) 427 kwargs = dict(logname='scheduler', loglevel=self.log_level, 428 log_url = self.log_url, config=dict(self.config)) 429 if 'Process' in self.mq_class: 430 # run the Python scheduler in a Process 431 q = Process(target=launch_scheduler, args=sargs, kwargs=kwargs) 432 q.daemon=True 433 children.append(q) 434 else: 435 # single-threaded Controller 436 kwargs['in_thread'] = True 437 launch_scheduler(*sargs, **kwargs) 438 439 # set unlimited HWM for all relay devices 
440 if hasattr(zmq, 'SNDHWM'): 441 q = children[0] 442 q.setsockopt_in(zmq.RCVHWM, 0) 443 q.setsockopt_out(zmq.SNDHWM, 0) 444 445 for q in children[1:]: 446 if not hasattr(q, 'setsockopt_in'): 447 continue 448 q.setsockopt_in(zmq.SNDHWM, 0) 449 q.setsockopt_in(zmq.RCVHWM, 0) 450 q.setsockopt_out(zmq.SNDHWM, 0) 451 q.setsockopt_out(zmq.RCVHWM, 0) 452 q.setsockopt_mon(zmq.SNDHWM, 0) 453 454 455 def terminate_children(self): 456 child_procs = [] 457 for child in self.children: 458 if isinstance(child, ProcessMonitoredQueue): 459 child_procs.append(child.launcher) 460 elif isinstance(child, Process): 461 child_procs.append(child) 462 if child_procs: 463 self.log.critical("terminating children...") 464 for child in child_procs: 465 try: 466 child.terminate() 467 except OSError: 468 # already dead 469 pass 470 471 def handle_signal(self, sig, frame): 472 self.log.critical("Received signal %i, shutting down", sig) 473 self.terminate_children() 474 self.loop.stop() 475 476 def init_signal(self): 477 for sig in (SIGINT, SIGABRT, SIGTERM): 478 signal(sig, self.handle_signal) 479 480 def do_import_statements(self): 481 statements = self.import_statements 482 for s in statements: 483 try: 484 self.log.msg("Executing statement: '%s'" % s) 485 exec(s, globals(), locals()) 486 except: 487 self.log.msg("Error running statement: %s" % s) 488 489 def forward_logging(self): 490 if self.log_url: 491 self.log.info("Forwarding logging to %s"%self.log_url) 492 context = zmq.Context.instance() 493 lsock = context.socket(zmq.PUB) 494 lsock.connect(self.log_url) 495 handler = PUBHandler(lsock) 496 handler.root_topic = 'controller' 497 handler.setLevel(self.log_level) 498 self.log.addHandler(handler) 499 500 @catch_config_error 501 def initialize(self, argv=None): 502 super(IPControllerApp, self).initialize(argv) 503 self.forward_logging() 504 self.load_secondary_config() 505 self.init_hub() 506 self.init_schedulers() 507 508 def start(self): 509 # Start the subprocesses: 510 self.factory.start() 511 # children must be started before signals are setup, 512 # otherwise signal-handling will fire multiple times 513 for child in self.children: 514 child.start() 515 self.init_signal() 516 517 self.write_pid_file(overwrite=True) 518 519 try: 520 self.factory.loop.start() 521 except KeyboardInterrupt: 522 self.log.critical("Interrupted, Exiting...\n") 523 finally: 524 self.cleanup_connection_files() 525 526 527 def launch_new_instance(*args, **kwargs): 528 """Create and run the IPython controller""" 529 if sys.platform == 'win32': 530 # make sure we don't get called from a multiprocessing subprocess 531 # this can result in infinite Controllers being started on Windows 532 # which doesn't have a proper fork, so multiprocessing is wonky 533 534 # this only comes up when IPython has been installed using vanilla 535 # setuptools, and *not* distribute. 536 import multiprocessing 537 p = multiprocessing.current_process() 538 # the main process has name 'MainProcess' 539 # subprocesses will have names like 'Process-1' 540 if p.name != 'MainProcess': 541 # we are a subprocess, don't start another Controller! 
542 return 543 return IPControllerApp.launch_instance(*args, **kwargs) 544 545 546 if __name__ == '__main__': 547 launch_new_instance() 548 [end of IPython/parallel/apps/ipcontrollerapp.py] [start of IPython/parallel/apps/ipengineapp.py] 1 #!/usr/bin/env python 2 # encoding: utf-8 3 """ 4 The IPython engine application 5 6 Authors: 7 8 * Brian Granger 9 * MinRK 10 11 """ 12 13 #----------------------------------------------------------------------------- 14 # Copyright (C) 2008-2011 The IPython Development Team 15 # 16 # Distributed under the terms of the BSD License. The full license is in 17 # the file COPYING, distributed as part of this software. 18 #----------------------------------------------------------------------------- 19 20 #----------------------------------------------------------------------------- 21 # Imports 22 #----------------------------------------------------------------------------- 23 24 import json 25 import os 26 import sys 27 import time 28 29 import zmq 30 from zmq.eventloop import ioloop 31 32 from IPython.core.profiledir import ProfileDir 33 from IPython.parallel.apps.baseapp import ( 34 BaseParallelApplication, 35 base_aliases, 36 base_flags, 37 catch_config_error, 38 ) 39 from IPython.kernel.zmq.log import EnginePUBHandler 40 from IPython.kernel.zmq.ipkernel import Kernel 41 from IPython.kernel.zmq.kernelapp import IPKernelApp 42 from IPython.kernel.zmq.session import ( 43 Session, session_aliases, session_flags 44 ) 45 from IPython.kernel.zmq.zmqshell import ZMQInteractiveShell 46 47 from IPython.config.configurable import Configurable 48 49 from IPython.parallel.engine.engine import EngineFactory 50 from IPython.parallel.util import disambiguate_ip_address 51 52 from IPython.utils.importstring import import_item 53 from IPython.utils.py3compat import cast_bytes 54 from IPython.utils.traitlets import Bool, Unicode, Dict, List, Float, Instance 55 56 57 #----------------------------------------------------------------------------- 58 # Module level variables 59 #----------------------------------------------------------------------------- 60 61 _description = """Start an IPython engine for parallel computing. 62 63 IPython engines run in parallel and perform computations on behalf of a client 64 and controller. A controller needs to be started before the engines. The 65 engine can be configured using command line options or using a cluster 66 directory. Cluster directories contain config, log and security files and are 67 usually located in your ipython directory and named as "profile_name". 68 See the `profile` and `profile-dir` options for details. 69 """ 70 71 _examples = """ 72 ipengine --ip=192.168.0.1 --port=1000 # connect to hub at ip and port 73 ipengine --log-to-file --log-level=DEBUG # log to a file with DEBUG verbosity 74 """ 75 76 #----------------------------------------------------------------------------- 77 # MPI configuration 78 #----------------------------------------------------------------------------- 79 80 mpi4py_init = """from mpi4py import MPI as mpi 81 mpi.size = mpi.COMM_WORLD.Get_size() 82 mpi.rank = mpi.COMM_WORLD.Get_rank() 83 """ 84 85 86 pytrilinos_init = """from PyTrilinos import Epetra 87 class SimpleStruct: 88 pass 89 mpi = SimpleStruct() 90 mpi.rank = 0 91 mpi.size = 0 92 """ 93 94 class MPI(Configurable): 95 """Configurable for MPI initialization""" 96 use = Unicode('', config=True, 97 help='How to enable MPI (mpi4py, pytrilinos, or empty string to disable).' 
98 ) 99 100 def _use_changed(self, name, old, new): 101 # load default init script if it's not set 102 if not self.init_script: 103 self.init_script = self.default_inits.get(new, '') 104 105 init_script = Unicode('', config=True, 106 help="Initialization code for MPI") 107 108 default_inits = Dict({'mpi4py' : mpi4py_init, 'pytrilinos':pytrilinos_init}, 109 config=True) 110 111 112 #----------------------------------------------------------------------------- 113 # Main application 114 #----------------------------------------------------------------------------- 115 aliases = dict( 116 file = 'IPEngineApp.url_file', 117 c = 'IPEngineApp.startup_command', 118 s = 'IPEngineApp.startup_script', 119 120 url = 'EngineFactory.url', 121 ssh = 'EngineFactory.sshserver', 122 sshkey = 'EngineFactory.sshkey', 123 ip = 'EngineFactory.ip', 124 transport = 'EngineFactory.transport', 125 port = 'EngineFactory.regport', 126 location = 'EngineFactory.location', 127 128 timeout = 'EngineFactory.timeout', 129 130 mpi = 'MPI.use', 131 132 ) 133 aliases.update(base_aliases) 134 aliases.update(session_aliases) 135 flags = {} 136 flags.update(base_flags) 137 flags.update(session_flags) 138 139 class IPEngineApp(BaseParallelApplication): 140 141 name = 'ipengine' 142 description = _description 143 examples = _examples 144 classes = List([ZMQInteractiveShell, ProfileDir, Session, EngineFactory, Kernel, MPI]) 145 146 startup_script = Unicode(u'', config=True, 147 help='specify a script to be run at startup') 148 startup_command = Unicode('', config=True, 149 help='specify a command to be run at startup') 150 151 url_file = Unicode(u'', config=True, 152 help="""The full location of the file containing the connection information for 153 the controller. If this is not given, the file must be in the 154 security directory of the cluster directory. This location is 155 resolved using the `profile` or `profile_dir` options.""", 156 ) 157 wait_for_url_file = Float(5, config=True, 158 help="""The maximum number of seconds to wait for url_file to exist. 159 This is useful for batch-systems and shared-filesystems where the 160 controller and engine are started at the same time and it 161 may take a moment for the controller to write the connector files.""") 162 163 url_file_name = Unicode(u'ipcontroller-engine.json', config=True) 164 165 def _cluster_id_changed(self, name, old, new): 166 if new: 167 base = 'ipcontroller-%s' % new 168 else: 169 base = 'ipcontroller' 170 self.url_file_name = "%s-engine.json" % base 171 172 log_url = Unicode('', config=True, 173 help="""The URL for the iploggerapp instance, for forwarding 174 logging to a central location.""") 175 176 # an IPKernelApp instance, used to setup listening for shell frontends 177 kernel_app = Instance(IPKernelApp) 178 179 aliases = Dict(aliases) 180 flags = Dict(flags) 181 182 @property 183 def kernel(self): 184 """allow access to the Kernel object, so I look like IPKernelApp""" 185 return self.engine.kernel 186 187 def find_url_file(self): 188 """Set the url file. 189 190 Here we don't try to actually see if it exists for is valid as that 191 is hadled by the connection logic. 192 """ 193 config = self.config 194 # Find the actual controller key file 195 if not self.url_file: 196 self.url_file = os.path.join( 197 self.profile_dir.security_dir, 198 self.url_file_name 199 ) 200 201 def load_connector_file(self): 202 """load config from a JSON connector file, 203 at a *lower* priority than command-line/config files. 
204 """ 205 206 self.log.info("Loading url_file %r", self.url_file) 207 config = self.config 208 209 with open(self.url_file) as f: 210 num_tries = 0 211 max_tries = 5 212 d = "" 213 while not d: 214 try: 215 d = json.loads(f.read()) 216 except ValueError: 217 if num_tries > max_tries: 218 raise 219 num_tries += 1 220 time.sleep(0.5) 221 222 # allow hand-override of location for disambiguation 223 # and ssh-server 224 if 'EngineFactory.location' not in config: 225 config.EngineFactory.location = d['location'] 226 if 'EngineFactory.sshserver' not in config: 227 config.EngineFactory.sshserver = d.get('ssh') 228 229 location = config.EngineFactory.location 230 231 proto, ip = d['interface'].split('://') 232 ip = disambiguate_ip_address(ip, location) 233 d['interface'] = '%s://%s' % (proto, ip) 234 235 # DO NOT allow override of basic URLs, serialization, or key 236 # JSON file takes top priority there 237 config.Session.key = cast_bytes(d['key']) 238 config.Session.signature_scheme = d['signature_scheme'] 239 240 config.EngineFactory.url = d['interface'] + ':%i' % d['registration'] 241 242 config.Session.packer = d['pack'] 243 config.Session.unpacker = d['unpack'] 244 245 self.log.debug("Config changed:") 246 self.log.debug("%r", config) 247 self.connection_info = d 248 249 def bind_kernel(self, **kwargs): 250 """Promote engine to listening kernel, accessible to frontends.""" 251 if self.kernel_app is not None: 252 return 253 254 self.log.info("Opening ports for direct connections as an IPython kernel") 255 256 kernel = self.kernel 257 258 kwargs.setdefault('config', self.config) 259 kwargs.setdefault('log', self.log) 260 kwargs.setdefault('profile_dir', self.profile_dir) 261 kwargs.setdefault('session', self.engine.session) 262 263 app = self.kernel_app = IPKernelApp(**kwargs) 264 265 # allow IPKernelApp.instance(): 266 IPKernelApp._instance = app 267 268 app.init_connection_file() 269 # relevant contents of init_sockets: 270 271 app.shell_port = app._bind_socket(kernel.shell_streams[0], app.shell_port) 272 app.log.debug("shell ROUTER Channel on port: %i", app.shell_port) 273 274 app.iopub_port = app._bind_socket(kernel.iopub_socket, app.iopub_port) 275 app.log.debug("iopub PUB Channel on port: %i", app.iopub_port) 276 277 kernel.stdin_socket = self.engine.context.socket(zmq.ROUTER) 278 app.stdin_port = app._bind_socket(kernel.stdin_socket, app.stdin_port) 279 app.log.debug("stdin ROUTER Channel on port: %i", app.stdin_port) 280 281 # start the heartbeat, and log connection info: 282 283 app.init_heartbeat() 284 285 app.log_connection_info() 286 app.write_connection_file() 287 288 289 def init_engine(self): 290 # This is the working dir by now. 291 sys.path.insert(0, '') 292 config = self.config 293 # print config 294 self.find_url_file() 295 296 # was the url manually specified? 
297 keys = set(self.config.EngineFactory.keys()) 298 keys = keys.union(set(self.config.RegistrationFactory.keys())) 299 300 if keys.intersection(set(['ip', 'url', 'port'])): 301 # Connection info was specified, don't wait for the file 302 url_specified = True 303 self.wait_for_url_file = 0 304 else: 305 url_specified = False 306 307 if self.wait_for_url_file and not os.path.exists(self.url_file): 308 self.log.warn("url_file %r not found", self.url_file) 309 self.log.warn("Waiting up to %.1f seconds for it to arrive.", self.wait_for_url_file) 310 tic = time.time() 311 while not os.path.exists(self.url_file) and (time.time()-tic < self.wait_for_url_file): 312 # wait for url_file to exist, or until time limit 313 time.sleep(0.1) 314 315 if os.path.exists(self.url_file): 316 self.load_connector_file() 317 elif not url_specified: 318 self.log.fatal("Fatal: url file never arrived: %s", self.url_file) 319 self.exit(1) 320 321 exec_lines = [] 322 for app in ('IPKernelApp', 'InteractiveShellApp'): 323 if '%s.exec_lines' in config: 324 exec_lines = config.IPKernelApp.exec_lines = config[app].exec_lines 325 break 326 327 exec_files = [] 328 for app in ('IPKernelApp', 'InteractiveShellApp'): 329 if '%s.exec_files' in config: 330 exec_files = config.IPKernelApp.exec_files = config[app].exec_files 331 break 332 333 if self.startup_script: 334 exec_files.append(self.startup_script) 335 if self.startup_command: 336 exec_lines.append(self.startup_command) 337 338 # Create the underlying shell class and Engine 339 # shell_class = import_item(self.master_config.Global.shell_class) 340 # print self.config 341 try: 342 self.engine = EngineFactory(config=config, log=self.log, 343 connection_info=self.connection_info, 344 ) 345 except: 346 self.log.error("Couldn't start the Engine", exc_info=True) 347 self.exit(1) 348 349 def forward_logging(self): 350 if self.log_url: 351 self.log.info("Forwarding logging to %s", self.log_url) 352 context = self.engine.context 353 lsock = context.socket(zmq.PUB) 354 lsock.connect(self.log_url) 355 handler = EnginePUBHandler(self.engine, lsock) 356 handler.setLevel(self.log_level) 357 self.log.addHandler(handler) 358 359 def init_mpi(self): 360 global mpi 361 self.mpi = MPI(parent=self) 362 363 mpi_import_statement = self.mpi.init_script 364 if mpi_import_statement: 365 try: 366 self.log.info("Initializing MPI:") 367 self.log.info(mpi_import_statement) 368 exec(mpi_import_statement, globals()) 369 except: 370 mpi = None 371 else: 372 mpi = None 373 374 @catch_config_error 375 def initialize(self, argv=None): 376 super(IPEngineApp, self).initialize(argv) 377 self.init_mpi() 378 self.init_engine() 379 self.forward_logging() 380 381 def start(self): 382 self.engine.start() 383 try: 384 self.engine.loop.start() 385 except KeyboardInterrupt: 386 self.log.critical("Engine Interrupted, shutting down...\n") 387 388 389 launch_new_instance = IPEngineApp.launch_instance 390 391 392 if __name__ == '__main__': 393 launch_new_instance() 394 395 [end of IPython/parallel/apps/ipengineapp.py] [start of IPython/parallel/engine/engine.py] 1 """A simple engine that talks to a controller over 0MQ. 2 it handles registration, etc. and launches a kernel 3 connected to the Controller's Schedulers. 4 5 Authors: 6 7 * Min RK 8 """ 9 #----------------------------------------------------------------------------- 10 # Copyright (C) 2010-2011 The IPython Development Team 11 # 12 # Distributed under the terms of the BSD License. 
The full license is in 13 # the file COPYING, distributed as part of this software. 14 #----------------------------------------------------------------------------- 15 16 from __future__ import print_function 17 18 import sys 19 import time 20 from getpass import getpass 21 22 import zmq 23 from zmq.eventloop import ioloop, zmqstream 24 25 from IPython.external.ssh import tunnel 26 # internal 27 from IPython.utils.localinterfaces import localhost 28 from IPython.utils.traitlets import ( 29 Instance, Dict, Integer, Type, Float, Integer, Unicode, CBytes, Bool 30 ) 31 from IPython.utils.py3compat import cast_bytes 32 33 from IPython.parallel.controller.heartmonitor import Heart 34 from IPython.parallel.factory import RegistrationFactory 35 from IPython.parallel.util import disambiguate_url 36 37 from IPython.kernel.zmq.session import Message 38 from IPython.kernel.zmq.ipkernel import Kernel 39 from IPython.kernel.zmq.kernelapp import IPKernelApp 40 41 class EngineFactory(RegistrationFactory): 42 """IPython engine""" 43 44 # configurables: 45 out_stream_factory=Type('IPython.kernel.zmq.iostream.OutStream', config=True, 46 help="""The OutStream for handling stdout/err. 47 Typically 'IPython.kernel.zmq.iostream.OutStream'""") 48 display_hook_factory=Type('IPython.kernel.zmq.displayhook.ZMQDisplayHook', config=True, 49 help="""The class for handling displayhook. 50 Typically 'IPython.kernel.zmq.displayhook.ZMQDisplayHook'""") 51 location=Unicode(config=True, 52 help="""The location (an IP address) of the controller. This is 53 used for disambiguating URLs, to determine whether 54 loopback should be used to connect or the public address.""") 55 timeout=Float(5.0, config=True, 56 help="""The time (in seconds) to wait for the Controller to respond 57 to registration requests before giving up.""") 58 max_heartbeat_misses=Integer(50, config=True, 59 help="""The maximum number of times a check for the heartbeat ping of a 60 controller can be missed before shutting down the engine. 61 62 If set to 0, the check is disabled.""") 63 sshserver=Unicode(config=True, 64 help="""The SSH server to use for tunneling connections to the Controller.""") 65 sshkey=Unicode(config=True, 66 help="""The SSH private key file to use when tunneling connections to the Controller.""") 67 paramiko=Bool(sys.platform == 'win32', config=True, 68 help="""Whether to use paramiko instead of openssh for tunnels.""") 69 70 71 # not configurable: 72 connection_info = Dict() 73 user_ns = Dict() 74 id = Integer(allow_none=True) 75 registrar = Instance('zmq.eventloop.zmqstream.ZMQStream') 76 kernel = Instance(Kernel) 77 hb_check_period=Integer() 78 79 # States for the heartbeat monitoring 80 # Initial values for monitored and pinged must satisfy "monitored > pinged == False" so that 81 # during the first check no "missed" ping is reported. Must be floats for Python 3 compatibility. 
82 _hb_last_pinged = 0.0 83 _hb_last_monitored = 0.0 84 _hb_missed_beats = 0 85 # The zmq Stream which receives the pings from the Heart 86 _hb_listener = None 87 88 bident = CBytes() 89 ident = Unicode() 90 def _ident_changed(self, name, old, new): 91 self.bident = cast_bytes(new) 92 using_ssh=Bool(False) 93 94 95 def __init__(self, **kwargs): 96 super(EngineFactory, self).__init__(**kwargs) 97 self.ident = self.session.session 98 99 def init_connector(self): 100 """construct connection function, which handles tunnels.""" 101 self.using_ssh = bool(self.sshkey or self.sshserver) 102 103 if self.sshkey and not self.sshserver: 104 # We are using ssh directly to the controller, tunneling localhost to localhost 105 self.sshserver = self.url.split('://')[1].split(':')[0] 106 107 if self.using_ssh: 108 if tunnel.try_passwordless_ssh(self.sshserver, self.sshkey, self.paramiko): 109 password=False 110 else: 111 password = getpass("SSH Password for %s: "%self.sshserver) 112 else: 113 password = False 114 115 def connect(s, url): 116 url = disambiguate_url(url, self.location) 117 if self.using_ssh: 118 self.log.debug("Tunneling connection to %s via %s", url, self.sshserver) 119 return tunnel.tunnel_connection(s, url, self.sshserver, 120 keyfile=self.sshkey, paramiko=self.paramiko, 121 password=password, 122 ) 123 else: 124 return s.connect(url) 125 126 def maybe_tunnel(url): 127 """like connect, but don't complete the connection (for use by heartbeat)""" 128 url = disambiguate_url(url, self.location) 129 if self.using_ssh: 130 self.log.debug("Tunneling connection to %s via %s", url, self.sshserver) 131 url,tunnelobj = tunnel.open_tunnel(url, self.sshserver, 132 keyfile=self.sshkey, paramiko=self.paramiko, 133 password=password, 134 ) 135 return str(url) 136 return connect, maybe_tunnel 137 138 def register(self): 139 """send the registration_request""" 140 141 self.log.info("Registering with controller at %s"%self.url) 142 ctx = self.context 143 connect,maybe_tunnel = self.init_connector() 144 reg = ctx.socket(zmq.DEALER) 145 reg.setsockopt(zmq.IDENTITY, self.bident) 146 connect(reg, self.url) 147 self.registrar = zmqstream.ZMQStream(reg, self.loop) 148 149 150 content = dict(uuid=self.ident) 151 self.registrar.on_recv(lambda msg: self.complete_registration(msg, connect, maybe_tunnel)) 152 # print (self.session.key) 153 self.session.send(self.registrar, "registration_request", content=content) 154 155 def _report_ping(self, msg): 156 """Callback for when the heartmonitor.Heart receives a ping""" 157 #self.log.debug("Received a ping: %s", msg) 158 self._hb_last_pinged = time.time() 159 160 def complete_registration(self, msg, connect, maybe_tunnel): 161 # print msg 162 self._abort_dc.stop() 163 ctx = self.context 164 loop = self.loop 165 identity = self.bident 166 idents,msg = self.session.feed_identities(msg) 167 msg = self.session.unserialize(msg) 168 content = msg['content'] 169 info = self.connection_info 170 171 def url(key): 172 """get zmq url for given channel""" 173 return str(info["interface"] + ":%i" % info[key]) 174 175 if content['status'] == 'ok': 176 self.id = int(content['id']) 177 178 # launch heartbeat 179 # possibly forward hb ports with tunnels 180 hb_ping = maybe_tunnel(url('hb_ping')) 181 hb_pong = maybe_tunnel(url('hb_pong')) 182 183 hb_monitor = None 184 if self.max_heartbeat_misses > 0: 185 # Add a monitor socket which will record the last time a ping was seen 186 mon = self.context.socket(zmq.SUB) 187 mport = mon.bind_to_random_port('tcp://%s' % localhost()) 188 
mon.setsockopt(zmq.SUBSCRIBE, b"") 189 self._hb_listener = zmqstream.ZMQStream(mon, self.loop) 190 self._hb_listener.on_recv(self._report_ping) 191 192 193 hb_monitor = "tcp://%s:%i" % (localhost(), mport) 194 195 heart = Heart(hb_ping, hb_pong, hb_monitor , heart_id=identity) 196 heart.start() 197 198 # create Shell Connections (MUX, Task, etc.): 199 shell_addrs = url('mux'), url('task') 200 201 # Use only one shell stream for mux and tasks 202 stream = zmqstream.ZMQStream(ctx.socket(zmq.ROUTER), loop) 203 stream.setsockopt(zmq.IDENTITY, identity) 204 shell_streams = [stream] 205 for addr in shell_addrs: 206 connect(stream, addr) 207 208 # control stream: 209 control_addr = url('control') 210 control_stream = zmqstream.ZMQStream(ctx.socket(zmq.ROUTER), loop) 211 control_stream.setsockopt(zmq.IDENTITY, identity) 212 connect(control_stream, control_addr) 213 214 # create iopub stream: 215 iopub_addr = url('iopub') 216 iopub_socket = ctx.socket(zmq.PUB) 217 iopub_socket.setsockopt(zmq.IDENTITY, identity) 218 connect(iopub_socket, iopub_addr) 219 220 # disable history: 221 self.config.HistoryManager.hist_file = ':memory:' 222 223 # Redirect input streams and set a display hook. 224 if self.out_stream_factory: 225 sys.stdout = self.out_stream_factory(self.session, iopub_socket, u'stdout') 226 sys.stdout.topic = cast_bytes('engine.%i.stdout' % self.id) 227 sys.stderr = self.out_stream_factory(self.session, iopub_socket, u'stderr') 228 sys.stderr.topic = cast_bytes('engine.%i.stderr' % self.id) 229 if self.display_hook_factory: 230 sys.displayhook = self.display_hook_factory(self.session, iopub_socket) 231 sys.displayhook.topic = cast_bytes('engine.%i.pyout' % self.id) 232 233 self.kernel = Kernel(parent=self, int_id=self.id, ident=self.ident, session=self.session, 234 control_stream=control_stream, shell_streams=shell_streams, iopub_socket=iopub_socket, 235 loop=loop, user_ns=self.user_ns, log=self.log) 236 237 self.kernel.shell.display_pub.topic = cast_bytes('engine.%i.displaypub' % self.id) 238 239 240 # periodically check the heartbeat pings of the controller 241 # Should be started here and not in "start()" so that the right period can be taken 242 # from the hubs HeartBeatMonitor.period 243 if self.max_heartbeat_misses > 0: 244 # Use a slightly bigger check period than the hub signal period to not warn unnecessary 245 self.hb_check_period = int(content['hb_period'])+10 246 self.log.info("Starting to monitor the heartbeat signal from the hub every %i ms." 
, self.hb_check_period) 247 self._hb_reporter = ioloop.PeriodicCallback(self._hb_monitor, self.hb_check_period, self.loop) 248 self._hb_reporter.start() 249 else: 250 self.log.info("Monitoring of the heartbeat signal from the hub is not enabled.") 251 252 253 # FIXME: This is a hack until IPKernelApp and IPEngineApp can be fully merged 254 app = IPKernelApp(parent=self, shell=self.kernel.shell, kernel=self.kernel, log=self.log) 255 app.init_profile_dir() 256 app.init_code() 257 258 self.kernel.start() 259 else: 260 self.log.fatal("Registration Failed: %s"%msg) 261 raise Exception("Registration Failed: %s"%msg) 262 263 self.log.info("Completed registration with id %i"%self.id) 264 265 266 def abort(self): 267 self.log.fatal("Registration timed out after %.1f seconds"%self.timeout) 268 if self.url.startswith('127.'): 269 self.log.fatal(""" 270 If the controller and engines are not on the same machine, 271 you will have to instruct the controller to listen on an external IP (in ipcontroller_config.py): 272 c.HubFactory.ip='*' # for all interfaces, internal and external 273 c.HubFactory.ip='192.168.1.101' # or any interface that the engines can see 274 or tunnel connections via ssh. 275 """) 276 self.session.send(self.registrar, "unregistration_request", content=dict(id=self.id)) 277 time.sleep(1) 278 sys.exit(255) 279 280 def _hb_monitor(self): 281 """Callback to monitor the heartbeat from the controller""" 282 self._hb_listener.flush() 283 if self._hb_last_monitored > self._hb_last_pinged: 284 self._hb_missed_beats += 1 285 self.log.warn("No heartbeat in the last %s ms (%s time(s) in a row).", self.hb_check_period, self._hb_missed_beats) 286 else: 287 #self.log.debug("Heartbeat received (after missing %s beats).", self._hb_missed_beats) 288 self._hb_missed_beats = 0 289 290 if self._hb_missed_beats >= self.max_heartbeat_misses: 291 self.log.fatal("Maximum number of heartbeats misses reached (%s times %s ms), shutting down.", 292 self.max_heartbeat_misses, self.hb_check_period) 293 self.session.send(self.registrar, "unregistration_request", content=dict(id=self.id)) 294 self.loop.stop() 295 296 self._hb_last_monitored = time.time() 297 298 299 def start(self): 300 dc = ioloop.DelayedCallback(self.register, 0, self.loop) 301 dc.start() 302 self._abort_dc = ioloop.DelayedCallback(self.abort, self.timeout*1000, self.loop) 303 self._abort_dc.start() 304 305 306 [end of IPython/parallel/engine/engine.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
ipython/ipython
f0efabf5ad16e5552f9a086d90064c51f7575e69
DOC: document how to change ipcontroller-engine.json in case controller was started with --ip="*" When the controller was started and bound to "*" (as the documentation of the config file says), the ipcontroller-engine.json file will include the following line: ``` "interface": "tcp://*", "location": "127.0.0.1" ``` At least in my case it was not enough to change the location to the controller IP, but I had to include the controller IP via the interface line: ``` "interface": "tcp://xx.xx.xx.xx", "location": "xx.xx.xx.xx" ``` It would both be nice to document how to specify the IP of the controller in the tutorial (http://ipython.org/ipython-doc/dev/parallel/parallel_process.html#starting-the-controller-and-engines-on-different-hosts) and/or change the logic to find the IP to bind to in the engine to take the location argument into account when the interface specifies "*".
Under normal circumstances, `location` will be sufficient to disambiguate the IP (that's what it's there for). It is only `'127.0.0.1'` if it can't figure out its own location, in which case you may want to specify the `--location=w.x.y.z`, which skips the guessing. What version of IPython are you using, and on what system? I'm using a git version from yesterday on windows. I think the problem is that "ipconfig" is printing it's output in German but the code is expecting it in English ``` python import IPython.utils.localinterfaces def _populate_from_list(addr): print addr IPython.utils.localinterfaces._populate_from_list = _populate_from_list from IPython.utils.localinterfaces import _load_ips_ipconfig, get_output_error_code _load_ips_ipconfig() ['127.0.0.1'] out, err, rc = get_output_error_code('ipconfig') print out Windows-IP-Konfiguration Drahtlos-LAN-Adapter Drahtlosnetzwerkverbindung 3: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Drahtlos-LAN-Adapter Drahtlosnetzwerkverbindung 2: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Drahtlos-LAN-Adapter Drahtlosnetzwerkverbindung: Verbindungsspezifisches DNS-Suffix: fritz.box Verbindungslokale IPv6-Adresse . : fe80::e1e6:ef28:8414:6965%15 IPv4-Adresse . . . . . . . . . . : 192.168.181.26 Subnetzmaske . . . . . . . . . . : 255.255.255.0 Standardgateway . . . . . . . . . : 192.168.181.1 Ethernet-Adapter Bluetooth-Netzwerkverbindung: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Ethernet-Adapter LAN-Verbindung: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Ethernet-Adapter VirtualBox Host-Only Network: Verbindungsspezifisches DNS-Suffix: Verbindungslokale IPv6-Adresse . : fe80::8519:94b0:31c6:8976%31 IPv4-Adresse . . . . . . . . . . : 192.168.56.1 Subnetzmaske . . . . . . . . . . : 255.255.255.0 Standardgateway . . . . . . . . . : Tunneladapter isatap.{6C6D4B31-DE6C-4D25-A174-D177DBB8802F}: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Tunneladapter Teredo Tunneling Pseudo-Interface: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Tunneladapter isatap.{F8E56AA0-96E7-4C2A-A278-DBEF0E3382E4}: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Tunneladapter isatap.{B8AA2EFE-2AD8-4F10-89DD-203893621C61}: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Tunneladapter isatap.fritz.box: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: fritz.box Tunneladapter isatap.{EDB803A5-FE08-4A3E-A8EC-CF86090064F8}: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: Tunneladapter isatap.{C763A4C3-0F9E-46E2-8147-1836853C4096}: Medienstatus. . . . . . . . . . . : Medium getrennt Verbindungsspezifisches DNS-Suffix: ``` The problematic method `_load_ips_ipconfig()`: ``` python def _load_ips_ipconfig(): """load ip addresses from `ipconfig` output (Windows)""" out, err, rc = get_output_error_code('ipconfig') if rc: raise IOError("no ipconfig: %s" % err) lines = out.splitlines() addrs = ['127.0.0.1'] for line in lines: line = line.lower().split() if line[:2] == ['ipv4', 'address']: # FAIL... addrs.append(line.split()[-1]) _populate_from_list(addrs) ``` Aha - an incorrect English assumption. I'll see what I can do about fixing that. 
http://stackoverflow.com/questions/166506/finding-local-ip-addresses-using-pythons-stdlib ``` python import socket print([ip for ip in socket.gethostbyname_ex(socket.gethostname())[2] ]) ['192.168.181.26', '192.168.56.1'] ``` (The first is my current IP in the local net, the second is the virtualbox adapter) And here the output when in a Cisco VPN: `['xxx.xx.xxx.xxx', '192.168.181.26', '192.168.56.1']` -> It seems that this does "the right thing" :-) That's precisely what this code used to do, and doesn't anymore because it is unreliable, and when it fails it typically does so with a 30 second hang on DNS resolution. Not sure if that helps: from http://stackoverflow.com/questions/5898763/how-do-i-get-the-ip-address-into-a-batch-file-variable ``` C:\Windows\System32>route print =========================================================================== Schnittstellenliste 24...8c 70 5a 9d 8e 21 ......Microsoft Virtual WiFi Miniport Adapter #2 16...8c 70 5a 9d 8e 21 ......Microsoft Virtual WiFi Miniport Adapter 15...8c 70 5a 9d 8e 20 ......Intel(R) Centrino(R) Advanced-N 6205 14...c0 18 85 dd 38 4b ......Bluetooth-Gerät (PAN) 12...3c 97 0e 0e 5c 96 ......Intel(R) 82579LM Gigabit Network Connection 31...08 00 27 00 84 5d ......VirtualBox Host-Only Ethernet Adapter 1...........................Software Loopback Interface 1 34...00 00 00 00 00 00 00 e0 Microsoft-ISATAP-Adapter 11...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 35...00 00 00 00 00 00 00 e0 Microsoft-ISATAP-Adapter #2 36...00 00 00 00 00 00 00 e0 Microsoft-ISATAP-Adapter #3 56...00 00 00 00 00 00 00 e0 Microsoft-ISATAP-Adapter #5 37...00 00 00 00 00 00 00 e0 Microsoft-ISATAP-Adapter #6 58...00 00 00 00 00 00 00 e0 Microsoft-ISATAP-Adapter #7 =========================================================================== IPv4-Routentabelle =========================================================================== Aktive Routen: Netzwerkziel Netzwerkmaske Gateway Schnittstelle Metrik 0.0.0.0 0.0.0.0 192.168.181.1 192.168.181.26 25 127.0.0.0 255.0.0.0 Auf Verbindung 127.0.0.1 306 127.0.0.1 255.255.255.255 Auf Verbindung 127.0.0.1 306 127.255.255.255 255.255.255.255 Auf Verbindung 127.0.0.1 306 192.168.56.0 255.255.255.0 Auf Verbindung 192.168.56.1 276 192.168.56.1 255.255.255.255 Auf Verbindung 192.168.56.1 276 192.168.56.255 255.255.255.255 Auf Verbindung 192.168.56.1 276 192.168.181.0 255.255.255.0 Auf Verbindung 192.168.181.26 281 192.168.181.26 255.255.255.255 Auf Verbindung 192.168.181.26 281 192.168.181.255 255.255.255.255 Auf Verbindung 192.168.181.26 281 224.0.0.0 240.0.0.0 Auf Verbindung 127.0.0.1 306 224.0.0.0 240.0.0.0 Auf Verbindung 192.168.56.1 276 224.0.0.0 240.0.0.0 Auf Verbindung 192.168.181.26 281 255.255.255.255 255.255.255.255 Auf Verbindung 127.0.0.1 306 255.255.255.255 255.255.255.255 Auf Verbindung 192.168.56.1 276 255.255.255.255 255.255.255.255 Auf Verbindung 192.168.181.26 281 =========================================================================== Ständige Routen: Netzwerkadresse Netzmaske Gatewayadresse Metrik 0.0.0.0 0.0.0.0 xxx.xxx.xx.xxx Standard =========================================================================== IPv6-Routentabelle =========================================================================== Aktive Routen: If Metrik Netzwerkziel Gateway 1 306 ::1/128 Auf Verbindung 31 276 fe80::/64 Auf Verbindung 15 281 fe80::/64 Auf Verbindung 31 276 fe80::8519:94b0:31c6:8976/128 Auf Verbindung 15 281 fe80::e1e6:ef28:8414:6965/128 Auf Verbindung 1 306 ff00::/8 Auf Verbindung 31 276 
ff00::/8 Auf Verbindung 15 281 ff00::/8 Auf Verbindung =========================================================================== Ständige Routen: Keine C:\Windows\System32> ``` -> Get to the 0.0.0.0 entry, take the next lines and make a set from all the 4th entries in each line. Maybe it's a bug in `socket`'s call to the native WinAPI. How about calling the API ourselves using PyWin32 or CTypes? No, it's generally caused by an issue in system configuration, not any API - the gethostname / gethostbyname approach involves a DNS lookup. If that fails, it will generally fail by timeout, rather than failing immediately. Ah I see. `ipconfig` may get the addresses from the adapters themselves, maybe a WinAPI call to http://msdn.microsoft.com/en-us/library/aa365915%28v=VS.85%29.aspx and then http://msdn.microsoft.com/en-us/library/ms741516%28VS.85%29.aspx ?
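One suggestion in the thread above is to read the interface addresses out of `route print` rather than `ipconfig`. Below is a rough, standalone sketch of that idea; the function name and the "last dotted quad on the line" heuristic are my own adaptation of the "4th entry" recipe quoted above (plain column counting breaks on the two-word German "Auf Verbindung"), and this is illustrative only, not what IPython actually adopted.

``` python
import re

_DOTTED_QUAD = re.compile(r'\d{1,3}(?:\.\d{1,3}){3}')

def local_ips_from_route_print(output):
    """Collect interface addresses from the IPv4 section of `route print` output."""
    ips = set()
    in_v4_table = False
    for line in output.splitlines():
        quads = _DOTTED_QUAD.findall(line)
        if not in_v4_table:
            if quads and quads[0] == '0.0.0.0':
                in_v4_table = True   # the default-route entry opens the IPv4 table
                ips.add(quads[-1])
            continue
        if not quads:
            break                    # separator line: the active-routes table has ended
        ips.add(quads[-1])           # the interface column is the last dotted quad
    return ips

sample = u"""\
IPv4-Routentabelle
===========================================================================
Aktive Routen:
     Netzwerkziel    Netzwerkmaske          Gateway    Schnittstelle Metrik
          0.0.0.0          0.0.0.0    192.168.181.1   192.168.181.26     25
        127.0.0.0        255.0.0.0   Auf Verbindung        127.0.0.1    306
     192.168.56.0    255.255.255.0   Auf Verbindung     192.168.56.1    276
===========================================================================
"""
print(sorted(local_ips_from_route_print(sample)))
# ['127.0.0.1', '192.168.181.26', '192.168.56.1']
```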
2013-11-15T00:29:00Z
<patch> diff --git a/IPython/utils/localinterfaces.py b/IPython/utils/localinterfaces.py --- a/IPython/utils/localinterfaces.py +++ b/IPython/utils/localinterfaces.py @@ -23,6 +23,7 @@ #----------------------------------------------------------------------------- import os +import re import socket from .data import uniq_stable @@ -118,6 +119,7 @@ def _load_ips_ip(): addrs.append(blocks[1].split('/')[0]) _populate_from_list(addrs) +_ipconfig_ipv4_pat = re.compile(r'ipv4.*(\d+\.\d+\.\d+\.\d+)$', re.IGNORECASE) def _load_ips_ipconfig(): """load ip addresses from `ipconfig` output (Windows)""" @@ -126,11 +128,11 @@ def _load_ips_ipconfig(): raise IOError("no ipconfig: %s" % err) lines = out.splitlines() - addrs = ['127.0.0.1'] + addrs = [] for line in lines: - line = line.lower().split() - if line[:2] == ['ipv4', 'address']: - addrs.append(line.split()[-1]) + m = _ipconfig_ipv4_pat.match(line.strip()) + if m: + addrs.append(m.group(1)) _populate_from_list(addrs) </patch>
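To make the behavioural change concrete, the German sample line quoted in the discussion can be run through both the old word-splitting check and the regex introduced by the patch. This is a standalone illustration, not part of the patch itself:

``` python
import re

# Regex taken verbatim from the patch above.
_ipconfig_ipv4_pat = re.compile(r'ipv4.*(\d+\.\d+\.\d+\.\d+)$', re.IGNORECASE)

german = 'IPv4-Adresse  . . . . . . . . . . : 192.168.181.26'

# The removed code lower()ed and split() the line, then looked for the English words:
print(german.lower().split()[:2] == ['ipv4', 'address'])   # False: localized output is skipped
# The case-insensitive regex keys off the "IPv4" token and a trailing dotted quad instead:
print(bool(_ipconfig_ipv4_pat.match(german.strip())))      # True
```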
[]
[]
pantsbuild__pants-4686
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Support invalidation generically via an option Currently, manual invalidation is only supported via `./pants invalidated`. This is problematic for a few reasons: 1. "invalidate" is not a verb in pants user-space, it's only one we use internally in the codebase 2. The `invalidate` goal acts globally, invalidating all targets across all tasks As @benjyw suggested [here](https://github.com/pantsbuild/pants/pull/4660#pullrequestreview-43801196), a global recursive `--force` flag might take the place of the `invalidate` goal and also allow more targeted invalidation at the task level when placed to the right of a fully qualified goal name on the command line. To-boot, `--force` likely makes more intuitive sense in pants user-space; ie: force this command to run instead of skipping things. </issue> <code> [start of README.md] 1 # Pants Build System 2 3 Pants is a build system for software projects in a variety of languages. 4 It works particularly well for a source code repository that contains 5 many distinct projects. 6 7 Friendly documentation: http://www.pantsbuild.org/ 8 9 We release to [PyPI](https://pypi.python.org/pypi) 10 [![version](https://img.shields.io/pypi/v/pantsbuild.pants.svg)](https://pypi.python.org/pypi/pantsbuild.pants) 11 [![license](https://img.shields.io/pypi/l/pantsbuild.pants.svg)](https://pypi.python.org/pypi/pantsbuild.pants) 12 [![downloads](https://img.shields.io/pypi/dm/pantsbuild.pants.svg)](https://pypi.python.org/pypi/pantsbuild.pants) 13 14 We use [Travis CI](https://travis-ci.org) to verify the build 15 [![Build Status](https://travis-ci.org/pantsbuild/pants.svg?branch=master)](https://travis-ci.org/pantsbuild/pants/branches). 16 17 We use [Coveralls](https://coveralls.io) to monitor test coverage 18 [![Coverage Status](https://coveralls.io/repos/pantsbuild/pants/badge.png?branch=master)](https://coveralls.io/r/pantsbuild/pants). 19 20 # Requirements 21 22 At a minimum, pants requires the following to run properly: 23 24 * Linux or Mac OS X 25 * Python 2.7.x (the latest stable version of 2.7 is recommended) 26 * A C compiler, system headers, Python headers (to compile native Python modules) and the libffi 27 library and headers (to compile and link modules that use CFFI to access native code). 28 * Internet access (so that pants can fully bootstrap itself) 29 30 Additionally, if you use the jvm backend to work with java or scala code (installed by default): 31 32 * OpenJDK or Oracle JDK version 7 or greater 33 [end of README.md] [start of src/python/pants/option/options.py] 1 # coding=utf-8 2 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 4 5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function, 6 unicode_literals, with_statement) 7 8 import copy 9 import sys 10 11 from pants.base.deprecated import warn_or_error 12 from pants.option.arg_splitter import GLOBAL_SCOPE, ArgSplitter 13 from pants.option.global_options import GlobalOptionsRegistrar 14 from pants.option.option_util import is_list_option 15 from pants.option.option_value_container import OptionValueContainer 16 from pants.option.parser_hierarchy import ParserHierarchy, enclosing_scope 17 from pants.option.scope import ScopeInfo 18 19 20 class Options(object): 21 """The outward-facing API for interacting with options. 
22 23 Supports option registration and fetching option values. 24 25 Examples: 26 27 The value in global scope of option '--foo-bar' (registered in global scope) will be selected 28 in the following order: 29 - The value of the --foo-bar flag in global scope. 30 - The value of the PANTS_GLOBAL_FOO_BAR environment variable. 31 - The value of the PANTS_FOO_BAR environment variable. 32 - The value of the foo_bar key in the [GLOBAL] section of pants.ini. 33 - The hard-coded value provided at registration time. 34 - None. 35 36 The value in scope 'compile.java' of option '--foo-bar' (registered in global scope) will be 37 selected in the following order: 38 - The value of the --foo-bar flag in scope 'compile.java'. 39 - The value of the --foo-bar flag in scope 'compile'. 40 - The value of the --foo-bar flag in global scope. 41 - The value of the PANTS_COMPILE_JAVA_FOO_BAR environment variable. 42 - The value of the PANTS_COMPILE_FOO_BAR environment variable. 43 - The value of the PANTS_GLOBAL_FOO_BAR environment variable. 44 - The value of the PANTS_FOO_BAR environment variable. 45 - The value of the foo_bar key in the [compile.java] section of pants.ini. 46 - The value of the foo_bar key in the [compile] section of pants.ini. 47 - The value of the foo_bar key in the [GLOBAL] section of pants.ini. 48 - The hard-coded value provided at registration time. 49 - None. 50 51 The value in scope 'compile.java' of option '--foo-bar' (registered in scope 'compile') will be 52 selected in the following order: 53 - The value of the --foo-bar flag in scope 'compile.java'. 54 - The value of the --foo-bar flag in scope 'compile'. 55 - The value of the PANTS_COMPILE_JAVA_FOO_BAR environment variable. 56 - The value of the PANTS_COMPILE_FOO_BAR environment variable. 57 - The value of the foo_bar key in the [compile.java] section of pants.ini. 58 - The value of the foo_bar key in the [compile] section of pants.ini. 59 - The value of the foo_bar key in the [GLOBAL] section of pants.ini 60 (because of automatic config file fallback to that section). 61 - The hard-coded value provided at registration time. 62 - None. 63 """ 64 65 class OptionTrackerRequiredError(Exception): 66 """Options requires an OptionTracker instance.""" 67 68 @classmethod 69 def complete_scopes(cls, scope_infos): 70 """Expand a set of scopes to include all enclosing scopes. 71 72 E.g., if the set contains `foo.bar.baz`, ensure that it also contains `foo.bar` and `foo`. 73 74 Also adds any deprecated scopes. 75 """ 76 ret = {GlobalOptionsRegistrar.get_scope_info()} 77 original_scopes = set() 78 for si in scope_infos: 79 ret.add(si) 80 original_scopes.add(si.scope) 81 if si.deprecated_scope: 82 ret.add(ScopeInfo(si.deprecated_scope, si.category, si.optionable_cls)) 83 original_scopes.add(si.deprecated_scope) 84 85 # TODO: Once scope name validation is enforced (so there can be no dots in scope name 86 # components) we can replace this line with `for si in scope_infos:`, because it will 87 # not be possible for a deprecated_scope to introduce any new intermediate scopes. 88 for si in copy.copy(ret): 89 scope = si.scope 90 while scope != '': 91 if scope not in original_scopes: 92 ret.add(ScopeInfo(scope, ScopeInfo.INTERMEDIATE)) 93 scope = enclosing_scope(scope) 94 return ret 95 96 @classmethod 97 def create(cls, env, config, known_scope_infos, args=None, bootstrap_option_values=None, 98 option_tracker=None): 99 """Create an Options instance. 100 101 :param env: a dict of environment variables. 
102 :param :class:`pants.option.config.Config` config: data from a config file. 103 :param known_scope_infos: ScopeInfos for all scopes that may be encountered. 104 :param args: a list of cmd-line args; defaults to `sys.argv` if None is supplied. 105 :param bootstrap_option_values: An optional namespace containing the values of bootstrap 106 options. We can use these values when registering other options. 107 :param :class:`pants.option.option_tracker.OptionTracker` option_tracker: option tracker 108 instance to record how option values were assigned. 109 """ 110 # We need parsers for all the intermediate scopes, so inherited option values 111 # can propagate through them. 112 complete_known_scope_infos = cls.complete_scopes(known_scope_infos) 113 splitter = ArgSplitter(complete_known_scope_infos) 114 args = sys.argv if args is None else args 115 goals, scope_to_flags, target_specs, passthru, passthru_owner = splitter.split_args(args) 116 117 if not option_tracker: 118 raise cls.OptionTrackerRequiredError() 119 120 if bootstrap_option_values: 121 target_spec_files = bootstrap_option_values.target_spec_files 122 if target_spec_files: 123 for spec in target_spec_files: 124 with open(spec) as f: 125 target_specs.extend(filter(None, [line.strip() for line in f])) 126 127 help_request = splitter.help_request 128 129 parser_hierarchy = ParserHierarchy(env, config, complete_known_scope_infos, option_tracker) 130 values_by_scope = {} # Arg values, parsed per-scope on demand. 131 bootstrap_option_values = bootstrap_option_values 132 known_scope_to_info = {s.scope: s for s in complete_known_scope_infos} 133 return cls(goals, scope_to_flags, target_specs, passthru, passthru_owner, help_request, 134 parser_hierarchy, values_by_scope, bootstrap_option_values, known_scope_to_info, 135 option_tracker) 136 137 def __init__(self, goals, scope_to_flags, target_specs, passthru, passthru_owner, help_request, 138 parser_hierarchy, values_by_scope, bootstrap_option_values, known_scope_to_info, 139 option_tracker): 140 """The low-level constructor for an Options instance. 141 142 Dependees should use `Options.create` instead. 143 """ 144 self._goals = goals 145 self._scope_to_flags = scope_to_flags 146 self._target_specs = target_specs 147 self._passthru = passthru 148 self._passthru_owner = passthru_owner 149 self._help_request = help_request 150 self._parser_hierarchy = parser_hierarchy 151 self._values_by_scope = values_by_scope 152 self._bootstrap_option_values = bootstrap_option_values 153 self._known_scope_to_info = known_scope_to_info 154 self._option_tracker = option_tracker 155 156 @property 157 def tracker(self): 158 return self._option_tracker 159 160 @property 161 def help_request(self): 162 """ 163 :API: public 164 """ 165 return self._help_request 166 167 @property 168 def target_specs(self): 169 """The targets to operate on. 170 171 :API: public 172 """ 173 return self._target_specs 174 175 @property 176 def goals(self): 177 """The requested goals, in the order specified on the cmd line. 178 179 :API: public 180 """ 181 return self._goals 182 183 @property 184 def known_scope_to_info(self): 185 return self._known_scope_to_info 186 187 @property 188 def scope_to_flags(self): 189 return self._scope_to_flags 190 191 def drop_flag_values(self): 192 """Returns a copy of these options that ignores values specified via flags. 193 194 Any pre-cached option values are cleared and only option values that come from option defaults, 195 the config or the environment are used. 
196 """ 197 # An empty scope_to_flags to force all values to come via the config -> env hierarchy alone 198 # and empty values in case we already cached some from flags. 199 no_flags = {} 200 no_values = {} 201 return Options(self._goals, 202 no_flags, 203 self._target_specs, 204 self._passthru, 205 self._passthru_owner, 206 self._help_request, 207 self._parser_hierarchy, 208 no_values, 209 self._bootstrap_option_values, 210 self._known_scope_to_info, 211 self._option_tracker) 212 213 def is_known_scope(self, scope): 214 """Whether the given scope is known by this instance. 215 216 :API: public 217 """ 218 return scope in self._known_scope_to_info 219 220 def passthru_args_for_scope(self, scope): 221 # Passthru args "belong" to the last scope mentioned on the command-line. 222 223 # Note: If that last scope is a goal, we allow all tasks in that goal to access the passthru 224 # args. This is to allow the more intuitive 225 # pants run <target> -- <passthru args> 226 # instead of requiring 227 # pants run.py <target> -- <passthru args>. 228 # 229 # However note that in the case where multiple tasks run in the same goal, e.g., 230 # pants test <target> -- <passthru args> 231 # Then, e.g., both junit and pytest will get the passthru args even though the user probably 232 # only intended them to go to one of them. If the wrong one is not a no-op then the error will 233 # be unpredictable. However this is not a common case, and can be circumvented with an 234 # explicit test.pytest or test.junit scope. 235 if (scope and self._passthru_owner and scope.startswith(self._passthru_owner) and 236 (len(scope) == len(self._passthru_owner) or scope[len(self._passthru_owner)] == '.')): 237 return self._passthru 238 else: 239 return [] 240 241 def register(self, scope, *args, **kwargs): 242 """Register an option in the given scope.""" 243 self.get_parser(scope).register(*args, **kwargs) 244 deprecated_scope = self.known_scope_to_info[scope].deprecated_scope 245 if deprecated_scope: 246 self.get_parser(deprecated_scope).register(*args, **kwargs) 247 248 def registration_function_for_optionable(self, optionable_class): 249 """Returns a function for registering options on the given scope.""" 250 # TODO(benjy): Make this an instance of a class that implements __call__, so we can 251 # docstring it, and so it's less weird than attatching properties to a function. 252 def register(*args, **kwargs): 253 kwargs['registering_class'] = optionable_class 254 self.register(optionable_class.options_scope, *args, **kwargs) 255 # Clients can access the bootstrap option values as register.bootstrap. 256 register.bootstrap = self.bootstrap_option_values() 257 # Clients can access the scope as register.scope. 258 register.scope = optionable_class.options_scope 259 return register 260 261 def get_parser(self, scope): 262 """Returns the parser for the given scope, so code can register on it directly.""" 263 return self._parser_hierarchy.get_parser_by_scope(scope) 264 265 def walk_parsers(self, callback): 266 self._parser_hierarchy.walk(callback) 267 268 def for_scope(self, scope, inherit_from_enclosing_scope=True): 269 """Return the option values for the given scope. 270 271 Values are attributes of the returned object, e.g., options.foo. 272 Computed lazily per scope. 273 274 :API: public 275 """ 276 # Short-circuit, if already computed. 277 if scope in self._values_by_scope: 278 return self._values_by_scope[scope] 279 280 # First get enclosing scope's option values, if any. 
281 if scope == GLOBAL_SCOPE or not inherit_from_enclosing_scope: 282 values = OptionValueContainer() 283 else: 284 values = copy.copy(self.for_scope(enclosing_scope(scope))) 285 286 # Now add our values. 287 flags_in_scope = self._scope_to_flags.get(scope, []) 288 self._parser_hierarchy.get_parser_by_scope(scope).parse_args(flags_in_scope, values) 289 290 # If we're the new name of a deprecated scope, also get values from that scope. 291 deprecated_scope = self.known_scope_to_info[scope].deprecated_scope 292 # Note that deprecated_scope and scope share the same Optionable class, so deprecated_scope's 293 # Optionable has a deprecated_options_scope equal to deprecated_scope. Therefore we must 294 # check that scope != deprecated_scope to prevent infinite recursion. 295 if deprecated_scope is not None and scope != deprecated_scope: 296 # Do the deprecation check only on keys that were explicitly set on the deprecated scope 297 # (and not on its enclosing scopes). 298 explicit_keys = self.for_scope(deprecated_scope, 299 inherit_from_enclosing_scope=False).get_explicit_keys() 300 if explicit_keys: 301 warn_or_error(self.known_scope_to_info[scope].deprecated_scope_removal_version, 302 'scope {}'.format(deprecated_scope), 303 'Use scope {} instead (options: {})'.format(scope, ', '.join(explicit_keys))) 304 # Update our values with those of the deprecated scope (now including values inherited 305 # from its enclosing scope). 306 # Note that a deprecated val will take precedence over a val of equal rank. 307 # This makes the code a bit neater. 308 values.update(self.for_scope(deprecated_scope)) 309 310 # Record the value derivation. 311 for option in values: 312 self._option_tracker.record_option(scope=scope, option=option, value=values[option], 313 rank=values.get_rank(option)) 314 315 # Cache the values. 316 self._values_by_scope[scope] = values 317 318 return values 319 320 def get_fingerprintable_for_scope(self, scope): 321 """Returns a list of fingerprintable (option type, option value) pairs for the given scope. 322 323 Fingerprintable options are options registered via a "fingerprint=True" kwarg. 324 325 :API: public 326 """ 327 pairs = [] 328 # Note that we iterate over options registered at `scope` and at all enclosing scopes, since 329 # option-using code can read those values indirectly via its own OptionValueContainer, so 330 # they can affect that code's output. 331 registration_scope = scope 332 while registration_scope is not None: 333 parser = self._parser_hierarchy.get_parser_by_scope(registration_scope) 334 # Sort the arguments, so that the fingerprint is consistent. 335 for (_, kwargs) in sorted(parser.option_registrations_iter()): 336 if kwargs.get('recursive') and not kwargs.get('recursive_root'): 337 continue # We only need to fprint recursive options once. 338 if kwargs.get('fingerprint') is not True: 339 continue 340 # Note that we read the value from scope, even if the registration was on an enclosing 341 # scope, to get the right value for recursive options (and because this mirrors what 342 # option-using code does). 343 val = self.for_scope(scope)[kwargs['dest']] 344 # If we have a list then we delegate to the fingerprinting implementation of the members. 
345 if is_list_option(kwargs): 346 val_type = kwargs.get('member_type', str) 347 else: 348 val_type = kwargs.get('type', str) 349 pairs.append((val_type, val)) 350 registration_scope = (None if registration_scope == '' 351 else enclosing_scope(registration_scope)) 352 return pairs 353 354 def __getitem__(self, scope): 355 # TODO(John Sirois): Mainly supports use of dict<str, dict<str, str>> for mock options in tests, 356 # Consider killing if tests consolidate on using TestOptions instead of the raw dicts. 357 return self.for_scope(scope) 358 359 def bootstrap_option_values(self): 360 """Return the option values for bootstrap options. 361 362 General code can also access these values in the global scope. But option registration code 363 cannot, hence this special-casing of this small set of options. 364 """ 365 return self._bootstrap_option_values 366 367 def for_global_scope(self): 368 """Return the option values for the global scope. 369 370 :API: public 371 """ 372 return self.for_scope(GLOBAL_SCOPE) 373 [end of src/python/pants/option/options.py] [start of src/python/pants/task/task.py] 1 # coding=utf-8 2 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 4 5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function, 6 unicode_literals, with_statement) 7 8 import os 9 from abc import abstractmethod 10 from contextlib import contextmanager 11 from hashlib import sha1 12 from itertools import repeat 13 14 from pants.base.deprecated import deprecated_conditional 15 from pants.base.exceptions import TaskError 16 from pants.base.worker_pool import Work 17 from pants.cache.artifact_cache import UnreadableArtifact, call_insert, call_use_cached_files 18 from pants.cache.cache_setup import CacheSetup 19 from pants.invalidation.build_invalidator import BuildInvalidator, CacheKeyGenerator 20 from pants.invalidation.cache_manager import InvalidationCacheManager, InvalidationCheck 21 from pants.option.optionable import Optionable 22 from pants.option.options_fingerprinter import OptionsFingerprinter 23 from pants.option.scope import ScopeInfo 24 from pants.reporting.reporting_utils import items_to_report_element 25 from pants.subsystem.subsystem_client_mixin import SubsystemClientMixin 26 from pants.util.dirutil import safe_mkdir, safe_rm_oldest_items_in_dir 27 from pants.util.memo import memoized_method, memoized_property 28 from pants.util.meta import AbstractClass 29 30 31 class TaskBase(SubsystemClientMixin, Optionable, AbstractClass): 32 """Defines a lifecycle that prepares a task for execution and provides the base machinery 33 needed to execute it. 34 35 Provides the base lifecycle methods that allow a task to interact with the command line, other 36 tasks and the user. The lifecycle is linear and run via the following sequence: 37 1. register_options - declare options configurable via cmd-line flag or config file. 38 2. product_types - declare the product types your task is capable of producing. 39 3. alternate_target_roots - propose a different set of target roots to use than those specified 40 via the CLI for the active pants run. 41 4. prepare - request any products needed from other tasks. 42 5. __init__ - distill configuration into the information needed to execute. 43 44 Provides access to the current run context for scoping work. 
45 46 Also provides the basic facilities for doing work efficiently including providing a work directory 47 for scratch space on disk, an invalidator for checking which targets need work done on, and an 48 artifact cache for re-using previously cached work. 49 50 #TODO(John Sirois): Lifecycle is currently split between TaskBase and Task and lifecycle 51 (interface) and helpers (utility) are currently conflated. Tease these apart and narrow the scope 52 of the helpers. Ideally console tasks don't inherit a workdir, invalidator or build cache for 53 example. 54 """ 55 options_scope_category = ScopeInfo.TASK 56 57 # We set this explicitly on the synthetic subclass, so that it shares a stable name with 58 # its superclass, which is not necessary for regular use, but can be convenient in tests. 59 _stable_name = None 60 61 @classmethod 62 def implementation_version(cls): 63 """ 64 :API: public 65 """ 66 return [('TaskBase', 2)] 67 68 @classmethod 69 @memoized_method 70 def implementation_version_str(cls): 71 return '.'.join(['_'.join(map(str, x)) for x in cls.implementation_version()]) 72 73 @classmethod 74 def stable_name(cls): 75 """The stable name of this task type. 76 77 We synthesize subclasses of the task types at runtime, and these synthesized subclasses 78 may have random names (e.g., in tests), so this gives us a stable name to use across runs, 79 e.g., in artifact cache references. 80 """ 81 return cls._stable_name or cls._compute_stable_name() 82 83 @classmethod 84 def _compute_stable_name(cls): 85 return '{}_{}'.format(cls.__module__, cls.__name__).replace('.', '_') 86 87 @classmethod 88 def subsystem_dependencies(cls): 89 return super(TaskBase, cls).subsystem_dependencies() + (CacheSetup.scoped(cls),) 90 91 @classmethod 92 def product_types(cls): 93 """The list of products this Task produces. Set the product type(s) for this 94 task i.e. the product type(s) this task creates e.g ['classes']. 95 96 By default, each task is considered as creating a unique product type(s). 97 Subclasses that create products, should override this to specify their unique product type(s). 98 99 :API: public 100 """ 101 return [] 102 103 @classmethod 104 def known_scope_infos(cls): 105 """Yields ScopeInfo for all known scopes for this task, in no particular order.""" 106 # The task's own scope. 107 yield cls.get_scope_info() 108 # The scopes of any task-specific subsystems it uses. 109 for dep in cls.subsystem_dependencies_iter(): 110 if not dep.is_global(): 111 yield dep.subsystem_cls.get_scope_info(subscope=dep.scope) 112 113 @classmethod 114 def supports_passthru_args(cls): 115 """Subclasses may override to indicate that they can use passthru args. 116 117 :API: public 118 """ 119 return False 120 121 @classmethod 122 def _scoped_options(cls, options): 123 return options[cls.options_scope] 124 125 @classmethod 126 def get_alternate_target_roots(cls, options, address_mapper, build_graph): 127 # Subclasses should not generally need to override this method. 128 return cls.alternate_target_roots(cls._scoped_options(options), address_mapper, build_graph) 129 130 @classmethod 131 def alternate_target_roots(cls, options, address_mapper, build_graph): 132 """Allows a Task to propose alternate target roots from those specified on the CLI. 133 134 At most 1 unique proposal is allowed amongst all tasks involved in the run. If more than 1 135 unique list of target roots is proposed an error is raised during task scheduling. 
136 137 :API: public 138 139 :returns list: The new target roots to use or None to accept the CLI specified target roots. 140 """ 141 142 @classmethod 143 def invoke_prepare(cls, options, round_manager): 144 # Subclasses should not generally need to override this method. 145 return cls.prepare(cls._scoped_options(options), round_manager) 146 147 @classmethod 148 def prepare(cls, options, round_manager): 149 """Prepares a task for execution. 150 151 Called before execution and prior to any tasks that may be (indirectly) depended upon. 152 153 Typically a task that requires products from other goals would register interest in those 154 products here and then retrieve the requested product mappings when executed. 155 156 :API: public 157 """ 158 159 def __init__(self, context, workdir): 160 """Subclass __init__ methods, if defined, *must* follow this idiom: 161 162 class MyTask(Task): 163 def __init__(self, *args, **kwargs): 164 super(MyTask, self).__init__(*args, **kwargs) 165 ... 166 167 This allows us to change Task.__init__()'s arguments without 168 changing every subclass. If the subclass does not need its own 169 initialization, this method can (and should) be omitted entirely. 170 171 :API: public 172 """ 173 super(TaskBase, self).__init__() 174 self.context = context 175 self._workdir = workdir 176 177 self._cache_key_errors = set() 178 179 self._build_invalidator_dir = os.path.join( 180 self.context.options.for_global_scope().pants_workdir, 181 'build_invalidator', 182 self.stable_name()) 183 184 self._cache_factory = CacheSetup.create_cache_factory_for_task(self) 185 186 self._options_fingerprinter = OptionsFingerprinter(self.context.build_graph) 187 188 def get_options(self): 189 """Returns the option values for this task's scope. 190 191 :API: public 192 """ 193 return self.context.options.for_scope(self.options_scope) 194 195 def get_passthru_args(self): 196 """ 197 :API: public 198 """ 199 if not self.supports_passthru_args(): 200 raise TaskError('{0} Does not support passthru args.'.format(self.stable_name())) 201 else: 202 return self.context.options.passthru_args_for_scope(self.options_scope) 203 204 @memoized_property 205 def workdir(self): 206 """A scratch-space for this task that will be deleted by `clean-all`. 207 208 It's guaranteed that no other task has been given this workdir path to use and that the workdir 209 exists. 210 211 :API: public 212 """ 213 safe_mkdir(self._workdir) 214 return self._workdir 215 216 def _options_fingerprint(self, scope): 217 pairs = self.context.options.get_fingerprintable_for_scope(scope) 218 hasher = sha1() 219 for (option_type, option_val) in pairs: 220 fp = self._options_fingerprinter.fingerprint(option_type, option_val) 221 if fp is not None: 222 hasher.update(fp) 223 return hasher.hexdigest() 224 225 @memoized_property 226 def fingerprint(self): 227 """Returns a fingerprint for the identity of the task. 228 229 A task fingerprint is composed of the options the task is currently running under. 230 Useful for invalidating unchanging targets being executed beneath changing task 231 options that affect outputted artifacts. 232 233 A task's fingerprint is only valid afer the task has been fully initialized. 
234 """ 235 hasher = sha1() 236 hasher.update(self._options_fingerprint(self.options_scope)) 237 hasher.update(self.implementation_version_str()) 238 # TODO: this is not recursive, but should be: see #2739 239 for dep in self.subsystem_dependencies_iter(): 240 hasher.update(self._options_fingerprint(dep.options_scope())) 241 return str(hasher.hexdigest()) 242 243 def artifact_cache_reads_enabled(self): 244 return self._cache_factory.read_cache_available() 245 246 def artifact_cache_writes_enabled(self): 247 return self._cache_factory.write_cache_available() 248 249 def invalidate(self): 250 """Invalidates all targets for this task.""" 251 BuildInvalidator(self._build_invalidator_dir).force_invalidate_all() 252 253 @property 254 def create_target_dirs(self): 255 """Whether to create a results_dir per VersionedTarget in the workdir of the Task. 256 257 This defaults to the value of `self.cache_target_dirs` (as caching them requires 258 creating them), but may be overridden independently to create the dirs without caching 259 them. 260 261 :API: public 262 """ 263 return self.cache_target_dirs or False 264 265 @property 266 def cache_target_dirs(self): 267 """Whether to cache files in VersionedTarget's results_dir after exiting an invalidated block. 268 269 Subclasses may override this method to return True if they wish to use this style 270 of "automated" caching, where each VersionedTarget is given an associated results directory, 271 which will automatically be uploaded to the cache. Tasks should place the output files 272 for each VersionedTarget in said results directory. It is highly suggested to follow this 273 schema for caching, rather than manually making updates to the artifact cache. 274 275 :API: public 276 """ 277 return False 278 279 @property 280 def incremental(self): 281 """Whether this Task implements incremental building of individual targets. 282 283 Incremental tasks with `cache_target_dirs` set will have the results_dir of the previous build 284 for a target cloned into the results_dir for the current build (where possible). This 285 copy-on-write behaviour allows for immutability of the results_dir once a target has been 286 marked valid. 287 288 :API: public 289 """ 290 return False 291 292 @property 293 def cache_incremental(self): 294 """For incremental tasks, indicates whether the results of incremental builds should be cached. 295 296 Deterministic per-target incremental compilation is a relatively difficult thing to implement, 297 so this property provides an escape hatch to avoid caching things in that riskier case. 298 299 :API: public 300 """ 301 return False 302 303 @contextmanager 304 def invalidated(self, 305 targets, 306 invalidate_dependents=False, 307 silent=False, 308 fingerprint_strategy=None, 309 topological_order=False): 310 """Checks targets for invalidation, first checking the artifact cache. 311 312 Subclasses call this to figure out what to work on. 313 314 :API: public 315 316 :param targets: The targets to check for changes. 317 :param invalidate_dependents: If True then any targets depending on changed targets are 318 invalidated. 319 :param silent: If true, suppress logging information about target invalidation. 320 :param fingerprint_strategy: A FingerprintStrategy instance, which can do per task, 321 finer grained fingerprinting of a given Target. 322 :param topological_order: Whether to invalidate in dependency order. 323 324 If no exceptions are thrown by work in the block, the build cache is updated for the targets. 
325 Note: the artifact cache is not updated. That must be done manually. 326 327 :returns: Yields an InvalidationCheck object reflecting the targets. 328 :rtype: InvalidationCheck 329 """ 330 331 cache_key_generator = CacheKeyGenerator( 332 self.context.options.for_global_scope().cache_key_gen_version, 333 self.fingerprint) 334 cache_manager = InvalidationCacheManager(self.workdir, 335 cache_key_generator, 336 self._build_invalidator_dir, 337 invalidate_dependents, 338 fingerprint_strategy=fingerprint_strategy, 339 invalidation_report=self.context.invalidation_report, 340 task_name=type(self).__name__, 341 task_version=self.implementation_version_str(), 342 artifact_write_callback=self.maybe_write_artifact) 343 344 invalidation_check = cache_manager.check(targets, topological_order=topological_order) 345 346 self._maybe_create_results_dirs(invalidation_check.all_vts) 347 348 if invalidation_check.invalid_vts and self.artifact_cache_reads_enabled(): 349 with self.context.new_workunit('cache'): 350 cached_vts, uncached_vts, uncached_causes = \ 351 self.check_artifact_cache(self.check_artifact_cache_for(invalidation_check)) 352 if cached_vts: 353 cached_targets = [vt.target for vt in cached_vts] 354 self.context.run_tracker.artifact_cache_stats.add_hits(cache_manager.task_name, 355 cached_targets) 356 if not silent: 357 self._report_targets('Using cached artifacts for ', cached_targets, '.') 358 if uncached_vts: 359 uncached_targets = [vt.target for vt in uncached_vts] 360 self.context.run_tracker.artifact_cache_stats.add_misses(cache_manager.task_name, 361 uncached_targets, 362 uncached_causes) 363 if not silent: 364 self._report_targets('No cached artifacts for ', uncached_targets, '.') 365 # Now that we've checked the cache, re-partition whatever is still invalid. 366 invalidation_check = \ 367 InvalidationCheck(invalidation_check.all_vts, uncached_vts) 368 369 if not silent: 370 targets = [] 371 for vt in invalidation_check.invalid_vts: 372 targets.extend(vt.targets) 373 374 if len(targets): 375 msg_elements = ['Invalidated ', 376 items_to_report_element([t.address.reference() for t in targets], 'target'), 377 '.'] 378 self.context.log.info(*msg_elements) 379 380 invalidation_report = self.context.invalidation_report 381 if invalidation_report: 382 for vts in invalidation_check.all_vts: 383 invalidation_report.add_vts(cache_manager, vts.targets, vts.cache_key, vts.valid, 384 phase='pre-check') 385 386 # Cache has been checked to create the full list of invalid VTs. 387 # Only copy previous_results for this subset of VTs. 388 if self.incremental: 389 for vts in invalidation_check.invalid_vts: 390 vts.copy_previous_results() 391 392 # Yield the result, and then mark the targets as up to date. 393 yield invalidation_check 394 395 if invalidation_report: 396 for vts in invalidation_check.all_vts: 397 invalidation_report.add_vts(cache_manager, vts.targets, vts.cache_key, vts.valid, 398 phase='post-check') 399 400 for vt in invalidation_check.invalid_vts: 401 vt.update() 402 403 # Background work to clean up previous builds. 
404 if self.context.options.for_global_scope().workdir_max_build_entries is not None: 405 self._launch_background_workdir_cleanup(invalidation_check.all_vts) 406 407 def maybe_write_artifact(self, vt): 408 if self._should_cache_target_dir(vt): 409 self.update_artifact_cache([(vt, [vt.current_results_dir])]) 410 411 def _launch_background_workdir_cleanup(self, vts): 412 workdir_build_cleanup_job = Work(self._cleanup_workdir_stale_builds, [(vts,)], 'workdir_build_cleanup') 413 self.context.submit_background_work_chain([workdir_build_cleanup_job]) 414 415 def _cleanup_workdir_stale_builds(self, vts): 416 # workdir_max_build_entries has been assured of not None before invoking this method. 417 max_entries_per_target = max(2, self.context.options.for_global_scope().workdir_max_build_entries) 418 for vt in vts: 419 live_dirs = list(vt.live_dirs()) 420 if not live_dirs: 421 continue 422 root_dir = os.path.dirname(vt.results_dir) 423 safe_rm_oldest_items_in_dir(root_dir, max_entries_per_target, excludes=live_dirs) 424 425 def _should_cache_target_dir(self, vt): 426 """Return true if the given vt should be written to a cache (if configured).""" 427 return ( 428 self.cache_target_dirs and 429 not vt.target.has_label('no_cache') and 430 (not vt.is_incremental or self.cache_incremental) and 431 self.artifact_cache_writes_enabled() 432 ) 433 434 def _maybe_create_results_dirs(self, vts): 435 """If `cache_target_dirs`, create results_dirs for the given versioned targets.""" 436 if self.create_target_dirs: 437 for vt in vts: 438 vt.create_results_dir() 439 440 def check_artifact_cache_for(self, invalidation_check): 441 """Decides which VTS to check the artifact cache for. 442 443 By default we check for each invalid target. Can be overridden, e.g., to 444 instead check only for a single artifact for the entire target set. 445 """ 446 return invalidation_check.invalid_vts 447 448 def check_artifact_cache(self, vts): 449 """Checks the artifact cache for the specified list of VersionedTargetSets. 450 451 Returns a tuple (cached, uncached, uncached_causes) of VersionedTargets that were 452 satisfied/unsatisfied from the cache. Uncached VTS are also attached with their 453 causes for the miss: `False` indicates a legit miss while `UnreadableArtifact` 454 is due to either local or remote cache failures. 455 """ 456 return self.do_check_artifact_cache(vts) 457 458 def do_check_artifact_cache(self, vts, post_process_cached_vts=None): 459 """Checks the artifact cache for the specified list of VersionedTargetSets. 460 461 Returns a pair (cached, uncached) of VersionedTargets that were 462 satisfied/unsatisfied from the cache. 463 """ 464 if not vts: 465 return [], [], [] 466 467 read_cache = self._cache_factory.get_read_cache() 468 items = [(read_cache, vt.cache_key, vt.current_results_dir if self.cache_target_dirs else None) 469 for vt in vts] 470 res = self.context.subproc_map(call_use_cached_files, items) 471 472 cached_vts = [] 473 uncached_vts = [] 474 uncached_causes = [] 475 476 # Note that while the input vts may represent multiple targets (for tasks that overrride 477 # check_artifact_cache_for), the ones we return must represent single targets. 478 # Once flattened, cached/uncached vts are in separate lists. Each uncached vts is paired 479 # with why it is missed for stat reporting purpose. 
480 for vt, was_in_cache in zip(vts, res): 481 if was_in_cache: 482 cached_vts.extend(vt.versioned_targets) 483 else: 484 uncached_vts.extend(vt.versioned_targets) 485 uncached_causes.extend(repeat(was_in_cache, len(vt.versioned_targets))) 486 if isinstance(was_in_cache, UnreadableArtifact): 487 self._cache_key_errors.update(was_in_cache.key) 488 489 if post_process_cached_vts: 490 post_process_cached_vts(cached_vts) 491 for vt in cached_vts: 492 vt.update() 493 return cached_vts, uncached_vts, uncached_causes 494 495 def update_artifact_cache(self, vts_artifactfiles_pairs): 496 """Write to the artifact cache, if we're configured to. 497 498 vts_artifactfiles_pairs - a list of pairs (vts, artifactfiles) where 499 - vts is single VersionedTargetSet. 500 - artifactfiles is a list of absolute paths to artifacts for the VersionedTargetSet. 501 """ 502 update_artifact_cache_work = self._get_update_artifact_cache_work(vts_artifactfiles_pairs) 503 if update_artifact_cache_work: 504 self.context.submit_background_work_chain([update_artifact_cache_work], 505 parent_workunit_name='cache') 506 507 def _get_update_artifact_cache_work(self, vts_artifactfiles_pairs): 508 """Create a Work instance to update an artifact cache, if we're configured to. 509 510 vts_artifactfiles_pairs - a list of pairs (vts, artifactfiles) where 511 - vts is single VersionedTargetSet. 512 - artifactfiles is a list of paths to artifacts for the VersionedTargetSet. 513 """ 514 cache = self._cache_factory.get_write_cache() 515 if cache: 516 if len(vts_artifactfiles_pairs) == 0: 517 return None 518 # Do some reporting. 519 targets = set() 520 for vts, _ in vts_artifactfiles_pairs: 521 targets.update(vts.targets) 522 523 self._report_targets( 524 'Caching artifacts for ', 525 list(targets), 526 '.', 527 logger=self.context.log.debug, 528 ) 529 530 always_overwrite = self._cache_factory.overwrite() 531 532 # Cache the artifacts. 533 args_tuples = [] 534 for vts, artifactfiles in vts_artifactfiles_pairs: 535 overwrite = always_overwrite or vts.cache_key in self._cache_key_errors 536 args_tuples.append((cache, vts.cache_key, artifactfiles, overwrite)) 537 538 return Work(lambda x: self.context.subproc_map(call_insert, x), [(args_tuples,)], 'insert') 539 else: 540 return None 541 542 def _report_targets(self, prefix, targets, suffix, logger=None): 543 logger = logger or self.context.log.info 544 logger( 545 prefix, 546 items_to_report_element([t.address.reference() for t in targets], 'target'), 547 suffix, 548 ) 549 550 def require_single_root_target(self): 551 """If a single target was specified on the cmd line, returns that target. 552 553 Otherwise throws TaskError. 554 555 :API: public 556 """ 557 target_roots = self.context.target_roots 558 if len(target_roots) == 0: 559 raise TaskError('No target specified.') 560 elif len(target_roots) > 1: 561 raise TaskError('Multiple targets specified: {}' 562 .format(', '.join([repr(t) for t in target_roots]))) 563 return target_roots[0] 564 565 def determine_target_roots(self, goal_name, predicate=None): 566 """Helper for tasks that scan for default target roots. 567 568 :param string goal_name: The goal name to use for any warning emissions. 569 :param callable predicate: The predicate to pass to `context.scan().targets(predicate=X)`. 570 """ 571 deprecated_conditional( 572 lambda: not self.context.target_roots, 573 '1.5.0.dev0', 574 '`./pants {0}` (with no explicit targets) will soon become an error. Please specify ' 575 'one or more explicit target specs (e.g. 
`./pants {0} ::`).'.format(goal_name)) 576 if not self.context.target_roots and not self.get_options().enable_v2_engine: 577 # For the v1 path, continue the behavior of e.g. `./pants list` implies `./pants list ::`. 578 return self.context.scan().targets(predicate=predicate) 579 580 # For the v2 path, e.g. `./pants list` is a functional no-op. This matches the v2 mode behavior 581 # of e.g. `./pants --changed-parent=HEAD list` (w/ no changes) returning an empty result. 582 return self.context.target_roots 583 584 585 class Task(TaskBase): 586 """An executable task. 587 588 Tasks form the atoms of work done by pants and when executed generally produce artifacts as a 589 side effect whether these be files on disk (for example compilation outputs) or characters output 590 to the terminal (for example dependency graph metadata). 591 592 :API: public 593 """ 594 595 def __init__(self, context, workdir): 596 """ 597 Add pass-thru Task Constructor for public API visibility. 598 599 :API: public 600 """ 601 super(Task, self).__init__(context, workdir) 602 603 @abstractmethod 604 def execute(self): 605 """Executes this task. 606 607 :API: public 608 """ 609 610 611 class QuietTaskMixin(object): 612 """A mixin to signal that pants shouldn't print verbose progress information for this task.""" 613 pass 614 [end of src/python/pants/task/task.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pantsbuild/pants
d8303cb260635fb95ff2e3548049095030a489d3
Support invalidation generically via an option

Currently, manual invalidation is only supported via `./pants invalidate`. This is problematic for a few reasons:

1. "invalidate" is not a verb in pants user-space; it's only one we use internally in the codebase.
2. The `invalidate` goal acts globally, invalidating all targets across all tasks.

As @benjyw suggested [here](https://github.com/pantsbuild/pants/pull/4660#pullrequestreview-43801196), a global recursive `--force` flag might take the place of the `invalidate` goal and also allow more targeted invalidation at the task level when placed to the right of a fully qualified goal name on the command line. To boot, `--force` likely makes more intuitive sense in pants user-space, i.e. force this command to run instead of skipping things.
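For concreteness, a minimal sketch of what the proposed task-level `--force` flag could look like, built only from the task APIs shown in `task.py` above (`register_options`, `get_options`, `invalidate`, `invalidated`). The `ForcedTask` class, its help text, and the body of `execute()` are invented for illustration, and `self.context.targets()` is assumed to be the usual way a task obtains its targets; this is a sketch, not part of the actual change below.

```python
# Hypothetical example: names and help text are illustrative only.
from pants.task.task import Task


class ForcedTask(Task):
  """Illustrates where a per-task --force flag would plug in."""

  @classmethod
  def register_options(cls, register):
    super(ForcedTask, cls).register_options(register)
    # fingerprint=True folds the flag into the task fingerprint, mirroring how
    # IndexJava registers its own --force option in the patch further down.
    register('--force', type=bool, fingerprint=True,
             help='Run this task on all targets, even if they are valid.')

  def execute(self):
    if self.get_options().force:
      # TaskBase.invalidate() clears this task's build invalidator, so every
      # target shows up in invalidation_check.invalid_vts below.
      self.invalidate()
    with self.invalidated(self.context.targets(),
                          invalidate_dependents=True) as invalidation_check:
      for vt in invalidation_check.invalid_vts:
        pass  # ... do the task's real work per versioned target ...
```

With a registration like this, forcing is scoped to a single task rather than wiping the global build invalidator the way the `invalidate` goal does.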
2017-06-20T17:23:48Z
<patch> diff --git a/contrib/kythe/src/python/pants/contrib/kythe/tasks/index_java.py b/contrib/kythe/src/python/pants/contrib/kythe/tasks/index_java.py --- a/contrib/kythe/src/python/pants/contrib/kythe/tasks/index_java.py +++ b/contrib/kythe/src/python/pants/contrib/kythe/tasks/index_java.py @@ -37,7 +37,8 @@ def prepare(cls, options, round_manager): def register_options(cls, register): super(IndexJava, cls).register_options(register) register('--force', type=bool, fingerprint=True, - help='Re-index all targets, even if they are valid.') + help='Re-index all targets, even if they are valid.', + removal_version='1.6.0.dev0', removal_hint='Use --cache-ignore instead.') cls.register_jvm_tool(register, 'kythe-indexer', main=cls._KYTHE_INDEXER_MAIN) @@ -50,6 +51,8 @@ def entries_file(_vt): with self.invalidated(indexable_targets, invalidate_dependents=True) as invalidation_check: kindex_files = self.context.products.get_data('kindex_files') + # TODO(John Sirois): `vts_to_index` should be inlined to `invalidation_check.invalid_vts` + # when the deprecation cycle for `--force` is completed. vts_to_index = (invalidation_check.all_vts if self.get_options().force else invalidation_check.invalid_vts) diff --git a/src/python/pants/cache/cache_setup.py b/src/python/pants/cache/cache_setup.py --- a/src/python/pants/cache/cache_setup.py +++ b/src/python/pants/cache/cache_setup.py @@ -50,6 +50,8 @@ class CacheSetup(Subsystem): def register_options(cls, register): super(CacheSetup, cls).register_options(register) default_cache = [os.path.join(get_buildroot(), '.cache')] + register('--ignore', type=bool, + help='Ignore all other cache configuration and skip using the cache.') register('--read', type=bool, default=True, help='Read build artifacts from cache, if available.') register('--write', type=bool, default=True, @@ -129,11 +131,15 @@ def __init__(self, options, log, stable_name, pinger=None, resolver=None): else: self._resolver = NoopResolver() + @property + def ignore(self): + return self._options.ignore + def read_cache_available(self): - return self._options.read and bool(self._options.read_from) and self.get_read_cache() + return not self.ignore and self._options.read and self.get_read_cache() def write_cache_available(self): - return self._options.write and bool(self._options.write_to) and self.get_write_cache() + return not self.ignore and self._options.write and self.get_write_cache() def overwrite(self): return self._options.overwrite diff --git a/src/python/pants/core_tasks/invalidate.py b/src/python/pants/core_tasks/invalidate.py --- a/src/python/pants/core_tasks/invalidate.py +++ b/src/python/pants/core_tasks/invalidate.py @@ -5,15 +5,17 @@ from __future__ import (absolute_import, division, generators, nested_scopes, print_function, unicode_literals, with_statement) -import os - +from pants.base.deprecated import deprecated from pants.task.task import Task -from pants.util.dirutil import safe_rmtree class Invalidate(Task): """Invalidate the entire build.""" + @deprecated(removal_version='1.6.0.dev0', + hint_message='Use `./pants --cache-ignore ...` instead.') def execute(self): - build_invalidator_dir = os.path.join(self.get_options().pants_workdir, 'build_invalidator') - safe_rmtree(build_invalidator_dir) + # TODO(John Sirois): Remove the `root` argument `_build_invalidator` once this deprecation cycle + # is complete. 
This is the only caller using the argument: + # https://github.com/pantsbuild/pants/issues/4697 + self._build_invalidator(root=True).force_invalidate_all() diff --git a/src/python/pants/invalidation/build_invalidator.py b/src/python/pants/invalidation/build_invalidator.py --- a/src/python/pants/invalidation/build_invalidator.py +++ b/src/python/pants/invalidation/build_invalidator.py @@ -13,6 +13,7 @@ from pants.base.hash_utils import hash_all from pants.build_graph.target import Target from pants.fs.fs import safe_filename +from pants.subsystem.subsystem import Subsystem from pants.util.dirutil import safe_mkdir @@ -93,8 +94,30 @@ def key_for_target(self, target, transitive=False, fingerprint_strategy=None): class BuildInvalidator(object): """Invalidates build targets based on the SHA1 hash of source files and other inputs.""" - def __init__(self, root): - self._root = os.path.join(root, GLOBAL_CACHE_KEY_GEN_VERSION) + class Factory(Subsystem): + options_scope = 'build-invalidator' + + @classmethod + def create(cls, build_task=None): + """Creates a build invalidator optionally scoped to a task. + + :param str build_task: An optional task name to scope the build invalidator to. If not + supplied the build invalidator will act globally across all build + tasks. + """ + root = os.path.join(cls.global_instance().get_options().pants_workdir, 'build_invalidator') + return BuildInvalidator(root, scope=build_task) + + def __init__(self, root, scope=None): + """Create a build invalidator using the given root fingerprint database directory. + + :param str root: The root directory to use for storing build invalidation fingerprints. + :param str scope: The scope of this invalidator; if `None` then this invalidator will be global. + """ + root = os.path.join(root, GLOBAL_CACHE_KEY_GEN_VERSION) + if scope: + root = os.path.join(root, scope) + self._root = root safe_mkdir(self._root) def previous_key(self, cache_key): diff --git a/src/python/pants/invalidation/cache_manager.py b/src/python/pants/invalidation/cache_manager.py --- a/src/python/pants/invalidation/cache_manager.py +++ b/src/python/pants/invalidation/cache_manager.py @@ -245,7 +245,7 @@ class CacheValidationError(Exception): def __init__(self, results_dir_root, cache_key_generator, - build_invalidator_dir, + build_invalidator, invalidate_dependents, fingerprint_strategy=None, invalidation_report=None, @@ -259,7 +259,7 @@ def __init__(self, self._task_name = task_name or 'UNKNOWN' self._task_version = task_version or 'Unknown_0' self._invalidate_dependents = invalidate_dependents - self._invalidator = BuildInvalidator(build_invalidator_dir) + self._invalidator = build_invalidator self._fingerprint_strategy = fingerprint_strategy self._artifact_write_callback = artifact_write_callback self.invalidation_report = invalidation_report diff --git a/src/python/pants/task/task.py b/src/python/pants/task/task.py --- a/src/python/pants/task/task.py +++ b/src/python/pants/task/task.py @@ -86,7 +86,8 @@ def _compute_stable_name(cls): @classmethod def subsystem_dependencies(cls): - return super(TaskBase, cls).subsystem_dependencies() + (CacheSetup.scoped(cls),) + return super(TaskBase, cls).subsystem_dependencies() + (CacheSetup.scoped(cls), + BuildInvalidator.Factory) @classmethod def product_types(cls): @@ -175,15 +176,14 @@ def __init__(self, *args, **kwargs): self._workdir = workdir self._cache_key_errors = set() - - self._build_invalidator_dir = os.path.join( - self.context.options.for_global_scope().pants_workdir, - 'build_invalidator', - 
self.stable_name()) - self._cache_factory = CacheSetup.create_cache_factory_for_task(self) - self._options_fingerprinter = OptionsFingerprinter(self.context.build_graph) + self._force_invalidated = False + + @memoized_method + def _build_invalidator(self, root=False): + build_task = None if root else self.stable_name() + return BuildInvalidator.Factory.create(build_task=build_task) def get_options(self): """Returns the option values for this task's scope. @@ -248,7 +248,7 @@ def artifact_cache_writes_enabled(self): def invalidate(self): """Invalidates all targets for this task.""" - BuildInvalidator(self._build_invalidator_dir).force_invalidate_all() + self._build_invalidator().force_invalidate_all() @property def create_target_dirs(self): @@ -333,7 +333,7 @@ def invalidated(self, self.fingerprint) cache_manager = InvalidationCacheManager(self.workdir, cache_key_generator, - self._build_invalidator_dir, + self._build_invalidator(), invalidate_dependents, fingerprint_strategy=fingerprint_strategy, invalidation_report=self.context.invalidation_report, @@ -341,6 +341,11 @@ def invalidated(self, task_version=self.implementation_version_str(), artifact_write_callback=self.maybe_write_artifact) + # If this Task's execution has been forced, invalidate all our target fingerprints. + if self._cache_factory.ignore and not self._force_invalidated: + self.invalidate() + self._force_invalidated = True + invalidation_check = cache_manager.check(targets, topological_order=topological_order) self._maybe_create_results_dirs(invalidation_check.all_vts) </patch>
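As a rough usage sketch of the change above (the workdir path and scope name here are illustrative, not taken from the diff): the invalidator is now constructed per task scope, and `--cache-ignore` causes a task to invalidate only its own fingerprints rather than wiping the shared `build_invalidator` directory the way the deprecated `invalidate` goal did.

```python
# Hypothetical paths/names for illustration only.
from pants.invalidation.build_invalidator import BuildInvalidator

root = '/tmp/pants-workdir/build_invalidator'  # <pants_workdir>/build_invalidator

# What BuildInvalidator.Factory.create() builds internally after the patch:
global_invalidator = BuildInvalidator(root)                        # root=True case
task_invalidator = BuildInvalidator(root, scope='compile_MyTask')  # per-task case

# `./pants --cache-ignore ...` now routes invalidation through the task-scoped
# instance via TaskBase.invalidate():
task_invalidator.force_invalidate_all()
```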
[]
[]
pantsbuild__pants-7115
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> ergonomically typechecking datatype tuple contents As per [this thread in `#engine` on the pantsbuild slack a few days ago](https://pantsbuild.slack.com/archives/C0D7TNJHL/p1544557163035200): > @illicitonion: Right now we can check that a field is a tuple, but that's less useful documentation than `tuple<HydratedTarget>` > @cosmicexplorer: there's a [comment i left in `Collection.of()`](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L412) to type-check its contents somehow, let me look at that for a sec There are two ways I was thinking about this in the default values for datatypes diff (#6374) (which does some horrible things that we shouldn't do here, or yet): 1. Where we check types in ctors, use [`satisfied_by`](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L305), and allow `TypeConstraint`s to do arbitrary checking things there, with the knowledge that it runs synchronously. I don't think this is a truly horrible idea, but it is a pretty general construction that may be hard to remove later. 2. Expand what we allow for `obj_type` in [`satisfied_by_type`](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L313) (for example) to also allow an instance of some other specific class which represents a collection (also not a bad idea, less scary and probably cleaner -- **I think we should do this**). If we do (2), we may want to [do what we've done with the datatype `__eq__()` method and have a canary](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L67) so `satisfied_by()` can't be overridden (which gives us the ability to do (1) later, if we want). One initial implementation might be: we add a `@classproperty` to datatypes which returns a tuple of types for the datatype fields, or instead of a type, a field may instead be associated with an object that represents a parameterized collection. [This is essentially what we are informally doing in the the type-checked `__new__()` method already](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L75) -- the only addition would be to edit all the `TypeConstraint`s that currently exist (thankfully all are in `objects.py`) to support checking against a `type`, or a new collection type, which might look something like: ```python class TypedTuple(object): def __init__(self, element_type): assert(isinstance(element_type, type)) self.element_type = element_type ``` And `TypeConstraint`s would take care of checking the elements of the tuple -- this could be made simple(r) with some helper method in the base `TypeConstraint` class. *Note*: only tuples are immediately hashable, and so for use in the engine, we would want these to be tuples instead of lists -- I think we can absolutely expand this later to include unordered collections if that becomes useful/necessary. Starting off with `satisfied_by_type()` as described in (2) seems to be the narrowest and most hygeinic scope to start off with, as `validate_satisfied_by()` delegates to this eventually anyway. </issue> <code> [start of README.md] 1 # Pants Build System 2 3 Pants is a build system for software projects in a variety of languages. 
4 It works particularly well for a source code repository that contains 5 many distinct projects. 6 7 Friendly documentation: http://www.pantsbuild.org/ 8 9 We release to [PyPI](https://pypi.org/pypi) 10 [![version](https://img.shields.io/pypi/v/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 11 [![license](https://img.shields.io/pypi/l/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 12 13 We use [Travis CI](https://travis-ci.org) to verify the build 14 [![Build Status](https://travis-ci.org/pantsbuild/pants.svg?branch=master)](https://travis-ci.org/pantsbuild/pants/branches). 15 16 We use [Coveralls](https://coveralls.io) to monitor test coverage 17 [![Coverage Status](https://coveralls.io/repos/pantsbuild/pants/badge.png?branch=master)](https://coveralls.io/r/pantsbuild/pants). 18 19 # Requirements 20 21 At a minimum, pants requires the following to run properly: 22 23 * Linux or Mac OS X 24 * Python 2.7.x (the latest stable version of 2.7 is recommended) 25 * A C compiler, system headers, Python headers (to compile native Python modules) and the libffi 26 library and headers (to compile and link modules that use CFFI to access native code). 27 * Internet access (so that pants can fully bootstrap itself) 28 29 Additionally, if you use the jvm backend to work with java or scala code (installed by default): 30 31 * OpenJDK or Oracle JDK version 7 or greater 32 33 [end of README.md] [start of src/python/pants/base/hash_utils.py] 1 # coding=utf-8 2 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 4 5 from __future__ import absolute_import, division, print_function, unicode_literals 6 7 import hashlib 8 import json 9 from builtins import bytes, object, open, str 10 11 from future.utils import PY3 12 from twitter.common.collections import OrderedSet 13 14 from pants.base.deprecated import deprecated 15 from pants.util.collections_abc_backport import Iterable, Mapping, OrderedDict, Set 16 from pants.util.strutil import ensure_binary 17 18 19 def hash_all(strs, digest=None): 20 """Returns a hash of the concatenation of all the strings in strs. 21 22 If a hashlib message digest is not supplied a new sha1 message digest is used. 23 """ 24 digest = digest or hashlib.sha1() 25 for s in strs: 26 s = ensure_binary(s) 27 digest.update(s) 28 return digest.hexdigest() if PY3 else digest.hexdigest().decode('utf-8') 29 30 31 def hash_file(path, digest=None): 32 """Hashes the contents of the file at the given path and returns the hash digest in hex form. 33 34 If a hashlib message digest is not supplied a new sha1 message digest is used. 35 """ 36 digest = digest or hashlib.sha1() 37 with open(path, 'rb') as fd: 38 s = fd.read(8192) 39 while s: 40 digest.update(s) 41 s = fd.read(8192) 42 return digest.hexdigest() if PY3 else digest.hexdigest().decode('utf-8') 43 44 45 class CoercingEncoder(json.JSONEncoder): 46 """An encoder which performs coercions in order to serialize many otherwise illegal objects. 47 48 The python documentation (https://docs.python.org/2/library/json.html#json.dumps) states that 49 dict keys are coerced to strings in json.dumps, but this appears to be incorrect -- it throws a 50 TypeError on things we might to throw at it, like a set, or a dict with tuple keys. 51 """ 52 53 def _maybe_encode_dict_key(self, key_obj): 54 # If dict keys aren't strings, recursively encode them until they are. 
Checking for strings here 55 # means we don't touch keys that are already strings (instead of quoting them). 56 if isinstance(key_obj, bytes): 57 # Bytes often occur as dict keys in python 2 code, but in python 3, trying to encode bytes 58 # keys raises a TypeError. We explicitly check for that here and convert to str. 59 return self.default(key_obj.decode('utf-8')) 60 elif isinstance(key_obj, str): 61 return self.default(key_obj) 62 else: 63 return self.encode(key_obj) 64 65 def default(self, o): 66 if isinstance(o, Mapping): 67 # Preserve order to avoid collisions for OrderedDict inputs to json.dumps(). We don't do this 68 # for general mappings because dicts have an arbitrary key ordering in some versions of python 69 # 3 (2.7 and 3.6-3.7 are known to have sorted keys, but with different definitions of sorted 70 # orders across versions, including insertion order). We want unordered dicts to collide if 71 # they have the same keys, in the same way we special-case sets below. Calling sorted() should 72 # be very fast if the keys happen to be pre-sorted. Pants options don't support OrderedDict 73 # inputs, and allowing them creates an ambiguity we don't need to deal with right now. See 74 # discussion in #6475. 75 if isinstance(o, OrderedDict): 76 raise TypeError('{cls} does not support OrderedDict inputs: {val!r}.' 77 .format(cls=type(self).__name__, val=o)) 78 # TODO(#7082): we can remove the sorted() and OrderedDict when we drop python 2.7 and simply 79 # ensure we encode the keys/values as we do right here. 80 ordered_kv_pairs = sorted(o.items(), key=lambda x: x[0]) 81 return OrderedDict( 82 (self._maybe_encode_dict_key(k), self.default(v)) 83 for k, v in ordered_kv_pairs) 84 elif isinstance(o, Set): 85 # We disallow OrderedSet (although it is not a stdlib collection) for the same reasons as 86 # OrderedDict above. 87 if isinstance(o, OrderedSet): 88 raise TypeError('{cls} does not support OrderedSet inputs: {val!r}.' 89 .format(cls=type(self).__name__, val=o)) 90 # Set order is arbitrary in python 3.6 and 3.7, so we need to keep this sorted() call. 91 return sorted(self.default(i) for i in o) 92 elif isinstance(o, Iterable) and not isinstance(o, (bytes, list, str)): 93 return list(self.default(i) for i in o) 94 return o 95 96 def encode(self, o): 97 return super(CoercingEncoder, self).encode(self.default(o)) 98 99 100 @deprecated( 101 '1.16.0.dev1', 102 'Please use pants.base.hash_utils.stable_json_sha1 instead.') 103 def stable_json_hash(obj, digest=None, encoder=None): 104 """Hashes `obj` stably; ie repeated calls with the same inputs will produce the same hash. 105 106 :param obj: An object that can be rendered to json using the given `encoder`. 107 :param digest: An optional `hashlib` compatible message digest. Defaults to `hashlib.sha1`. 108 :param encoder: An optional custom json encoder. 109 :type encoder: :class:`json.JSONEncoder` 110 :returns: A stable hash of the given `obj`. 111 :rtype: str 112 113 :API: public 114 """ 115 return json_hash(obj, digest=digest, encoder=encoder) 116 117 118 def json_hash(obj, digest=None, encoder=None): 119 """Hashes `obj` by dumping to JSON. 120 121 :param obj: An object that can be rendered to json using the given `encoder`. 122 :param digest: An optional `hashlib` compatible message digest. Defaults to `hashlib.sha1`. 123 :param encoder: An optional custom json encoder. 124 :type encoder: :class:`json.JSONEncoder` 125 :returns: A hash of the given `obj` according to the given `encoder`. 
126 :rtype: str 127 128 :API: public 129 """ 130 json_str = json.dumps(obj, ensure_ascii=True, allow_nan=False, sort_keys=True, cls=encoder) 131 return hash_all(json_str, digest=digest) 132 133 134 # TODO(#6513): something like python 3's @lru_cache decorator could be useful here! 135 def stable_json_sha1(obj, digest=None): 136 """Hashes `obj` stably; ie repeated calls with the same inputs will produce the same hash. 137 138 :param obj: An object that can be rendered to json using a :class:`CoercingEncoder`. 139 :param digest: An optional `hashlib` compatible message digest. Defaults to `hashlib.sha1`. 140 :returns: A stable hash of the given `obj`. 141 :rtype: str 142 143 :API: public 144 """ 145 return json_hash(obj, digest=digest, encoder=CoercingEncoder) 146 147 148 class Sharder(object): 149 """Assigns strings to shards pseudo-randomly, but stably.""" 150 151 class InvalidShardSpec(Exception): 152 """Indicates an invalid shard spec.""" 153 154 def __init__(self, shard_spec): 155 """ 156 :param string shard_spec: A string of the form M/N where M, N are ints and 0 <= M < N. 157 """ 158 super(Sharder.InvalidShardSpec, self).__init__( 159 "Invalid shard spec '{}', should be of the form M/N, where M, N are ints " 160 "and 0 <= M < N.".format(shard_spec)) 161 162 @staticmethod 163 def compute_shard(s, mod): 164 """Computes the mod-hash of the given string, using a sha1 hash. 165 166 :param string s: The string to compute a shard for. 167 """ 168 return int(hash_all([s]), 16) % mod 169 170 def __init__(self, shard_spec): 171 """ 172 :param string shard_spec: A string of the form M/N where M, N are ints and 0 <= M < N. 173 """ 174 def ensure_int(s): 175 try: 176 return int(s) 177 except ValueError: 178 raise self.InvalidShardSpec(shard_spec) 179 180 if shard_spec is None: 181 raise self.InvalidShardSpec('None') 182 shard_str, _, nshards_str = shard_spec.partition('/') 183 self._shard = ensure_int(shard_str) 184 self._nshards = ensure_int(nshards_str) 185 186 if self._shard < 0 or self._shard >= self._nshards: 187 raise self.InvalidShardSpec(shard_spec) 188 189 def is_in_shard(self, s): 190 """Returns True iff the string s is in this shard. 191 192 :param string s: The string to check. 193 """ 194 return self.compute_shard(s, self._nshards) == self._shard 195 196 @property 197 def shard(self): 198 return self._shard 199 200 @property 201 def nshards(self): 202 return self._nshards 203 [end of src/python/pants/base/hash_utils.py] [start of src/python/pants/engine/rules.py] 1 # coding=utf-8 2 # Copyright 2016 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 
4 5 from __future__ import absolute_import, division, print_function, unicode_literals 6 7 import ast 8 import functools 9 import inspect 10 import itertools 11 import logging 12 from abc import abstractproperty 13 from builtins import bytes, str 14 from types import GeneratorType 15 16 import asttokens 17 from future.utils import PY2 18 from twitter.common.collections import OrderedSet 19 20 from pants.engine.selectors import Get, type_or_constraint_repr 21 from pants.util.collections import assert_single_element 22 from pants.util.collections_abc_backport import Iterable, OrderedDict 23 from pants.util.memo import memoized 24 from pants.util.meta import AbstractClass 25 from pants.util.objects import Exactly, datatype 26 27 28 logger = logging.getLogger(__name__) 29 30 31 class _RuleVisitor(ast.NodeVisitor): 32 """Pull `Get` calls out of an @rule body and validate `yield` statements.""" 33 34 def __init__(self, func, func_node, func_source, orig_indent, frame, parents_table): 35 super(_RuleVisitor, self).__init__() 36 self._gets = [] 37 self._func = func 38 self._func_node = func_node 39 self._func_source = func_source 40 self._orig_indent = orig_indent 41 self._frame = frame 42 self._parents_table = parents_table 43 self._yields_in_assignments = set() 44 45 @property 46 def gets(self): 47 return self._gets 48 49 def _generate_ast_error_message(self, node, msg): 50 # This is the location info of the start of the decorated @rule. 51 filename, line_number, _, context_lines, _ = inspect.getframeinfo(self._frame, context=4) 52 53 # The asttokens library is able to keep track of line numbers and column offsets for us -- the 54 # stdlib ast library only provides these relative to each parent node. 55 tokenized_rule_body = asttokens.ASTTokens(self._func_source, 56 tree=self._func_node, 57 filename=filename) 58 start_offset, _ = tokenized_rule_body.get_text_range(node) 59 line_offset, col_offset = asttokens.LineNumbers(self._func_source).offset_to_line(start_offset) 60 node_file_line = line_number + line_offset - 1 61 # asttokens also very helpfully lets us provide the exact text of the node we want to highlight 62 # in an error message. 63 node_text = tokenized_rule_body.get_text(node) 64 65 fully_indented_node_col = col_offset + self._orig_indent 66 indented_node_text = '{}{}'.format( 67 # The node text doesn't have any initial whitespace, so we have to add it back. 68 col_offset * ' ', 69 '\n'.join( 70 # We removed the indentation from the original source in order to parse it with the ast 71 # library (otherwise it raises an exception), so we add it back here. 72 '{}{}'.format(self._orig_indent * ' ', l) 73 for l in node_text.split('\n'))) 74 75 return ("""In function {func_name}: {msg} 76 The invalid statement was: 77 {filename}:{node_line_number}:{node_col} 78 {node_text} 79 80 The rule defined by function `{func_name}` begins at: 81 {filename}:{line_number}:{orig_indent} 82 {context_lines} 83 """.format(func_name=self._func.__name__, msg=msg, 84 filename=filename, line_number=line_number, orig_indent=self._orig_indent, 85 node_line_number=node_file_line, 86 node_col=fully_indented_node_col, 87 node_text=indented_node_text, 88 # Strip any leading or trailing newlines from the start of the rule body. 
89 context_lines=''.join(context_lines).strip('\n'))) 90 91 class YieldVisitError(Exception): pass 92 93 @staticmethod 94 def _maybe_end_of_stmt_list(attr_value): 95 """If `attr_value` is a non-empty iterable, return its final element.""" 96 if (attr_value is not None) and isinstance(attr_value, Iterable): 97 result = list(attr_value) 98 if len(result) > 0: 99 return result[-1] 100 return None 101 102 def _stmt_is_at_end_of_parent_list(self, stmt): 103 """Determine if `stmt` is at the end of a list of statements (i.e. can be an implicit `return`). 104 105 If there are any statements following `stmt` at the same level of nesting, this method returns 106 False, such as the following (if `stmt` is the Expr for `yield 'good'`): 107 108 if 2 + 2 == 5: 109 yield 'good' 110 a = 3 111 112 Note that this returns False even if the statement following `stmt` is a `return`. 113 114 However, if `stmt` is at the end of a list of statements, it can be made more clear that `stmt` 115 is intended to represent a `return`. Another way to view this method is as a dead code 116 elimination check, for a `stmt` which is intended to represent control flow moving out of the 117 current @rule. For example, this method would return True for both of the yield Expr statements 118 in the below snippet. 119 120 if True: 121 yield 3 122 else: 123 a = 3 124 yield a 125 126 This checking is performed by getting the parent of `stmt` with a pre-generated table passed 127 into the constructor. 128 129 See https://docs.python.org/2/library/ast.html#abstract-grammar for the grammar specification. 130 'body', 'orelse', and 'finalbody' are the only attributes on any AST nodes which can contain 131 lists of stmts. 'body' is also an attribute in the Exec statement for some reason, but as a 132 single expr, so we simply check if it is iterable in `_maybe_end_of_stmt_list()`. 133 """ 134 parent_stmt = self._parents_table[stmt] 135 last_body_stmt = self._maybe_end_of_stmt_list(getattr(parent_stmt, 'body', None)) 136 if stmt == last_body_stmt: 137 return True 138 last_orelse_stmt = self._maybe_end_of_stmt_list(getattr(parent_stmt, 'orelse', None)) 139 if stmt == last_orelse_stmt: 140 return True 141 last_finally_stmt = self._maybe_end_of_stmt_list(getattr(parent_stmt, 'finalbody', None)) 142 if stmt == last_finally_stmt: 143 return True 144 return False 145 146 def visit_Call(self, node): 147 if isinstance(node.func, ast.Name) and node.func.id == Get.__name__: 148 self._gets.append(Get.extract_constraints(node)) 149 150 def visit_Assign(self, node): 151 if isinstance(node.value, ast.Yield): 152 self._yields_in_assignments.add(node.value) 153 self.generic_visit(node) 154 155 def visit_Yield(self, node): 156 if node in self._yields_in_assignments: 157 self.generic_visit(node) 158 else: 159 # The current yield "expr" is the child of an "Expr" "stmt". 160 expr_for_yield = self._parents_table[node] 161 162 if not self._stmt_is_at_end_of_parent_list(expr_for_yield): 163 raise self.YieldVisitError( 164 self._generate_ast_error_message(node, """\ 165 yield in @rule without assignment must come at the end of a series of statements. 166 167 A yield in an @rule without an assignment is equivalent to a return, and we 168 currently require that no statements follow such a yield at the same level of nesting. 169 Use `_ = yield Get(...)` if you wish to yield control to the engine and discard the result. 170 """)) 171 172 173 class _GoalProduct(object): 174 """GoalProduct is a factory for anonymous singleton types representing the execution of goals. 
175 176 The created types are returned by `@console_rule` instances, which may not have any outputs 177 of their own. 178 """ 179 PRODUCT_MAP = {} 180 181 @staticmethod 182 def _synthesize_goal_product(name): 183 product_type_name = '{}GoalExecution'.format(name.capitalize()) 184 if PY2: 185 product_type_name = product_type_name.encode('utf-8') 186 return type(product_type_name, (datatype([]),), {}) 187 188 @classmethod 189 def for_name(cls, name): 190 assert isinstance(name, (bytes, str)) 191 if name is bytes: 192 name = name.decode('utf-8') 193 if name not in cls.PRODUCT_MAP: 194 cls.PRODUCT_MAP[name] = cls._synthesize_goal_product(name) 195 return cls.PRODUCT_MAP[name] 196 197 198 def _terminated(generator, terminator): 199 """A generator that "appends" the given terminator value to the given generator.""" 200 gen_input = None 201 try: 202 while True: 203 res = generator.send(gen_input) 204 gen_input = yield res 205 except StopIteration: 206 yield terminator 207 208 209 @memoized 210 def optionable_rule(optionable_factory): 211 """Returns a TaskRule that constructs an instance of the Optionable for the given OptionableFactory. 212 213 TODO: This API is slightly awkward for two reasons: 214 1) We should consider whether Subsystems/Optionables should be constructed explicitly using 215 `@rule`s, which would allow them to have non-option dependencies that would be explicit in 216 their constructors (which would avoid the need for the `Subsystem.Factory` pattern). 217 2) Optionable depending on TaskRule would create a cycle in the Python package graph. 218 """ 219 return TaskRule(**optionable_factory.signature()) 220 221 222 def _get_starting_indent(source): 223 """Remove leading indentation from `source` so ast.parse() doesn't raise an exception.""" 224 if source.startswith(" "): 225 return sum(1 for _ in itertools.takewhile(lambda c: c in {' ', b' '}, source)) 226 return 0 227 228 229 def _make_rule(output_type, input_selectors, for_goal=None, cacheable=True): 230 """A @decorator that declares that a particular static function may be used as a TaskRule. 231 232 :param Constraint output_type: The return/output type for the Rule. This may be either a 233 concrete Python type, or an instance of `Exactly` representing a union of multiple types. 234 :param list input_selectors: A list of Selector instances that matches the number of arguments 235 to the @decorated function. 236 :param str for_goal: If this is a @console_rule, which goal string it's called for. 237 """ 238 239 def wrapper(func): 240 if not inspect.isfunction(func): 241 raise ValueError('The @rule decorator must be applied innermost of all decorators.') 242 243 caller_frame = inspect.stack()[1][0] 244 source = inspect.getsource(func) 245 beginning_indent = _get_starting_indent(source) 246 if beginning_indent: 247 source = "\n".join(line[beginning_indent:] for line in source.split("\n")) 248 module_ast = ast.parse(source) 249 250 def resolve_type(name): 251 resolved = caller_frame.f_globals.get(name) or caller_frame.f_builtins.get(name) 252 if not isinstance(resolved, (type, Exactly)): 253 # TODO: should this say "...or Exactly instance;"? 
254 raise ValueError('Expected either a `type` constructor or TypeConstraint instance; ' 255 'got: {}'.format(name)) 256 return resolved 257 258 gets = OrderedSet() 259 rule_func_node = assert_single_element( 260 node for node in ast.iter_child_nodes(module_ast) 261 if isinstance(node, ast.FunctionDef) and node.name == func.__name__ 262 ) 263 264 parents_table = {} 265 for parent in ast.walk(rule_func_node): 266 for child in ast.iter_child_nodes(parent): 267 parents_table[child] = parent 268 269 rule_visitor = _RuleVisitor( 270 func=func, 271 func_node=rule_func_node, 272 func_source=source, 273 orig_indent=beginning_indent, 274 frame=caller_frame, 275 parents_table=parents_table, 276 ) 277 rule_visitor.visit(rule_func_node) 278 gets.update(Get(resolve_type(p), resolve_type(s)) for p, s in rule_visitor.gets) 279 280 # For @console_rule, redefine the function to avoid needing a literal return of the output type. 281 if for_goal: 282 def goal_and_return(*args, **kwargs): 283 res = func(*args, **kwargs) 284 if isinstance(res, GeneratorType): 285 # Return a generator with an output_type instance appended. 286 return _terminated(res, output_type()) 287 elif res is not None: 288 raise Exception('A @console_rule should not have a return value.') 289 return output_type() 290 functools.update_wrapper(goal_and_return, func) 291 wrapped_func = goal_and_return 292 else: 293 wrapped_func = func 294 295 wrapped_func.rule = TaskRule( 296 output_type, 297 tuple(input_selectors), 298 wrapped_func, 299 input_gets=tuple(gets), 300 goal=for_goal, 301 cacheable=cacheable 302 ) 303 304 return wrapped_func 305 return wrapper 306 307 308 def rule(output_type, input_selectors): 309 return _make_rule(output_type, input_selectors) 310 311 312 def console_rule(goal_name, input_selectors): 313 output_type = _GoalProduct.for_name(goal_name) 314 return _make_rule(output_type, input_selectors, goal_name, False) 315 316 317 class Rule(AbstractClass): 318 """Rules declare how to produce products for the product graph. 319 320 A rule describes what dependencies must be provided to produce a particular product. They also act 321 as factories for constructing the nodes within the graph. 322 """ 323 324 @abstractproperty 325 def output_constraint(self): 326 """An output Constraint type for the rule.""" 327 328 @abstractproperty 329 def dependency_optionables(self): 330 """A tuple of Optionable classes that are known to be necessary to run this rule.""" 331 332 333 class TaskRule(datatype([ 334 'output_constraint', 335 ('input_selectors', tuple), 336 ('input_gets', tuple), 337 'func', 338 'goal', 339 ('dependency_optionables', tuple), 340 ('cacheable', bool), 341 ]), Rule): 342 """A Rule that runs a task function when all of its input selectors are satisfied. 343 344 NB: This API is experimental, and not meant for direct consumption. To create a `TaskRule` you 345 should always prefer the `@rule` constructor, and in cases where that is too constraining 346 (likely due to #4535) please bump or open a ticket to explain the usecase. 347 """ 348 349 def __new__(cls, 350 output_type, 351 input_selectors, 352 func, 353 input_gets, 354 goal=None, 355 dependency_optionables=None, 356 cacheable=True): 357 # Validate result type. 
358 if isinstance(output_type, Exactly): 359 constraint = output_type 360 elif isinstance(output_type, type): 361 constraint = Exactly(output_type) 362 else: 363 raise TypeError("Expected an output_type for rule `{}`, got: {}".format( 364 func.__name__, output_type)) 365 366 return super(TaskRule, cls).__new__( 367 cls, 368 constraint, 369 input_selectors, 370 input_gets, 371 func, 372 goal, 373 dependency_optionables or tuple(), 374 cacheable, 375 ) 376 377 def __str__(self): 378 return '({}, {!r}, {})'.format(type_or_constraint_repr(self.output_constraint), 379 self.input_selectors, 380 self.func.__name__) 381 382 383 class SingletonRule(datatype(['output_constraint', 'value']), Rule): 384 """A default rule for a product, which is thus a singleton for that product.""" 385 386 @classmethod 387 def from_instance(cls, obj): 388 return cls(type(obj), obj) 389 390 def __new__(cls, output_type, value): 391 # Validate result type. 392 if isinstance(output_type, Exactly): 393 constraint = output_type 394 elif isinstance(output_type, type): 395 constraint = Exactly(output_type) 396 else: 397 raise TypeError("Expected an output_type for rule; got: {}".format(output_type)) 398 399 # Create. 400 return super(SingletonRule, cls).__new__(cls, constraint, value) 401 402 @property 403 def dependency_optionables(self): 404 return tuple() 405 406 def __repr__(self): 407 return '{}({}, {})'.format(type(self).__name__, type_or_constraint_repr(self.output_constraint), self.value) 408 409 410 class RootRule(datatype(['output_constraint']), Rule): 411 """Represents a root input to an execution of a rule graph. 412 413 Roots act roughly like parameters, in that in some cases the only source of a 414 particular type might be when a value is provided as a root subject at the beginning 415 of an execution. 416 """ 417 418 @property 419 def dependency_optionables(self): 420 return tuple() 421 422 423 class RuleIndex(datatype(['rules', 'roots'])): 424 """Holds a normalized index of Rules used to instantiate Nodes.""" 425 426 @classmethod 427 def create(cls, rule_entries): 428 """Creates a RuleIndex with tasks indexed by their output type.""" 429 serializable_rules = OrderedDict() 430 serializable_roots = OrderedSet() 431 432 def add_task(product_type, rule): 433 if product_type not in serializable_rules: 434 serializable_rules[product_type] = OrderedSet() 435 serializable_rules[product_type].add(rule) 436 437 def add_rule(rule): 438 if isinstance(rule, RootRule): 439 serializable_roots.add(rule) 440 return 441 # TODO: Ensure that interior types work by indexing on the list of types in 442 # the constraint. This heterogenity has some confusing implications: 443 # see https://github.com/pantsbuild/pants/issues/4005 444 for kind in rule.output_constraint.types: 445 add_task(kind, rule) 446 add_task(rule.output_constraint, rule) 447 448 for entry in rule_entries: 449 if isinstance(entry, Rule): 450 add_rule(entry) 451 elif hasattr(entry, '__call__'): 452 rule = getattr(entry, 'rule', None) 453 if rule is None: 454 raise TypeError("Expected callable {} to be decorated with @rule.".format(entry)) 455 add_rule(rule) 456 else: 457 raise TypeError("Unexpected rule type: {}. 
" 458 "Rules either extend Rule, or are static functions " 459 "decorated with @rule.".format(type(entry))) 460 461 return cls(serializable_rules, serializable_roots) 462 463 def normalized_rules(self): 464 rules = OrderedSet(rule 465 for ruleset in self.rules.values() 466 for rule in ruleset) 467 rules.update(self.roots) 468 return rules 469 [end of src/python/pants/engine/rules.py] [start of src/python/pants/util/objects.py] 1 # coding=utf-8 2 # Copyright 2016 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 4 5 from __future__ import absolute_import, division, print_function, unicode_literals 6 7 import sys 8 from abc import abstractmethod 9 from builtins import object, zip 10 from collections import namedtuple 11 12 from future.utils import PY2 13 from twitter.common.collections import OrderedSet 14 15 from pants.util.collections_abc_backport import OrderedDict 16 from pants.util.memo import memoized, memoized_classproperty 17 from pants.util.meta import AbstractClass 18 19 20 def datatype(field_decls, superclass_name=None, **kwargs): 21 """A wrapper for `namedtuple` that accounts for the type of the object in equality. 22 23 Field declarations can be a string, which declares a field with that name and 24 no type checking. Field declarations can also be a tuple `('field_name', 25 field_type)`, which declares a field named `field_name` which is type-checked 26 at construction. If a type is given, the value provided to the constructor for 27 that field must be exactly that type (i.e. `type(x) == field_type`), and not 28 e.g. a subclass. 29 30 :param field_decls: Iterable of field declarations. 31 :return: A type object which can then be subclassed. 32 :raises: :class:`TypeError` 33 """ 34 field_names = [] 35 fields_with_constraints = OrderedDict() 36 for maybe_decl in field_decls: 37 # ('field_name', type) 38 if isinstance(maybe_decl, tuple): 39 field_name, type_spec = maybe_decl 40 if isinstance(type_spec, type): 41 type_constraint = Exactly(type_spec) 42 elif isinstance(type_spec, TypeConstraint): 43 type_constraint = type_spec 44 else: 45 raise TypeError( 46 "type spec for field '{}' was not a type or TypeConstraint: was {!r} (type {!r})." 47 .format(field_name, type_spec, type(type_spec).__name__)) 48 fields_with_constraints[field_name] = type_constraint 49 else: 50 # interpret it as a field name without a type to check 51 field_name = maybe_decl 52 # namedtuple() already checks field uniqueness 53 field_names.append(field_name) 54 55 if not superclass_name: 56 superclass_name = '_anonymous_namedtuple_subclass' 57 58 namedtuple_cls = namedtuple(superclass_name, field_names, **kwargs) 59 60 class DataType(namedtuple_cls): 61 @classmethod 62 def make_type_error(cls, msg, *args, **kwargs): 63 return TypeCheckError(cls.__name__, msg, *args, **kwargs) 64 65 def __new__(cls, *args, **kwargs): 66 # TODO: Ideally we could execute this exactly once per `cls` but it should be a 67 # relatively cheap check. 68 if not hasattr(cls.__eq__, '_eq_override_canary'): 69 raise cls.make_type_error('Should not override __eq__.') 70 71 try: 72 this_object = super(DataType, cls).__new__(cls, *args, **kwargs) 73 except TypeError as e: 74 raise cls.make_type_error(e) 75 76 # TODO: Make this kind of exception pattern (filter for errors then display them all at once) 77 # more ergonomic. 
78 type_failure_msgs = [] 79 for field_name, field_constraint in fields_with_constraints.items(): 80 field_value = getattr(this_object, field_name) 81 try: 82 field_constraint.validate_satisfied_by(field_value) 83 except TypeConstraintError as e: 84 type_failure_msgs.append( 85 "field '{}' was invalid: {}".format(field_name, e)) 86 if type_failure_msgs: 87 raise cls.make_type_error('\n'.join(type_failure_msgs)) 88 89 return this_object 90 91 def __eq__(self, other): 92 if self is other: 93 return True 94 95 # Compare types and fields. 96 if type(self) != type(other): 97 return False 98 # Explicitly return super.__eq__'s value in case super returns NotImplemented 99 return super(DataType, self).__eq__(other) 100 # We define an attribute on the `cls` level definition of `__eq__` that will allow us to detect 101 # that it has been overridden. 102 __eq__._eq_override_canary = None 103 104 def __ne__(self, other): 105 return not (self == other) 106 107 def __hash__(self): 108 return super(DataType, self).__hash__() 109 110 # NB: As datatype is not iterable, we need to override both __iter__ and all of the 111 # namedtuple methods that expect self to be iterable. 112 def __iter__(self): 113 raise TypeError("'{}' object is not iterable".format(type(self).__name__)) 114 115 def _super_iter(self): 116 return super(DataType, self).__iter__() 117 118 def _asdict(self): 119 '''Return a new OrderedDict which maps field names to their values''' 120 return OrderedDict(zip(self._fields, self._super_iter())) 121 122 def _replace(_self, **kwds): 123 '''Return a new datatype object replacing specified fields with new values''' 124 field_dict = _self._asdict() 125 field_dict.update(**kwds) 126 return type(_self)(**field_dict) 127 128 copy = _replace 129 130 # NB: it is *not* recommended to rely on the ordering of the tuple returned by this method. 131 def __getnewargs__(self): 132 '''Return self as a plain tuple. Used by copy and pickle.''' 133 return tuple(self._super_iter()) 134 135 def __repr__(self): 136 args_formatted = [] 137 for field_name in field_names: 138 field_value = getattr(self, field_name) 139 args_formatted.append("{}={!r}".format(field_name, field_value)) 140 return '{class_name}({args_joined})'.format( 141 class_name=type(self).__name__, 142 args_joined=', '.join(args_formatted)) 143 144 def __str__(self): 145 elements_formatted = [] 146 for field_name in field_names: 147 constraint_for_field = fields_with_constraints.get(field_name, None) 148 field_value = getattr(self, field_name) 149 if not constraint_for_field: 150 elements_formatted.append( 151 # TODO: consider using the repr of arguments in this method. 152 "{field_name}={field_value}" 153 .format(field_name=field_name, 154 field_value=field_value)) 155 else: 156 elements_formatted.append( 157 "{field_name}<{type_constraint}>={field_value}" 158 .format(field_name=field_name, 159 type_constraint=constraint_for_field, 160 field_value=field_value)) 161 return '{class_name}({typed_tagged_elements})'.format( 162 class_name=type(self).__name__, 163 typed_tagged_elements=', '.join(elements_formatted)) 164 165 # Return a new type with the given name, inheriting from the DataType class 166 # just defined, with an empty class body. 167 try: # Python3 168 return type(superclass_name, (DataType,), {}) 169 except TypeError: # Python2 170 return type(superclass_name.encode('utf-8'), (DataType,), {}) 171 172 173 def enum(field_name, all_values): 174 """A datatype which can take on a finite set of values. This method is experimental and unstable. 
175 176 Any enum subclass can be constructed with its create() classmethod. This method will use the first 177 element of `all_values` as the enum value if none is specified. 178 179 :param field_name: A string used as the field for the datatype. Note that enum does not yet 180 support type checking as with datatype. 181 :param all_values: An iterable of objects representing all possible values for the enum. 182 NB: `all_values` must be a finite, non-empty iterable with unique values! 183 """ 184 185 # This call to list() will eagerly evaluate any `all_values` which would otherwise be lazy, such 186 # as a generator. 187 all_values_realized = list(all_values) 188 # `OrderedSet` maintains the order of the input iterable, but is faster to check membership. 189 allowed_values_set = OrderedSet(all_values_realized) 190 191 if len(allowed_values_set) < len(all_values_realized): 192 raise ValueError("When converting all_values ({}) to a set, at least one duplicate " 193 "was detected. The unique elements of all_values were: {}." 194 .format(all_values_realized, allowed_values_set)) 195 196 class ChoiceDatatype(datatype([field_name])): 197 allowed_values = allowed_values_set 198 default_value = next(iter(allowed_values)) 199 200 @memoized_classproperty 201 def _singletons(cls): 202 """Generate memoized instances of this enum wrapping each of this enum's allowed values.""" 203 return { value: cls(value) for value in cls.allowed_values } 204 205 @classmethod 206 def _check_value(cls, value): 207 if value not in cls.allowed_values: 208 raise cls.make_type_error( 209 "Value {!r} for '{}' must be one of: {!r}." 210 .format(value, field_name, cls.allowed_values)) 211 212 @classmethod 213 def create(cls, value=None): 214 # If we get an instance of this enum class, just return it. This means you can call .create() 215 # on None, an allowed value for the enum, or an existing instance of the enum. 216 if isinstance(value, cls): 217 return value 218 219 # Providing an explicit value that is not None will *not* use the default value! 220 if value is None: 221 value = cls.default_value 222 223 # We actually circumvent the constructor in this method due to the cls._singletons 224 # memoized_classproperty, but we want to raise the same error, so we move checking into a 225 # common method. 226 cls._check_value(value) 227 228 return cls._singletons[value] 229 230 def __new__(cls, *args, **kwargs): 231 this_object = super(ChoiceDatatype, cls).__new__(cls, *args, **kwargs) 232 233 field_value = getattr(this_object, field_name) 234 235 cls._check_value(field_value) 236 237 return this_object 238 239 return ChoiceDatatype 240 241 242 class TypedDatatypeClassConstructionError(Exception): 243 244 # TODO: make some wrapper exception class to make this kind of 245 # prefixing easy (maybe using a class field format string?). 
246 def __init__(self, type_name, msg, *args, **kwargs): 247 full_msg = "error: while trying to generate typed datatype {}: {}".format( 248 type_name, msg) 249 super(TypedDatatypeClassConstructionError, self).__init__( 250 full_msg, *args, **kwargs) 251 252 253 class TypedDatatypeInstanceConstructionError(TypeError): 254 255 def __init__(self, type_name, msg, *args, **kwargs): 256 full_msg = "error: in constructor of type {}: {}".format(type_name, msg) 257 super(TypedDatatypeInstanceConstructionError, self).__init__( 258 full_msg, *args, **kwargs) 259 260 261 class TypeCheckError(TypedDatatypeInstanceConstructionError): 262 263 def __init__(self, type_name, msg, *args, **kwargs): 264 formatted_msg = "type check error:\n{}".format(msg) 265 super(TypeCheckError, self).__init__( 266 type_name, formatted_msg, *args, **kwargs) 267 268 269 class TypeConstraintError(TypeError): 270 """Indicates a :class:`TypeConstraint` violation.""" 271 272 273 class TypeConstraint(AbstractClass): 274 """Represents a type constraint. 275 276 Not intended for direct use; instead, use one of :class:`SuperclassesOf`, :class:`Exact` or 277 :class:`SubclassesOf`. 278 """ 279 280 def __init__(self, *types, **kwargs): 281 """Creates a type constraint centered around the given types. 282 283 The type constraint is satisfied as a whole if satisfied for at least one of the given types. 284 285 :param type *types: The focus of this type constraint. 286 :param str description: A description for this constraint if the list of types is too long. 287 """ 288 if not types: 289 raise ValueError('Must supply at least one type') 290 if any(not isinstance(t, type) for t in types): 291 raise TypeError('Supplied types must be types. {!r}'.format(types)) 292 293 # NB: `types` is converted to tuple here because self.types's docstring says 294 # it returns a tuple. Does it matter what type this field is? 295 self._types = tuple(types) 296 self._desc = kwargs.get('description', None) 297 298 @property 299 def types(self): 300 """Return the subject types of this type constraint. 301 302 :type: tuple of type 303 """ 304 return self._types 305 306 def satisfied_by(self, obj): 307 """Return `True` if the given object satisfies this type constraint. 308 309 :rtype: bool 310 """ 311 return self.satisfied_by_type(type(obj)) 312 313 @abstractmethod 314 def satisfied_by_type(self, obj_type): 315 """Return `True` if the given object satisfies this type constraint. 316 317 :rtype: bool 318 """ 319 320 def validate_satisfied_by(self, obj): 321 """Return `obj` if the object satisfies this type constraint, or raise. 322 323 :raises: `TypeConstraintError` if `obj` does not satisfy the constraint. 324 """ 325 326 if self.satisfied_by(obj): 327 return obj 328 329 raise TypeConstraintError( 330 "value {!r} (with type {!r}) must satisfy this type constraint: {!r}." 
331 .format(obj, type(obj).__name__, self)) 332 333 def __hash__(self): 334 return hash((type(self), self._types)) 335 336 def __eq__(self, other): 337 return type(self) == type(other) and self._types == other._types 338 339 def __ne__(self, other): 340 return not (self == other) 341 342 def __str__(self): 343 if self._desc: 344 constrained_type = '({})'.format(self._desc) 345 else: 346 if len(self._types) == 1: 347 constrained_type = self._types[0].__name__ 348 else: 349 constrained_type = '({})'.format(', '.join(t.__name__ for t in self._types)) 350 return '{variance_symbol}{constrained_type}'.format(variance_symbol=self._variance_symbol, 351 constrained_type=constrained_type) 352 353 def __repr__(self): 354 if self._desc: 355 constrained_type = self._desc 356 else: 357 constrained_type = ', '.join(t.__name__ for t in self._types) 358 return ('{type_constraint_type}({constrained_type})' 359 .format(type_constraint_type=type(self).__name__, 360 constrained_type=constrained_type)) 361 362 363 class SuperclassesOf(TypeConstraint): 364 """Objects of the exact type as well as any super-types are allowed.""" 365 366 _variance_symbol = '-' 367 368 def satisfied_by_type(self, obj_type): 369 return any(issubclass(t, obj_type) for t in self._types) 370 371 372 class Exactly(TypeConstraint): 373 """Only objects of the exact type are allowed.""" 374 375 _variance_symbol = '=' 376 377 def satisfied_by_type(self, obj_type): 378 return obj_type in self._types 379 380 def graph_str(self): 381 if len(self.types) == 1: 382 return self.types[0].__name__ 383 else: 384 return repr(self) 385 386 387 class SubclassesOf(TypeConstraint): 388 """Objects of the exact type as well as any sub-types are allowed.""" 389 390 _variance_symbol = '+' 391 392 def satisfied_by_type(self, obj_type): 393 return issubclass(obj_type, self._types) 394 395 396 class Collection(object): 397 """Constructs classes representing collections of objects of a particular type. 398 399 The produced class will expose its values under a field named dependencies - this is a stable API 400 which may be consumed e.g. over FFI from the engine. 401 402 Python consumers of a Collection should prefer to use its standard iteration API. 403 """ 404 # TODO: could we check that the input is iterable in the ctor? 405 406 @classmethod 407 @memoized 408 def of(cls, *element_types): 409 union = '|'.join(element_type.__name__ for element_type in element_types) 410 type_name = '{}.of({})'.format(cls.__name__, union) 411 if PY2: 412 type_name = type_name.encode('utf-8') 413 # TODO: could we allow type checking in the datatype() invocation here? 414 supertypes = (cls, datatype(['dependencies'], superclass_name='Collection')) 415 properties = {'element_types': element_types} 416 collection_of_type = type(type_name, supertypes, properties) 417 418 # Expose the custom class type at the module level to be pickle compatible. 419 setattr(sys.modules[cls.__module__], type_name, collection_of_type) 420 421 return collection_of_type 422 423 def __iter__(self): 424 return iter(self.dependencies) 425 [end of src/python/pants/util/objects.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pantsbuild/pants
224c2a0d4c64bc69f3ebc2cb1a9c2ff56a2fd438
ergonomically typechecking datatype tuple contents
As per [this thread in `#engine` on the pantsbuild slack a few days ago](https://pantsbuild.slack.com/archives/C0D7TNJHL/p1544557163035200):

> @illicitonion: Right now we can check that a field is a tuple, but that's less useful documentation than `tuple<HydratedTarget>`
> @cosmicexplorer: there's a [comment I left in `Collection.of()`](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L412) to type-check its contents somehow, let me look at that for a sec

There are two ways I was thinking about this in the default values for datatypes diff (#6374) (which does some horrible things that we shouldn't do here, or yet):

1. Where we check types in ctors, use [`satisfied_by`](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L305), and allow `TypeConstraint`s to do arbitrary checking there, with the knowledge that it runs synchronously. I don't think this is a truly horrible idea, but it is a pretty general construction that may be hard to remove later.
2. Expand what we allow for `obj_type` in [`satisfied_by_type`](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L313) (for example) to also allow an instance of some other specific class which represents a collection (also not a bad idea, less scary and probably cleaner -- **I think we should do this**).

If we do (2), we may want to [do what we've done with the datatype `__eq__()` method and have a canary](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L67) so `satisfied_by()` can't be overridden (which gives us the ability to do (1) later, if we want).

One initial implementation might be: we add a `@classproperty` to datatypes which returns a tuple of types for the datatype fields, or, instead of a type, a field may be associated with an object that represents a parameterized collection. [This is essentially what we are informally doing in the type-checked `__new__()` method already](https://github.com/pantsbuild/pants/blob/287626443c7a1928c5c41b9c01da60ec3404db76/src/python/pants/util/objects.py#L75) -- the only addition would be to edit all the `TypeConstraint`s that currently exist (thankfully all are in `objects.py`) to support checking against a `type`, or a new collection type, which might look something like:

```python
class TypedTuple(object):
  def __init__(self, element_type):
    assert(isinstance(element_type, type))
    self.element_type = element_type
```

And `TypeConstraint`s would take care of checking the elements of the tuple -- this could be made simple(r) with some helper method in the base `TypeConstraint` class.

*Note*: only tuples are immediately hashable, and so for use in the engine, we would want these to be tuples instead of lists -- I think we can absolutely expand this later to include unordered collections if that becomes useful/necessary.

Starting off with `satisfied_by_type()` as described in (2) seems to be the narrowest and most hygienic scope, as `validate_satisfied_by()` delegates to it eventually anyway.
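To make option (2) slightly more concrete, here is a minimal sketch (not part of the original issue or of the final patch) of how a collection-aware constraint could reuse the existing `Exactly` and `TypeConstraintError` machinery from `objects.py`; the `TypedTuple` name and its `validate` method are hypothetical illustrations only:

```python
from pants.util.objects import Exactly, TypeConstraintError


class TypedTuple(object):
  """Hypothetical: a constraint-like wrapper that checks a tuple and each of its elements."""

  def __init__(self, *element_types):
    # Delegate the per-element check to an existing TypeConstraint subclass.
    self._element_constraint = Exactly(*element_types)

  def validate(self, obj):
    if not isinstance(obj, tuple):
      raise TypeConstraintError(
        "value {!r} (with type {!r}) must be a tuple".format(obj, type(obj).__name__))
    for element in obj:
      # Raises TypeConstraintError if any element has the wrong type.
      self._element_constraint.validate_satisfied_by(element)
    return obj
```

A field declaration such as `('dependencies', TypedTuple(HydratedTarget))` could then both document and enforce the element type, which is roughly the direction the `TypedCollection` introduced in the patch further below takes.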
For fields like `dependencies` which are a convention shared by many datatypes (and are expected in the Rust code which unpacks many python objects), we would probably want to make this even more ergonomic somehow -- that is a not-quite-orthogonal but not-quite-parallel concern.
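As a rough illustration of the `dependencies` convention mentioned above (the `Widget`/`Widgets` names are placeholders, not from the codebase): today the field can only be declared as a bare `tuple`, whereas a parameterized declaration would also record the element type.

```python
from pants.util.objects import datatype


class Widget(object):
  """Placeholder element type for the example."""


# What is expressible today: the field is checked to be a tuple, but its contents are not.
class Widgets(datatype([('dependencies', tuple)])):
  pass


widgets = Widgets(dependencies=(Widget(), Widget()))

# The more ergonomic declaration discussed above would look something like
# (using a hypothetical TypedTuple-style constraint as in the sketch earlier):
#   class Widgets(datatype([('dependencies', TypedTuple(Widget))])):
#     pass
```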
2019-01-20T03:12:29Z
<patch> diff --git a/src/python/pants/engine/addressable.py b/src/python/pants/engine/addressable.py --- a/src/python/pants/engine/addressable.py +++ b/src/python/pants/engine/addressable.py @@ -11,9 +11,9 @@ from future.utils import string_types from pants.build_graph.address import Address, BuildFileAddress -from pants.engine.objects import Resolvable, Serializable +from pants.engine.objects import Collection, Resolvable, Serializable from pants.util.collections_abc_backport import MutableMapping, MutableSequence -from pants.util.objects import Collection, TypeConstraintError +from pants.util.objects import TypeConstraintError Addresses = Collection.of(Address) diff --git a/src/python/pants/engine/fs.py b/src/python/pants/engine/fs.py --- a/src/python/pants/engine/fs.py +++ b/src/python/pants/engine/fs.py @@ -7,10 +7,11 @@ from future.utils import binary_type, text_type from pants.base.project_tree import Dir, File +from pants.engine.objects import Collection from pants.engine.rules import RootRule from pants.option.custom_types import GlobExpansionConjunction from pants.option.global_options import GlobMatchErrorBehavior -from pants.util.objects import Collection, datatype +from pants.util.objects import datatype class FileContent(datatype([('path', text_type), ('content', binary_type)])): diff --git a/src/python/pants/engine/legacy/graph.py b/src/python/pants/engine/legacy/graph.py --- a/src/python/pants/engine/legacy/graph.py +++ b/src/python/pants/engine/legacy/graph.py @@ -26,13 +26,14 @@ from pants.engine.legacy.address_mapper import LegacyAddressMapper from pants.engine.legacy.structs import BundleAdaptor, BundlesField, SourcesField, TargetAdaptor from pants.engine.mapper import AddressMapper +from pants.engine.objects import Collection from pants.engine.parser import SymbolTable, TargetAdaptorContainer from pants.engine.rules import RootRule, rule from pants.engine.selectors import Get, Select from pants.option.global_options import GlobMatchErrorBehavior from pants.source.filespec import any_matches_filespec from pants.source.wrapped_globs import EagerFilesetWithSpec, FilesetRelPathWrapper -from pants.util.objects import Collection, datatype +from pants.util.objects import datatype logger = logging.getLogger(__name__) diff --git a/src/python/pants/engine/objects.py b/src/python/pants/engine/objects.py --- a/src/python/pants/engine/objects.py +++ b/src/python/pants/engine/objects.py @@ -5,10 +5,16 @@ from __future__ import absolute_import, division, print_function, unicode_literals import inspect +import sys from abc import abstractmethod, abstractproperty +from builtins import object from collections import namedtuple +from future.utils import PY2 + +from pants.util.memo import memoized_classmethod from pants.util.meta import AbstractClass +from pants.util.objects import Exactly, TypedCollection, datatype class SerializationError(Exception): @@ -146,3 +152,38 @@ def validate(self): :raises: :class:`ValidationError` if this object is invalid. """ + + +class Collection(object): + """Constructs classes representing collections of objects of a particular type. + + The produced class will expose its values under a field named dependencies - this is a stable API + which may be consumed e.g. over FFI from the engine. + + Python consumers of a Collection should prefer to use its standard iteration API. + + Note that elements of a Collection are type-checked upon construction. 
+ """ + + @memoized_classmethod + def of(cls, *element_types): + union = '|'.join(element_type.__name__ for element_type in element_types) + type_name = '{}.of({})'.format(cls.__name__, union) + if PY2: + type_name = type_name.encode('utf-8') + type_checked_collection_class = datatype([ + # Create a datatype with a single field 'dependencies' which is type-checked on construction + # to be a collection containing elements of only the exact `element_types` specified. + ('dependencies', TypedCollection(Exactly(*element_types))) + ], superclass_name=cls.__name__) + supertypes = (cls, type_checked_collection_class) + properties = {'element_types': element_types} + collection_of_type = type(type_name, supertypes, properties) + + # Expose the custom class type at the module level to be pickle compatible. + setattr(sys.modules[cls.__module__], type_name, collection_of_type) + + return collection_of_type + + def __iter__(self): + return iter(self.dependencies) diff --git a/src/python/pants/engine/scheduler.py b/src/python/pants/engine/scheduler.py --- a/src/python/pants/engine/scheduler.py +++ b/src/python/pants/engine/scheduler.py @@ -19,12 +19,13 @@ from pants.engine.isolated_process import ExecuteProcessRequest, FallibleExecuteProcessResult from pants.engine.native import Function, TypeConstraint, TypeId from pants.engine.nodes import Return, Throw +from pants.engine.objects import Collection from pants.engine.rules import RuleIndex, SingletonRule, TaskRule from pants.engine.selectors import Params, Select, constraint_for from pants.rules.core.exceptions import GracefulTerminationException from pants.util.contextutil import temporary_file_path from pants.util.dirutil import check_no_overlapping_paths -from pants.util.objects import Collection, datatype +from pants.util.objects import datatype from pants.util.strutil import pluralize diff --git a/src/python/pants/util/objects.py b/src/python/pants/util/objects.py --- a/src/python/pants/util/objects.py +++ b/src/python/pants/util/objects.py @@ -4,17 +4,15 @@ from __future__ import absolute_import, division, print_function, unicode_literals -import sys from abc import abstractmethod -from builtins import object, zip +from builtins import zip from collections import namedtuple -from future.utils import PY2 from twitter.common.collections import OrderedSet -from pants.util.collections_abc_backport import OrderedDict -from pants.util.memo import memoized, memoized_classproperty -from pants.util.meta import AbstractClass +from pants.util.collections_abc_backport import Iterable, OrderedDict +from pants.util.memo import memoized_classproperty +from pants.util.meta import AbstractClass, classproperty def datatype(field_decls, superclass_name=None, **kwargs): @@ -266,6 +264,7 @@ def __init__(self, type_name, msg, *args, **kwargs): type_name, formatted_msg, *args, **kwargs) +# TODO: make these members of the `TypeConstraint` class! class TypeConstraintError(TypeError): """Indicates a :class:`TypeConstraint` violation.""" @@ -273,43 +272,99 @@ class TypeConstraintError(TypeError): class TypeConstraint(AbstractClass): """Represents a type constraint. - Not intended for direct use; instead, use one of :class:`SuperclassesOf`, :class:`Exact` or + Not intended for direct use; instead, use one of :class:`SuperclassesOf`, :class:`Exactly` or :class:`SubclassesOf`. """ - def __init__(self, *types, **kwargs): + def __init__(self, description): """Creates a type constraint centered around the given types. 
The type constraint is satisfied as a whole if satisfied for at least one of the given types. - :param type *types: The focus of this type constraint. - :param str description: A description for this constraint if the list of types is too long. + :param str description: A concise, readable description of what the type constraint represents. + Used directly as the __str__ implementation. """ + self._description = description + + @abstractmethod + def satisfied_by(self, obj): + """Return `True` if the given object satisfies this type constraint. + + :rtype: bool + """ + + def make_type_constraint_error(self, obj, constraint): + return TypeConstraintError( + "value {!r} (with type {!r}) must satisfy this type constraint: {}." + .format(obj, type(obj).__name__, constraint)) + + # TODO: disallow overriding this method with some form of mixin/decorator along with datatype + # __eq__! + def validate_satisfied_by(self, obj): + """Return `obj` if the object satisfies this type constraint, or raise. + + :raises: `TypeConstraintError` if `obj` does not satisfy the constraint. + """ + + if self.satisfied_by(obj): + return obj + + raise self.make_type_constraint_error(obj, self) + + def __ne__(self, other): + return not (self == other) + + def __str__(self): + return self._description + + +class TypeOnlyConstraint(TypeConstraint): + """A `TypeConstraint` predicated only on the object's type. + + `TypeConstraint` subclasses may override `.satisfied_by()` to perform arbitrary validation on the + object itself -- however, this class implements `.satisfied_by()` with a guarantee that it will + only act on the object's `type` via `.satisfied_by_type()`. This kind of type checking is faster + and easier to understand than the more complex validation allowed by `.satisfied_by()`. + """ + + # TODO: make an @abstract_classproperty decorator to do this boilerplate! + @classproperty + def _variance_symbol(cls): + """This is propagated to the the `TypeConstraint` constructor.""" + raise NotImplementedError('{} must implement the _variance_symbol classproperty!' + .format(cls.__name__)) + + def __init__(self, *types): + """Creates a type constraint based on some logic to match the given types. + + NB: A `TypeOnlyConstraint` implementation should ensure that the type constraint is satisfied as + a whole if satisfied for at least one of the given `types`. + + :param type *types: The types this constraint will match in some way. + """ + if not types: raise ValueError('Must supply at least one type') if any(not isinstance(t, type) for t in types): raise TypeError('Supplied types must be types. {!r}'.format(types)) - # NB: `types` is converted to tuple here because self.types's docstring says - # it returns a tuple. Does it matter what type this field is? + if len(types) == 1: + type_list = types[0].__name__ + else: + type_list = ' or '.join(t.__name__ for t in types) + description = '{}({})'.format(type(self).__name__, type_list) + + super(TypeOnlyConstraint, self).__init__(description=description) + + # NB: This is made into a tuple so that we can use self._types in issubclass() and others! self._types = tuple(types) - self._desc = kwargs.get('description', None) + # TODO(#7114): remove this after the engine is converted to use `TypeId` instead of + # `TypeConstraint`! @property def types(self): - """Return the subject types of this type constraint. - - :type: tuple of type - """ return self._types - def satisfied_by(self, obj): - """Return `True` if the given object satisfies this type constraint. 
- - :rtype: bool - """ - return self.satisfied_by_type(type(obj)) - @abstractmethod def satisfied_by_type(self, obj_type): """Return `True` if the given object satisfies this type constraint. @@ -317,18 +372,8 @@ def satisfied_by_type(self, obj_type): :rtype: bool """ - def validate_satisfied_by(self, obj): - """Return `obj` if the object satisfies this type constraint, or raise. - - :raises: `TypeConstraintError` if `obj` does not satisfy the constraint. - """ - - if self.satisfied_by(obj): - return obj - - raise TypeConstraintError( - "value {!r} (with type {!r}) must satisfy this type constraint: {!r}." - .format(obj, type(obj).__name__, self)) + def satisfied_by(self, obj): + return self.satisfied_by_type(type(obj)) def __hash__(self): return hash((type(self), self._types)) @@ -336,44 +381,23 @@ def __hash__(self): def __eq__(self, other): return type(self) == type(other) and self._types == other._types - def __ne__(self, other): - return not (self == other) - - def __str__(self): - if self._desc: - constrained_type = '({})'.format(self._desc) - else: - if len(self._types) == 1: - constrained_type = self._types[0].__name__ - else: - constrained_type = '({})'.format(', '.join(t.__name__ for t in self._types)) - return '{variance_symbol}{constrained_type}'.format(variance_symbol=self._variance_symbol, - constrained_type=constrained_type) - def __repr__(self): - if self._desc: - constrained_type = self._desc - else: - constrained_type = ', '.join(t.__name__ for t in self._types) + constrained_type = ', '.join(t.__name__ for t in self._types) return ('{type_constraint_type}({constrained_type})' .format(type_constraint_type=type(self).__name__, - constrained_type=constrained_type)) + constrained_type=constrained_type)) -class SuperclassesOf(TypeConstraint): +class SuperclassesOf(TypeOnlyConstraint): """Objects of the exact type as well as any super-types are allowed.""" - _variance_symbol = '-' - def satisfied_by_type(self, obj_type): return any(issubclass(t, obj_type) for t in self._types) -class Exactly(TypeConstraint): +class Exactly(TypeOnlyConstraint): """Only objects of the exact type are allowed.""" - _variance_symbol = '=' - def satisfied_by_type(self, obj_type): return obj_type in self._types @@ -384,41 +408,66 @@ def graph_str(self): return repr(self) -class SubclassesOf(TypeConstraint): +class SubclassesOf(TypeOnlyConstraint): """Objects of the exact type as well as any sub-types are allowed.""" - _variance_symbol = '+' - def satisfied_by_type(self, obj_type): return issubclass(obj_type, self._types) -class Collection(object): - """Constructs classes representing collections of objects of a particular type. +class TypedCollection(TypeConstraint): + """A `TypeConstraint` which accepts a TypeOnlyConstraint and validates a collection.""" - The produced class will expose its values under a field named dependencies - this is a stable API - which may be consumed e.g. over FFI from the engine. + _iterable_constraint = SubclassesOf(Iterable) - Python consumers of a Collection should prefer to use its standard iteration API. - """ - # TODO: could we check that the input is iterable in the ctor? - - @classmethod - @memoized - def of(cls, *element_types): - union = '|'.join(element_type.__name__ for element_type in element_types) - type_name = '{}.of({})'.format(cls.__name__, union) - if PY2: - type_name = type_name.encode('utf-8') - # TODO: could we allow type checking in the datatype() invocation here? 
- supertypes = (cls, datatype(['dependencies'], superclass_name='Collection')) - properties = {'element_types': element_types} - collection_of_type = type(type_name, supertypes, properties) - - # Expose the custom class type at the module level to be pickle compatible. - setattr(sys.modules[cls.__module__], type_name, collection_of_type) - - return collection_of_type - - def __iter__(self): - return iter(self.dependencies) + def __init__(self, constraint): + """Create a `TypeConstraint` which validates each member of a collection with `constraint`. + + :param TypeOnlyConstraint constraint: the `TypeConstraint` to apply to each element. This is + currently required to be a `TypeOnlyConstraint` to avoid + complex prototypal type relationships. + """ + + if not isinstance(constraint, TypeOnlyConstraint): + raise TypeError("constraint for collection must be a {}! was: {}" + .format(TypeOnlyConstraint.__name__, constraint)) + + description = '{}({})'.format(type(self).__name__, constraint) + + self._constraint = constraint + + super(TypedCollection, self).__init__(description=description) + + # TODO: consider making this a private method of TypeConstraint, as it now duplicates the logic in + # self.validate_satisfied_by()! + def satisfied_by(self, obj): + if self._iterable_constraint.satisfied_by(obj): + return all(self._constraint.satisfied_by(el) for el in obj) + return False + + def make_collection_type_constraint_error(self, base_obj, el): + base_error = self.make_type_constraint_error(el, self._constraint) + return TypeConstraintError("in wrapped constraint {} matching iterable object {}: {}" + .format(self, base_obj, base_error)) + + def validate_satisfied_by(self, obj): + if self._iterable_constraint.satisfied_by(obj): + for el in obj: + if not self._constraint.satisfied_by(el): + raise self.make_collection_type_constraint_error(obj, el) + return obj + + base_iterable_error = self.make_type_constraint_error(obj, self._iterable_constraint) + raise TypeConstraintError( + "in wrapped constraint {}: {}".format(self, base_iterable_error)) + + def __hash__(self): + return hash((type(self), self._constraint)) + + def __eq__(self, other): + return type(self) == type(other) and self._constraint == other._constraint + + def __repr__(self): + return ('{type_constraint_type}({constraint!r})' + .format(type_constraint_type=type(self).__name__, + constraint=self._constraint)) </patch>
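For orientation, a small usage sketch (not part of the PR itself) of the behaviour the patch above introduces: `Collection.of` now produces a datatype whose `dependencies` field is validated element-by-element via `TypedCollection(Exactly(...))`. `Thing` is a made-up placeholder type:

```python
from pants.engine.objects import Collection


class Thing(object):
  """Placeholder element type for the example."""


Things = Collection.of(Thing)

# Well-typed elements construct fine and the collection iterates as before.
things = Things(dependencies=(Thing(), Thing()))
assert len(list(things)) == 2

# A wrong-typed element now fails at construction time with a TypeCheckError
# (a TypeError subclass) that names the offending value.
try:
  Things(dependencies=(Thing(), 'not-a-Thing'))
except TypeError as e:
  print(e)
```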
[]
[]
googleapis__google-cloud-python-2390
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Make sure our 2 years old license headers are in line with Came up because of #2370. Related to #2371. To paraphrase @rimey: The Apache license header given in our internal instructions differs from the one used in this project. /cc @rimey @jgeewax </issue> <code> [start of README.rst] 1 Google Cloud Python Client 2 ========================== 3 4 Python idiomatic client for `Google Cloud Platform`_ services. 5 6 .. _Google Cloud Platform: https://cloud.google.com/ 7 8 |pypi| |build| |appveyor| |coverage| |versions| 9 10 - `Homepage`_ 11 - `API Documentation`_ 12 13 .. _Homepage: https://googlecloudplatform.github.io/google-cloud-python/ 14 .. _API Documentation: http://googlecloudplatform.github.io/google-cloud-python/ 15 16 This client supports the following Google Cloud Platform services: 17 18 - `Google Cloud Datastore`_ 19 - `Google Cloud Storage`_ 20 - `Google Cloud Pub/Sub`_ 21 - `Google BigQuery`_ 22 - `Google Cloud Resource Manager`_ 23 - `Google Stackdriver Logging`_ 24 - `Google Stackdriver Monitoring`_ 25 26 .. _Google Cloud Datastore: https://github.com/GoogleCloudPlatform/google-cloud-python#google-cloud-datastore 27 .. _Google Cloud Storage: https://github.com/GoogleCloudPlatform/google-cloud-python#google-cloud-storage 28 .. _Google Cloud Pub/Sub: https://github.com/GoogleCloudPlatform/google-cloud-python#google-cloud-pubsub 29 .. _Google BigQuery: https://github.com/GoogleCloudPlatform/google-cloud-python#google-bigquery 30 .. _Google Cloud Resource Manager: https://github.com/GoogleCloudPlatform/google-cloud-python#google-cloud-resource-manager 31 .. _Google Stackdriver Logging: https://github.com/GoogleCloudPlatform/google-cloud-python#google-stackdriver-logging 32 .. _Google Stackdriver Monitoring: https://github.com/GoogleCloudPlatform/google-cloud-python#google-stackdriver-monitoring 33 34 If you need support for other Google APIs, check out the 35 `Google APIs Python Client library`_. 36 37 .. _Google APIs Python Client library: https://github.com/google/google-api-python-client 38 39 Quick Start 40 ----------- 41 42 :: 43 44 $ pip install --upgrade google-cloud 45 46 Example Applications 47 -------------------- 48 49 - `getting-started-python`_ - A sample and `tutorial`_ that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine. 50 - `google-cloud-python-expenses-demo`_ - A sample expenses demo using Cloud Datastore and Cloud Storage 51 52 .. _getting-started-python: https://github.com/GoogleCloudPlatform/getting-started-python 53 .. _tutorial: https://cloud.google.com/python 54 .. _google-cloud-python-expenses-demo: https://github.com/GoogleCloudPlatform/google-cloud-python-expenses-demo 55 56 Authentication 57 -------------- 58 59 With ``google-cloud-python`` we try to make authentication as painless as possible. 60 Check out the `Authentication section`_ in our documentation to learn more. 61 You may also find the `authentication document`_ shared by all the 62 ``google-cloud-*`` libraries to be helpful. 63 64 .. _Authentication section: http://google-cloud-python.readthedocs.io/en/latest/google-cloud-auth.html 65 .. 
_authentication document: https://github.com/GoogleCloudPlatform/gcloud-common/tree/master/authentication 66 67 Google Cloud Datastore 68 ---------------------- 69 70 Google `Cloud Datastore`_ (`Datastore API docs`_) is a fully managed, schemaless 71 database for storing non-relational data. Cloud Datastore automatically scales 72 with your users and supports ACID transactions, high availability of reads and 73 writes, strong consistency for reads and ancestor queries, and eventual 74 consistency for all other queries. 75 76 .. _Cloud Datastore: https://cloud.google.com/datastore/docs 77 .. _Datastore API docs: https://cloud.google.com/datastore/docs/ 78 79 See the ``google-cloud-python`` API `datastore documentation`_ to learn how to 80 interact with the Cloud Datastore using this Client Library. 81 82 .. _datastore documentation: https://googlecloudplatform.github.io/google-cloud-python/stable/datastore-client.html 83 84 See the `official Google Cloud Datastore documentation`_ for more details on how 85 to activate Cloud Datastore for your project. 86 87 .. _official Google Cloud Datastore documentation: https://cloud.google.com/datastore/docs/activate 88 89 .. code:: python 90 91 from google.cloud import datastore 92 # Create, populate and persist an entity 93 entity = datastore.Entity(key=datastore.Key('EntityKind')) 94 entity.update({ 95 'foo': u'bar', 96 'baz': 1337, 97 'qux': False, 98 }) 99 # Then query for entities 100 query = datastore.Query(kind='EntityKind') 101 for result in query.fetch(): 102 print result 103 104 Google Cloud Storage 105 -------------------- 106 107 Google `Cloud Storage`_ (`Storage API docs`_) allows you to store data on Google 108 infrastructure with very high reliability, performance and availability, and can 109 be used to distribute large data objects to users via direct download. 110 111 .. _Cloud Storage: https://cloud.google.com/storage/docs 112 .. _Storage API docs: https://cloud.google.com/storage/docs/json_api/v1 113 114 See the ``google-cloud-python`` API `storage documentation`_ to learn how to connect 115 to Cloud Storage using this Client Library. 116 117 .. _storage documentation: https://googlecloudplatform.github.io/google-cloud-python/stable/storage-client.html 118 119 You need to create a Google Cloud Storage bucket to use this client library. 120 Follow along with the `official Google Cloud Storage documentation`_ to learn 121 how to create a bucket. 122 123 .. _official Google Cloud Storage documentation: https://cloud.google.com/storage/docs/cloud-console#_creatingbuckets 124 125 .. code:: python 126 127 from google.cloud import storage 128 client = storage.Client() 129 bucket = client.get_bucket('bucket-id-here') 130 # Then do other things... 131 blob = bucket.get_blob('remote/path/to/file.txt') 132 print blob.download_as_string() 133 blob.upload_from_string('New contents!') 134 blob2 = bucket.blob('remote/path/storage.txt') 135 blob2.upload_from_filename(filename='/local/path.txt') 136 137 Google Cloud Pub/Sub 138 -------------------- 139 140 Google `Cloud Pub/Sub`_ (`Pub/Sub API docs`_) is designed to provide reliable, 141 many-to-many, asynchronous messaging between applications. Publisher 142 applications can send messages to a ``topic`` and other applications can 143 subscribe to that topic to receive the messages. By decoupling senders and 144 receivers, Google Cloud Pub/Sub allows developers to communicate between 145 independently written applications. 146 147 .. _Cloud Pub/Sub: https://cloud.google.com/pubsub/docs 148 .. 
_Pub/Sub API docs: https://cloud.google.com/pubsub/reference/rest/ 149 150 See the ``google-cloud-python`` API `Pub/Sub documentation`_ to learn how to connect 151 to Cloud Pub/Sub using this Client Library. 152 153 .. _Pub/Sub documentation: https://googlecloudplatform.github.io/google-cloud-python/stable/pubsub-usage.html 154 155 To get started with this API, you'll need to create 156 157 .. code:: python 158 159 from google.cloud import pubsub 160 161 client = pubsub.Client() 162 topic = client.topic('topic_name') 163 topic.create() 164 165 topic.publish('this is the message_payload', 166 attr1='value1', attr2='value2') 167 168 Google BigQuery 169 --------------- 170 171 Querying massive datasets can be time consuming and expensive without the 172 right hardware and infrastructure. Google `BigQuery`_ (`BigQuery API docs`_) 173 solves this problem by enabling super-fast, SQL-like queries against 174 append-only tables, using the processing power of Google's infrastructure. 175 176 .. _BigQuery: https://cloud.google.com/bigquery/what-is-bigquery 177 .. _BigQuery API docs: https://cloud.google.com/bigquery/docs/reference/v2/ 178 179 This package is still being implemented, but it is almost complete! 180 181 Load data from CSV 182 ~~~~~~~~~~~~~~~~~~ 183 184 .. code:: python 185 186 import csv 187 188 from google.cloud import bigquery 189 from google.cloud.bigquery import SchemaField 190 191 client = bigquery.Client() 192 193 dataset = client.dataset('dataset_name') 194 dataset.create() # API request 195 196 SCHEMA = [ 197 SchemaField('full_name', 'STRING', mode='required'), 198 SchemaField('age', 'INTEGER', mode='required'), 199 ] 200 table = dataset.table('table_name', SCHEMA) 201 table.create() 202 203 with open('csv_file', 'rb') as readable: 204 table.upload_from_file( 205 readable, source_format='CSV', skip_leading_rows=1) 206 207 Perform a synchronous query 208 ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 209 210 .. code:: python 211 212 # Perform a synchronous query. 213 QUERY = ( 214 'SELECT name FROM [bigquery-public-data:usa_names.usa_1910_2013] ' 215 'WHERE state = "TX"') 216 query = client.run_sync_query('%s LIMIT 100' % QUERY) 217 query.timeout_ms = TIMEOUT_MS 218 query.run() 219 220 for row in query.rows: 221 print row 222 223 224 See the ``google-cloud-python`` API `BigQuery documentation`_ to learn how to connect 225 to BigQuery using this Client Library. 226 227 .. _BigQuery documentation: https://googlecloudplatform.github.io/google-cloud-python/stable/bigquery-usage.html 228 229 Google Cloud Resource Manager 230 ----------------------------- 231 232 The Cloud `Resource Manager`_ API (`Resource Manager API docs`_) provides 233 methods that you can use to programmatically manage your projects in the 234 Google Cloud Platform. 235 236 .. _Resource Manager: https://cloud.google.com/resource-manager/ 237 .. _Resource Manager API docs: https://cloud.google.com/resource-manager/reference/rest/ 238 239 See the ``google-cloud-python`` API `Resource Manager documentation`_ to learn how to 240 manage projects using this Client Library. 241 242 .. _Resource Manager documentation: https://googlecloudplatform.github.io/google-cloud-python/stable/resource-manager-api.html 243 244 Google Stackdriver Logging 245 -------------------------- 246 247 `Stackdriver Logging`_ API (`Logging API docs`_) allows you to store, search, 248 analyze, monitor, and alert on log data and events from Google Cloud Platform. 249 250 .. _Stackdriver Logging: https://cloud.google.com/logging/ 251 .. 
_Logging API docs: https://cloud.google.com/logging/docs/ 252 253 .. code:: python 254 255 from google.cloud import logging 256 client = logging.Client() 257 logger = client.logger('log_name') 258 logger.log_text("A simple entry") # API call 259 260 Example of fetching entries: 261 262 .. code:: python 263 264 entries, token = logger.list_entries() 265 for entry in entries: 266 print entry.payload 267 268 See the ``google-cloud-python`` API `logging documentation`_ to learn how to connect 269 to Stackdriver Logging using this Client Library. 270 271 .. _logging documentation: https://googlecloudplatform.github.io/google-cloud-python/stable/logging-usage.html 272 273 Google Stackdriver Monitoring 274 ----------------------------- 275 276 `Stackdriver Monitoring`_ (`Monitoring API docs`_) collects metrics, 277 events, and metadata from Google Cloud Platform, Amazon Web Services (AWS), 278 hosted uptime probes, application instrumentation, and a variety of common 279 application components including Cassandra, Nginx, Apache Web Server, 280 Elasticsearch and many others. Stackdriver ingests that data and generates 281 insights via dashboards, charts, and alerts. 282 283 This package currently supports all Monitoring API operations other than 284 writing custom metrics. 285 286 .. _Stackdriver Monitoring: https://cloud.google.com/monitoring/ 287 .. _Monitoring API docs: https://cloud.google.com/monitoring/api/ref_v3/rest/ 288 289 List available metric types: 290 291 .. code:: python 292 293 from google.cloud import monitoring 294 client = monitoring.Client() 295 for descriptor in client.list_metric_descriptors(): 296 print(descriptor.type) 297 298 Display CPU utilization across your GCE instances during the last five minutes: 299 300 .. code:: python 301 302 metric = 'compute.googleapis.com/instance/cpu/utilization' 303 query = client.query(metric, minutes=5) 304 print(query.as_dataframe()) 305 306 See the ``google-cloud-python`` API `monitoring documentation`_ to learn how to connect 307 to Stackdriver Monitoring using this Client Library. 308 309 .. _monitoring documentation: https://googlecloudplatform.github.io/google-cloud-python/stable/monitoring-usage.html 310 311 Contributing 312 ------------ 313 314 Contributions to this library are always welcome and highly encouraged. 315 316 See `CONTRIBUTING`_ for more information on how to get started. 317 318 .. _CONTRIBUTING: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/CONTRIBUTING.rst 319 320 License 321 ------- 322 323 Apache 2.0 - See `LICENSE`_ for more information. 324 325 .. _LICENSE: https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/LICENSE 326 327 .. |build| image:: https://travis-ci.org/GoogleCloudPlatform/google-cloud-python.svg?branch=master 328 :target: https://travis-ci.org/GoogleCloudPlatform/google-cloud-python 329 .. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/googlecloudplatform/google-cloud-python?branch=master&svg=true 330 :target: https://ci.appveyor.com/project/GoogleCloudPlatform/google-cloud-python 331 .. |coverage| image:: https://coveralls.io/repos/GoogleCloudPlatform/google-cloud-python/badge.png?branch=master 332 :target: https://coveralls.io/r/GoogleCloudPlatform/google-cloud-python?branch=master 333 .. |pypi| image:: https://img.shields.io/pypi/v/google-cloud.svg 334 :target: https://pypi.python.org/pypi/google-cloud 335 .. 
|versions| image:: https://img.shields.io/pypi/pyversions/google-cloud.svg 336 :target: https://pypi.python.org/pypi/google-cloud 337 [end of README.rst] [start of google/cloud/_helpers.py] 1 # Copyright 2014 Google Inc. All rights reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 """Shared helpers for Google Cloud packages. 16 17 This module is not part of the public API surface. 18 """ 19 20 import calendar 21 import datetime 22 import json 23 import os 24 import re 25 import socket 26 from threading import local as Local 27 28 from google.protobuf import timestamp_pb2 29 try: 30 from google.appengine.api import app_identity 31 except ImportError: 32 app_identity = None 33 try: 34 import grpc 35 except ImportError: # pragma: NO COVER 36 grpc = None 37 import six 38 from six.moves import http_client 39 from six.moves import configparser 40 41 # pylint: disable=ungrouped-imports 42 from google.cloud.environment_vars import PROJECT 43 from google.cloud.environment_vars import CREDENTIALS 44 # pylint: enable=ungrouped-imports 45 46 47 _NOW = datetime.datetime.utcnow # To be replaced by tests. 48 _RFC3339_MICROS = '%Y-%m-%dT%H:%M:%S.%fZ' 49 _RFC3339_NO_FRACTION = '%Y-%m-%dT%H:%M:%S' 50 # datetime.strptime cannot handle nanosecond precision: parse w/ regex 51 _RFC3339_NANOS = re.compile(r""" 52 (?P<no_fraction> 53 \d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2} # YYYY-MM-DDTHH:MM:SS 54 ) 55 \. # decimal point 56 (?P<nanos>\d{1,9}) # nanoseconds, maybe truncated 57 Z # Zulu 58 """, re.VERBOSE) 59 # NOTE: Catching this ImportError is a workaround for GAE not supporting the 60 # "pwd" module which is imported lazily when "expanduser" is called. 61 try: 62 _USER_ROOT = os.path.expanduser('~') 63 except ImportError: # pragma: NO COVER 64 _USER_ROOT = None 65 _GCLOUD_CONFIG_FILE = os.path.join( 66 'gcloud', 'configurations', 'config_default') 67 _GCLOUD_CONFIG_SECTION = 'core' 68 _GCLOUD_CONFIG_KEY = 'project' 69 70 71 class _LocalStack(Local): 72 """Manage a thread-local LIFO stack of resources. 73 74 Intended for use in :class:`google.cloud.datastore.batch.Batch.__enter__`, 75 :class:`google.cloud.storage.batch.Batch.__enter__`, etc. 76 """ 77 def __init__(self): 78 super(_LocalStack, self).__init__() 79 self._stack = [] 80 81 def __iter__(self): 82 """Iterate the stack in LIFO order. 83 """ 84 return iter(reversed(self._stack)) 85 86 def push(self, resource): 87 """Push a resource onto our stack. 88 """ 89 self._stack.append(resource) 90 91 def pop(self): 92 """Pop a resource from our stack. 93 94 :rtype: object 95 :returns: the top-most resource, after removing it. 96 :raises IndexError: if the stack is empty. 97 """ 98 return self._stack.pop() 99 100 @property 101 def top(self): 102 """Get the top-most resource 103 104 :rtype: object 105 :returns: the top-most item, or None if the stack is empty. 106 """ 107 if len(self._stack) > 0: 108 return self._stack[-1] 109 110 111 class _UTC(datetime.tzinfo): 112 """Basic UTC implementation. 
113 114 Implementing a small surface area to avoid depending on ``pytz``. 115 """ 116 117 _dst = datetime.timedelta(0) 118 _tzname = 'UTC' 119 _utcoffset = _dst 120 121 def dst(self, dt): # pylint: disable=unused-argument 122 """Daylight savings time offset.""" 123 return self._dst 124 125 def fromutc(self, dt): 126 """Convert a timestamp from (naive) UTC to this timezone.""" 127 if dt.tzinfo is None: 128 return dt.replace(tzinfo=self) 129 return super(_UTC, self).fromutc(dt) 130 131 def tzname(self, dt): # pylint: disable=unused-argument 132 """Get the name of this timezone.""" 133 return self._tzname 134 135 def utcoffset(self, dt): # pylint: disable=unused-argument 136 """UTC offset of this timezone.""" 137 return self._utcoffset 138 139 def __repr__(self): 140 return '<%s>' % (self._tzname,) 141 142 def __str__(self): 143 return self._tzname 144 145 146 def _ensure_tuple_or_list(arg_name, tuple_or_list): 147 """Ensures an input is a tuple or list. 148 149 This effectively reduces the iterable types allowed to a very short 150 whitelist: list and tuple. 151 152 :type arg_name: str 153 :param arg_name: Name of argument to use in error message. 154 155 :type tuple_or_list: sequence of str 156 :param tuple_or_list: Sequence to be verified. 157 158 :rtype: list of str 159 :returns: The ``tuple_or_list`` passed in cast to a ``list``. 160 :raises TypeError: if the ``tuple_or_list`` is not a tuple or list. 161 """ 162 if not isinstance(tuple_or_list, (tuple, list)): 163 raise TypeError('Expected %s to be a tuple or list. ' 164 'Received %r' % (arg_name, tuple_or_list)) 165 return list(tuple_or_list) 166 167 168 def _app_engine_id(): 169 """Gets the App Engine application ID if it can be inferred. 170 171 :rtype: str or ``NoneType`` 172 :returns: App Engine application ID if running in App Engine, 173 else ``None``. 174 """ 175 if app_identity is None: 176 return None 177 178 return app_identity.get_application_id() 179 180 181 def _file_project_id(): 182 """Gets the project ID from the credentials file if one is available. 183 184 :rtype: str or ``NoneType`` 185 :returns: Project ID from JSON credentials file if value exists, 186 else ``None``. 187 """ 188 credentials_file_path = os.getenv(CREDENTIALS) 189 if credentials_file_path: 190 with open(credentials_file_path, 'rb') as credentials_file: 191 credentials_json = credentials_file.read() 192 credentials = json.loads(credentials_json.decode('utf-8')) 193 return credentials.get('project_id') 194 195 196 def _get_nix_config_path(): 197 """Get the ``gcloud`` CLI config path on *nix systems. 198 199 :rtype: str 200 :returns: The filename on a *nix system containing the CLI 201 config file. 202 """ 203 return os.path.join(_USER_ROOT, '.config', _GCLOUD_CONFIG_FILE) 204 205 206 def _get_windows_config_path(): 207 """Get the ``gcloud`` CLI config path on Windows systems. 208 209 :rtype: str 210 :returns: The filename on a Windows system containing the CLI 211 config file. 212 """ 213 appdata_dir = os.getenv('APPDATA', '') 214 return os.path.join(appdata_dir, _GCLOUD_CONFIG_FILE) 215 216 217 def _default_service_project_id(): 218 """Retrieves the project ID from the gcloud command line tool. 219 220 This assumes the ``.config`` directory is stored 221 - in ~/.config on *nix systems 222 - in the %APPDATA% directory on Windows systems 223 224 Additionally, the ${HOME} / "~" directory may not be present on Google 225 App Engine, so this may be conditionally ignored. 
226 227 Files that cannot be opened with configparser are silently ignored; this is 228 designed so that you can specify a list of potential configuration file 229 locations. 230 231 :rtype: str or ``NoneType`` 232 :returns: Project-ID from default configuration file else ``None`` 233 """ 234 search_paths = [] 235 if _USER_ROOT is not None: 236 search_paths.append(_get_nix_config_path()) 237 238 if os.name == 'nt': 239 search_paths.append(_get_windows_config_path()) 240 241 config = configparser.RawConfigParser() 242 config.read(search_paths) 243 244 if config.has_section(_GCLOUD_CONFIG_SECTION): 245 return config.get(_GCLOUD_CONFIG_SECTION, _GCLOUD_CONFIG_KEY) 246 247 248 def _compute_engine_id(): 249 """Gets the Compute Engine project ID if it can be inferred. 250 251 Uses 169.254.169.254 for the metadata server to avoid request 252 latency from DNS lookup. 253 254 See https://cloud.google.com/compute/docs/metadata#metadataserver 255 for information about this IP address. (This IP is also used for 256 Amazon EC2 instances, so the metadata flavor is crucial.) 257 258 See https://github.com/google/oauth2client/issues/93 for context about 259 DNS latency. 260 261 :rtype: str or ``NoneType`` 262 :returns: Compute Engine project ID if the metadata service is available, 263 else ``None``. 264 """ 265 host = '169.254.169.254' 266 uri_path = '/computeMetadata/v1/project/project-id' 267 headers = {'Metadata-Flavor': 'Google'} 268 connection = http_client.HTTPConnection(host, timeout=0.1) 269 270 try: 271 connection.request('GET', uri_path, headers=headers) 272 response = connection.getresponse() 273 if response.status == 200: 274 return response.read() 275 except socket.error: # socket.timeout or socket.error(64, 'Host is down') 276 pass 277 finally: 278 connection.close() 279 280 281 def _get_production_project(): 282 """Gets the production project if it can be inferred.""" 283 return os.getenv(PROJECT) 284 285 286 def _determine_default_project(project=None): 287 """Determine default project ID explicitly or implicitly as fall-back. 288 289 In implicit case, supports three environments. In order of precedence, the 290 implicit environments are: 291 292 * GOOGLE_CLOUD_PROJECT environment variable 293 * GOOGLE_APPLICATION_CREDENTIALS JSON file 294 * Get default service project from 295 ``$ gcloud beta auth application-default login`` 296 * Google App Engine application ID 297 * Google Compute Engine project ID (from metadata server) 298 299 :type project: str 300 :param project: Optional. The project name to use as default. 301 302 :rtype: str or ``NoneType`` 303 :returns: Default project if it can be determined. 304 """ 305 if project is None: 306 project = _get_production_project() 307 308 if project is None: 309 project = _file_project_id() 310 311 if project is None: 312 project = _default_service_project_id() 313 314 if project is None: 315 project = _app_engine_id() 316 317 if project is None: 318 project = _compute_engine_id() 319 320 return project 321 322 323 def _millis(when): 324 """Convert a zone-aware datetime to integer milliseconds. 325 326 :type when: :class:`datetime.datetime` 327 :param when: the datetime to convert 328 329 :rtype: int 330 :returns: milliseconds since epoch for ``when`` 331 """ 332 micros = _microseconds_from_datetime(when) 333 return micros // 1000 334 335 336 def _datetime_from_microseconds(value): 337 """Convert timestamp to datetime, assuming UTC. 
338 339 :type value: float 340 :param value: The timestamp to convert 341 342 :rtype: :class:`datetime.datetime` 343 :returns: The datetime object created from the value. 344 """ 345 return _EPOCH + datetime.timedelta(microseconds=value) 346 347 348 def _microseconds_from_datetime(value): 349 """Convert non-none datetime to microseconds. 350 351 :type value: :class:`datetime.datetime` 352 :param value: The timestamp to convert. 353 354 :rtype: int 355 :returns: The timestamp, in microseconds. 356 """ 357 if not value.tzinfo: 358 value = value.replace(tzinfo=UTC) 359 # Regardless of what timezone is on the value, convert it to UTC. 360 value = value.astimezone(UTC) 361 # Convert the datetime to a microsecond timestamp. 362 return int(calendar.timegm(value.timetuple()) * 1e6) + value.microsecond 363 364 365 def _millis_from_datetime(value): 366 """Convert non-none datetime to timestamp, assuming UTC. 367 368 :type value: :class:`datetime.datetime`, or None 369 :param value: the timestamp 370 371 :rtype: int, or ``NoneType`` 372 :returns: the timestamp, in milliseconds, or None 373 """ 374 if value is not None: 375 return _millis(value) 376 377 378 def _date_from_iso8601_date(value): 379 """Convert a ISO8601 date string to native datetime date 380 381 :type value: str 382 :param value: The date string to convert 383 384 :rtype: :class:`datetime.date` 385 :returns: A datetime date object created from the string 386 387 """ 388 return datetime.datetime.strptime(value, '%Y-%m-%d').date() 389 390 391 def _rfc3339_to_datetime(dt_str): 392 """Convert a microsecond-precision timetamp to a native datetime. 393 394 :type dt_str: str 395 :param dt_str: The string to convert. 396 397 :rtype: :class:`datetime.datetime` 398 :returns: The datetime object created from the string. 399 """ 400 return datetime.datetime.strptime( 401 dt_str, _RFC3339_MICROS).replace(tzinfo=UTC) 402 403 404 def _rfc3339_nanos_to_datetime(dt_str): 405 """Convert a nanosecond-precision timestamp to a native datetime. 406 407 .. note:: 408 409 Python datetimes do not support nanosecond precision; this function 410 therefore truncates such values to microseconds. 411 412 :type dt_str: str 413 :param dt_str: The string to convert. 414 415 :rtype: :class:`datetime.datetime` 416 :returns: The datetime object created from the string. 417 :raises ValueError: If the timestamp does not match the RFC 3339 418 regular expression. 419 """ 420 with_nanos = _RFC3339_NANOS.match(dt_str) 421 if with_nanos is None: 422 raise ValueError( 423 'Timestamp: %r, does not match pattern: %r' % ( 424 dt_str, _RFC3339_NANOS.pattern)) 425 bare_seconds = datetime.datetime.strptime( 426 with_nanos.group('no_fraction'), _RFC3339_NO_FRACTION) 427 fraction = with_nanos.group('nanos') 428 scale = 9 - len(fraction) 429 nanos = int(fraction) * (10 ** scale) 430 micros = nanos // 1000 431 return bare_seconds.replace(microsecond=micros, tzinfo=UTC) 432 433 434 def _datetime_to_rfc3339(value, ignore_zone=True): 435 """Convert a timestamp to a string. 436 437 :type value: :class:`datetime.datetime` 438 :param value: The datetime object to be converted to a string. 439 440 :type ignore_zone: boolean 441 :param ignore_zone: If True, then the timezone (if any) of the datetime 442 object is ignored. 443 444 :rtype: str 445 :returns: The string representing the datetime stamp. 446 """ 447 if not ignore_zone and value.tzinfo is not None: 448 # Convert to UTC and remove the time zone info. 
449 value = value.replace(tzinfo=None) - value.utcoffset() 450 451 return value.strftime(_RFC3339_MICROS) 452 453 454 def _to_bytes(value, encoding='ascii'): 455 """Converts a string value to bytes, if necessary. 456 457 Unfortunately, ``six.b`` is insufficient for this task since in 458 Python2 it does not modify ``unicode`` objects. 459 460 :type value: str / bytes or unicode 461 :param value: The string/bytes value to be converted. 462 463 :type encoding: str 464 :param encoding: The encoding to use to convert unicode to bytes. Defaults 465 to "ascii", which will not allow any characters from 466 ordinals larger than 127. Other useful values are 467 "latin-1", which which will only allows byte ordinals 468 (up to 255) and "utf-8", which will encode any unicode 469 that needs to be. 470 471 :rtype: str / bytes 472 :returns: The original value converted to bytes (if unicode) or as passed 473 in if it started out as bytes. 474 :raises TypeError: if the value could not be converted to bytes. 475 """ 476 result = (value.encode(encoding) 477 if isinstance(value, six.text_type) else value) 478 if isinstance(result, six.binary_type): 479 return result 480 else: 481 raise TypeError('%r could not be converted to bytes' % (value,)) 482 483 484 def _bytes_to_unicode(value): 485 """Converts bytes to a unicode value, if necessary. 486 487 :type value: bytes 488 :param value: bytes value to attempt string conversion on. 489 490 :rtype: str 491 :returns: The original value converted to unicode (if bytes) or as passed 492 in if it started out as unicode. 493 494 :raises ValueError: if the value could not be converted to unicode. 495 """ 496 result = (value.decode('utf-8') 497 if isinstance(value, six.binary_type) else value) 498 if isinstance(result, six.text_type): 499 return result 500 else: 501 raise ValueError('%r could not be converted to unicode' % (value,)) 502 503 504 def _pb_timestamp_to_datetime(timestamp_pb): 505 """Convert a Timestamp protobuf to a datetime object. 506 507 :type timestamp_pb: :class:`google.protobuf.timestamp_pb2.Timestamp` 508 :param timestamp_pb: A Google returned timestamp protobuf. 509 510 :rtype: :class:`datetime.datetime` 511 :returns: A UTC datetime object converted from a protobuf timestamp. 512 """ 513 return ( 514 _EPOCH + 515 datetime.timedelta( 516 seconds=timestamp_pb.seconds, 517 microseconds=(timestamp_pb.nanos / 1000.0), 518 ) 519 ) 520 521 522 def _pb_timestamp_to_rfc3339(timestamp_pb): 523 """Convert a Timestamp protobuf to an RFC 3339 string. 524 525 :type timestamp_pb: :class:`google.protobuf.timestamp_pb2.Timestamp` 526 :param timestamp_pb: A Google returned timestamp protobuf. 527 528 :rtype: string 529 :returns: An RFC 3339 formatted timestamp string. 530 """ 531 timestamp = _pb_timestamp_to_datetime(timestamp_pb) 532 return _datetime_to_rfc3339(timestamp) 533 534 535 def _datetime_to_pb_timestamp(when): 536 """Convert a datetime object to a Timestamp protobuf. 537 538 :type when: :class:`datetime.datetime` 539 :param when: the datetime to convert 540 541 :rtype: :class:`google.protobuf.timestamp_pb2.Timestamp` 542 :returns: A timestamp protobuf corresponding to the object. 543 """ 544 ms_value = _microseconds_from_datetime(when) 545 seconds, micros = divmod(ms_value, 10**6) 546 nanos = micros * 10**3 547 return timestamp_pb2.Timestamp(seconds=seconds, nanos=nanos) 548 549 550 def _name_from_project_path(path, project, template): 551 """Validate a URI path and get the leaf object's name. 
552 553 :type path: str 554 :param path: URI path containing the name. 555 556 :type project: str or NoneType 557 :param project: The project associated with the request. It is 558 included for validation purposes. If passed as None, 559 disables validation. 560 561 :type template: str 562 :param template: Template regex describing the expected form of the path. 563 The regex must have two named groups, 'project' and 564 'name'. 565 566 :rtype: str 567 :returns: Name parsed from ``path``. 568 :raises ValueError: if the ``path`` is ill-formed or if the project from 569 the ``path`` does not agree with the ``project`` 570 passed in. 571 """ 572 if isinstance(template, str): 573 template = re.compile(template) 574 575 match = template.match(path) 576 577 if not match: 578 raise ValueError('path "%s" did not match expected pattern "%s"' % ( 579 path, template.pattern,)) 580 581 if project is not None: 582 found_project = match.group('project') 583 if found_project != project: 584 raise ValueError( 585 'Project from client (%s) should agree with ' 586 'project from resource(%s).' % (project, found_project)) 587 588 return match.group('name') 589 590 591 class MetadataPlugin(object): 592 """Callable class to transform metadata for gRPC requests. 593 594 :type credentials: :class:`oauth2client.client.OAuth2Credentials` 595 :param credentials: The OAuth2 Credentials to use for creating 596 access tokens. 597 """ 598 599 def __init__(self, credentials): 600 self._credentials = credentials 601 602 def __call__(self, unused_context, callback): 603 """Adds authorization header to request metadata. 604 605 :type unused_context: object 606 :param unused_context: A gRPC context which is not needed 607 to modify headers. 608 609 :type callback: callable 610 :param callback: A callback which will use the headers. 611 """ 612 access_token = self._credentials.get_access_token().access_token 613 headers = [ 614 ('authorization', 'Bearer ' + access_token), 615 ] 616 callback(headers, None) 617 618 619 def make_secure_stub(credentials, user_agent, stub_class, host): 620 """Makes a secure stub for an RPC service. 621 622 Uses / depends on gRPC. 623 624 :type credentials: :class:`oauth2client.client.OAuth2Credentials` 625 :param credentials: The OAuth2 Credentials to use for creating 626 access tokens. 627 628 :type user_agent: str 629 :param user_agent: (Optional) The user agent to be used with API requests. 630 631 :type stub_class: type 632 :param stub_class: A gRPC stub type for a given service. 633 634 :type host: str 635 :param host: The host for the service. 636 637 :rtype: object, instance of ``stub_class`` 638 :returns: The stub object used to make gRPC requests to a given API. 639 """ 640 # ssl_channel_credentials() loads root certificates from 641 # `grpc/_adapter/credentials/roots.pem`. 642 transport_creds = grpc.ssl_channel_credentials() 643 custom_metadata_plugin = MetadataPlugin(credentials) 644 auth_creds = grpc.metadata_call_credentials( 645 custom_metadata_plugin, name='google_creds') 646 channel_creds = grpc.composite_channel_credentials( 647 transport_creds, auth_creds) 648 target = '%s:%d' % (host, http_client.HTTPS_PORT) 649 channel_args = ( 650 ('grpc.primary_user_agent', user_agent), 651 ) 652 channel = grpc.secure_channel(target, channel_creds, 653 options=channel_args) 654 return stub_class(channel) 655 656 657 def make_insecure_stub(stub_class, host, port=None): 658 """Makes an insecure stub for an RPC service. 659 660 Uses / depends on gRPC. 
661 662 :type stub_class: type 663 :param stub_class: A gRPC stub type for a given service. 664 665 :type host: str 666 :param host: The host for the service. May also include the port 667 if ``port`` is unspecified. 668 669 :type port: int 670 :param port: (Optional) The port for the service. 671 672 :rtype: object, instance of ``stub_class`` 673 :returns: The stub object used to make gRPC requests to a given API. 674 """ 675 if port is None: 676 target = host 677 else: 678 # NOTE: This assumes port != http_client.HTTPS_PORT: 679 target = '%s:%d' % (host, port) 680 channel = grpc.insecure_channel(target) 681 return stub_class(channel) 682 683 684 try: 685 from pytz import UTC # pylint: disable=unused-import,wrong-import-order 686 except ImportError: 687 UTC = _UTC() # Singleton instance to be used throughout. 688 689 # Need to define _EPOCH at the end of module since it relies on UTC. 690 _EPOCH = datetime.datetime.utcfromtimestamp(0).replace(tzinfo=UTC) 691 [end of google/cloud/_helpers.py] [start of google/cloud/storage/batch.py] 1 # Copyright 2014 Google Inc. All rights reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 """Batch updates / deletes of storage buckets / blobs. 15 16 See: https://cloud.google.com/storage/docs/json_api/v1/how-tos/batch 17 """ 18 from email.encoders import encode_noop 19 from email.generator import Generator 20 from email.mime.application import MIMEApplication 21 from email.mime.multipart import MIMEMultipart 22 from email.parser import Parser 23 import io 24 import json 25 26 import httplib2 27 import six 28 29 from google.cloud.exceptions import make_exception 30 from google.cloud.storage.connection import Connection 31 32 33 class MIMEApplicationHTTP(MIMEApplication): 34 """MIME type for ``application/http``. 35 36 Constructs payload from headers and body 37 38 :type method: str 39 :param method: HTTP method 40 41 :type uri: str 42 :param uri: URI for HTTP request 43 44 :type headers: dict 45 :param headers: HTTP headers 46 47 :type body: str or None 48 :param body: HTTP payload 49 50 """ 51 def __init__(self, method, uri, headers, body): 52 if isinstance(body, dict): 53 body = json.dumps(body) 54 headers['Content-Type'] = 'application/json' 55 headers['Content-Length'] = len(body) 56 if body is None: 57 body = '' 58 lines = ['%s %s HTTP/1.1' % (method, uri)] 59 lines.extend(['%s: %s' % (key, value) 60 for key, value in sorted(headers.items())]) 61 lines.append('') 62 lines.append(body) 63 payload = '\r\n'.join(lines) 64 if six.PY2: 65 # email.message.Message is an old-style class, so we 66 # cannot use 'super()'. 67 MIMEApplication.__init__(self, payload, 'http', encode_noop) 68 else: # pragma: NO COVER Python3 69 super_init = super(MIMEApplicationHTTP, self).__init__ 70 super_init(payload, 'http', encode_noop) 71 72 73 class NoContent(object): 74 """Emulate an HTTP '204 No Content' response.""" 75 status = 204 76 77 78 class _FutureDict(object): 79 """Class to hold a future value for a deferred request. 
80 81 Used by for requests that get sent in a :class:`Batch`. 82 """ 83 84 @staticmethod 85 def get(key, default=None): 86 """Stand-in for dict.get. 87 88 :type key: object 89 :param key: Hashable dictionary key. 90 91 :type default: object 92 :param default: Fallback value to dict.get. 93 94 :raises: :class:`KeyError` always since the future is intended to fail 95 as a dictionary. 96 """ 97 raise KeyError('Cannot get(%r, default=%r) on a future' % ( 98 key, default)) 99 100 def __getitem__(self, key): 101 """Stand-in for dict[key]. 102 103 :type key: object 104 :param key: Hashable dictionary key. 105 106 :raises: :class:`KeyError` always since the future is intended to fail 107 as a dictionary. 108 """ 109 raise KeyError('Cannot get item %r from a future' % (key,)) 110 111 def __setitem__(self, key, value): 112 """Stand-in for dict[key] = value. 113 114 :type key: object 115 :param key: Hashable dictionary key. 116 117 :type value: object 118 :param value: Dictionary value. 119 120 :raises: :class:`KeyError` always since the future is intended to fail 121 as a dictionary. 122 """ 123 raise KeyError('Cannot set %r -> %r on a future' % (key, value)) 124 125 126 class Batch(Connection): 127 """Proxy an underlying connection, batching up change operations. 128 129 :type client: :class:`google.cloud.storage.client.Client` 130 :param client: The client to use for making connections. 131 """ 132 _MAX_BATCH_SIZE = 1000 133 134 def __init__(self, client): 135 super(Batch, self).__init__() 136 self._client = client 137 self._requests = [] 138 self._target_objects = [] 139 140 def _do_request(self, method, url, headers, data, target_object): 141 """Override Connection: defer actual HTTP request. 142 143 Only allow up to ``_MAX_BATCH_SIZE`` requests to be deferred. 144 145 :type method: str 146 :param method: The HTTP method to use in the request. 147 148 :type url: str 149 :param url: The URL to send the request to. 150 151 :type headers: dict 152 :param headers: A dictionary of HTTP headers to send with the request. 153 154 :type data: str 155 :param data: The data to send as the body of the request. 156 157 :type target_object: object or :class:`NoneType` 158 :param target_object: This allows us to enable custom behavior in our 159 batch connection. Here we defer an HTTP request 160 and complete initialization of the object at a 161 later time. 162 163 :rtype: tuple of ``response`` (a dictionary of sorts) 164 and ``content`` (a string). 165 :returns: The HTTP response object and the content of the response. 166 """ 167 if len(self._requests) >= self._MAX_BATCH_SIZE: 168 raise ValueError("Too many deferred requests (max %d)" % 169 self._MAX_BATCH_SIZE) 170 self._requests.append((method, url, headers, data)) 171 result = _FutureDict() 172 self._target_objects.append(target_object) 173 if target_object is not None: 174 target_object._properties = result 175 return NoContent(), result 176 177 def _prepare_batch_request(self): 178 """Prepares headers and body for a batch request. 179 180 :rtype: tuple (dict, str) 181 :returns: The pair of headers and body of the batch request to be sent. 182 :raises: :class:`ValueError` if no requests have been deferred. 
183 """ 184 if len(self._requests) == 0: 185 raise ValueError("No deferred requests") 186 187 multi = MIMEMultipart() 188 189 for method, uri, headers, body in self._requests: 190 subrequest = MIMEApplicationHTTP(method, uri, headers, body) 191 multi.attach(subrequest) 192 193 # The `email` package expects to deal with "native" strings 194 if six.PY3: # pragma: NO COVER Python3 195 buf = io.StringIO() 196 else: 197 buf = io.BytesIO() 198 generator = Generator(buf, False, 0) 199 generator.flatten(multi) 200 payload = buf.getvalue() 201 202 # Strip off redundant header text 203 _, body = payload.split('\n\n', 1) 204 return dict(multi._headers), body 205 206 def _finish_futures(self, responses): 207 """Apply all the batch responses to the futures created. 208 209 :type responses: list of (headers, payload) tuples. 210 :param responses: List of headers and payloads from each response in 211 the batch. 212 213 :raises: :class:`ValueError` if no requests have been deferred. 214 """ 215 # If a bad status occurs, we track it, but don't raise an exception 216 # until all futures have been populated. 217 exception_args = None 218 219 if len(self._target_objects) != len(responses): 220 raise ValueError('Expected a response for every request.') 221 222 for target_object, sub_response in zip(self._target_objects, 223 responses): 224 resp_headers, sub_payload = sub_response 225 if not 200 <= resp_headers.status < 300: 226 exception_args = exception_args or (resp_headers, 227 sub_payload) 228 elif target_object is not None: 229 target_object._properties = sub_payload 230 231 if exception_args is not None: 232 raise make_exception(*exception_args) 233 234 def finish(self): 235 """Submit a single `multipart/mixed` request with deferred requests. 236 237 :rtype: list of tuples 238 :returns: one ``(headers, payload)`` tuple per deferred request. 239 """ 240 headers, body = self._prepare_batch_request() 241 242 url = '%s/batch' % self.API_BASE_URL 243 244 # Use the private ``_connection`` rather than the public 245 # ``.connection``, since the public connection may be this 246 # current batch. 247 response, content = self._client._connection._make_request( 248 'POST', url, data=body, headers=headers) 249 responses = list(_unpack_batch_response(response, content)) 250 self._finish_futures(responses) 251 return responses 252 253 def current(self): 254 """Return the topmost batch, or None.""" 255 return self._client.current_batch 256 257 def __enter__(self): 258 self._client._push_batch(self) 259 return self 260 261 def __exit__(self, exc_type, exc_val, exc_tb): 262 try: 263 if exc_type is None: 264 self.finish() 265 finally: 266 self._client._pop_batch() 267 268 269 def _generate_faux_mime_message(parser, response, content): 270 """Convert response, content -> (multipart) email.message. 271 272 Helper for _unpack_batch_response. 273 """ 274 # We coerce to bytes to get consistent concat across 275 # Py2 and Py3. Percent formatting is insufficient since 276 # it includes the b in Py3. 
277 if not isinstance(content, six.binary_type): 278 content = content.encode('utf-8') 279 content_type = response['content-type'] 280 if not isinstance(content_type, six.binary_type): 281 content_type = content_type.encode('utf-8') 282 faux_message = b''.join([ 283 b'Content-Type: ', 284 content_type, 285 b'\nMIME-Version: 1.0\n\n', 286 content, 287 ]) 288 289 if six.PY2: 290 return parser.parsestr(faux_message) 291 else: # pragma: NO COVER Python3 292 return parser.parsestr(faux_message.decode('utf-8')) 293 294 295 def _unpack_batch_response(response, content): 296 """Convert response, content -> [(headers, payload)]. 297 298 Creates a generator of tuples of emulating the responses to 299 :meth:`httplib2.Http.request` (a pair of headers and payload). 300 301 :type response: :class:`httplib2.Response` 302 :param response: HTTP response / headers from a request. 303 304 :type content: str 305 :param content: Response payload with a batch response. 306 """ 307 parser = Parser() 308 message = _generate_faux_mime_message(parser, response, content) 309 310 if not isinstance(message._payload, list): 311 raise ValueError('Bad response: not multi-part') 312 313 for subrequest in message._payload: 314 status_line, rest = subrequest._payload.split('\n', 1) 315 _, status, _ = status_line.split(' ', 2) 316 sub_message = parser.parsestr(rest) 317 payload = sub_message._payload 318 ctype = sub_message['Content-Type'] 319 msg_headers = dict(sub_message._headers) 320 msg_headers['status'] = status 321 headers = httplib2.Response(msg_headers) 322 if ctype and ctype.startswith('application/json'): 323 payload = json.loads(payload) 324 yield headers, payload 325 [end of google/cloud/storage/batch.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
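As a side note that is not part of the original prompt: an answer in the format above can be dry-run checked before it is submitted by stripping the `<patch>` tags and piping the diff to `git apply --check`. The helper below is only a minimal sketch of that idea; the function name and the use of `subprocess` are illustrative assumptions, not anything the instructions require.

```python
import subprocess


def patch_applies_cleanly(answer_text, repo_dir="."):
    """Dry-run a generated patch with ``git apply --check``.

    ``answer_text`` is the full answer, including the ``<patch>`` and
    ``</patch>`` wrapper tags; returns True if git reports the diff would
    apply cleanly inside ``repo_dir``.
    """
    body = answer_text.strip()
    # Assumption: the wrapper tags appear exactly once, at the very start
    # and very end of the answer.
    if body.startswith("<patch>"):
        body = body[len("<patch>"):]
    if body.endswith("</patch>"):
        body = body[:-len("</patch>")]
    # git apply reads the diff from stdin when the file argument is "-".
    result = subprocess.run(
        ["git", "apply", "--check", "-"],
        input=(body.strip() + "\n").encode("utf-8"),
        cwd=repo_dir,
        capture_output=True,
    )
    return result.returncode == 0
```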
googleapis/google-cloud-python
9e23e1c259a60d783c6401bf141caca201e47318
Make sure our 2-year-old license headers are in line with the recommended form.

Came up because of #2370. Related to #2371.

To paraphrase @rimey: the Apache license header given in our internal instructions differs from the one used in this project.

/cc @rimey @jgeewax
The correct one (according to our open-source friends) is:

```
Copyright 2016 Google Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
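The patch below applies this header change by editing every file by hand. A change this mechanical could also be scripted; the snippet below is only a rough sketch of that idea (the `normalize_headers` helper, the `os.walk` traversal, and the assumption that the only difference is the trailing `All rights reserved.` on the copyright line are all illustrative, not taken from the issue):

```python
import os
import re

# Recommended form keeps only "# Copyright <year> Google Inc." on the
# copyright line; the old headers add " All rights reserved." after it.
_HEADER_RE = re.compile(
    r"^(# Copyright \d{4} Google Inc)\. All rights reserved\.$",
    flags=re.MULTILINE,
)


def normalize_headers(root):
    """Drop 'All rights reserved.' from license headers under ``root``."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, "r") as handle:
                original = handle.read()
            updated = _HEADER_RE.sub(r"\1.", original)
            if updated != original:
                with open(path, "w") as handle:
                    handle.write(updated)


# Example (illustrative): normalize_headers(".")
```

Run from the repository root, this should reproduce essentially the same copyright-line edits that the patch below makes file by file.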
2016-09-22T20:07:07Z
<patch> diff --git a/docs/bigquery_snippets.py b/docs/bigquery_snippets.py --- a/docs/bigquery_snippets.py +++ b/docs/bigquery_snippets.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/docs/conf.py b/docs/conf.py --- a/docs/conf.py +++ b/docs/conf.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/docs/pubsub_snippets.py b/docs/pubsub_snippets.py --- a/docs/pubsub_snippets.py +++ b/docs/pubsub_snippets.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/__init__.py b/google/__init__.py --- a/google/__init__.py +++ b/google/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/__init__.py b/google/cloud/__init__.py --- a/google/cloud/__init__.py +++ b/google/cloud/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/_helpers.py b/google/cloud/_helpers.py --- a/google/cloud/_helpers.py +++ b/google/cloud/_helpers.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigquery/__init__.py b/google/cloud/bigquery/__init__.py --- a/google/cloud/bigquery/__init__.py +++ b/google/cloud/bigquery/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigquery/_helpers.py b/google/cloud/bigquery/_helpers.py --- a/google/cloud/bigquery/_helpers.py +++ b/google/cloud/bigquery/_helpers.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigquery/client.py b/google/cloud/bigquery/client.py --- a/google/cloud/bigquery/client.py +++ b/google/cloud/bigquery/client.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigquery/connection.py b/google/cloud/bigquery/connection.py --- a/google/cloud/bigquery/connection.py +++ b/google/cloud/bigquery/connection.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigquery/dataset.py b/google/cloud/bigquery/dataset.py --- a/google/cloud/bigquery/dataset.py +++ b/google/cloud/bigquery/dataset.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigquery/job.py b/google/cloud/bigquery/job.py --- a/google/cloud/bigquery/job.py +++ b/google/cloud/bigquery/job.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigquery/query.py b/google/cloud/bigquery/query.py --- a/google/cloud/bigquery/query.py +++ b/google/cloud/bigquery/query.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigquery/schema.py b/google/cloud/bigquery/schema.py --- a/google/cloud/bigquery/schema.py +++ b/google/cloud/bigquery/schema.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigquery/table.py b/google/cloud/bigquery/table.py --- a/google/cloud/bigquery/table.py +++ b/google/cloud/bigquery/table.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/__init__.py b/google/cloud/bigtable/__init__.py --- a/google/cloud/bigtable/__init__.py +++ b/google/cloud/bigtable/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/_generated/__init__.py b/google/cloud/bigtable/_generated/__init__.py --- a/google/cloud/bigtable/_generated/__init__.py +++ b/google/cloud/bigtable/_generated/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/client.py b/google/cloud/bigtable/client.py --- a/google/cloud/bigtable/client.py +++ b/google/cloud/bigtable/client.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/cluster.py b/google/cloud/bigtable/cluster.py --- a/google/cloud/bigtable/cluster.py +++ b/google/cloud/bigtable/cluster.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/column_family.py b/google/cloud/bigtable/column_family.py --- a/google/cloud/bigtable/column_family.py +++ b/google/cloud/bigtable/column_family.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/instance.py b/google/cloud/bigtable/instance.py --- a/google/cloud/bigtable/instance.py +++ b/google/cloud/bigtable/instance.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/row.py b/google/cloud/bigtable/row.py --- a/google/cloud/bigtable/row.py +++ b/google/cloud/bigtable/row.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/row_data.py b/google/cloud/bigtable/row_data.py --- a/google/cloud/bigtable/row_data.py +++ b/google/cloud/bigtable/row_data.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/row_filters.py b/google/cloud/bigtable/row_filters.py --- a/google/cloud/bigtable/row_filters.py +++ b/google/cloud/bigtable/row_filters.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/bigtable/table.py b/google/cloud/bigtable/table.py --- a/google/cloud/bigtable/table.py +++ b/google/cloud/bigtable/table.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/client.py b/google/cloud/client.py --- a/google/cloud/client.py +++ b/google/cloud/client.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/connection.py b/google/cloud/connection.py --- a/google/cloud/connection.py +++ b/google/cloud/connection.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/credentials.py b/google/cloud/credentials.py --- a/google/cloud/credentials.py +++ b/google/cloud/credentials.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/__init__.py b/google/cloud/datastore/__init__.py --- a/google/cloud/datastore/__init__.py +++ b/google/cloud/datastore/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/_generated/__init__.py b/google/cloud/datastore/_generated/__init__.py --- a/google/cloud/datastore/_generated/__init__.py +++ b/google/cloud/datastore/_generated/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/batch.py b/google/cloud/datastore/batch.py --- a/google/cloud/datastore/batch.py +++ b/google/cloud/datastore/batch.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/client.py b/google/cloud/datastore/client.py --- a/google/cloud/datastore/client.py +++ b/google/cloud/datastore/client.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/connection.py b/google/cloud/datastore/connection.py --- a/google/cloud/datastore/connection.py +++ b/google/cloud/datastore/connection.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/entity.py b/google/cloud/datastore/entity.py --- a/google/cloud/datastore/entity.py +++ b/google/cloud/datastore/entity.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/helpers.py b/google/cloud/datastore/helpers.py --- a/google/cloud/datastore/helpers.py +++ b/google/cloud/datastore/helpers.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/key.py b/google/cloud/datastore/key.py --- a/google/cloud/datastore/key.py +++ b/google/cloud/datastore/key.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/query.py b/google/cloud/datastore/query.py --- a/google/cloud/datastore/query.py +++ b/google/cloud/datastore/query.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. 
+# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/datastore/transaction.py b/google/cloud/datastore/transaction.py --- a/google/cloud/datastore/transaction.py +++ b/google/cloud/datastore/transaction.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/dns/__init__.py b/google/cloud/dns/__init__.py --- a/google/cloud/dns/__init__.py +++ b/google/cloud/dns/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/dns/changes.py b/google/cloud/dns/changes.py --- a/google/cloud/dns/changes.py +++ b/google/cloud/dns/changes.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/dns/client.py b/google/cloud/dns/client.py --- a/google/cloud/dns/client.py +++ b/google/cloud/dns/client.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/dns/connection.py b/google/cloud/dns/connection.py --- a/google/cloud/dns/connection.py +++ b/google/cloud/dns/connection.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/dns/resource_record_set.py b/google/cloud/dns/resource_record_set.py --- a/google/cloud/dns/resource_record_set.py +++ b/google/cloud/dns/resource_record_set.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/dns/zone.py b/google/cloud/dns/zone.py --- a/google/cloud/dns/zone.py +++ b/google/cloud/dns/zone.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/environment_vars.py b/google/cloud/environment_vars.py --- a/google/cloud/environment_vars.py +++ b/google/cloud/environment_vars.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/error_reporting/__init__.py b/google/cloud/error_reporting/__init__.py --- a/google/cloud/error_reporting/__init__.py +++ b/google/cloud/error_reporting/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/error_reporting/client.py b/google/cloud/error_reporting/client.py --- a/google/cloud/error_reporting/client.py +++ b/google/cloud/error_reporting/client.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/exceptions.py b/google/cloud/exceptions.py --- a/google/cloud/exceptions.py +++ b/google/cloud/exceptions.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/iterator.py b/google/cloud/iterator.py --- a/google/cloud/iterator.py +++ b/google/cloud/iterator.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/language/__init__.py b/google/cloud/language/__init__.py --- a/google/cloud/language/__init__.py +++ b/google/cloud/language/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/language/client.py b/google/cloud/language/client.py --- a/google/cloud/language/client.py +++ b/google/cloud/language/client.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/language/connection.py b/google/cloud/language/connection.py --- a/google/cloud/language/connection.py +++ b/google/cloud/language/connection.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/language/document.py b/google/cloud/language/document.py --- a/google/cloud/language/document.py +++ b/google/cloud/language/document.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/language/entity.py b/google/cloud/language/entity.py --- a/google/cloud/language/entity.py +++ b/google/cloud/language/entity.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/language/sentiment.py b/google/cloud/language/sentiment.py --- a/google/cloud/language/sentiment.py +++ b/google/cloud/language/sentiment.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/language/syntax.py b/google/cloud/language/syntax.py --- a/google/cloud/language/syntax.py +++ b/google/cloud/language/syntax.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/__init__.py b/google/cloud/logging/__init__.py --- a/google/cloud/logging/__init__.py +++ b/google/cloud/logging/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/_gax.py b/google/cloud/logging/_gax.py --- a/google/cloud/logging/_gax.py +++ b/google/cloud/logging/_gax.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/client.py b/google/cloud/logging/client.py --- a/google/cloud/logging/client.py +++ b/google/cloud/logging/client.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/connection.py b/google/cloud/logging/connection.py --- a/google/cloud/logging/connection.py +++ b/google/cloud/logging/connection.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/entries.py b/google/cloud/logging/entries.py --- a/google/cloud/logging/entries.py +++ b/google/cloud/logging/entries.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/handlers/__init__.py b/google/cloud/logging/handlers/__init__.py --- a/google/cloud/logging/handlers/__init__.py +++ b/google/cloud/logging/handlers/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/handlers/handlers.py b/google/cloud/logging/handlers/handlers.py --- a/google/cloud/logging/handlers/handlers.py +++ b/google/cloud/logging/handlers/handlers.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/google/cloud/logging/handlers/transports/__init__.py b/google/cloud/logging/handlers/transports/__init__.py --- a/google/cloud/logging/handlers/transports/__init__.py +++ b/google/cloud/logging/handlers/transports/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/handlers/transports/background_thread.py b/google/cloud/logging/handlers/transports/background_thread.py --- a/google/cloud/logging/handlers/transports/background_thread.py +++ b/google/cloud/logging/handlers/transports/background_thread.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/handlers/transports/base.py b/google/cloud/logging/handlers/transports/base.py --- a/google/cloud/logging/handlers/transports/base.py +++ b/google/cloud/logging/handlers/transports/base.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/handlers/transports/sync.py b/google/cloud/logging/handlers/transports/sync.py --- a/google/cloud/logging/handlers/transports/sync.py +++ b/google/cloud/logging/handlers/transports/sync.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/logger.py b/google/cloud/logging/logger.py --- a/google/cloud/logging/logger.py +++ b/google/cloud/logging/logger.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/metric.py b/google/cloud/logging/metric.py --- a/google/cloud/logging/metric.py +++ b/google/cloud/logging/metric.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/logging/sink.py b/google/cloud/logging/sink.py --- a/google/cloud/logging/sink.py +++ b/google/cloud/logging/sink.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/monitoring/__init__.py b/google/cloud/monitoring/__init__.py --- a/google/cloud/monitoring/__init__.py +++ b/google/cloud/monitoring/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/google/cloud/monitoring/_dataframe.py b/google/cloud/monitoring/_dataframe.py --- a/google/cloud/monitoring/_dataframe.py +++ b/google/cloud/monitoring/_dataframe.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/monitoring/client.py b/google/cloud/monitoring/client.py --- a/google/cloud/monitoring/client.py +++ b/google/cloud/monitoring/client.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/monitoring/connection.py b/google/cloud/monitoring/connection.py --- a/google/cloud/monitoring/connection.py +++ b/google/cloud/monitoring/connection.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/monitoring/group.py b/google/cloud/monitoring/group.py --- a/google/cloud/monitoring/group.py +++ b/google/cloud/monitoring/group.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/monitoring/label.py b/google/cloud/monitoring/label.py --- a/google/cloud/monitoring/label.py +++ b/google/cloud/monitoring/label.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/monitoring/metric.py b/google/cloud/monitoring/metric.py --- a/google/cloud/monitoring/metric.py +++ b/google/cloud/monitoring/metric.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/monitoring/query.py b/google/cloud/monitoring/query.py --- a/google/cloud/monitoring/query.py +++ b/google/cloud/monitoring/query.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/monitoring/resource.py b/google/cloud/monitoring/resource.py --- a/google/cloud/monitoring/resource.py +++ b/google/cloud/monitoring/resource.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/monitoring/timeseries.py b/google/cloud/monitoring/timeseries.py --- a/google/cloud/monitoring/timeseries.py +++ b/google/cloud/monitoring/timeseries.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/operation.py b/google/cloud/operation.py --- a/google/cloud/operation.py +++ b/google/cloud/operation.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/pubsub/__init__.py b/google/cloud/pubsub/__init__.py --- a/google/cloud/pubsub/__init__.py +++ b/google/cloud/pubsub/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/pubsub/_gax.py b/google/cloud/pubsub/_gax.py --- a/google/cloud/pubsub/_gax.py +++ b/google/cloud/pubsub/_gax.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/pubsub/_helpers.py b/google/cloud/pubsub/_helpers.py --- a/google/cloud/pubsub/_helpers.py +++ b/google/cloud/pubsub/_helpers.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/pubsub/client.py b/google/cloud/pubsub/client.py --- a/google/cloud/pubsub/client.py +++ b/google/cloud/pubsub/client.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/pubsub/connection.py b/google/cloud/pubsub/connection.py --- a/google/cloud/pubsub/connection.py +++ b/google/cloud/pubsub/connection.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/pubsub/iam.py b/google/cloud/pubsub/iam.py --- a/google/cloud/pubsub/iam.py +++ b/google/cloud/pubsub/iam.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/pubsub/message.py b/google/cloud/pubsub/message.py --- a/google/cloud/pubsub/message.py +++ b/google/cloud/pubsub/message.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/pubsub/subscription.py b/google/cloud/pubsub/subscription.py --- a/google/cloud/pubsub/subscription.py +++ b/google/cloud/pubsub/subscription.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/pubsub/topic.py b/google/cloud/pubsub/topic.py --- a/google/cloud/pubsub/topic.py +++ b/google/cloud/pubsub/topic.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/resource_manager/__init__.py b/google/cloud/resource_manager/__init__.py --- a/google/cloud/resource_manager/__init__.py +++ b/google/cloud/resource_manager/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/resource_manager/client.py b/google/cloud/resource_manager/client.py --- a/google/cloud/resource_manager/client.py +++ b/google/cloud/resource_manager/client.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/resource_manager/connection.py b/google/cloud/resource_manager/connection.py --- a/google/cloud/resource_manager/connection.py +++ b/google/cloud/resource_manager/connection.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/resource_manager/project.py b/google/cloud/resource_manager/project.py --- a/google/cloud/resource_manager/project.py +++ b/google/cloud/resource_manager/project.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/speech/__init__.py b/google/cloud/speech/__init__.py --- a/google/cloud/speech/__init__.py +++ b/google/cloud/speech/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/speech/client.py b/google/cloud/speech/client.py --- a/google/cloud/speech/client.py +++ b/google/cloud/speech/client.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/speech/connection.py b/google/cloud/speech/connection.py --- a/google/cloud/speech/connection.py +++ b/google/cloud/speech/connection.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/google/cloud/storage/__init__.py b/google/cloud/storage/__init__.py --- a/google/cloud/storage/__init__.py +++ b/google/cloud/storage/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/storage/_helpers.py b/google/cloud/storage/_helpers.py --- a/google/cloud/storage/_helpers.py +++ b/google/cloud/storage/_helpers.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/storage/acl.py b/google/cloud/storage/acl.py --- a/google/cloud/storage/acl.py +++ b/google/cloud/storage/acl.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/storage/batch.py b/google/cloud/storage/batch.py --- a/google/cloud/storage/batch.py +++ b/google/cloud/storage/batch.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/storage/blob.py b/google/cloud/storage/blob.py --- a/google/cloud/storage/blob.py +++ b/google/cloud/storage/blob.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/storage/bucket.py b/google/cloud/storage/bucket.py --- a/google/cloud/storage/bucket.py +++ b/google/cloud/storage/bucket.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/storage/client.py b/google/cloud/storage/client.py --- a/google/cloud/storage/client.py +++ b/google/cloud/storage/client.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/storage/connection.py b/google/cloud/storage/connection.py --- a/google/cloud/storage/connection.py +++ b/google/cloud/storage/connection.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/streaming/__init__.py b/google/cloud/streaming/__init__.py --- a/google/cloud/streaming/__init__.py +++ b/google/cloud/streaming/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/google/cloud/streaming/buffered_stream.py b/google/cloud/streaming/buffered_stream.py --- a/google/cloud/streaming/buffered_stream.py +++ b/google/cloud/streaming/buffered_stream.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/streaming/exceptions.py b/google/cloud/streaming/exceptions.py --- a/google/cloud/streaming/exceptions.py +++ b/google/cloud/streaming/exceptions.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/streaming/http_wrapper.py b/google/cloud/streaming/http_wrapper.py --- a/google/cloud/streaming/http_wrapper.py +++ b/google/cloud/streaming/http_wrapper.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/streaming/stream_slice.py b/google/cloud/streaming/stream_slice.py --- a/google/cloud/streaming/stream_slice.py +++ b/google/cloud/streaming/stream_slice.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/streaming/transfer.py b/google/cloud/streaming/transfer.py --- a/google/cloud/streaming/transfer.py +++ b/google/cloud/streaming/transfer.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/streaming/util.py b/google/cloud/streaming/util.py --- a/google/cloud/streaming/util.py +++ b/google/cloud/streaming/util.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/translate/__init__.py b/google/cloud/translate/__init__.py --- a/google/cloud/translate/__init__.py +++ b/google/cloud/translate/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/translate/client.py b/google/cloud/translate/client.py --- a/google/cloud/translate/client.py +++ b/google/cloud/translate/client.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/translate/connection.py b/google/cloud/translate/connection.py --- a/google/cloud/translate/connection.py +++ b/google/cloud/translate/connection.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/__init__.py b/google/cloud/vision/__init__.py --- a/google/cloud/vision/__init__.py +++ b/google/cloud/vision/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/client.py b/google/cloud/vision/client.py --- a/google/cloud/vision/client.py +++ b/google/cloud/vision/client.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/color.py b/google/cloud/vision/color.py --- a/google/cloud/vision/color.py +++ b/google/cloud/vision/color.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/connection.py b/google/cloud/vision/connection.py --- a/google/cloud/vision/connection.py +++ b/google/cloud/vision/connection.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/entity.py b/google/cloud/vision/entity.py --- a/google/cloud/vision/entity.py +++ b/google/cloud/vision/entity.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/face.py b/google/cloud/vision/face.py --- a/google/cloud/vision/face.py +++ b/google/cloud/vision/face.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/feature.py b/google/cloud/vision/feature.py --- a/google/cloud/vision/feature.py +++ b/google/cloud/vision/feature.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/geometry.py b/google/cloud/vision/geometry.py --- a/google/cloud/vision/geometry.py +++ b/google/cloud/vision/geometry.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/image.py b/google/cloud/vision/image.py --- a/google/cloud/vision/image.py +++ b/google/cloud/vision/image.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/likelihood.py b/google/cloud/vision/likelihood.py --- a/google/cloud/vision/likelihood.py +++ b/google/cloud/vision/likelihood.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/google/cloud/vision/safe.py b/google/cloud/vision/safe.py --- a/google/cloud/vision/safe.py +++ b/google/cloud/vision/safe.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/scripts/generate_json_docs.py b/scripts/generate_json_docs.py --- a/scripts/generate_json_docs.py +++ b/scripts/generate_json_docs.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/scripts/get_version.py b/scripts/get_version.py --- a/scripts/get_version.py +++ b/scripts/get_version.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/scripts/make_datastore_grpc.py b/scripts/make_datastore_grpc.py --- a/scripts/make_datastore_grpc.py +++ b/scripts/make_datastore_grpc.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/scripts/make_operations_grpc.py b/scripts/make_operations_grpc.py --- a/scripts/make_operations_grpc.py +++ b/scripts/make_operations_grpc.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/scripts/pycodestyle_on_repo.py b/scripts/pycodestyle_on_repo.py --- a/scripts/pycodestyle_on_repo.py +++ b/scripts/pycodestyle_on_repo.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/scripts/rewrite_imports.py b/scripts/rewrite_imports.py --- a/scripts/rewrite_imports.py +++ b/scripts/rewrite_imports.py @@ -1,4 +1,4 @@ -# Copyright 2015 Google Inc. All rights reserved. +# Copyright 2015 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/scripts/run_pylint.py b/scripts/run_pylint.py --- a/scripts/run_pylint.py +++ b/scripts/run_pylint.py @@ -1,4 +1,4 @@ -# Copyright 2014 Google Inc. All rights reserved. +# Copyright 2014 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/scripts/verify_included_modules.py b/scripts/verify_included_modules.py --- a/scripts/verify_included_modules.py +++ b/scripts/verify_included_modules.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1,4 +1,4 @@ -# Copyright 2016 Google Inc. All rights reserved. +# Copyright 2016 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. </patch>
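The patch above is a uniform, mechanical edit: every touched file's header changes from `# Copyright YYYY Google Inc. All rights reserved.` to `# Copyright YYYY Google Inc.`. As a minimal sketch (not part of the dataset row, and assuming it is run from a checkout of the affected repository), the same rewrite could be produced with a short script like this:

```python
# Sketch only: normalize the copyright line the same way the patch above does.
import pathlib
import re

# Matches e.g. "# Copyright 2016 Google Inc. All rights reserved."
PATTERN = re.compile(r"^(# Copyright 20\d\d Google Inc)\. All rights reserved\.$",
                     re.MULTILINE)

for path in pathlib.Path(".").rglob("*.py"):
    text = path.read_text()
    new_text = PATTERN.sub(r"\1.", text)
    if new_text != text:
        path.write_text(new_text)  # rewrite only files that actually changed
```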
[]
[]
Lightning-AI__lightning-1724
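The issue quoted below asks that metrics returned from `training_epoch_end` also update the progress bar, mirroring `validation_epoch_end`. As a rough, non-authoritative sketch of the usage pattern being requested (it assumes the same `'log'` / `'progress_bar'` dict convention that `validation_epoch_end` already uses, and it is not the actual fix):

```python
import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    # training_step, configure_optimizers, dataloaders, etc. omitted for brevity

    def training_epoch_end(self, outputs):
        # Average the per-batch losses collected over the epoch.
        avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
        return {
            'log': {'train_loss_epoch': avg_loss},           # already handled after PR 1357
            'progress_bar': {'train_loss_epoch': avg_loss},  # what the issue asks to show in tqdm
        }
```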
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Also update progress_bar in training_epoch_end ## 🚀 Feature [PR 1357](https://github.com/PyTorchLightning/pytorch-lightning/pull/1357) implements training_epoch_end to log metrics. The comments in the issue suggest that it should behave like validation_epoch_end, but the PR only replicates the callbacks and metric logging. This feature would add updates to the progress bar as well. ### Motivation Motivation is for the same usecase as having a progress bar. While tensorboard or any other logger is a great resource to have. It is painful to set up in environments like slurm where you can only submit jobs and possibly cannot forward ports or set up the logging. ### Pitch The changes should only entail adding an update to tqdm [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/b2707c9b2ebeac03f19a3939df9432ac8859d894/pytorch_lightning/trainer/training_loop.py#L503) </issue> <code> [start of README.md] 1 <div align="center"> 2 3 ![Logo](docs/source/_images/logos/lightning_logo.svg) 4 5 # PyTorch Lightning 6 7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.** 8 9 10 [![PyPI Status](https://badge.fury.io/py/pytorch-lightning.svg)](https://badge.fury.io/py/pytorch-lightning) 11 [![PyPI Status](https://pepy.tech/badge/pytorch-lightning)](https://pepy.tech/project/pytorch-lightning) 12 [![codecov](https://codecov.io/gh/PyTorchLightning/pytorch-lightning/branch/master/graph/badge.svg)](https://codecov.io/gh/PyTorchLightning/pytorch-lightning) 13 [![CodeFactor](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning/badge)](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning) 14 15 [![ReadTheDocs](https://readthedocs.org/projects/pytorch-lightning/badge/?version=0.7.5)](https://pytorch-lightning.readthedocs.io/en/stable/) 16 [![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ) 17 [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE) 18 [![Next Release](https://img.shields.io/badge/Next%20Release-May%2006-<COLOR>.svg)](https://shields.io/) 19 20 <!-- 21 removed until codecov badge isn't empy. likely a config error showing nothing on master. 22 [![codecov](https://codecov.io/gh/Borda/pytorch-lightning/branch/master/graph/badge.svg)](https://codecov.io/gh/Borda/pytorch-lightning) 23 --> 24 </div> 25 26 --- 27 ## Continuous Integration 28 <center> 29 30 | System / PyTorch ver. | 1.1 (min. 
reg) | 1.2 | 1.3 | 1.4 | 1.5 (latest) | 31 | :---: | :---: | :---: | :---: | :---: | :---: | 32 | Linux py3.6 [CPU] | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | [![CircleCI](https://circleci.com/gh/PyTorchLightning/pytorch-lightning.svg?style=svg)](https://circleci.com/gh/PyTorchLightning/pytorch-lightning) | 33 | Linux py3.7 [GPU] | - | - | - | - | [![Build Status](http://35.192.60.23/api/badges/PyTorchLightning/pytorch-lightning/status.svg)](http://35.192.60.23/PyTorchLightning/pytorch-lightning) | 34 | Linux py3.6 / py3.7 / py3.8 | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | 35 | OSX py3.6 / py3.7 / py3.8| [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | 36 | Windows py3.6 / py3.7 / py3.8 | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | 37 38 </center> 39 40 Simple installation from PyPI 41 ```bash 42 pip install pytorch-lightning 43 ``` 44 45 ## Docs 46 - [master](https://pytorch-lightning.readthedocs.io/en/latest) 47 - [0.7.5](https://pytorch-lightning.readthedocs.io/en/0.7.5/) 48 - [0.7.3](https://pytorch-lightning.readthedocs.io/en/0.7.3/) 49 - [0.7.1](https://pytorch-lightning.readthedocs.io/en/0.7.1/) 50 - [0.6.0](https://pytorch-lightning.readthedocs.io/en/0.6.0/) 51 - [0.5.3.2](https://pytorch-lightning.readthedocs.io/en/0.5.3.2/) 52 53 ## Demo 54 [MNIST, GAN, BERT, DQN on COLAB!](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg) 55 [MNIST on TPUs](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3) 56 57 ## What is it? 58 [READ THIS QUICK START PAGE](https://pytorch-lightning.readthedocs.io/en/stable/new-project.html) 59 60 Lightning is a way to organize your PyTorch code to decouple the science code from the engineering. 61 It's more of a PyTorch style-guide than a framework. 
62 63 In Lightning, you organize your code into 3 distinct categories: 64 65 1. Research code (goes in the LightningModule). 66 2. Engineering code (you delete, and is handled by the Trainer). 67 3. Non-essential research code (logging, etc... this goes in Callbacks). 68 69 Here's an example of how to refactor your research code into a [LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/lightning-module.html). 70 71 ![PT to PL](docs/source/_images/lightning_module/pt_to_pl.png) 72 73 The rest of the code is automated by the [Trainer](https://pytorch-lightning.readthedocs.io/en/latest/trainer.html)! 74 ![PT to PL](docs/source/_images/lightning_module/pt_trainer.png) 75 76 ## Testing Rigour 77 All the automated code by the Trainer is [tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). 78 79 In fact, we also train a few models using a vanilla PyTorch loop and compare with the same model trained using the Trainer to make sure we achieve the EXACT same results. [Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks). 80 81 Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts. 82 83 ## How flexible is it? 84 As you see, you're just organizing your PyTorch code - there's no abstraction. 85 86 And for the stuff that the Trainer abstracts out you can [override any part](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#extensibility) you want to do things like implement your own distributed training, 16-bit precision, or even a custom backwards pass. 87 88 For example, here you could do your own backward pass 89 90 ```python 91 class LitModel(LightningModule): 92 def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx, 93 second_order_closure=None): 94 optimizer.step() 95 optimizer.zero_grad() 96 ``` 97 98 For anything else you might need, we have an extensive [callback system](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#callbacks) you can use to add arbitrary functionality not implemented by our team in the Trainer. 99 100 ## Who is Lightning for? 101 - Professional researchers 102 - PhD students 103 - Corporate production teams 104 105 If you're just getting into deep learning, we recommend you learn PyTorch first! Once you've implemented a few models, come back and use all the advanced features of Lightning :) 106 107 ## What does lightning control for me? 108 109 Everything in Blue! 110 This is how lightning separates the science (red) from the engineering (blue). 111 112 ![Overview](docs/source/_images/general/pl_overview.gif) 113 114 ## How much effort is it to convert? 115 If your code is not a huge mess you should be able to organize it into a LightningModule in less than 1 hour. 116 If your code IS a mess, then you needed to clean up anyhow ;) 117 118 [Check out this step-by-step guide](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09). 119 120 121 ## Starting a new project? 122 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed) 123 124 ## Why do I want to use lightning? 125 Although your research/production project might start simple, once you add things like GPU AND TPU training, 16-bit precision, etc, you end up spending more time engineering than researching. 
Lightning automates AND rigorously tests those parts for you. 126 127 ## Support 128 - [8 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, PhD students from top AI labs. 129 - 100+ community contributors. 130 131 Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support. 132 133 --- 134 135 ## README Table of Contents 136 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it) 137 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me) 138 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard) 139 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable) 140 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples) 141 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials) 142 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help) 143 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md) 144 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge) 145 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles) 146 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team) 147 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq) 148 149 --- 150 151 ## Realistic example 152 Here's how you would organize a realistic PyTorch project into Lightning. 153 154 ![PT to PL](docs/source/_images/mnist_imgs/pt_to_pl.jpg) 155 156 The LightningModule defines a *system* such as seq-2-seq, GAN, etc... 157 It can ALSO define a simple classifier. 158 159 In summary, you: 160 161 1. Define a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html) 162 ```python 163 class LitSystem(pl.LightningModule): 164 165 def __init__(self): 166 super().__init__() 167 # not the best model... 168 self.l1 = torch.nn.Linear(28 * 28, 10) 169 170 def forward(self, x): 171 return torch.relu(self.l1(x.view(x.size(0), -1))) 172 173 def training_step(self, batch, batch_idx): 174 ... 175 ``` 176 177 2. Fit it with a [Trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html) 178 ```python 179 from pytorch_lightning import Trainer 180 181 model = LitSystem() 182 183 # most basic trainer, uses good defaults 184 trainer = Trainer() 185 trainer.fit(model) 186 ``` 187 188 [Check out the COLAB demo here](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg) 189 190 ## What types of research works? 191 Anything! Remember, that this is just organized PyTorch code. 192 The Training step defines the core complexity found in the training loop. 
193 194 #### Could be as complex as a seq2seq 195 196 ```python 197 # define what happens for training here 198 def training_step(self, batch, batch_idx): 199 x, y = batch 200 201 # define your own forward and loss calculation 202 hidden_states = self.encoder(x) 203 204 # even as complex as a seq-2-seq + attn model 205 # (this is just a toy, non-working example to illustrate) 206 start_token = '<SOS>' 207 last_hidden = torch.zeros(...) 208 loss = 0 209 for step in range(max_seq_len): 210 attn_context = self.attention_nn(hidden_states, start_token) 211 pred = self.decoder(start_token, attn_context, last_hidden) 212 last_hidden = pred 213 pred = self.predict_nn(pred) 214 loss += self.loss(last_hidden, y[step]) 215 216 #toy example as well 217 loss = loss / max_seq_len 218 return {'loss': loss} 219 ``` 220 221 #### Or as basic as CNN image classification 222 223 ```python 224 # define what happens for validation here 225 def validation_step(self, batch, batch_idx): 226 x, y = batch 227 228 # or as basic as a CNN classification 229 out = self(x) 230 loss = my_loss(out, y) 231 return {'loss': loss} 232 ``` 233 234 And without changing a single line of code, you could run on CPUs 235 ```python 236 trainer = Trainer(max_epochs=1) 237 ``` 238 239 240 Or GPUs 241 ```python 242 # 8 GPUs 243 trainer = Trainer(max_epochs=1, gpus=8) 244 245 # 256 GPUs 246 trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32) 247 ``` 248 249 Or TPUs 250 ```python 251 trainer = Trainer(num_tpu_cores=8) 252 ``` 253 254 When you're done training, run the test accuracy 255 ```python 256 trainer.test() 257 ``` 258 259 ## Visualization 260 Lightning has out-of-the-box integration with the popular logging/visualizing frameworks 261 262 - [Tensorboard](https://pytorch.org/docs/stable/tensorboard.html) 263 - [MLFlow](https://mlflow.org/) 264 - [Neptune.ai](https://neptune.ai/) 265 - [Comet.ml](https://www.comet.ml/site/) 266 - [Wandb](https://www.wandb.com/) 267 - [Trains](https://github.com/allegroai/trains) 268 - ... 269 270 ![tensorboard-support](docs/source/_images/general/tf_loss.png) 271 272 273 ## Lightning automates 40+ parts of DL/ML research 274 - GPU training 275 - Distributed GPU (cluster) training 276 - TPU training 277 - EarlyStopping 278 - Logging/Visualizing 279 - Checkpointing 280 - Experiment management 281 - [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases) 282 283 284 ## Examples 285 Check out this awesome list of research papers and implementations done with Lightning. 
286 287 - [Contextual Emotion Detection (DoubleDistilBert)](https://github.com/PyTorchLightning/emotion_transformer) 288 - [Generative Adversarial Network](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=TyYOdg8g77P0) 289 - [Hyperparameter optimization with Optuna](https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py) 290 - [Image Inpainting using Partial Convolutions](https://github.com/ryanwongsa/Image-Inpainting) 291 - [MNIST on TPU](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3#scrollTo=BHBz1_AnamN_) 292 - [NER (transformers, TPU, huggingface)](https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D) 293 - [NeuralTexture (CVPR)](https://github.com/PyTorchLightning/neuraltexture) 294 - [Recurrent Attentive Neural Process](https://github.com/PyTorchLightning/attentive-neural-processes) 295 - [Siamese Nets for One-shot Image Recognition](https://github.com/PyTorchLightning/Siamese-Neural-Networks) 296 - [Speech Transformers](https://github.com/PyTorchLightning/speech-transformer-pytorch_lightning) 297 - [Transformers transfer learning (Huggingface)](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=yr7eaxkF-djf) 298 - [Transformers text classification](https://github.com/ricardorei/lightning-text-classification) 299 - [VAE Library of over 18+ VAE flavors](https://github.com/AntixK/PyTorch-VAE) 300 301 ## Tutorials 302 Check out our [introduction guide](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) to get started. 303 Or jump straight into [our tutorials](https://pytorch-lightning.readthedocs.io/en/latest/#tutorials). 304 305 --- 306 307 ## Asking for help 308 Welcome to the Lightning community! 309 310 If you have any questions, feel free to: 311 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/). 312 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question). 313 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning. 314 4. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/enQtODU5ODIyNTUzODQwLTFkMDg5Mzc1MDBmNjEzMDgxOTVmYTdhYjA1MDdmODUyOTg2OGQ1ZWZkYTQzODhhNzdhZDA3YmNhMDhlMDY4YzQ). 315 316 --- 317 ## FAQ 318 **How do I use Lightning for rapid research?** 319 [Here's a walk-through](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) 320 321 **Why was Lightning created?** 322 Lightning has 3 goals in mind: 323 324 1. Maximal flexibility while abstracting out the common boilerplate across research projects. 325 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format. 326 3. Democratizing PyTorch power user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning. 327 328 **How does Lightning compare with Ignite and fast.ai?** 329 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a). 330 331 **Is this another library I have to learn?** 332 Nope! We use pure Pytorch everywhere and don't add unnecessary abstractions! 333 334 **Are there plans to support Python 2?** 335 Nope. 
336 337 **Are there plans to support virtualenv?** 338 Nope. Please use anaconda or miniconda. 339 340 **Which PyTorch versions do you support?** 341 - **PyTorch 1.1.0** 342 ```bash 343 # install pytorch 1.1.0 using the official instructions 344 345 # install test-tube 0.6.7.6 which supports 1.1.0 346 pip install test-tube==0.6.7.6 347 348 # install latest Lightning version without upgrading deps 349 pip install -U --no-deps pytorch-lightning 350 ``` 351 - **PyTorch 1.2.0, 1.3.0,** 352 Install via pip as normal 353 354 ## Custom installation 355 356 ### Bleeding edge 357 358 If you can't wait for the next release, install the most up to date code with: 359 * using GIT (locally clone whole repo with full history) 360 ```bash 361 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade 362 ``` 363 * using instant zip (last state of the repo without git history) 364 ```bash 365 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade 366 ``` 367 368 ### Any release installation 369 370 You can also install any past release `0.X.Y` from this repository: 371 ```bash 372 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade 373 ``` 374 375 ### Lightning team 376 377 #### Leads 378 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder) 379 - Jirka Borovec [(Borda)](https://github.com/Borda) (ghost :) 380 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder) 381 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder) 382 - Justus Schock [(justusschock)](https://github.com/justusschock) (Former Core Member PyTorch Ignite) 383 384 #### Core Maintainers 385 386 - Nick Eggert [(neggert)](https://github.com/neggert) 387 - Jeff Ling [(jeffling)](https://github.com/jeffling) 388 - Jeremy Jordan [(jeremyjordan)](https://github.com/jeremyjordan) 389 - Tullie Murrell [(tullie)](https://github.com/tullie) 390 - Adrian Wälchli [(awaelchli)](https://github.com/awaelchli) 391 392 ## Bibtex 393 If you want to cite the framework feel free to use this (but only if you loved it 😊): 394 395 ```bibtex 396 @article{falcon2019pytorch, 397 title={PyTorch Lightning}, 398 author={Falcon, WA}, 399 journal={GitHub. Note: https://github. com/williamFalcon/pytorch-lightning Cited by}, 400 volume={3}, 401 year={2019} 402 } 403 ``` 404 [end of README.md] [start of pytorch_lightning/callbacks/progress.py] 1 """ 2 Progress Bars 3 ============= 4 5 Use or override one of the progress bar callbacks. 6 7 """ 8 import sys 9 10 from tqdm.auto import tqdm 11 12 from pytorch_lightning.callbacks import Callback 13 14 15 class ProgressBarBase(Callback): 16 r""" 17 The base class for progress bars in Lightning. It is a :class:`~pytorch_lightning.callbacks.Callback` 18 that keeps track of the batch progress in the :class:`~pytorch_lightning.trainer.trainer.Trainer`. 19 You should implement your highly custom progress bars with this as the base class. 
20 21 Example:: 22 23 class LitProgressBar(ProgressBarBase): 24 25 def __init__(self): 26 super().__init__() # don't forget this :) 27 self.enable = True 28 29 def disable(self): 30 self.enable = False 31 32 def on_batch_end(self, trainer, pl_module): 33 super().on_batch_end(trainer, pl_module) # don't forget this :) 34 percent = (self.train_batch_idx / self.total_train_batches) * 100 35 sys.stdout.flush() 36 sys.stdout.write(f'{percent:.01f} percent complete \r') 37 38 bar = LitProgressBar() 39 trainer = Trainer(callbacks=[bar]) 40 41 """ 42 def __init__(self): 43 44 self._trainer = None 45 self._train_batch_idx = 0 46 self._val_batch_idx = 0 47 self._test_batch_idx = 0 48 49 @property 50 def trainer(self): 51 return self._trainer 52 53 @property 54 def train_batch_idx(self) -> int: 55 """ 56 The current batch index being processed during training. 57 Use this to update your progress bar. 58 """ 59 return self._train_batch_idx 60 61 @property 62 def val_batch_idx(self) -> int: 63 """ 64 The current batch index being processed during validation. 65 Use this to update your progress bar. 66 """ 67 return self._val_batch_idx 68 69 @property 70 def test_batch_idx(self) -> int: 71 """ 72 The current batch index being processed during testing. 73 Use this to update your progress bar. 74 """ 75 return self._test_batch_idx 76 77 @property 78 def total_train_batches(self) -> int: 79 """ 80 The total number of training batches during training, which may change from epoch to epoch. 81 Use this to set the total number of iterations in the progress bar. Can return ``inf`` if the 82 training dataloader is of infinite size. 83 """ 84 total_train_batches = 1 if self.trainer.fast_dev_run else self.trainer.num_training_batches 85 return total_train_batches 86 87 @property 88 def total_val_batches(self) -> int: 89 """ 90 The total number of training batches during validation, which may change from epoch to epoch. 91 Use this to set the total number of iterations in the progress bar. Can return ``inf`` if the 92 validation dataloader is of infinite size. 93 """ 94 trainer = self.trainer 95 total_val_batches = 0 96 if trainer.fast_dev_run: 97 total_val_batches = len(trainer.val_dataloaders) 98 elif not self.trainer.disable_validation: 99 is_val_epoch = (trainer.current_epoch + 1) % trainer.check_val_every_n_epoch == 0 100 total_val_batches = trainer.num_val_batches if is_val_epoch else 0 101 return total_val_batches 102 103 @property 104 def total_test_batches(self) -> int: 105 """ 106 The total number of training batches during testing, which may change from epoch to epoch. 107 Use this to set the total number of iterations in the progress bar. Can return ``inf`` if the 108 test dataloader is of infinite size. 109 """ 110 if self.trainer.fast_dev_run: 111 total_test_batches = len(self.trainer.test_dataloaders) 112 else: 113 total_test_batches = self.trainer.num_test_batches 114 return total_test_batches 115 116 def disable(self): 117 """ 118 You should provide a way to disable the progress bar. 119 The :class:`~pytorch_lightning.trainer.trainer.Trainer` will call this to disable the 120 output on processes that have a rank different from 0, e.g., in multi-node training. 121 """ 122 raise NotImplementedError 123 124 def enable(self): 125 """ 126 You should provide a way to enable the progress bar. 127 The :class:`~pytorch_lightning.trainer.trainer.Trainer` will call this in e.g. 
pre-training 128 routines like the `learning rate finder <lr_finder.rst>`_ to temporarily enable and 129 disable the main progress bar. 130 """ 131 raise NotImplementedError 132 133 def on_init_end(self, trainer): 134 self._trainer = trainer 135 136 def on_train_start(self, trainer, pl_module): 137 self._train_batch_idx = trainer.batch_idx 138 139 def on_epoch_start(self, trainer, pl_module): 140 self._train_batch_idx = 0 141 142 def on_batch_end(self, trainer, pl_module): 143 self._train_batch_idx += 1 144 145 def on_validation_start(self, trainer, pl_module): 146 self._val_batch_idx = 0 147 148 def on_validation_batch_end(self, trainer, pl_module): 149 self._val_batch_idx += 1 150 151 def on_test_start(self, trainer, pl_module): 152 self._test_batch_idx = 0 153 154 def on_test_batch_end(self, trainer, pl_module): 155 self._test_batch_idx += 1 156 157 158 class ProgressBar(ProgressBarBase): 159 r""" 160 This is the default progress bar used by Lightning. It prints to `stdout` using the 161 :mod:`tqdm` package and shows up to four different bars: 162 163 - **sanity check progress:** the progress during the sanity check run 164 - **main progress:** shows training + validation progress combined. It also accounts for 165 multiple validation runs during training when 166 :paramref:`~pytorch_lightning.trainer.trainer.Trainer.val_check_interval` is used. 167 - **validation progress:** only visible during validation; 168 shows total progress over all validation datasets. 169 - **test progress:** only active when testing; shows total progress over all test datasets. 170 171 For infinite datasets, the progress bar never ends. 172 173 If you want to customize the default ``tqdm`` progress bars used by Lightning, you can override 174 specific methods of the callback class and pass your custom implementation to the 175 :class:`~pytorch_lightning.trainer.trainer.Trainer`: 176 177 Example:: 178 179 class LitProgressBar(ProgressBar): 180 181 def init_validation_tqdm(self): 182 bar = super().init_validation_tqdm() 183 bar.set_description('running validation ...') 184 return bar 185 186 bar = LitProgressBar() 187 trainer = Trainer(callbacks=[bar]) 188 189 Args: 190 refresh_rate: 191 Determines at which rate (in number of batches) the progress bars get updated. 192 Set it to ``0`` to disable the display. By default, the 193 :class:`~pytorch_lightning.trainer.trainer.Trainer` uses this implementation of the progress 194 bar and sets the refresh rate to the value provided to the 195 :paramref:`~pytorch_lightning.trainer.trainer.Trainer.progress_bar_refresh_rate` argument in the 196 :class:`~pytorch_lightning.trainer.trainer.Trainer`. 197 process_position: 198 Set this to a value greater than ``0`` to offset the progress bars by this many lines. 199 This is useful when you have progress bars defined elsewhere and want to show all of them 200 together. This corresponds to 201 :paramref:`~pytorch_lightning.trainer.trainer.Trainer.process_position` in the 202 :class:`~pytorch_lightning.trainer.trainer.Trainer`. 
203 204 """ 205 def __init__(self, refresh_rate: int = 1, process_position: int = 0): 206 super().__init__() 207 self._refresh_rate = refresh_rate 208 self._process_position = process_position 209 self._enabled = True 210 self.main_progress_bar = None 211 self.val_progress_bar = None 212 self.test_progress_bar = None 213 214 def __getstate__(self): 215 # can't pickle the tqdm objects 216 state = self.__dict__.copy() 217 state['main_progress_bar'] = None 218 state['val_progress_bar'] = None 219 state['test_progress_bar'] = None 220 return state 221 222 @property 223 def refresh_rate(self) -> int: 224 return self._refresh_rate 225 226 @property 227 def process_position(self) -> int: 228 return self._process_position 229 230 @property 231 def is_enabled(self) -> bool: 232 return self._enabled and self.refresh_rate > 0 233 234 @property 235 def is_disabled(self) -> bool: 236 return not self.is_enabled 237 238 def disable(self) -> None: 239 self._enabled = False 240 241 def enable(self) -> None: 242 self._enabled = True 243 244 def init_sanity_tqdm(self) -> tqdm: 245 """ Override this to customize the tqdm bar for the validation sanity run. """ 246 bar = tqdm( 247 desc='Validation sanity check', 248 position=(2 * self.process_position), 249 disable=self.is_disabled, 250 leave=False, 251 dynamic_ncols=True, 252 file=sys.stdout, 253 ) 254 return bar 255 256 def init_train_tqdm(self) -> tqdm: 257 """ Override this to customize the tqdm bar for training. """ 258 bar = tqdm( 259 desc='Training', 260 initial=self.train_batch_idx, 261 position=(2 * self.process_position), 262 disable=self.is_disabled, 263 leave=True, 264 dynamic_ncols=True, 265 file=sys.stdout, 266 smoothing=0, 267 ) 268 return bar 269 270 def init_validation_tqdm(self) -> tqdm: 271 """ Override this to customize the tqdm bar for validation. """ 272 bar = tqdm( 273 desc='Validating', 274 position=(2 * self.process_position + 1), 275 disable=self.is_disabled, 276 leave=False, 277 dynamic_ncols=True, 278 file=sys.stdout 279 ) 280 return bar 281 282 def init_test_tqdm(self) -> tqdm: 283 """ Override this to customize the tqdm bar for testing. 
""" 284 bar = tqdm( 285 desc='Testing', 286 position=(2 * self.process_position), 287 disable=self.is_disabled, 288 leave=True, 289 dynamic_ncols=True, 290 file=sys.stdout 291 ) 292 return bar 293 294 def on_sanity_check_start(self, trainer, pl_module): 295 super().on_sanity_check_start(trainer, pl_module) 296 self.val_progress_bar = self.init_sanity_tqdm() 297 self.val_progress_bar.total = trainer.num_sanity_val_steps * len(trainer.val_dataloaders) 298 self.main_progress_bar = tqdm(disable=True) # dummy progress bar 299 300 def on_sanity_check_end(self, trainer, pl_module): 301 super().on_sanity_check_end(trainer, pl_module) 302 self.main_progress_bar.close() 303 self.val_progress_bar.close() 304 305 def on_train_start(self, trainer, pl_module): 306 super().on_train_start(trainer, pl_module) 307 self.main_progress_bar = self.init_train_tqdm() 308 309 def on_epoch_start(self, trainer, pl_module): 310 super().on_epoch_start(trainer, pl_module) 311 total_train_batches = self.total_train_batches 312 total_val_batches = self.total_val_batches 313 if total_train_batches != float('inf') and not trainer.fast_dev_run: 314 # val can be checked multiple times per epoch 315 val_checks_per_epoch = total_train_batches // trainer.val_check_batch 316 total_val_batches = total_val_batches * val_checks_per_epoch 317 total_batches = total_train_batches + total_val_batches 318 if not self.main_progress_bar.disable: 319 self.main_progress_bar.reset(convert_inf(total_batches)) 320 self.main_progress_bar.set_description(f'Epoch {trainer.current_epoch + 1}') 321 322 def on_batch_end(self, trainer, pl_module): 323 super().on_batch_end(trainer, pl_module) 324 if self.is_enabled and self.train_batch_idx % self.refresh_rate == 0: 325 self.main_progress_bar.update(self.refresh_rate) 326 self.main_progress_bar.set_postfix(**trainer.progress_bar_dict) 327 328 def on_validation_start(self, trainer, pl_module): 329 super().on_validation_start(trainer, pl_module) 330 self.val_progress_bar = self.init_validation_tqdm() 331 self.val_progress_bar.total = convert_inf(self.total_val_batches) 332 333 def on_validation_batch_end(self, trainer, pl_module): 334 super().on_validation_batch_end(trainer, pl_module) 335 if self.is_enabled and self.val_batch_idx % self.refresh_rate == 0: 336 self.val_progress_bar.update(self.refresh_rate) 337 self.main_progress_bar.update(self.refresh_rate) 338 339 def on_validation_end(self, trainer, pl_module): 340 super().on_validation_end(trainer, pl_module) 341 self.main_progress_bar.set_postfix(**trainer.progress_bar_dict) 342 self.val_progress_bar.close() 343 344 def on_train_end(self, trainer, pl_module): 345 super().on_train_end(trainer, pl_module) 346 self.main_progress_bar.close() 347 348 def on_test_start(self, trainer, pl_module): 349 super().on_test_start(trainer, pl_module) 350 self.test_progress_bar = self.init_test_tqdm() 351 self.test_progress_bar.total = convert_inf(self.total_test_batches) 352 353 def on_test_batch_end(self, trainer, pl_module): 354 super().on_test_batch_end(trainer, pl_module) 355 if self.is_enabled and self.test_batch_idx % self.refresh_rate == 0: 356 self.test_progress_bar.update(self.refresh_rate) 357 358 def on_test_end(self, trainer, pl_module): 359 super().on_test_end(trainer, pl_module) 360 self.test_progress_bar.close() 361 362 363 def convert_inf(x): 364 """ The tqdm doesn't support inf values. We have to convert it to None. 
""" 365 if x == float('inf'): 366 return None 367 return x 368 [end of pytorch_lightning/callbacks/progress.py] [start of pytorch_lightning/core/__init__.py] 1 """ 2 A :class:`~LightningModule` organizes your PyTorch code into the following sections: 3 4 .. figure:: /_images/lightning_module/pt_to_pl.png 5 :alt: Convert from PyTorch to Lightning 6 7 8 Notice a few things. 9 10 1. It's the SAME code. 11 2. The PyTorch code IS NOT abstracted - just organized. 12 3. All the other code that's not in the :class:`~LightningModule` 13 has been automated for you by the trainer. 14 15 .. code-block:: python 16 17 net = Net() 18 trainer = Trainer() 19 trainer.fit(net) 20 21 4. There are no .cuda() or .to() calls... Lightning does these for you. 22 23 .. code-block:: python 24 25 # don't do in lightning 26 x = torch.Tensor(2, 3) 27 x = x.cuda() 28 x = x.to(device) 29 30 # do this instead 31 x = x # leave it alone! 32 33 # or to init a new tensor 34 new_x = torch.Tensor(2, 3) 35 new_x = new_x.type_as(x.type()) 36 37 5. There are no samplers for distributed, Lightning also does this for you. 38 39 .. code-block:: python 40 41 # Don't do in Lightning... 42 data = MNIST(...) 43 sampler = DistributedSampler(data) 44 DataLoader(data, sampler=sampler) 45 46 # do this instead 47 data = MNIST(...) 48 DataLoader(data) 49 50 6. A :class:`~LightningModule` is a :class:`torch.nn.Module` but with added functionality. Use it as such! 51 52 .. code-block:: python 53 54 net = Net.load_from_checkpoint(PATH) 55 net.freeze() 56 out = net(x) 57 58 Thus, to use Lightning, you just need to organize your code which takes about 30 minutes, 59 (and let's be real, you probably should do anyhow). 60 61 ------------ 62 63 Minimal Example 64 --------------- 65 66 Here are the only required methods. 67 68 .. code-block:: python 69 70 >>> import pytorch_lightning as pl 71 >>> class LitModel(pl.LightningModule): 72 ... 73 ... def __init__(self): 74 ... super().__init__() 75 ... self.l1 = torch.nn.Linear(28 * 28, 10) 76 ... 77 ... def forward(self, x): 78 ... return torch.relu(self.l1(x.view(x.size(0), -1))) 79 ... 80 ... def training_step(self, batch, batch_idx): 81 ... x, y = batch 82 ... y_hat = self(x) 83 ... return {'loss': F.cross_entropy(y_hat, y)} 84 ... 85 ... def train_dataloader(self): 86 ... return DataLoader(MNIST(os.getcwd(), train=True, download=True, 87 ... transform=transforms.ToTensor()), batch_size=32) 88 ... 89 ... def configure_optimizers(self): 90 ... return torch.optim.Adam(self.parameters(), lr=0.02) 91 92 Which you can train by doing: 93 94 .. code-block:: python 95 96 trainer = pl.Trainer() 97 model = LitModel() 98 99 trainer.fit(model) 100 101 ---------- 102 103 Training loop structure 104 ----------------------- 105 106 The general pattern is that each loop (training, validation, test loop) 107 has 3 methods: 108 109 - ``___step`` 110 - ``___step_end`` 111 - ``___epoch_end`` 112 113 To show how Lightning calls these, let's use the validation loop as an example: 114 115 .. 
code-block:: python 116 117 val_outs = [] 118 for val_batch in val_data: 119 # do something with each batch 120 out = validation_step(val_batch) 121 val_outs.append(out) 122 123 # do something with the outputs for all batches 124 # like calculate validation set accuracy or loss 125 validation_epoch_end(val_outs) 126 127 If we use dp or ddp2 mode, we can also define the ``XXX_step_end`` method to operate 128 on all parts of the batch:: 129 130 val_outs = [] 131 for val_batch in val_data: 132 batches = split_batch(val_batch) 133 dp_outs = [] 134 for sub_batch in batches: 135 dp_out = validation_step(sub_batch) 136 dp_outs.append(dp_out) 137 138 out = validation_step_end(dp_outs) 139 val_outs.append(out) 140 141 # do something with the outputs for all batches 142 # like calculate validation set accuracy or loss 143 validation_epoch_end(val_outs) 144 145 146 Add validation loop 147 ^^^^^^^^^^^^^^^^^^^ 148 149 Thus, if we wanted to add a validation loop you would add this to your 150 :class:`~LightningModule`: 151 152 >>> class LitModel(pl.LightningModule): 153 ... def validation_step(self, batch, batch_idx): 154 ... x, y = batch 155 ... y_hat = self(x) 156 ... return {'val_loss': F.cross_entropy(y_hat, y)} 157 ... 158 ... def validation_epoch_end(self, outputs): 159 ... val_loss_mean = torch.stack([x['val_loss'] for x in outputs]).mean() 160 ... return {'val_loss': val_loss_mean} 161 ... 162 ... def val_dataloader(self): 163 ... # can also return a list of val dataloaders 164 ... return DataLoader(...) 165 166 Add test loop 167 ^^^^^^^^^^^^^ 168 169 >>> class LitModel(pl.LightningModule): 170 ... def test_step(self, batch, batch_idx): 171 ... x, y = batch 172 ... y_hat = self(x) 173 ... return {'test_loss': F.cross_entropy(y_hat, y)} 174 ... 175 ... def test_epoch_end(self, outputs): 176 ... test_loss_mean = torch.stack([x['test_loss'] for x in outputs]).mean() 177 ... return {'test_loss': test_loss_mean} 178 ... 179 ... def test_dataloader(self): 180 ... # can also return a list of test dataloaders 181 ... return DataLoader(...) 182 183 However, the test loop won't ever be called automatically to make sure you 184 don't run your test data by accident. Instead you have to explicitly call: 185 186 .. code-block:: python 187 188 # call after training 189 trainer = Trainer() 190 trainer.fit(model) 191 trainer.test() 192 193 # or call with pretrained model 194 model = MyLightningModule.load_from_checkpoint(PATH) 195 trainer = Trainer() 196 trainer.test(model) 197 198 ---------- 199 200 Training_step_end method 201 ------------------------ 202 When using :class:`~pytorch_lightning.overrides.data_parallel.LightningDataParallel` or 203 :class:`~pytorch_lightning.overrides.data_parallel.LightningDistributedDataParallel`, the 204 :meth:`~LightningModule.training_step` 205 will be operating on a portion of the batch. This is normally ok but in special 206 cases like calculating NCE loss using negative samples, we might want to 207 perform a softmax across all samples in the batch. 208 209 For these types of situations, each loop has an additional ``__step_end`` method 210 which allows you to operate on the pieces of the batch: 211 212 .. 
code-block:: python 213 214 training_outs = [] 215 for train_batch in train_data: 216 # dp, ddp2 splits the batch 217 sub_batches = split_batches_for_dp(batch) 218 219 # run training_step on each piece of the batch 220 batch_parts_outputs = [training_step(sub_batch) for sub_batch in sub_batches] 221 222 # do softmax with all pieces 223 out = training_step_end(batch_parts_outputs) 224 training_outs.append(out) 225 226 # do something with the outputs for all batches 227 # like calculate validation set accuracy or loss 228 training_epoch_end(val_outs) 229 230 ---------- 231 232 Remove cuda calls 233 ----------------- 234 In a :class:`~LightningModule`, all calls to ``.cuda()`` 235 and ``.to(device)`` should be removed. Lightning will do these 236 automatically. This will allow your code to work on CPUs, TPUs and GPUs. 237 238 When you init a new tensor in your code, just use :meth:`~torch.Tensor.type_as`: 239 240 .. code-block:: python 241 242 def training_step(self, batch, batch_idx): 243 x, y = batch 244 245 # put the z on the appropriate gpu or tpu core 246 z = sample_noise() 247 z = z.type_as(x) 248 249 ---------- 250 251 Data preparation 252 ---------------- 253 Data preparation in PyTorch follows 5 steps: 254 255 1. Download 256 2. Clean and (maybe) save to disk 257 3. Load inside :class:`~torch.utils.data.Dataset` 258 4. Apply transforms (rotate, tokenize, etc...) 259 5. Wrap inside a :class:`~torch.utils.data.DataLoader` 260 261 When working in distributed settings, steps 1 and 2 have to be done 262 from a single GPU, otherwise you will overwrite these files from 263 every GPU. The :class:`~LightningModule` has the 264 :class:`~LightningModule.prepare_data` method to 265 allow for this: 266 267 >>> class LitModel(pl.LightningModule): 268 ... def prepare_data(self): 269 ... # download 270 ... mnist_train = MNIST(os.getcwd(), train=True, download=True, 271 ... transform=transforms.ToTensor()) 272 ... mnist_test = MNIST(os.getcwd(), train=False, download=True, 273 ... transform=transforms.ToTensor()) 274 ... 275 ... # train/val split 276 ... mnist_train, mnist_val = random_split(mnist_train, [55000, 5000]) 277 ... 278 ... # assign to use in dataloaders 279 ... self.train_dataset = mnist_train 280 ... self.val_dataset = mnist_val 281 ... self.test_dataset = mnist_test 282 ... 283 ... def train_dataloader(self): 284 ... return DataLoader(self.train_dataset, batch_size=64) 285 ... 286 ... def val_dataloader(self): 287 ... return DataLoader(self.mnist_val, batch_size=64) 288 ... 289 ... def test_dataloader(self): 290 ... return DataLoader(self.mnist_test, batch_size=64) 291 292 Note: 293 :meth:`~LightningModule.prepare_data` is called once. 294 295 Note: 296 Do anything with data that needs to happen ONLY once here, like download, tokenize, etc... 297 298 299 Lifecycle 300 --------- 301 The methods in the :class:`~LightningModule` are called in this order: 302 303 1. :meth:`~LightningModule.__init__` 304 2. :meth:`~LightningModule.prepare_data` 305 3. :meth:`~LightningModule.configure_optimizers` 306 4. :meth:`~LightningModule.train_dataloader` 307 308 If you define a validation loop then 309 310 5. :meth:`~LightningModule.val_dataloader` 311 312 And if you define a test loop: 313 314 6. :meth:`~LightningModule.test_dataloader` 315 316 Note: 317 :meth:`~LightningModule.test_dataloader` is only called with ``.test()`` 318 319 In every epoch, the loop methods are called in this frequency: 320 321 1. :meth:`~LightningModule.validation_step` called every batch 322 2. 
:meth:`~LightningModule.validation_epoch_end` called every epoch 323 324 Live demo 325 --------- 326 Check out this 327 `COLAB <https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg>`_ 328 for a live demo. 329 330 LightningModule Class 331 --------------------- 332 333 """ 334 335 from pytorch_lightning.core.decorators import data_loader 336 from pytorch_lightning.core.lightning import LightningModule 337 338 __all__ = ['LightningModule', 'data_loader'] 339 # __call__ = __all__ 340 [end of pytorch_lightning/core/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Lightning-AI/lightning
0cb58fbb4cd38142636a52f34fdb948ab45b7043
Also update progress_bar in training_epoch_end

## 🚀 Feature

[PR 1357](https://github.com/PyTorchLightning/pytorch-lightning/pull/1357) implements training_epoch_end to log metrics. The comments in the issue suggest that it should behave like validation_epoch_end, but the PR only replicates the callbacks and metric logging. This feature would add updates to the progress bar as well.

### Motivation

The motivation is the same use case as wanting a progress bar in the first place. While TensorBoard or any other logger is a great resource to have, it is painful to set up in environments like Slurm, where you can only submit jobs and possibly cannot forward ports or set up the logging.

### Pitch

The changes should only entail adding an update to tqdm [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/b2707c9b2ebeac03f19a3939df9432ac8859d894/pytorch_lightning/trainer/training_loop.py#L503)
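To make the request concrete, here is a minimal sketch (mine, not taken from the issue or the linked PR) of the user-side code this feature would enable: a `training_epoch_end` that returns a `progress_bar` dict next to `log`, with those values expected to appear as tqdm postfix entries just as they do for `validation_epoch_end`. The module body and the metric name are placeholders.

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    # __init__, forward, training_step, configure_optimizers omitted for brevity

    def training_epoch_end(self, outputs):
        # aggregate a metric over all training_step outputs for this epoch
        train_loss_mean = torch.stack([x['loss'] for x in outputs]).mean()
        return {
            'log': {'train_loss_epoch': train_loss_mean},           # sent to the logger
            'progress_bar': {'train_loss_epoch': train_loss_mean},  # requested: also shown on the tqdm bar
        }
```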
Hi! Thanks for your contribution, great first issue!

I will look into this after #1450 is done :)
2020-05-04T01:24:47Z
<patch> diff --git a/pytorch_lightning/core/lightning.py b/pytorch_lightning/core/lightning.py --- a/pytorch_lightning/core/lightning.py +++ b/pytorch_lightning/core/lightning.py @@ -257,6 +257,7 @@ def training_epoch_end( May contain the following optional keys: - log (metrics to be added to the logger; only tensors) + - progress_bar (dict for progress bar display) - any metric used in a callback (e.g. early stopping). Note: @@ -280,7 +281,8 @@ def training_epoch_end(self, outputs): # log training accuracy at the end of an epoch results = { - 'log': {'train_acc': train_acc_mean.item()} + 'log': {'train_acc': train_acc_mean.item()}, + 'progress_bar': {'train_acc': train_acc_mean}, } return results @@ -303,6 +305,7 @@ def training_epoch_end(self, outputs): # log training accuracy at the end of an epoch results = { 'log': {'train_acc': train_acc_mean.item(), 'step': self.current_epoch} + 'progress_bar': {'train_acc': train_acc_mean}, } return results """ diff --git a/pytorch_lightning/trainer/training_loop.py b/pytorch_lightning/trainer/training_loop.py --- a/pytorch_lightning/trainer/training_loop.py +++ b/pytorch_lightning/trainer/training_loop.py @@ -491,6 +491,7 @@ def run_training_epoch(self): callback_epoch_metrics = _processed_outputs[3] self.log_metrics(log_epoch_metrics, {}) self.callback_metrics.update(callback_epoch_metrics) + self.add_progress_bar_metrics(_processed_outputs[1]) # when no val loop is present or fast-dev-run still need to call checkpoints if not self.is_overriden('validation_step') and not (self.fast_dev_run or should_check_val): </patch>
[]
[]
numpy__numpy-14345
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Weird behavior of structured_to_unstructured on non-trivial dtypes With the following helpers: ```python import numpy.lib.recfunctions as rfn def subarray(dt, shape): return np.dtype((dt, shape)) def structured(*dts): return np.dtype([('x{}'.format(i), dt) for i, dt in enumerate(dts)]) def inspect(dt): arr = np.zeros((), dt) ret = rfn.structured_to_unstructured(arr) print(ret.shape, ret.dtype) ``` We can try a bunch of uses of `structured_to_unstructured` (added in #11526): ```python >>> inspect(structured(int, int)) (2,) int32 # obviously ok >>> inspect(structured(int, structured(int, int))) (3,) int32 # nested types are flattened, ok >>> inspect(structured(int, subarray(int, 2))) (3,) int32 # ok: 1 + 2 >>> inspect(structured(int, subarray(int, (2, 2)))) (5,) int32 # ok: 1 + 2*2 ``` Here's where things start to go bad: ```python >>> inspect(structured(subarray(structured(int, int), 3))) (3,) [('x0', '<i4'), ('x1', '<i4')] # bug? ``` ```python >>> inspect(structured(subarray(subarray(int, 2), 2))) (2, 2) int32 # bug ``` ```python >>> inspect(structured(int)) () int32 # bug ``` (#13334) ```python >>> inspect(structured(int, subarray(subarray(int, 2), 2))) TypeError: invalid type promotion ``` ```python >>> inspect(structured()) dts, counts, offsets = zip(*fields) ValueError: not enough values to unpack (expected 3, got 0) ``` A lot of this behavior looks undesirable to me. @ahaldane, which cases were actually intended to be supported? Rather than locking ourselve into some of these weird constucts, we might want to raise an error for anything non-trivial. </issue> <code> [start of README.md] 1 # <img alt="NumPy" src="https://cdn.rawgit.com/numpy/numpy/master/branding/icons/numpylogo.svg" height="60"> 2 3 [![Travis](https://img.shields.io/travis/numpy/numpy/master.svg?label=Travis%20CI)]( 4 https://travis-ci.org/numpy/numpy) 5 [![AppVeyor](https://img.shields.io/appveyor/ci/charris/numpy/master.svg?label=AppVeyor)]( 6 https://ci.appveyor.com/project/charris/numpy) 7 [![Azure](https://dev.azure.com/numpy/numpy/_apis/build/status/azure-pipeline%20numpy.numpy)]( 8 https://dev.azure.com/numpy/numpy/_build/latest?definitionId=5) 9 [![codecov](https://codecov.io/gh/numpy/numpy/branch/master/graph/badge.svg)]( 10 https://codecov.io/gh/numpy/numpy) 11 12 NumPy is the fundamental package needed for scientific computing with Python. 
13 14 - **Website:** https://www.numpy.org 15 - **Documentation:** http://docs.scipy.org/ 16 - **Mailing list:** https://mail.python.org/mailman/listinfo/numpy-discussion 17 - **Source code:** https://github.com/numpy/numpy 18 - **Contributing:** https://www.numpy.org/devdocs/dev/index.html 19 - **Bug reports:** https://github.com/numpy/numpy/issues 20 - **Report a security vulnerability:** https://tidelift.com/docs/security 21 22 It provides: 23 24 - a powerful N-dimensional array object 25 - sophisticated (broadcasting) functions 26 - tools for integrating C/C++ and Fortran code 27 - useful linear algebra, Fourier transform, and random number capabilities 28 29 Testing: 30 31 - NumPy versions &ge; 1.15 require `pytest` 32 - NumPy versions &lt; 1.15 require `nose` 33 34 Tests can then be run after installation with: 35 36 python -c 'import numpy; numpy.test()' 37 38 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org) 39 [end of README.md] [start of numpy/core/_dtype.py] 1 """ 2 A place for code to be called from the implementation of np.dtype 3 4 String handling is much easier to do correctly in python. 5 """ 6 from __future__ import division, absolute_import, print_function 7 8 import sys 9 10 import numpy as np 11 12 13 _kind_to_stem = { 14 'u': 'uint', 15 'i': 'int', 16 'c': 'complex', 17 'f': 'float', 18 'b': 'bool', 19 'V': 'void', 20 'O': 'object', 21 'M': 'datetime', 22 'm': 'timedelta' 23 } 24 if sys.version_info[0] >= 3: 25 _kind_to_stem.update({ 26 'S': 'bytes', 27 'U': 'str' 28 }) 29 else: 30 _kind_to_stem.update({ 31 'S': 'string', 32 'U': 'unicode' 33 }) 34 35 36 def _kind_name(dtype): 37 try: 38 return _kind_to_stem[dtype.kind] 39 except KeyError: 40 raise RuntimeError( 41 "internal dtype error, unknown kind {!r}" 42 .format(dtype.kind) 43 ) 44 45 46 def __str__(dtype): 47 if dtype.fields is not None: 48 return _struct_str(dtype, include_align=True) 49 elif dtype.subdtype: 50 return _subarray_str(dtype) 51 elif issubclass(dtype.type, np.flexible) or not dtype.isnative: 52 return dtype.str 53 else: 54 return dtype.name 55 56 57 def __repr__(dtype): 58 arg_str = _construction_repr(dtype, include_align=False) 59 if dtype.isalignedstruct: 60 arg_str = arg_str + ", align=True" 61 return "dtype({})".format(arg_str) 62 63 64 def _unpack_field(dtype, offset, title=None): 65 """ 66 Helper function to normalize the items in dtype.fields. 67 68 Call as: 69 70 dtype, offset, title = _unpack_field(*dtype.fields[name]) 71 """ 72 return dtype, offset, title 73 74 75 def _isunsized(dtype): 76 # PyDataType_ISUNSIZED 77 return dtype.itemsize == 0 78 79 80 def _construction_repr(dtype, include_align=False, short=False): 81 """ 82 Creates a string repr of the dtype, excluding the 'dtype()' part 83 surrounding the object. This object may be a string, a list, or 84 a dict depending on the nature of the dtype. This 85 is the object passed as the first parameter to the dtype 86 constructor, and if no additional constructor parameters are 87 given, will reproduce the exact memory layout. 88 89 Parameters 90 ---------- 91 short : bool 92 If true, this creates a shorter repr using 'kind' and 'itemsize', instead 93 of the longer type name. 94 95 include_align : bool 96 If true, this includes the 'align=True' parameter 97 inside the struct dtype construction dict when needed. Use this flag 98 if you want a proper repr string without the 'dtype()' part around it. 
99 100 If false, this does not preserve the 101 'align=True' parameter or sticky NPY_ALIGNED_STRUCT flag for 102 struct arrays like the regular repr does, because the 'align' 103 flag is not part of first dtype constructor parameter. This 104 mode is intended for a full 'repr', where the 'align=True' is 105 provided as the second parameter. 106 """ 107 if dtype.fields is not None: 108 return _struct_str(dtype, include_align=include_align) 109 elif dtype.subdtype: 110 return _subarray_str(dtype) 111 else: 112 return _scalar_str(dtype, short=short) 113 114 115 def _scalar_str(dtype, short): 116 byteorder = _byte_order_str(dtype) 117 118 if dtype.type == np.bool_: 119 if short: 120 return "'?'" 121 else: 122 return "'bool'" 123 124 elif dtype.type == np.object_: 125 # The object reference may be different sizes on different 126 # platforms, so it should never include the itemsize here. 127 return "'O'" 128 129 elif dtype.type == np.string_: 130 if _isunsized(dtype): 131 return "'S'" 132 else: 133 return "'S%d'" % dtype.itemsize 134 135 elif dtype.type == np.unicode_: 136 if _isunsized(dtype): 137 return "'%sU'" % byteorder 138 else: 139 return "'%sU%d'" % (byteorder, dtype.itemsize / 4) 140 141 # unlike the other types, subclasses of void are preserved - but 142 # historically the repr does not actually reveal the subclass 143 elif issubclass(dtype.type, np.void): 144 if _isunsized(dtype): 145 return "'V'" 146 else: 147 return "'V%d'" % dtype.itemsize 148 149 elif dtype.type == np.datetime64: 150 return "'%sM8%s'" % (byteorder, _datetime_metadata_str(dtype)) 151 152 elif dtype.type == np.timedelta64: 153 return "'%sm8%s'" % (byteorder, _datetime_metadata_str(dtype)) 154 155 elif np.issubdtype(dtype, np.number): 156 # Short repr with endianness, like '<f8' 157 if short or dtype.byteorder not in ('=', '|'): 158 return "'%s%c%d'" % (byteorder, dtype.kind, dtype.itemsize) 159 160 # Longer repr, like 'float64' 161 else: 162 return "'%s%d'" % (_kind_name(dtype), 8*dtype.itemsize) 163 164 elif dtype.isbuiltin == 2: 165 return dtype.type.__name__ 166 167 else: 168 raise RuntimeError( 169 "Internal error: NumPy dtype unrecognized type number") 170 171 172 def _byte_order_str(dtype): 173 """ Normalize byteorder to '<' or '>' """ 174 # hack to obtain the native and swapped byte order characters 175 swapped = np.dtype(int).newbyteorder('s') 176 native = swapped.newbyteorder('s') 177 178 byteorder = dtype.byteorder 179 if byteorder == '=': 180 return native.byteorder 181 if byteorder == 's': 182 # TODO: this path can never be reached 183 return swapped.byteorder 184 elif byteorder == '|': 185 return '' 186 else: 187 return byteorder 188 189 190 def _datetime_metadata_str(dtype): 191 # TODO: this duplicates the C append_metastr_to_string 192 unit, count = np.datetime_data(dtype) 193 if unit == 'generic': 194 return '' 195 elif count == 1: 196 return '[{}]'.format(unit) 197 else: 198 return '[{}{}]'.format(count, unit) 199 200 201 def _struct_dict_str(dtype, includealignedflag): 202 # unpack the fields dictionary into ls 203 names = dtype.names 204 fld_dtypes = [] 205 offsets = [] 206 titles = [] 207 for name in names: 208 fld_dtype, offset, title = _unpack_field(*dtype.fields[name]) 209 fld_dtypes.append(fld_dtype) 210 offsets.append(offset) 211 titles.append(title) 212 213 # Build up a string to make the dictionary 214 215 # First, the names 216 ret = "{'names':[" 217 ret += ",".join(repr(name) for name in names) 218 219 # Second, the formats 220 ret += "], 'formats':[" 221 ret += ",".join( 222 
_construction_repr(fld_dtype, short=True) for fld_dtype in fld_dtypes) 223 224 # Third, the offsets 225 ret += "], 'offsets':[" 226 ret += ",".join("%d" % offset for offset in offsets) 227 228 # Fourth, the titles 229 if any(title is not None for title in titles): 230 ret += "], 'titles':[" 231 ret += ",".join(repr(title) for title in titles) 232 233 # Fifth, the itemsize 234 ret += "], 'itemsize':%d" % dtype.itemsize 235 236 if (includealignedflag and dtype.isalignedstruct): 237 # Finally, the aligned flag 238 ret += ", 'aligned':True}" 239 else: 240 ret += "}" 241 242 return ret 243 244 245 def _is_packed(dtype): 246 """ 247 Checks whether the structured data type in 'dtype' 248 has a simple layout, where all the fields are in order, 249 and follow each other with no alignment padding. 250 251 When this returns true, the dtype can be reconstructed 252 from a list of the field names and dtypes with no additional 253 dtype parameters. 254 255 Duplicates the C `is_dtype_struct_simple_unaligned_layout` functio. 256 """ 257 total_offset = 0 258 for name in dtype.names: 259 fld_dtype, fld_offset, title = _unpack_field(*dtype.fields[name]) 260 if fld_offset != total_offset: 261 return False 262 total_offset += fld_dtype.itemsize 263 if total_offset != dtype.itemsize: 264 return False 265 return True 266 267 268 def _struct_list_str(dtype): 269 items = [] 270 for name in dtype.names: 271 fld_dtype, fld_offset, title = _unpack_field(*dtype.fields[name]) 272 273 item = "(" 274 if title is not None: 275 item += "({!r}, {!r}), ".format(title, name) 276 else: 277 item += "{!r}, ".format(name) 278 # Special case subarray handling here 279 if fld_dtype.subdtype is not None: 280 base, shape = fld_dtype.subdtype 281 item += "{}, {}".format( 282 _construction_repr(base, short=True), 283 shape 284 ) 285 else: 286 item += _construction_repr(fld_dtype, short=True) 287 288 item += ")" 289 items.append(item) 290 291 return "[" + ", ".join(items) + "]" 292 293 294 def _struct_str(dtype, include_align): 295 # The list str representation can't include the 'align=' flag, 296 # so if it is requested and the struct has the aligned flag set, 297 # we must use the dict str instead. 
298 if not (include_align and dtype.isalignedstruct) and _is_packed(dtype): 299 sub = _struct_list_str(dtype) 300 301 else: 302 sub = _struct_dict_str(dtype, include_align) 303 304 # If the data type isn't the default, void, show it 305 if dtype.type != np.void: 306 return "({t.__module__}.{t.__name__}, {f})".format(t=dtype.type, f=sub) 307 else: 308 return sub 309 310 311 def _subarray_str(dtype): 312 base, shape = dtype.subdtype 313 return "({}, {})".format( 314 _construction_repr(base, short=True), 315 shape 316 ) 317 318 319 def _name_get(dtype): 320 # provides dtype.name.__get__ 321 322 if dtype.isbuiltin == 2: 323 # user dtypes don't promise to do anything special 324 return dtype.type.__name__ 325 326 # Builtin classes are documented as returning a "bit name" 327 name = dtype.type.__name__ 328 329 # handle bool_, str_, etc 330 if name[-1] == '_': 331 name = name[:-1] 332 333 # append bit counts to str, unicode, and void 334 if np.issubdtype(dtype, np.flexible) and not _isunsized(dtype): 335 name += "{}".format(dtype.itemsize * 8) 336 337 # append metadata to datetimes 338 elif dtype.type in (np.datetime64, np.timedelta64): 339 name += _datetime_metadata_str(dtype) 340 341 return name 342 [end of numpy/core/_dtype.py] [start of numpy/doc/structured_arrays.py] 1 """ 2 ================= 3 Structured Arrays 4 ================= 5 6 Introduction 7 ============ 8 9 Structured arrays are ndarrays whose datatype is a composition of simpler 10 datatypes organized as a sequence of named :term:`fields <field>`. For example, 11 :: 12 13 >>> x = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)], 14 ... dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')]) 15 >>> x 16 array([('Rex', 9, 81.), ('Fido', 3, 27.)], 17 dtype=[('name', 'U10'), ('age', '<i4'), ('weight', '<f4')]) 18 19 Here ``x`` is a one-dimensional array of length two whose datatype is a 20 structure with three fields: 1. A string of length 10 or less named 'name', 2. 21 a 32-bit integer named 'age', and 3. a 32-bit float named 'weight'. 22 23 If you index ``x`` at position 1 you get a structure:: 24 25 >>> x[1] 26 ('Fido', 3, 27.0) 27 28 You can access and modify individual fields of a structured array by indexing 29 with the field name:: 30 31 >>> x['age'] 32 array([9, 3], dtype=int32) 33 >>> x['age'] = 5 34 >>> x 35 array([('Rex', 5, 81.), ('Fido', 5, 27.)], 36 dtype=[('name', 'U10'), ('age', '<i4'), ('weight', '<f4')]) 37 38 Structured datatypes are designed to be able to mimic 'structs' in the C 39 language, and share a similar memory layout. They are meant for interfacing with 40 C code and for low-level manipulation of structured buffers, for example for 41 interpreting binary blobs. For these purposes they support specialized features 42 such as subarrays, nested datatypes, and unions, and allow control over the 43 memory layout of the structure. 44 45 Users looking to manipulate tabular data, such as stored in csv files, may find 46 other pydata projects more suitable, such as xarray, pandas, or DataArray. 47 These provide a high-level interface for tabular data analysis and are better 48 optimized for that use. For instance, the C-struct-like memory layout of 49 structured arrays in numpy can lead to poor cache behavior in comparison. 50 51 .. _defining-structured-types: 52 53 Structured Datatypes 54 ==================== 55 56 A structured datatype can be thought of as a sequence of bytes of a certain 57 length (the structure's :term:`itemsize`) which is interpreted as a collection 58 of fields. 
Each field has a name, a datatype, and a byte offset within the 59 structure. The datatype of a field may be any numpy datatype including other 60 structured datatypes, and it may also be a :term:`subarray data type` which 61 behaves like an ndarray of a specified shape. The offsets of the fields are 62 arbitrary, and fields may even overlap. These offsets are usually determined 63 automatically by numpy, but can also be specified. 64 65 Structured Datatype Creation 66 ---------------------------- 67 68 Structured datatypes may be created using the function :func:`numpy.dtype`. 69 There are 4 alternative forms of specification which vary in flexibility and 70 conciseness. These are further documented in the 71 :ref:`Data Type Objects <arrays.dtypes.constructing>` reference page, and in 72 summary they are: 73 74 1. A list of tuples, one tuple per field 75 76 Each tuple has the form ``(fieldname, datatype, shape)`` where shape is 77 optional. ``fieldname`` is a string (or tuple if titles are used, see 78 :ref:`Field Titles <titles>` below), ``datatype`` may be any object 79 convertible to a datatype, and ``shape`` is a tuple of integers specifying 80 subarray shape. 81 82 >>> np.dtype([('x', 'f4'), ('y', np.float32), ('z', 'f4', (2, 2))]) 83 dtype([('x', '<f4'), ('y', '<f4'), ('z', '<f4', (2, 2))]) 84 85 If ``fieldname`` is the empty string ``''``, the field will be given a 86 default name of the form ``f#``, where ``#`` is the integer index of the 87 field, counting from 0 from the left:: 88 89 >>> np.dtype([('x', 'f4'), ('', 'i4'), ('z', 'i8')]) 90 dtype([('x', '<f4'), ('f1', '<i4'), ('z', '<i8')]) 91 92 The byte offsets of the fields within the structure and the total 93 structure itemsize are determined automatically. 94 95 2. A string of comma-separated dtype specifications 96 97 In this shorthand notation any of the :ref:`string dtype specifications 98 <arrays.dtypes.constructing>` may be used in a string and separated by 99 commas. The itemsize and byte offsets of the fields are determined 100 automatically, and the field names are given the default names ``f0``, 101 ``f1``, etc. :: 102 103 >>> np.dtype('i8, f4, S3') 104 dtype([('f0', '<i8'), ('f1', '<f4'), ('f2', 'S3')]) 105 >>> np.dtype('3int8, float32, (2, 3)float64') 106 dtype([('f0', 'i1', (3,)), ('f1', '<f4'), ('f2', '<f8', (2, 3))]) 107 108 3. A dictionary of field parameter arrays 109 110 This is the most flexible form of specification since it allows control 111 over the byte-offsets of the fields and the itemsize of the structure. 112 113 The dictionary has two required keys, 'names' and 'formats', and four 114 optional keys, 'offsets', 'itemsize', 'aligned' and 'titles'. The values 115 for 'names' and 'formats' should respectively be a list of field names and 116 a list of dtype specifications, of the same length. The optional 'offsets' 117 value should be a list of integer byte-offsets, one for each field within 118 the structure. If 'offsets' is not given the offsets are determined 119 automatically. The optional 'itemsize' value should be an integer 120 describing the total size in bytes of the dtype, which must be large 121 enough to contain all the fields. 122 :: 123 124 >>> np.dtype({'names': ['col1', 'col2'], 'formats': ['i4', 'f4']}) 125 dtype([('col1', '<i4'), ('col2', '<f4')]) 126 >>> np.dtype({'names': ['col1', 'col2'], 127 ... 'formats': ['i4', 'f4'], 128 ... 'offsets': [0, 4], 129 ... 
'itemsize': 12}) 130 dtype({'names':['col1','col2'], 'formats':['<i4','<f4'], 'offsets':[0,4], 'itemsize':12}) 131 132 Offsets may be chosen such that the fields overlap, though this will mean 133 that assigning to one field may clobber any overlapping field's data. As 134 an exception, fields of :class:`numpy.object` type cannot overlap with 135 other fields, because of the risk of clobbering the internal object 136 pointer and then dereferencing it. 137 138 The optional 'aligned' value can be set to ``True`` to make the automatic 139 offset computation use aligned offsets (see :ref:`offsets-and-alignment`), 140 as if the 'align' keyword argument of :func:`numpy.dtype` had been set to 141 True. 142 143 The optional 'titles' value should be a list of titles of the same length 144 as 'names', see :ref:`Field Titles <titles>` below. 145 146 4. A dictionary of field names 147 148 The use of this form of specification is discouraged, but documented here 149 because older numpy code may use it. The keys of the dictionary are the 150 field names and the values are tuples specifying type and offset:: 151 152 >>> np.dtype({'col1': ('i1', 0), 'col2': ('f4', 1)}) 153 dtype([('col1', 'i1'), ('col2', '<f4')]) 154 155 This form is discouraged because Python dictionaries do not preserve order 156 in Python versions before Python 3.6, and the order of the fields in a 157 structured dtype has meaning. :ref:`Field Titles <titles>` may be 158 specified by using a 3-tuple, see below. 159 160 Manipulating and Displaying Structured Datatypes 161 ------------------------------------------------ 162 163 The list of field names of a structured datatype can be found in the ``names`` 164 attribute of the dtype object:: 165 166 >>> d = np.dtype([('x', 'i8'), ('y', 'f4')]) 167 >>> d.names 168 ('x', 'y') 169 170 The field names may be modified by assigning to the ``names`` attribute using a 171 sequence of strings of the same length. 172 173 The dtype object also has a dictionary-like attribute, ``fields``, whose keys 174 are the field names (and :ref:`Field Titles <titles>`, see below) and whose 175 values are tuples containing the dtype and byte offset of each field. :: 176 177 >>> d.fields 178 mappingproxy({'x': (dtype('int64'), 0), 'y': (dtype('float32'), 8)}) 179 180 Both the ``names`` and ``fields`` attributes will equal ``None`` for 181 unstructured arrays. The recommended way to test if a dtype is structured is 182 with `if dt.names is not None` rather than `if dt.names`, to account for dtypes 183 with 0 fields. 184 185 The string representation of a structured datatype is shown in the "list of 186 tuples" form if possible, otherwise numpy falls back to using the more general 187 dictionary form. 188 189 .. _offsets-and-alignment: 190 191 Automatic Byte Offsets and Alignment 192 ------------------------------------ 193 194 Numpy uses one of two methods to automatically determine the field byte offsets 195 and the overall itemsize of a structured datatype, depending on whether 196 ``align=True`` was specified as a keyword argument to :func:`numpy.dtype`. 197 198 By default (``align=False``), numpy will pack the fields together such that 199 each field starts at the byte offset the previous field ended, and the fields 200 are contiguous in memory. :: 201 202 >>> def print_offsets(d): 203 ... print("offsets:", [d.fields[name][1] for name in d.names]) 204 ... 
print("itemsize:", d.itemsize) 205 >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2')) 206 offsets: [0, 1, 2, 6, 7, 15] 207 itemsize: 17 208 209 If ``align=True`` is set, numpy will pad the structure in the same way many C 210 compilers would pad a C-struct. Aligned structures can give a performance 211 improvement in some cases, at the cost of increased datatype size. Padding 212 bytes are inserted between fields such that each field's byte offset will be a 213 multiple of that field's alignment, which is usually equal to the field's size 214 in bytes for simple datatypes, see :c:member:`PyArray_Descr.alignment`. The 215 structure will also have trailing padding added so that its itemsize is a 216 multiple of the largest field's alignment. :: 217 218 >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2', align=True)) 219 offsets: [0, 1, 4, 8, 16, 24] 220 itemsize: 32 221 222 Note that although almost all modern C compilers pad in this way by default, 223 padding in C structs is C-implementation-dependent so this memory layout is not 224 guaranteed to exactly match that of a corresponding struct in a C program. Some 225 work may be needed, either on the numpy side or the C side, to obtain exact 226 correspondence. 227 228 If offsets were specified using the optional ``offsets`` key in the 229 dictionary-based dtype specification, setting ``align=True`` will check that 230 each field's offset is a multiple of its size and that the itemsize is a 231 multiple of the largest field size, and raise an exception if not. 232 233 If the offsets of the fields and itemsize of a structured array satisfy the 234 alignment conditions, the array will have the ``ALIGNED`` :attr:`flag 235 <numpy.ndarray.flags>` set. 236 237 A convenience function :func:`numpy.lib.recfunctions.repack_fields` converts an 238 aligned dtype or array to a packed one and vice versa. It takes either a dtype 239 or structured ndarray as an argument, and returns a copy with fields re-packed, 240 with or without padding bytes. 241 242 .. _titles: 243 244 Field Titles 245 ------------ 246 247 In addition to field names, fields may also have an associated :term:`title`, 248 an alternate name, which is sometimes used as an additional description or 249 alias for the field. The title may be used to index an array, just like a 250 field name. 251 252 To add titles when using the list-of-tuples form of dtype specification, the 253 field name may be specified as a tuple of two strings instead of a single 254 string, which will be the field's title and field name respectively. For 255 example:: 256 257 >>> np.dtype([(('my title', 'name'), 'f4')]) 258 dtype([(('my title', 'name'), '<f4')]) 259 260 When using the first form of dictionary-based specification, the titles may be 261 supplied as an extra ``'titles'`` key as described above. When using the second 262 (discouraged) dictionary-based specification, the title can be supplied by 263 providing a 3-element tuple ``(datatype, offset, title)`` instead of the usual 264 2-element tuple:: 265 266 >>> np.dtype({'name': ('i4', 0, 'my title')}) 267 dtype([(('my title', 'name'), '<i4')]) 268 269 The ``dtype.fields`` dictionary will contain titles as keys, if any 270 titles are used. This means effectively that a field with a title will be 271 represented twice in the fields dictionary. The tuple values for these fields 272 will also have a third element, the field title. 
Because of this, and because 273 the ``names`` attribute preserves the field order while the ``fields`` 274 attribute may not, it is recommended to iterate through the fields of a dtype 275 using the ``names`` attribute of the dtype, which will not list titles, as 276 in:: 277 278 >>> for name in d.names: 279 ... print(d.fields[name][:2]) 280 (dtype('int64'), 0) 281 (dtype('float32'), 8) 282 283 Union types 284 ----------- 285 286 Structured datatypes are implemented in numpy to have base type 287 :class:`numpy.void` by default, but it is possible to interpret other numpy 288 types as structured types using the ``(base_dtype, dtype)`` form of dtype 289 specification described in 290 :ref:`Data Type Objects <arrays.dtypes.constructing>`. Here, ``base_dtype`` is 291 the desired underlying dtype, and fields and flags will be copied from 292 ``dtype``. This dtype is similar to a 'union' in C. 293 294 Indexing and Assignment to Structured arrays 295 ============================================ 296 297 Assigning data to a Structured Array 298 ------------------------------------ 299 300 There are a number of ways to assign values to a structured array: Using python 301 tuples, using scalar values, or using other structured arrays. 302 303 Assignment from Python Native Types (Tuples) 304 ```````````````````````````````````````````` 305 306 The simplest way to assign values to a structured array is using python tuples. 307 Each assigned value should be a tuple of length equal to the number of fields 308 in the array, and not a list or array as these will trigger numpy's 309 broadcasting rules. The tuple's elements are assigned to the successive fields 310 of the array, from left to right:: 311 312 >>> x = np.array([(1, 2, 3), (4, 5, 6)], dtype='i8, f4, f8') 313 >>> x[1] = (7, 8, 9) 314 >>> x 315 array([(1, 2., 3.), (7, 8., 9.)], 316 dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '<f8')]) 317 318 Assignment from Scalars 319 ``````````````````````` 320 321 A scalar assigned to a structured element will be assigned to all fields. This 322 happens when a scalar is assigned to a structured array, or when an 323 unstructured array is assigned to a structured array:: 324 325 >>> x = np.zeros(2, dtype='i8, f4, ?, S1') 326 >>> x[:] = 3 327 >>> x 328 array([(3, 3., True, b'3'), (3, 3., True, b'3')], 329 dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')]) 330 >>> x[:] = np.arange(2) 331 >>> x 332 array([(0, 0., False, b'0'), (1, 1., True, b'1')], 333 dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')]) 334 335 Structured arrays can also be assigned to unstructured arrays, but only if the 336 structured datatype has just a single field:: 337 338 >>> twofield = np.zeros(2, dtype=[('A', 'i4'), ('B', 'i4')]) 339 >>> onefield = np.zeros(2, dtype=[('A', 'i4')]) 340 >>> nostruct = np.zeros(2, dtype='i4') 341 >>> nostruct[:] = twofield 342 Traceback (most recent call last): 343 ... 344 TypeError: Cannot cast scalar from dtype([('A', '<i4'), ('B', '<i4')]) to dtype('int32') according to the rule 'unsafe' 345 346 Assignment from other Structured Arrays 347 ``````````````````````````````````````` 348 349 Assignment between two structured arrays occurs as if the source elements had 350 been converted to tuples and then assigned to the destination elements. That 351 is, the first field of the source array is assigned to the first field of the 352 destination array, and the second field likewise, and so on, regardless of 353 field names. 
Structured arrays with a different number of fields cannot be 354 assigned to each other. Bytes of the destination structure which are not 355 included in any of the fields are unaffected. :: 356 357 >>> a = np.zeros(3, dtype=[('a', 'i8'), ('b', 'f4'), ('c', 'S3')]) 358 >>> b = np.ones(3, dtype=[('x', 'f4'), ('y', 'S3'), ('z', 'O')]) 359 >>> b[:] = a 360 >>> b 361 array([(0., b'0.0', b''), (0., b'0.0', b''), (0., b'0.0', b'')], 362 dtype=[('x', '<f4'), ('y', 'S3'), ('z', 'O')]) 363 364 365 Assignment involving subarrays 366 `````````````````````````````` 367 368 When assigning to fields which are subarrays, the assigned value will first be 369 broadcast to the shape of the subarray. 370 371 Indexing Structured Arrays 372 -------------------------- 373 374 Accessing Individual Fields 375 ``````````````````````````` 376 377 Individual fields of a structured array may be accessed and modified by indexing 378 the array with the field name. :: 379 380 >>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')]) 381 >>> x['foo'] 382 array([1, 3]) 383 >>> x['foo'] = 10 384 >>> x 385 array([(10, 2.), (10, 4.)], 386 dtype=[('foo', '<i8'), ('bar', '<f4')]) 387 388 The resulting array is a view into the original array. It shares the same 389 memory locations and writing to the view will modify the original array. :: 390 391 >>> y = x['bar'] 392 >>> y[:] = 11 393 >>> x 394 array([(10, 11.), (10, 11.)], 395 dtype=[('foo', '<i8'), ('bar', '<f4')]) 396 397 This view has the same dtype and itemsize as the indexed field, so it is 398 typically a non-structured array, except in the case of nested structures. 399 400 >>> y.dtype, y.shape, y.strides 401 (dtype('float32'), (2,), (12,)) 402 403 If the accessed field is a subarray, the dimensions of the subarray 404 are appended to the shape of the result:: 405 406 >>> x = np.zeros((2, 2), dtype=[('a', np.int32), ('b', np.float64, (3, 3))]) 407 >>> x['a'].shape 408 (2, 2) 409 >>> x['b'].shape 410 (2, 2, 3, 3) 411 412 Accessing Multiple Fields 413 ``````````````````````````` 414 415 One can index and assign to a structured array with a multi-field index, where 416 the index is a list of field names. 417 418 .. warning:: 419 The behavior of multi-field indexes changed from Numpy 1.15 to Numpy 1.16. 420 421 The result of indexing with a multi-field index is a view into the original 422 array, as follows:: 423 424 >>> a = np.zeros(3, dtype=[('a', 'i4'), ('b', 'i4'), ('c', 'f4')]) 425 >>> a[['a', 'c']] 426 array([(0, 0.), (0, 0.), (0, 0.)], 427 dtype={'names':['a','c'], 'formats':['<i4','<f4'], 'offsets':[0,8], 'itemsize':12}) 428 429 Assignment to the view modifies the original array. The view's fields will be 430 in the order they were indexed. Note that unlike for single-field indexing, the 431 dtype of the view has the same itemsize as the original array, and has fields 432 at the same offsets as in the original array, and unindexed fields are merely 433 missing. 434 435 .. warning:: 436 In Numpy 1.15, indexing an array with a multi-field index returned a copy of 437 the result above, but with fields packed together in memory as if 438 passed through :func:`numpy.lib.recfunctions.repack_fields`. 439 440 The new behavior as of Numpy 1.16 leads to extra "padding" bytes at the 441 location of unindexed fields compared to 1.15. You will need to update any 442 code which depends on the data having a "packed" layout. 
For instance code 443 such as:: 444 445 >>> a[['a', 'c']].view('i8') # Fails in Numpy 1.16 446 Traceback (most recent call last): 447 File "<stdin>", line 1, in <module> 448 ValueError: When changing to a smaller dtype, its size must be a divisor of the size of original dtype 449 450 will need to be changed. This code has raised a ``FutureWarning`` since 451 Numpy 1.12, and similar code has raised ``FutureWarning`` since 1.7. 452 453 In 1.16 a number of functions have been introduced in the 454 :mod:`numpy.lib.recfunctions` module to help users account for this 455 change. These are 456 :func:`numpy.lib.recfunctions.repack_fields`. 457 :func:`numpy.lib.recfunctions.structured_to_unstructured`, 458 :func:`numpy.lib.recfunctions.unstructured_to_structured`, 459 :func:`numpy.lib.recfunctions.apply_along_fields`, 460 :func:`numpy.lib.recfunctions.assign_fields_by_name`, and 461 :func:`numpy.lib.recfunctions.require_fields`. 462 463 The function :func:`numpy.lib.recfunctions.repack_fields` can always be 464 used to reproduce the old behavior, as it will return a packed copy of the 465 structured array. The code above, for example, can be replaced with: 466 467 >>> from numpy.lib.recfunctions import repack_fields 468 >>> repack_fields(a[['a', 'c']]).view('i8') # supported in 1.16 469 array([0, 0, 0]) 470 471 Furthermore, numpy now provides a new function 472 :func:`numpy.lib.recfunctions.structured_to_unstructured` which is a safer 473 and more efficient alternative for users who wish to convert structured 474 arrays to unstructured arrays, as the view above is often indeded to do. 475 This function allows safe conversion to an unstructured type taking into 476 account padding, often avoids a copy, and also casts the datatypes 477 as needed, unlike the view. Code such as: 478 479 >>> b = np.zeros(3, dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')]) 480 >>> b[['x', 'z']].view('f4') 481 array([0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32) 482 483 can be made safer by replacing with: 484 485 >>> from numpy.lib.recfunctions import structured_to_unstructured 486 >>> structured_to_unstructured(b[['x', 'z']]) 487 array([0, 0, 0]) 488 489 490 Assignment to an array with a multi-field index modifies the original array:: 491 492 >>> a[['a', 'c']] = (2, 3) 493 >>> a 494 array([(2, 0, 3.), (2, 0, 3.), (2, 0, 3.)], 495 dtype=[('a', '<i4'), ('b', '<i4'), ('c', '<f4')]) 496 497 This obeys the structured array assignment rules described above. For example, 498 this means that one can swap the values of two fields using appropriate 499 multi-field indexes:: 500 501 >>> a[['a', 'c']] = a[['c', 'a']] 502 503 Indexing with an Integer to get a Structured Scalar 504 ``````````````````````````````````````````````````` 505 506 Indexing a single element of a structured array (with an integer index) returns 507 a structured scalar:: 508 509 >>> x = np.array([(1, 2., 3.)], dtype='i, f, f') 510 >>> scalar = x[0] 511 >>> scalar 512 (1, 2., 3.) 513 >>> type(scalar) 514 <class 'numpy.void'> 515 516 Unlike other numpy scalars, structured scalars are mutable and act like views 517 into the original array, such that modifying the scalar will modify the 518 original array. 
Structured scalars also support access and assignment by field 519 name:: 520 521 >>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')]) 522 >>> s = x[0] 523 >>> s['bar'] = 100 524 >>> x 525 array([(1, 100.), (3, 4.)], 526 dtype=[('foo', '<i8'), ('bar', '<f4')]) 527 528 Similarly to tuples, structured scalars can also be indexed with an integer:: 529 530 >>> scalar = np.array([(1, 2., 3.)], dtype='i, f, f')[0] 531 >>> scalar[0] 532 1 533 >>> scalar[1] = 4 534 535 Thus, tuples might be thought of as the native Python equivalent to numpy's 536 structured types, much like native python integers are the equivalent to 537 numpy's integer types. Structured scalars may be converted to a tuple by 538 calling :func:`ndarray.item`:: 539 540 >>> scalar.item(), type(scalar.item()) 541 ((1, 4.0, 3.0), <class 'tuple'>) 542 543 Viewing Structured Arrays Containing Objects 544 -------------------------------------------- 545 546 In order to prevent clobbering object pointers in fields of 547 :class:`numpy.object` type, numpy currently does not allow views of structured 548 arrays containing objects. 549 550 Structure Comparison 551 -------------------- 552 553 If the dtypes of two void structured arrays are equal, testing the equality of 554 the arrays will result in a boolean array with the dimensions of the original 555 arrays, with elements set to ``True`` where all fields of the corresponding 556 structures are equal. Structured dtypes are equal if the field names, 557 dtypes and titles are the same, ignoring endianness, and the fields are in 558 the same order:: 559 560 >>> a = np.zeros(2, dtype=[('a', 'i4'), ('b', 'i4')]) 561 >>> b = np.ones(2, dtype=[('a', 'i4'), ('b', 'i4')]) 562 >>> a == b 563 array([False, False]) 564 565 Currently, if the dtypes of two void structured arrays are not equivalent the 566 comparison fails, returning the scalar value ``False``. This behavior is 567 deprecated as of numpy 1.10 and will raise an error or perform elementwise 568 comparison in the future. 569 570 The ``<`` and ``>`` operators always return ``False`` when comparing void 571 structured arrays, and arithmetic and bitwise operations are not supported. 572 573 Record Arrays 574 ============= 575 576 As an optional convenience numpy provides an ndarray subclass, 577 :class:`numpy.recarray`, and associated helper functions in the 578 :mod:`numpy.rec` submodule, that allows access to fields of structured arrays 579 by attribute instead of only by index. Record arrays also use a special 580 datatype, :class:`numpy.record`, that allows field access by attribute on the 581 structured scalars obtained from the array. 582 583 The simplest way to create a record array is with :func:`numpy.rec.array`:: 584 585 >>> recordarr = np.rec.array([(1, 2., 'Hello'), (2, 3., "World")], 586 ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) 587 >>> recordarr.bar 588 array([ 2., 3.], dtype=float32) 589 >>> recordarr[1:2] 590 rec.array([(2, 3., b'World')], 591 dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]) 592 >>> recordarr[1:2].foo 593 array([2], dtype=int32) 594 >>> recordarr.foo[1:2] 595 array([2], dtype=int32) 596 >>> recordarr[1].baz 597 b'World' 598 599 :func:`numpy.rec.array` can convert a wide variety of arguments into record 600 arrays, including structured arrays:: 601 602 >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")], 603 ... 
dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')]) 604 >>> recordarr = np.rec.array(arr) 605 606 The :mod:`numpy.rec` module provides a number of other convenience functions for 607 creating record arrays, see :ref:`record array creation routines 608 <routines.array-creation.rec>`. 609 610 A record array representation of a structured array can be obtained using the 611 appropriate `view <numpy-ndarray-view>`_:: 612 613 >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")], 614 ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')]) 615 >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)), 616 ... type=np.recarray) 617 618 For convenience, viewing an ndarray as type :class:`np.recarray` will 619 automatically convert to :class:`np.record` datatype, so the dtype can be left 620 out of the view:: 621 622 >>> recordarr = arr.view(np.recarray) 623 >>> recordarr.dtype 624 dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])) 625 626 To get back to a plain ndarray both the dtype and type must be reset. The 627 following view does so, taking into account the unusual case that the 628 recordarr was not a structured type:: 629 630 >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray) 631 632 Record array fields accessed by index or by attribute are returned as a record 633 array if the field has a structured type but as a plain ndarray otherwise. :: 634 635 >>> recordarr = np.rec.array([('Hello', (1, 2)), ("World", (3, 4))], 636 ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])]) 637 >>> type(recordarr.foo) 638 <class 'numpy.ndarray'> 639 >>> type(recordarr.bar) 640 <class 'numpy.recarray'> 641 642 Note that if a field has the same name as an ndarray attribute, the ndarray 643 attribute takes precedence. Such fields will be inaccessible by attribute but 644 will still be accessible by index. 645 646 """ 647 from __future__ import division, absolute_import, print_function 648 [end of numpy/doc/structured_arrays.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
numpy/numpy
94a2142b7c8accad9f9050b84faf119983c18f07
Weird behavior of structured_to_unstructured on non-trivial dtypes

With the following helpers:

```python
import numpy.lib.recfunctions as rfn

def subarray(dt, shape):
    return np.dtype((dt, shape))

def structured(*dts):
    return np.dtype([('x{}'.format(i), dt) for i, dt in enumerate(dts)])

def inspect(dt):
    arr = np.zeros((), dt)
    ret = rfn.structured_to_unstructured(arr)
    print(ret.shape, ret.dtype)
```

We can try a bunch of uses of `structured_to_unstructured` (added in #11526):

```python
>>> inspect(structured(int, int))
(2,) int32  # obviously ok

>>> inspect(structured(int, structured(int, int)))
(3,) int32  # nested types are flattened, ok

>>> inspect(structured(int, subarray(int, 2)))
(3,) int32  # ok: 1 + 2

>>> inspect(structured(int, subarray(int, (2, 2))))
(5,) int32  # ok: 1 + 2*2
```

Here's where things start to go bad:

```python
>>> inspect(structured(subarray(structured(int, int), 3)))
(3,) [('x0', '<i4'), ('x1', '<i4')]  # bug?
```

```python
>>> inspect(structured(subarray(subarray(int, 2), 2)))
(2, 2) int32  # bug
```

```python
>>> inspect(structured(int))
() int32  # bug
```

(#13334)

```python
>>> inspect(structured(int, subarray(subarray(int, 2), 2)))
TypeError: invalid type promotion
```

```python
>>> inspect(structured())
    dts, counts, offsets = zip(*fields)
ValueError: not enough values to unpack (expected 3, got 0)
```

A lot of this behavior looks undesirable to me. @ahaldane, which cases were actually intended to be supported?

Rather than locking ourselves into some of these weird constructs, we might want to raise an error for anything non-trivial.
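As a rough statement of the expected behaviour, here is a small sketch (my own, not part of the issue) of how the scalar elements of a consistent flattening could be counted, recursing through both subarray shapes and nested fields; the helper name is made up:

```python
import numpy as np

def expected_flat_count(dt):
    """Number of scalars a fully flattened view of `dt` should contain (sketch)."""
    dt = np.dtype(dt)
    # peel off any subarray dimensions first
    count = 1
    while dt.shape != ():
        for size in dt.shape:
            count *= size
        dt = dt.base
    # then recurse into nested structured fields, if any
    if dt.names is None:
        return count
    return count * sum(expected_flat_count(dt.fields[name][0]) for name in dt.names)

# e.g. structured(subarray(structured(int, int), 3)) should flatten to 3 * 2 == 6 scalars,
# and structured(int, subarray(subarray(int, 2), 2)) to 1 + 2 * 2 == 5, rather than the
# shapes reported above.
```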
Good catches, I don't think the "weird" cases were intended. Looks like you already have the fix for the one-field bug in #13332. For the subarray bug, there is evidently some wrong recursion code. The bug contradicts the docstring for `_get_fields_and_offsets`.

Can we close this? #13332 and #13334 were merged

No, the other weird cases still apply

Pushed off to 1.17.2. Thanks for the ping, I'd forgotten this was hanging. I've got a fix for the cases above, will make PR soon.
2019-08-23T19:30:08Z
<patch> diff --git a/numpy/lib/recfunctions.py b/numpy/lib/recfunctions.py --- a/numpy/lib/recfunctions.py +++ b/numpy/lib/recfunctions.py @@ -874,16 +874,35 @@ def _get_fields_and_offsets(dt, offset=0): scalar fields in the dtype "dt", including nested fields, in left to right order. """ + + # counts up elements in subarrays, including nested subarrays, and returns + # base dtype and count + def count_elem(dt): + count = 1 + while dt.shape != (): + for size in dt.shape: + count *= size + dt = dt.base + return dt, count + fields = [] for name in dt.names: field = dt.fields[name] - if field[0].names is None: - count = 1 - for size in field[0].shape: - count *= size - fields.append((field[0], count, field[1] + offset)) + f_dt, f_offset = field[0], field[1] + f_dt, n = count_elem(f_dt) + + if f_dt.names is None: + fields.append((np.dtype((f_dt, (n,))), n, f_offset + offset)) else: - fields.extend(_get_fields_and_offsets(field[0], field[1] + offset)) + subfields = _get_fields_and_offsets(f_dt, f_offset + offset) + size = f_dt.itemsize + + for i in range(n): + if i == 0: + # optimization: avoid list comprehension if no subarray + fields.extend(subfields) + else: + fields.extend([(d, c, o + i*size) for d, c, o in subfields]) return fields @@ -948,6 +967,12 @@ def structured_to_unstructured(arr, dtype=None, copy=False, casting='unsafe'): fields = _get_fields_and_offsets(arr.dtype) n_fields = len(fields) + if n_fields == 0 and dtype is None: + raise ValueError("arr has no fields. Unable to guess dtype") + elif n_fields == 0: + # too many bugs elsewhere for this to work now + raise NotImplementedError("arr with no fields is not supported") + dts, counts, offsets = zip(*fields) names = ['f{}'.format(n) for n in range(n_fields)] @@ -1039,6 +1064,9 @@ def unstructured_to_structured(arr, dtype=None, names=None, align=False, if arr.shape == (): raise ValueError('arr must have at least one dimension') n_elem = arr.shape[-1] + if n_elem == 0: + # too many bugs elsewhere for this to work now + raise NotImplementedError("last axis with size 0 is not supported") if dtype is None: if names is None: @@ -1051,7 +1079,11 @@ def unstructured_to_structured(arr, dtype=None, names=None, align=False, raise ValueError("don't supply both dtype and names") # sanity check of the input dtype fields = _get_fields_and_offsets(dtype) - dts, counts, offsets = zip(*fields) + if len(fields) == 0: + dts, counts, offsets = [], [], [] + else: + dts, counts, offsets = zip(*fields) + if n_elem != sum(counts): raise ValueError('The length of the last dimension of arr must ' 'be equal to the number of fields in dtype') </patch>
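The heart of the patch above is the offset expansion: every repetition of a subarray-of-struct field contributes its own copy of the subfield offsets. The standalone sketch below only illustrates that idea; it is not the patched `_get_fields_and_offsets`, and the helper name is invented for this note:

```python
import numpy as np

def leaf_offsets(dt, base_offset=0):
    """Yield (byte offset, scalar dtype) for every scalar leaf of a
    structured dtype, expanding subarrays element by element."""
    # Unwrap subarray wrappers, multiplying out their element count.
    count = 1
    while dt.shape != ():
        for size in dt.shape:
            count *= size
        dt = dt.base
    if dt.names is None:
        # Plain scalar (possibly repeated): consecutive itemsize-strided slots.
        for i in range(count):
            yield base_offset + i * dt.itemsize, dt
    else:
        # Structured base (possibly repeated): recurse into each field of
        # each repetition, shifted by that repetition's start offset.
        for i in range(count):
            for name in dt.names:
                f_dt, f_off = dt.fields[name][:2]
                yield from leaf_offsets(f_dt, base_offset + i * dt.itemsize + f_off)

dt = np.dtype([('a', np.int32), ('b', [('x', np.int32), ('y', np.int32)], 3)])
for off, leaf in leaf_offsets(dt):
    print(off, leaf)
```

Run on this dtype (one int32 plus a length-3 subarray of two int32 fields), the sketch lists seven leaves at offsets 0, 4, 8, 12, 16, 20 and 24, which is the per-element layout the fixed function needs to address.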
[]
[]
Qiskit__qiskit-989
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Changing style in matplotlib circuit_drawer changes the circuit layout <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues --> ### Informations - **Qiskit Terra version**: current master - **Python version**: 3.7 - **Operating system**: OSX ### What is the current behavior? The output from the Matplotlib circuit drawer changes when the `style` kwarg is modified ### Steps to reproduce the problem `circuit_drawer(circ)` vs `circuit_drawer(circ,style=qx_color_scheme())`: (Here I show only the first pane of the circuit) <img width="964" alt="screen shot 2018-09-25 at 5 44 48 am" src="https://user-images.githubusercontent.com/1249193/46006709-298c2080-c086-11e8-9c62-f05c5a91153e.png"> <img width="965" alt="screen shot 2018-09-25 at 5 45 33 am" src="https://user-images.githubusercontent.com/1249193/46006778-4de7fd00-c086-11e8-897d-433f48219b98.png"> ### What is the expected behavior? I would assume that the style does not change the circuit layout in the figure. ### Suggested solutions </issue> <code> [start of README.md] 1 # Quantum Information Science Kit (Qiskit) 2 3 [![PyPI](https://img.shields.io/pypi/v/qiskit.svg)](https://pypi.python.org/pypi/qiskit) 4 [![Build Status](https://travis-ci.org/Qiskit/qiskit-terra.svg?branch=master)](https://travis-ci.org/Qiskit/qiskit-terra) 5 [![Build Status IBM Q](https://travis-matrix-badges.herokuapp.com/repos/Qiskit/qiskit-terra/branches/master/8)](https://travis-ci.org/Qiskit/qiskit-terra) 6 7 The Quantum Information Science Kit (**Qiskit** for short) is a software development kit (SDK) for 8 working with [OpenQASM](https://github.com/Qiskit/qiskit-openqasm) and the 9 [IBM Q Experience (QX)](https://quantumexperience.ng.bluemix.net/). 10 11 Use **Qiskit** to create quantum computing programs, compile them, and execute them on one of 12 several backends (online Real quantum processors, online simulators, and local simulators). For 13 the online backends, Qiskit uses our [python API client](https://github.com/Qiskit/qiskit-api-py) 14 to connect to the IBM Q Experience. 15 16 **We use GitHub issues for tracking requests and bugs. Please see the** 17 [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) **for 18 questions and discussion.** 19 20 **If you'd like to contribute to Qiskit, please take a look at our** 21 [contribution guidelines](.github/CONTRIBUTING.rst). 22 23 Links to Sections: 24 25 * [Installation](#installation) 26 * [Creating your first Quantum Program](#creating-your-first-quantum-program) 27 * [More Information](#more-information) 28 * [Authors](#authors-alphabetical) 29 30 ## Installation 31 32 ### Dependencies 33 34 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using Qiskit. In 35 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended 36 for interacting with the tutorials. 37 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads) 38 python distribution, as it comes with all of these dependencies pre-installed. 39 40 In addition, a basic understanding of quantum information is very helpful when interacting with 41 Qiskit. If you're new to quantum, start with our 42 [User Guides](https://github.com/Qiskit/ibmqx-user-guides)! 
43 44 ### Instructions 45 46 We encourage to install Qiskit via the PIP tool (a python package manager): 47 48 ```bash 49 pip install qiskit 50 ``` 51 52 PIP will handle all dependencies automatically for us and you will always install the latest (and well-tested) version. 53 54 PIP package comes with prebuilt binaries for these platforms: 55 56 * Linux x86_64 57 * Darwin 58 * Win64 59 60 If your platform is not in the list, PIP will try to build from the sources at installation time. It will require to have CMake 3.5 or higher pre-installed and at least one of the [build environments supported by CMake](https://cmake.org/cmake/help/v3.5/manual/cmake-generators.7.html). 61 62 If during the installation PIP doesn't succeed to build, don't worry, you will have Qiskit installed at the end but you probably couldn't take advantage of some of the high-performance components. Anyway, we always provide a python, not-so-fast alternative as a fallback. 63 64 #### Setup your environment 65 66 We recommend using python virtual environments to improve your experience. Refer to our 67 [Environment Setup documentation](doc/install.rst#3.1-Setup-the-environment) for more information. 68 69 ## Creating your first Quantum Program 70 71 Now that the SDK is installed, it's time to begin working with Qiskit. 72 73 We are ready to try out a quantum circuit example, which runs via the local simulator. 74 75 This is a simple example that makes an entangled state. 76 77 ```python 78 # Import the Qiskit SDK 79 from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister 80 from qiskit import available_backends, execute 81 82 # Create a Quantum Register with 2 qubits. 83 q = QuantumRegister(2) 84 # Create a Classical Register with 2 bits. 85 c = ClassicalRegister(2) 86 # Create a Quantum Circuit 87 qc = QuantumCircuit(q, c) 88 89 # Add a H gate on qubit 0, putting this qubit in superposition. 90 qc.h(q[0]) 91 # Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting 92 # the qubits in a Bell state. 93 qc.cx(q[0], q[1]) 94 # Add a Measure gate to see the state. 95 qc.measure(q, c) 96 97 # See a list of available local simulators 98 print("Local backends: ", available_backends({'local': True})) 99 100 # Compile and run the Quantum circuit on a simulator backend 101 job_sim = execute(qc, "local_qasm_simulator") 102 sim_result = job_sim.result() 103 104 # Show the results 105 print("simulation: ", sim_result) 106 print(sim_result.get_counts(qc)) 107 ``` 108 109 In this case, the output will be: 110 111 ```python 112 COMPLETED 113 {'counts': {'00': 512, '11': 512}} 114 ``` 115 116 This script is available [here](examples/python/hello_quantum.py), where we also show how to 117 run the same program on a real quantum computer. 118 119 ### Executing your code on a real Quantum chip 120 121 You can also use Qiskit to execute your code on a 122 [real quantum chip](https://github.com/Qiskit/ibmqx-backend-information). 123 In order to do so, you need to configure the SDK for using the credentials in 124 your IBM Q Experience account: 125 126 #### Configure your API token and QX credentials 127 128 1. Create an _[IBM Q Experience](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so. 129 130 2. Get an API token from the IBM Q Experience website under _My Account > Advanced > API Token_. This API token allows you to execute your programs with the IBM Q Experience backends. See: [Example](doc/example_real_backend.rst). 131 132 3. 
We are now going to add the necessary credentials to QISKit. Take your token 133 from step 2, here called `MY_API_TOKEN`, and pass it to the 134 `IBMQ.add_account()` function: 135 136 ```python 137 from qiskit import IBMQ 138 139 IBMQ.add_account('MY_API_TOKEN') 140 ``` 141 142 4. If you have access to the IBM Q Network features, you also need to pass the 143 url listed on your IBM Q account page to `store_credentials`. 144 145 After calling `IBMQ.add_account()`, your credentials will be stored into disk. 146 Once they are stored, Qiskit will automatically load and use them in your program 147 via: 148 149 ```python 150 from qiskit import IBMQ 151 152 IBMQ.load_accounts() 153 ``` 154 155 For more details on installing Qiskit and for alternative methods for passing 156 the IBM QX credentials, such as using environment variables, sending them 157 explicitly and support for the `Qconfig.py` method available in previous 158 versions, please check 159 [our Qiskit documentation](https://www.qiskit.org/documentation/). 160 161 ### Next Steps 162 163 Now you're set up and ready to check out some of the other examples from our 164 [Tutorial](https://github.com/Qiskit/qiskit-tutorial) repository. Start with the 165 [index tutorial](https://github.com/Qiskit/qiskit-tutorial/blob/master/index.ipynb) and then go to 166 the [‘Getting Started’ example](https://github.com/Qiskit/qiskit-tutorial/blob/master/reference/tools/getting_started.ipynb). 167 If you already have [Jupyter Notebooks installed](https://jupyter.readthedocs.io/en/latest/install.html), 168 you can copy and modify the notebooks to create your own experiments. 169 170 To install the tutorials as part of the Qiskit SDK, see the following 171 [installation details](doc/install.rst#Install-Jupyter-based-tutorials). Complete SDK 172 documentation can be found in the [*doc* directory](doc/qiskit.rst) and in 173 [the official Qiskit site](https://www.qiskit.org/documentation). 174 175 ## More Information 176 177 For more information on how to use Qiskit, tutorial examples, and other helpful links, take a look 178 at these resources: 179 180 * **[User Guides](https://github.com/Qiskit/ibmqx-user-guides)**, 181 a good starting place for learning about quantum information and computing 182 * **[Tutorials](https://github.com/Qiskit/qiskit-tutorial)**, 183 for example notebooks, start with the [index](https://github.com/Qiskit/qiskit-tutorial/blob/master/index.ipynb) and [‘Getting Started’ Jupyter notebook](https://github.com/Qiskit/qiskit-tutorial/blob/002d054c72fc59fc5009bb9fa0ee393e15a69d07/1_introduction/getting_started.ipynb) 184 * **[OpenQASM](https://github.com/Qiskit/openqasm)**, 185 for additional information and examples of QASM code 186 * **[IBM Quantum Experience Composer](https://quantumexperience.ng.bluemix.net/qx/editor)**, 187 a GUI for interacting with real and simulated quantum computers 188 * **[QISkit Python API](https://github.com/Qiskit/qiskit-api-py)**, an API to use the IBM Quantum 189 Experience in Python 190 191 Qiskit was originally developed by researchers and developers on the 192 [IBM-Q](http://www.research.ibm.com/ibm-q/) Team at [IBM Research](http://www.research.ibm.com/), 193 with the aim of offering a high level development kit to work with quantum computers. 194 195 Visit the [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) for 196 questions and discussions on Qiskit and quantum computing more broadly. 
If you'd like to 197 contribute to Qiskit, please take a look at our [contribution guidelines](.github/CONTRIBUTING.rst). 198 199 ## Multilanguage guide 200 201 * **[Korean Translation](doc/ko/README.md)** - basic guide line written in Korean. 202 * **[Chinese Translation](doc/zh/README.md)** - basic guide line written in Chinese. 203 204 ## Authors (alphabetical) 205 206 Qiskit was originally authored by 207 Luciano Bello, Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Juan Gomez, 208 Ali Javadi-Abhari, Paco Martin, Diego Moreda, Jesus Perez, Erick Winston and Chris Wood. 209 210 And continues to grow with the help and work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 211 to the project at different levels. 212 [end of README.md] [start of doc/conf.py] 1 #!/usr/bin/env python3 2 # -*- coding: utf-8 -*- 3 # 4 # Qiskit documentation build configuration file, created by 5 # sphinx-quickstart on Tue Jul 25 18:13:28 2017. 6 # 7 # This file is execfile()d with the current directory set to its 8 # containing dir. 9 # 10 # Note that not all possible configuration values are present in this 11 # autogenerated file. 12 # 13 # All configuration values have a default; values that are commented out 14 # serve to show the default. 15 16 # If extensions (or modules to document with autodoc) are in another directory, 17 # add these directories to sys.path here. If the directory is relative to the 18 # documentation root, use os.path.abspath to make it absolute, like shown here. 19 # 20 import os 21 import sys 22 from qiskit import __version__ 23 sys.path.insert(0, os.path.abspath('.')) 24 25 # Imported manually, as otherwise it will not be fully imported. 26 import qiskit.extensions.simulator 27 28 # -- General configuration ------------------------------------------------ 29 30 # If your documentation needs a minimal Sphinx version, state it here. 31 # 32 # needs_sphinx = '1.0' 33 34 # Add any Sphinx extension module names here, as strings. They can be 35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 36 # ones. 37 extensions = ['sphinx.ext.autodoc', 38 'sphinx.ext.autosummary', 39 'sphinx.ext.napoleon', 40 'sphinx.ext.doctest', 41 'sphinx.ext.coverage', 42 'sphinx.ext.mathjax', 43 'sphinx.ext.viewcode', 44 'sphinx.ext.githubpages', 45 'sphinxcontrib.fulltoc'] 46 47 # Napoleon settings 48 napoleon_google_docstring = True 49 napoleon_numpy_docstring = False 50 napoleon_include_init_with_doc = True 51 napoleon_include_private_with_doc = False 52 napoleon_include_special_with_doc = False 53 napoleon_use_admonition_for_examples = False 54 napoleon_use_admonition_for_notes = False 55 napoleon_use_admonition_for_references = False 56 napoleon_use_ivar = False 57 napoleon_use_param = True 58 napoleon_use_rtype = True 59 60 autoclass_content = 'both' 61 62 # Add any paths that contain templates here, relative to this directory. 63 templates_path = ['_templates'] 64 65 # The suffix(es) of source filenames. 66 # You can specify multiple suffix as a list of string: 67 # 68 # source_suffix = ['.rst', '.md'] 69 source_suffix = '.rst' 70 71 # The master toctree document. 72 master_doc = 'index' 73 74 # General information about the project. 
75 project = 'Qiskit SDK' 76 copyright = '2017-2018 IBM Research' 77 author = 'IBM Research' 78 79 # Add description 80 html_context = { 81 'description': 'Quantum Information Science Kit' 82 } 83 84 # The version info for the project you're documenting, acts as replacement for 85 # |version| and |release|, also used in various other places throughout the 86 # built documents. 87 # 88 # The short X.Y version. 89 version = __version__ 90 # The full version, including alpha/beta/rc tags. 91 release = version 92 93 # The language for content autogenerated by Sphinx. Refer to documentation 94 # for a list of supported languages. 95 # 96 # This is also used if you do content translation via gettext catalogs. 97 # Usually you set "language" from the command line for these cases. 98 language = None 99 100 # List of patterns, relative to source directory, that match files and 101 # directories to ignore when looking for source files. 102 # This patterns also effect to html_static_path and html_extra_path 103 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 104 '_autodoc/modules.rst', 'de', 'ja'] 105 106 # The name of the Pygments (syntax highlighting) style to use. 107 pygments_style = 'sphinx' 108 109 # If true, `todo` and `todoList` produce output, else they produce nothing. 110 todo_include_todos = False 111 112 113 # -- Options for HTML output ---------------------------------------------- 114 115 # The theme to use for HTML and HTML Help pages. See the documentation for 116 # a list of builtin themes. 117 # 118 # html_theme = 'alabaster' 119 # html_theme = 'bizstyle' 120 # html_theme = agogo 121 122 html_theme = 'theme' # use the theme in subdir 'theme' 123 html_theme_path = ['./'] # make sphinx search for themes in current dir 124 125 126 # Theme options are theme-specific and customize the look and feel of a theme 127 # further. For a list of options available for each theme, see the 128 # documentation. 129 # 130 html_theme_options = {} 131 132 # Add any paths that contain custom static files (such as style sheets) here, 133 # relative to this directory. They are copied after the builtin static files, 134 # so a file named "default.css" will overwrite the builtin "default.css". 135 html_static_path = [] 136 137 # The name of an image file (relative to this directory) to place at the top 138 # of the sidebar. 139 html_logo = 'theme/static/qiskit-logo-white-no-margin.gif' 140 141 html_favicon = 'theme/static/favicon.ico' 142 143 html_last_updated_fmt = '%Y/%m/%d' 144 145 # -- Options for HTMLHelp output ------------------------------------------ 146 147 # Output file base name for HTML help builder. 148 htmlhelp_basename = 'QISKitdoc' 149 150 151 # -- Options for LaTeX output --------------------------------------------- 152 153 latex_elements = { 154 # The paper size ('letterpaper' or 'a4paper'). 155 # 156 # 'papersize': 'letterpaper', 157 158 # The font size ('10pt', '11pt' or '12pt'). 159 # 160 # 'pointsize': '10pt', 161 162 # Additional stuff for the LaTeX preamble. 163 # 164 # 'preamble': '', 165 166 # Latex figure (float) alignment 167 # 168 # 'figure_align': 'htbp', 169 } 170 171 # Grouping the document tree into LaTeX files. List of tuples 172 # (source start file, target name, title, 173 # author, documentclass [howto, manual, or own class]). 
174 latex_documents = [ 175 (master_doc, 'QISKit.tex', 'Qiskit Documentation', 176 '''Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Jesus Perez, 177 and John Smolin''', 'manual'), 178 ] 179 180 181 # -- Options for manual page output --------------------------------------- 182 183 # One entry per manual page. List of tuples 184 # (source start file, name, description, authors, manual section). 185 man_pages = [ 186 (master_doc, 'qiskit', 'Qiskit Documentation', 187 [author], 1) 188 ] 189 190 191 # -- Options for Texinfo output ------------------------------------------- 192 193 # Grouping the document tree into Texinfo files. List of tuples 194 # (source start file, target name, title, author, 195 # dir menu entry, description, category) 196 texinfo_documents = [ 197 (master_doc, 'Qiskit', 'Qiskit Documentation', 198 author, 'Qiskit', 'One line description of project.', 199 'Miscellaneous'), 200 ] 201 202 203 # Avoid a warning and treat the docstrings of the QasmLexer tokens as verbatim, 204 # as PLY uses docstring as a way to define the patterns the token matches. 205 def remove_module_docstring(app, what, name, obj, options, lines): 206 if name.startswith('qiskit.qasm._qasmlexer.QasmLexer.t_') and lines: 207 lines[0] = u'Token matching: ``%s``' % lines[0] 208 209 210 def setup(app): 211 app.connect('autodoc-process-docstring', remove_module_docstring) 212 [end of doc/conf.py] [start of examples/python/ghz.py] 1 # -*- coding: utf-8 -*- 2 3 # Copyright 2017, IBM. 4 # 5 # This source code is licensed under the Apache License, Version 2.0 found in 6 # the LICENSE.txt file in the root directory of this source tree. 7 8 """ 9 GHZ state example. It also compares running on experiment and simulator 10 11 Note: if you have only cloned the Qiskit repository but not 12 used `pip install`, the examples only work from the root directory. 13 """ 14 15 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit 16 from qiskit import IBMQ, Aer, execute 17 from qiskit.backends.ibmq import least_busy 18 19 20 ############################################################### 21 # Make a quantum circuit for the GHZ state. 22 ############################################################### 23 q = QuantumRegister(5, "q") 24 c = ClassicalRegister(5, "c") 25 qc = QuantumCircuit(q, c, name='ghz') 26 27 # Create a GHZ state 28 qc.h(q[0]) 29 for i in range(4): 30 qc.cx(q[i], q[i+1]) 31 # Insert a barrier before measurement 32 qc.barrier() 33 # Measure all of the qubits in the standard basis 34 for i in range(5): 35 qc.measure(q[i], c[i]) 36 37 ############################################################### 38 # Set up the API and execute the program. 39 ############################################################### 40 try: 41 import Qconfig 42 IBMQ.use_account(Qconfig.APItoken, Qconfig.config['url']) 43 except: 44 print("""WARNING: There's no connection with the API for remote backends. 45 Have you initialized a Qconfig.py file with your personal token? 
46 For now, there's only access to local simulator backends...""") 47 48 # First version: simulator 49 sim_backend = Aer.get_backend('qasm_simulator') 50 job = execute(qc, sim_backend, shots=1024) 51 result = job.result() 52 print('Qasm simulator') 53 print(result) 54 print(result.get_counts(qc)) 55 56 # Second version: real device 57 least_busy_device = least_busy(IBMQ.backends(simulator=False, 58 filters=lambda x: x.configuration()['n_qubits'] > 4)) 59 print("Running on current least busy device: ", least_busy_device) 60 job = execute(qc, least_busy_device, shots=1024) 61 result = job.result() 62 print(result) 63 print(result.get_counts(qc)) 64 [end of examples/python/ghz.py] [start of qiskit/tools/visualization/bloch.py] 1 # -*- coding: utf-8 -*- 2 3 # Copyright 2017, IBM. 4 # 5 # This source code is licensed under the Apache License, Version 2.0 found in 6 # the LICENSE.txt file in the root directory of this source tree. 7 8 9 # This file is part of QuTiP: Quantum Toolbox in Python. 10 # 11 # Copyright (c) 2011 and later, Paul D. Nation and Robert J. Johansson. 12 # All rights reserved. 13 # 14 # Redistribution and use in source and binary forms, with or without 15 # modification, are permitted provided that the following conditions are 16 # met: 17 # 18 # 1. Redistributions of source code must retain the above copyright notice, 19 # this list of conditions and the following disclaimer. 20 # 21 # 2. Redistributions in binary form must reproduce the above copyright 22 # notice, this list of conditions and the following disclaimer in the 23 # documentation and/or other materials provided with the distribution. 24 # 25 # 3. Neither the name of the QuTiP: Quantum Toolbox in Python nor the names 26 # of its contributors may be used to endorse or promote products derived 27 # from this software without specific prior written permission. 28 # 29 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 30 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 31 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A 32 # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 33 # HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 34 # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 35 # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 36 # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 37 # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 38 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 39 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 40 ############################################################################### 41 42 43 """Bloch sphere""" 44 45 __all__ = ['Bloch'] 46 47 import os 48 import numpy as np 49 import matplotlib.pyplot as plt 50 from matplotlib.patches import FancyArrowPatch 51 from mpl_toolkits.mplot3d import (Axes3D, proj3d) 52 53 54 class Arrow3D(FancyArrowPatch): 55 """Makes a fancy arrow""" 56 def __init__(self, xs, ys, zs, *args, **kwargs): 57 FancyArrowPatch.__init__(self, (0, 0), (0, 0), *args, **kwargs) 58 self._verts3d = xs, ys, zs 59 60 def draw(self, renderer): 61 xs3d, ys3d, zs3d = self._verts3d 62 x_s, y_s, _ = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M) 63 self.set_positions((x_s[0], y_s[0]), (x_s[1], y_s[1])) 64 FancyArrowPatch.draw(self, renderer) 65 66 67 class Bloch(): 68 """Class for plotting data on the Bloch sphere. 
Valid data can be 69 either points, vectors, or qobj objects. 70 71 Attributes: 72 axes (instance): 73 User supplied Matplotlib axes for Bloch sphere animation. 74 fig (instance): 75 User supplied Matplotlib Figure instance for plotting Bloch sphere. 76 font_color (str): 77 Color of font used for Bloch sphere labels. 78 font_size (int): 79 Size of font used for Bloch sphere labels. 80 frame_alpha (float): 81 Sets transparency of Bloch sphere frame. 82 frame_color (str): 83 Color of sphere wireframe. 84 frame_width (int): 85 Width of wireframe. 86 point_color (list): 87 List of colors for Bloch sphere point markers to cycle through. 88 i.e. By default, points 0 and 4 will both be blue ('b'). 89 point_marker (list): 90 List of point marker shapes to cycle through. 91 point_size (list): 92 List of point marker sizes. Note, not all point markers look 93 the same size when plotted! 94 sphere_alpha (float): 95 Transparency of Bloch sphere itself. 96 sphere_color (str): 97 Color of Bloch sphere. 98 figsize (list): 99 Figure size of Bloch sphere plot. Best to have both numbers the same; 100 otherwise you will have a Bloch sphere that looks like a football. 101 vector_color (list): 102 List of vector colors to cycle through. 103 vector_width (int): 104 Width of displayed vectors. 105 vector_style (str): 106 Vector arrowhead style (from matplotlib's arrow style). 107 vector_mutation (int): 108 Width of vectors arrowhead. 109 view (list): 110 Azimuthal and Elevation viewing angles. 111 xlabel (list): 112 List of strings corresponding to +x and -x axes labels, respectively. 113 xlpos (list): 114 Positions of +x and -x labels respectively. 115 ylabel (list): 116 List of strings corresponding to +y and -y axes labels, respectively. 117 ylpos (list): 118 Positions of +y and -y labels respectively. 119 zlabel (list): 120 List of strings corresponding to +z and -z axes labels, respectively. 121 zlpos (list): 122 Positions of +z and -z labels respectively. 123 """ 124 125 def __init__(self, fig=None, axes=None, view=None, figsize=None, 126 background=False): 127 128 # Figure and axes 129 self.fig = fig 130 self.axes = axes 131 # Background axes, default = False 132 self.background = background 133 # The size of the figure in inches, default = [5,5]. 134 self.figsize = figsize if figsize else [5, 5] 135 # Azimuthal and Elvation viewing angles, default = [-60,30]. 
136 self.view = view if view else [-60, 30] 137 # Color of Bloch sphere, default = #FFDDDD 138 self.sphere_color = '#FFDDDD' 139 # Transparency of Bloch sphere, default = 0.2 140 self.sphere_alpha = 0.2 141 # Color of wireframe, default = 'gray' 142 self.frame_color = 'gray' 143 # Width of wireframe, default = 1 144 self.frame_width = 1 145 # Transparency of wireframe, default = 0.2 146 self.frame_alpha = 0.2 147 # Labels for x-axis (in LaTex), default = ['$x$', ''] 148 self.xlabel = ['$x$', ''] 149 # Position of x-axis labels, default = [1.2, -1.2] 150 self.xlpos = [1.2, -1.2] 151 # Labels for y-axis (in LaTex), default = ['$y$', ''] 152 self.ylabel = ['$y$', ''] 153 # Position of y-axis labels, default = [1.1, -1.1] 154 self.ylpos = [1.2, -1.2] 155 # Labels for z-axis (in LaTex), 156 # default = [r'$\left|1\right>$', r'$\left|0\right>$'] 157 self.zlabel = [r'$\left|0\right>$', r'$\left|1\right>$'] 158 # Position of z-axis labels, default = [1.2, -1.2] 159 self.zlpos = [1.2, -1.2] 160 # ---font options--- 161 # Color of fonts, default = 'black' 162 self.font_color = 'black' 163 # Size of fonts, default = 20 164 self.font_size = 20 165 166 # ---vector options--- 167 # List of colors for Bloch vectors, default = ['b','g','r','y'] 168 self.vector_color = ['#dc267f', '#648fff', '#fe6100', '#785ef0', 169 '#ffb000'] 170 #: Width of Bloch vectors, default = 5 171 self.vector_width = 5 172 #: Style of Bloch vectors, default = '-|>' (or 'simple') 173 self.vector_style = '-|>' 174 #: Sets the width of the vectors arrowhead 175 self.vector_mutation = 20 176 177 # ---point options--- 178 # List of colors for Bloch point markers, default = ['b','g','r','y'] 179 self.point_color = ['b', 'r', 'g', '#CC6600'] 180 # Size of point markers, default = 25 181 self.point_size = [25, 32, 35, 45] 182 # Shape of point markers, default = ['o','^','d','s'] 183 self.point_marker = ['o', 's', 'd', '^'] 184 185 # ---data lists--- 186 # Data for point markers 187 self.points = [] 188 # Data for Bloch vectors 189 self.vectors = [] 190 # Data for annotations 191 self.annotations = [] 192 # Number of times sphere has been saved 193 self.savenum = 0 194 # Style of points, 'm' for multiple colors, 's' for single color 195 self.point_style = [] 196 197 # status of rendering 198 self._rendered = False 199 200 def set_label_convention(self, convention): 201 """Set x, y and z labels according to one of conventions. 202 203 Args: 204 convention (str): 205 One of the following: 206 - "original" 207 - "xyz" 208 - "sx sy sz" 209 - "01" 210 - "polarization jones" 211 - "polarization jones letters" 212 see also: http://en.wikipedia.org/wiki/Jones_calculus 213 - "polarization stokes" 214 see also: http://en.wikipedia.org/wiki/Stokes_parameters 215 Raises: 216 Exception: If convention is not valid. 
217 """ 218 ketex = "$\\left.|%s\\right\\rangle$" 219 # \left.| is on purpose, so that every ket has the same size 220 221 if convention == "original": 222 self.xlabel = ['$x$', ''] 223 self.ylabel = ['$y$', ''] 224 self.zlabel = ['$\\left|0\\right>$', '$\\left|1\\right>$'] 225 elif convention == "xyz": 226 self.xlabel = ['$x$', ''] 227 self.ylabel = ['$y$', ''] 228 self.zlabel = ['$z$', ''] 229 elif convention == "sx sy sz": 230 self.xlabel = ['$s_x$', ''] 231 self.ylabel = ['$s_y$', ''] 232 self.zlabel = ['$s_z$', ''] 233 elif convention == "01": 234 self.xlabel = ['', ''] 235 self.ylabel = ['', ''] 236 self.zlabel = ['$\\left|0\\right>$', '$\\left|1\\right>$'] 237 elif convention == "polarization jones": 238 self.xlabel = [ketex % "\\nearrow\\hspace{-1.46}\\swarrow", 239 ketex % "\\nwarrow\\hspace{-1.46}\\searrow"] 240 self.ylabel = [ketex % "\\circlearrowleft", ketex % 241 "\\circlearrowright"] 242 self.zlabel = [ketex % "\\leftrightarrow", ketex % "\\updownarrow"] 243 elif convention == "polarization jones letters": 244 self.xlabel = [ketex % "D", ketex % "A"] 245 self.ylabel = [ketex % "L", ketex % "R"] 246 self.zlabel = [ketex % "H", ketex % "V"] 247 elif convention == "polarization stokes": 248 self.ylabel = ["$\\nearrow\\hspace{-1.46}\\swarrow$", 249 "$\\nwarrow\\hspace{-1.46}\\searrow$"] 250 self.zlabel = ["$\\circlearrowleft$", "$\\circlearrowright$"] 251 self.xlabel = ["$\\leftrightarrow$", "$\\updownarrow$"] 252 else: 253 raise Exception("No such convention.") 254 255 def __str__(self): 256 string = "" 257 string += "Bloch data:\n" 258 string += "-----------\n" 259 string += "Number of points: " + str(len(self.points)) + "\n" 260 string += "Number of vectors: " + str(len(self.vectors)) + "\n" 261 string += "\n" 262 string += "Bloch sphere properties:\n" 263 string += "------------------------\n" 264 string += "font_color: " + str(self.font_color) + "\n" 265 string += "font_size: " + str(self.font_size) + "\n" 266 string += "frame_alpha: " + str(self.frame_alpha) + "\n" 267 string += "frame_color: " + str(self.frame_color) + "\n" 268 string += "frame_width: " + str(self.frame_width) + "\n" 269 string += "point_color: " + str(self.point_color) + "\n" 270 string += "point_marker: " + str(self.point_marker) + "\n" 271 string += "point_size: " + str(self.point_size) + "\n" 272 string += "sphere_alpha: " + str(self.sphere_alpha) + "\n" 273 string += "sphere_color: " + str(self.sphere_color) + "\n" 274 string += "figsize: " + str(self.figsize) + "\n" 275 string += "vector_color: " + str(self.vector_color) + "\n" 276 string += "vector_width: " + str(self.vector_width) + "\n" 277 string += "vector_style: " + str(self.vector_style) + "\n" 278 string += "vector_mutation: " + str(self.vector_mutation) + "\n" 279 string += "view: " + str(self.view) + "\n" 280 string += "xlabel: " + str(self.xlabel) + "\n" 281 string += "xlpos: " + str(self.xlpos) + "\n" 282 string += "ylabel: " + str(self.ylabel) + "\n" 283 string += "ylpos: " + str(self.ylpos) + "\n" 284 string += "zlabel: " + str(self.zlabel) + "\n" 285 string += "zlpos: " + str(self.zlpos) + "\n" 286 return string 287 288 def clear(self): 289 """Resets Bloch sphere data sets to empty. 290 """ 291 self.points = [] 292 self.vectors = [] 293 self.point_style = [] 294 self.annotations = [] 295 296 def add_points(self, points, meth='s'): 297 """Add a list of data points to bloch sphere. 298 Args: 299 points (array_like): 300 Collection of data points. 
301 meth (str): 302 Type of points to plot, use 'm' for multicolored, 'l' for points 303 connected with a line. 304 """ 305 if not isinstance(points[0], (list, np.ndarray)): 306 points = [[points[0]], [points[1]], [points[2]]] 307 points = np.array(points) 308 if meth == 's': 309 if len(points[0]) == 1: 310 pnts = np.array([[points[0][0]], 311 [points[1][0]], [points[2][0]]]) 312 pnts = np.append(pnts, points, axis=1) 313 else: 314 pnts = points 315 self.points.append(pnts) 316 self.point_style.append('s') 317 elif meth == 'l': 318 self.points.append(points) 319 self.point_style.append('l') 320 else: 321 self.points.append(points) 322 self.point_style.append('m') 323 324 def add_vectors(self, vectors): 325 """Add a list of vectors to Bloch sphere. 326 327 Args: 328 vectors (array_like): 329 Array with vectors of unit length or smaller. 330 """ 331 if isinstance(vectors[0], (list, np.ndarray)): 332 for vec in vectors: 333 self.vectors.append(vec) 334 else: 335 self.vectors.append(vectors) 336 337 def add_annotation(self, state_or_vector, text, **kwargs): 338 """Add a text or LaTeX annotation to Bloch sphere, 339 parametrized by a qubit state or a vector. 340 341 Args: 342 state_or_vector (array_like): 343 Position for the annotaion. 344 Qobj of a qubit or a vector of 3 elements. 345 text (str): 346 Annotation text. 347 You can use LaTeX, but remember to use raw string 348 e.g. r"$\\langle x \\rangle$" 349 or escape backslashes 350 e.g. "$\\\\langle x \\\\rangle$". 351 **kwargs: 352 Options as for mplot3d.axes3d.text, including: 353 fontsize, color, horizontalalignment, verticalalignment. 354 Raises: 355 Exception: If input not array_like or tuple. 356 """ 357 if isinstance(state_or_vector, (list, np.ndarray, tuple)) \ 358 and len(state_or_vector) == 3: 359 vec = state_or_vector 360 else: 361 raise Exception("Position needs to be specified by a qubit " + 362 "state or a 3D vector.") 363 self.annotations.append({'position': vec, 364 'text': text, 365 'opts': kwargs}) 366 367 def make_sphere(self): 368 """ 369 Plots Bloch sphere and data sets. 370 """ 371 self.render(self.fig, self.axes) 372 373 def render(self, fig=None, axes=None, title=''): 374 """ 375 Render the Bloch sphere and its data sets in on given figure and axes. 
376 """ 377 if self._rendered: 378 self.axes.clear() 379 380 self._rendered = True 381 382 # Figure instance for Bloch sphere plot 383 if not fig: 384 self.fig = plt.figure(figsize=self.figsize) 385 386 if not axes: 387 self.axes = Axes3D(self.fig, azim=self.view[0], elev=self.view[1]) 388 389 if self.background: 390 self.axes.clear() 391 self.axes.set_xlim3d(-1.3, 1.3) 392 self.axes.set_ylim3d(-1.3, 1.3) 393 self.axes.set_zlim3d(-1.3, 1.3) 394 else: 395 self.plot_axes() 396 self.axes.set_axis_off() 397 self.axes.set_xlim3d(-0.7, 0.7) 398 self.axes.set_ylim3d(-0.7, 0.7) 399 self.axes.set_zlim3d(-0.7, 0.7) 400 401 self.axes.grid(False) 402 self.plot_back() 403 self.plot_points() 404 self.plot_vectors() 405 self.plot_front() 406 self.plot_axes_labels() 407 self.plot_annotations() 408 self.axes.set_title(title, fontsize=self.font_size, y=1.08) 409 410 def plot_back(self): 411 """back half of sphere""" 412 u_angle = np.linspace(0, np.pi, 25) 413 v_angle = np.linspace(0, np.pi, 25) 414 x_dir = np.outer(np.cos(u_angle), np.sin(v_angle)) 415 y_dir = np.outer(np.sin(u_angle), np.sin(v_angle)) 416 z_dir = np.outer(np.ones(u_angle.shape[0]), np.cos(v_angle)) 417 self.axes.plot_surface(x_dir, y_dir, z_dir, rstride=2, cstride=2, 418 color=self.sphere_color, linewidth=0, 419 alpha=self.sphere_alpha) 420 # wireframe 421 self.axes.plot_wireframe(x_dir, y_dir, z_dir, rstride=5, cstride=5, 422 color=self.frame_color, 423 alpha=self.frame_alpha) 424 # equator 425 self.axes.plot(1.0 * np.cos(u_angle), 1.0 * np.sin(u_angle), zs=0, zdir='z', 426 lw=self.frame_width, color=self.frame_color) 427 self.axes.plot(1.0 * np.cos(u_angle), 1.0 * np.sin(u_angle), zs=0, zdir='x', 428 lw=self.frame_width, color=self.frame_color) 429 430 def plot_front(self): 431 """front half of sphere""" 432 u_angle = np.linspace(-np.pi, 0, 25) 433 v_angle = np.linspace(0, np.pi, 25) 434 x_dir = np.outer(np.cos(u_angle), np.sin(v_angle)) 435 y_dir = np.outer(np.sin(u_angle), np.sin(v_angle)) 436 z_dir = np.outer(np.ones(u_angle.shape[0]), np.cos(v_angle)) 437 self.axes.plot_surface(x_dir, y_dir, z_dir, rstride=2, cstride=2, 438 color=self.sphere_color, linewidth=0, 439 alpha=self.sphere_alpha) 440 # wireframe 441 self.axes.plot_wireframe(x_dir, y_dir, z_dir, rstride=5, cstride=5, 442 color=self.frame_color, 443 alpha=self.frame_alpha) 444 # equator 445 self.axes.plot(1.0 * np.cos(u_angle), 1.0 * np.sin(u_angle), 446 zs=0, zdir='z', lw=self.frame_width, 447 color=self.frame_color) 448 self.axes.plot(1.0 * np.cos(u_angle), 1.0 * np.sin(u_angle), 449 zs=0, zdir='x', lw=self.frame_width, 450 color=self.frame_color) 451 452 def plot_axes(self): 453 """axes""" 454 span = np.linspace(-1.0, 1.0, 2) 455 self.axes.plot(span, 0 * span, zs=0, zdir='z', label='X', 456 lw=self.frame_width, color=self.frame_color) 457 self.axes.plot(0 * span, span, zs=0, zdir='z', label='Y', 458 lw=self.frame_width, color=self.frame_color) 459 self.axes.plot(0 * span, span, zs=0, zdir='y', label='Z', 460 lw=self.frame_width, color=self.frame_color) 461 462 def plot_axes_labels(self): 463 """axes labels""" 464 opts = {'fontsize': self.font_size, 465 'color': self.font_color, 466 'horizontalalignment': 'center', 467 'verticalalignment': 'center'} 468 self.axes.text(0, -self.xlpos[0], 0, self.xlabel[0], **opts) 469 self.axes.text(0, -self.xlpos[1], 0, self.xlabel[1], **opts) 470 471 self.axes.text(self.ylpos[0], 0, 0, self.ylabel[0], **opts) 472 self.axes.text(self.ylpos[1], 0, 0, self.ylabel[1], **opts) 473 474 self.axes.text(0, 0, self.zlpos[0], self.zlabel[0], **opts) 
475 self.axes.text(0, 0, self.zlpos[1], self.zlabel[1], **opts) 476 477 for item in (self.axes.w_xaxis.get_ticklines() + 478 self.axes.w_xaxis.get_ticklabels()): 479 item.set_visible(False) 480 for item in (self.axes.w_yaxis.get_ticklines() + 481 self.axes.w_yaxis.get_ticklabels()): 482 item.set_visible(False) 483 for item in (self.axes.w_zaxis.get_ticklines() + 484 self.axes.w_zaxis.get_ticklabels()): 485 item.set_visible(False) 486 487 def plot_vectors(self): 488 """Plot vector""" 489 # -X and Y data are switched for plotting purposes 490 for k in range(len(self.vectors)): 491 492 xs3d = self.vectors[k][1] * np.array([0, 1]) 493 ys3d = -self.vectors[k][0] * np.array([0, 1]) 494 zs3d = self.vectors[k][2] * np.array([0, 1]) 495 496 color = self.vector_color[np.mod(k, len(self.vector_color))] 497 498 if self.vector_style == '': 499 # simple line style 500 self.axes.plot(xs3d, ys3d, zs3d, 501 zs=0, zdir='z', label='Z', 502 lw=self.vector_width, color=color) 503 else: 504 # decorated style, with arrow heads 505 arr = Arrow3D(xs3d, ys3d, zs3d, 506 mutation_scale=self.vector_mutation, 507 lw=self.vector_width, 508 arrowstyle=self.vector_style, 509 color=color) 510 511 self.axes.add_artist(arr) 512 513 def plot_points(self): 514 """Plot points""" 515 # -X and Y data are switched for plotting purposes 516 for k in range(len(self.points)): 517 num = len(self.points[k][0]) 518 dist = [np.sqrt(self.points[k][0][j] ** 2 + 519 self.points[k][1][j] ** 2 + 520 self.points[k][2][j] ** 2) for j in range(num)] 521 if any(abs(dist - dist[0]) / dist[0] > 1e-12): 522 # combine arrays so that they can be sorted together 523 zipped = list(zip(dist, range(num))) 524 zipped.sort() # sort rates from lowest to highest 525 dist, indperm = zip(*zipped) 526 indperm = np.array(indperm) 527 else: 528 indperm = np.arange(num) 529 if self.point_style[k] == 's': 530 self.axes.scatter( 531 np.real(self.points[k][1][indperm]), 532 - np.real(self.points[k][0][indperm]), 533 np.real(self.points[k][2][indperm]), 534 s=self.point_size[np.mod(k, len(self.point_size))], 535 alpha=1, 536 edgecolor='none', 537 zdir='z', 538 color=self.point_color[np.mod(k, len(self.point_color))], 539 marker=self.point_marker[np.mod(k, 540 len(self.point_marker))]) 541 542 elif self.point_style[k] == 'm': 543 pnt_colors = np.array(self.point_color * 544 int(np.ceil(num / 545 float(len(self.point_color))))) 546 547 pnt_colors = pnt_colors[0:num] 548 pnt_colors = list(pnt_colors[indperm]) 549 marker = self.point_marker[np.mod(k, len(self.point_marker))] 550 pnt_size = self.point_size[np.mod(k, len(self.point_size))] 551 self.axes.scatter(np.real(self.points[k][1][indperm]), 552 -np.real(self.points[k][0][indperm]), 553 np.real(self.points[k][2][indperm]), 554 s=pnt_size, alpha=1, edgecolor='none', 555 zdir='z', color=pnt_colors, 556 marker=marker) 557 558 elif self.point_style[k] == 'l': 559 color = self.point_color[np.mod(k, len(self.point_color))] 560 self.axes.plot(np.real(self.points[k][1]), 561 -np.real(self.points[k][0]), 562 np.real(self.points[k][2]), 563 alpha=0.75, zdir='z', 564 color=color) 565 566 def plot_annotations(self): 567 """Plot annotations""" 568 # -X and Y data are switched for plotting purposes 569 for annotation in self.annotations: 570 vec = annotation['position'] 571 opts = {'fontsize': self.font_size, 572 'color': self.font_color, 573 'horizontalalignment': 'center', 574 'verticalalignment': 'center'} 575 opts.update(annotation['opts']) 576 self.axes.text(vec[1], -vec[0], vec[2], 577 annotation['text'], **opts) 578 579 def 
show(self, title=''): 580 """ 581 Display Bloch sphere and corresponding data sets. 582 """ 583 self.render(self.fig, self.axes, title=title) 584 if self.fig: 585 plt.show(self.fig) 586 587 def save(self, name=None, output='png', dirc=None): 588 """Saves Bloch sphere to file of type ``format`` in directory ``dirc``. 589 Args: 590 name (str): 591 Name of saved image. Must include path and format as well. 592 i.e. '/Users/Paul/Desktop/bloch.png' 593 This overrides the 'format' and 'dirc' arguments. 594 output (str): 595 Format of output image. 596 dirc (str): 597 Directory for output images. Defaults to current working directory. 598 """ 599 600 self.render(self.fig, self.axes) 601 if dirc: 602 if not os.path.isdir(os.getcwd() + "/" + str(dirc)): 603 os.makedirs(os.getcwd() + "/" + str(dirc)) 604 if name is None: 605 if dirc: 606 self.fig.savefig(os.getcwd() + "/" + str(dirc) + '/bloch_' + 607 str(self.savenum) + '.' + output) 608 else: 609 self.fig.savefig(os.getcwd() + '/bloch_' + str(self.savenum) + 610 '.' + output) 611 else: 612 self.fig.savefig(name) 613 self.savenum += 1 614 if self.fig: 615 plt.close(self.fig) 616 617 618 def _hide_tick_lines_and_labels(axis): 619 """ 620 Set visible property of ticklines and ticklabels of an axis to False 621 """ 622 for item in axis.get_ticklines() + axis.get_ticklabels(): 623 item.set_visible(False) 624 [end of qiskit/tools/visualization/bloch.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Qiskit/qiskit
195a38c295de248008c737147ff6db7dc2459c9c
Changing style in matplotlib circuit_drawer changes the circuit layout <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues --> ### Informations - **Qiskit Terra version**: current master - **Python version**: 3.7 - **Operating system**: OSX ### What is the current behavior? The output from the Matplotlib circuit drawer changes when the `style` kwarg is modified ### Steps to reproduce the problem `circuit_drawer(circ)` vs `circuit_drawer(circ,style=qx_color_scheme())`: (Here I show only the first pane of the circuit) <img width="964" alt="screen shot 2018-09-25 at 5 44 48 am" src="https://user-images.githubusercontent.com/1249193/46006709-298c2080-c086-11e8-9c62-f05c5a91153e.png"> <img width="965" alt="screen shot 2018-09-25 at 5 45 33 am" src="https://user-images.githubusercontent.com/1249193/46006778-4de7fd00-c086-11e8-897d-433f48219b98.png"> ### What is the expected behavior? I would assume that the style does not change the circuit layout in the figure. ### Suggested solutions
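Until a fix lands, one workaround is suggested by the style definition itself: `qx_color_scheme()` ships `"compress": False` while the default style compresses the layout, so overriding that key restores the default arrangement. The snippet below is a sketch that assumes `qx_color_scheme()` returns a plain dict (as the source suggests) and that both helpers are importable from `qiskit.tools.visualization` in this version:

```python
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from qiskit.tools.visualization import circuit_drawer, qx_color_scheme

q = QuantumRegister(2)
c = ClassicalRegister(2)
circ = QuantumCircuit(q, c)
circ.h(q[0])
circ.cx(q[0], q[1])
circ.measure(q, c)

style = qx_color_scheme()   # style dict consumed by the matplotlib drawer
style["compress"] = True    # match the layout of the default style
circuit_drawer(circ, style=style)
```

The recorded fix takes the complementary route and flips the `compress` entry inside `qx_color_scheme` itself, so both code paths lay the circuit out the same way by default.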
Hi. @nkanazawa1989 and I will check the code.
2018-10-01T08:08:08Z
<patch> diff --git a/qiskit/tools/visualization/_circuit_visualization.py b/qiskit/tools/visualization/_circuit_visualization.py --- a/qiskit/tools/visualization/_circuit_visualization.py +++ b/qiskit/tools/visualization/_circuit_visualization.py @@ -32,7 +32,6 @@ patches as patches, pyplot as plt from qiskit._qiskiterror import QISKitError -from qiskit._quantumcircuit import QuantumCircuit from qiskit.wrapper import load_qasm_file from qiskit.dagcircuit import DAGCircuit from qiskit.tools.visualization._error import VisualizationError @@ -243,7 +242,7 @@ def qx_color_scheme(): "cregbundle": False, "plotbarrier": False, "showindex": False, - "compress": False, + "compress": True, "margin": [2.0, 0.0, 0.0, 0.3], "creglinestyle": "solid", "reversebits": False @@ -1464,7 +1463,7 @@ def load_qasm_file(self, filename): circuit = load_qasm_file(filename, name='draw', basis_gates=self._basis) self.parse_circuit(circuit) - def parse_circuit(self, circuit: QuantumCircuit): + def parse_circuit(self, circuit): dag_circuit = DAGCircuit.fromQuantumCircuit(circuit, expand_gates=False) self._ast = transpile(dag_circuit, basis_gates=self._basis, format='json') self._registers() </patch>
[]
[]
pantsbuild__pants-11400
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Ctrl+C delayed when pantsd is disabled #11223 broke `Ctrl+C` when `pantsd` is disabled, because we no longer explicitly poll the Python code inside the `Scheduler::execute` loop. </issue> <code> [start of README.md] 1 # Pants Build System 2 3 Pants is a scalable build system for _monorepos_: codebases containing 4 multiple projects, often using multiple programming languages and frameworks, 5 in a single unified code repository. 6 7 Some noteworthy features include: 8 9 * Explicit dependency modeling. 10 * Fine-grained invalidation. 11 * Shared result caching. 12 * Concurrent execution. 13 * Remote execution. 14 * Unified interface for multiple tools and languages. 15 * Extensibility and customizability via a plugin API. 16 17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/). 18 19 We release to [PyPI](https://pypi.org/pypi) 20 [![version](https://img.shields.io/pypi/v/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 21 [![license](https://img.shields.io/pypi/l/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 22 23 We use [Travis CI](https://travis-ci.org) to verify the build 24 [![Build Status](https://travis-ci.com/pantsbuild/pants.svg?branch=master)](https://travis-ci.com/pantsbuild/pants/branches). 25 26 # Requirements 27 28 To run Pants, you need: 29 30 * Linux or macOS. 31 * Python 3.7+ discoverable on your `PATH`. 32 * A C compiler, system headers and Python headers (to compile native Python modules). 33 * Internet access (so that Pants can fully bootstrap itself). 34 [end of README.md] [start of src/python/pants/base/exception_sink.py] 1 # Copyright 2018 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 import datetime 5 import faulthandler 6 import logging 7 import os 8 import signal 9 import sys 10 import threading 11 import traceback 12 from contextlib import contextmanager 13 from typing import Callable, Dict, Iterator 14 15 import psutil 16 import setproctitle 17 18 from pants.engine.internals.native_engine import session_cancel_all 19 from pants.util.dirutil import safe_mkdir, safe_open 20 from pants.util.osutil import Pid 21 22 logger = logging.getLogger(__name__) 23 24 25 class SignalHandler: 26 """A specification for how to handle a fixed set of nonfatal signals. 27 28 This is subclassed and registered with ExceptionSink.reset_signal_handler() whenever the signal 29 handling behavior is modified for different pants processes, for example in the remote client when 30 pantsd is enabled. The default behavior is to exit "gracefully" by leaving a detailed log of which 31 signal was received, then exiting with failure. 32 33 Note that the terminal will convert a ctrl-c from the user into a SIGINT. 34 """ 35 36 @property 37 def signal_handler_mapping(self) -> Dict[signal.Signals, Callable]: 38 """A dict mapping (signal number) -> (a method handling the signal).""" 39 # Could use an enum here, but we never end up doing any matching on the specific signal value, 40 # instead just iterating over the registered signals to set handlers, so a dict is probably 41 # better. 
42 return { 43 signal.SIGINT: self._handle_sigint_if_enabled, 44 signal.SIGQUIT: self.handle_sigquit, 45 signal.SIGTERM: self.handle_sigterm, 46 } 47 48 def __init__(self, *, pantsd_instance: bool): 49 self._ignore_sigint_lock = threading.Lock() 50 self._ignoring_sigint = False 51 self._pantsd_instance = pantsd_instance 52 53 def _handle_sigint_if_enabled(self, signum: int, _frame): 54 with self._ignore_sigint_lock: 55 if not self._ignoring_sigint: 56 self.handle_sigint(signum, _frame) 57 58 def _toggle_ignoring_sigint(self, toggle: bool) -> None: 59 if not self._pantsd_instance: 60 with self._ignore_sigint_lock: 61 self._ignoring_sigint = toggle 62 63 def _send_signal_to_children(self, received_signal: int, signame: str) -> None: 64 """Send a signal to any children of this process in order. 65 66 Pants may have spawned multiple subprocesses via Python or Rust. Upon receiving a signal, 67 this method is invoked to propagate the signal to all children, regardless of how they were 68 spawned. 69 """ 70 71 self_process = psutil.Process() 72 children = self_process.children() 73 logger.debug(f"Sending signal {signame} ({received_signal}) to child processes: {children}") 74 for child_process in children: 75 child_process.send_signal(received_signal) 76 77 def handle_sigint(self, signum: int, _frame): 78 session_cancel_all() 79 self._send_signal_to_children(signum, "SIGINT") 80 raise KeyboardInterrupt("User interrupted execution with control-c!") 81 82 # TODO(#7406): figure out how to let sys.exit work in a signal handler instead of having to raise 83 # this exception! 84 class SignalHandledNonLocalExit(Exception): 85 """Raised in handlers for non-fatal signals to overcome Python limitations. 86 87 When waiting on a subprocess and in a signal handler, sys.exit appears to be ignored, and 88 causes the signal handler to return. We want to (eventually) exit after these signals, not 89 ignore them, so we raise this exception instead and check it in our sys.excepthook override. 90 """ 91 92 def __init__(self, signum, signame): 93 self.signum = signum 94 self.signame = signame 95 self.traceback_lines = traceback.format_stack() 96 super(SignalHandler.SignalHandledNonLocalExit, self).__init__() 97 98 if "I/O operation on closed file" in self.traceback_lines: 99 logger.debug( 100 "SignalHandledNonLocalExit: unexpected appearance of " 101 "'I/O operation on closed file' in traceback" 102 ) 103 104 def handle_sigquit(self, signum, _frame): 105 session_cancel_all() 106 self._send_signal_to_children(signum, "SIGQUIT") 107 raise self.SignalHandledNonLocalExit(signum, "SIGQUIT") 108 109 def handle_sigterm(self, signum, _frame): 110 session_cancel_all() 111 self._send_signal_to_children(signum, "SIGTERM") 112 raise self.SignalHandledNonLocalExit(signum, "SIGTERM") 113 114 115 class ExceptionSink: 116 """A mutable singleton object representing where exceptions should be logged to. 117 118 The ExceptionSink should be installed in any process that is running Pants @rules via the 119 engine. Notably, this does _not_ include the pantsd client, which does its own signal handling 120 directly in order to forward information to the pantsd server. 121 """ 122 123 # NB: see the bottom of this file where we call reset_log_location() and other mutators in order 124 # to properly setup global state. 125 _log_dir = None 126 127 # Where to log stacktraces to in a SIGUSR2 handler. 
128 _interactive_output_stream = None 129 130 # An instance of `SignalHandler` which is invoked to handle a static set of specific nonfatal 131 # signals (these signal handlers are allowed to make pants exit, but unlike SIGSEGV they don't 132 # need to exit immediately). 133 _signal_handler: SignalHandler = SignalHandler(pantsd_instance=False) 134 135 # These persistent open file descriptors are kept so the signal handler can do almost no work 136 # (and lets faulthandler figure out signal safety). 137 _pid_specific_error_fileobj = None 138 _shared_error_fileobj = None 139 140 def __new__(cls, *args, **kwargs): 141 raise TypeError( 142 "Instances of {} are not allowed to be constructed! Call install() instead.".format( 143 cls.__name__ 144 ) 145 ) 146 147 class ExceptionSinkError(Exception): 148 pass 149 150 @classmethod 151 def install(cls, log_location: str, pantsd_instance: bool) -> None: 152 """Setup global state for this process, such as signal handlers and sys.excepthook.""" 153 154 # Set the log location for writing logs before bootstrap options are parsed. 155 cls.reset_log_location(log_location) 156 157 # NB: Mutate process-global state! 158 sys.excepthook = ExceptionSink.log_exception 159 160 # Setup a default signal handler. 161 cls.reset_signal_handler(SignalHandler(pantsd_instance=pantsd_instance)) 162 163 # All reset_* methods are ~idempotent! 164 @classmethod 165 def reset_log_location(cls, new_log_location: str) -> None: 166 """Re-acquire file handles to error logs based in the new location. 167 168 Class state: 169 - Overwrites `cls._log_dir`, `cls._pid_specific_error_fileobj`, and 170 `cls._shared_error_fileobj`. 171 OS state: 172 - May create a new directory. 173 - Overwrites signal handlers for many fatal and non-fatal signals (but not SIGUSR2). 174 175 :raises: :class:`ExceptionSink.ExceptionSinkError` if the directory does not exist or is not 176 writable. 177 """ 178 # We could no-op here if the log locations are the same, but there's no reason not to have the 179 # additional safety of re-acquiring file descriptors each time (and erroring out early if the 180 # location is no longer writable). 181 try: 182 safe_mkdir(new_log_location) 183 except Exception as e: 184 raise cls.ExceptionSinkError( 185 "The provided log location path at '{}' is not writable or could not be created: {}.".format( 186 new_log_location, str(e) 187 ), 188 e, 189 ) 190 191 pid = os.getpid() 192 pid_specific_log_path = cls.exceptions_log_path(for_pid=pid, in_dir=new_log_location) 193 shared_log_path = cls.exceptions_log_path(in_dir=new_log_location) 194 assert pid_specific_log_path != shared_log_path 195 try: 196 pid_specific_error_stream = safe_open(pid_specific_log_path, mode="w") 197 shared_error_stream = safe_open(shared_log_path, mode="a") 198 except Exception as e: 199 raise cls.ExceptionSinkError( 200 "Error opening fatal error log streams for log location '{}': {}".format( 201 new_log_location, str(e) 202 ) 203 ) 204 205 # NB: mutate process-global state! 206 if faulthandler.is_enabled(): 207 logger.debug("re-enabling faulthandler") 208 # Call Py_CLEAR() on the previous error stream: 209 # https://github.com/vstinner/faulthandler/blob/master/faulthandler.c 210 faulthandler.disable() 211 # Send a stacktrace to this file if interrupted by a fatal error. 212 faulthandler.enable(file=pid_specific_error_stream, all_threads=True) 213 214 # NB: mutate the class variables! 
215 cls._log_dir = new_log_location 216 cls._pid_specific_error_fileobj = pid_specific_error_stream 217 cls._shared_error_fileobj = shared_error_stream 218 219 @classmethod 220 def exceptions_log_path(cls, for_pid=None, in_dir=None): 221 """Get the path to either the shared or pid-specific fatal errors log file.""" 222 if for_pid is None: 223 intermediate_filename_component = "" 224 else: 225 assert isinstance(for_pid, Pid) 226 intermediate_filename_component = ".{}".format(for_pid) 227 in_dir = in_dir or cls._log_dir 228 return os.path.join( 229 in_dir, ".pids", "exceptions{}.log".format(intermediate_filename_component) 230 ) 231 232 @classmethod 233 def _log_exception(cls, msg): 234 """Try to log an error message to this process's error log and the shared error log. 235 236 NB: Doesn't raise (logs an error instead). 237 """ 238 pid = os.getpid() 239 fatal_error_log_entry = cls._format_exception_message(msg, pid) 240 241 # We care more about this log than the shared log, so write to it first. 242 try: 243 cls._try_write_with_flush(cls._pid_specific_error_fileobj, fatal_error_log_entry) 244 except Exception as e: 245 logger.error( 246 "Error logging the message '{}' to the pid-specific file handle for {} at pid {}:\n{}".format( 247 msg, cls._log_dir, pid, e 248 ) 249 ) 250 251 # Write to the shared log. 252 try: 253 # TODO: we should probably guard this against concurrent modification by other pants 254 # subprocesses somehow. 255 cls._try_write_with_flush(cls._shared_error_fileobj, fatal_error_log_entry) 256 except Exception as e: 257 logger.error( 258 "Error logging the message '{}' to the shared file handle for {} at pid {}:\n{}".format( 259 msg, cls._log_dir, pid, e 260 ) 261 ) 262 263 @classmethod 264 def _try_write_with_flush(cls, fileobj, payload): 265 """This method is here so that it can be patched to simulate write errors. 266 267 This is because mock can't patch primitive objects like file objects. 268 """ 269 fileobj.write(payload) 270 fileobj.flush() 271 272 @classmethod 273 def reset_signal_handler(cls, signal_handler: SignalHandler) -> SignalHandler: 274 """Given a SignalHandler, uses the `signal` std library functionality to set the pants 275 process's signal handlers to those specified in the object. 276 277 Note that since this calls `signal.signal()`, it will crash if not the main thread. Returns 278 the previously-registered signal handler. 279 """ 280 281 for signum, handler in signal_handler.signal_handler_mapping.items(): 282 signal.signal(signum, handler) 283 # Retry any system calls interrupted by any of the signals we just installed handlers for 284 # (instead of having them raise EINTR). siginterrupt(3) says this is the default behavior on 285 # Linux and OSX. 286 signal.siginterrupt(signum, False) 287 288 previous_signal_handler = cls._signal_handler 289 cls._signal_handler = signal_handler 290 291 return previous_signal_handler 292 293 @classmethod 294 @contextmanager 295 def trapped_signals(cls, new_signal_handler: SignalHandler) -> Iterator[None]: 296 """A contextmanager which temporarily overrides signal handling. 297 298 NB: This method calls signal.signal(), which will crash if not called from the main thread! 
299 """ 300 previous_signal_handler = cls.reset_signal_handler(new_signal_handler) 301 try: 302 yield 303 finally: 304 cls.reset_signal_handler(previous_signal_handler) 305 306 @classmethod 307 @contextmanager 308 def ignoring_sigint(cls) -> Iterator[None]: 309 """This method provides a context that temporarily disables responding to the SIGINT signal 310 sent by a Ctrl-C in the terminal. 311 312 We currently only use this to implement disabling catching SIGINT while an 313 InteractiveProcess is running (where we want that process to catch it), and only when pantsd 314 is not enabled (if pantsd is enabled, the client will actually catch SIGINT and forward it 315 to the server, so we don't want the server process to ignore it. 316 """ 317 318 try: 319 cls._signal_handler._toggle_ignoring_sigint(True) 320 yield 321 finally: 322 cls._signal_handler._toggle_ignoring_sigint(False) 323 324 @classmethod 325 def _iso_timestamp_for_now(cls): 326 return datetime.datetime.now().isoformat() 327 328 # NB: This includes a trailing newline, but no leading newline. 329 _EXCEPTION_LOG_FORMAT = """\ 330 timestamp: {timestamp} 331 process title: {process_title} 332 sys.argv: {args} 333 pid: {pid} 334 {message} 335 """ 336 337 @classmethod 338 def _format_exception_message(cls, msg, pid): 339 return cls._EXCEPTION_LOG_FORMAT.format( 340 timestamp=cls._iso_timestamp_for_now(), 341 process_title=setproctitle.getproctitle(), 342 args=sys.argv, 343 pid=pid, 344 message=msg, 345 ) 346 347 _traceback_omitted_default_text = "(backtrace omitted)" 348 349 @classmethod 350 def _format_traceback(cls, traceback_lines, should_print_backtrace): 351 if should_print_backtrace: 352 traceback_string = "\n{}".format("".join(traceback_lines)) 353 else: 354 traceback_string = " {}".format(cls._traceback_omitted_default_text) 355 return traceback_string 356 357 _UNHANDLED_EXCEPTION_LOG_FORMAT = """\ 358 Exception caught: ({exception_type}){backtrace} 359 Exception message: {exception_message}{maybe_newline} 360 """ 361 362 @classmethod 363 def _format_unhandled_exception_log(cls, exc, tb, add_newline, should_print_backtrace): 364 exc_type = type(exc) 365 exception_full_name = "{}.{}".format(exc_type.__module__, exc_type.__name__) 366 exception_message = str(exc) if exc else "(no message)" 367 maybe_newline = "\n" if add_newline else "" 368 return cls._UNHANDLED_EXCEPTION_LOG_FORMAT.format( 369 exception_type=exception_full_name, 370 backtrace=cls._format_traceback( 371 traceback_lines=traceback.format_tb(tb), 372 should_print_backtrace=should_print_backtrace, 373 ), 374 exception_message=exception_message, 375 maybe_newline=maybe_newline, 376 ) 377 378 @classmethod 379 def log_exception(cls, exc_class=None, exc=None, tb=None, add_newline=False): 380 """Logs an unhandled exception to a variety of locations.""" 381 exc_class = exc_class or sys.exc_info()[0] 382 exc = exc or sys.exc_info()[1] 383 tb = tb or sys.exc_info()[2] 384 385 # This exception was raised by a signal handler with the intent to exit the program. 386 if exc_class == SignalHandler.SignalHandledNonLocalExit: 387 return cls._handle_signal_gracefully(exc.signum, exc.signame, exc.traceback_lines) 388 389 extra_err_msg = None 390 try: 391 # Always output the unhandled exception details into a log file, including the 392 # traceback. 
393 exception_log_entry = cls._format_unhandled_exception_log( 394 exc, tb, add_newline, should_print_backtrace=True 395 ) 396 cls._log_exception(exception_log_entry) 397 except Exception as e: 398 extra_err_msg = "Additional error logging unhandled exception {}: {}".format(exc, e) 399 logger.error(extra_err_msg) 400 401 # The rust logger implementation will have its own stacktrace, but at import time, we want 402 # to be able to see any stacktrace to know where the error is being raised, so we reproduce 403 # it here. 404 exception_log_entry = cls._format_unhandled_exception_log( 405 exc, tb, add_newline, should_print_backtrace=True 406 ) 407 logger.exception(exception_log_entry) 408 409 @classmethod 410 def _handle_signal_gracefully(cls, signum, signame, traceback_lines): 411 """Signal handler for non-fatal signals which raises or logs an error.""" 412 413 def gen_formatted(formatted_traceback: str) -> str: 414 return f"Signal {signum} ({signame}) was raised. Exiting with failure.{formatted_traceback}" 415 416 # Extract the stack, and format an entry to be written to the exception log. 417 formatted_traceback = cls._format_traceback( 418 traceback_lines=traceback_lines, should_print_backtrace=True 419 ) 420 421 signal_error_log_entry = gen_formatted(formatted_traceback) 422 423 # TODO: determine the appropriate signal-safe behavior here (to avoid writing to our file 424 # descriptors reentrantly, which raises an IOError). 425 # This method catches any exceptions raised within it. 426 cls._log_exception(signal_error_log_entry) 427 428 # Create a potentially-abbreviated traceback for the terminal or other interactive stream. 429 formatted_traceback_for_terminal = cls._format_traceback( 430 traceback_lines=traceback_lines, 431 should_print_backtrace=True, 432 ) 433 434 terminal_log_entry = gen_formatted(formatted_traceback_for_terminal) 435 436 # Print the output via standard logging. 437 logger.error(terminal_log_entry) 438 [end of src/python/pants/base/exception_sink.py] [start of src/python/pants/bin/daemon_pants_runner.py] 1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 
3 4 import logging 5 import os 6 import sys 7 import time 8 from contextlib import contextmanager 9 from threading import Lock 10 from typing import Dict, Tuple 11 12 from pants.base.exiter import PANTS_FAILED_EXIT_CODE, ExitCode 13 from pants.bin.local_pants_runner import LocalPantsRunner 14 from pants.engine.internals.native import Native, RawFdRunner 15 from pants.engine.internals.native_engine import PySessionCancellationLatch 16 from pants.init.logging import ( 17 clear_logging_handlers, 18 get_logging_handlers, 19 set_logging_handlers, 20 setup_logging, 21 ) 22 from pants.init.util import clean_global_runtime_state 23 from pants.option.options_bootstrapper import OptionsBootstrapper 24 from pants.pantsd.pants_daemon_core import PantsDaemonCore 25 from pants.util.contextutil import argv_as, hermetic_environment_as, stdio_as 26 27 logger = logging.getLogger(__name__) 28 29 30 class ExclusiveRequestTimeout(Exception): 31 """Represents a timeout while waiting for another request to complete.""" 32 33 34 class DaemonPantsRunner(RawFdRunner): 35 """A RawFdRunner (callable) that will be called for each client request to Pantsd.""" 36 37 def __init__(self, core: PantsDaemonCore) -> None: 38 super().__init__() 39 self._core = core 40 self._run_lock = Lock() 41 42 @staticmethod 43 def _send_stderr(stderr_fd: int, msg: str) -> None: 44 """Used to send stderr on a raw filehandle _before_ stdio replacement. 45 46 After stdio replacement has happened via `stdio_as` (which mutates sys.std*, and thus cannot 47 happen until the request lock has been acquired), sys.std* should be used directly. 48 """ 49 with os.fdopen(stderr_fd, mode="w", closefd=False) as stderr: 50 print(msg, file=stderr, flush=True) 51 52 @contextmanager 53 def _one_run_at_a_time( 54 self, stderr_fd: int, cancellation_latch: PySessionCancellationLatch, timeout: float 55 ): 56 """Acquires exclusive access within the daemon. 57 58 Periodically prints a message on the given stderr_fd while exclusive access cannot be 59 acquired. 60 61 TODO: This method will be removed as part of #7654, so it currently polls the lock and 62 cancellation latch rather than waiting for both of them asynchronously, which would be a bit 63 cleaner. 64 """ 65 66 render_timeout = 5 67 should_poll_forever = timeout <= 0 68 start = time.time() 69 render_deadline = start + render_timeout 70 deadline = None if should_poll_forever else start + timeout 71 72 def should_keep_polling(now): 73 return not cancellation_latch.is_cancelled() and (not deadline or deadline > now) 74 75 acquired = self._run_lock.acquire(blocking=False) 76 if not acquired: 77 # If we don't acquire immediately, send an explanation. 78 length = "forever" if should_poll_forever else "up to {} seconds".format(timeout) 79 self._send_stderr( 80 stderr_fd, 81 f"Another pants invocation is running. 
Will wait {length} for it to finish before giving up.\n" 82 "If you don't want to wait for the first run to finish, please press Ctrl-C and run " 83 "this command with PANTS_CONCURRENT=True in the environment.\n", 84 ) 85 while True: 86 now = time.time() 87 if acquired: 88 try: 89 yield 90 break 91 finally: 92 self._run_lock.release() 93 elif should_keep_polling(now): 94 if now > render_deadline: 95 self._send_stderr( 96 stderr_fd, 97 f"Waiting for invocation to finish (waited for {int(now - start)}s so far)...\n", 98 ) 99 render_deadline = now + render_timeout 100 acquired = self._run_lock.acquire(blocking=True, timeout=0.1) 101 else: 102 raise ExclusiveRequestTimeout( 103 "Timed out while waiting for another pants invocation to finish." 104 ) 105 106 @contextmanager 107 def _stderr_logging(self, global_bootstrap_options): 108 """Temporarily replaces existing handlers (ie, the pantsd handler) with a stderr handler. 109 110 In the context of pantsd, there will be an existing handler for the pantsd log, which we 111 temporarily replace. Making them additive would cause per-run logs to go to pantsd, which 112 we don't want. 113 114 TODO: It would be good to handle logging destinations entirely via the threadlocal state 115 rather than via handler mutations. 116 """ 117 handlers = get_logging_handlers() 118 try: 119 clear_logging_handlers() 120 Native().override_thread_logging_destination_to_just_stderr() 121 setup_logging(global_bootstrap_options, stderr_logging=True) 122 yield 123 finally: 124 Native().override_thread_logging_destination_to_just_pantsd() 125 set_logging_handlers(handlers) 126 127 def single_daemonized_run( 128 self, working_dir: str, cancellation_latch: PySessionCancellationLatch 129 ) -> ExitCode: 130 """Run a single daemonized run of Pants. 131 132 All aspects of the `sys` global should already have been replaced in `__call__`, so this 133 method should not need any special handling for the fact that it's running in a proxied 134 environment. 135 """ 136 137 # Capture the client's start time, which we propagate here in order to get an accurate 138 # view of total time. 139 env_start_time = os.environ.get("PANTSD_RUNTRACKER_CLIENT_START_TIME", None) 140 start_time = float(env_start_time) if env_start_time else time.time() 141 142 # Clear global mutable state before entering `LocalPantsRunner`. Note that we use 143 # `sys.argv` and `os.environ`, since they have been mutated to maintain the illusion 144 # of a local run: once we allow for concurrent runs, this information should be 145 # propagated down from the caller. 146 # see https://github.com/pantsbuild/pants/issues/7654 147 clean_global_runtime_state() 148 options_bootstrapper = OptionsBootstrapper.create( 149 env=os.environ, args=sys.argv, allow_pantsrc=True 150 ) 151 bootstrap_options = options_bootstrapper.bootstrap_options 152 global_bootstrap_options = bootstrap_options.for_global_scope() 153 154 # Run using the pre-warmed Session. 
155 with self._stderr_logging(global_bootstrap_options): 156 try: 157 scheduler = self._core.prepare_scheduler(options_bootstrapper) 158 runner = LocalPantsRunner.create( 159 os.environ, 160 options_bootstrapper, 161 scheduler=scheduler, 162 cancellation_latch=cancellation_latch, 163 ) 164 return runner.run(start_time) 165 except Exception as e: 166 logger.exception(e) 167 return PANTS_FAILED_EXIT_CODE 168 except KeyboardInterrupt: 169 print("Interrupted by user.\n", file=sys.stderr) 170 return PANTS_FAILED_EXIT_CODE 171 172 def __call__( 173 self, 174 command: str, 175 args: Tuple[str, ...], 176 env: Dict[str, str], 177 working_directory: bytes, 178 cancellation_latch: PySessionCancellationLatch, 179 stdin_fd: int, 180 stdout_fd: int, 181 stderr_fd: int, 182 ) -> ExitCode: 183 request_timeout = float(env.get("PANTSD_REQUEST_TIMEOUT_LIMIT", -1)) 184 # NB: Order matters: we acquire a lock before mutating either `sys.std*`, `os.environ`, etc. 185 with self._one_run_at_a_time( 186 stderr_fd, 187 cancellation_latch=cancellation_latch, 188 timeout=request_timeout, 189 ), stdio_as( 190 stdin_fd=stdin_fd, stdout_fd=stdout_fd, stderr_fd=stderr_fd 191 ), hermetic_environment_as( 192 **env 193 ), argv_as( 194 (command,) + args 195 ): 196 # NB: Run implements exception handling, so only the most primitive errors will escape 197 # this function, where they will be logged to the pantsd.log by the server. 198 logger.info(f"handling request: `{' '.join(args)}`") 199 try: 200 return self.single_daemonized_run(working_directory.decode(), cancellation_latch) 201 finally: 202 logger.info(f"request completed: `{' '.join(args)}`") 203 [end of src/python/pants/bin/daemon_pants_runner.py] [start of src/python/pants/bin/remote_pants_runner.py] 1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 import logging 5 import signal 6 import sys 7 import termios 8 import time 9 from contextlib import contextmanager 10 from typing import List, Mapping 11 12 from pants.base.exiter import ExitCode 13 from pants.engine.internals.native import Native 14 from pants.engine.internals.native_engine import PyExecutor 15 from pants.init.options_initializer import OptionsInitializer 16 from pants.nailgun.nailgun_protocol import NailgunProtocol 17 from pants.option.options_bootstrapper import OptionsBootstrapper 18 from pants.pantsd.pants_daemon_client import PantsDaemonClient 19 20 logger = logging.getLogger(__name__) 21 22 23 @contextmanager 24 def interrupts_ignored(): 25 """Disables Python's default interrupt handling.""" 26 old_handler = signal.signal(signal.SIGINT, handler=lambda s, f: None) 27 try: 28 yield 29 finally: 30 signal.signal(signal.SIGINT, old_handler) 31 32 33 class STTYSettings: 34 """Saves/restores stty settings.""" 35 36 @classmethod 37 @contextmanager 38 def preserved(cls): 39 """Run potentially stty-modifying operations, e.g., REPL execution, in this 40 contextmanager.""" 41 inst = cls() 42 inst.save_tty_flags() 43 try: 44 yield 45 finally: 46 inst.restore_tty_flags() 47 48 def __init__(self): 49 self._tty_flags = None 50 51 def save_tty_flags(self): 52 # N.B. `stty(1)` operates against stdin. 
53 try: 54 self._tty_flags = termios.tcgetattr(sys.stdin.fileno()) 55 except termios.error as e: 56 logger.debug("masking tcgetattr exception: {!r}".format(e)) 57 58 def restore_tty_flags(self): 59 if self._tty_flags: 60 try: 61 termios.tcsetattr(sys.stdin.fileno(), termios.TCSANOW, self._tty_flags) 62 except termios.error as e: 63 logger.debug("masking tcsetattr exception: {!r}".format(e)) 64 65 66 class RemotePantsRunner: 67 """A thin client variant of PantsRunner.""" 68 69 class Fallback(Exception): 70 """Raised when fallback to an alternate execution mode is requested.""" 71 72 class Terminated(Exception): 73 """Raised when an active run is terminated mid-flight.""" 74 75 def __init__( 76 self, 77 args: List[str], 78 env: Mapping[str, str], 79 options_bootstrapper: OptionsBootstrapper, 80 ) -> None: 81 """ 82 :param args: The arguments (e.g. sys.argv) for this run. 83 :param env: The environment (e.g. os.environ) for this run. 84 :param options_bootstrapper: The bootstrap options. 85 """ 86 self._start_time = time.time() 87 self._args = args 88 self._env = env 89 self._options_bootstrapper = options_bootstrapper 90 self._bootstrap_options = options_bootstrapper.bootstrap_options 91 self._client = PantsDaemonClient(self._bootstrap_options) 92 93 def run(self) -> ExitCode: 94 """Starts up a pantsd instance if one is not already running, then connects to it via 95 nailgun.""" 96 97 pantsd_handle = self._client.maybe_launch() 98 logger.debug(f"Connecting to pantsd on port {pantsd_handle.port}") 99 100 return self._connect_and_execute(pantsd_handle) 101 102 def _connect_and_execute(self, pantsd_handle: PantsDaemonClient.Handle) -> ExitCode: 103 native = Native() 104 105 global_options = self._bootstrap_options.for_global_scope() 106 executor = PyExecutor(*OptionsInitializer.compute_executor_arguments(global_options)) 107 108 # Merge the nailgun TTY capability environment variables with the passed environment dict. 109 ng_env = NailgunProtocol.ttynames_to_env(sys.stdin, sys.stdout.buffer, sys.stderr.buffer) 110 modified_env = { 111 **self._env, 112 **ng_env, 113 "PANTSD_RUNTRACKER_CLIENT_START_TIME": str(self._start_time), 114 "PANTSD_REQUEST_TIMEOUT_LIMIT": str( 115 global_options.pantsd_timeout_when_multiple_invocations 116 ), 117 } 118 119 command = self._args[0] 120 args = self._args[1:] 121 122 retries = 3 123 attempt = 1 124 while True: 125 port = pantsd_handle.port 126 logger.debug(f"Connecting to pantsd on port {port} attempt {attempt}/{retries}") 127 128 # We preserve TTY settings since the server might write directly to the TTY, and we'd like 129 # to clean up any side effects before exiting. 130 # 131 # We ignore keyboard interrupts because the nailgun client will handle them. 132 with STTYSettings.preserved(), interrupts_ignored(): 133 try: 134 return native.new_nailgun_client(executor=executor, port=port).execute( 135 command, args, modified_env 136 ) 137 138 # NailgunConnectionException represents a failure connecting to pantsd, so we retry 139 # up to the retry limit. 140 except native.lib.NailgunConnectionException as e: 141 if attempt > retries: 142 raise self.Fallback(e) 143 144 # Wait one second before retrying 145 logger.warning(f"Pantsd was unresponsive on port {port}, retrying.") 146 time.sleep(1) 147 148 # One possible cause of the daemon being non-responsive during an attempt might be if a 149 # another lifecycle operation is happening concurrently (incl teardown). To account for 150 # this, we won't begin attempting restarts until at least 1 attempt has passed. 
151 if attempt > 1: 152 pantsd_handle = self._client.restart() 153 154 attempt += 1 155 [end of src/python/pants/bin/remote_pants_runner.py] [start of src/python/pants/engine/internals/native.py] 1 # Copyright 2020 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 import logging 5 import os 6 from typing import Dict, Iterable, List, Mapping, Optional, Tuple, Union, cast 7 8 from typing_extensions import Protocol 9 10 from pants.base.exiter import ExitCode 11 from pants.engine.fs import PathGlobs 12 from pants.engine.internals import native_engine 13 from pants.engine.internals.native_engine import ( 14 PyExecutionRequest, 15 PyExecutionStrategyOptions, 16 PyExecutor, 17 PyGeneratorResponseBreak, 18 PyGeneratorResponseGet, 19 PyGeneratorResponseGetMulti, 20 PyNailgunClient, 21 PyNailgunServer, 22 PyRemotingOptions, 23 PyScheduler, 24 PySession, 25 PySessionCancellationLatch, 26 PyTasks, 27 PyTypes, 28 ) 29 from pants.engine.internals.session import SessionValues 30 from pants.engine.rules import Get 31 from pants.engine.unions import union 32 from pants.util.logging import LogLevel 33 from pants.util.memo import memoized_property 34 from pants.util.meta import SingletonMetaclass 35 36 logger = logging.getLogger(__name__) 37 38 39 class Externs: 40 """Methods exposed from Python to Rust. 41 42 TODO: These could be implemented in Rust in `externs.rs` via the cpython API. 43 """ 44 45 def __init__(self, lib): 46 self.lib = lib 47 48 _do_raise_keyboardinterrupt_in = os.environ.get("_RAISE_KEYBOARDINTERRUPT_IN_EXTERNS", None) 49 50 def is_union(self, input_type): 51 """Return whether or not a type is a member of a union.""" 52 # NB: This check is exposed for testing error handling in CFFI methods. This code path should 53 # never be active in normal pants usage. 54 return union.is_instance(input_type) 55 56 def create_exception(self, msg): 57 """Given a utf8 message string, create an Exception object.""" 58 return Exception(msg) 59 60 def generator_send( 61 self, func, arg 62 ) -> Union[PyGeneratorResponseGet, PyGeneratorResponseGetMulti, PyGeneratorResponseBreak]: 63 """Given a generator, send it the given value and return a response.""" 64 if ( 65 self._do_raise_keyboardinterrupt_in 66 and self._do_raise_keyboardinterrupt_in in func.__name__ 67 ): 68 raise KeyboardInterrupt("ctrl-c interrupted execution of a ffi method!") 69 try: 70 res = func.send(arg) 71 72 if isinstance(res, Get): 73 # Get. 74 return PyGeneratorResponseGet( 75 product=res.output_type, 76 declared_subject=res.input_type, 77 subject=res.input, 78 ) 79 elif type(res) in (tuple, list): 80 # GetMulti. 81 return PyGeneratorResponseGetMulti( 82 gets=tuple( 83 PyGeneratorResponseGet( 84 product=get.output_type, 85 declared_subject=get.input_type, 86 subject=get.input, 87 ) 88 for get in res 89 ) 90 ) 91 else: 92 raise ValueError(f"internal engine error: unrecognized coroutine result {res}") 93 except StopIteration as e: 94 if not e.args: 95 raise 96 # This was a `return` from a coroutine, as opposed to a `StopIteration` raised 97 # by calling `next()` on an empty iterator. 98 return PyGeneratorResponseBreak(val=e.value) 99 100 101 class RawFdRunner(Protocol): 102 def __call__( 103 self, 104 command: str, 105 args: Tuple[str, ...], 106 env: Dict[str, str], 107 working_directory: bytes, 108 cancellation_latch: PySessionCancellationLatch, 109 stdin_fd: int, 110 stdout_fd: int, 111 stderr_fd: int, 112 ) -> ExitCode: 113 ... 
114 115 116 class Native(metaclass=SingletonMetaclass): 117 """Encapsulates fetching a platform specific version of the native portion of the engine.""" 118 119 def __init__(self): 120 self.externs = Externs(self.lib) 121 self.lib.externs_set(self.externs) 122 123 class BinaryLocationError(Exception): 124 pass 125 126 @memoized_property 127 def lib(self): 128 """Load the native engine as a python module.""" 129 return native_engine 130 131 def init_rust_logging( 132 self, 133 level: int, 134 log_show_rust_3rdparty: bool, 135 use_color: bool, 136 show_target: bool, 137 log_levels_by_target: Mapping[str, LogLevel], 138 message_regex_filters: Iterable[str], 139 ): 140 log_levels_as_ints = {k: v.level for k, v in log_levels_by_target.items()} 141 return self.lib.init_logging( 142 level, 143 log_show_rust_3rdparty, 144 use_color, 145 show_target, 146 log_levels_as_ints, 147 tuple(message_regex_filters), 148 ) 149 150 def set_per_run_log_path(self, path: Optional[str]) -> None: 151 """Instructs the logging code to also write emitted logs to a run-specific log file; or 152 disables writing to any run-specific file if `None` is passed.""" 153 self.lib.set_per_run_log_path(path) 154 155 def default_cache_path(self) -> str: 156 return cast(str, self.lib.default_cache_path()) 157 158 def setup_pantsd_logger(self, log_file_path): 159 return self.lib.setup_pantsd_logger(log_file_path) 160 161 def setup_stderr_logger(self): 162 return self.lib.setup_stderr_logger() 163 164 def write_log(self, msg: str, *, level: int, target: str): 165 """Proxy a log message to the Rust logging faculties.""" 166 return self.lib.write_log(msg, level, target) 167 168 def write_stdout(self, scheduler, session, msg: str, teardown_ui: bool): 169 if teardown_ui: 170 self.teardown_dynamic_ui(scheduler, session) 171 return self.lib.write_stdout(session, msg) 172 173 def write_stderr(self, scheduler, session, msg: str, teardown_ui: bool): 174 if teardown_ui: 175 self.teardown_dynamic_ui(scheduler, session) 176 return self.lib.write_stderr(session, msg) 177 178 def teardown_dynamic_ui(self, scheduler, session): 179 self.lib.teardown_dynamic_ui(scheduler, session) 180 181 def flush_log(self): 182 return self.lib.flush_log() 183 184 def override_thread_logging_destination_to_just_pantsd(self): 185 self.lib.override_thread_logging_destination("pantsd") 186 187 def override_thread_logging_destination_to_just_stderr(self): 188 self.lib.override_thread_logging_destination("stderr") 189 190 def match_path_globs(self, path_globs: PathGlobs, paths: Iterable[str]) -> Tuple[str, ...]: 191 """Return all paths that match the PathGlobs.""" 192 return tuple(self.lib.match_path_globs(path_globs, tuple(paths))) 193 194 def nailgun_server_await_shutdown(self, nailgun_server) -> None: 195 """Blocks until the server has shut down. 196 197 Raises an exception if the server exited abnormally 198 """ 199 self.lib.nailgun_server_await_shutdown(nailgun_server) 200 201 def new_nailgun_server( 202 self, executor: PyExecutor, port: int, runner: RawFdRunner 203 ) -> PyNailgunServer: 204 """Creates a nailgun server with a requested port. 205 206 Returns the server and the actual port it bound to. 
207 """ 208 return cast(PyNailgunServer, self.lib.nailgun_server_create(executor, port, runner)) 209 210 def new_nailgun_client(self, executor: PyExecutor, port: int) -> PyNailgunClient: 211 return cast(PyNailgunClient, self.lib.nailgun_client_create(executor, port)) 212 213 def new_tasks(self) -> PyTasks: 214 return PyTasks() 215 216 def new_execution_request(self) -> PyExecutionRequest: 217 return PyExecutionRequest() 218 219 def new_session( 220 self, 221 scheduler, 222 dynamic_ui: bool, 223 build_id, 224 session_values: SessionValues, 225 cancellation_latch: PySessionCancellationLatch, 226 ) -> PySession: 227 return PySession( 228 scheduler=scheduler, 229 should_render_ui=dynamic_ui, 230 build_id=build_id, 231 session_values=session_values, 232 cancellation_latch=cancellation_latch, 233 ) 234 235 def new_scheduler( 236 self, 237 tasks, 238 build_root: str, 239 local_store_dir: str, 240 local_execution_root_dir: str, 241 named_caches_dir: str, 242 ca_certs_path: Optional[str], 243 ignore_patterns: List[str], 244 use_gitignore: bool, 245 executor: PyExecutor, 246 execution_options, 247 types: PyTypes, 248 ) -> PyScheduler: 249 """Create and return a native Scheduler.""" 250 251 remoting_options = PyRemotingOptions( 252 execution_enable=execution_options.remote_execution, 253 store_servers=execution_options.remote_store_server, 254 execution_server=execution_options.remote_execution_server, 255 execution_process_cache_namespace=execution_options.process_execution_cache_namespace, 256 instance_name=execution_options.remote_instance_name, 257 root_ca_certs_path=execution_options.remote_ca_certs_path, 258 oauth_bearer_token_path=execution_options.remote_oauth_bearer_token_path, 259 store_thread_count=execution_options.remote_store_thread_count, 260 store_chunk_bytes=execution_options.remote_store_chunk_bytes, 261 store_chunk_upload_timeout=execution_options.remote_store_chunk_upload_timeout_seconds, 262 store_rpc_retries=execution_options.remote_store_rpc_retries, 263 store_connection_limit=execution_options.remote_store_connection_limit, 264 store_initial_timeout=execution_options.remote_store_initial_timeout, 265 store_timeout_multiplier=execution_options.remote_store_timeout_multiplier, 266 store_maximum_timeout=execution_options.remote_store_maximum_timeout, 267 execution_extra_platform_properties=tuple( 268 tuple(pair.split("=", 1)) 269 for pair in execution_options.remote_execution_extra_platform_properties 270 ), 271 execution_headers=tuple( 272 (k, v) for (k, v) in execution_options.remote_execution_headers.items() 273 ), 274 execution_overall_deadline_secs=execution_options.remote_execution_overall_deadline_secs, 275 ) 276 277 exec_stategy_opts = PyExecutionStrategyOptions( 278 local_parallelism=execution_options.process_execution_local_parallelism, 279 remote_parallelism=execution_options.process_execution_remote_parallelism, 280 cleanup_local_dirs=execution_options.process_execution_cleanup_local_dirs, 281 speculation_delay=execution_options.process_execution_speculation_delay, 282 speculation_strategy=execution_options.process_execution_speculation_strategy, 283 use_local_cache=execution_options.process_execution_use_local_cache, 284 local_enable_nailgun=execution_options.process_execution_local_enable_nailgun, 285 remote_cache_read=execution_options.remote_cache_read, 286 remote_cache_write=execution_options.remote_cache_write, 287 ) 288 289 return cast( 290 PyScheduler, 291 self.lib.scheduler_create( 292 executor, 293 tasks, 294 types, 295 # Project tree. 
296 build_root, 297 local_store_dir, 298 local_execution_root_dir, 299 named_caches_dir, 300 ca_certs_path, 301 ignore_patterns, 302 use_gitignore, 303 remoting_options, 304 exec_stategy_opts, 305 ), 306 ) 307 308 def set_panic_handler(self): 309 if os.getenv("RUST_BACKTRACE", "0") == "0": 310 # The panic handler hides a lot of rust tracing which may be useful. 311 # Don't activate it when the user explicitly asks for rust backtraces. 312 self.lib.set_panic_handler() 313 [end of src/python/pants/engine/internals/native.py] [start of src/python/pants/pantsd/service/scheduler_service.py] 1 # Copyright 2016 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 import logging 5 import time 6 from typing import Optional, Tuple, cast 7 8 import psutil 9 10 from pants.engine.fs import PathGlobs, Snapshot 11 from pants.engine.internals.scheduler import ExecutionTimeoutError 12 from pants.init.engine_initializer import GraphScheduler 13 from pants.pantsd.service.pants_service import PantsService 14 15 16 class SchedulerService(PantsService): 17 """The pantsd scheduler service. 18 19 This service uses the scheduler to watch the filesystem and determine whether pantsd needs to 20 restart in order to reload its state. 21 """ 22 23 # The interval on which we will long-poll the invalidation globs. If a glob changes, the poll 24 # will return immediately, so this value primarily affects how frequently the `run` method 25 # will check the terminated condition. 26 INVALIDATION_POLL_INTERVAL = 0.5 27 # A grace period after startup that we will wait before enforcing our pid. 28 PIDFILE_GRACE_PERIOD = 5 29 30 def __init__( 31 self, 32 *, 33 graph_scheduler: GraphScheduler, 34 build_root: str, 35 invalidation_globs: Tuple[str, ...], 36 pidfile: str, 37 pid: int, 38 max_memory_usage_in_bytes: int, 39 ) -> None: 40 """ 41 :param graph_scheduler: The GraphScheduler instance for graph construction. 42 :param build_root: The current build root. 43 :param invalidation_globs: A tuple of `globs` that when encountered in filesystem event 44 subscriptions will tear down the daemon. 45 :param pidfile: A pidfile which should contain this processes' pid in order for the daemon 46 to remain valid. 47 :param pid: This processes' pid. 48 :param max_memory_usage_in_bytes: The maximum memory usage of the process: the service will 49 shut down if it observes more than this amount in use. 50 """ 51 super().__init__() 52 self._graph_helper = graph_scheduler 53 self._build_root = build_root 54 55 self._scheduler = graph_scheduler.scheduler 56 # This session is only used for checking whether any invalidation globs have been invalidated. 57 # It is not involved with a build itself; just with deciding when we should restart pantsd. 58 self._scheduler_session = self._scheduler.new_session( 59 build_id="scheduler_service_session", 60 ) 61 self._logger = logging.getLogger(__name__) 62 63 # NB: We declare these as a single field so that they can be changed atomically. 64 self._invalidation_globs_and_snapshot: Tuple[Tuple[str, ...], Optional[Snapshot]] = ( 65 invalidation_globs, 66 None, 67 ) 68 69 self._pidfile = pidfile 70 self._pid = pid 71 self._max_memory_usage_in_bytes = max_memory_usage_in_bytes 72 73 def _get_snapshot(self, globs: Tuple[str, ...], poll: bool) -> Optional[Snapshot]: 74 """Returns a Snapshot of the input globs. 
75 76 If poll=True, will wait for up to INVALIDATION_POLL_INTERVAL for the globs to have changed, 77 and will return None if they have not changed. 78 """ 79 timeout = self.INVALIDATION_POLL_INTERVAL if poll else None 80 try: 81 snapshot = self._scheduler_session.product_request( 82 Snapshot, 83 subjects=[PathGlobs(globs)], 84 poll=poll, 85 timeout=timeout, 86 )[0] 87 return cast(Snapshot, snapshot) 88 except ExecutionTimeoutError: 89 if poll: 90 return None 91 raise 92 93 def _check_invalidation_globs(self, poll: bool): 94 """Check the digest of our invalidation Snapshot and exit if it has changed.""" 95 globs, invalidation_snapshot = self._invalidation_globs_and_snapshot 96 assert invalidation_snapshot is not None, "Should have been eagerly initialized in run." 97 98 snapshot = self._get_snapshot(globs, poll=poll) 99 if snapshot is None or snapshot.digest == invalidation_snapshot.digest: 100 return 101 102 before = set(invalidation_snapshot.files + invalidation_snapshot.dirs) 103 after = set(snapshot.files + snapshot.dirs) 104 added = after - before 105 removed = before - after 106 if added or removed: 107 description = f"+{added or '{}'}, -{removed or '{}'}" 108 else: 109 description = f"content changed ({snapshot.digest} fs {invalidation_snapshot.digest})" 110 self._logger.critical( 111 f"saw filesystem changes covered by invalidation globs: {description}. terminating the daemon." 112 ) 113 self.terminate() 114 115 def _check_pidfile(self): 116 try: 117 with open(self._pidfile, "r") as f: 118 pid_from_file = f.read() 119 except IOError: 120 raise Exception(f"Could not read pants pidfile at {self._pidfile}.") 121 if int(pid_from_file) != self._pid: 122 raise Exception(f"Another instance of pantsd is running at {pid_from_file}") 123 124 def _check_memory_usage(self): 125 memory_usage_in_bytes = psutil.Process(self._pid).memory_info()[0] 126 if memory_usage_in_bytes > self._max_memory_usage_in_bytes: 127 raise Exception( 128 f"pantsd process {self._pid} was using " 129 f"{memory_usage_in_bytes} bytes of memory (above the limit of " 130 f"{self._max_memory_usage_in_bytes} bytes)." 131 ) 132 133 def _check_invalidation_watcher_liveness(self): 134 self._scheduler.check_invalidation_watcher_liveness() 135 136 def run(self): 137 """Main service entrypoint.""" 138 # N.B. We compute the invalidating fileset eagerly at launch with an assumption that files 139 # that exist at startup are the only ones that can affect the running daemon. 140 globs, _ = self._invalidation_globs_and_snapshot 141 self._invalidation_globs_and_snapshot = (globs, self._get_snapshot(globs, poll=False)) 142 self._logger.debug("watching invalidation patterns: {}".format(globs)) 143 pidfile_deadline = time.time() + self.PIDFILE_GRACE_PERIOD 144 145 while not self._state.is_terminating: 146 try: 147 self._state.maybe_pause() 148 self._check_invalidation_watcher_liveness() 149 self._check_memory_usage() 150 if time.time() > pidfile_deadline: 151 self._check_pidfile() 152 # NB: This is a long poll that will keep us from looping too quickly here. 153 self._check_invalidation_globs(poll=True) 154 except Exception as e: 155 # Watcher failed for some reason 156 self._logger.critical(f"The scheduler was invalidated: {e!r}") 157 self.terminate() 158 [end of src/python/pants/pantsd/service/scheduler_service.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pantsbuild/pants
8d70962443a16ba59e4284f4acbc90e18ad2216d
Ctrl+C delayed when pantsd is disabled

#11223 broke `Ctrl+C` when `pantsd` is disabled, because we no longer explicitly poll the Python code inside the `Scheduler::execute` loop.
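A minimal sketch (not pants code) of why the lack of polling delays `Ctrl+C`: CPython runs handlers registered via the `signal` module only on the main thread, between bytecode instructions, so a handler that raises `KeyboardInterrupt` cannot fire while the main thread is blocked inside a long-running native call. Periodically returning to Python restores prompt interrupt handling. The `do_native_work_step` and `is_done` callables below are hypothetical placeholders.

```python
import signal

interrupted = False


def _handle_sigint(signum, frame):
    # CPython only runs this handler on the main thread, between bytecode
    # instructions; it cannot fire while the thread is blocked in native code.
    global interrupted
    interrupted = True


signal.signal(signal.SIGINT, _handle_sigint)


def run_with_polling(do_native_work_step, is_done):
    # Hypothetical work loop: returning to Python after each short native
    # step gives a pending SIGINT a chance to be handled promptly instead
    # of waiting for one long native call to finish.
    while not is_done():
        do_native_work_step()
        if interrupted:
            raise KeyboardInterrupt("User interrupted execution with control-c!")
```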
2020-12-30T22:15:33Z
<patch> diff --git a/src/python/pants/base/exception_sink.py b/src/python/pants/base/exception_sink.py --- a/src/python/pants/base/exception_sink.py +++ b/src/python/pants/base/exception_sink.py @@ -15,7 +15,6 @@ import psutil import setproctitle -from pants.engine.internals.native_engine import session_cancel_all from pants.util.dirutil import safe_mkdir, safe_open from pants.util.osutil import Pid @@ -75,7 +74,6 @@ def _send_signal_to_children(self, received_signal: int, signame: str) -> None: child_process.send_signal(received_signal) def handle_sigint(self, signum: int, _frame): - session_cancel_all() self._send_signal_to_children(signum, "SIGINT") raise KeyboardInterrupt("User interrupted execution with control-c!") @@ -102,12 +100,10 @@ def __init__(self, signum, signame): ) def handle_sigquit(self, signum, _frame): - session_cancel_all() self._send_signal_to_children(signum, "SIGQUIT") raise self.SignalHandledNonLocalExit(signum, "SIGQUIT") def handle_sigterm(self, signum, _frame): - session_cancel_all() self._send_signal_to_children(signum, "SIGTERM") raise self.SignalHandledNonLocalExit(signum, "SIGTERM") </patch>
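The patch above narrows the signal handlers so they only forward the received signal to child processes and then raise (`KeyboardInterrupt` for SIGINT, `SignalHandledNonLocalExit` for SIGQUIT/SIGTERM); the explicit `session_cancel_all()` calls and their import are dropped. A rough way to sanity-check that behaviour, assuming a working pants development environment with `pytest` available (this is a sketch, not a test from the pants suite):

```python
import signal

import pytest

from pants.base.exception_sink import SignalHandler


def test_handlers_raise_after_forwarding():
    handler = SignalHandler(pantsd_instance=False)
    # handle_sigint forwards SIGINT to any child processes (none here) and
    # then raises for the caller to unwind.
    with pytest.raises(KeyboardInterrupt):
        handler.handle_sigint(signal.SIGINT, None)
    # handle_sigterm likewise forwards the signal and raises the non-local
    # exit exception checked by the sys.excepthook override.
    with pytest.raises(SignalHandler.SignalHandledNonLocalExit):
        handler.handle_sigterm(signal.SIGTERM, None)
```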
[]
[]
apache__airflow-19142
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BaseOperator type hints for retry_delay and max_retry_delay should reveal float option ### Describe the issue with documentation `BaseOperator` type hints for `retry_delay` and `max_retry_delay` shows `timedelta` only, however the params also accept `float` seconds values. Also, type hint for `dag` param is missing. More precise type hints and params descriptions in the docs can help to understand the code behavior easier. ### How to solve the problem _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md) </issue> <code> [start of README.md] 1 <!-- 2 Licensed to the Apache Software Foundation (ASF) under one 3 or more contributor license agreements. See the NOTICE file 4 distributed with this work for additional information 5 regarding copyright ownership. The ASF licenses this file 6 to you under the Apache License, Version 2.0 (the 7 "License"); you may not use this file except in compliance 8 with the License. You may obtain a copy of the License at 9 10 http://www.apache.org/licenses/LICENSE-2.0 11 12 Unless required by applicable law or agreed to in writing, 13 software distributed under the License is distributed on an 14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 KIND, either express or implied. See the License for the 16 specific language governing permissions and limitations 17 under the License. 18 --> 19 20 # Apache Airflow 21 22 [![PyPI version](https://badge.fury.io/py/apache-airflow.svg)](https://badge.fury.io/py/apache-airflow) 23 [![GitHub Build](https://github.com/apache/airflow/workflows/CI%20Build/badge.svg)](https://github.com/apache/airflow/actions) 24 [![Coverage Status](https://img.shields.io/codecov/c/github/apache/airflow/main.svg)](https://codecov.io/github/apache/airflow?branch=main) 25 [![License](https://img.shields.io/:license-Apache%202-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0.txt) 26 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/apache-airflow.svg)](https://pypi.org/project/apache-airflow/) 27 [![Docker Pulls](https://img.shields.io/docker/pulls/apache/airflow.svg)](https://hub.docker.com/r/apache/airflow) 28 [![Docker Stars](https://img.shields.io/docker/stars/apache/airflow.svg)](https://hub.docker.com/r/apache/airflow) 29 [![PyPI - Downloads](https://img.shields.io/pypi/dm/apache-airflow)](https://pypi.org/project/apache-airflow/) 30 [![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/apache-airflow)](https://artifacthub.io/packages/search?repo=apache-airflow) 31 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) 32 [![Twitter Follow](https://img.shields.io/twitter/follow/ApacheAirflow.svg?style=social&label=Follow)](https://twitter.com/ApacheAirflow) 33 [![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://s.apache.org/airflow-slack) 34 35 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. 36 37 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. 
38 39 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. 40 41 <!-- START doctoc generated TOC please keep comment here to allow auto update --> 42 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> 43 **Table of contents** 44 45 - [Project Focus](#project-focus) 46 - [Principles](#principles) 47 - [Requirements](#requirements) 48 - [Getting started](#getting-started) 49 - [Installing from PyPI](#installing-from-pypi) 50 - [Official source code](#official-source-code) 51 - [Convenience packages](#convenience-packages) 52 - [User Interface](#user-interface) 53 - [Semantic versioning](#semantic-versioning) 54 - [Version Life Cycle](#version-life-cycle) 55 - [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions) 56 - [Contributing](#contributing) 57 - [Who uses Apache Airflow?](#who-uses-apache-airflow) 58 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow) 59 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation) 60 - [Airflow merchandise](#airflow-merchandise) 61 - [Links](#links) 62 - [Sponsors](#sponsors) 63 64 <!-- END doctoc generated TOC please keep comment here to allow auto update --> 65 66 ## Project Focus 67 68 Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/). 69 70 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [Xcom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#xcoms)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work. 71 72 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches. 73 74 ## Principles 75 76 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically. 77 - **Extensible**: Easily define your own operators, executors and extend the library so that it fits the level of abstraction that suits your environment. 78 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine. 79 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. 
80 81 ## Requirements 82 83 Apache Airflow is tested with: 84 85 | | Main version (dev) | Stable version (2.2.0) | 86 | -------------------- | ------------------------- | ------------------------ | 87 | Python | 3.6, 3.7, 3.8, 3.9 | 3.6, 3.7, 3.8, 3.9 | 88 | Kubernetes | 1.18, 1.19, 1.20 | 1.18, 1.19, 1.20 | 89 | PostgreSQL | 9.6, 10, 11, 12, 13 | 9.6, 10, 11, 12, 13 | 90 | MySQL | 5.7, 8 | 5.7, 8 | 91 | SQLite | 3.15.0+ | 3.15.0+ | 92 | MSSQL(Experimental) | 2017, 2019 | | 93 94 **Note**: MySQL 5.x versions are unable to or have limitations with 95 running multiple schedulers -- please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html). 96 MariaDB is not tested/recommended. 97 98 **Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend 99 using the latest stable version of SQLite for local development. 100 101 **Note**: Python v3.10 is not supported yet. For details, see [#19059](https://github.com/apache/airflow/issues/19059). 102 103 ## Getting started 104 105 Visit the official Airflow website documentation (latest **stable** release) for help with 106 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html), 107 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start/index.html), or walking 108 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html). 109 110 > Note: If you're looking for documentation for the main branch (latest development branch): you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/). 111 112 For more information on Airflow Improvement Proposals (AIPs), visit 113 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals). 114 115 Documentation for dependent projects like provider packages, Docker image, Helm Chart, you'll find it in [the documentation index](https://airflow.apache.org/docs/). 116 117 ## Installing from PyPI 118 119 We publish Apache Airflow as `apache-airflow` package in PyPI. Installing it however might be sometimes tricky 120 because Airflow is a bit of both a library and application. Libraries usually keep their dependencies open, and 121 applications usually pin them, but we should do neither and both simultaneously. We decided to keep 122 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries 123 if needed. This means that `pip install apache-airflow` will not work from time to time or will 124 produce unusable Airflow installation. 125 126 To have repeatable installation, however, we keep a set of "known-to-be-working" constraint 127 files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working" 128 constraints files separately per major/minor Python version. 129 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify 130 correct Airflow tag/version/branch and Python versions in the URL. 131 132 133 1. Installing just Airflow: 134 135 > Note: Only `pip` installation is currently officially supported. 136 137 While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or 138 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as 139 `pip` - especially when it comes to constraint vs. requirements management. 140 Installing via `Poetry` or `pip-tools` is not currently supported. 
141 142 If you wish to install Airflow using those tools, you should use the constraint files and convert 143 them to the appropriate format and workflow that your tool requires. 144 145 146 ```bash 147 pip install 'apache-airflow==2.2.0' \ 148 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.0/constraints-3.7.txt" 149 ``` 150 151 2. Installing with extras (i.e., postgres, google) 152 153 ```bash 154 pip install 'apache-airflow[postgres,google]==2.2.0' \ 155 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.0/constraints-3.7.txt" 156 ``` 157 158 For information on installing provider packages, check 159 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html). 160 161 ## Official source code 162 163 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project, 164 and our official source code releases: 165 166 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html) 167 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow) 168 - Are cryptographically signed by the release manager 169 - Are officially voted on by the PMC members during the 170 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval) 171 172 Following the ASF rules, the source packages released must be sufficient for a user to build and test the 173 release provided they have access to the appropriate platform and tools. 174 175 ## Convenience packages 176 177 There are other ways of installing and using Airflow. Those are "convenience" methods - they are 178 not "official releases" as stated by the `ASF Release Policy`, but they can be used by the users 179 who do not want to build the software themselves. 180 181 Those are - in the order of most common ways people install Airflow: 182 183 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool 184 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via 185 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can 186 read more about using, customising, and extending the images in the 187 [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and 188 learn details on the internals in the [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst) document. 189 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that 190 were used to generate official source packages via git 191 192 All those artifacts are not official releases, but they are prepared using officially released sources. 193 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such 194 following the ASF Policy. 195 196 ## User Interface 197 198 - **DAGs**: Overview of all DAGs in your environment. 199 200 ![DAGs](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/dags.png) 201 202 - **Tree**: Tree representation of a DAG that spans across time. 203 204 ![Tree](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/tree.png) 205 206 - **Graph**: Visualization of a DAG's dependencies and their current status for a specific run. 207 208 ![Graph](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/graph.png) 209 210 - **Task Duration**: Total time spent on different tasks over time. 
211 212 ![Task Duration](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/duration.png) 213 214 - **Gantt**: Duration and overlap of a DAG. 215 216 ![Gantt](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/gantt.png) 217 218 - **Code**: Quick way to view source code of a DAG. 219 220 ![Code](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/code.png) 221 222 ## Semantic versioning 223 224 As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released. 225 226 There are few specific rules that we agreed to that define details of versioning of the different 227 packages: 228 229 * **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers). 230 Changing limits for versions of Airflow dependencies is not a breaking change on its own. 231 * **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only. 232 SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version. 233 For example, `google 4.1.0` and `amazon 3.0.3` providers can happily be installed 234 with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages, 235 they are present in providers as `install_requires` limitations. We aim to keep backwards 236 compatibility of providers with all previously released Airflow 2 versions but 237 there will sometimes be breaking changes that might make some, or all 238 providers, have minimum Airflow version specified. Change of that minimum supported Airflow version 239 is a breaking change for provider because installing the new provider might automatically 240 upgrade Airflow (which might be an undesired side effect of upgrading provider). 241 * **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR 242 versions for the chart are independent from the Airflow version. We aim to keep backwards 243 compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might 244 only work starting from specific Airflow releases. We might however limit the Helm 245 Chart to depend on minimal Airflow version. 246 * **Airflow API clients**: SemVer MAJOR and MINOR versions follow MAJOR and MINOR versions of Airflow. 247 The first MAJOR or MINOR X.Y.0 release of Airflow should always be followed by X.Y.0 release of 248 all clients. The clients then can release their own PATCH releases with bugfixes, 249 independently of Airflow PATCH releases. 250 251 ## Version Life Cycle 252 253 Apache Airflow version life cycle: 254 255 | Version | Current Patch/Minor | State | First Release | Limited Support | EOL/Terminated | 256 |---------|---------------------|-----------|---------------|-----------------|----------------| 257 | 2 | 2.2.0 | Supported | Dec 17, 2020 | TBD | TBD | 258 | 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 | 259 | 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 | 260 | 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 | 261 | 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 | 262 263 Limited support versions will be supported with security and critical bug fix only. 264 EOL versions will not get any fixes nor support. 265 We always recommend that all users run the latest available minor release for whatever major version is in use. 
266 We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date. 267 268 ## Support for Python and Kubernetes versions 269 270 As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support. 271 They are based on the official release schedule of Python and Kubernetes, nicely summarized in the 272 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and 273 [Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/). 274 275 1. We drop support for Python and Kubernetes versions when they reach EOL. We drop support for those 276 EOL versions in main right after EOL date, and it is effectively removed when we release the 277 first new MINOR (Or MAJOR if there is no new MINOR version) of Airflow 278 For example, for Python 3.6 it means that we drop support in main right after 23.12.2021, and the first 279 MAJOR or MINOR version of Airflow released after will not have it. 280 281 2. The "oldest" supported version of Python/Kubernetes is the default one until we decide to switch to 282 later version. "Default" is only meaningful in terms of "smoke tests" in CI PRs, which are run using this 283 default version and the default reference image available. Currently `apache/airflow:latest` 284 and `apache/airflow:2.2.0` images are Python 3.7 images as we are preparing for 23.12.2021 when will 285 Python 3.6 reaches end of life. 286 287 3. We support a new version of Python/Kubernetes in main after they are officially released, as soon as we 288 make them work in our CI pipeline (which might not be immediate due to dependencies catching up with 289 new versions of Python mostly) we release new images/support in Airflow based on the working CI setup. 290 291 ### Additional notes on Python version requirements 292 293 * Previous versions [require](https://github.com/apache/airflow/issues/8162) at least Python 3.5.3 294 when using Python 3. 295 296 ## Contributing 297 298 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst). 299 300 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst). 301 302 ## Who uses Apache Airflow? 303 304 More than 400 organizations are using Apache Airflow 305 [in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md). 306 307 ## Who Maintains Apache Airflow? 308 309 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors), 310 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow) 311 are responsible for reviewing and merging PRs as well as steering conversations around new feature requests. 312 If you would like to become a maintainer, please review the Apache Airflow 313 [committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer). 314 315 ## Can I use the Apache Airflow logo in my presentation? 316 317 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html). 
318 319 ## Airflow merchandise 320 321 If you would love to have Apache Airflow stickers, t-shirt, etc. then check out 322 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5). 323 324 ## Links 325 326 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/) 327 - [Chat](https://s.apache.org/airflow-slack) 328 329 ## Sponsors 330 331 The CI infrastructure for Apache Airflow has been sponsored by: 332 333 <!-- Ordered by most recently "funded" --> 334 335 <a href="https://astronomer.io"><img src="https://assets2.astronomer.io/logos/logoForLIGHTbackground.png" alt="astronomer.io" width="250px"></a> 336 <a href="https://aws.amazon.com/opensource/"><img src="docs/integration-logos/aws/[email protected]" alt="AWS OpenSource" width="130px"></a> 337 [end of README.md] [start of airflow/providers/amazon/aws/hooks/batch_client.py] 1 # 2 # Licensed to the Apache Software Foundation (ASF) under one 3 # or more contributor license agreements. See the NOTICE file 4 # distributed with this work for additional information 5 # regarding copyright ownership. The ASF licenses this file 6 # to you under the Apache License, Version 2.0 (the 7 # "License"); you may not use this file except in compliance 8 # with the License. You may obtain a copy of the License at 9 # 10 # http://www.apache.org/licenses/LICENSE-2.0 11 # 12 # Unless required by applicable law or agreed to in writing, 13 # software distributed under the License is distributed on an 14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 # KIND, either express or implied. See the License for the 16 # specific language governing permissions and limitations 17 # under the License. 18 19 """ 20 A client for AWS batch services 21 22 .. seealso:: 23 24 - http://boto3.readthedocs.io/en/latest/guide/configuration.html 25 - http://boto3.readthedocs.io/en/latest/reference/services/batch.html 26 - https://docs.aws.amazon.com/batch/latest/APIReference/Welcome.html 27 """ 28 29 from random import uniform 30 from time import sleep 31 from typing import Dict, List, Optional, Union 32 33 import botocore.client 34 import botocore.exceptions 35 import botocore.waiter 36 37 from airflow.exceptions import AirflowException 38 from airflow.providers.amazon.aws.hooks.base_aws import AwsBaseHook 39 from airflow.typing_compat import Protocol, runtime_checkable 40 41 42 @runtime_checkable 43 class AwsBatchProtocol(Protocol): 44 """ 45 A structured Protocol for ``boto3.client('batch') -> botocore.client.Batch``. 46 This is used for type hints on :py:meth:`.AwsBatchClient.client`; it covers 47 only the subset of client methods required. 48 49 .. seealso:: 50 51 - https://mypy.readthedocs.io/en/latest/protocols.html 52 - http://boto3.readthedocs.io/en/latest/reference/services/batch.html 53 """ 54 55 def describe_jobs(self, jobs: List[str]) -> Dict: 56 """ 57 Get job descriptions from AWS batch 58 59 :param jobs: a list of JobId to describe 60 :type jobs: List[str] 61 62 :return: an API response to describe jobs 63 :rtype: Dict 64 """ 65 ... 66 67 def get_waiter(self, waiterName: str) -> botocore.waiter.Waiter: 68 """ 69 Get an AWS Batch service waiter 70 71 :param waiterName: The name of the waiter. The name should match 72 the name (including the casing) of the key name in the waiter 73 model file (typically this is CamelCasing). 74 :type waiterName: str 75 76 :return: a waiter object for the named AWS batch service 77 :rtype: botocore.waiter.Waiter 78 79 .. 
note:: 80 AWS batch might not have any waiters (until botocore PR-1307 is released). 81 82 .. code-block:: python 83 84 import boto3 85 86 boto3.client("batch").waiter_names == [] 87 88 .. seealso:: 89 90 - https://boto3.amazonaws.com/v1/documentation/api/latest/guide/clients.html#waiters 91 - https://github.com/boto/botocore/pull/1307 92 """ 93 ... 94 95 def submit_job( 96 self, 97 jobName: str, 98 jobQueue: str, 99 jobDefinition: str, 100 arrayProperties: Dict, 101 parameters: Dict, 102 containerOverrides: Dict, 103 tags: Dict, 104 ) -> Dict: 105 """ 106 Submit a batch job 107 108 :param jobName: the name for the AWS batch job 109 :type jobName: str 110 111 :param jobQueue: the queue name on AWS Batch 112 :type jobQueue: str 113 114 :param jobDefinition: the job definition name on AWS Batch 115 :type jobDefinition: str 116 117 :param arrayProperties: the same parameter that boto3 will receive 118 :type arrayProperties: Dict 119 120 :param parameters: the same parameter that boto3 will receive 121 :type parameters: Dict 122 123 :param containerOverrides: the same parameter that boto3 will receive 124 :type containerOverrides: Dict 125 126 :param tags: the same parameter that boto3 will receive 127 :type tags: Dict 128 129 :return: an API response 130 :rtype: Dict 131 """ 132 ... 133 134 def terminate_job(self, jobId: str, reason: str) -> Dict: 135 """ 136 Terminate a batch job 137 138 :param jobId: a job ID to terminate 139 :type jobId: str 140 141 :param reason: a reason to terminate job ID 142 :type reason: str 143 144 :return: an API response 145 :rtype: Dict 146 """ 147 ... 148 149 150 # Note that the use of invalid-name parameters should be restricted to the boto3 mappings only; 151 # all the Airflow wrappers of boto3 clients should not adopt invalid-names to match boto3. 152 153 154 class AwsBatchClientHook(AwsBaseHook): 155 """ 156 A client for AWS batch services. 157 158 :param max_retries: exponential back-off retries, 4200 = 48 hours; 159 polling is only used when waiters is None 160 :type max_retries: Optional[int] 161 162 :param status_retries: number of HTTP retries to get job status, 10; 163 polling is only used when waiters is None 164 :type status_retries: Optional[int] 165 166 .. note:: 167 Several methods use a default random delay to check or poll for job status, i.e. 168 ``random.uniform(DEFAULT_DELAY_MIN, DEFAULT_DELAY_MAX)`` 169 Using a random interval helps to avoid AWS API throttle limits 170 when many concurrent tasks request job-descriptions. 171 172 To modify the global defaults for the range of jitter allowed when a 173 random delay is used to check batch job status, modify these defaults, e.g.: 174 .. code-block:: 175 176 AwsBatchClient.DEFAULT_DELAY_MIN = 0 177 AwsBatchClient.DEFAULT_DELAY_MAX = 5 178 179 When explicit delay values are used, a 1 second random jitter is applied to the 180 delay (e.g. a delay of 0 sec will be a ``random.uniform(0, 1)`` delay. It is 181 generally recommended that random jitter is added to API requests. A 182 convenience method is provided for this, e.g. to get a random delay of 183 10 sec +/- 5 sec: ``delay = AwsBatchClient.add_jitter(10, width=5, minima=0)`` 184 185 .. 
seealso:: 186 - https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/batch.html 187 - https://docs.aws.amazon.com/general/latest/gr/api-retries.html 188 - https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/ 189 """ 190 191 MAX_RETRIES = 4200 192 STATUS_RETRIES = 10 193 194 # delays are in seconds 195 DEFAULT_DELAY_MIN = 1 196 DEFAULT_DELAY_MAX = 10 197 198 def __init__( 199 self, *args, max_retries: Optional[int] = None, status_retries: Optional[int] = None, **kwargs 200 ) -> None: 201 # https://github.com/python/mypy/issues/6799 hence type: ignore 202 super().__init__(client_type='batch', *args, **kwargs) # type: ignore 203 self.max_retries = max_retries or self.MAX_RETRIES 204 self.status_retries = status_retries or self.STATUS_RETRIES 205 206 @property 207 def client(self) -> Union[AwsBatchProtocol, botocore.client.BaseClient]: 208 """ 209 An AWS API client for batch services. 210 211 :return: a boto3 'batch' client for the ``.region_name`` 212 :rtype: Union[AwsBatchProtocol, botocore.client.BaseClient] 213 """ 214 return self.conn 215 216 def terminate_job(self, job_id: str, reason: str) -> Dict: 217 """ 218 Terminate a batch job 219 220 :param job_id: a job ID to terminate 221 :type job_id: str 222 223 :param reason: a reason to terminate job ID 224 :type reason: str 225 226 :return: an API response 227 :rtype: Dict 228 """ 229 response = self.get_conn().terminate_job(jobId=job_id, reason=reason) 230 self.log.info(response) 231 return response 232 233 def check_job_success(self, job_id: str) -> bool: 234 """ 235 Check the final status of the batch job; return True if the job 236 'SUCCEEDED', else raise an AirflowException 237 238 :param job_id: a batch job ID 239 :type job_id: str 240 241 :rtype: bool 242 243 :raises: AirflowException 244 """ 245 job = self.get_job_description(job_id) 246 job_status = job.get("status") 247 248 if job_status == "SUCCEEDED": 249 self.log.info("AWS batch job (%s) succeeded: %s", job_id, job) 250 return True 251 252 if job_status == "FAILED": 253 raise AirflowException(f"AWS Batch job ({job_id}) failed: {job}") 254 255 if job_status in ["SUBMITTED", "PENDING", "RUNNABLE", "STARTING", "RUNNING"]: 256 raise AirflowException(f"AWS Batch job ({job_id}) is not complete: {job}") 257 258 raise AirflowException(f"AWS Batch job ({job_id}) has unknown status: {job}") 259 260 def wait_for_job(self, job_id: str, delay: Union[int, float, None] = None) -> None: 261 """ 262 Wait for batch job to complete 263 264 :param job_id: a batch job ID 265 :type job_id: str 266 267 :param delay: a delay before polling for job status 268 :type delay: Optional[Union[int, float]] 269 270 :raises: AirflowException 271 """ 272 self.delay(delay) 273 self.poll_for_job_running(job_id, delay) 274 self.poll_for_job_complete(job_id, delay) 275 self.log.info("AWS Batch job (%s) has completed", job_id) 276 277 def poll_for_job_running(self, job_id: str, delay: Union[int, float, None] = None) -> None: 278 """ 279 Poll for job running. The status that indicates a job is running or 280 already complete are: 'RUNNING'|'SUCCEEDED'|'FAILED'. 281 282 So the status options that this will wait for are the transitions from: 283 'SUBMITTED'>'PENDING'>'RUNNABLE'>'STARTING'>'RUNNING'|'SUCCEEDED'|'FAILED' 284 285 The completed status options are included for cases where the status 286 changes too quickly for polling to detect a RUNNING status that moves 287 quickly from STARTING to RUNNING to completed (often a failure). 
288 289 :param job_id: a batch job ID 290 :type job_id: str 291 292 :param delay: a delay before polling for job status 293 :type delay: Optional[Union[int, float]] 294 295 :raises: AirflowException 296 """ 297 self.delay(delay) 298 running_status = ["RUNNING", "SUCCEEDED", "FAILED"] 299 self.poll_job_status(job_id, running_status) 300 301 def poll_for_job_complete(self, job_id: str, delay: Union[int, float, None] = None) -> None: 302 """ 303 Poll for job completion. The status that indicates job completion 304 are: 'SUCCEEDED'|'FAILED'. 305 306 So the status options that this will wait for are the transitions from: 307 'SUBMITTED'>'PENDING'>'RUNNABLE'>'STARTING'>'RUNNING'>'SUCCEEDED'|'FAILED' 308 309 :param job_id: a batch job ID 310 :type job_id: str 311 312 :param delay: a delay before polling for job status 313 :type delay: Optional[Union[int, float]] 314 315 :raises: AirflowException 316 """ 317 self.delay(delay) 318 complete_status = ["SUCCEEDED", "FAILED"] 319 self.poll_job_status(job_id, complete_status) 320 321 def poll_job_status(self, job_id: str, match_status: List[str]) -> bool: 322 """ 323 Poll for job status using an exponential back-off strategy (with max_retries). 324 325 :param job_id: a batch job ID 326 :type job_id: str 327 328 :param match_status: a list of job status to match; the batch job status are: 329 'SUBMITTED'|'PENDING'|'RUNNABLE'|'STARTING'|'RUNNING'|'SUCCEEDED'|'FAILED' 330 :type match_status: List[str] 331 332 :rtype: bool 333 334 :raises: AirflowException 335 """ 336 retries = 0 337 while True: 338 339 job = self.get_job_description(job_id) 340 job_status = job.get("status") 341 self.log.info( 342 "AWS Batch job (%s) check status (%s) in %s", 343 job_id, 344 job_status, 345 match_status, 346 ) 347 348 if job_status in match_status: 349 return True 350 351 if retries >= self.max_retries: 352 raise AirflowException(f"AWS Batch job ({job_id}) status checks exceed max_retries") 353 354 retries += 1 355 pause = self.exponential_delay(retries) 356 self.log.info( 357 "AWS Batch job (%s) status check (%d of %d) in the next %.2f seconds", 358 job_id, 359 retries, 360 self.max_retries, 361 pause, 362 ) 363 self.delay(pause) 364 365 def get_job_description(self, job_id: str) -> Dict: 366 """ 367 Get job description (using status_retries). 
368 369 :param job_id: a batch job ID 370 :type job_id: str 371 372 :return: an API response for describe jobs 373 :rtype: Dict 374 375 :raises: AirflowException 376 """ 377 retries = 0 378 while True: 379 try: 380 response = self.get_conn().describe_jobs(jobs=[job_id]) 381 return self.parse_job_description(job_id, response) 382 383 except botocore.exceptions.ClientError as err: 384 error = err.response.get("Error", {}) 385 if error.get("Code") == "TooManyRequestsException": 386 pass # allow it to retry, if possible 387 else: 388 raise AirflowException(f"AWS Batch job ({job_id}) description error: {err}") 389 390 retries += 1 391 if retries >= self.status_retries: 392 raise AirflowException( 393 f"AWS Batch job ({job_id}) description error: exceeded status_retries " 394 f"({self.status_retries})" 395 ) 396 397 pause = self.exponential_delay(retries) 398 self.log.info( 399 "AWS Batch job (%s) description retry (%d of %d) in the next %.2f seconds", 400 job_id, 401 retries, 402 self.status_retries, 403 pause, 404 ) 405 self.delay(pause) 406 407 @staticmethod 408 def parse_job_description(job_id: str, response: Dict) -> Dict: 409 """ 410 Parse job description to extract description for job_id 411 412 :param job_id: a batch job ID 413 :type job_id: str 414 415 :param response: an API response for describe jobs 416 :type response: Dict 417 418 :return: an API response to describe job_id 419 :rtype: Dict 420 421 :raises: AirflowException 422 """ 423 jobs = response.get("jobs", []) 424 matching_jobs = [job for job in jobs if job.get("jobId") == job_id] 425 if len(matching_jobs) != 1: 426 raise AirflowException(f"AWS Batch job ({job_id}) description error: response: {response}") 427 428 return matching_jobs[0] 429 430 @staticmethod 431 def add_jitter( 432 delay: Union[int, float], width: Union[int, float] = 1, minima: Union[int, float] = 0 433 ) -> float: 434 """ 435 Use delay +/- width for random jitter 436 437 Adding jitter to status polling can help to avoid 438 AWS batch API limits for monitoring batch jobs with 439 a high concurrency in Airflow tasks. 440 441 :param delay: number of seconds to pause; 442 delay is assumed to be a positive number 443 :type delay: Union[int, float] 444 445 :param width: delay +/- width for random jitter; 446 width is assumed to be a positive number 447 :type width: Union[int, float] 448 449 :param minima: minimum delay allowed; 450 minima is assumed to be a non-negative number 451 :type minima: Union[int, float] 452 453 :return: uniform(delay - width, delay + width) jitter 454 and it is a non-negative number 455 :rtype: float 456 """ 457 delay = abs(delay) 458 width = abs(width) 459 minima = abs(minima) 460 lower = max(minima, delay - width) 461 upper = delay + width 462 return uniform(lower, upper) 463 464 @staticmethod 465 def delay(delay: Union[int, float, None] = None) -> None: 466 """ 467 Pause execution for ``delay`` seconds. 468 469 :param delay: a delay to pause execution using ``time.sleep(delay)``; 470 a small 1 second jitter is applied to the delay. 471 :type delay: Optional[Union[int, float]] 472 473 .. note:: 474 This method uses a default random delay, i.e. 475 ``random.uniform(DEFAULT_DELAY_MIN, DEFAULT_DELAY_MAX)``; 476 using a random interval helps to avoid AWS API throttle limits 477 when many concurrent tasks request job-descriptions. 
478 """ 479 if delay is None: 480 delay = uniform(AwsBatchClientHook.DEFAULT_DELAY_MIN, AwsBatchClientHook.DEFAULT_DELAY_MAX) 481 else: 482 delay = AwsBatchClientHook.add_jitter(delay) 483 sleep(delay) 484 485 @staticmethod 486 def exponential_delay(tries: int) -> float: 487 """ 488 An exponential back-off delay, with random jitter. There is a maximum 489 interval of 10 minutes (with random jitter between 3 and 10 minutes). 490 This is used in the :py:meth:`.poll_for_job_status` method. 491 492 :param tries: Number of tries 493 :type tries: int 494 495 :rtype: float 496 497 Examples of behavior: 498 499 .. code-block:: python 500 501 def exp(tries): 502 max_interval = 600.0 # 10 minutes in seconds 503 delay = 1 + pow(tries * 0.6, 2) 504 delay = min(max_interval, delay) 505 print(delay / 3, delay) 506 507 508 for tries in range(10): 509 exp(tries) 510 511 # 0.33 1.0 512 # 0.45 1.35 513 # 0.81 2.44 514 # 1.41 4.23 515 # 2.25 6.76 516 # 3.33 10.00 517 # 4.65 13.95 518 # 6.21 18.64 519 # 8.01 24.04 520 # 10.05 30.15 521 522 .. seealso:: 523 524 - https://docs.aws.amazon.com/general/latest/gr/api-retries.html 525 - https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/ 526 """ 527 max_interval = 600.0 # results in 3 to 10 minute delay 528 delay = 1 + pow(tries * 0.6, 2) 529 delay = min(max_interval, delay) 530 return uniform(delay / 3, delay) 531 [end of airflow/providers/amazon/aws/hooks/batch_client.py] [start of airflow/providers/google/cloud/utils/mlengine_prediction_summary.py] 1 # 2 # Licensed to the Apache Software Foundation (ASF) under one 3 # or more contributor license agreements. See the NOTICE file 4 # distributed with this work for additional information 5 # regarding copyright ownership. The ASF licenses this file 6 # to you under the Apache License, Version 2.0 (the 7 # "License"); you may not use this file except in compliance 8 # with the License. You may obtain a copy of the License at 9 # 10 # http://www.apache.org/licenses/LICENSE-2.0 11 # 12 # Unless required by applicable law or agreed to in writing, 13 # software distributed under the License is distributed on an 14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 # KIND, either express or implied. See the License for the 16 # specific language governing permissions and limitations 17 # under the License. 18 """ 19 A template called by DataFlowPythonOperator to summarize BatchPrediction. 20 21 It accepts a user function to calculate the metric(s) per instance in 22 the prediction results, then aggregates to output as a summary. 23 24 It accepts the following arguments: 25 26 - ``--prediction_path``: 27 The GCS folder that contains BatchPrediction results, containing 28 ``prediction.results-NNNNN-of-NNNNN`` files in the json format. 29 Output will be also stored in this folder, as 'prediction.summary.json'. 30 - ``--metric_fn_encoded``: 31 An encoded function that calculates and returns a tuple of metric(s) 32 for a given instance (as a dictionary). It should be encoded 33 via ``base64.b64encode(dill.dumps(fn, recurse=True))``. 34 - ``--metric_keys``: 35 A comma-separated key(s) of the aggregated metric(s) in the summary 36 output. The order and the size of the keys must match to the output 37 of metric_fn. 38 The summary will have an additional key, 'count', to represent the 39 total number of instances, so the keys shouldn't include 'count'. 40 41 42 Usage example: 43 44 .. 
code-block: python 45 46 from airflow.providers.google.cloud.operators.dataflow import DataflowCreatePythonJobOperator 47 48 49 def get_metric_fn(): 50 import math # all imports must be outside of the function to be passed. 51 def metric_fn(inst): 52 label = float(inst["input_label"]) 53 classes = float(inst["classes"]) 54 prediction = float(inst["scores"][1]) 55 log_loss = math.log(1 + math.exp( 56 -(label * 2 - 1) * math.log(prediction / (1 - prediction)))) 57 squared_err = (classes-label)**2 58 return (log_loss, squared_err) 59 return metric_fn 60 metric_fn_encoded = base64.b64encode(dill.dumps(get_metric_fn(), recurse=True)) 61 DataflowCreatePythonJobOperator( 62 task_id="summary-prediction", 63 py_options=["-m"], 64 py_file="airflow.providers.google.cloud.utils.mlengine_prediction_summary", 65 options={ 66 "prediction_path": prediction_path, 67 "metric_fn_encoded": metric_fn_encoded, 68 "metric_keys": "log_loss,mse" 69 }, 70 dataflow_default_options={ 71 "project": "xxx", "region": "us-east1", 72 "staging_location": "gs://yy", "temp_location": "gs://zz", 73 } 74 ) >> dag 75 76 When the input file is like the following:: 77 78 {"inputs": "1,x,y,z", "classes": 1, "scores": [0.1, 0.9]} 79 {"inputs": "0,o,m,g", "classes": 0, "scores": [0.7, 0.3]} 80 {"inputs": "1,o,m,w", "classes": 0, "scores": [0.6, 0.4]} 81 {"inputs": "1,b,r,b", "classes": 1, "scores": [0.2, 0.8]} 82 83 The output file will be:: 84 85 {"log_loss": 0.43890510565304547, "count": 4, "mse": 0.25} 86 87 To test outside of the dag: 88 89 .. code-block:: python 90 91 subprocess.check_call( 92 [ 93 "python", 94 "-m", 95 "airflow.providers.google.cloud.utils.mlengine_prediction_summary", 96 "--prediction_path=gs://...", 97 "--metric_fn_encoded=" + metric_fn_encoded, 98 "--metric_keys=log_loss,mse", 99 "--runner=DataflowRunner", 100 "--staging_location=gs://...", 101 "--temp_location=gs://...", 102 ] 103 ) 104 """ 105 106 import argparse 107 import base64 108 import json 109 import logging 110 import os 111 112 import apache_beam as beam 113 import dill 114 115 116 class JsonCoder: 117 """JSON encoder/decoder.""" 118 119 @staticmethod 120 def encode(x): 121 """JSON encoder.""" 122 return json.dumps(x).encode() 123 124 @staticmethod 125 def decode(x): 126 """JSON decoder.""" 127 return json.loads(x) 128 129 130 @beam.ptransform_fn 131 def MakeSummary(pcoll, metric_fn, metric_keys): 132 """Summary PTransform used in Dataflow.""" 133 return ( 134 pcoll 135 | "ApplyMetricFnPerInstance" >> beam.Map(metric_fn) 136 | "PairWith1" >> beam.Map(lambda tup: tup + (1,)) 137 | "SumTuple" >> beam.CombineGlobally(beam.combiners.TupleCombineFn(*([sum] * (len(metric_keys) + 1)))) 138 | "AverageAndMakeDict" 139 >> beam.Map( 140 lambda tup: dict( 141 [(name, tup[i] / tup[-1]) for i, name in enumerate(metric_keys)] + [("count", tup[-1])] 142 ) 143 ) 144 ) 145 146 147 def run(argv=None): 148 """Helper for obtaining prediction summary.""" 149 parser = argparse.ArgumentParser() 150 parser.add_argument( 151 "--prediction_path", 152 required=True, 153 help=( 154 "The GCS folder that contains BatchPrediction results, containing " 155 "prediction.results-NNNNN-of-NNNNN files in the json format. " 156 "Output will be also stored in this folder, as a file" 157 "'prediction.summary.json'." 158 ), 159 ) 160 parser.add_argument( 161 "--metric_fn_encoded", 162 required=True, 163 help=( 164 "An encoded function that calculates and returns a tuple of " 165 "metric(s) for a given instance (as a dictionary). 
It should be " 166 "encoded via base64.b64encode(dill.dumps(fn, recurse=True))." 167 ), 168 ) 169 parser.add_argument( 170 "--metric_keys", 171 required=True, 172 help=( 173 "A comma-separated keys of the aggregated metric(s) in the summary " 174 "output. The order and the size of the keys must match to the " 175 "output of metric_fn. The summary will have an additional key, " 176 "'count', to represent the total number of instances, so this flag " 177 "shouldn't include 'count'." 178 ), 179 ) 180 known_args, pipeline_args = parser.parse_known_args(argv) 181 182 metric_fn = dill.loads(base64.b64decode(known_args.metric_fn_encoded)) 183 if not callable(metric_fn): 184 raise ValueError("--metric_fn_encoded must be an encoded callable.") 185 metric_keys = known_args.metric_keys.split(",") 186 187 with beam.Pipeline(options=beam.pipeline.PipelineOptions(pipeline_args)) as pipe: 188 189 prediction_result_pattern = os.path.join(known_args.prediction_path, "prediction.results-*-of-*") 190 prediction_summary_path = os.path.join(known_args.prediction_path, "prediction.summary.json") 191 # This is apache-beam ptransform's convention 192 _ = ( 193 pipe 194 | "ReadPredictionResult" >> beam.io.ReadFromText(prediction_result_pattern, coder=JsonCoder()) 195 | "Summary" >> MakeSummary(metric_fn, metric_keys) 196 | "Write" 197 >> beam.io.WriteToText( 198 prediction_summary_path, 199 shard_name_template='', # without trailing -NNNNN-of-NNNNN. 200 coder=JsonCoder(), 201 ) 202 ) 203 204 205 if __name__ == "__main__": 206 # Dataflow does not print anything on the screen by default. Good practice says to configure the logger 207 # to be able to track the progress. This code is run in a separate process, so it's safe. 208 logging.getLogger().setLevel(logging.INFO) 209 run() 210 [end of airflow/providers/google/cloud/utils/mlengine_prediction_summary.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
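Before the repository metadata fields below, a short usage sketch for the `AwsBatchClientHook` defined in `batch_client.py` earlier in this instance. It is not part of the repository files: the connection id, region, queue, and job definition names are placeholders, and it assumes AWS credentials are reachable through the named Airflow connection.

```python
# Hypothetical usage of the hook shown above; resource names are made up.
from airflow.providers.amazon.aws.hooks.batch_client import AwsBatchClientHook

hook = AwsBatchClientHook(aws_conn_id="aws_default", region_name="us-east-1", max_retries=20)

# Submit through the underlying boto3 'batch' client exposed by .client
response = hook.client.submit_job(
    jobName="example-job",
    jobQueue="example-queue",
    jobDefinition="example-job-definition:1",
)
job_id = response["jobId"]

# Poll with the jittered, exponential back-off described in the docstrings,
# then verify the terminal state.
hook.wait_for_job(job_id)
hook.check_job_success(job_id)  # raises AirflowException unless the job SUCCEEDED
```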
apache/airflow
fe998a48be769f6a957611584145706b71385cc9
BaseOperator type hints for retry_delay and max_retry_delay should reveal float option ### Describe the issue with documentation `BaseOperator` type hints for `retry_delay` and `max_retry_delay` show `timedelta` only; however, the params also accept `float` seconds values. The type hint for the `dag` param is also missing. More precise type hints and param descriptions in the docs would make the code behavior easier to understand. ### How to solve the problem _No response_ ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
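To make the reported behavior concrete, here is a minimal, hypothetical DAG sketch (not part of the issue; the DAG id, task ids, and bash commands are invented) showing `retry_delay` passed both as the documented `timedelta` and as plain float seconds, which the report and the patch below say is also accepted and converted to a `timedelta` internally.

```python
# Hedged illustration only — names below are placeholders, not from the issue.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(dag_id="retry_delay_typing_example", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    # Form shown in the current type hints: an explicit timedelta.
    with_timedelta = BashOperator(
        task_id="with_timedelta",
        bash_command="echo retry me",
        retries=2,
        retry_delay=timedelta(minutes=5),
    )
    # Form the issue says is also accepted: float seconds.
    with_float = BashOperator(
        task_id="with_float_seconds",
        bash_command="echo retry me",
        retries=2,
        retry_delay=300.0,
    )
```

Both tasks behave the same at runtime; the request in this issue is only that the type hints and docs advertise the second form as well.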
Thanks for opening your first issue here! Be sure to follow the issue template! Is related to: #13086
2021-10-21T20:32:09Z
<patch> diff --git a/airflow/models/baseoperator.py b/airflow/models/baseoperator.py --- a/airflow/models/baseoperator.py +++ b/airflow/models/baseoperator.py @@ -77,6 +77,7 @@ from airflow.utils.weight_rule import WeightRule if TYPE_CHECKING: + from airflow.models.dag import DAG from airflow.models.xcom_arg import XComArg from airflow.utils.task_group import TaskGroup @@ -241,14 +242,17 @@ class derived from this one results in the creation of a task object, :param retries: the number of retries that should be performed before failing the task :type retries: int - :param retry_delay: delay between retries - :type retry_delay: datetime.timedelta - :param retry_exponential_backoff: allow progressive longer waits between + :param retry_delay: delay between retries, can be set as ``timedelta`` or + ``float`` seconds, which will be converted into ``timedelta``, + the default is ``timedelta(seconds=300)``. + :type retry_delay: datetime.timedelta or float + :param retry_exponential_backoff: allow progressively longer waits between retries by using exponential backoff algorithm on retry delay (delay will be converted into seconds) :type retry_exponential_backoff: bool - :param max_retry_delay: maximum delay interval between retries - :type max_retry_delay: datetime.timedelta + :param max_retry_delay: maximum delay interval between retries, can be set as + ``timedelta`` or ``float`` seconds, which will be converted into ``timedelta``. + :type max_retry_delay: datetime.timedelta or float :param start_date: The ``start_date`` for the task, determines the ``execution_date`` for the first task instance. The best practice is to have the start_date rounded @@ -486,14 +490,14 @@ def __init__( email_on_retry: bool = conf.getboolean('email', 'default_email_on_retry', fallback=True), email_on_failure: bool = conf.getboolean('email', 'default_email_on_failure', fallback=True), retries: Optional[int] = conf.getint('core', 'default_task_retries', fallback=0), - retry_delay: timedelta = timedelta(seconds=300), + retry_delay: Union[timedelta, float] = timedelta(seconds=300), retry_exponential_backoff: bool = False, - max_retry_delay: Optional[timedelta] = None, + max_retry_delay: Optional[Union[timedelta, float]] = None, start_date: Optional[datetime] = None, end_date: Optional[datetime] = None, depends_on_past: bool = False, wait_for_downstream: bool = False, - dag=None, + dag: Optional['DAG'] = None, params: Optional[Dict] = None, default_args: Optional[Dict] = None, priority_weight: int = 1, @@ -804,7 +808,7 @@ def get_outlet_defs(self): return self._outlets @property - def dag(self) -> Any: + def dag(self) -> 'DAG': """Returns the Operator's DAG if set, otherwise raises an error""" if self.has_dag(): return self._dag @@ -812,7 +816,7 @@ def dag(self) -> Any: raise AirflowException(f'Operator {self} has not been assigned to a DAG yet') @dag.setter - def dag(self, dag: Any): + def dag(self, dag: Optional['DAG']): """ Operators can be assigned to one DAG, one time. Repeat assignments to that same DAG are ok. </patch>
[]
[]
pandas-dev__pandas-4846
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> DataFrame.sort_index does not use ascending when then value is a list with a single element ``` In [7]: d Out[7]: {'one': [1.0, 2.0, 3.0, 4.0], 'two': [4.0, 3.0, 2.0, 1.0]} In [11]: pd.DataFrame(d).sort_index(by=['two'], ascending=[0,]) Out[11]: one two 3 4 1 2 3 2 1 2 3 0 1 4 In [12]: pd.DataFrame(d).sort_index(by=['two'], ascending=0) Out[12]: one two 0 1 4 1 2 3 2 3 2 3 4 1 ``` </issue> <code> [start of README.rst] 1 ============================================= 2 pandas: powerful Python data analysis toolkit 3 ============================================= 4 5 .. image:: https://travis-ci.org/pydata/pandas.png 6 :target: https://travis-ci.org/pydata/pandas 7 8 What is it 9 ========== 10 11 **pandas** is a Python package providing fast, flexible, and expressive data 12 structures designed to make working with "relational" or "labeled" data both 13 easy and intuitive. It aims to be the fundamental high-level building block for 14 doing practical, **real world** data analysis in Python. Additionally, it has 15 the broader goal of becoming **the most powerful and flexible open source data 16 analysis / manipulation tool available in any language**. It is already well on 17 its way toward this goal. 18 19 Main Features 20 ============= 21 22 Here are just a few of the things that pandas does well: 23 24 - Easy handling of **missing data** (represented as NaN) in floating point as 25 well as non-floating point data 26 - Size mutability: columns can be **inserted and deleted** from DataFrame and 27 higher dimensional objects 28 - Automatic and explicit **data alignment**: objects can be explicitly 29 aligned to a set of labels, or the user can simply ignore the labels and 30 let `Series`, `DataFrame`, etc. automatically align the data for you in 31 computations 32 - Powerful, flexible **group by** functionality to perform 33 split-apply-combine operations on data sets, for both aggregating and 34 transforming data 35 - Make it **easy to convert** ragged, differently-indexed data in other 36 Python and NumPy data structures into DataFrame objects 37 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting** 38 of large data sets 39 - Intuitive **merging** and **joining** data sets 40 - Flexible **reshaping** and pivoting of data sets 41 - **Hierarchical** labeling of axes (possible to have multiple labels per 42 tick) 43 - Robust IO tools for loading data from **flat files** (CSV and delimited), 44 Excel files, databases, and saving / loading data from the ultrafast **HDF5 45 format** 46 - **Time series**-specific functionality: date range generation and frequency 47 conversion, moving window statistics, moving window linear regressions, 48 date shifting and lagging, etc. 
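As a quick aside before the README continues: the `sort_index` report quoted at the top of this instance can be re-checked with the `sort_values` API, which later replaced the deprecated `sort_index(by=...)` form. This is an illustrative sketch, not part of the original report; on a pandas where the bug is fixed, the single-element `ascending` list and the scalar should give identical results.

```python
# Sketch only: both calls should sort descending by 'two' once the bug is fixed.
import pandas as pd

d = {'one': [1.0, 2.0, 3.0, 4.0], 'two': [4.0, 3.0, 2.0, 1.0]}
df = pd.DataFrame(d)

via_list = df.sort_values(by=['two'], ascending=[False])  # single-element list
via_scalar = df.sort_values(by=['two'], ascending=False)  # scalar

assert via_list.equals(via_scalar)
```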
49 50 Where to get it 51 =============== 52 53 The source code is currently hosted on GitHub at: http://github.com/pydata/pandas 54 55 Binary installers for the latest released version are available at the Python 56 package index:: 57 58 http://pypi.python.org/pypi/pandas/ 59 60 And via ``easy_install`` or ``pip``:: 61 62 easy_install pandas 63 pip install pandas 64 65 Dependencies 66 ============ 67 68 - `NumPy <http://www.numpy.org>`__: 1.6.1 or higher 69 - `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher 70 - `pytz <http://pytz.sourceforge.net/>`__ 71 - Needed for time zone support with ``date_range`` 72 73 Highly Recommended Dependencies 74 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 75 76 - `numexpr <http://code.google.com/p/numexpr/>`__ 77 - Needed to accelerate some expression evaluation operations 78 - Required by `PyTables` 79 - `bottleneck <http://berkeleyanalytics.com/bottleneck>`__ 80 - Needed to accelerate certain numerical operations 81 82 Optional dependencies 83 ~~~~~~~~~~~~~~~~~~~~~ 84 85 - `Cython <http://www.cython.org>`__: Only necessary to build development version. Version 0.17.1 or higher. 86 - `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions 87 - `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage 88 - `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting 89 - `statsmodels <http://statsmodels.sourceforge.net/>`__ 90 - Needed for parts of :mod:`pandas.stats` 91 - `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__ 92 - openpyxl version 1.6.1 or higher, for writing .xlsx files 93 - xlrd >= 0.9.0 94 - Needed for Excel I/O 95 - `boto <https://pypi.python.org/pypi/boto>`__: necessary for Amazon S3 96 access. 97 - One of the following combinations of libraries is needed to use the 98 top-level :func:`~pandas.io.html.read_html` function: 99 100 - `BeautifulSoup4`_ and `html5lib`_ (Any recent version of `html5lib`_ is 101 okay.) 102 - `BeautifulSoup4`_ and `lxml`_ 103 - `BeautifulSoup4`_ and `html5lib`_ and `lxml`_ 104 - Only `lxml`_, although see :ref:`HTML reading gotchas <html-gotchas>` 105 for reasons as to why you should probably **not** take this approach. 106 107 .. warning:: 108 109 - if you install `BeautifulSoup4`_ you must install either 110 `lxml`_ or `html5lib`_ or both. 111 :func:`~pandas.io.html.read_html` will **not** work with *only* 112 `BeautifulSoup4`_ installed. 113 - You are highly encouraged to read :ref:`HTML reading gotchas 114 <html-gotchas>`. It explains issues surrounding the installation and 115 usage of the above three libraries 116 - You may need to install an older version of `BeautifulSoup4`_: 117 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 118 32-bit Ubuntu/Debian 119 - Additionally, if you're using `Anaconda`_ you should definitely 120 read :ref:`the gotchas about HTML parsing libraries <html-gotchas>` 121 122 .. note:: 123 124 - if you're on a system with ``apt-get`` you can do 125 126 .. code-block:: sh 127 128 sudo apt-get build-dep python-lxml 129 130 to get the necessary dependencies for installation of `lxml`_. This 131 will prevent further headaches down the line. 132 133 134 .. _html5lib: https://github.com/html5lib/html5lib-python 135 .. _BeautifulSoup4: http://www.crummy.com/software/BeautifulSoup 136 .. _lxml: http://lxml.de 137 .. 
_Anaconda: https://store.continuum.io/cshop/anaconda 138 139 140 Installation from sources 141 ========================= 142 143 To install pandas from source you need ``cython`` in addition to the normal dependencies above, 144 which can be installed from pypi:: 145 146 pip install cython 147 148 In the ``pandas`` directory (same one where you found this file after cloning the git repo), execute:: 149 150 python setup.py install 151 152 or for installing in `development mode <http://www.pip-installer.org/en/latest/usage.html>`__:: 153 154 python setup.py develop 155 156 Alternatively, you can use `pip` if you want all the dependencies pulled in automatically 157 (the optional ``-e`` option is for installing it in 158 `development mode <http://www.pip-installer.org/en/latest/usage.html>`__):: 159 160 pip install -e . 161 162 On Windows, you will need to install MinGW and execute:: 163 164 python setup.py build --compiler=mingw32 165 python setup.py install 166 167 See http://pandas.pydata.org/ for more information. 168 169 License 170 ======= 171 172 BSD 173 174 Documentation 175 ============= 176 177 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 178 179 The Sphinx documentation should provide a good starting point for learning how 180 to use the library. Expect the docs to continue to expand as time goes on. 181 182 Background 183 ========== 184 185 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 186 has been under active development since then. 187 188 Discussion and Development 189 ========================== 190 191 Since ``pandas`` development is related to a number of other scientific 192 Python projects, questions are welcome on the scipy-user mailing 193 list. Specialized discussions or design issues should take place on 194 the pystatsmodels mailing list / Google group, where 195 ``scikits.statsmodels`` and other libraries will also be discussed: 196 197 http://groups.google.com/group/pystatsmodels 198 199 .. _NumPy: http://numpy.scipy.org/ 200 [end of README.rst] [start of pandas/core/reshape.py] 1 # pylint: disable=E1101,E1103 2 # pylint: disable=W0703,W0622,W0613,W0201 3 4 from pandas.compat import range, zip 5 from pandas import compat 6 import itertools 7 8 import numpy as np 9 10 from pandas.core.series import Series 11 from pandas.core.frame import DataFrame 12 13 from pandas.core.categorical import Categorical 14 from pandas.core.common import (notnull, _ensure_platform_int, _maybe_promote, 15 isnull) 16 from pandas.core.groupby import (get_group_index, _compress_group_index, 17 decons_group_index) 18 import pandas.core.common as com 19 import pandas.algos as algos 20 21 from pandas.core.index import Index, MultiIndex 22 23 24 class _Unstacker(object): 25 """ 26 Helper class to unstack data / pivot with multi-level index 27 28 Parameters 29 ---------- 30 level : int or str, default last level 31 Level to "unstack". Accepts a name for the level. 32 33 Examples 34 -------- 35 >>> import pandas as pd 36 >>> index = pd.MultiIndex.from_tuples([('one', 'a'), ('one', 'b'), 37 ... 
('two', 'a'), ('two', 'b')]) 38 >>> s = pd.Series(np.arange(1.0, 5.0), index=index) 39 >>> s 40 one a 1 41 b 2 42 two a 3 43 b 4 44 dtype: float64 45 46 >>> s.unstack(level=-1) 47 a b 48 one 1 2 49 two 3 4 50 51 >>> s.unstack(level=0) 52 one two 53 a 1 2 54 b 3 4 55 56 Returns 57 ------- 58 unstacked : DataFrame 59 """ 60 def __init__(self, values, index, level=-1, value_columns=None): 61 if values.ndim == 1: 62 values = values[:, np.newaxis] 63 self.values = values 64 self.value_columns = value_columns 65 66 if value_columns is None and values.shape[1] != 1: # pragma: no cover 67 raise ValueError('must pass column labels for multi-column data') 68 69 self.index = index 70 self.level = self.index._get_level_number(level) 71 72 levels = index.levels 73 labels = index.labels 74 def _make_index(lev,lab): 75 i = lev.__class__(_make_index_array_level(lev.values,lab)) 76 i.name = lev.name 77 return i 78 79 self.new_index_levels = list([ _make_index(lev,lab) for lev,lab in zip(levels,labels) ]) 80 self.new_index_names = list(index.names) 81 82 self.removed_name = self.new_index_names.pop(self.level) 83 self.removed_level = self.new_index_levels.pop(self.level) 84 85 self._make_sorted_values_labels() 86 self._make_selectors() 87 88 def _make_sorted_values_labels(self): 89 v = self.level 90 91 labs = list(self.index.labels) 92 levs = list(self.index.levels) 93 to_sort = labs[:v] + labs[v + 1:] + [labs[v]] 94 sizes = [len(x) for x in levs[:v] + levs[v + 1:] + [levs[v]]] 95 96 comp_index, obs_ids = get_compressed_ids(to_sort, sizes) 97 98 # group_index = get_group_index(to_sort, sizes) 99 # comp_index, obs_ids = _compress_group_index(group_index) 100 101 ngroups = len(obs_ids) 102 103 indexer = algos.groupsort_indexer(comp_index, ngroups)[0] 104 indexer = _ensure_platform_int(indexer) 105 106 self.sorted_values = com.take_nd(self.values, indexer, axis=0) 107 self.sorted_labels = [l.take(indexer) for l in to_sort] 108 109 def _make_selectors(self): 110 new_levels = self.new_index_levels 111 112 # make the mask 113 remaining_labels = self.sorted_labels[:-1] 114 level_sizes = [len(x) for x in new_levels] 115 116 comp_index, obs_ids = get_compressed_ids(remaining_labels, level_sizes) 117 ngroups = len(obs_ids) 118 119 comp_index = _ensure_platform_int(comp_index) 120 stride = self.index.levshape[self.level] 121 self.full_shape = ngroups, stride 122 123 selector = self.sorted_labels[-1] + stride * comp_index 124 mask = np.zeros(np.prod(self.full_shape), dtype=bool) 125 mask.put(selector, True) 126 127 if mask.sum() < len(self.index): 128 raise ValueError('Index contains duplicate entries, ' 129 'cannot reshape') 130 131 self.group_index = comp_index 132 self.mask = mask 133 self.unique_groups = obs_ids 134 self.compressor = comp_index.searchsorted(np.arange(ngroups)) 135 136 def get_result(self): 137 # TODO: find a better way than this masking business 138 139 values, value_mask = self.get_new_values() 140 columns = self.get_new_columns() 141 index = self.get_new_index() 142 143 # filter out missing levels 144 if values.shape[1] > 0: 145 col_inds, obs_ids = _compress_group_index(self.sorted_labels[-1]) 146 # rare case, level values not observed 147 if len(obs_ids) < self.full_shape[1]: 148 inds = (value_mask.sum(0) > 0).nonzero()[0] 149 values = com.take_nd(values, inds, axis=1) 150 columns = columns[inds] 151 152 # we might have a missing index 153 if len(index) != values.shape[0]: 154 mask = isnull(index) 155 if mask.any(): 156 l = np.arange(len(index)) 157 values, orig_values = 
np.empty((len(index),values.shape[1])), values 158 values.fill(np.nan) 159 values_indexer = com._ensure_int64(l[~mask]) 160 for i, j in enumerate(values_indexer): 161 values[j] = orig_values[i] 162 else: 163 index = index.take(self.unique_groups) 164 165 return DataFrame(values, index=index, columns=columns) 166 167 def get_new_values(self): 168 values = self.values 169 170 # place the values 171 length, width = self.full_shape 172 stride = values.shape[1] 173 result_width = width * stride 174 result_shape = (length, result_width) 175 176 # if our mask is all True, then we can use our existing dtype 177 if self.mask.all(): 178 dtype = values.dtype 179 new_values = np.empty(result_shape, dtype=dtype) 180 else: 181 dtype, fill_value = _maybe_promote(values.dtype) 182 new_values = np.empty(result_shape, dtype=dtype) 183 new_values.fill(fill_value) 184 185 new_mask = np.zeros(result_shape, dtype=bool) 186 187 # is there a simpler / faster way of doing this? 188 for i in range(values.shape[1]): 189 chunk = new_values[:, i * width: (i + 1) * width] 190 mask_chunk = new_mask[:, i * width: (i + 1) * width] 191 192 chunk.flat[self.mask] = self.sorted_values[:, i] 193 mask_chunk.flat[self.mask] = True 194 195 return new_values, new_mask 196 197 def get_new_columns(self): 198 if self.value_columns is None: 199 return self.removed_level 200 201 stride = len(self.removed_level) 202 width = len(self.value_columns) 203 propagator = np.repeat(np.arange(width), stride) 204 if isinstance(self.value_columns, MultiIndex): 205 new_levels = self.value_columns.levels + (self.removed_level,) 206 new_names = self.value_columns.names + (self.removed_name,) 207 208 new_labels = [lab.take(propagator) 209 for lab in self.value_columns.labels] 210 new_labels.append(np.tile(np.arange(stride), width)) 211 else: 212 new_levels = [self.value_columns, self.removed_level] 213 new_names = [self.value_columns.name, self.removed_name] 214 215 new_labels = [] 216 217 new_labels.append(propagator) 218 new_labels.append(np.tile(np.arange(stride), width)) 219 220 return MultiIndex(levels=new_levels, labels=new_labels, 221 names=new_names) 222 223 def get_new_index(self): 224 result_labels = [] 225 for cur in self.sorted_labels[:-1]: 226 labels = cur.take(self.compressor) 227 labels = _make_index_array_level(labels,cur) 228 result_labels.append(labels) 229 230 # construct the new index 231 if len(self.new_index_levels) == 1: 232 new_index = self.new_index_levels[0] 233 new_index.name = self.new_index_names[0] 234 else: 235 new_index = MultiIndex(levels=self.new_index_levels, 236 labels=result_labels, 237 names=self.new_index_names) 238 239 return new_index 240 241 242 def _make_index_array_level(lev,lab): 243 """ create the combined index array, preserving nans, return an array """ 244 mask = lab == -1 245 if not mask.any(): 246 return lev 247 248 l = np.arange(len(lab)) 249 mask_labels = np.empty(len(mask[mask]),dtype=object) 250 mask_labels.fill(np.nan) 251 mask_indexer = com._ensure_int64(l[mask]) 252 253 labels = lev 254 labels_indexer = com._ensure_int64(l[~mask]) 255 256 new_labels = np.empty(tuple([len(lab)]),dtype=object) 257 new_labels[labels_indexer] = labels 258 new_labels[mask_indexer] = mask_labels 259 260 return new_labels 261 262 def _unstack_multiple(data, clocs): 263 if len(clocs) == 0: 264 return data 265 266 # NOTE: This doesn't deal with hierarchical columns yet 267 268 index = data.index 269 270 clocs = [index._get_level_number(i) for i in clocs] 271 272 rlocs = [i for i in range(index.nlevels) if i not in 
clocs] 273 274 clevels = [index.levels[i] for i in clocs] 275 clabels = [index.labels[i] for i in clocs] 276 cnames = [index.names[i] for i in clocs] 277 rlevels = [index.levels[i] for i in rlocs] 278 rlabels = [index.labels[i] for i in rlocs] 279 rnames = [index.names[i] for i in rlocs] 280 281 shape = [len(x) for x in clevels] 282 group_index = get_group_index(clabels, shape) 283 284 comp_ids, obs_ids = _compress_group_index(group_index, sort=False) 285 recons_labels = decons_group_index(obs_ids, shape) 286 287 dummy_index = MultiIndex(levels=rlevels + [obs_ids], 288 labels=rlabels + [comp_ids], 289 names=rnames + ['__placeholder__']) 290 291 if isinstance(data, Series): 292 dummy = Series(data.values, index=dummy_index) 293 unstacked = dummy.unstack('__placeholder__') 294 new_levels = clevels 295 new_names = cnames 296 new_labels = recons_labels 297 else: 298 if isinstance(data.columns, MultiIndex): 299 result = data 300 for i in range(len(clocs)): 301 val = clocs[i] 302 result = result.unstack(val) 303 clocs = [val if i > val else val - 1 for val in clocs] 304 305 return result 306 307 dummy = DataFrame(data.values, index=dummy_index, 308 columns=data.columns) 309 310 unstacked = dummy.unstack('__placeholder__') 311 if isinstance(unstacked, Series): 312 unstcols = unstacked.index 313 else: 314 unstcols = unstacked.columns 315 new_levels = [unstcols.levels[0]] + clevels 316 new_names = [data.columns.name] + cnames 317 318 new_labels = [unstcols.labels[0]] 319 for rec in recons_labels: 320 new_labels.append(rec.take(unstcols.labels[-1])) 321 322 new_columns = MultiIndex(levels=new_levels, labels=new_labels, 323 names=new_names) 324 325 if isinstance(unstacked, Series): 326 unstacked.index = new_columns 327 else: 328 unstacked.columns = new_columns 329 330 return unstacked 331 332 333 def pivot(self, index=None, columns=None, values=None): 334 """ 335 See DataFrame.pivot 336 """ 337 if values is None: 338 indexed = self.set_index([index, columns]) 339 return indexed.unstack(columns) 340 else: 341 indexed = Series(self[values].values, 342 index=MultiIndex.from_arrays([self[index], self[columns]])) 343 return indexed.unstack(columns) 344 345 346 def pivot_simple(index, columns, values): 347 """ 348 Produce 'pivot' table based on 3 columns of this DataFrame. 349 Uses unique values from index / columns and fills with values. 350 351 Parameters 352 ---------- 353 index : ndarray 354 Labels to use to make new frame's index 355 columns : ndarray 356 Labels to use to make new frame's columns 357 values : ndarray 358 Values to use for populating new frame's values 359 360 Note 361 ---- 362 Obviously, all 3 of the input arguments must have the same length 363 364 Returns 365 ------- 366 DataFrame 367 """ 368 if (len(index) != len(columns)) or (len(columns) != len(values)): 369 raise AssertionError('Length of index, columns, and values must be the' 370 ' same') 371 372 if len(index) == 0: 373 return DataFrame(index=[]) 374 375 hindex = MultiIndex.from_arrays([index, columns]) 376 series = Series(values.ravel(), index=hindex) 377 series = series.sortlevel(0) 378 return series.unstack() 379 380 381 def _slow_pivot(index, columns, values): 382 """ 383 Produce 'pivot' table based on 3 columns of this DataFrame. 384 Uses unique values from index / columns and fills with values. 
385 386 Parameters 387 ---------- 388 index : string or object 389 Column name to use to make new frame's index 390 columns : string or object 391 Column name to use to make new frame's columns 392 values : string or object 393 Column name to use for populating new frame's values 394 395 Could benefit from some Cython here. 396 """ 397 tree = {} 398 for i, (idx, col) in enumerate(zip(index, columns)): 399 if col not in tree: 400 tree[col] = {} 401 branch = tree[col] 402 branch[idx] = values[i] 403 404 return DataFrame(tree) 405 406 407 def unstack(obj, level): 408 if isinstance(level, (tuple, list)): 409 return _unstack_multiple(obj, level) 410 411 if isinstance(obj, DataFrame): 412 if isinstance(obj.index, MultiIndex): 413 return _unstack_frame(obj, level) 414 else: 415 return obj.T.stack(dropna=False) 416 else: 417 unstacker = _Unstacker(obj.values, obj.index, level=level) 418 return unstacker.get_result() 419 420 421 def _unstack_frame(obj, level): 422 from pandas.core.internals import BlockManager, make_block 423 424 if obj._is_mixed_type: 425 unstacker = _Unstacker(np.empty(obj.shape, dtype=bool), # dummy 426 obj.index, level=level, 427 value_columns=obj.columns) 428 new_columns = unstacker.get_new_columns() 429 new_index = unstacker.get_new_index() 430 new_axes = [new_columns, new_index] 431 432 new_blocks = [] 433 mask_blocks = [] 434 for blk in obj._data.blocks: 435 bunstacker = _Unstacker(blk.values.T, obj.index, level=level, 436 value_columns=blk.items) 437 new_items = bunstacker.get_new_columns() 438 new_values, mask = bunstacker.get_new_values() 439 440 mblk = make_block(mask.T, new_items, new_columns) 441 mask_blocks.append(mblk) 442 443 newb = make_block(new_values.T, new_items, new_columns) 444 new_blocks.append(newb) 445 446 result = DataFrame(BlockManager(new_blocks, new_axes)) 447 mask_frame = DataFrame(BlockManager(mask_blocks, new_axes)) 448 return result.ix[:, mask_frame.sum(0) > 0] 449 else: 450 unstacker = _Unstacker(obj.values, obj.index, level=level, 451 value_columns=obj.columns) 452 return unstacker.get_result() 453 454 455 def get_compressed_ids(labels, sizes): 456 # no overflow 457 if com._long_prod(sizes) < 2 ** 63: 458 group_index = get_group_index(labels, sizes) 459 comp_index, obs_ids = _compress_group_index(group_index) 460 else: 461 n = len(labels[0]) 462 mask = np.zeros(n, dtype=bool) 463 for v in labels: 464 mask |= v < 0 465 466 while com._long_prod(sizes) >= 2 ** 63: 467 i = len(sizes) 468 while com._long_prod(sizes[:i]) >= 2 ** 63: 469 i -= 1 470 471 rem_index, rem_ids = get_compressed_ids(labels[:i], 472 sizes[:i]) 473 sizes = [len(rem_ids)] + sizes[i:] 474 labels = [rem_index] + labels[i:] 475 476 return get_compressed_ids(labels, sizes) 477 478 return comp_index, obs_ids 479 480 481 def stack(frame, level=-1, dropna=True): 482 """ 483 Convert DataFrame to Series with multi-level Index. 
Columns become the 484 second level of the resulting hierarchical index 485 486 Returns 487 ------- 488 stacked : Series 489 """ 490 N, K = frame.shape 491 if isinstance(level, int) and level < 0: 492 level += frame.columns.nlevels 493 494 level = frame.columns._get_level_number(level) 495 496 if isinstance(frame.columns, MultiIndex): 497 return _stack_multi_columns(frame, level=level, dropna=dropna) 498 elif isinstance(frame.index, MultiIndex): 499 new_levels = list(frame.index.levels) 500 new_levels.append(frame.columns) 501 502 new_labels = [lab.repeat(K) for lab in frame.index.labels] 503 new_labels.append(np.tile(np.arange(K), N).ravel()) 504 505 new_names = list(frame.index.names) 506 new_names.append(frame.columns.name) 507 new_index = MultiIndex(levels=new_levels, labels=new_labels, 508 names=new_names) 509 else: 510 ilabels = np.arange(N).repeat(K) 511 clabels = np.tile(np.arange(K), N).ravel() 512 new_index = MultiIndex(levels=[frame.index, frame.columns], 513 labels=[ilabels, clabels], 514 names=[frame.index.name, frame.columns.name]) 515 516 new_values = frame.values.ravel() 517 if dropna: 518 mask = notnull(new_values) 519 new_values = new_values[mask] 520 new_index = new_index[mask] 521 return Series(new_values, index=new_index) 522 523 524 def _stack_multi_columns(frame, level=-1, dropna=True): 525 this = frame.copy() 526 527 # this makes life much simpler 528 if level != frame.columns.nlevels - 1: 529 # roll levels to put selected level at end 530 roll_columns = this.columns 531 for i in range(level, frame.columns.nlevels - 1): 532 roll_columns = roll_columns.swaplevel(i, i + 1) 533 this.columns = roll_columns 534 535 if not this.columns.is_lexsorted(): 536 this = this.sortlevel(0, axis=1) 537 538 # tuple list excluding level for grouping columns 539 if len(frame.columns.levels) > 2: 540 tuples = list(zip(*[lev.values.take(lab) 541 for lev, lab in zip(this.columns.levels[:-1], 542 this.columns.labels[:-1])])) 543 unique_groups = [key for key, _ in itertools.groupby(tuples)] 544 new_names = this.columns.names[:-1] 545 new_columns = MultiIndex.from_tuples(unique_groups, names=new_names) 546 else: 547 new_columns = unique_groups = this.columns.levels[0] 548 549 # time to ravel the values 550 new_data = {} 551 level_vals = this.columns.levels[-1] 552 levsize = len(level_vals) 553 drop_cols = [] 554 for key in unique_groups: 555 loc = this.columns.get_loc(key) 556 slice_len = loc.stop - loc.start 557 # can make more efficient? 558 559 if slice_len == 0: 560 drop_cols.append(key) 561 continue 562 elif slice_len != levsize: 563 chunk = this.ix[:, this.columns[loc]] 564 chunk.columns = level_vals.take(chunk.columns.labels[-1]) 565 value_slice = chunk.reindex(columns=level_vals).values 566 else: 567 if frame._is_mixed_type: 568 value_slice = this.ix[:, this.columns[loc]].values 569 else: 570 value_slice = this.values[:, loc] 571 572 new_data[key] = value_slice.ravel() 573 574 if len(drop_cols) > 0: 575 new_columns = new_columns - drop_cols 576 577 N = len(this) 578 579 if isinstance(this.index, MultiIndex): 580 new_levels = list(this.index.levels) 581 new_names = list(this.index.names) 582 new_labels = [lab.repeat(levsize) for lab in this.index.labels] 583 else: 584 new_levels = [this.index] 585 new_labels = [np.arange(N).repeat(levsize)] 586 new_names = [this.index.name] # something better? 
587 588 new_levels.append(frame.columns.levels[level]) 589 new_labels.append(np.tile(np.arange(levsize), N)) 590 new_names.append(frame.columns.names[level]) 591 592 new_index = MultiIndex(levels=new_levels, labels=new_labels, 593 names=new_names) 594 595 result = DataFrame(new_data, index=new_index, columns=new_columns) 596 597 # more efficient way to go about this? can do the whole masking biz but 598 # will only save a small amount of time... 599 if dropna: 600 result = result.dropna(axis=0, how='all') 601 602 return result 603 604 605 def melt(frame, id_vars=None, value_vars=None, 606 var_name=None, value_name='value', col_level=None): 607 """ 608 "Unpivots" a DataFrame from wide format to long format, optionally leaving 609 id variables set 610 611 Parameters 612 ---------- 613 frame : DataFrame 614 id_vars : tuple, list, or ndarray 615 value_vars : tuple, list, or ndarray 616 var_name : scalar, if None uses frame.column.name or 'variable' 617 value_name : scalar, default 'value' 618 col_level : scalar, if columns are a MultiIndex then use this level to melt 619 620 Examples 621 -------- 622 >>> import pandas as pd 623 >>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'}, 624 ... 'B': {0: 1, 1: 3, 2: 5}, 625 ... 'C': {0: 2, 1: 4, 2: 6}}) 626 627 >>> df 628 A B C 629 0 a 1 2 630 1 b 3 4 631 2 c 5 6 632 633 >>> melt(df, id_vars=['A'], value_vars=['B']) 634 A variable value 635 0 a B 1 636 1 b B 3 637 2 c B 5 638 639 >>> melt(df, id_vars=['A'], value_vars=['B'], 640 ... var_name='myVarname', value_name='myValname') 641 A myVarname myValname 642 0 a B 1 643 1 b B 3 644 2 c B 5 645 646 >>> df.columns = [list('ABC'), list('DEF')] 647 648 >>> melt(df, col_level=0, id_vars=['A'], value_vars=['B']) 649 A variable value 650 0 a B 1 651 1 b B 3 652 2 c B 5 653 654 >>> melt(df, id_vars=[('A', 'D')], value_vars=[('B', 'E')]) 655 (A, D) variable_0 variable_1 value 656 0 a B E 1 657 1 b B E 3 658 2 c B E 5 659 660 """ 661 # TODO: what about the existing index? 662 if id_vars is not None: 663 if not isinstance(id_vars, (tuple, list, np.ndarray)): 664 id_vars = [id_vars] 665 else: 666 id_vars = list(id_vars) 667 else: 668 id_vars = [] 669 670 if value_vars is not None: 671 if not isinstance(value_vars, (tuple, list, np.ndarray)): 672 value_vars = [value_vars] 673 frame = frame.ix[:, id_vars + value_vars] 674 else: 675 frame = frame.copy() 676 677 if col_level is not None: # allow list or other? 678 frame.columns = frame.columns.get_level_values(col_level) # frame is a copy 679 680 if var_name is None: 681 if isinstance(frame.columns, MultiIndex): 682 if len(frame.columns.names) == len(set(frame.columns.names)): 683 var_name = frame.columns.names 684 else: 685 var_name = ['variable_%s' % i for i in 686 range(len(frame.columns.names))] 687 else: 688 var_name = [frame.columns.name if frame.columns.name is not None 689 else 'variable'] 690 if isinstance(var_name, compat.string_types): 691 var_name = [var_name] 692 693 N, K = frame.shape 694 K -= len(id_vars) 695 696 mdata = {} 697 for col in id_vars: 698 mdata[col] = np.tile(frame.pop(col).values, K) 699 700 mcolumns = id_vars + var_name + [value_name] 701 702 mdata[value_name] = frame.values.ravel('F') 703 for i, col in enumerate(var_name): 704 # asanyarray will keep the columns as an Index 705 mdata[col] = np.asanyarray(frame.columns.get_level_values(i)).repeat(N) 706 707 return DataFrame(mdata, columns=mcolumns) 708 709 710 def lreshape(data, groups, dropna=True, label=None): 711 """ 712 Reshape long-format data to wide. 
Generalized inverse of DataFrame.pivot 713 714 Parameters 715 ---------- 716 data : DataFrame 717 groups : dict 718 {new_name : list_of_columns} 719 dropna : boolean, default True 720 721 Examples 722 -------- 723 >>> import pandas as pd 724 >>> data = pd.DataFrame({'hr1': [514, 573], 'hr2': [545, 526], 725 ... 'team': ['Red Sox', 'Yankees'], 726 ... 'year1': [2007, 2008], 'year2': [2008, 2008]}) 727 >>> data 728 hr1 hr2 team year1 year2 729 0 514 545 Red Sox 2007 2008 730 1 573 526 Yankees 2007 2008 731 732 >>> pd.lreshape(data, {'year': ['year1', 'year2'], 'hr': ['hr1', 'hr2']}) 733 team hr year 734 0 Red Sox 514 2007 735 1 Yankees 573 2007 736 2 Red Sox 545 2008 737 3 Yankees 526 2008 738 739 Returns 740 ------- 741 reshaped : DataFrame 742 """ 743 if isinstance(groups, dict): 744 keys = list(groups.keys()) 745 values = list(groups.values()) 746 else: 747 keys, values = zip(*groups) 748 749 all_cols = list(set.union(*[set(x) for x in values])) 750 id_cols = list(data.columns.diff(all_cols)) 751 752 K = len(values[0]) 753 754 for seq in values: 755 if len(seq) != K: 756 raise ValueError('All column lists must be same length') 757 758 mdata = {} 759 pivot_cols = [] 760 761 for target, names in zip(keys, values): 762 mdata[target] = com._concat_compat([data[col].values for col in names]) 763 pivot_cols.append(target) 764 765 for col in id_cols: 766 mdata[col] = np.tile(data[col].values, K) 767 768 if dropna: 769 mask = np.ones(len(mdata[pivot_cols[0]]), dtype=bool) 770 for c in pivot_cols: 771 mask &= notnull(mdata[c]) 772 if not mask.all(): 773 mdata = dict((k, v[mask]) for k, v in compat.iteritems(mdata)) 774 775 return DataFrame(mdata, columns=id_cols + pivot_cols) 776 777 778 def convert_dummies(data, cat_variables, prefix_sep='_'): 779 """ 780 Compute DataFrame with specified columns converted to dummy variables (0 / 781 1). Result columns will be prefixed with the column name, then the level 782 name, e.g. 'A_foo' for column A and level foo 783 784 Parameters 785 ---------- 786 data : DataFrame 787 cat_variables : list-like 788 Must be column names in the DataFrame 789 prefix_sep : string, default '_' 790 String to use to separate column name from dummy level 791 792 Returns 793 ------- 794 dummies : DataFrame 795 """ 796 result = data.drop(cat_variables, axis=1) 797 for variable in cat_variables: 798 dummies = get_dummies(data[variable], prefix=variable, 799 prefix_sep=prefix_sep) 800 result = result.join(dummies) 801 return result 802 803 804 def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False): 805 """ 806 Convert categorical variable into dummy/indicator variables 807 808 Parameters 809 ---------- 810 data : array-like or Series 811 prefix : string, default None 812 String to append DataFrame column names 813 prefix_sep : string, default '_' 814 If appending prefix, separator/delimiter to use 815 dummy_na : bool, default False 816 Add a column to indicate NaNs, if False NaNs are ignored. 
817 818 Returns 819 ------- 820 dummies : DataFrame 821 822 Examples 823 -------- 824 >>> s = pd.Series(list('abca')) 825 826 >>> get_dummies(s) 827 a b c 828 0 1 0 0 829 1 0 1 0 830 2 0 0 1 831 3 1 0 0 832 833 >>> s1 = ['a', 'b', np.nan] 834 835 >>> get_dummies(s1) 836 a b 837 0 1 0 838 1 0 1 839 2 0 0 840 841 >>> get_dummies(s1, dummy_na=True) 842 a b NaN 843 0 1 0 0 844 1 0 1 0 845 2 0 0 1 846 847 """ 848 cat = Categorical.from_array(Series(data)) # Series avoids inconsistent NaN handling 849 levels = cat.levels 850 851 # if all NaN 852 if not dummy_na and len(levels) == 0: 853 if isinstance(data, Series): 854 index = data.index 855 else: 856 index = np.arange(len(data)) 857 return DataFrame(index=index) 858 859 number_of_cols = len(levels) 860 if dummy_na: 861 number_of_cols += 1 862 863 dummy_mat = np.eye(number_of_cols).take(cat.labels, axis=0) 864 865 if dummy_na: 866 levels = np.append(cat.levels, np.nan) 867 else: 868 # reset NaN GH4446 869 dummy_mat[cat.labels == -1] = 0 870 871 if prefix is not None: 872 dummy_cols = ['%s%s%s' % (prefix, prefix_sep, str(v)) 873 for v in levels] 874 else: 875 dummy_cols = levels 876 877 if isinstance(data, Series): 878 index = data.index 879 else: 880 index = None 881 882 return DataFrame(dummy_mat, index=index, columns=dummy_cols) 883 884 885 def make_axis_dummies(frame, axis='minor', transform=None): 886 """ 887 Construct 1-0 dummy variables corresponding to designated axis 888 labels 889 890 Parameters 891 ---------- 892 frame : DataFrame 893 axis : {'major', 'minor'}, default 'minor' 894 transform : function, default None 895 Function to apply to axis labels first. For example, to 896 get "day of week" dummies in a time series regression 897 you might call:: 898 899 make_axis_dummies(panel, axis='major', 900 transform=lambda d: d.weekday()) 901 Returns 902 ------- 903 dummies : DataFrame 904 Column names taken from chosen axis 905 """ 906 numbers = { 907 'major': 0, 908 'minor': 1 909 } 910 num = numbers.get(axis, axis) 911 912 items = frame.index.levels[num] 913 labels = frame.index.labels[num] 914 if transform is not None: 915 mapped_items = items.map(transform) 916 cat = Categorical.from_array(mapped_items.take(labels)) 917 labels = cat.labels 918 items = cat.levels 919 920 values = np.eye(len(items), dtype=float) 921 values = values.take(labels, axis=0) 922 923 return DataFrame(values, columns=items, index=frame.index) 924 925 926 def block2d_to_blocknd(values, items, shape, labels, ref_items=None): 927 """ pivot to the labels shape """ 928 from pandas.core.internals import make_block 929 panel_shape = (len(items),) + shape 930 931 # TODO: lexsort depth needs to be 2!! 932 933 # Create observation selection vector using major and minor 934 # labels, for converting to panel format. 
935 selector = factor_indexer(shape[1:], labels) 936 mask = np.zeros(np.prod(shape), dtype=bool) 937 mask.put(selector, True) 938 939 if mask.all(): 940 pvalues = np.empty(panel_shape, dtype=values.dtype) 941 else: 942 dtype, fill_value = _maybe_promote(values.dtype) 943 pvalues = np.empty(panel_shape, dtype=dtype) 944 pvalues.fill(fill_value) 945 946 values = values 947 for i in range(len(items)): 948 pvalues[i].flat[mask] = values[:, i] 949 950 if ref_items is None: 951 ref_items = items 952 953 return make_block(pvalues, items, ref_items) 954 955 956 def factor_indexer(shape, labels): 957 """ given a tuple of shape and a list of Categorical labels, return the expanded label indexer """ 958 mult = np.array(shape)[::-1].cumprod()[::-1] 959 return com._ensure_platform_int(np.sum(np.array(labels).T * np.append(mult, [1]), axis=1).T) 960 [end of pandas/core/reshape.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
ec28c9de150cef1fecdf3c580c68ce7b501e1b33
DataFrame.sort_index does not use ascending when the value is a list with a single element

```
In [7]: d
Out[7]: {'one': [1.0, 2.0, 3.0, 4.0], 'two': [4.0, 3.0, 2.0, 1.0]}

In [11]: pd.DataFrame(d).sort_index(by=['two'], ascending=[0,])
Out[11]:
   one  two
3    4    1
2    3    2
1    2    3
0    1    4

In [12]: pd.DataFrame(d).sort_index(by=['two'], ascending=0)
Out[12]:
   one  two
0    1    4
1    2    3
2    3    2
3    4    1
```
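Stated compactly, the expectation is that a one-element list for `ascending` should behave exactly like the equivalent scalar. The snippet below is only a reproduction aid built from the session above; it assumes the 0.12-era `sort_index(by=..., ascending=...)` signature and is not part of any fix.

```python
# Reproduction sketch (assumes the old DataFrame.sort_index(by=...) API).
import pandas as pd

d = {'one': [1.0, 2.0, 3.0, 4.0], 'two': [4.0, 3.0, 2.0, 1.0]}
df = pd.DataFrame(d)

scalar_form = df.sort_index(by=['two'], ascending=False)   # 'two' descending
list_form = df.sort_index(by=['two'], ascending=[False])   # should match

# With the bug present the list form comes back ascending, so this prints
# False; once ascending=[False] is honoured it prints True.
print((scalar_form.index == list_form.index).all())
```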
Take it back; I was looking at the wrong column. The main issue is that ascending is interpreted incorrectly. Trivial to fix though.

Thank you for the reply. I knew how to work around the issue, but if you compute the ascending elements programmatically, you have to add a special case for the single-column case, which is not good.

Well, the main problem is that you're not actually passing what you think. When you pass `ascending=0` with a single column, it gets interpreted as `ascending=False` (because 0 is falsey). It works the same way with multiple columns too. Are you thinking that passing `pd.DataFrame(d).sort_index(by=['two'], ascending=0)` should be equivalent to `pd.DataFrame(d).sort_index(by=['two'], ascending=[0,])`? For example, this isn't supported: `pd.DataFrame(d).sort_index(by=['one', 'two'], ascending=0)`. Basically, the other option for this would be to raise an error because it's not iterable or a bool...

@jburroni please close if this answers your concern. If not, I'll take another look. I think it might just be okay (otherwise we have to be less Pythonic when testing for truthiness).

Here is the thing. Using this:

```
In [19]: df.sort_index(by=['two', 'one'], ascending=[0,0])
Out[19]:
   one  two
0    1    4
1    2    3
2    3    2
3    4    1
```

but, using only 'two':

```
In [20]: df.sort_index(by=['two'], ascending=[0])
Out[20]:
   one  two
3    4    1
2    3    2
1    2    3
0    1    4
```

And this is not consistent. That is the issue I'm most concerned about, as it arises when you programmatically define the sort_index columns and ascending orders.

@jburroni yes, that is a bug. I'll fix it.

@jtratner thank you

@jburroni btw - why do you use [0, 0] as opposed to [False, False]?

I've just copied the example from: http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.sort_index.html

```
>>> result = df.sort_index(by=['A', 'B'], ascending=[1, 0])
```

Ah okay. I'll change that too for clarity... it can only be True or False anyway (but True and False are actually equivalent to 1 and 0 respectively).
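The thread converges on two rules: a bare scalar for `ascending` applies to every sort key, and a list must match `by` in length or raise. A hypothetical sketch of that check is below; the helper name `_validate_ascending` is illustrative only and not actual pandas code.

```python
# Illustrative only: the shape of the validation the discussion settles on.
def _validate_ascending(by, ascending):
    # A scalar (True/False, or the equivalent 1/0) applies to every key.
    if isinstance(ascending, (list, tuple)):
        if len(ascending) != len(by):
            raise ValueError('Length of ascending (%d) != length of by (%d)'
                             % (len(ascending), len(by)))
        if len(by) == 1:
            # Unwrap so downstream truthiness tests see a plain bool.
            ascending = ascending[0]
    return ascending
```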
2013-09-15T16:52:08Z
<patch> diff --git a/doc/source/release.rst b/doc/source/release.rst --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -424,6 +424,10 @@ Bug Fixes across different versions of matplotlib (:issue:`4789`) - Suppressed DeprecationWarning associated with internal calls issued by repr() (:issue:`4391`) - Fixed an issue with a duplicate index and duplicate selector with ``.loc`` (:issue:`4825`) + - Fixed an issue with ``DataFrame.sort_index`` where, when sorting by a + single column and passing a list for ``ascending``, the argument for + ``ascending`` was being interpreted as ``True`` (:issue:`4839`, + :issue:`4846`) pandas 0.12.0 ------------- diff --git a/pandas/core/frame.py b/pandas/core/frame.py --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -2856,7 +2856,7 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False, Examples -------- - >>> result = df.sort_index(by=['A', 'B'], ascending=[1, 0]) + >>> result = df.sort_index(by=['A', 'B'], ascending=[True, False]) Returns ------- @@ -2875,6 +2875,9 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False, raise ValueError('When sorting by column, axis must be 0 (rows)') if not isinstance(by, (tuple, list)): by = [by] + if com._is_sequence(ascending) and len(by) != len(ascending): + raise ValueError('Length of ascending (%d) != length of by' + ' (%d)' % (len(ascending), len(by))) if len(by) > 1: keys = [] @@ -2900,6 +2903,8 @@ def trans(v): raise ValueError('Cannot sort by duplicate column %s' % str(by)) indexer = k.argsort(kind=kind) + if isinstance(ascending, (tuple, list)): + ascending = ascending[0] if not ascending: indexer = indexer[::-1] elif isinstance(labels, MultiIndex): </patch>
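In terms of observable behaviour, the diff above amounts to the following (a hedged sketch, not one of the project's tests): a one-element list is honoured, and a length mismatch raises instead of being silently reinterpreted as True.

```python
# Expected behaviour once the patch is applied (sketch only).
import pandas as pd

df = pd.DataFrame({'one': [1.0, 2.0, 3.0, 4.0], 'two': [4.0, 3.0, 2.0, 1.0]})

df.sort_index(by=['two'], ascending=[False])   # now sorts 'two' descending

try:
    df.sort_index(by=['two'], ascending=[False, True])
except ValueError as exc:
    print(exc)   # Length of ascending (2) != length of by (1)
```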
[]
[]
ipython__ipython-11650
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Volume normalization in IPython.display.Audio should be optional I am manipulating audio using numpy with IPython Notebook and want to use IPython.display.Audio for listening to the numpy arrays. Unfortunately, auto-normalization tampers with the results. Example: ``` # Generate a sound import IPython import numpy as np framerate = 44100 t = np.linspace(0,5,framerate*5) tone = np.sin(2*np.pi*220*t) antitone = np.sin(2*np.pi*220*t + np.pi) IPython.display.Audio(tone+antitone, rate=framerate) ``` Adding a sin wav to itself shifted by 180 deg should give total silence. Instead, auto-normalization amplifies the floating point errors. The problem is in the IPython.lib.display.Audio _make_wav method, which always normalizes the numpy array(see 'scaled' variable). I think that we should have a normalize keyword argument, so that Audio can be used for audio analysis. Something like: ``` Audio(tone+antitone, rate=framerate, normalize=False) ``` </issue> <code> [start of README.rst] 1 .. image:: https://codecov.io/github/ipython/ipython/coverage.svg?branch=master 2 :target: https://codecov.io/github/ipython/ipython?branch=master 3 4 .. image:: https://img.shields.io/pypi/v/IPython.svg 5 :target: https://pypi.python.org/pypi/ipython 6 7 .. image:: https://img.shields.io/travis/ipython/ipython.svg 8 :target: https://travis-ci.org/ipython/ipython 9 10 .. image:: https://www.codetriage.com/ipython/ipython/badges/users.svg 11 :target: https://www.codetriage.com/ipython/ipython/ 12 13 =========================================== 14 IPython: Productive Interactive Computing 15 =========================================== 16 17 Overview 18 ======== 19 20 Welcome to IPython. Our full documentation is available on `ipython.readthedocs.io 21 <https://ipython.readthedocs.io/en/stable/>`_ and contains information on how to install, use, and 22 contribute to the project. 23 24 **IPython versions and Python Support** 25 26 **IPython 7.0** requires Python version 3.5 and above. 27 28 **IPython 6.x** requires Python version 3.3 and above. 29 30 **IPython 5.x LTS** is the compatible release for Python 2.7. 31 If you require Python 2 support, you **must** use IPython 5.x LTS. Please 32 update your project configurations and requirements as necessary. 33 34 35 The Notebook, Qt console and a number of other pieces are now parts of *Jupyter*. 36 See the `Jupyter installation docs <https://jupyter.readthedocs.io/en/latest/install.html>`__ 37 if you want to use these. 38 39 40 41 42 Development and Instant running 43 =============================== 44 45 You can find the latest version of the development documentation on `readthedocs 46 <https://ipython.readthedocs.io/en/latest/>`_. 47 48 You can run IPython from this directory without even installing it system-wide 49 by typing at the terminal:: 50 51 $ python -m IPython 52 53 Or see the `development installation docs 54 <https://ipython.readthedocs.io/en/latest/install/install.html#installing-the-development-version>`_ 55 for the latest revision on read the docs. 56 57 Documentation and installation instructions for older version of IPython can be 58 found on the `IPython website <https://ipython.org/documentation.html>`_ 59 60 61 62 IPython requires Python version 3 or above 63 ========================================== 64 65 Starting with version 6.0, IPython does not support Python 2.7, 3.0, 3.1, or 66 3.2. 
67 68 For a version compatible with Python 2.7, please install the 5.x LTS Long Term 69 Support version. 70 71 If you are encountering this error message you are likely trying to install or 72 use IPython from source. You need to checkout the remote 5.x branch. If you are 73 using git the following should work:: 74 75 $ git fetch origin 76 $ git checkout 5.x 77 78 If you encounter this error message with a regular install of IPython, then you 79 likely need to update your package manager, for example if you are using `pip` 80 check the version of pip with:: 81 82 $ pip --version 83 84 You will need to update pip to the version 9.0.1 or greater. If you are not using 85 pip, please inquiry with the maintainers of the package for your package 86 manager. 87 88 For more information see one of our blog posts: 89 90 https://blog.jupyter.org/release-of-ipython-5-0-8ce60b8d2e8e 91 92 As well as the following Pull-Request for discussion: 93 94 https://github.com/ipython/ipython/pull/9900 95 96 This error does also occur if you are invoking ``setup.py`` directly – which you 97 should not – or are using ``easy_install`` If this is the case, use ``pip 98 install .`` instead of ``setup.py install`` , and ``pip install -e .`` instead 99 of ``setup.py develop`` If you are depending on IPython as a dependency you may 100 also want to have a conditional dependency on IPython depending on the Python 101 version:: 102 103 install_req = ['ipython'] 104 if sys.version_info[0] < 3 and 'bdist_wheel' not in sys.argv: 105 install_req.remove('ipython') 106 install_req.append('ipython<6') 107 108 setup( 109 ... 110 install_requires=install_req 111 ) 112 [end of README.rst] [start of IPython/core/usage.py] 1 # -*- coding: utf-8 -*- 2 """Usage information for the main IPython applications. 3 """ 4 #----------------------------------------------------------------------------- 5 # Copyright (C) 2008-2011 The IPython Development Team 6 # Copyright (C) 2001-2007 Fernando Perez. <[email protected]> 7 # 8 # Distributed under the terms of the BSD License. The full license is in 9 # the file COPYING, distributed as part of this software. 10 #----------------------------------------------------------------------------- 11 12 import sys 13 from IPython.core import release 14 15 cl_usage = """\ 16 ========= 17 IPython 18 ========= 19 20 Tools for Interactive Computing in Python 21 ========================================= 22 23 A Python shell with automatic history (input and output), dynamic object 24 introspection, easier configuration, command completion, access to the 25 system shell and more. IPython can also be embedded in running programs. 26 27 28 Usage 29 30 ipython [subcommand] [options] [-c cmd | -m mod | file] [--] [arg] ... 31 32 If invoked with no options, it executes the file and exits, passing the 33 remaining arguments to the script, just as if you had specified the same 34 command with python. You may need to specify `--` before args to be passed 35 to the script, to prevent IPython from attempting to parse them. If you 36 specify the option `-i` before the filename, it will enter an interactive 37 IPython session after running the script, rather than exiting. Files ending 38 in .py will be treated as normal Python, but files ending in .ipy can 39 contain special IPython syntax (magic commands, shell expansions, etc.). 40 41 Almost all configuration in IPython is available via the command-line. Do 42 `ipython --help-all` to see all available options. 
For persistent 43 configuration, look into your `ipython_config.py` configuration file for 44 details. 45 46 This file is typically installed in the `IPYTHONDIR` directory, and there 47 is a separate configuration directory for each profile. The default profile 48 directory will be located in $IPYTHONDIR/profile_default. IPYTHONDIR 49 defaults to to `$HOME/.ipython`. For Windows users, $HOME resolves to 50 C:\\Users\\YourUserName in most instances. 51 52 To initialize a profile with the default configuration file, do:: 53 54 $> ipython profile create 55 56 and start editing `IPYTHONDIR/profile_default/ipython_config.py` 57 58 In IPython's documentation, we will refer to this directory as 59 `IPYTHONDIR`, you can change its default location by creating an 60 environment variable with this name and setting it to the desired path. 61 62 For more information, see the manual available in HTML and PDF in your 63 installation, or online at https://ipython.org/documentation.html. 64 """ 65 66 interactive_usage = """ 67 IPython -- An enhanced Interactive Python 68 ========================================= 69 70 IPython offers a fully compatible replacement for the standard Python 71 interpreter, with convenient shell features, special commands, command 72 history mechanism and output results caching. 73 74 At your system command line, type 'ipython -h' to see the command line 75 options available. This document only describes interactive features. 76 77 GETTING HELP 78 ------------ 79 80 Within IPython you have various way to access help: 81 82 ? -> Introduction and overview of IPython's features (this screen). 83 object? -> Details about 'object'. 84 object?? -> More detailed, verbose information about 'object'. 85 %quickref -> Quick reference of all IPython specific syntax and magics. 86 help -> Access Python's own help system. 87 88 If you are in terminal IPython you can quit this screen by pressing `q`. 89 90 91 MAIN FEATURES 92 ------------- 93 94 * Access to the standard Python help with object docstrings and the Python 95 manuals. Simply type 'help' (no quotes) to invoke it. 96 97 * Magic commands: type %magic for information on the magic subsystem. 98 99 * System command aliases, via the %alias command or the configuration file(s). 100 101 * Dynamic object information: 102 103 Typing ?word or word? prints detailed information about an object. Certain 104 long strings (code, etc.) get snipped in the center for brevity. 105 106 Typing ??word or word?? gives access to the full information without 107 snipping long strings. Strings that are longer than the screen are printed 108 through the less pager. 109 110 The ?/?? system gives access to the full source code for any object (if 111 available), shows function prototypes and other useful information. 112 113 If you just want to see an object's docstring, type '%pdoc object' (without 114 quotes, and without % if you have automagic on). 115 116 * Tab completion in the local namespace: 117 118 At any time, hitting tab will complete any available python commands or 119 variable names, and show you a list of the possible completions if there's 120 no unambiguous one. It will also complete filenames in the current directory. 121 122 * Search previous command history in multiple ways: 123 124 - Start typing, and then use arrow keys up/down or (Ctrl-p/Ctrl-n) to search 125 through the history items that match what you've typed so far. 126 127 - Hit Ctrl-r: opens a search prompt. 
Begin typing and the system searches 128 your history for lines that match what you've typed so far, completing as 129 much as it can. 130 131 - %hist: search history by index. 132 133 * Persistent command history across sessions. 134 135 * Logging of input with the ability to save and restore a working session. 136 137 * System shell with !. Typing !ls will run 'ls' in the current directory. 138 139 * The reload command does a 'deep' reload of a module: changes made to the 140 module since you imported will actually be available without having to exit. 141 142 * Verbose and colored exception traceback printouts. See the magic xmode and 143 xcolor functions for details (just type %magic). 144 145 * Input caching system: 146 147 IPython offers numbered prompts (In/Out) with input and output caching. All 148 input is saved and can be retrieved as variables (besides the usual arrow 149 key recall). 150 151 The following GLOBAL variables always exist (so don't overwrite them!): 152 _i: stores previous input. 153 _ii: next previous. 154 _iii: next-next previous. 155 _ih : a list of all input _ih[n] is the input from line n. 156 157 Additionally, global variables named _i<n> are dynamically created (<n> 158 being the prompt counter), such that _i<n> == _ih[<n>] 159 160 For example, what you typed at prompt 14 is available as _i14 and _ih[14]. 161 162 You can create macros which contain multiple input lines from this history, 163 for later re-execution, with the %macro function. 164 165 The history function %hist allows you to see any part of your input history 166 by printing a range of the _i variables. Note that inputs which contain 167 magic functions (%) appear in the history with a prepended comment. This is 168 because they aren't really valid Python code, so you can't exec them. 169 170 * Output caching system: 171 172 For output that is returned from actions, a system similar to the input 173 cache exists but using _ instead of _i. Only actions that produce a result 174 (NOT assignments, for example) are cached. If you are familiar with 175 Mathematica, IPython's _ variables behave exactly like Mathematica's % 176 variables. 177 178 The following GLOBAL variables always exist (so don't overwrite them!): 179 _ (one underscore): previous output. 180 __ (two underscores): next previous. 181 ___ (three underscores): next-next previous. 182 183 Global variables named _<n> are dynamically created (<n> being the prompt 184 counter), such that the result of output <n> is always available as _<n>. 185 186 Finally, a global dictionary named _oh exists with entries for all lines 187 which generated output. 188 189 * Directory history: 190 191 Your history of visited directories is kept in the global list _dh, and the 192 magic %cd command can be used to go to any entry in that list. 193 194 * Auto-parentheses and auto-quotes (adapted from Nathan Gray's LazyPython) 195 196 1. Auto-parentheses 197 198 Callable objects (i.e. functions, methods, etc) can be invoked like 199 this (notice the commas between the arguments):: 200 201 In [1]: callable_ob arg1, arg2, arg3 202 203 and the input will be translated to this:: 204 205 callable_ob(arg1, arg2, arg3) 206 207 This feature is off by default (in rare cases it can produce 208 undesirable side-effects), but you can activate it at the command-line 209 by starting IPython with `--autocall 1`, set it permanently in your 210 configuration file, or turn on at runtime with `%autocall 1`. 
211 212 You can force auto-parentheses by using '/' as the first character 213 of a line. For example:: 214 215 In [1]: /globals # becomes 'globals()' 216 217 Note that the '/' MUST be the first character on the line! This 218 won't work:: 219 220 In [2]: print /globals # syntax error 221 222 In most cases the automatic algorithm should work, so you should 223 rarely need to explicitly invoke /. One notable exception is if you 224 are trying to call a function with a list of tuples as arguments (the 225 parenthesis will confuse IPython):: 226 227 In [1]: zip (1,2,3),(4,5,6) # won't work 228 229 but this will work:: 230 231 In [2]: /zip (1,2,3),(4,5,6) 232 ------> zip ((1,2,3),(4,5,6)) 233 Out[2]= [(1, 4), (2, 5), (3, 6)] 234 235 IPython tells you that it has altered your command line by 236 displaying the new command line preceded by -->. e.g.:: 237 238 In [18]: callable list 239 -------> callable (list) 240 241 2. Auto-Quoting 242 243 You can force auto-quoting of a function's arguments by using ',' as 244 the first character of a line. For example:: 245 246 In [1]: ,my_function /home/me # becomes my_function("/home/me") 247 248 If you use ';' instead, the whole argument is quoted as a single 249 string (while ',' splits on whitespace):: 250 251 In [2]: ,my_function a b c # becomes my_function("a","b","c") 252 In [3]: ;my_function a b c # becomes my_function("a b c") 253 254 Note that the ',' MUST be the first character on the line! This 255 won't work:: 256 257 In [4]: x = ,my_function /home/me # syntax error 258 """ 259 260 interactive_usage_min = """\ 261 An enhanced console for Python. 262 Some of its features are: 263 - Tab completion in the local namespace. 264 - Logging of input, see command-line options. 265 - System shell escape via ! , eg !ls. 266 - Magic commands, starting with a % (like %ls, %pwd, %cd, etc.) 267 - Keeps track of locally defined variables via %who, %whos. 268 - Show object information with a ? eg ?x or x? (use ?? for more info). 269 """ 270 271 quick_reference = r""" 272 IPython -- An enhanced Interactive Python - Quick Reference Card 273 ================================================================ 274 275 obj?, obj?? : Get help, or more help for object (also works as 276 ?obj, ??obj). 277 ?foo.*abc* : List names in 'foo' containing 'abc' in them. 278 %magic : Information about IPython's 'magic' % functions. 279 280 Magic functions are prefixed by % or %%, and typically take their arguments 281 without parentheses, quotes or even commas for convenience. Line magics take a 282 single % and cell magics are prefixed with two %%. 283 284 Example magic function calls: 285 286 %alias d ls -F : 'd' is now an alias for 'ls -F' 287 alias d ls -F : Works if 'alias' not a python name 288 alist = %alias : Get list of aliases to 'alist' 289 cd /usr/share : Obvious. cd -<tab> to choose from visited dirs. 290 %cd?? : See help AND source for magic %cd 291 %timeit x=10 : time the 'x=10' statement with high precision. 292 %%timeit x=2**100 293 x**100 : time 'x**100' with a setup of 'x=2**100'; setup code is not 294 counted. This is an example of a cell magic. 295 296 System commands: 297 298 !cp a.txt b/ : System command escape, calls os.system() 299 cp a.txt b/ : after %rehashx, most system commands work without ! 
300 cp ${f}.txt $bar : Variable expansion in magics and system commands 301 files = !ls /usr : Capture system command output 302 files.s, files.l, files.n: "a b c", ['a','b','c'], 'a\nb\nc' 303 304 History: 305 306 _i, _ii, _iii : Previous, next previous, next next previous input 307 _i4, _ih[2:5] : Input history line 4, lines 2-4 308 exec _i81 : Execute input history line #81 again 309 %rep 81 : Edit input history line #81 310 _, __, ___ : previous, next previous, next next previous output 311 _dh : Directory history 312 _oh : Output history 313 %hist : Command history of current session. 314 %hist -g foo : Search command history of (almost) all sessions for 'foo'. 315 %hist -g : Command history of (almost) all sessions. 316 %hist 1/2-8 : Command history containing lines 2-8 of session 1. 317 %hist 1/ ~2/ : Command history of session 1 and 2 sessions before current. 318 %hist ~8/1-~6/5 : Command history from line 1 of 8 sessions ago to 319 line 5 of 6 sessions ago. 320 %edit 0/ : Open editor to execute code with history of current session. 321 322 Autocall: 323 324 f 1,2 : f(1,2) # Off by default, enable with %autocall magic. 325 /f 1,2 : f(1,2) (forced autoparen) 326 ,f 1 2 : f("1","2") 327 ;f 1 2 : f("1 2") 328 329 Remember: TAB completion works in many contexts, not just file names 330 or python names. 331 332 The following magic functions are currently available: 333 334 """ 335 336 default_banner_parts = ["Python %s\n"%sys.version.split("\n")[0], 337 "Type 'copyright', 'credits' or 'license' for more information\n" , 338 "IPython {version} -- An enhanced Interactive Python. Type '?' for help.\n".format(version=release.version), 339 ] 340 341 default_banner = ''.join(default_banner_parts) 342 [end of IPython/core/usage.py] [start of IPython/lib/display.py] 1 """Various display related classes. 2 3 Authors : MinRK, gregcaporaso, dannystaple 4 """ 5 from html import escape as html_escape 6 from os.path import exists, isfile, splitext, abspath, join, isdir 7 from os import walk, sep, fsdecode 8 9 from IPython.core.display import DisplayObject, TextDisplayObject 10 11 __all__ = ['Audio', 'IFrame', 'YouTubeVideo', 'VimeoVideo', 'ScribdDocument', 12 'FileLink', 'FileLinks', 'Code'] 13 14 15 class Audio(DisplayObject): 16 """Create an audio object. 17 18 When this object is returned by an input cell or passed to the 19 display function, it will result in Audio controls being displayed 20 in the frontend (only works in the notebook). 21 22 Parameters 23 ---------- 24 data : numpy array, list, unicode, str or bytes 25 Can be one of 26 27 * Numpy 1d array containing the desired waveform (mono) 28 * Numpy 2d array containing waveforms for each channel. 29 Shape=(NCHAN, NSAMPLES). For the standard channel order, see 30 http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308(v=vs.85).aspx 31 * List of float or integer representing the waveform (mono) 32 * String containing the filename 33 * Bytestring containing raw PCM data or 34 * URL pointing to a file on the web. 35 36 If the array option is used, the waveform will be normalized. 37 38 If a filename or url is used, the format support will be browser 39 dependent. 40 url : unicode 41 A URL to download the data from. 42 filename : unicode 43 Path to a local file to load the data from. 44 embed : boolean 45 Should the audio data be embedded using a data URI (True) or should 46 the original source be referenced. Set this to True if you want the 47 audio to playable later with no internet connection in the notebook. 
48 49 Default is `True`, unless the keyword argument `url` is set, then 50 default value is `False`. 51 rate : integer 52 The sampling rate of the raw data. 53 Only required when data parameter is being used as an array 54 autoplay : bool 55 Set to True if the audio should immediately start playing. 56 Default is `False`. 57 58 Examples 59 -------- 60 :: 61 62 # Generate a sound 63 import numpy as np 64 framerate = 44100 65 t = np.linspace(0,5,framerate*5) 66 data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t) 67 Audio(data,rate=framerate) 68 69 # Can also do stereo or more channels 70 dataleft = np.sin(2*np.pi*220*t) 71 dataright = np.sin(2*np.pi*224*t) 72 Audio([dataleft, dataright],rate=framerate) 73 74 Audio("http://www.nch.com.au/acm/8k16bitpcm.wav") # From URL 75 Audio(url="http://www.w3schools.com/html/horse.ogg") 76 77 Audio('/path/to/sound.wav') # From file 78 Audio(filename='/path/to/sound.ogg') 79 80 Audio(b'RAW_WAV_DATA..) # From bytes 81 Audio(data=b'RAW_WAV_DATA..) 82 83 """ 84 _read_flags = 'rb' 85 86 def __init__(self, data=None, filename=None, url=None, embed=None, rate=None, autoplay=False): 87 if filename is None and url is None and data is None: 88 raise ValueError("No image data found. Expecting filename, url, or data.") 89 if embed is False and url is None: 90 raise ValueError("No url found. Expecting url when embed=False") 91 92 if url is not None and embed is not True: 93 self.embed = False 94 else: 95 self.embed = True 96 self.autoplay = autoplay 97 super(Audio, self).__init__(data=data, url=url, filename=filename) 98 99 if self.data is not None and not isinstance(self.data, bytes): 100 self.data = self._make_wav(data,rate) 101 102 def reload(self): 103 """Reload the raw data from file or URL.""" 104 import mimetypes 105 if self.embed: 106 super(Audio, self).reload() 107 108 if self.filename is not None: 109 self.mimetype = mimetypes.guess_type(self.filename)[0] 110 elif self.url is not None: 111 self.mimetype = mimetypes.guess_type(self.url)[0] 112 else: 113 self.mimetype = "audio/wav" 114 115 def _make_wav(self, data, rate): 116 """ Transform a numpy array to a PCM bytestring """ 117 import struct 118 from io import BytesIO 119 import wave 120 121 try: 122 import numpy as np 123 124 data = np.array(data, dtype=float) 125 if len(data.shape) == 1: 126 nchan = 1 127 elif len(data.shape) == 2: 128 # In wave files,channels are interleaved. E.g., 129 # "L1R1L2R2..." for stereo. 
See 130 # http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308(v=vs.85).aspx 131 # for channel ordering 132 nchan = data.shape[0] 133 data = data.T.ravel() 134 else: 135 raise ValueError('Array audio input must be a 1D or 2D array') 136 scaled = np.int16(data/np.max(np.abs(data))*32767).tolist() 137 except ImportError: 138 # check that it is a "1D" list 139 idata = iter(data) # fails if not an iterable 140 try: 141 iter(idata.next()) 142 raise TypeError('Only lists of mono audio are ' 143 'supported if numpy is not installed') 144 except TypeError: 145 # this means it's not a nested list, which is what we want 146 pass 147 maxabsvalue = float(max([abs(x) for x in data])) 148 scaled = [int(x/maxabsvalue*32767) for x in data] 149 nchan = 1 150 151 fp = BytesIO() 152 waveobj = wave.open(fp,mode='wb') 153 waveobj.setnchannels(nchan) 154 waveobj.setframerate(rate) 155 waveobj.setsampwidth(2) 156 waveobj.setcomptype('NONE','NONE') 157 waveobj.writeframes(b''.join([struct.pack('<h',x) for x in scaled])) 158 val = fp.getvalue() 159 waveobj.close() 160 161 return val 162 163 def _data_and_metadata(self): 164 """shortcut for returning metadata with url information, if defined""" 165 md = {} 166 if self.url: 167 md['url'] = self.url 168 if md: 169 return self.data, md 170 else: 171 return self.data 172 173 def _repr_html_(self): 174 src = """ 175 <audio controls="controls" {autoplay}> 176 <source src="{src}" type="{type}" /> 177 Your browser does not support the audio element. 178 </audio> 179 """ 180 return src.format(src=self.src_attr(),type=self.mimetype, autoplay=self.autoplay_attr()) 181 182 def src_attr(self): 183 import base64 184 if self.embed and (self.data is not None): 185 data = base64=base64.b64encode(self.data).decode('ascii') 186 return """data:{type};base64,{base64}""".format(type=self.mimetype, 187 base64=data) 188 elif self.url is not None: 189 return self.url 190 else: 191 return "" 192 193 def autoplay_attr(self): 194 if(self.autoplay): 195 return 'autoplay="autoplay"' 196 else: 197 return '' 198 199 class IFrame(object): 200 """ 201 Generic class to embed an iframe in an IPython notebook 202 """ 203 204 iframe = """ 205 <iframe 206 width="{width}" 207 height="{height}" 208 src="{src}{params}" 209 frameborder="0" 210 allowfullscreen 211 ></iframe> 212 """ 213 214 def __init__(self, src, width, height, **kwargs): 215 self.src = src 216 self.width = width 217 self.height = height 218 self.params = kwargs 219 220 def _repr_html_(self): 221 """return the embed iframe""" 222 if self.params: 223 try: 224 from urllib.parse import urlencode # Py 3 225 except ImportError: 226 from urllib import urlencode 227 params = "?" + urlencode(self.params) 228 else: 229 params = "" 230 return self.iframe.format(src=self.src, 231 width=self.width, 232 height=self.height, 233 params=params) 234 235 class YouTubeVideo(IFrame): 236 """Class for embedding a YouTube Video in an IPython session, based on its video id. 237 238 e.g. 
to embed the video from https://www.youtube.com/watch?v=foo , you would 239 do:: 240 241 vid = YouTubeVideo("foo") 242 display(vid) 243 244 To start from 30 seconds:: 245 246 vid = YouTubeVideo("abc", start=30) 247 display(vid) 248 249 To calculate seconds from time as hours, minutes, seconds use 250 :class:`datetime.timedelta`:: 251 252 start=int(timedelta(hours=1, minutes=46, seconds=40).total_seconds()) 253 254 Other parameters can be provided as documented at 255 https://developers.google.com/youtube/player_parameters#Parameters 256 257 When converting the notebook using nbconvert, a jpeg representation of the video 258 will be inserted in the document. 259 """ 260 261 def __init__(self, id, width=400, height=300, **kwargs): 262 self.id=id 263 src = "https://www.youtube.com/embed/{0}".format(id) 264 super(YouTubeVideo, self).__init__(src, width, height, **kwargs) 265 266 def _repr_jpeg_(self): 267 # Deferred import 268 from urllib.request import urlopen 269 270 try: 271 return urlopen("https://img.youtube.com/vi/{id}/hqdefault.jpg".format(id=self.id)).read() 272 except IOError: 273 return None 274 275 class VimeoVideo(IFrame): 276 """ 277 Class for embedding a Vimeo video in an IPython session, based on its video id. 278 """ 279 280 def __init__(self, id, width=400, height=300, **kwargs): 281 src="https://player.vimeo.com/video/{0}".format(id) 282 super(VimeoVideo, self).__init__(src, width, height, **kwargs) 283 284 class ScribdDocument(IFrame): 285 """ 286 Class for embedding a Scribd document in an IPython session 287 288 Use the start_page params to specify a starting point in the document 289 Use the view_mode params to specify display type one off scroll | slideshow | book 290 291 e.g to Display Wes' foundational paper about PANDAS in book mode from page 3 292 293 ScribdDocument(71048089, width=800, height=400, start_page=3, view_mode="book") 294 """ 295 296 def __init__(self, id, width=400, height=300, **kwargs): 297 src="https://www.scribd.com/embeds/{0}/content".format(id) 298 super(ScribdDocument, self).__init__(src, width, height, **kwargs) 299 300 class FileLink(object): 301 """Class for embedding a local file link in an IPython session, based on path 302 303 e.g. to embed a link that was generated in the IPython notebook as my/data.txt 304 305 you would do:: 306 307 local_file = FileLink("my/data.txt") 308 display(local_file) 309 310 or in the HTML notebook, just:: 311 312 FileLink("my/data.txt") 313 """ 314 315 html_link_str = "<a href='%s' target='_blank'>%s</a>" 316 317 def __init__(self, 318 path, 319 url_prefix='', 320 result_html_prefix='', 321 result_html_suffix='<br>'): 322 """ 323 Parameters 324 ---------- 325 path : str 326 path to the file or directory that should be formatted 327 url_prefix : str 328 prefix to be prepended to all files to form a working link [default: 329 ''] 330 result_html_prefix : str 331 text to append to beginning to link [default: ''] 332 result_html_suffix : str 333 text to append at the end of link [default: '<br>'] 334 """ 335 if isdir(path): 336 raise ValueError("Cannot display a directory using FileLink. " 337 "Use FileLinks to display '%s'." 
% path) 338 self.path = fsdecode(path) 339 self.url_prefix = url_prefix 340 self.result_html_prefix = result_html_prefix 341 self.result_html_suffix = result_html_suffix 342 343 def _format_path(self): 344 fp = ''.join([self.url_prefix, html_escape(self.path)]) 345 return ''.join([self.result_html_prefix, 346 self.html_link_str % \ 347 (fp, html_escape(self.path, quote=False)), 348 self.result_html_suffix]) 349 350 def _repr_html_(self): 351 """return html link to file 352 """ 353 if not exists(self.path): 354 return ("Path (<tt>%s</tt>) doesn't exist. " 355 "It may still be in the process of " 356 "being generated, or you may have the " 357 "incorrect path." % self.path) 358 359 return self._format_path() 360 361 def __repr__(self): 362 """return absolute path to file 363 """ 364 return abspath(self.path) 365 366 class FileLinks(FileLink): 367 """Class for embedding local file links in an IPython session, based on path 368 369 e.g. to embed links to files that were generated in the IPython notebook 370 under ``my/data``, you would do:: 371 372 local_files = FileLinks("my/data") 373 display(local_files) 374 375 or in the HTML notebook, just:: 376 377 FileLinks("my/data") 378 """ 379 def __init__(self, 380 path, 381 url_prefix='', 382 included_suffixes=None, 383 result_html_prefix='', 384 result_html_suffix='<br>', 385 notebook_display_formatter=None, 386 terminal_display_formatter=None, 387 recursive=True): 388 """ 389 See :class:`FileLink` for the ``path``, ``url_prefix``, 390 ``result_html_prefix`` and ``result_html_suffix`` parameters. 391 392 included_suffixes : list 393 Filename suffixes to include when formatting output [default: include 394 all files] 395 396 notebook_display_formatter : function 397 Used to format links for display in the notebook. See discussion of 398 formatter functions below. 399 400 terminal_display_formatter : function 401 Used to format links for display in the terminal. See discussion of 402 formatter functions below. 403 404 Formatter functions must be of the form:: 405 406 f(dirname, fnames, included_suffixes) 407 408 dirname : str 409 The name of a directory 410 fnames : list 411 The files in that directory 412 included_suffixes : list 413 The file suffixes that should be included in the output (passing None 414 meansto include all suffixes in the output in the built-in formatters) 415 recursive : boolean 416 Whether to recurse into subdirectories. Default is True. 417 418 The function should return a list of lines that will be printed in the 419 notebook (if passing notebook_display_formatter) or the terminal (if 420 passing terminal_display_formatter). This function is iterated over for 421 each directory in self.path. Default formatters are in place, can be 422 passed here to support alternative formatting. 423 424 """ 425 if isfile(path): 426 raise ValueError("Cannot display a file using FileLinks. " 427 "Use FileLink to display '%s'." 
% path) 428 self.included_suffixes = included_suffixes 429 # remove trailing slashes for more consistent output formatting 430 path = path.rstrip('/') 431 432 self.path = path 433 self.url_prefix = url_prefix 434 self.result_html_prefix = result_html_prefix 435 self.result_html_suffix = result_html_suffix 436 437 self.notebook_display_formatter = \ 438 notebook_display_formatter or self._get_notebook_display_formatter() 439 self.terminal_display_formatter = \ 440 terminal_display_formatter or self._get_terminal_display_formatter() 441 442 self.recursive = recursive 443 444 def _get_display_formatter(self, 445 dirname_output_format, 446 fname_output_format, 447 fp_format, 448 fp_cleaner=None): 449 """ generate built-in formatter function 450 451 this is used to define both the notebook and terminal built-in 452 formatters as they only differ by some wrapper text for each entry 453 454 dirname_output_format: string to use for formatting directory 455 names, dirname will be substituted for a single "%s" which 456 must appear in this string 457 fname_output_format: string to use for formatting file names, 458 if a single "%s" appears in the string, fname will be substituted 459 if two "%s" appear in the string, the path to fname will be 460 substituted for the first and fname will be substituted for the 461 second 462 fp_format: string to use for formatting filepaths, must contain 463 exactly two "%s" and the dirname will be subsituted for the first 464 and fname will be substituted for the second 465 """ 466 def f(dirname, fnames, included_suffixes=None): 467 result = [] 468 # begin by figuring out which filenames, if any, 469 # are going to be displayed 470 display_fnames = [] 471 for fname in fnames: 472 if (isfile(join(dirname,fname)) and 473 (included_suffixes is None or 474 splitext(fname)[1] in included_suffixes)): 475 display_fnames.append(fname) 476 477 if len(display_fnames) == 0: 478 # if there are no filenames to display, don't print anything 479 # (not even the directory name) 480 pass 481 else: 482 # otherwise print the formatted directory name followed by 483 # the formatted filenames 484 dirname_output_line = dirname_output_format % dirname 485 result.append(dirname_output_line) 486 for fname in display_fnames: 487 fp = fp_format % (dirname,fname) 488 if fp_cleaner is not None: 489 fp = fp_cleaner(fp) 490 try: 491 # output can include both a filepath and a filename... 492 fname_output_line = fname_output_format % (fp, fname) 493 except TypeError: 494 # ... 
or just a single filepath 495 fname_output_line = fname_output_format % fname 496 result.append(fname_output_line) 497 return result 498 return f 499 500 def _get_notebook_display_formatter(self, 501 spacer="&nbsp;&nbsp;"): 502 """ generate function to use for notebook formatting 503 """ 504 dirname_output_format = \ 505 self.result_html_prefix + "%s/" + self.result_html_suffix 506 fname_output_format = \ 507 self.result_html_prefix + spacer + self.html_link_str + self.result_html_suffix 508 fp_format = self.url_prefix + '%s/%s' 509 if sep == "\\": 510 # Working on a platform where the path separator is "\", so 511 # must convert these to "/" for generating a URI 512 def fp_cleaner(fp): 513 # Replace all occurrences of backslash ("\") with a forward 514 # slash ("/") - this is necessary on windows when a path is 515 # provided as input, but we must link to a URI 516 return fp.replace('\\','/') 517 else: 518 fp_cleaner = None 519 520 return self._get_display_formatter(dirname_output_format, 521 fname_output_format, 522 fp_format, 523 fp_cleaner) 524 525 def _get_terminal_display_formatter(self, 526 spacer=" "): 527 """ generate function to use for terminal formatting 528 """ 529 dirname_output_format = "%s/" 530 fname_output_format = spacer + "%s" 531 fp_format = '%s/%s' 532 533 return self._get_display_formatter(dirname_output_format, 534 fname_output_format, 535 fp_format) 536 537 def _format_path(self): 538 result_lines = [] 539 if self.recursive: 540 walked_dir = list(walk(self.path)) 541 else: 542 walked_dir = [next(walk(self.path))] 543 walked_dir.sort() 544 for dirname, subdirs, fnames in walked_dir: 545 result_lines += self.notebook_display_formatter(dirname, fnames, self.included_suffixes) 546 return '\n'.join(result_lines) 547 548 def __repr__(self): 549 """return newline-separated absolute paths 550 """ 551 result_lines = [] 552 if self.recursive: 553 walked_dir = list(walk(self.path)) 554 else: 555 walked_dir = [next(walk(self.path))] 556 walked_dir.sort() 557 for dirname, subdirs, fnames in walked_dir: 558 result_lines += self.terminal_display_formatter(dirname, fnames, self.included_suffixes) 559 return '\n'.join(result_lines) 560 561 562 class Code(TextDisplayObject): 563 """Display syntax-highlighted source code. 564 565 This uses Pygments to highlight the code for HTML and Latex output. 566 567 Parameters 568 ---------- 569 data : str 570 The code as a string 571 url : str 572 A URL to fetch the code from 573 filename : str 574 A local filename to load the code from 575 language : str 576 The short name of a Pygments lexer to use for highlighting. 577 If not specified, it will guess the lexer based on the filename 578 or the code. 
Available lexers: http://pygments.org/docs/lexers/ 579 """ 580 def __init__(self, data=None, url=None, filename=None, language=None): 581 self.language = language 582 super().__init__(data=data, url=url, filename=filename) 583 584 def _get_lexer(self): 585 if self.language: 586 from pygments.lexers import get_lexer_by_name 587 return get_lexer_by_name(self.language) 588 elif self.filename: 589 from pygments.lexers import get_lexer_for_filename 590 return get_lexer_for_filename(self.filename) 591 else: 592 from pygments.lexers import guess_lexer 593 return guess_lexer(self.data) 594 595 def __repr__(self): 596 return self.data 597 598 def _repr_html_(self): 599 from pygments import highlight 600 from pygments.formatters import HtmlFormatter 601 fmt = HtmlFormatter() 602 style = '<style>{}</style>'.format(fmt.get_style_defs('.output_html')) 603 return style + highlight(self.data, self._get_lexer(), fmt) 604 605 def _repr_latex_(self): 606 from pygments import highlight 607 from pygments.formatters import LatexFormatter 608 return highlight(self.data, self._get_lexer(), LatexFormatter()) 609 [end of IPython/lib/display.py] [start of IPython/sphinxext/custom_doctests.py] 1 """ 2 Handlers for IPythonDirective's @doctest pseudo-decorator. 3 4 The Sphinx extension that provides support for embedded IPython code provides 5 a pseudo-decorator @doctest, which treats the input/output block as a 6 doctest, raising a RuntimeError during doc generation if the actual output 7 (after running the input) does not match the expected output. 8 9 An example usage is: 10 11 .. code-block:: rst 12 13 .. ipython:: 14 15 In [1]: x = 1 16 17 @doctest 18 In [2]: x + 2 19 Out[3]: 3 20 21 One can also provide arguments to the decorator. The first argument should be 22 the name of a custom handler. The specification of any other arguments is 23 determined by the handler. For example, 24 25 .. code-block:: rst 26 27 .. ipython:: 28 29 @doctest float 30 In [154]: 0.1 + 0.2 31 Out[154]: 0.3 32 33 allows the actual output ``0.30000000000000004`` to match the expected output 34 due to a comparison with `np.allclose`. 35 36 This module contains handlers for the @doctest pseudo-decorator. Handlers 37 should have the following function signature:: 38 39 handler(sphinx_shell, args, input_lines, found, submitted) 40 41 where `sphinx_shell` is the embedded Sphinx shell, `args` contains the list 42 of arguments that follow: '@doctest handler_name', `input_lines` contains 43 a list of the lines relevant to the current doctest, `found` is a string 44 containing the output from the IPython shell, and `submitted` is a string 45 containing the expected output from the IPython shell. 46 47 Handlers must be registered in the `doctests` dict at the end of this module. 48 49 """ 50 51 def str_to_array(s): 52 """ 53 Simplistic converter of strings from repr to float NumPy arrays. 54 55 If the repr representation has ellipsis in it, then this will fail. 56 57 Parameters 58 ---------- 59 s : str 60 The repr version of a NumPy array. 61 62 Examples 63 -------- 64 >>> s = "array([ 0.3, inf, nan])" 65 >>> a = str_to_array(s) 66 67 """ 68 import numpy as np 69 70 # Need to make sure eval() knows about inf and nan. 71 # This also assumes default printoptions for NumPy. 72 from numpy import inf, nan 73 74 if s.startswith(u'array'): 75 # Remove array( and ) 76 s = s[6:-1] 77 78 if s.startswith(u'['): 79 a = np.array(eval(s), dtype=float) 80 else: 81 # Assume its a regular float. Force 1D so we can index into it. 
82 a = np.atleast_1d(float(s)) 83 return a 84 85 def float_doctest(sphinx_shell, args, input_lines, found, submitted): 86 """ 87 Doctest which allow the submitted output to vary slightly from the input. 88 89 Here is how it might appear in an rst file: 90 91 .. code-block:: rst 92 93 .. ipython:: 94 95 @doctest float 96 In [1]: 0.1 + 0.2 97 Out[1]: 0.3 98 99 """ 100 import numpy as np 101 102 if len(args) == 2: 103 rtol = 1e-05 104 atol = 1e-08 105 else: 106 # Both must be specified if any are specified. 107 try: 108 rtol = float(args[2]) 109 atol = float(args[3]) 110 except IndexError: 111 e = ("Both `rtol` and `atol` must be specified " 112 "if either are specified: {0}".format(args)) 113 raise IndexError(e) 114 115 try: 116 submitted = str_to_array(submitted) 117 found = str_to_array(found) 118 except: 119 # For example, if the array is huge and there are ellipsis in it. 120 error = True 121 else: 122 found_isnan = np.isnan(found) 123 submitted_isnan = np.isnan(submitted) 124 error = not np.allclose(found_isnan, submitted_isnan) 125 error |= not np.allclose(found[~found_isnan], 126 submitted[~submitted_isnan], 127 rtol=rtol, atol=atol) 128 129 TAB = ' ' * 4 130 directive = sphinx_shell.directive 131 if directive is None: 132 source = 'Unavailable' 133 content = 'Unavailable' 134 else: 135 source = directive.state.document.current_source 136 # Add tabs and make into a single string. 137 content = '\n'.join([TAB + line for line in directive.content]) 138 139 if error: 140 141 e = ('doctest float comparison failure\n\n' 142 'Document source: {0}\n\n' 143 'Raw content: \n{1}\n\n' 144 'On input line(s):\n{TAB}{2}\n\n' 145 'we found output:\n{TAB}{3}\n\n' 146 'instead of the expected:\n{TAB}{4}\n\n') 147 e = e.format(source, content, '\n'.join(input_lines), repr(found), 148 repr(submitted), TAB=TAB) 149 raise RuntimeError(e) 150 151 # dict of allowable doctest handlers. The key represents the first argument 152 # that must be given to @doctest in order to activate the handler. 153 doctests = { 154 'float': float_doctest, 155 } 156 [end of IPython/sphinxext/custom_doctests.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
ipython/ipython
f0f6cd8b8c9f74ea8b2c5e37b6132212ce661c28
Volume normalization in IPython.display.Audio should be optional

I am manipulating audio using numpy with IPython Notebook and want to use IPython.display.Audio for listening to the numpy arrays. Unfortunately, auto-normalization tampers with the results. Example:

```
# Generate a sound
import IPython
import numpy as np
framerate = 44100
t = np.linspace(0,5,framerate*5)
tone = np.sin(2*np.pi*220*t)
antitone = np.sin(2*np.pi*220*t + np.pi)
IPython.display.Audio(tone+antitone, rate=framerate)
```

Adding a sine wave to itself shifted by 180 degrees should give total silence. Instead, auto-normalization amplifies the floating point errors. The problem is in the IPython.lib.display.Audio _make_wav method, which always normalizes the numpy array (see the 'scaled' variable). I think that we should have a normalize keyword argument, so that Audio can be used for audio analysis. Something like:

```
Audio(tone+antitone, rate=framerate, normalize=False)
```
That seems reasonable. Do you want to make a PR?

Sure, will make a PR in the near future. I am also interested in making a couple of enhancements to Audio:

1. Consider passing in a `maxvalue` value rather than a `normalize` flag. That way, you can still normalize, but to a different value. In the antitone example above, maxvalue would be 1.
2. Adding a `save` method to Audio.

I have code that does the first. The second can be accomplished with:

``` python
def save_wav(data, rate, filename):
    audio = Audio(data, rate=rate)
    with open(filename, "wb") as fp:
        fp.write(audio.data)
```

but would be handy to have as a defined method on Audio:

``` python
def save(self, filename):
    with open(filename, "wb") as fp:
        fp.write(self.data)
```

@mmcdan @takluyver : Any progress on this? The Audio object is distorting my output, even adding a DC offset. Have been searching for a while for a bug in my code, before finding this issue. Plotting the waveform and listening to it via Audio is totally different -- you can directly compare the difference between downloading from Audio, vs. saving via librosa. How can we get Audio to just leave the signal unaltered?

...Ok, made a PR for this.

Woops, missed a spot. New PR.

According to the most recent 'read the docs' there is no 'norm' parameter. (https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#module-IPython.display) I would really like to have this feature.

For anyone looking for a simple workaround (until #11161 is done), to prevent IPython from boosting the volume of quiet clips, you can simply set one sample to 1 (if you're working with floating-point audio), e.g.

```python
import numpy as np
import IPython

# Example audio - A440 for one second.
some_audio = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100)) / 8
# Hack to disable IPython audio normalization.
some_audio[-1] = 1

IPython.display.Audio(some_audio, rate=44100)
```

This adds a noticeable click at the end of the clip but at least it doesn't normalize the amplitude.
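To make the complaint above concrete, here is a minimal sketch of what the current scaling does to the near-silent signal from the issue. It assumes only numpy; the variable names are mine, and the scaling expression is copied from the `_make_wav` line that the patch below removes.

```python
import numpy as np

# Scaling currently applied by Audio._make_wav:
#   np.int16(data / np.max(np.abs(data)) * 32767)
framerate = 44100
t = np.linspace(0, 5, framerate * 5)
data = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 220 * t + np.pi)

peak = np.max(np.abs(data))              # tiny float-rounding residue, not exactly 0
scaled = np.int16(data / peak * 32767)   # dividing by a near-zero peak is a huge gain

print(peak)                     # something on the order of 1e-12
print(np.max(np.abs(scaled)))   # 32767 -- the near-silence is rescaled to full amplitude
```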
2019-03-15T16:07:13Z
<patch> diff --git a/IPython/lib/display.py b/IPython/lib/display.py --- a/IPython/lib/display.py +++ b/IPython/lib/display.py @@ -54,6 +54,12 @@ class Audio(DisplayObject): autoplay : bool Set to True if the audio should immediately start playing. Default is `False`. + normalize : bool + Whether audio should be normalized (rescaled) to the maximum possible + range. Default is `True`. When set to `False`, `data` must be between + -1 and 1 (inclusive), otherwise an error is raised. + Applies only when `data` is a list or array of samples; other types of + audio are never normalized. Examples -------- @@ -83,9 +89,9 @@ class Audio(DisplayObject): """ _read_flags = 'rb' - def __init__(self, data=None, filename=None, url=None, embed=None, rate=None, autoplay=False): + def __init__(self, data=None, filename=None, url=None, embed=None, rate=None, autoplay=False, normalize=True): if filename is None and url is None and data is None: - raise ValueError("No image data found. Expecting filename, url, or data.") + raise ValueError("No audio data found. Expecting filename, url, or data.") if embed is False and url is None: raise ValueError("No url found. Expecting url when embed=False") @@ -97,7 +103,9 @@ def __init__(self, data=None, filename=None, url=None, embed=None, rate=None, au super(Audio, self).__init__(data=data, url=url, filename=filename) if self.data is not None and not isinstance(self.data, bytes): - self.data = self._make_wav(data,rate) + if rate is None: + raise ValueError("rate must be specified when data is a numpy array or list of audio samples.") + self.data = Audio._make_wav(data, rate, normalize) def reload(self): """Reload the raw data from file or URL.""" @@ -112,41 +120,17 @@ def reload(self): else: self.mimetype = "audio/wav" - def _make_wav(self, data, rate): + @staticmethod + def _make_wav(data, rate, normalize): """ Transform a numpy array to a PCM bytestring """ import struct from io import BytesIO import wave try: - import numpy as np - - data = np.array(data, dtype=float) - if len(data.shape) == 1: - nchan = 1 - elif len(data.shape) == 2: - # In wave files,channels are interleaved. E.g., - # "L1R1L2R2..." for stereo. See - # http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308(v=vs.85).aspx - # for channel ordering - nchan = data.shape[0] - data = data.T.ravel() - else: - raise ValueError('Array audio input must be a 1D or 2D array') - scaled = np.int16(data/np.max(np.abs(data))*32767).tolist() + scaled, nchan = Audio._validate_and_normalize_with_numpy(data, normalize) except ImportError: - # check that it is a "1D" list - idata = iter(data) # fails if not an iterable - try: - iter(idata.next()) - raise TypeError('Only lists of mono audio are ' - 'supported if numpy is not installed') - except TypeError: - # this means it's not a nested list, which is what we want - pass - maxabsvalue = float(max([abs(x) for x in data])) - scaled = [int(x/maxabsvalue*32767) for x in data] - nchan = 1 + scaled, nchan = Audio._validate_and_normalize_without_numpy(data, normalize) fp = BytesIO() waveobj = wave.open(fp,mode='wb') @@ -160,6 +144,48 @@ def _make_wav(self, data, rate): return val + @staticmethod + def _validate_and_normalize_with_numpy(data, normalize): + import numpy as np + + data = np.array(data, dtype=float) + if len(data.shape) == 1: + nchan = 1 + elif len(data.shape) == 2: + # In wave files,channels are interleaved. E.g., + # "L1R1L2R2..." for stereo. 
See + # http://msdn.microsoft.com/en-us/library/windows/hardware/dn653308(v=vs.85).aspx + # for channel ordering + nchan = data.shape[0] + data = data.T.ravel() + else: + raise ValueError('Array audio input must be a 1D or 2D array') + + max_abs_value = np.max(np.abs(data)) + normalization_factor = Audio._get_normalization_factor(max_abs_value, normalize) + scaled = np.int16(data / normalization_factor * 32767).tolist() + return scaled, nchan + + + @staticmethod + def _validate_and_normalize_without_numpy(data, normalize): + try: + max_abs_value = float(max([abs(x) for x in data])) + except TypeError: + raise TypeError('Only lists of mono audio are ' + 'supported if numpy is not installed') + + normalization_factor = Audio._get_normalization_factor(max_abs_value, normalize) + scaled = [int(x / normalization_factor * 32767) for x in data] + nchan = 1 + return scaled, nchan + + @staticmethod + def _get_normalization_factor(max_abs_value, normalize): + if not normalize and max_abs_value > 1: + raise ValueError('Audio data must be between -1 and 1 when normalize=False.') + return max_abs_value if normalize else 1 + def _data_and_metadata(self): """shortcut for returning metadata with url information, if defined""" md = {} </patch>
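With the diff above applied, the new keyword would be used roughly like this. This is a sketch based only on the signature and error messages introduced in the patch, not an official example.

```python
import numpy as np
from IPython.display import Audio

framerate = 44100
t = np.linspace(0, 5, framerate * 5)
quiet = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 220 * t + np.pi)

Audio(quiet, rate=framerate)                   # default normalize=True: old behaviour
Audio(quiet, rate=framerate, normalize=False)  # samples passed through unscaled

# Per the added validation:
# - data outside [-1, 1] with normalize=False raises
#   ValueError("Audio data must be between -1 and 1 when normalize=False.")
# - array/list data without a rate raises
#   ValueError("rate must be specified when data is a numpy array or list of audio samples.")
```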
[]
[]
pandas-dev__pandas-6553
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Allow timestamp option for StataWriter.write_file() This is a combined feature request & minor bug notice. Feature Request: I would like to be able to write code that produces, byte-for-byte, reproducible outputs. To that end I want to write Stata dta files with a blank (or constant) timestamp. It would be nice to allow write_file() to accept a timestamp (or some option to zero it out). Bug: In an attempt to do this myself, I made my own version of StataWriter.write_file() where the only difference is I call (underscore)write_header() internal function with a constant timestamp. But that produces the following bug. ``` python import pandas as pd import numpy as np from pandas.io.stata import StataWriter import datetime df = pd.DataFrame(np.random.randn(6,4),index=list('abcdef'),columns=list('ABCD')) writer = StataWriter('ouput.dta', df) fktime_stamp = datetime.datetime.now() writer._write_header(time_stamp=fktime_stamp) # rest of write_file() ``` produces the following error ``` File "C:\Program Files\Python27\lib\site-packages\pandas\io\stata.py", line 1057, in _write_header elif not isinstance(time_stamp, datetime): TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types ``` My system details are. ``` >>> pd.show_versions() INSTALLED VERSIONS ------------------ commit: None python: 2.7.2.final.0 python-bits: 64 OS: Windows OS-release: 7 machine: AMD64 processor: AMD64 Family 16 Model 6 Stepping 3, AuthenticAMD byteorder: little LC_ALL: None LANG: None pandas: 0.13.1 Cython: None numpy: 1.8.1 scipy: None statsmodels: None IPython: None sphinx: None patsy: None scikits.timeseries: None dateutil: 2.2 pytz: None bottleneck: None tables: None numexpr: None matplotlib: None openpyxl: None xlrd: None xlwt: None xlsxwriter: None sqlalchemy: None lxml: None bs4: None html5lib: None bq: None apiclient: None ``` </issue> <code> [start of README.md] 1 # pandas: powerful Python data analysis toolkit 2 3 ![Travis-CI Build Status](https://travis-ci.org/pydata/pandas.png) 4 5 [![Scatter-CI Status page](http://scatterci.github.io/scatterci48.jpg)](http://scatterci.github.io/pydata/pandas) 6 7 ## What is it 8 9 **pandas** is a Python package providing fast, flexible, and expressive data 10 structures designed to make working with "relational" or "labeled" data both 11 easy and intuitive. It aims to be the fundamental high-level building block for 12 doing practical, **real world** data analysis in Python. Additionally, it has 13 the broader goal of becoming **the most powerful and flexible open source data 14 analysis / manipulation tool available in any language**. It is already well on 15 its way toward this goal. 16 17 ## Main Features 18 Here are just a few of the things that pandas does well: 19 20 - Easy handling of [**missing data**][missing-data] (represented as 21 `NaN`) in floating point as well as non-floating point data 22 - Size mutability: columns can be [**inserted and 23 deleted**][insertion-deletion] from DataFrame and higher dimensional 24 objects 25 - Automatic and explicit [**data alignment**][alignment]: objects can 26 be explicitly aligned to a set of labels, or the user can simply 27 ignore the labels and let `Series`, `DataFrame`, etc. 
automatically 28 align the data for you in computations 29 - Powerful, flexible [**group by**][groupby] functionality to perform 30 split-apply-combine operations on data sets, for both aggregating 31 and transforming data 32 - Make it [**easy to convert**][conversion] ragged, 33 differently-indexed data in other Python and NumPy data structures 34 into DataFrame objects 35 - Intelligent label-based [**slicing**][slicing], [**fancy 36 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 37 large data sets 38 - Intuitive [**merging**][merging] and [**joining**][joining] data 39 sets 40 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 41 data sets 42 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 43 labels per tick) 44 - Robust IO tools for loading data from [**flat files**][flat-files] 45 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 46 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 47 - [**Time series**][timeseries]-specific functionality: date range 48 generation and frequency conversion, moving window statistics, 49 moving window linear regressions, date shifting and lagging, etc. 50 51 52 [missing-data]: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 53 [insertion-deletion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 54 [alignment]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 55 [groupby]: http://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 56 [conversion]: http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 57 [slicing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 58 [fancy-indexing]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 59 [subsetting]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 60 [merging]: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 61 [joining]: http://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 62 [reshape]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 63 [pivot-table]: http://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 64 [mi]: http://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 65 [flat-files]: http://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 66 [excel]: http://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 67 [db]: http://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 68 [hdfstore]: http://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 69 [timeseries]: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 70 71 ## Where to get it 72 The source code is currently hosted on GitHub at: 73 http://github.com/pydata/pandas 74 75 Binary installers for the latest released version are available at the Python 76 package index 77 78 http://pypi.python.org/pypi/pandas/ 79 80 And via `easy_install`: 81 82 ```sh 83 easy_install pandas 84 ``` 85 86 or `pip`: 87 88 ```sh 89 pip install pandas 90 ``` 91 92 ## Dependencies 93 - [NumPy](http://www.numpy.org): 1.6.1 or higher 94 - [python-dateutil](http://labix.org/python-dateutil): 1.5 or higher 95 - [pytz](http://pytz.sourceforge.net) 96 - Needed for time zone 
support with ``pandas.date_range`` 97 98 ### Highly Recommended Dependencies 99 - [numexpr](http://code.google.com/p/numexpr/) 100 - Needed to accelerate some expression evaluation operations 101 - Required by PyTables 102 - [bottleneck](http://berkeleyanalytics.com/bottleneck) 103 - Needed to accelerate certain numerical operations 104 105 ### Optional dependencies 106 - [Cython](http://www.cython.org): Only necessary to build development version. Version 0.17.1 or higher. 107 - [SciPy](http://www.scipy.org): miscellaneous statistical functions 108 - [PyTables](http://www.pytables.org): necessary for HDF5-based storage 109 - [SQLAlchemy](http://www.sqlalchemy.org): for SQL database support. Version 0.8.1 or higher recommended. 110 - [matplotlib](http://matplotlib.sourceforge.net/): for plotting 111 - [statsmodels](http://statsmodels.sourceforge.net/) 112 - Needed for parts of `pandas.stats` 113 - For Excel I/O: 114 - [xlrd/xlwt](http://www.python-excel.org/) 115 - Excel reading (xlrd) and writing (xlwt) 116 - [openpyxl](http://packages.python.org/openpyxl/) 117 - openpyxl version 1.6.1 or higher, for writing .xlsx files 118 - xlrd >= 0.9.0 119 - [XlsxWriter](https://pypi.python.org/pypi/XlsxWriter) 120 - Alternative Excel writer. 121 - [Google bq Command Line Tool](https://developers.google.com/bigquery/bq-command-line-tool/) 122 - Needed for `pandas.io.gbq` 123 - [boto](https://pypi.python.org/pypi/boto): necessary for Amazon S3 access. 124 - One of the following combinations of libraries is needed to use the 125 top-level [`pandas.read_html`][read-html-docs] function: 126 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] (Any 127 recent version of [html5lib][html5lib] is okay.) 128 - [BeautifulSoup4][BeautifulSoup4] and [lxml][lxml] 129 - [BeautifulSoup4][BeautifulSoup4] and [html5lib][html5lib] and [lxml][lxml] 130 - Only [lxml][lxml], although see [HTML reading gotchas][html-gotchas] 131 for reasons as to why you should probably **not** take this approach. 132 133 #### Notes about HTML parsing libraries 134 - If you install [BeautifulSoup4][BeautifulSoup4] you must install 135 either [lxml][lxml] or [html5lib][html5lib] or both. 136 `pandas.read_html` will **not** work with *only* `BeautifulSoup4` 137 installed. 138 - You are strongly encouraged to read [HTML reading 139 gotchas][html-gotchas]. It explains issues surrounding the 140 installation and usage of the above three libraries. 141 - You may need to install an older version of 142 [BeautifulSoup4][BeautifulSoup4]: 143 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 144 32-bit Ubuntu/Debian 145 - Additionally, if you're using [Anaconda][Anaconda] you should 146 definitely read [the gotchas about HTML parsing][html-gotchas] 147 libraries 148 - If you're on a system with `apt-get` you can do 149 150 ```sh 151 sudo apt-get build-dep python-lxml 152 ``` 153 154 to get the necessary dependencies for installation of [lxml][lxml]. 155 This will prevent further headaches down the line. 
156 157 [html5lib]: https://github.com/html5lib/html5lib-python "html5lib" 158 [BeautifulSoup4]: http://www.crummy.com/software/BeautifulSoup "BeautifulSoup4" 159 [lxml]: http://lxml.de 160 [Anaconda]: https://store.continuum.io/cshop/anaconda 161 [NumPy]: http://numpy.scipy.org/ 162 [html-gotchas]: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#html-table-parsing 163 [read-html-docs]: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.html.read_html.html#pandas.io.html.read_html 164 165 ## Installation from sources 166 To install pandas from source you need Cython in addition to the normal 167 dependencies above. Cython can be installed from pypi: 168 169 ```sh 170 pip install cython 171 ``` 172 173 In the `pandas` directory (same one where you found this file after 174 cloning the git repo), execute: 175 176 ```sh 177 python setup.py install 178 ``` 179 180 or for installing in [development mode](http://www.pip-installer.org/en/latest/usage.html): 181 182 ```sh 183 python setup.py develop 184 ``` 185 186 Alternatively, you can use `pip` if you want all the dependencies pulled 187 in automatically (the `-e` option is for installing it in [development 188 mode](http://www.pip-installer.org/en/latest/usage.html)): 189 190 ```sh 191 pip install -e . 192 ``` 193 194 On Windows, you will need to install MinGW and execute: 195 196 ```sh 197 python setup.py build --compiler=mingw32 198 python setup.py install 199 ``` 200 201 See http://pandas.pydata.org/ for more information. 202 203 ## License 204 BSD 205 206 ## Documentation 207 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 208 209 The Sphinx documentation should provide a good starting point for learning how 210 to use the library. Expect the docs to continue to expand as time goes on. 211 212 ## Background 213 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 214 has been under active development since then. 215 216 ## Discussion and Development 217 Since pandas development is related to a number of other scientific 218 Python projects, questions are welcome on the scipy-user mailing 219 list. Specialized discussions or design issues should take place on 220 the pystatsmodels mailing list / Google group, where 221 ``scikits.statsmodels`` and other libraries will also be discussed: 222 223 http://groups.google.com/group/pystatsmodels 224 [end of README.md] [start of pandas/io/excel.py] 1 """ 2 Module parse to/from Excel 3 """ 4 5 #---------------------------------------------------------------------- 6 # ExcelFile class 7 import os 8 import datetime 9 import abc 10 import numpy as np 11 12 from pandas.io.parsers import TextParser 13 from pandas.tseries.period import Period 14 from pandas import json 15 from pandas.compat import map, zip, reduce, range, lrange, u, add_metaclass 16 from pandas.core import config 17 from pandas.core.common import pprint_thing 18 import pandas.compat as compat 19 import pandas.core.common as com 20 from warnings import warn 21 22 __all__ = ["read_excel", "ExcelWriter", "ExcelFile"] 23 24 _writer_extensions = ["xlsx", "xls", "xlsm"] 25 _writers = {} 26 27 28 def register_writer(klass): 29 """Adds engine to the excel writer registry. You must use this method to 30 integrate with ``to_excel``. 
Also adds config options for any new 31 ``supported_extensions`` defined on the writer.""" 32 if not compat.callable(klass): 33 raise ValueError("Can only register callables as engines") 34 engine_name = klass.engine 35 _writers[engine_name] = klass 36 for ext in klass.supported_extensions: 37 if ext.startswith('.'): 38 ext = ext[1:] 39 if ext not in _writer_extensions: 40 config.register_option("io.excel.%s.writer" % ext, 41 engine_name, validator=str) 42 _writer_extensions.append(ext) 43 44 45 def get_writer(engine_name): 46 try: 47 return _writers[engine_name] 48 except KeyError: 49 raise ValueError("No Excel writer '%s'" % engine_name) 50 51 52 def read_excel(io, sheetname, **kwds): 53 """Read an Excel table into a pandas DataFrame 54 55 Parameters 56 ---------- 57 io : string, file-like object or xlrd workbook 58 If a string, expected to be a path to xls or xlsx file 59 sheetname : string 60 Name of Excel sheet 61 header : int, default 0 62 Row to use for the column labels of the parsed DataFrame 63 skiprows : list-like 64 Rows to skip at the beginning (0-indexed) 65 skip_footer : int, default 0 66 Rows at the end to skip (0-indexed) 67 index_col : int, default None 68 Column to use as the row labels of the DataFrame. Pass None if 69 there is no such column 70 parse_cols : int or list, default None 71 * If None then parse all columns, 72 * If int then indicates last column to be parsed 73 * If list of ints then indicates list of column numbers to be parsed 74 * If string then indicates comma separated list of column names and 75 column ranges (e.g. "A:E" or "A,C,E:F") 76 na_values : list-like, default None 77 List of additional strings to recognize as NA/NaN 78 keep_default_na : bool, default True 79 If na_values are specified and keep_default_na is False the default NaN 80 values are overridden, otherwise they're appended to 81 verbose : boolean, default False 82 Indicate number of NA values placed in non-numeric columns 83 engine: string, default None 84 If io is not a buffer or path, this must be set to identify io. 85 Acceptable values are None or xlrd 86 convert_float : boolean, default True 87 convert integral floats to int (i.e., 1.0 --> 1). If False, all numeric 88 data will be read in as floats: Excel stores all numbers as floats 89 internally 90 has_index_names : boolean, default False 91 True if the cols defined in index_col have an index name and are 92 not in the header. Index name will be placed on a separate line below 93 the header. 94 95 Returns 96 ------- 97 parsed : DataFrame 98 DataFrame from the passed in Excel file 99 """ 100 if 'kind' in kwds: 101 kwds.pop('kind') 102 warn("kind keyword is no longer supported in read_excel and may be " 103 "removed in a future version", FutureWarning) 104 105 engine = kwds.pop('engine', None) 106 107 return ExcelFile(io, engine=engine).parse(sheetname=sheetname, **kwds) 108 109 110 class ExcelFile(object): 111 """ 112 Class for parsing tabular excel sheets into DataFrame objects. 113 Uses xlrd. See ExcelFile.parse for more documentation 114 115 Parameters 116 ---------- 117 io : string, file-like object or xlrd workbook 118 If a string, expected to be a path to xls or xlsx file 119 engine: string, default None 120 If io is not a buffer or path, this must be set to identify io. 
121 Acceptable values are None or xlrd 122 """ 123 def __init__(self, io, **kwds): 124 125 import xlrd # throw an ImportError if we need to 126 127 ver = tuple(map(int, xlrd.__VERSION__.split(".")[:2])) 128 if ver < (0, 9): # pragma: no cover 129 raise ImportError("pandas requires xlrd >= 0.9.0 for excel " 130 "support, current version " + xlrd.__VERSION__) 131 132 self.io = io 133 134 engine = kwds.pop('engine', None) 135 136 if engine is not None and engine != 'xlrd': 137 raise ValueError("Unknown engine: %s" % engine) 138 139 if isinstance(io, compat.string_types): 140 self.book = xlrd.open_workbook(io) 141 elif engine == "xlrd" and isinstance(io, xlrd.Book): 142 self.book = io 143 elif hasattr(io, "read"): 144 data = io.read() 145 self.book = xlrd.open_workbook(file_contents=data) 146 else: 147 raise ValueError('Must explicitly set engine if not passing in' 148 ' buffer or path for io.') 149 150 def parse(self, sheetname, header=0, skiprows=None, skip_footer=0, 151 index_col=None, parse_cols=None, parse_dates=False, 152 date_parser=None, na_values=None, thousands=None, chunksize=None, 153 convert_float=True, has_index_names=False, **kwds): 154 """Read an Excel table into DataFrame 155 156 Parameters 157 ---------- 158 sheetname : string or integer 159 Name of Excel sheet or the page number of the sheet 160 header : int, default 0 161 Row to use for the column labels of the parsed DataFrame 162 skiprows : list-like 163 Rows to skip at the beginning (0-indexed) 164 skip_footer : int, default 0 165 Rows at the end to skip (0-indexed) 166 index_col : int, default None 167 Column to use as the row labels of the DataFrame. Pass None if 168 there is no such column 169 parse_cols : int or list, default None 170 * If None then parse all columns 171 * If int then indicates last column to be parsed 172 * If list of ints then indicates list of column numbers to be 173 parsed 174 * If string then indicates comma separated list of column names and 175 column ranges (e.g. "A:E" or "A,C,E:F") 176 parse_dates : boolean, default False 177 Parse date Excel values, 178 date_parser : function default None 179 Date parsing function 180 na_values : list-like, default None 181 List of additional strings to recognize as NA/NaN 182 thousands : str, default None 183 Thousands separator 184 chunksize : int, default None 185 Size of file chunk to read for lazy evaluation. 186 convert_float : boolean, default True 187 convert integral floats to int (i.e., 1.0 --> 1). If False, all 188 numeric data will be read in as floats: Excel stores all numbers as 189 floats internally. 190 has_index_names : boolean, default False 191 True if the cols defined in index_col have an index name and are 192 not in the header 193 194 Returns 195 ------- 196 parsed : DataFrame 197 DataFrame parsed from the Excel file 198 """ 199 skipfooter = kwds.pop('skipfooter', None) 200 if skipfooter is not None: 201 skip_footer = skipfooter 202 203 return self._parse_excel(sheetname, header=header, skiprows=skiprows, 204 index_col=index_col, 205 has_index_names=has_index_names, 206 parse_cols=parse_cols, 207 parse_dates=parse_dates, 208 date_parser=date_parser, na_values=na_values, 209 thousands=thousands, chunksize=chunksize, 210 skip_footer=skip_footer, 211 convert_float=convert_float, 212 **kwds) 213 214 def _should_parse(self, i, parse_cols): 215 216 def _range2cols(areas): 217 """ 218 Convert comma separated list of column names and column ranges to a 219 list of 0-based column indexes. 
220 221 >>> _range2cols('A:E') 222 [0, 1, 2, 3, 4] 223 >>> _range2cols('A,C,Z:AB') 224 [0, 2, 25, 26, 27] 225 """ 226 def _excel2num(x): 227 "Convert Excel column name like 'AB' to 0-based column index" 228 return reduce(lambda s, a: s * 26 + ord(a) - ord('A') + 1, 229 x.upper().strip(), 0) - 1 230 231 cols = [] 232 for rng in areas.split(','): 233 if ':' in rng: 234 rng = rng.split(':') 235 cols += lrange(_excel2num(rng[0]), _excel2num(rng[1]) + 1) 236 else: 237 cols.append(_excel2num(rng)) 238 return cols 239 240 if isinstance(parse_cols, int): 241 return i <= parse_cols 242 elif isinstance(parse_cols, compat.string_types): 243 return i in _range2cols(parse_cols) 244 else: 245 return i in parse_cols 246 247 def _parse_excel(self, sheetname, header=0, skiprows=None, skip_footer=0, 248 index_col=None, has_index_names=None, parse_cols=None, 249 parse_dates=False, date_parser=None, na_values=None, 250 thousands=None, chunksize=None, convert_float=True, 251 **kwds): 252 from xlrd import (xldate_as_tuple, XL_CELL_DATE, 253 XL_CELL_ERROR, XL_CELL_BOOLEAN, 254 XL_CELL_NUMBER) 255 256 datemode = self.book.datemode 257 if isinstance(sheetname, compat.string_types): 258 sheet = self.book.sheet_by_name(sheetname) 259 else: # assume an integer if not a string 260 sheet = self.book.sheet_by_index(sheetname) 261 262 data = [] 263 should_parse = {} 264 for i in range(sheet.nrows): 265 row = [] 266 for j, (value, typ) in enumerate(zip(sheet.row_values(i), 267 sheet.row_types(i))): 268 if parse_cols is not None and j not in should_parse: 269 should_parse[j] = self._should_parse(j, parse_cols) 270 271 if parse_cols is None or should_parse[j]: 272 if typ == XL_CELL_DATE: 273 dt = xldate_as_tuple(value, datemode) 274 # how to produce this first case? 275 if dt[0] < datetime.MINYEAR: # pragma: no cover 276 value = datetime.time(*dt[3:]) 277 else: 278 value = datetime.datetime(*dt) 279 elif typ == XL_CELL_ERROR: 280 value = np.nan 281 elif typ == XL_CELL_BOOLEAN: 282 value = bool(value) 283 elif convert_float and typ == XL_CELL_NUMBER: 284 # GH5394 - Excel 'numbers' are always floats 285 # it's a minimal perf hit and less suprising 286 val = int(value) 287 if val == value: 288 value = val 289 290 row.append(value) 291 292 data.append(row) 293 294 if header is not None: 295 data[header] = _trim_excel_header(data[header]) 296 297 parser = TextParser(data, header=header, index_col=index_col, 298 has_index_names=has_index_names, 299 na_values=na_values, 300 thousands=thousands, 301 parse_dates=parse_dates, 302 date_parser=date_parser, 303 skiprows=skiprows, 304 skip_footer=skip_footer, 305 chunksize=chunksize, 306 **kwds) 307 308 return parser.read() 309 310 @property 311 def sheet_names(self): 312 return self.book.sheet_names() 313 314 def close(self): 315 """close io if necessary""" 316 if hasattr(self.io, 'close'): 317 self.io.close() 318 319 def __enter__(self): 320 return self 321 322 def __exit__(self, exc_type, exc_value, traceback): 323 self.close() 324 325 326 def _trim_excel_header(row): 327 # trim header row so auto-index inference works 328 # xlrd uses '' , openpyxl None 329 while len(row) > 0 and (row[0] == '' or row[0] is None): 330 row = row[1:] 331 return row 332 333 334 def _conv_value(val): 335 # Convert numpy types to Python types for the Excel writers. 
336 if com.is_integer(val): 337 val = int(val) 338 elif com.is_float(val): 339 val = float(val) 340 elif com.is_bool(val): 341 val = bool(val) 342 elif isinstance(val, Period): 343 val = "%s" % val 344 345 return val 346 347 348 @add_metaclass(abc.ABCMeta) 349 class ExcelWriter(object): 350 """ 351 Class for writing DataFrame objects into excel sheets, default is to use 352 xlwt for xls, openpyxl for xlsx. See DataFrame.to_excel for typical usage. 353 354 Parameters 355 ---------- 356 path : string 357 Path to xls or xlsx file. 358 engine : string (optional) 359 Engine to use for writing. If None, defaults to 360 ``io.excel.<extension>.writer``. NOTE: can only be passed as a keyword 361 argument. 362 date_format : string, default None 363 Format string for dates written into Excel files (e.g. 'YYYY-MM-DD') 364 datetime_format : string, default None 365 Format string for datetime objects written into Excel files 366 (e.g. 'YYYY-MM-DD HH:MM:SS') 367 """ 368 # Defining an ExcelWriter implementation (see abstract methods for more...) 369 370 # - Mandatory 371 # - ``write_cells(self, cells, sheet_name=None, startrow=0, startcol=0)`` 372 # --> called to write additional DataFrames to disk 373 # - ``supported_extensions`` (tuple of supported extensions), used to 374 # check that engine supports the given extension. 375 # - ``engine`` - string that gives the engine name. Necessary to 376 # instantiate class directly and bypass ``ExcelWriterMeta`` engine 377 # lookup. 378 # - ``save(self)`` --> called to save file to disk 379 # - Mostly mandatory (i.e. should at least exist) 380 # - book, cur_sheet, path 381 382 # - Optional: 383 # - ``__init__(self, path, engine=None, **kwargs)`` --> always called 384 # with path as first argument. 385 386 # You also need to register the class with ``register_writer()``. 387 # Technically, ExcelWriter implementations don't need to subclass 388 # ExcelWriter. 389 def __new__(cls, path, engine=None, **kwargs): 390 # only switch class if generic(ExcelWriter) 391 if cls == ExcelWriter: 392 if engine is None: 393 ext = os.path.splitext(path)[-1][1:] 394 try: 395 engine = config.get_option('io.excel.%s.writer' % ext) 396 except KeyError: 397 error = ValueError("No engine for filetype: '%s'" % ext) 398 raise error 399 cls = get_writer(engine) 400 401 return object.__new__(cls) 402 403 # declare external properties you can count on 404 book = None 405 curr_sheet = None 406 path = None 407 408 @abc.abstractproperty 409 def supported_extensions(self): 410 "extensions that writer engine supports" 411 pass 412 413 @abc.abstractproperty 414 def engine(self): 415 "name of engine" 416 pass 417 418 @abc.abstractmethod 419 def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): 420 """ 421 Write given formated cells into Excel an excel sheet 422 423 Parameters 424 ---------- 425 cells : generator 426 cell of formated data to save to Excel sheet 427 sheet_name : string, default None 428 Name of Excel sheet, if None, then use self.cur_sheet 429 startrow: upper left cell row to dump data frame 430 startcol: upper left cell column to dump data frame 431 """ 432 pass 433 434 @abc.abstractmethod 435 def save(self): 436 """ 437 Save workbook to disk. 
438 """ 439 pass 440 441 def __init__(self, path, engine=None, 442 date_format=None, datetime_format=None, **engine_kwargs): 443 # validate that this engine can handle the extension 444 ext = os.path.splitext(path)[-1] 445 self.check_extension(ext) 446 447 self.path = path 448 self.sheets = {} 449 self.cur_sheet = None 450 451 if date_format is None: 452 self.date_format = 'YYYY-MM-DD' 453 else: 454 self.date_format = date_format 455 if datetime_format is None: 456 self.datetime_format = 'YYYY-MM-DD HH:MM:SS' 457 else: 458 self.datetime_format = datetime_format 459 460 def _get_sheet_name(self, sheet_name): 461 if sheet_name is None: 462 sheet_name = self.cur_sheet 463 if sheet_name is None: # pragma: no cover 464 raise ValueError('Must pass explicit sheet_name or set ' 465 'cur_sheet property') 466 return sheet_name 467 468 @classmethod 469 def check_extension(cls, ext): 470 """checks that path's extension against the Writer's supported 471 extensions. If it isn't supported, raises UnsupportedFiletypeError.""" 472 if ext.startswith('.'): 473 ext = ext[1:] 474 if not any(ext in extension for extension in cls.supported_extensions): 475 msg = (u("Invalid extension for engine '%s': '%s'") % 476 (pprint_thing(cls.engine), pprint_thing(ext))) 477 raise ValueError(msg) 478 else: 479 return True 480 481 # Allow use as a contextmanager 482 def __enter__(self): 483 return self 484 485 def __exit__(self, exc_type, exc_value, traceback): 486 self.close() 487 488 def close(self): 489 """synonym for save, to make it more file-like""" 490 return self.save() 491 492 493 class _OpenpyxlWriter(ExcelWriter): 494 engine = 'openpyxl' 495 supported_extensions = ('.xlsx', '.xlsm') 496 497 def __init__(self, path, engine=None, **engine_kwargs): 498 # Use the openpyxl module as the Excel writer. 499 from openpyxl.workbook import Workbook 500 501 super(_OpenpyxlWriter, self).__init__(path, **engine_kwargs) 502 503 # Create workbook object with default optimized_write=True. 504 self.book = Workbook() 505 # Openpyxl 1.6.1 adds a dummy sheet. We remove it. 506 if self.book.worksheets: 507 self.book.remove_sheet(self.book.worksheets[0]) 508 509 def save(self): 510 """ 511 Save workbook to disk. 512 """ 513 return self.book.save(self.path) 514 515 def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): 516 # Write the frame cells using openpyxl. 
517 from openpyxl.cell import get_column_letter 518 519 sheet_name = self._get_sheet_name(sheet_name) 520 521 if sheet_name in self.sheets: 522 wks = self.sheets[sheet_name] 523 else: 524 wks = self.book.create_sheet() 525 wks.title = sheet_name 526 self.sheets[sheet_name] = wks 527 528 for cell in cells: 529 colletter = get_column_letter(startcol + cell.col + 1) 530 xcell = wks.cell("%s%s" % (colletter, startrow + cell.row + 1)) 531 xcell.value = _conv_value(cell.val) 532 style = None 533 if cell.style: 534 style = self._convert_to_style(cell.style) 535 for field in style.__fields__: 536 xcell.style.__setattr__(field, 537 style.__getattribute__(field)) 538 539 if isinstance(cell.val, datetime.datetime): 540 xcell.style.number_format.format_code = self.datetime_format 541 elif isinstance(cell.val, datetime.date): 542 xcell.style.number_format.format_code = self.date_format 543 544 if cell.mergestart is not None and cell.mergeend is not None: 545 cletterstart = get_column_letter(startcol + cell.col + 1) 546 cletterend = get_column_letter(startcol + cell.mergeend + 1) 547 548 wks.merge_cells('%s%s:%s%s' % (cletterstart, 549 startrow + cell.row + 1, 550 cletterend, 551 startrow + cell.mergestart + 1)) 552 553 # Excel requires that the format of the first cell in a merged 554 # range is repeated in the rest of the merged range. 555 if style: 556 first_row = startrow + cell.row + 1 557 last_row = startrow + cell.mergestart + 1 558 first_col = startcol + cell.col + 1 559 last_col = startcol + cell.mergeend + 1 560 561 for row in range(first_row, last_row + 1): 562 for col in range(first_col, last_col + 1): 563 if row == first_row and col == first_col: 564 # Ignore first cell. It is already handled. 565 continue 566 colletter = get_column_letter(col) 567 xcell = wks.cell("%s%s" % (colletter, row)) 568 for field in style.__fields__: 569 xcell.style.__setattr__( 570 field, style.__getattribute__(field)) 571 572 @classmethod 573 def _convert_to_style(cls, style_dict): 574 """ 575 converts a style_dict to an openpyxl style object 576 Parameters 577 ---------- 578 style_dict: style dictionary to convert 579 """ 580 581 from openpyxl.style import Style 582 xls_style = Style() 583 for key, value in style_dict.items(): 584 for nk, nv in value.items(): 585 if key == "borders": 586 (xls_style.borders.__getattribute__(nk) 587 .__setattr__('border_style', nv)) 588 else: 589 xls_style.__getattribute__(key).__setattr__(nk, nv) 590 591 return xls_style 592 593 register_writer(_OpenpyxlWriter) 594 595 596 class _XlwtWriter(ExcelWriter): 597 engine = 'xlwt' 598 supported_extensions = ('.xls',) 599 600 def __init__(self, path, engine=None, **engine_kwargs): 601 # Use the xlwt module as the Excel writer. 602 import xlwt 603 604 super(_XlwtWriter, self).__init__(path, **engine_kwargs) 605 606 self.book = xlwt.Workbook() 607 self.fm_datetime = xlwt.easyxf(num_format_str=self.datetime_format) 608 self.fm_date = xlwt.easyxf(num_format_str=self.date_format) 609 610 def save(self): 611 """ 612 Save workbook to disk. 613 """ 614 return self.book.save(self.path) 615 616 def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): 617 # Write the frame cells using xlwt. 
618 619 sheet_name = self._get_sheet_name(sheet_name) 620 621 if sheet_name in self.sheets: 622 wks = self.sheets[sheet_name] 623 else: 624 wks = self.book.add_sheet(sheet_name) 625 self.sheets[sheet_name] = wks 626 627 style_dict = {} 628 629 for cell in cells: 630 val = _conv_value(cell.val) 631 632 num_format_str = None 633 if isinstance(cell.val, datetime.datetime): 634 num_format_str = self.datetime_format 635 if isinstance(cell.val, datetime.date): 636 num_format_str = self.date_format 637 638 stylekey = json.dumps(cell.style) 639 if num_format_str: 640 stylekey += num_format_str 641 642 if stylekey in style_dict: 643 style = style_dict[stylekey] 644 else: 645 style = self._convert_to_style(cell.style, num_format_str) 646 style_dict[stylekey] = style 647 648 if cell.mergestart is not None and cell.mergeend is not None: 649 wks.write_merge(startrow + cell.row, 650 startrow + cell.mergestart, 651 startcol + cell.col, 652 startcol + cell.mergeend, 653 val, style) 654 else: 655 wks.write(startrow + cell.row, 656 startcol + cell.col, 657 val, style) 658 659 @classmethod 660 def _style_to_xlwt(cls, item, firstlevel=True, field_sep=',', 661 line_sep=';'): 662 """helper which recursively generate an xlwt easy style string 663 for example: 664 665 hstyle = {"font": {"bold": True}, 666 "border": {"top": "thin", 667 "right": "thin", 668 "bottom": "thin", 669 "left": "thin"}, 670 "align": {"horiz": "center"}} 671 will be converted to 672 font: bold on; \ 673 border: top thin, right thin, bottom thin, left thin; \ 674 align: horiz center; 675 """ 676 if hasattr(item, 'items'): 677 if firstlevel: 678 it = ["%s: %s" % (key, cls._style_to_xlwt(value, False)) 679 for key, value in item.items()] 680 out = "%s " % (line_sep).join(it) 681 return out 682 else: 683 it = ["%s %s" % (key, cls._style_to_xlwt(value, False)) 684 for key, value in item.items()] 685 out = "%s " % (field_sep).join(it) 686 return out 687 else: 688 item = "%s" % item 689 item = item.replace("True", "on") 690 item = item.replace("False", "off") 691 return item 692 693 @classmethod 694 def _convert_to_style(cls, style_dict, num_format_str=None): 695 """ 696 converts a style_dict to an xlwt style object 697 Parameters 698 ---------- 699 style_dict: style dictionary to convert 700 num_format_str: optional number format string 701 """ 702 import xlwt 703 704 if style_dict: 705 xlwt_stylestr = cls._style_to_xlwt(style_dict) 706 style = xlwt.easyxf(xlwt_stylestr, field_sep=',', line_sep=';') 707 else: 708 style = xlwt.XFStyle() 709 if num_format_str is not None: 710 style.num_format_str = num_format_str 711 712 return style 713 714 register_writer(_XlwtWriter) 715 716 717 class _XlsxWriter(ExcelWriter): 718 engine = 'xlsxwriter' 719 supported_extensions = ('.xlsx',) 720 721 def __init__(self, path, engine=None, 722 date_format=None, datetime_format=None, **engine_kwargs): 723 # Use the xlsxwriter module as the Excel writer. 724 import xlsxwriter 725 726 super(_XlsxWriter, self).__init__(path, engine=engine, 727 date_format=date_format, datetime_format=datetime_format, 728 **engine_kwargs) 729 730 self.book = xlsxwriter.Workbook(path, **engine_kwargs) 731 732 def save(self): 733 """ 734 Save workbook to disk. 735 """ 736 return self.book.close() 737 738 def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0): 739 # Write the frame cells using xlsxwriter. 
740 741 sheet_name = self._get_sheet_name(sheet_name) 742 743 if sheet_name in self.sheets: 744 wks = self.sheets[sheet_name] 745 else: 746 wks = self.book.add_worksheet(sheet_name) 747 self.sheets[sheet_name] = wks 748 749 style_dict = {} 750 751 for cell in cells: 752 num_format_str = None 753 if isinstance(cell.val, datetime.datetime): 754 num_format_str = self.datetime_format 755 if isinstance(cell.val, datetime.date): 756 num_format_str = self.date_format 757 758 stylekey = json.dumps(cell.style) 759 if num_format_str: 760 stylekey += num_format_str 761 762 if stylekey in style_dict: 763 style = style_dict[stylekey] 764 else: 765 style = self._convert_to_style(cell.style, num_format_str) 766 style_dict[stylekey] = style 767 768 if cell.mergestart is not None and cell.mergeend is not None: 769 wks.merge_range(startrow + cell.row, 770 startcol + cell.col, 771 startrow + cell.mergestart, 772 startcol + cell.mergeend, 773 cell.val, style) 774 else: 775 wks.write(startrow + cell.row, 776 startcol + cell.col, 777 cell.val, style) 778 779 def _convert_to_style(self, style_dict, num_format_str=None): 780 """ 781 converts a style_dict to an xlsxwriter format object 782 Parameters 783 ---------- 784 style_dict: style dictionary to convert 785 num_format_str: optional number format string 786 """ 787 788 # Create a XlsxWriter format object. 789 xl_format = self.book.add_format() 790 791 if num_format_str is not None: 792 xl_format.set_num_format(num_format_str) 793 794 if style_dict is None: 795 return xl_format 796 797 # Map the cell font to XlsxWriter font properties. 798 if style_dict.get('font'): 799 font = style_dict['font'] 800 if font.get('bold'): 801 xl_format.set_bold() 802 803 # Map the alignment to XlsxWriter alignment properties. 804 alignment = style_dict.get('alignment') 805 if alignment: 806 if (alignment.get('horizontal') 807 and alignment['horizontal'] == 'center'): 808 xl_format.set_align('center') 809 if (alignment.get('vertical') 810 and alignment['vertical'] == 'top'): 811 xl_format.set_align('top') 812 813 # Map the cell borders to XlsxWriter border properties. 
814 if style_dict.get('borders'): 815 xl_format.set_border() 816 817 return xl_format 818 819 register_writer(_XlsxWriter) 820 [end of pandas/io/excel.py] [start of pandas/util/print_versions.py] 1 import os 2 import platform 3 import sys 4 import struct 5 import subprocess 6 import codecs 7 8 9 def get_sys_info(): 10 "Returns system information as a dict" 11 12 blob = [] 13 14 # get full commit hash 15 commit = None 16 if os.path.isdir(".git") and os.path.isdir("pandas"): 17 try: 18 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "), 19 stdout=subprocess.PIPE, stderr=subprocess.PIPE) 20 so, serr = pipe.communicate() 21 except: 22 pass 23 else: 24 if pipe.returncode == 0: 25 commit = so 26 try: 27 commit = so.decode('utf-8') 28 except ValueError: 29 pass 30 commit = commit.strip().strip('"') 31 32 blob.append(('commit', commit)) 33 34 try: 35 sysname, nodename, release, version, machine, processor = platform.uname( 36 ) 37 blob.extend([ 38 ("python", "%d.%d.%d.%s.%s" % sys.version_info[:]), 39 ("python-bits", struct.calcsize("P") * 8), 40 ("OS", "%s" % (sysname)), 41 ("OS-release", "%s" % (release)), 42 # ("Version", "%s" % (version)), 43 ("machine", "%s" % (machine)), 44 ("processor", "%s" % (processor)), 45 ("byteorder", "%s" % sys.byteorder), 46 ("LC_ALL", "%s" % os.environ.get('LC_ALL', "None")), 47 ("LANG", "%s" % os.environ.get('LANG', "None")), 48 49 ]) 50 except: 51 pass 52 53 return blob 54 55 56 def show_versions(as_json=False): 57 import imp 58 sys_info = get_sys_info() 59 60 deps = [ 61 # (MODULE_NAME, f(mod) -> mod version) 62 ("pandas", lambda mod: mod.__version__), 63 ("Cython", lambda mod: mod.__version__), 64 ("numpy", lambda mod: mod.version.version), 65 ("scipy", lambda mod: mod.version.version), 66 ("statsmodels", lambda mod: mod.__version__), 67 ("IPython", lambda mod: mod.__version__), 68 ("sphinx", lambda mod: mod.__version__), 69 ("patsy", lambda mod: mod.__version__), 70 ("scikits.timeseries", lambda mod: mod.__version__), 71 ("dateutil", lambda mod: mod.__version__), 72 ("pytz", lambda mod: mod.VERSION), 73 ("bottleneck", lambda mod: mod.__version__), 74 ("tables", lambda mod: mod.__version__), 75 ("numexpr", lambda mod: mod.__version__), 76 ("matplotlib", lambda mod: mod.__version__), 77 ("openpyxl", lambda mod: mod.__version__), 78 ("xlrd", lambda mod: mod.__VERSION__), 79 ("xlwt", lambda mod: mod.__VERSION__), 80 ("xlsxwriter", lambda mod: mod.__version__), 81 ("lxml", lambda mod: mod.etree.__version__), 82 ("bs4", lambda mod: mod.__version__), 83 ("html5lib", lambda mod: mod.__version__), 84 ("bq", lambda mod: mod._VersionNumber()), 85 ("apiclient", lambda mod: mod.__version__), 86 ("rpy2", lambda mod: mod.__version__), 87 ("sqlalchemy", lambda mod: mod.__version__), 88 ("pymysql", lambda mod: mod.__version__), 89 ("psycopg2", lambda mod: mod.__version__), 90 ] 91 92 deps_blob = list() 93 for (modname, ver_f) in deps: 94 try: 95 try: 96 mod = imp.load_module(modname, *imp.find_module(modname)) 97 except (ImportError): 98 import importlib 99 mod = importlib.import_module(modname) 100 ver = ver_f(mod) 101 deps_blob.append((modname, ver)) 102 except: 103 deps_blob.append((modname, None)) 104 105 if (as_json): 106 # 2.6-safe 107 try: 108 import json 109 except: 110 import simplejson as json 111 112 j = dict(system=dict(sys_info), dependencies=dict(deps_blob)) 113 114 if as_json == True: 115 print(j) 116 else: 117 with codecs.open(as_json, "wb", encoding='utf8') as f: 118 json.dump(j, f, indent=2) 119 120 else: 121 122 print("\nINSTALLED VERSIONS") 123 
print("------------------") 124 125 for k, stat in sys_info: 126 print("%s: %s" % (k, stat)) 127 128 print("") 129 for k, stat in deps_blob: 130 print("%s: %s" % (k, stat)) 131 132 133 def main(): 134 # optparse is 2.6-safe 135 from optparse import OptionParser 136 parser = OptionParser() 137 parser.add_option("-j", "--json", metavar="FILE", nargs=1, 138 help="Save output as JSON into file, pass in '-' to output to stdout") 139 140 (options, args) = parser.parse_args() 141 142 if options.json == "-": 143 options.json = True 144 145 show_versions(as_json=options.json) 146 147 return 0 148 149 if __name__ == "__main__": 150 sys.exit(main()) 151 [end of pandas/util/print_versions.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
170377d892b8154d8fa3067145dc07b3cb5011f9
Allow timestamp option for StataWriter.write_file() This is a combined feature request & minor bug notice. Feature Request: I would like to be able to write code that produces, byte-for-byte, reproducible outputs. To that end I want to write Stata dta files with a blank (or constant) timestamp. It would be nice to allow write_file() to accept a timestamp (or some option to zero it out). Bug: In an attempt to do this myself, I made my own version of StataWriter.write_file() where the only difference is I call (underscore)write_header() internal function with a constant timestamp. But that produces the following bug. ``` python import pandas as pd import numpy as np from pandas.io.stata import StataWriter import datetime df = pd.DataFrame(np.random.randn(6,4),index=list('abcdef'),columns=list('ABCD')) writer = StataWriter('ouput.dta', df) fktime_stamp = datetime.datetime.now() writer._write_header(time_stamp=fktime_stamp) # rest of write_file() ``` produces the following error ``` File "C:\Program Files\Python27\lib\site-packages\pandas\io\stata.py", line 1057, in _write_header elif not isinstance(time_stamp, datetime): TypeError: isinstance() arg 2 must be a class, type, or tuple of classes and types ``` My system details are. ``` >>> pd.show_versions() INSTALLED VERSIONS ------------------ commit: None python: 2.7.2.final.0 python-bits: 64 OS: Windows OS-release: 7 machine: AMD64 processor: AMD64 Family 16 Model 6 Stepping 3, AuthenticAMD byteorder: little LC_ALL: None LANG: None pandas: 0.13.1 Cython: None numpy: 1.8.1 scipy: None statsmodels: None IPython: None sphinx: None patsy: None scikits.timeseries: None dateutil: 2.2 pytz: None bottleneck: None tables: None numexpr: None matplotlib: None openpyxl: None xlrd: None xlwt: None xlsxwriter: None sqlalchemy: None lxml: None bs4: None html5lib: None bq: None apiclient: None ```
going to merge #6335 shortly to fix some basic issues. then pls revisit

It should be `isinstance(..., datetime.datetime)` for starters.

@jseabold hit the nail on the head.

@jreback I've put together a patch that will allow the `time_stamp` to be set from `to_stata` if there is any demand for this. FWIW, this code is unreachable in normal use. There are a couple of other file properties that aren't exposed externally (e.g. an 80 character description string). This was a small fix since I didn't have to refresh my memory, so I have submitted a PR.
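The `isinstance` failure quoted in the issue comes down to `pandas/io/stata.py` checking against the `datetime` module instead of the `datetime.datetime` class (after `import datetime`, the bare name refers to the module). A standalone sketch of the difference, independent of pandas:

```python
import datetime

ts = datetime.datetime.now()

# What the old stata.py line did: here `datetime` names the module,
# so isinstance() rejects it as a second argument.
try:
    isinstance(ts, datetime)
except TypeError as err:
    # "isinstance() arg 2 must be a class, type, or tuple ..."
    # (exact wording varies by Python version)
    print(err)

# What the fix checks against: the datetime.datetime class.
print(isinstance(ts, datetime.datetime))  # True
```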
2014-03-05T13:46:45Z
<patch> diff --git a/doc/source/release.rst b/doc/source/release.rst --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -147,6 +147,7 @@ Improvements to existing features - perf improvements in DataFrame construction with certain offsets, by removing faulty caching (e.g. MonthEnd,BusinessMonthEnd), (:issue:`6479`) - perf improvements in single-dtyped indexing (:issue:`6484`) +- ``StataWriter`` and ``DataFrame.to_stata`` accept time stamp and data labels (:issue:`6545`) .. _release.bug_fixes-0.14.0: diff --git a/doc/source/v0.14.0.txt b/doc/source/v0.14.0.txt --- a/doc/source/v0.14.0.txt +++ b/doc/source/v0.14.0.txt @@ -312,6 +312,9 @@ Enhancements - ``DataFrame.to_stata`` will now check data for compatibility with Stata data types and will upcast when needed. When it isn't possibly to losslessly upcast, a warning is raised (:issue:`6327`) +- ``DataFrame.to_stata`` and ``StataWriter`` will accept keyword arguments time_stamp + and data_label which allow the time stamp and dataset label to be set when creating a + file. (:issue:`6545`) Performance ~~~~~~~~~~~ diff --git a/pandas/core/frame.py b/pandas/core/frame.py --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -1216,7 +1216,7 @@ def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='', def to_stata( self, fname, convert_dates=None, write_index=True, encoding="latin-1", - byteorder=None): + byteorder=None, time_stamp=None, data_label=None): """ A class for writing Stata binary dta files from array-like objects @@ -1247,7 +1247,8 @@ def to_stata( """ from pandas.io.stata import StataWriter writer = StataWriter(fname, self, convert_dates=convert_dates, - encoding=encoding, byteorder=byteorder) + encoding=encoding, byteorder=byteorder, + time_stamp=time_stamp, data_label=data_label) writer.write_file() def to_sql(self, name, con, flavor='sqlite', if_exists='fail', **kwargs): diff --git a/pandas/io/stata.py b/pandas/io/stata.py --- a/pandas/io/stata.py +++ b/pandas/io/stata.py @@ -375,6 +375,18 @@ def __init__(self, encoding): 'd': np.float64(struct.unpack('<d', b'\x00\x00\x00\x00\x00\x00\xe0\x7f')[0]) } + # Reserved words cannot be used as variable names + self.RESERVED_WORDS = ('aggregate', 'array', 'boolean', 'break', + 'byte', 'case', 'catch', 'class', 'colvector', + 'complex', 'const', 'continue', 'default', + 'delegate', 'delete', 'do', 'double', 'else', + 'eltypedef', 'end', 'enum', 'explicit', + 'export', 'external', 'float', 'for', 'friend', + 'function', 'global', 'goto', 'if', 'inline', + 'int', 'local', 'long', 'NULL', 'pragma', + 'protected', 'quad', 'rowvector', 'short', + 'typedef', 'typename', 'virtual') + def _decode_bytes(self, str, errors=None): if compat.PY3 or self._encoding is not None: return str.decode(self._encoding, errors) @@ -449,10 +461,10 @@ def _read_header(self): self.path_or_buf.read(4))[0] self.path_or_buf.read(11) # </N><label> strlen = struct.unpack('b', self.path_or_buf.read(1))[0] - self.data_label = self.path_or_buf.read(strlen) + self.data_label = self._null_terminate(self.path_or_buf.read(strlen)) self.path_or_buf.read(19) # </label><timestamp> strlen = struct.unpack('b', self.path_or_buf.read(1))[0] - self.time_stamp = self.path_or_buf.read(strlen) + self.time_stamp = self._null_terminate(self.path_or_buf.read(strlen)) self.path_or_buf.read(26) # </timestamp></header><map> self.path_or_buf.read(8) # 0x0000000000000000 self.path_or_buf.read(8) # position of <map> @@ -543,11 +555,11 @@ def _read_header(self): self.nobs = struct.unpack(self.byteorder + 'I', 
self.path_or_buf.read(4))[0] if self.format_version > 105: - self.data_label = self.path_or_buf.read(81) + self.data_label = self._null_terminate(self.path_or_buf.read(81)) else: - self.data_label = self.path_or_buf.read(32) + self.data_label = self._null_terminate(self.path_or_buf.read(32)) if self.format_version > 104: - self.time_stamp = self.path_or_buf.read(18) + self.time_stamp = self._null_terminate(self.path_or_buf.read(18)) # descriptors if self.format_version > 108: @@ -1029,6 +1041,11 @@ class StataWriter(StataParser): byteorder : str Can be ">", "<", "little", or "big". The default is None which uses `sys.byteorder` + time_stamp : datetime + A date time to use when writing the file. Can be None, in which + case the current time is used. + dataset_label : str + A label for the data set. Should be 80 characters or smaller. Returns ------- @@ -1047,10 +1064,13 @@ class StataWriter(StataParser): >>> writer.write_file() """ def __init__(self, fname, data, convert_dates=None, write_index=True, - encoding="latin-1", byteorder=None): + encoding="latin-1", byteorder=None, time_stamp=None, + data_label=None): super(StataWriter, self).__init__(encoding) self._convert_dates = convert_dates self._write_index = write_index + self._time_stamp = time_stamp + self._data_label = data_label # attach nobs, nvars, data, varlist, typlist self._prepare_pandas(data) @@ -1086,7 +1106,7 @@ def __iter__(self): if self._write_index: data = data.reset_index() - # Check columns for compatbaility with stata + # Check columns for compatibility with stata data = _cast_to_stata_types(data) self.datarows = DataFrameRowIter(data) self.nobs, self.nvar = data.shape @@ -1110,7 +1130,8 @@ def __iter__(self): self.fmtlist[key] = self._convert_dates[key] def write_file(self): - self._write_header() + self._write_header(time_stamp=self._time_stamp, + data_label=self._data_label) self._write_descriptors() self._write_variable_labels() # write 5 zeros for expansion fields @@ -1147,7 +1168,7 @@ def _write_header(self, data_label=None, time_stamp=None): # format dd Mon yyyy hh:mm if time_stamp is None: time_stamp = datetime.datetime.now() - elif not isinstance(time_stamp, datetime): + elif not isinstance(time_stamp, datetime.datetime): raise ValueError("time_stamp should be datetime type") self._file.write( self._null_terminate(time_stamp.strftime("%d %b %Y %H:%M")) @@ -1169,7 +1190,9 @@ def _write_descriptors(self, typlist=None, varlist=None, srtlist=None, for c in name: if (c < 'A' or c > 'Z') and (c < 'a' or c > 'z') and (c < '0' or c > '9') and c != '_': name = name.replace(c, '_') - + # Variable name must not be a reserved word + if name in self.RESERVED_WORDS: + name = '_' + name # Variable name may not start with a number if name[0] > '0' and name[0] < '9': name = '_' + name </patch>
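With the patch in place, the reproducible-output request from the issue becomes a matter of passing the new keywords through `DataFrame.to_stata`. A sketch; the file name, label, and fixed date are arbitrary choices of mine.

```python
import datetime
import pandas as pd

df = pd.DataFrame({'A': [1.0, 2.0, 3.0], 'B': [4.0, 5.0, 6.0]})

# Pin the header timestamp (and optionally the dataset label, which should be
# 80 characters or smaller) so the .dta header no longer changes between runs.
fixed_stamp = datetime.datetime(2014, 1, 1, 0, 0)
df.to_stata('output.dta', time_stamp=fixed_stamp, data_label='reproducible example')
```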
[]
[]
pandas-dev__pandas-4267
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> test_bs4_version_fails: ImportError: html5lib not found please install it ``` ====================================================================== ERROR: pandas.io.tests.test_html.test_bs4_version_fails ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 83, in test_bs4_version_fails flavor='bs4') File "/usr/lib/python3.2/unittest/case.py", line 557, in assertRaises callableObj(*args, **kwargs) File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 906, in read_html attrs) File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 765, in _parse parser = _parser_dispatch(flav) File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 719, in _parser_dispatch raise ImportError("html5lib not found please install it") ImportError: html5lib not found please install it ``` on 4c2d050 there is no python3-html5lib on any debian system yet </issue> <code> [start of README.rst] 1 ============================================= 2 pandas: powerful Python data analysis toolkit 3 ============================================= 4 5 .. image:: https://travis-ci.org/pydata/pandas.png 6 :target: https://travis-ci.org/pydata/pandas 7 8 What is it 9 ========== 10 11 **pandas** is a Python package providing fast, flexible, and expressive data 12 structures designed to make working with "relational" or "labeled" data both 13 easy and intuitive. It aims to be the fundamental high-level building block for 14 doing practical, **real world** data analysis in Python. Additionally, it has 15 the broader goal of becoming **the most powerful and flexible open source data 16 analysis / manipulation tool available in any language**. It is already well on 17 its way toward this goal. 18 19 Main Features 20 ============= 21 22 Here are just a few of the things that pandas does well: 23 24 - Easy handling of **missing data** (represented as NaN) in floating point as 25 well as non-floating point data 26 - Size mutability: columns can be **inserted and deleted** from DataFrame and 27 higher dimensional objects 28 - Automatic and explicit **data alignment**: objects can be explicitly 29 aligned to a set of labels, or the user can simply ignore the labels and 30 let `Series`, `DataFrame`, etc. 
automatically align the data for you in 31 computations 32 - Powerful, flexible **group by** functionality to perform 33 split-apply-combine operations on data sets, for both aggregating and 34 transforming data 35 - Make it **easy to convert** ragged, differently-indexed data in other 36 Python and NumPy data structures into DataFrame objects 37 - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting** 38 of large data sets 39 - Intuitive **merging** and **joining** data sets 40 - Flexible **reshaping** and pivoting of data sets 41 - **Hierarchical** labeling of axes (possible to have multiple labels per 42 tick) 43 - Robust IO tools for loading data from **flat files** (CSV and delimited), 44 Excel files, databases, and saving / loading data from the ultrafast **HDF5 45 format** 46 - **Time series**-specific functionality: date range generation and frequency 47 conversion, moving window statistics, moving window linear regressions, 48 date shifting and lagging, etc. 49 50 Where to get it 51 =============== 52 53 The source code is currently hosted on GitHub at: http://github.com/pydata/pandas 54 55 Binary installers for the latest released version are available at the Python 56 package index:: 57 58 http://pypi.python.org/pypi/pandas/ 59 60 And via ``easy_install`` or ``pip``:: 61 62 easy_install pandas 63 pip install pandas 64 65 Dependencies 66 ============ 67 68 - `NumPy <http://www.numpy.org>`__: 1.6.1 or higher 69 - `python-dateutil <http://labix.org/python-dateutil>`__ 1.5 or higher 70 - `pytz <http://pytz.sourceforge.net/>`__ 71 - Needed for time zone support with ``date_range`` 72 73 Highly Recommended Dependencies 74 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 75 76 - `numexpr <http://code.google.com/p/numexpr/>`__ 77 - Needed to accelerate some expression evaluation operations 78 - Required by `PyTables` 79 - `bottleneck <http://berkeleyanalytics.com/bottleneck>`__ 80 - Needed to accelerate certain numerical operations 81 82 Optional dependencies 83 ~~~~~~~~~~~~~~~~~~~~~ 84 85 - `Cython <http://www.cython.org>`__: Only necessary to build development version. Version 0.17.1 or higher. 86 - `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions 87 - `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage 88 - `matplotlib <http://matplotlib.sourceforge.net/>`__: for plotting 89 - `statsmodels <http://statsmodels.sourceforge.net/>`__ 90 - Needed for parts of :mod:`pandas.stats` 91 - `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__ 92 - openpyxl version 1.6.1 or higher, for writing .xlsx files 93 - xlrd >= 0.9.0 94 - Needed for Excel I/O 95 - `boto <https://pypi.python.org/pypi/boto>`__: necessary for Amazon S3 96 access. 97 - One of the following combinations of libraries is needed to use the 98 top-level :func:`~pandas.io.html.read_html` function: 99 100 - `BeautifulSoup4`_ and `html5lib`_ (Any recent version of `html5lib`_ is 101 okay.) 102 - `BeautifulSoup4`_ and `lxml`_ 103 - `BeautifulSoup4`_ and `html5lib`_ and `lxml`_ 104 - Only `lxml`_, although see :ref:`HTML reading gotchas <html-gotchas>` 105 for reasons as to why you should probably **not** take this approach. 106 107 .. warning:: 108 109 - if you install `BeautifulSoup4`_ you must install either 110 `lxml`_ or `html5lib`_ or both. 111 :func:`~pandas.io.html.read_html` will **not** work with *only* 112 `BeautifulSoup4`_ installed. 113 - You are highly encouraged to read :ref:`HTML reading gotchas 114 <html-gotchas>`. 
It explains issues surrounding the installation and 115 usage of the above three libraries 116 - You may need to install an older version of `BeautifulSoup4`_: 117 - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 118 32-bit Ubuntu/Debian 119 - Additionally, if you're using `Anaconda`_ you should definitely 120 read :ref:`the gotchas about HTML parsing libraries <html-gotchas>` 121 122 .. note:: 123 124 - if you're on a system with ``apt-get`` you can do 125 126 .. code-block:: sh 127 128 sudo apt-get build-dep python-lxml 129 130 to get the necessary dependencies for installation of `lxml`_. This 131 will prevent further headaches down the line. 132 133 134 .. _html5lib: https://github.com/html5lib/html5lib-python 135 .. _BeautifulSoup4: http://www.crummy.com/software/BeautifulSoup 136 .. _lxml: http://lxml.de 137 .. _Anaconda: https://store.continuum.io/cshop/anaconda 138 139 140 Installation from sources 141 ========================= 142 143 To install pandas from source you need ``cython`` in addition to the normal dependencies above, 144 which can be installed from pypi:: 145 146 pip install cython 147 148 In the ``pandas`` directory (same one where you found this file after cloning the git repo), execute:: 149 150 python setup.py install 151 152 or for installing in `development mode <http://www.pip-installer.org/en/latest/usage.html>`__:: 153 154 python setup.py develop 155 156 Alternatively, you can use `pip` if you want all the dependencies pulled in automatically 157 (the optional ``-e`` option is for installing it in 158 `development mode <http://www.pip-installer.org/en/latest/usage.html>`__):: 159 160 pip install -e . 161 162 On Windows, you will need to install MinGW and execute:: 163 164 python setup.py build --compiler=mingw32 165 python setup.py install 166 167 See http://pandas.pydata.org/ for more information. 168 169 License 170 ======= 171 172 BSD 173 174 Documentation 175 ============= 176 177 The official documentation is hosted on PyData.org: http://pandas.pydata.org/ 178 179 The Sphinx documentation should provide a good starting point for learning how 180 to use the library. Expect the docs to continue to expand as time goes on. 181 182 Background 183 ========== 184 185 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 186 has been under active development since then. 187 188 Discussion and Development 189 ========================== 190 191 Since ``pandas`` development is related to a number of other scientific 192 Python projects, questions are welcome on the scipy-user mailing 193 list. Specialized discussions or design issues should take place on 194 the pystatsmodels mailing list / Google group, where 195 ``scikits.statsmodels`` and other libraries will also be discussed: 196 197 http://groups.google.com/group/pystatsmodels 198 199 .. _NumPy: http://numpy.scipy.org/ 200 [end of README.rst] [start of pandas/io/html.py] 1 """:mod:`pandas.io.html` is a module containing functionality for dealing with 2 HTML IO. 
3 4 """ 5 6 import os 7 import re 8 import numbers 9 import urllib2 10 import urlparse 11 import collections 12 13 from distutils.version import LooseVersion 14 15 import numpy as np 16 17 from pandas import DataFrame, MultiIndex, isnull 18 from pandas.io.common import _is_url, urlopen 19 20 21 try: 22 import bs4 23 except ImportError: 24 _HAS_BS4 = False 25 else: 26 _HAS_BS4 = True 27 28 29 try: 30 import lxml 31 except ImportError: 32 _HAS_LXML = False 33 else: 34 _HAS_LXML = True 35 36 37 try: 38 import html5lib 39 except ImportError: 40 _HAS_HTML5LIB = False 41 else: 42 _HAS_HTML5LIB = True 43 44 45 ############# 46 # READ HTML # 47 ############# 48 _RE_WHITESPACE = re.compile(r'([\r\n]+|\s{2,})') 49 50 51 def _remove_whitespace(s, regex=_RE_WHITESPACE): 52 """Replace extra whitespace inside of a string with a single space. 53 54 Parameters 55 ---------- 56 s : str or unicode 57 The string from which to remove extra whitespace. 58 59 regex : regex 60 The regular expression to use to remove extra whitespace. 61 62 Returns 63 ------- 64 subd : str or unicode 65 `s` with all extra whitespace replaced with a single space. 66 """ 67 return regex.sub(' ', s.strip()) 68 69 70 def _get_skiprows_iter(skiprows): 71 """Get an iterator given an integer, slice or container. 72 73 Parameters 74 ---------- 75 skiprows : int, slice, container 76 The iterator to use to skip rows; can also be a slice. 77 78 Raises 79 ------ 80 TypeError 81 * If `skiprows` is not a slice, integer, or Container 82 83 Raises 84 ------ 85 TypeError 86 * If `skiprows` is not a slice, integer, or Container 87 88 Returns 89 ------- 90 it : iterable 91 A proper iterator to use to skip rows of a DataFrame. 92 """ 93 if isinstance(skiprows, slice): 94 return range(skiprows.start or 0, skiprows.stop, skiprows.step or 1) 95 elif isinstance(skiprows, numbers.Integral): 96 return range(skiprows) 97 elif isinstance(skiprows, collections.Container): 98 return skiprows 99 else: 100 raise TypeError('{0} is not a valid type for skipping' 101 ' rows'.format(type(skiprows))) 102 103 104 def _read(io): 105 """Try to read from a url, file or string. 106 107 Parameters 108 ---------- 109 io : str, unicode, or file-like 110 111 Returns 112 ------- 113 raw_text : str 114 """ 115 if _is_url(io): 116 try: 117 with urlopen(io) as url: 118 raw_text = url.read() 119 except urllib2.URLError: 120 raise ValueError('Invalid URL: "{0}"'.format(io)) 121 elif hasattr(io, 'read'): 122 raw_text = io.read() 123 elif os.path.isfile(io): 124 with open(io) as f: 125 raw_text = f.read() 126 elif isinstance(io, basestring): 127 raw_text = io 128 else: 129 raise TypeError("Cannot read object of type " 130 "'{0.__class__.__name__!r}'".format(io)) 131 return raw_text 132 133 134 class _HtmlFrameParser(object): 135 """Base class for parsers that parse HTML into DataFrames. 136 137 Parameters 138 ---------- 139 io : str or file-like 140 This can be either a string of raw HTML, a valid URL using the HTTP, 141 FTP, or FILE protocols or a file-like object. 142 143 match : str or regex 144 The text to match in the document. 145 146 attrs : dict 147 List of HTML <table> element attributes to match. 148 149 Attributes 150 ---------- 151 io : str or file-like 152 raw HTML, URL, or file-like object 153 154 match : regex 155 The text to match in the raw HTML 156 157 attrs : dict-like 158 A dictionary of valid table attributes to use to search for table 159 elements. 
160 161 Notes 162 ----- 163 To subclass this class effectively you must override the following methods: 164 * :func:`_build_doc` 165 * :func:`_text_getter` 166 * :func:`_parse_td` 167 * :func:`_parse_tables` 168 * :func:`_parse_tr` 169 * :func:`_parse_thead` 170 * :func:`_parse_tbody` 171 * :func:`_parse_tfoot` 172 See each method's respective documentation for details on their 173 functionality. 174 """ 175 def __init__(self, io, match, attrs): 176 self.io = io 177 self.match = match 178 self.attrs = attrs 179 180 def parse_tables(self): 181 tables = self._parse_tables(self._build_doc(), self.match, self.attrs) 182 return (self._build_table(table) for table in tables) 183 184 def _parse_raw_data(self, rows): 185 """Parse the raw data into a list of lists. 186 187 Parameters 188 ---------- 189 rows : iterable of node-like 190 A list of row elements. 191 192 text_getter : callable 193 A callable that gets the text from an individual node. This must be 194 defined by subclasses. 195 196 column_finder : callable 197 A callable that takes a row node as input and returns a list of the 198 column node in that row. This must be defined by subclasses. 199 200 Raises 201 ------ 202 AssertionError 203 * If `text_getter` is not callable 204 * If `column_finder` is not callable 205 206 Returns 207 ------- 208 data : list of list of strings 209 """ 210 data = [[_remove_whitespace(self._text_getter(col)) for col in 211 self._parse_td(row)] for row in rows] 212 return data 213 214 def _text_getter(self, obj): 215 """Return the text of an individual DOM node. 216 217 Parameters 218 ---------- 219 obj : node-like 220 A DOM node. 221 222 Returns 223 ------- 224 text : str or unicode 225 The text from an individual DOM node. 226 """ 227 raise NotImplementedError 228 229 def _parse_td(self, obj): 230 """Return the td elements from a row element. 231 232 Parameters 233 ---------- 234 obj : node-like 235 236 Returns 237 ------- 238 columns : list of node-like 239 These are the elements of each row, i.e., the columns. 240 """ 241 raise NotImplementedError 242 243 def _parse_tables(self, doc, match, attrs): 244 """Return all tables from the parsed DOM. 245 246 Parameters 247 ---------- 248 doc : tree-like 249 The DOM from which to parse the table element. 250 251 match : str or regular expression 252 The text to search for in the DOM tree. 253 254 attrs : dict 255 A dictionary of table attributes that can be used to disambiguate 256 mutliple tables on a page. 257 258 Raises 259 ------ 260 AssertionError 261 * If `match` does not match any text in the document. 262 263 Returns 264 ------- 265 tables : list of node-like 266 A list of <table> elements to be parsed into raw data. 267 """ 268 raise NotImplementedError 269 270 def _parse_tr(self, table): 271 """Return the list of row elements from the parsed table element. 272 273 Parameters 274 ---------- 275 table : node-like 276 A table element that contains row elements. 277 278 Returns 279 ------- 280 rows : list of node-like 281 A list row elements of a table, usually <tr> or <th> elements. 282 """ 283 raise NotImplementedError 284 285 def _parse_thead(self, table): 286 """Return the header of a table. 287 288 Parameters 289 ---------- 290 table : node-like 291 A table element that contains row elements. 292 293 Returns 294 ------- 295 thead : node-like 296 A <thead>...</thead> element. 297 """ 298 raise NotImplementedError 299 300 def _parse_tbody(self, table): 301 """Return the body of the table. 
302 303 Parameters 304 ---------- 305 table : node-like 306 A table element that contains row elements. 307 308 Returns 309 ------- 310 tbody : node-like 311 A <tbody>...</tbody> element. 312 """ 313 raise NotImplementedError 314 315 def _parse_tfoot(self, table): 316 """Return the footer of the table if any. 317 318 Parameters 319 ---------- 320 table : node-like 321 A table element that contains row elements. 322 323 Returns 324 ------- 325 tfoot : node-like 326 A <tfoot>...</tfoot> element. 327 """ 328 raise NotImplementedError 329 330 def _build_doc(self): 331 """Return a tree-like object that can be used to iterate over the DOM. 332 333 Returns 334 ------- 335 obj : tree-like 336 """ 337 raise NotImplementedError 338 339 def _build_table(self, table): 340 header = self._parse_raw_thead(table) 341 body = self._parse_raw_tbody(table) 342 footer = self._parse_raw_tfoot(table) 343 return header, body, footer 344 345 def _parse_raw_thead(self, table): 346 thead = self._parse_thead(table) 347 res = [] 348 if thead: 349 res = map(self._text_getter, self._parse_th(thead[0])) 350 return np.array(res).squeeze() if res and len(res) == 1 else res 351 352 def _parse_raw_tfoot(self, table): 353 tfoot = self._parse_tfoot(table) 354 res = [] 355 if tfoot: 356 res = map(self._text_getter, self._parse_td(tfoot[0])) 357 return np.array(res).squeeze() if res and len(res) == 1 else res 358 359 def _parse_raw_tbody(self, table): 360 tbody = self._parse_tbody(table) 361 362 try: 363 res = self._parse_tr(tbody[0]) 364 except IndexError: 365 res = self._parse_tr(table) 366 return self._parse_raw_data(res) 367 368 369 class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser): 370 """HTML to DataFrame parser that uses BeautifulSoup under the hood. 371 372 See Also 373 -------- 374 pandas.io.html._HtmlFrameParser 375 pandas.io.html._LxmlFrameParser 376 377 Notes 378 ----- 379 Documentation strings for this class are in the base class 380 :class:`pandas.io.html._HtmlFrameParser`. 
381 """ 382 def __init__(self, *args, **kwargs): 383 super(_BeautifulSoupHtml5LibFrameParser, self).__init__(*args, 384 **kwargs) 385 from bs4 import SoupStrainer 386 self._strainer = SoupStrainer('table') 387 388 def _text_getter(self, obj): 389 return obj.text 390 391 def _parse_td(self, row): 392 return row.find_all(('td', 'th')) 393 394 def _parse_tr(self, element): 395 return element.find_all('tr') 396 397 def _parse_th(self, element): 398 return element.find_all('th') 399 400 def _parse_thead(self, table): 401 return table.find_all('thead') 402 403 def _parse_tbody(self, table): 404 return table.find_all('tbody') 405 406 def _parse_tfoot(self, table): 407 return table.find_all('tfoot') 408 409 def _parse_tables(self, doc, match, attrs): 410 element_name = self._strainer.name 411 tables = doc.find_all(element_name, attrs=attrs) 412 if not tables: 413 # known sporadically working release 414 raise AssertionError('No tables found') 415 416 mts = [table.find(text=match) for table in tables] 417 matched_tables = [mt for mt in mts if mt is not None] 418 tables = list(set(mt.find_parent(element_name) 419 for mt in matched_tables)) 420 421 if not tables: 422 raise AssertionError("No tables found matching " 423 "'{0}'".format(match.pattern)) 424 return tables 425 426 def _setup_build_doc(self): 427 raw_text = _read(self.io) 428 if not raw_text: 429 raise AssertionError('No text parsed from document: ' 430 '{0}'.format(self.io)) 431 return raw_text 432 433 def _build_doc(self): 434 from bs4 import BeautifulSoup 435 return BeautifulSoup(self._setup_build_doc(), features='html5lib') 436 437 438 def _build_node_xpath_expr(attrs): 439 """Build an xpath expression to simulate bs4's ability to pass in kwargs to 440 search for attributes when using the lxml parser. 441 442 Parameters 443 ---------- 444 attrs : dict 445 A dict of HTML attributes. These are NOT checked for validity. 446 447 Returns 448 ------- 449 expr : unicode 450 An XPath expression that checks for the given HTML attributes. 451 """ 452 # give class attribute as class_ because class is a python keyword 453 if 'class_' in attrs: 454 attrs['class'] = attrs.pop('class_') 455 456 s = (u"@{k}='{v}'".format(k=k, v=v) for k, v in attrs.iteritems()) 457 return u'[{0}]'.format(' and '.join(s)) 458 459 460 _re_namespace = {'re': 'http://exslt.org/regular-expressions'} 461 _valid_schemes = 'http', 'file', 'ftp' 462 463 464 class _LxmlFrameParser(_HtmlFrameParser): 465 """HTML to DataFrame parser that uses lxml under the hood. 466 467 Warning 468 ------- 469 This parser can only handle HTTP, FTP, and FILE urls. 470 471 See Also 472 -------- 473 _HtmlFrameParser 474 _BeautifulSoupLxmlFrameParser 475 476 Notes 477 ----- 478 Documentation strings for this class are in the base class 479 :class:`_HtmlFrameParser`. 
480 """ 481 def __init__(self, *args, **kwargs): 482 super(_LxmlFrameParser, self).__init__(*args, **kwargs) 483 484 def _text_getter(self, obj): 485 return obj.text_content() 486 487 def _parse_td(self, row): 488 return row.xpath('.//td|.//th') 489 490 def _parse_tr(self, table): 491 expr = './/tr[normalize-space()]' 492 return table.xpath(expr) 493 494 def _parse_tables(self, doc, match, kwargs): 495 pattern = match.pattern 496 497 # check all descendants for the given pattern 498 check_all_expr = u'//*' 499 if pattern: 500 check_all_expr += u"[re:test(text(), '{0}')]".format(pattern) 501 502 # go up the tree until we find a table 503 check_table_expr = '/ancestor::table' 504 xpath_expr = check_all_expr + check_table_expr 505 506 # if any table attributes were given build an xpath expression to 507 # search for them 508 if kwargs: 509 xpath_expr += _build_node_xpath_expr(kwargs) 510 tables = doc.xpath(xpath_expr, namespaces=_re_namespace) 511 if not tables: 512 raise AssertionError("No tables found matching regex " 513 "'{0}'".format(pattern)) 514 return tables 515 516 def _build_doc(self): 517 """ 518 Raises 519 ------ 520 ValueError 521 * If a URL that lxml cannot parse is passed. 522 523 Exception 524 * Any other ``Exception`` thrown. For example, trying to parse a 525 URL that is syntactically correct on a machine with no internet 526 connection will fail. 527 528 See Also 529 -------- 530 pandas.io.html._HtmlFrameParser._build_doc 531 """ 532 from lxml.html import parse, fromstring, HTMLParser 533 from lxml.etree import XMLSyntaxError 534 parser = HTMLParser(recover=False) 535 536 try: 537 # try to parse the input in the simplest way 538 r = parse(self.io, parser=parser) 539 540 try: 541 r = r.getroot() 542 except AttributeError: 543 pass 544 except (UnicodeDecodeError, IOError): 545 # if the input is a blob of html goop 546 if not _is_url(self.io): 547 r = fromstring(self.io, parser=parser) 548 549 try: 550 r = r.getroot() 551 except AttributeError: 552 pass 553 else: 554 # not a url 555 scheme = urlparse.urlparse(self.io).scheme 556 if scheme not in _valid_schemes: 557 # lxml can't parse it 558 msg = ('{0} is not a valid url scheme, valid schemes are ' 559 '{1}').format(scheme, _valid_schemes) 560 raise ValueError(msg) 561 else: 562 # something else happened: maybe a faulty connection 563 raise 564 else: 565 if not hasattr(r, 'text_content'): 566 raise XMLSyntaxError("no text parsed from document", 0, 0, 0) 567 return r 568 569 def _parse_tbody(self, table): 570 return table.xpath('.//tbody') 571 572 def _parse_thead(self, table): 573 return table.xpath('.//thead') 574 575 def _parse_tfoot(self, table): 576 return table.xpath('.//tfoot') 577 578 def _parse_raw_thead(self, table): 579 expr = './/thead//th' 580 return [_remove_whitespace(x.text_content()) for x in 581 table.xpath(expr)] 582 583 def _parse_raw_tfoot(self, table): 584 expr = './/tfoot//th' 585 return [_remove_whitespace(x.text_content()) for x in 586 table.xpath(expr)] 587 588 589 def _data_to_frame(data, header, index_col, infer_types, skiprows): 590 """Parse a BeautifulSoup table into a DataFrame. 591 592 Parameters 593 ---------- 594 data : tuple of lists 595 The raw data to be placed into a DataFrame. This is a list of lists of 596 strings or unicode. If it helps, it can be thought of as a matrix of 597 strings instead. 598 599 header : int or None 600 An integer indicating the row to use for the column header or None 601 indicating no header will be used. 
602 603 index_col : int or None 604 An integer indicating the column to use for the index or None 605 indicating no column will be used. 606 607 infer_types : bool 608 Whether to convert numbers and dates. 609 610 skiprows : collections.Container or int or slice 611 Iterable used to skip rows. 612 613 Returns 614 ------- 615 df : DataFrame 616 A DataFrame containing the data from `data` 617 618 Raises 619 ------ 620 ValueError 621 * If `skiprows` is not found in the rows of the parsed DataFrame. 622 623 Raises 624 ------ 625 ValueError 626 * If `skiprows` is not found in the rows of the parsed DataFrame. 627 628 See Also 629 -------- 630 read_html 631 632 Notes 633 ----- 634 The `data` parameter is guaranteed not to be a list of empty lists. 635 """ 636 thead, tbody, tfoot = data 637 columns = thead or None 638 df = DataFrame(tbody, columns=columns) 639 640 if skiprows is not None: 641 it = _get_skiprows_iter(skiprows) 642 643 try: 644 df = df.drop(it) 645 except ValueError: 646 raise ValueError('Labels {0} not found when trying to skip' 647 ' rows'.format(it)) 648 649 # convert to numbers/dates where possible 650 # must be sequential since dates trump numbers if both args are given 651 if infer_types: 652 df = df.convert_objects(convert_numeric=True) 653 df = df.convert_objects(convert_dates='coerce') 654 655 if header is not None: 656 header_rows = df.iloc[header] 657 658 if header_rows.ndim == 2: 659 names = header_rows.index 660 df.columns = MultiIndex.from_arrays(header_rows.values, 661 names=names) 662 else: 663 df.columns = header_rows 664 665 df = df.drop(df.index[header]) 666 667 if index_col is not None: 668 cols = df.columns[index_col] 669 670 try: 671 cols = cols.tolist() 672 except AttributeError: 673 pass 674 675 # drop by default 676 df.set_index(cols, inplace=True) 677 if df.index.nlevels == 1: 678 if isnull(df.index.name) or not df.index.name: 679 df.index.name = None 680 else: 681 names = [name or None for name in df.index.names] 682 df.index = MultiIndex.from_tuples(df.index.values, names=names) 683 684 return df 685 686 687 _valid_parsers = {'lxml': _LxmlFrameParser, None: _LxmlFrameParser, 688 'html5lib': _BeautifulSoupHtml5LibFrameParser, 689 'bs4': _BeautifulSoupHtml5LibFrameParser} 690 691 692 def _parser_dispatch(flavor): 693 """Choose the parser based on the input flavor. 694 695 Parameters 696 ---------- 697 flavor : str 698 The type of parser to use. This must be a valid backend. 699 700 Returns 701 ------- 702 cls : _HtmlFrameParser subclass 703 The parser class based on the requested input flavor. 704 705 Raises 706 ------ 707 AssertionError 708 * If `flavor` is not a valid backend. 709 ImportError 710 * If you do not have the requested `flavor` 711 """ 712 valid_parsers = _valid_parsers.keys() 713 if flavor not in valid_parsers: 714 raise AssertionError('"{0!r}" is not a valid flavor, valid flavors are' 715 ' {1}'.format(flavor, valid_parsers)) 716 717 if flavor in ('bs4', 'html5lib'): 718 if not _HAS_HTML5LIB: 719 raise ImportError("html5lib not found please install it") 720 if not _HAS_BS4: 721 raise ImportError("bs4 not found please install it") 722 if bs4.__version__ == LooseVersion('4.2.0'): 723 raise AssertionError("You're using a version" 724 " of BeautifulSoup4 (4.2.0) that has been" 725 " known to cause problems on certain" 726 " operating systems such as Debian. 
" 727 "Please install a version of" 728 " BeautifulSoup4 != 4.2.0, both earlier" 729 " and later releases will work.") 730 else: 731 if not _HAS_LXML: 732 raise ImportError("lxml not found please install it") 733 return _valid_parsers[flavor] 734 735 736 def _validate_parser_flavor(flavor): 737 if flavor is None: 738 flavor = ['lxml', 'bs4'] 739 elif isinstance(flavor, basestring): 740 flavor = [flavor] 741 elif isinstance(flavor, collections.Iterable): 742 if not all(isinstance(flav, basestring) for flav in flavor): 743 raise TypeError('{0} is not an iterable of strings'.format(flavor)) 744 else: 745 raise TypeError('{0} is not a valid "flavor"'.format(flavor)) 746 747 flavor = list(flavor) 748 valid_flavors = _valid_parsers.keys() 749 750 if not set(flavor) & set(valid_flavors): 751 raise ValueError('{0} is not a valid set of flavors, valid flavors are' 752 ' {1}'.format(flavor, valid_flavors)) 753 return flavor 754 755 756 def _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs): 757 # bonus: re.compile is idempotent under function iteration so you can pass 758 # a compiled regex to it and it will return itself 759 flavor = _validate_parser_flavor(flavor) 760 compiled_match = re.compile(match) 761 762 # ugly hack because python 3 DELETES the exception variable! 763 retained = None 764 for flav in flavor: 765 parser = _parser_dispatch(flav) 766 p = parser(io, compiled_match, attrs) 767 768 try: 769 tables = p.parse_tables() 770 except Exception as caught: 771 retained = caught 772 else: 773 break 774 else: 775 raise retained 776 777 return [_data_to_frame(table, header, index_col, infer_types, skiprows) 778 for table in tables] 779 780 781 def read_html(io, match='.+', flavor=None, header=None, index_col=None, 782 skiprows=None, infer_types=True, attrs=None): 783 r"""Read an HTML table into a DataFrame. 784 785 Parameters 786 ---------- 787 io : str or file-like 788 A string or file like object that can be either a url, a file-like 789 object, or a raw string containing HTML. Note that lxml only accepts 790 the http, ftp and file url protocols. If you have a URI that starts 791 with ``'https'`` you might removing the ``'s'``. 792 793 match : str or regex, optional, default '.+' 794 The set of tables containing text matching this regex or string will be 795 returned. Unless the HTML is extremely simple you will probably need to 796 pass a non-empty string here. Defaults to '.+' (match any non-empty 797 string). The default value will return all tables contained on a page. 798 This value is converted to a regular expression so that there is 799 consistent behavior between Beautiful Soup and lxml. 800 801 flavor : str, container of strings, default ``None`` 802 The parsing engine to use under the hood. 'bs4' and 'html5lib' are 803 synonymous with each other, they are both there for backwards 804 compatibility. The default of ``None`` tries to use ``lxml`` to parse 805 and if that fails it falls back on ``bs4`` + ``html5lib``. 806 807 header : int or array-like or None, optional, default ``None`` 808 The row (or rows for a MultiIndex) to use to make the columns headers. 809 Note that this row will be removed from the data. 810 811 index_col : int or array-like or None, optional, default ``None`` 812 The column to use to make the index. Note that this column will be 813 removed from the data. 814 815 skiprows : int or collections.Container or slice or None, optional, default ``None`` 816 If an integer is given then skip this many rows after parsing the 817 column header. 
If a sequence of integers is given skip those specific 818 rows (0-based). Note that 819 820 .. code-block:: python 821 822 skiprows == 0 823 824 yields the same result as 825 826 .. code-block:: python 827 828 skiprows is None 829 830 If `skiprows` is a positive integer, say :math:`n`, then 831 it is treated as "skip :math:`n` rows", *not* as "skip the 832 :math:`n^\textrm{th}` row". 833 834 infer_types : bool, optional, default ``True`` 835 Whether to convert numeric types and date-appearing strings to numbers 836 and dates, respectively. 837 838 attrs : dict or None, optional, default ``None`` 839 This is a dictionary of attributes that you can pass to use to identify 840 the table in the HTML. These are not checked for validity before being 841 passed to lxml or Beautiful Soup. However, these attributes must be 842 valid HTML table attributes to work correctly. For example, 843 844 .. code-block:: python 845 846 attrs = {'id': 'table'} 847 848 is a valid attribute dictionary because the 'id' HTML tag attribute is 849 a valid HTML attribute for *any* HTML tag as per `this document 850 <http://www.w3.org/TR/html-markup/global-attributes.html>`__. 851 852 .. code-block:: python 853 854 attrs = {'asdf': 'table'} 855 856 is *not* a valid attribute dictionary because 'asdf' is not a valid 857 HTML attribute even if it is a valid XML attribute. Valid HTML 4.01 858 table attributes can be found `here 859 <http://www.w3.org/TR/REC-html40/struct/tables.html#h-11.2>`__. A 860 working draft of the HTML 5 spec can be found `here 861 <http://www.w3.org/TR/html-markup/table.html>`__. It contains the 862 latest information on table attributes for the modern web. 863 864 Returns 865 ------- 866 dfs : list of DataFrames 867 A list of DataFrames, each of which is the parsed data from each of the 868 tables on the page. 869 870 Notes 871 ----- 872 Before using this function you should probably read the :ref:`gotchas about 873 the parser libraries that this function uses <html-gotchas>`. 874 875 There's as little cleaning of the data as possible due to the heterogeneity 876 and general disorder of HTML on the web. 877 878 Expect some cleanup after you call this function. For example, 879 you might need to pass `infer_types=False` and perform manual conversion if 880 the column names are converted to NaN when you pass the `header=0` 881 argument. We try to assume as little as possible about the structure of the 882 table and push the idiosyncrasies of the HTML contained in the table to 883 you, the user. 884 885 This function only searches for <table> elements and only for <tr> and <th> 886 rows and <td> elements within those rows. This could be extended by 887 subclassing one of the parser classes contained in :mod:`pandas.io.html`. 888 889 Similar to :func:`read_csv` the `header` argument is applied **after** 890 `skiprows` is applied. 891 892 This function will *always* return a list of :class:`DataFrame` *or* 893 it will fail, e.g., it will *not* return an empty list. 894 895 Examples 896 -------- 897 See the :ref:`read_html documentation in the IO section of the docs 898 <io.read_html>` for many examples of reading HTML. 899 """ 900 # Type check here. We don't want to parse only to fail because of an 901 # invalid value of an integer skiprows. 
902 if isinstance(skiprows, numbers.Integral) and skiprows < 0: 903 raise AssertionError('cannot skip rows starting from the end of the ' 904 'data (you passed a negative value)') 905 return _parse(flavor, io, match, header, index_col, skiprows, infer_types, 906 attrs) 907 [end of pandas/io/html.py] [start of scripts/find_commits_touching_func.py] 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 4 # copryright 2013, y-p @ github 5 6 from __future__ import print_function 7 8 """Search the git history for all commits touching a named method 9 10 You need the sh module to run this 11 WARNING: this script uses git clean -f, running it on a repo with untracked files 12 will probably erase them. 13 """ 14 import logging 15 import re 16 import os 17 from collections import namedtuple 18 from dateutil import parser 19 20 try: 21 import sh 22 except ImportError: 23 raise ImportError("The 'sh' package is required in order to run this script. ") 24 25 import argparse 26 27 desc = """ 28 Find all commits touching a sepcified function across the codebase. 29 """.strip() 30 argparser = argparse.ArgumentParser(description=desc) 31 argparser.add_argument('funcname', metavar='FUNCNAME', 32 help='Name of function/method to search for changes on.') 33 argparser.add_argument('-f', '--file-masks', metavar='f_re(,f_re)*', 34 default=["\.py.?$"], 35 help='comma seperated list of regexes to match filenames against\n'+ 36 'defaults all .py? files') 37 argparser.add_argument('-d', '--dir-masks', metavar='d_re(,d_re)*', 38 default=[], 39 help='comma seperated list of regexes to match base path against') 40 argparser.add_argument('-p', '--path-masks', metavar='p_re(,p_re)*', 41 default=[], 42 help='comma seperated list of regexes to match full file path against') 43 argparser.add_argument('-y', '--saw-the-warning', 44 action='store_true',default=False, 45 help='must specify this to run, acknowledge you realize this will erase untracked files') 46 argparser.add_argument('--debug-level', 47 default="CRITICAL", 48 help='debug level of messages (DEBUG,INFO,etc...)') 49 50 args = argparser.parse_args() 51 52 53 lfmt = logging.Formatter(fmt='%(levelname)-8s %(message)s', 54 datefmt='%m-%d %H:%M:%S' 55 ) 56 57 shh = logging.StreamHandler() 58 shh.setFormatter(lfmt) 59 60 logger=logging.getLogger("findit") 61 logger.addHandler(shh) 62 63 64 Hit=namedtuple("Hit","commit path") 65 HASH_LEN=8 66 67 def clean_checkout(comm): 68 h,s,d = get_commit_vitals(comm) 69 if len(s) > 60: 70 s = s[:60] + "..." 
71 s=s.split("\n")[0] 72 logger.info("CO: %s %s" % (comm,s )) 73 74 sh.git('checkout', comm ,_tty_out=False) 75 sh.git('clean', '-f') 76 77 def get_hits(defname,files=()): 78 cs=set() 79 for f in files: 80 try: 81 r=sh.git('blame', '-L', '/def\s*{start}/,/def/'.format(start=defname),f,_tty_out=False) 82 except sh.ErrorReturnCode_128: 83 logger.debug("no matches in %s" % f) 84 continue 85 86 lines = r.strip().splitlines()[:-1] 87 # remove comment lines 88 lines = [x for x in lines if not re.search("^\w+\s*\(.+\)\s*#",x)] 89 hits = set(map(lambda x: x.split(" ")[0],lines)) 90 cs.update(set([Hit(commit=c,path=f) for c in hits])) 91 92 return cs 93 94 def get_commit_info(c,fmt,sep='\t'): 95 r=sh.git('log', "--format={}".format(fmt), '{}^..{}'.format(c,c),"-n","1",_tty_out=False) 96 return unicode(r).split(sep) 97 98 def get_commit_vitals(c,hlen=HASH_LEN): 99 h,s,d= get_commit_info(c,'%H\t%s\t%ci',"\t") 100 return h[:hlen],s,parser.parse(d) 101 102 def file_filter(state,dirname,fnames): 103 if args.dir_masks and not any([re.search(x,dirname) for x in args.dir_masks]): 104 return 105 for f in fnames: 106 p = os.path.abspath(os.path.join(os.path.realpath(dirname),f)) 107 if any([re.search(x,f) for x in args.file_masks])\ 108 or any([re.search(x,p) for x in args.path_masks]): 109 if os.path.isfile(p): 110 state['files'].append(p) 111 112 def search(defname,head_commit="HEAD"): 113 HEAD,s = get_commit_vitals("HEAD")[:2] 114 logger.info("HEAD at %s: %s" % (HEAD,s)) 115 done_commits = set() 116 # allhits = set() 117 files = [] 118 state = dict(files=files) 119 os.path.walk('.',file_filter,state) 120 # files now holds a list of paths to files 121 122 # seed with hits from q 123 allhits= set(get_hits(defname, files = files)) 124 q = set([HEAD]) 125 try: 126 while q: 127 h=q.pop() 128 clean_checkout(h) 129 hits = get_hits(defname, files = files) 130 for x in hits: 131 prevc = get_commit_vitals(x.commit+"^")[0] 132 if prevc not in done_commits: 133 q.add(prevc) 134 allhits.update(hits) 135 done_commits.add(h) 136 137 logger.debug("Remaining: %s" % q) 138 finally: 139 logger.info("Restoring HEAD to %s" % HEAD) 140 clean_checkout(HEAD) 141 return allhits 142 143 def pprint_hits(hits): 144 SUBJ_LEN=50 145 PATH_LEN = 20 146 hits=list(hits) 147 max_p = 0 148 for hit in hits: 149 p=hit.path.split(os.path.realpath(os.curdir)+os.path.sep)[-1] 150 max_p=max(max_p,len(p)) 151 152 if max_p < PATH_LEN: 153 SUBJ_LEN += PATH_LEN - max_p 154 PATH_LEN = max_p 155 156 def sorter(i): 157 h,s,d=get_commit_vitals(hits[i].commit) 158 return hits[i].path,d 159 160 print("\nThese commits touched the %s method in these files on these dates:\n" \ 161 % args.funcname) 162 for i in sorted(range(len(hits)),key=sorter): 163 hit = hits[i] 164 h,s,d=get_commit_vitals(hit.commit) 165 p=hit.path.split(os.path.realpath(os.curdir)+os.path.sep)[-1] 166 167 fmt = "{:%d} {:10} {:<%d} {:<%d}" % (HASH_LEN, SUBJ_LEN, PATH_LEN) 168 if len(s) > SUBJ_LEN: 169 s = s[:SUBJ_LEN-5] + " ..." 170 print(fmt.format(h[:HASH_LEN],d.isoformat()[:10],s,p[-20:]) ) 171 172 print("\n") 173 174 def main(): 175 if not args.saw_the_warning: 176 argparser.print_help() 177 print(""" 178 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 179 WARNING: this script uses git clean -f, running it on a repo with untracked files. 180 It's recommended that you make a fresh clone and run from it's root directory. 181 You must specify the -y argument to ignore this warning. 
182 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 183 """) 184 return 185 if isinstance(args.file_masks,basestring): 186 args.file_masks = args.file_masks.split(',') 187 if isinstance(args.path_masks,basestring): 188 args.path_masks = args.path_masks.split(',') 189 if isinstance(args.dir_masks,basestring): 190 args.dir_masks = args.dir_masks.split(',') 191 192 logger.setLevel(getattr(logging,args.debug_level)) 193 194 hits=search(args.funcname) 195 pprint_hits(hits) 196 197 pass 198 199 if __name__ == "__main__": 200 import sys 201 sys.exit(main()) 202 [end of scripts/find_commits_touching_func.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
df5af03d4ff7d31fac00daa84cff0bc223a846d9
test_bs4_version_fails: ImportError: html5lib not found please install it ``` ====================================================================== ERROR: pandas.io.tests.test_html.test_bs4_version_fails ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/tests/test_html.py", line 83, in test_bs4_version_fails flavor='bs4') File "/usr/lib/python3.2/unittest/case.py", line 557, in assertRaises callableObj(*args, **kwargs) File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 906, in read_html attrs) File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 765, in _parse parser = _parser_dispatch(flav) File "/home/yoh/deb/gits/pkg-exppsy/pandas/debian/tmp/usr/lib/python3/dist-packages/pandas/io/html.py", line 719, in _parser_dispatch raise ImportError("html5lib not found please install it") ImportError: html5lib not found please install it ``` on 4c2d050 there is no python3-html5lib on any debian system yet
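This failure can be read directly off the `_parser_dispatch` code shown in the record above: when `flavor='bs4'` (or `'html5lib'`) is requested and the `html5lib` package cannot be imported, an `ImportError` is raised before the BeautifulSoup4 version check that the test means to exercise. A minimal reproduction sketch under that assumption; the HTML string is illustrative.

```python
from pandas.io.html import read_html

html = '<table><tr><th>a</th></tr><tr><td>1</td></tr></table>'

# On a machine with BeautifulSoup4 installed but no html5lib,
# _parser_dispatch('bs4') raises before any parsing happens:
#   ImportError: html5lib not found please install it
read_html(html, flavor='bs4')
```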
2013-07-16T21:29:15Z
<patch> diff --git a/doc/source/release.rst b/doc/source/release.rst --- a/doc/source/release.rst +++ b/doc/source/release.rst @@ -341,6 +341,7 @@ pandas 0.12 (:issue:`4226`) - Fixed bug in initializing ``DatetimeIndex`` with an array of strings in a certain time zone (:issue:`4229`) + - Fixed bug where html5lib wasn't being properly skipped (:issue:`4265`) pandas 0.11.0 ============= diff --git a/doc/source/v0.12.0.txt b/doc/source/v0.12.0.txt --- a/doc/source/v0.12.0.txt +++ b/doc/source/v0.12.0.txt @@ -474,6 +474,7 @@ Bug Fixes (:issue:`4226`) - Fixed bug in initializing ``DatetimeIndex`` with an array of strings in a certain time zone (:issue:`4229`) + - Fixed bug where html5lib wasn't being properly skipped (:issue:`4265`) See the :ref:`full release notes <release>` or issue tracker </patch>
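The diff above only records the release note ("Fixed bug where html5lib wasn't being properly skipped"); the test-side change itself is not shown in this record. For illustration, a sketch of the kind of guard a nose-based suite typically uses so that a missing optional backend turns into a skip rather than an error — the helper name and test body are assumptions, not the actual pandas test code.

```python
import nose

def _skip_if_no(module_name):
    # Turn a missing optional dependency into a skipped test instead of
    # letting its ImportError surface as a test error.
    try:
        __import__(module_name)
    except ImportError:
        raise nose.SkipTest('{0} not installed, skipping'.format(module_name))

def test_bs4_version_fails():
    _skip_if_no('bs4')
    _skip_if_no('html5lib')
    # ...the assertion about the problematic BeautifulSoup4 4.2.0 release
    # would follow here...
```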
[]
[]
pantsbuild__pants-5605
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Engine executions for multiple SingleAddresses are redundant In many cases we're aware of literal addresses that may or may not exist (ie, they came in from the command line or via `--target-spec-file`). Currently those are represented as independent `SingleAddress` objects, each of which ends up being a root in the engine. While the work will not be completely redundant, requesting transitive `HydratedTargets` for each of many independent roots will result in many independent dependencies sets, which then need to be merged for output. To preserve semantics while improving performance, we should request transitive `HydratedTargets` for a container holding all of the roots, which will dedupe across the entire set. </issue> <code> [start of README.md] 1 # Pants Build System 2 3 Pants is a build system for software projects in a variety of languages. 4 It works particularly well for a source code repository that contains 5 many distinct projects. 6 7 Friendly documentation: http://www.pantsbuild.org/ 8 9 We release to [PyPI](https://pypi.python.org/pypi) 10 [![version](https://img.shields.io/pypi/v/pantsbuild.pants.svg)](https://pypi.python.org/pypi/pantsbuild.pants) 11 [![license](https://img.shields.io/pypi/l/pantsbuild.pants.svg)](https://pypi.python.org/pypi/pantsbuild.pants) 12 13 We use [Travis CI](https://travis-ci.org) to verify the build 14 [![Build Status](https://travis-ci.org/pantsbuild/pants.svg?branch=master)](https://travis-ci.org/pantsbuild/pants/branches). 15 16 We use [Coveralls](https://coveralls.io) to monitor test coverage 17 [![Coverage Status](https://coveralls.io/repos/pantsbuild/pants/badge.png?branch=master)](https://coveralls.io/r/pantsbuild/pants). 18 19 # Requirements 20 21 At a minimum, pants requires the following to run properly: 22 23 * Linux or Mac OS X 24 * Python 2.7.x (the latest stable version of 2.7 is recommended) 25 * A C compiler, system headers, Python headers (to compile native Python modules) and the libffi 26 library and headers (to compile and link modules that use CFFI to access native code). 27 * Internet access (so that pants can fully bootstrap itself) 28 29 Additionally, if you use the jvm backend to work with java or scala code (installed by default): 30 31 * OpenJDK or Oracle JDK version 7 or greater 32 [end of README.md] [start of contrib/go/src/python/pants/contrib/go/tasks/go_fetch.py] 1 # coding=utf-8 2 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 
4 5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function, 6 unicode_literals, with_statement) 7 8 import os 9 import shutil 10 from collections import defaultdict 11 12 from pants.base.exceptions import TaskError 13 from pants.build_graph.address import Address 14 from pants.build_graph.address_lookup_error import AddressLookupError 15 from pants.util.contextutil import temporary_dir 16 from pants.util.dirutil import safe_concurrent_creation, safe_mkdir 17 18 from pants.contrib.go.subsystems.fetcher_factory import FetcherFactory 19 from pants.contrib.go.targets.go_remote_library import GoRemoteLibrary 20 from pants.contrib.go.tasks.go_task import GoTask 21 22 23 class GoFetch(GoTask): 24 """Fetches third-party Go libraries.""" 25 26 @classmethod 27 def implementation_version(cls): 28 return super(GoFetch, cls).implementation_version() + [('GoFetch', 2)] 29 30 @classmethod 31 def subsystem_dependencies(cls): 32 return super(GoFetch, cls).subsystem_dependencies() + (FetcherFactory,) 33 34 @classmethod 35 def product_types(cls): 36 return ['go_remote_lib_src'] 37 38 @classmethod 39 def register_options(cls, register): 40 pass 41 42 @property 43 def cache_target_dirs(self): 44 # TODO(John Sirois): See TODO in _fetch_pkg, re-consider how artifact caching works for fetches. 45 return True 46 47 def execute(self): 48 self.context.products.safe_create_data('go_remote_lib_src', lambda: defaultdict(str)) 49 go_remote_libs = self.context.targets(self.is_remote_lib) 50 if not go_remote_libs: 51 return 52 53 undeclared_deps = self._transitive_download_remote_libs(set(go_remote_libs)) 54 if undeclared_deps: 55 self._log_undeclared_deps(undeclared_deps) 56 raise TaskError('Failed to resolve transitive Go remote dependencies.') 57 58 def _log_undeclared_deps(self, undeclared_deps): 59 for dependee, deps in undeclared_deps.items(): 60 self.context.log.error('{address} has remote dependencies which require local declaration:' 61 .format(address=dependee.address.reference())) 62 for dep_import_path, address in deps: 63 self.context.log.error('\t--> {import_path} (expected go_remote_library declaration ' 64 'at {address})'.format(import_path=dep_import_path, 65 address=address.reference())) 66 67 @staticmethod 68 def _get_fetcher(import_path): 69 return FetcherFactory.global_instance().get_fetcher(import_path) 70 71 def _fetch_pkg(self, gopath, pkg, rev): 72 """Fetch the package and setup symlinks.""" 73 fetcher = self._get_fetcher(pkg) 74 root = fetcher.root() 75 root_dir = os.path.join(self.workdir, 'fetches', root, rev) 76 77 # Only fetch each remote root once. 78 if not os.path.exists(root_dir): 79 with temporary_dir() as tmp_fetch_root: 80 with self.context.new_workunit('fetch {}'.format(pkg)): 81 fetcher.fetch(dest=tmp_fetch_root, rev=rev) 82 safe_mkdir(root_dir) 83 for path in os.listdir(tmp_fetch_root): 84 shutil.move(os.path.join(tmp_fetch_root, path), os.path.join(root_dir, path)) 85 86 # TODO(John Sirois): Circle back and get get rid of this symlink tree. 87 # GoWorkspaceTask will further symlink a single package from the tree below into a 88 # target's workspace when it could just be linking from the fetch_dir. The only thing 89 # standing in the way is a determination of what we want to artifact cache. If we don't 90 # want to cache fetched zips, linking straight from the fetch_dir works simply. 
Otherwise 91 # thought needs to be applied to using the artifact cache directly or synthesizing a 92 # canonical owner target for the fetched files that 'child' targets (subpackages) can 93 # depend on and share the fetch from. 94 dest_dir = os.path.join(gopath, 'src', root) 95 # We may have been `invalidate`d and not `clean-all`ed so we need a new empty symlink 96 # chroot to avoid collision; thus `clean=True`. 97 safe_mkdir(dest_dir, clean=True) 98 for path in os.listdir(root_dir): 99 os.symlink(os.path.join(root_dir, path), os.path.join(dest_dir, path)) 100 101 # Note: Will update import_root_map. 102 def _map_fetched_remote_source(self, go_remote_lib, gopath, all_known_remote_libs, 103 resolved_remote_libs, undeclared_deps, import_root_map): 104 # See if we've computed the remote import paths for this rev of this lib in a previous run. 105 remote_import_paths_cache = os.path.join(os.path.dirname(gopath), 'remote_import_paths.txt') 106 if os.path.exists(remote_import_paths_cache): 107 with open(remote_import_paths_cache, 'r') as fp: 108 remote_import_paths = [line.decode('utf8').strip() for line in fp.readlines()] 109 else: 110 remote_import_paths = self._get_remote_import_paths(go_remote_lib.import_path, 111 gopath=gopath) 112 with safe_concurrent_creation(remote_import_paths_cache) as safe_path: 113 with open(safe_path, 'w') as fp: 114 for path in remote_import_paths: 115 fp.write('{}\n'.format(path).encode('utf8')) 116 117 for remote_import_path in remote_import_paths: 118 remote_root = import_root_map.get(remote_import_path) 119 if remote_root is None: 120 fetcher = self._get_fetcher(remote_import_path) 121 remote_root = fetcher.root() 122 import_root_map[remote_import_path] = remote_root 123 124 spec_path = os.path.join(go_remote_lib.target_base, remote_root) 125 126 package_path = GoRemoteLibrary.remote_package_path(remote_root, remote_import_path) 127 target_name = package_path or os.path.basename(remote_root) 128 129 address = Address(spec_path, target_name) 130 if not any(address == lib.address for lib in all_known_remote_libs): 131 try: 132 # If we've already resolved a package from this remote root, its ok to define an 133 # implicit synthetic remote target for all other packages in the same remote root. 134 same_remote_libs = [lib for lib in all_known_remote_libs 135 if spec_path == lib.address.spec_path] 136 implicit_ok = any(same_remote_libs) 137 138 # If we're creating a synthetic remote target, we should pin it to the same 139 # revision as the rest of the library. 140 rev = None 141 if implicit_ok: 142 rev = same_remote_libs[0].rev 143 144 remote_lib = self._resolve(go_remote_lib, address, package_path, rev, implicit_ok) 145 resolved_remote_libs.add(remote_lib) 146 all_known_remote_libs.add(remote_lib) 147 except self.UndeclaredRemoteLibError as e: 148 undeclared_deps[go_remote_lib].add((remote_import_path, e.address)) 149 self.context.build_graph.inject_dependency(go_remote_lib.address, address) 150 151 def _transitive_download_remote_libs(self, go_remote_libs, all_known_remote_libs=None): 152 """Recursively attempt to resolve / download all remote transitive deps of go_remote_libs. 153 154 Returns a dict<GoRemoteLibrary, set<tuple<str, Address>>>, which maps a go remote library to a 155 set of unresolved remote dependencies, each dependency expressed as a tuple containing the 156 the import path of the dependency and the expected target address. If all transitive 157 dependencies were successfully resolved, returns an empty dict. 
158 159 Downloads as many invalidated transitive dependencies as possible, and returns as many 160 undeclared dependencies as possible. However, because the dependencies of a remote library 161 can only be determined _after_ it has been downloaded, a transitive dependency of an undeclared 162 remote library will never be detected. 163 164 Because go_remote_libraries do not declare dependencies (rather, they are inferred), injects 165 all successfully resolved transitive dependencies into the build graph. 166 """ 167 if not go_remote_libs: 168 return {} 169 170 all_known_remote_libs = all_known_remote_libs or set() 171 all_known_remote_libs.update(go_remote_libs) 172 173 resolved_remote_libs = set() 174 undeclared_deps = defaultdict(set) 175 go_remote_lib_src = self.context.products.get_data('go_remote_lib_src') 176 177 with self.invalidated(go_remote_libs) as invalidation_check: 178 # We accumulate mappings from import path to root (e.g., example.org/pkg/foo -> example.org) 179 # from all targets in this map, so that targets share as much of this information as 180 # possible during this run. 181 # We cache these mappings. to avoid repeatedly fetching them over the network via the 182 # meta tag protocol. Note that this mapping is unversioned: It's defined as "whatever meta 183 # tag is currently being served at the relevant URL", which is inherently independent of 184 # the rev of the remote library. We (and the entire Go ecosystem) assume that this mapping 185 # never changes, in practice. 186 import_root_map = {} 187 for vt in invalidation_check.all_vts: 188 import_root_map_path = os.path.join(vt.results_dir, 'pkg_root_map.txt') 189 import_root_map.update(self._read_import_root_map_file(import_root_map_path)) 190 191 go_remote_lib = vt.target 192 gopath = os.path.join(vt.results_dir, 'gopath') 193 if not vt.valid: 194 self._fetch_pkg(gopath, go_remote_lib.import_path, go_remote_lib.rev) 195 # _map_fetched_remote_source() will modify import_root_map. 196 self._map_fetched_remote_source(go_remote_lib, gopath, all_known_remote_libs, 197 resolved_remote_libs, undeclared_deps, import_root_map) 198 go_remote_lib_src[go_remote_lib] = os.path.join(gopath, 'src', go_remote_lib.import_path) 199 200 # Cache the mapping against this target's key. Note that because we accumulate 201 # mappings across targets, the file may contain mappings that this target doesn't 202 # need or care about (although it will contain all the mappings this target does need). 203 # But the file is small, so there's no harm in this redundancy. 204 self._write_import_root_map_file(import_root_map_path, import_root_map) 205 206 # Recurse after the invalidated block, so the libraries we downloaded are now "valid" 207 # and thus we don't try to download a library twice. 208 trans_undeclared_deps = self._transitive_download_remote_libs(resolved_remote_libs, 209 all_known_remote_libs) 210 undeclared_deps.update(trans_undeclared_deps) 211 212 return undeclared_deps 213 214 class UndeclaredRemoteLibError(Exception): 215 def __init__(self, address): 216 self.address = address 217 218 def _resolve(self, dependent_remote_lib, address, pkg, rev, implicit_ok): 219 """Resolves the GoRemoteLibrary at `address` defining the given `pkg`. 220 221 If `implicit_ok` is True, then a GoRemoteLibrary to own `pkg` is always synthesized if it does 222 not already exist; otherwise the address must already exist in the build graph (a BUILD file 223 must exist on disk that owns the given `pkg` and declares a `rev` for it). 
224 225 :param dependent_remote_lib: The remote library that depends on the remote `pkg`. 226 :type: :class:`pants.contrib.go.targets.go_remote_library.GoRemoteLibrary` 227 :param address: The address of the remote library that should own `pkg`. 228 :type: :class:`pants.base.Address` 229 :param string pkg: The remote package path whose owning target needs to be resolved. 230 :param string rev: The revision of the package. None defaults to `master`. 231 :param bool implicit_ok: `False` if the given `address` must be defined in a BUILD file on disk; 232 otherwise a remote library to own `pkg` will always be created and 233 returned. 234 :returns: The resulting resolved remote library after injecting it in the build graph. 235 :rtype: :class:`pants.contrib.go.targets.go_remote_library.GoRemoteLibrary` 236 :raises: :class:`GoFetch.UndeclaredRemoteLibError`: If no BUILD file exists for the remote root 237 `pkg` lives in. 238 """ 239 try: 240 self.context.build_graph.inject_address_closure(address) 241 except AddressLookupError: 242 if implicit_ok: 243 self.context.add_new_target(address=address, 244 target_base=dependent_remote_lib.target_base, 245 target_type=GoRemoteLibrary, 246 pkg=pkg, 247 rev=rev) 248 else: 249 raise self.UndeclaredRemoteLibError(address) 250 return self.context.build_graph.get_target(address) 251 252 @staticmethod 253 def _is_relative(import_path): 254 return import_path.startswith('.') 255 256 def _get_remote_import_paths(self, pkg, gopath=None): 257 """Returns the remote import paths declared by the given remote Go `pkg`. 258 259 NB: This only includes production code imports, no test code imports. 260 """ 261 import_listing = self.import_oracle.list_imports(pkg, gopath=gopath) 262 return [imp for imp in import_listing.imports 263 if (not self.import_oracle.is_go_internal_import(imp) and 264 # We assume relative imports are local to the package and skip attempts to 265 # recursively resolve them. 266 not self._is_relative(imp))] 267 268 @staticmethod 269 def _read_import_root_map_file(path): 270 """Reads a file mapping import paths to roots (e.g., example.org/pkg/foo -> example.org).""" 271 if os.path.exists(path): 272 with open(path, 'r') as fp: 273 return dict({import_path: root for import_path, root in 274 (x.decode('utf8').strip().split('\t') for x in fp.readlines())}) 275 else: 276 return {} 277 278 @staticmethod 279 def _write_import_root_map_file(path, import_root_map): 280 """Writes a file mapping import paths to roots.""" 281 with safe_concurrent_creation(path) as safe_path: 282 with open(safe_path, 'w') as fp: 283 for import_path, root in sorted(import_root_map.items()): 284 fp.write('{}\t{}\n'.format(import_path, root).encode('utf8')) 285 [end of contrib/go/src/python/pants/contrib/go/tasks/go_fetch.py] [start of src/python/pants/engine/build_files.py] 1 # coding=utf-8 2 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 
4 5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function, 6 unicode_literals, with_statement) 7 8 import collections 9 from os.path import dirname, join 10 11 import six 12 13 from pants.base.project_tree import Dir 14 from pants.base.specs import (AscendantAddresses, DescendantAddresses, SiblingAddresses, 15 SingleAddress) 16 from pants.build_graph.address import Address, BuildFileAddress 17 from pants.engine.addressable import (AddressableDescriptor, BuildFileAddresses, Collection, 18 Exactly, TypeConstraintError) 19 from pants.engine.fs import FilesContent, PathGlobs, Snapshot 20 from pants.engine.mapper import AddressFamily, AddressMap, AddressMapper, ResolveError 21 from pants.engine.objects import Locatable, SerializableFactory, Validatable 22 from pants.engine.rules import RootRule, SingletonRule, TaskRule, rule 23 from pants.engine.selectors import Select, SelectDependencies, SelectProjection 24 from pants.engine.struct import Struct 25 from pants.util.objects import datatype 26 27 28 _SPECS_CONSTRAINT = Exactly(SingleAddress, 29 SiblingAddresses, 30 DescendantAddresses, 31 AscendantAddresses) 32 33 34 class ResolvedTypeMismatchError(ResolveError): 35 """Indicates a resolved object was not of the expected type.""" 36 37 38 def _key_func(entry): 39 key, value = entry 40 return key 41 42 43 class BuildDirs(datatype('BuildDirs', ['dependencies'])): 44 """A list of Stat objects for directories containing build files.""" 45 46 47 class BuildFiles(datatype('BuildFiles', ['files_content'])): 48 """The FileContents of BUILD files in some directory""" 49 50 51 class BuildFileGlobs(datatype('BuildFilesGlobs', ['path_globs'])): 52 """A wrapper around PathGlobs that are known to match a build file pattern.""" 53 54 55 @rule(BuildFiles, 56 [SelectProjection(FilesContent, PathGlobs, 'path_globs', BuildFileGlobs)]) 57 def build_files(files_content): 58 return BuildFiles(files_content) 59 60 61 @rule(BuildFileGlobs, [Select(AddressMapper), Select(Dir)]) 62 def buildfile_path_globs_for_dir(address_mapper, directory): 63 patterns = address_mapper.build_patterns 64 return BuildFileGlobs(PathGlobs.create(directory.path, include=patterns, exclude=())) 65 66 67 @rule(AddressFamily, [Select(AddressMapper), Select(Dir), Select(BuildFiles)]) 68 def parse_address_family(address_mapper, path, build_files): 69 """Given the contents of the build files in one directory, return an AddressFamily. 70 71 The AddressFamily may be empty, but it will not be None. 72 """ 73 files_content = build_files.files_content.dependencies 74 if not files_content: 75 raise ResolveError('Directory "{}" does not contain build files.'.format(path)) 76 address_maps = [] 77 paths = (f.path for f in files_content) 78 ignored_paths = set(address_mapper.build_ignore_patterns.match_files(paths)) 79 for filecontent_product in files_content: 80 if filecontent_product.path in ignored_paths: 81 continue 82 address_maps.append(AddressMap.parse(filecontent_product.path, 83 filecontent_product.content, 84 address_mapper.parser)) 85 return AddressFamily.create(path.path, address_maps) 86 87 88 class UnhydratedStruct(datatype('UnhydratedStruct', ['address', 'struct', 'dependencies'])): 89 """A product type that holds a Struct which has not yet been hydrated. 90 91 A Struct counts as "hydrated" when all of its members (which are not themselves dependencies 92 lists) have been resolved from the graph. 
This means that hydrating a struct is eager in terms 93 of inline addressable fields, but lazy in terms of the complete graph walk represented by 94 the `dependencies` field of StructWithDeps. 95 """ 96 97 def __eq__(self, other): 98 if type(self) != type(other): 99 return NotImplemented 100 return self.struct == other.struct 101 102 def __ne__(self, other): 103 return not (self == other) 104 105 def __hash__(self): 106 return hash(self.struct) 107 108 109 def _raise_did_you_mean(address_family, name): 110 names = [a.target_name for a in address_family.addressables] 111 possibilities = '\n '.join(':{}'.format(target_name) for target_name in sorted(names)) 112 raise ResolveError('"{}" was not found in namespace "{}". ' 113 'Did you mean one of:\n {}' 114 .format(name, address_family.namespace, possibilities)) 115 116 117 @rule(UnhydratedStruct, 118 [Select(AddressMapper), 119 SelectProjection(AddressFamily, Dir, 'spec_path', Address), 120 Select(Address)]) 121 def resolve_unhydrated_struct(address_mapper, address_family, address): 122 """Given an Address and its AddressFamily, resolve an UnhydratedStruct. 123 124 Recursively collects any embedded addressables within the Struct, but will not walk into a 125 dependencies field, since those are requested explicitly by tasks using SelectDependencies. 126 """ 127 128 struct = address_family.addressables.get(address) 129 addresses = address_family.addressables 130 if not struct or address not in addresses: 131 _raise_did_you_mean(address_family, address.target_name) 132 133 dependencies = [] 134 def maybe_append(outer_key, value): 135 if isinstance(value, six.string_types): 136 if outer_key != 'dependencies': 137 dependencies.append(Address.parse(value, 138 relative_to=address.spec_path, 139 subproject_roots=address_mapper.subproject_roots)) 140 elif isinstance(value, Struct): 141 collect_dependencies(value) 142 143 def collect_dependencies(item): 144 for key, value in sorted(item._asdict().items(), key=_key_func): 145 if not AddressableDescriptor.is_addressable(item, key): 146 continue 147 if isinstance(value, collections.MutableMapping): 148 for _, v in sorted(value.items(), key=_key_func): 149 maybe_append(key, v) 150 elif isinstance(value, collections.MutableSequence): 151 for v in value: 152 maybe_append(key, v) 153 else: 154 maybe_append(key, value) 155 156 collect_dependencies(struct) 157 158 return UnhydratedStruct( 159 filter(lambda build_address: build_address == address, addresses)[0], struct, dependencies) 160 161 162 def hydrate_struct(address_mapper, unhydrated_struct, dependencies): 163 """Hydrates a Struct from an UnhydratedStruct and its satisfied embedded addressable deps. 164 165 Note that this relies on the guarantee that DependenciesNode provides dependencies in the 166 order they were requested. 167 """ 168 address = unhydrated_struct.address 169 struct = unhydrated_struct.struct 170 171 def maybe_consume(outer_key, value): 172 if isinstance(value, six.string_types): 173 if outer_key == 'dependencies': 174 # Don't recurse into the dependencies field of a Struct, since those will be explicitly 175 # requested by tasks. But do ensure that their addresses are absolute, since we're 176 # about to lose the context in which they were declared. 
177 value = Address.parse(value, 178 relative_to=address.spec_path, 179 subproject_roots=address_mapper.subproject_roots) 180 else: 181 value = dependencies[maybe_consume.idx] 182 maybe_consume.idx += 1 183 elif isinstance(value, Struct): 184 value = consume_dependencies(value) 185 return value 186 # NB: Some pythons throw an UnboundLocalError for `idx` if it is a simple local variable. 187 maybe_consume.idx = 0 188 189 # 'zip' the previously-requested dependencies back together as struct fields. 190 def consume_dependencies(item, args=None): 191 hydrated_args = args or {} 192 for key, value in sorted(item._asdict().items(), key=_key_func): 193 if not AddressableDescriptor.is_addressable(item, key): 194 hydrated_args[key] = value 195 continue 196 197 if isinstance(value, collections.MutableMapping): 198 container_type = type(value) 199 hydrated_args[key] = container_type((k, maybe_consume(key, v)) 200 for k, v in sorted(value.items(), key=_key_func)) 201 elif isinstance(value, collections.MutableSequence): 202 container_type = type(value) 203 hydrated_args[key] = container_type(maybe_consume(key, v) for v in value) 204 else: 205 hydrated_args[key] = maybe_consume(key, value) 206 return _hydrate(type(item), address.spec_path, **hydrated_args) 207 208 return consume_dependencies(struct, args={'address': address}) 209 210 211 def _hydrate(item_type, spec_path, **kwargs): 212 # If the item will be Locatable, inject the spec_path. 213 if issubclass(item_type, Locatable): 214 kwargs['spec_path'] = spec_path 215 216 try: 217 item = item_type(**kwargs) 218 except TypeConstraintError as e: 219 raise ResolvedTypeMismatchError(e) 220 221 # Let factories replace the hydrated object. 222 if isinstance(item, SerializableFactory): 223 item = item.create() 224 225 # Finally make sure objects that can self-validate get a chance to do so. 226 if isinstance(item, Validatable): 227 item.validate() 228 229 return item 230 231 232 @rule(BuildFileAddresses, 233 [Select(AddressMapper), 234 SelectDependencies(AddressFamily, BuildDirs, field_types=(Dir,)), 235 Select(_SPECS_CONSTRAINT)]) 236 def addresses_from_address_families(address_mapper, address_families, spec): 237 """Given a list of AddressFamilies and a Spec, return matching Addresses. 238 239 Raises a ResolveError if: 240 - there were no matching AddressFamilies, or 241 - the Spec matches no addresses for SingleAddresses. 
242 """ 243 244 def raise_if_empty_address_families(): 245 if not address_families: 246 raise ResolveError('Path "{}" contains no BUILD files.'.format(spec.directory)) 247 248 def exclude_address(address): 249 if address_mapper.exclude_patterns: 250 address_str = address.spec 251 return any(p.search(address_str) is not None for p in address_mapper.exclude_patterns) 252 return False 253 254 def all_included_addresses(): 255 return (a 256 for af in address_families 257 for a in af.addressables.keys() 258 if not exclude_address(a)) 259 260 if type(spec) in (DescendantAddresses, SiblingAddresses): 261 raise_if_empty_address_families() 262 addresses = tuple(all_included_addresses()) 263 elif type(spec) is SingleAddress: 264 raise_if_empty_address_families() 265 addresses = tuple(a for a in all_included_addresses() if a.target_name == spec.name) 266 if not addresses and len(address_families) == 1: 267 _raise_did_you_mean(address_families[0], spec.name) 268 elif type(spec) is AscendantAddresses: 269 addresses = tuple(all_included_addresses()) 270 else: 271 raise ValueError('Unrecognized Spec type: {}'.format(spec)) 272 273 return BuildFileAddresses(addresses) 274 275 276 @rule(BuildDirs, [Select(AddressMapper), Select(Snapshot)]) 277 def filter_build_dirs(address_mapper, snapshot): 278 """Given a Snapshot matching a build pattern, return parent directories as BuildDirs.""" 279 dirnames = set(dirname(f.stat.path) for f in snapshot.files) 280 ignored_dirnames = address_mapper.build_ignore_patterns.match_files('{}/'.format(dirname) for dirname in dirnames) 281 ignored_dirnames = set(d.rstrip('/') for d in ignored_dirnames) 282 return BuildDirs(tuple(Dir(d) for d in dirnames if d not in ignored_dirnames)) 283 284 285 @rule(PathGlobs, [Select(AddressMapper), Select(_SPECS_CONSTRAINT)]) 286 def spec_to_globs(address_mapper, spec): 287 """Given a Spec object, return a PathGlobs object for the build files that it matches.""" 288 if type(spec) is DescendantAddresses: 289 directory = spec.directory 290 patterns = [join('**', pattern) for pattern in address_mapper.build_patterns] 291 elif type(spec) in (SiblingAddresses, SingleAddress): 292 directory = spec.directory 293 patterns = address_mapper.build_patterns 294 elif type(spec) is AscendantAddresses: 295 directory = '' 296 patterns = [ 297 join(f, pattern) 298 for pattern in address_mapper.build_patterns 299 for f in _recursive_dirname(spec.directory) 300 ] 301 else: 302 raise ValueError('Unrecognized Spec type: {}'.format(spec)) 303 return PathGlobs.create(directory, include=patterns, exclude=[]) 304 305 306 def _recursive_dirname(f): 307 """Given a relative path like 'a/b/c/d', yield all ascending path components like: 308 309 'a/b/c/d' 310 'a/b/c' 311 'a/b' 312 'a' 313 '' 314 """ 315 while f: 316 yield f 317 f = dirname(f) 318 yield '' 319 320 321 # TODO: This is a bit of a lie: `Struct` is effectively abstract, so this collection 322 # will contain subclasses of `Struct` for the symbol table types. These APIs need more 323 # polish before we make them public: see #4535 in particular. 324 HydratedStructs = Collection.of(Struct) 325 326 327 BuildFilesCollection = Collection.of(BuildFiles) 328 329 330 def create_graph_rules(address_mapper, symbol_table): 331 """Creates tasks used to parse Structs from BUILD files. 332 333 :param address_mapper_key: The subject key for an AddressMapper instance. 334 :param symbol_table: A SymbolTable instance to provide symbols for Address lookups. 
335 """ 336 symbol_table_constraint = symbol_table.constraint() 337 return [ 338 TaskRule(BuildFilesCollection, 339 [SelectDependencies(BuildFiles, BuildDirs, field_types=(Dir,))], 340 BuildFilesCollection), 341 # A singleton to provide the AddressMapper. 342 SingletonRule(AddressMapper, address_mapper), 343 # Support for resolving Structs from Addresses. 344 TaskRule( 345 symbol_table_constraint, 346 [Select(AddressMapper), 347 Select(UnhydratedStruct), 348 SelectDependencies(symbol_table_constraint, UnhydratedStruct, field_types=(Address,))], 349 hydrate_struct 350 ), 351 resolve_unhydrated_struct, 352 TaskRule( 353 HydratedStructs, 354 [SelectDependencies(symbol_table_constraint, 355 BuildFileAddresses, 356 field_types=(Address,), 357 field='addresses')], 358 HydratedStructs 359 ), 360 # BUILD file parsing. 361 parse_address_family, 362 build_files, 363 buildfile_path_globs_for_dir, 364 # Spec handling: locate directories that contain build files, and request 365 # AddressFamilies for each of them. 366 addresses_from_address_families, 367 filter_build_dirs, 368 spec_to_globs, 369 # Root rules representing parameters that might be provided via root subjects. 370 RootRule(Address), 371 RootRule(BuildFileAddress), 372 RootRule(BuildFileAddresses), 373 RootRule(AscendantAddresses), 374 RootRule(DescendantAddresses), 375 RootRule(SiblingAddresses), 376 RootRule(SingleAddress), 377 ] 378 [end of src/python/pants/engine/build_files.py] [start of src/python/pants/engine/legacy/graph.py] 1 # coding=utf-8 2 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). 3 # Licensed under the Apache License, Version 2.0 (see LICENSE). 4 5 from __future__ import (absolute_import, division, generators, nested_scopes, print_function, 6 unicode_literals, with_statement) 7 8 import logging 9 from contextlib import contextmanager 10 11 from twitter.common.collections import OrderedSet 12 13 from pants.backend.jvm.targets.jvm_app import Bundle, JvmApp 14 from pants.base.exceptions import TargetDefinitionException 15 from pants.base.parse_context import ParseContext 16 from pants.base.specs import SingleAddress 17 from pants.base.target_roots import ChangedTargetRoots, LiteralTargetRoots 18 from pants.build_graph.address import Address 19 from pants.build_graph.address_lookup_error import AddressLookupError 20 from pants.build_graph.build_graph import BuildGraph 21 from pants.build_graph.remote_sources import RemoteSources 22 from pants.engine.addressable import BuildFileAddresses, Collection 23 from pants.engine.fs import PathGlobs, Snapshot 24 from pants.engine.legacy.structs import BundleAdaptor, BundlesField, SourcesField, TargetAdaptor 25 from pants.engine.mapper import ResolveError 26 from pants.engine.rules import TaskRule, rule 27 from pants.engine.selectors import Select, SelectDependencies, SelectProjection, SelectTransitive 28 from pants.source.wrapped_globs import EagerFilesetWithSpec, FilesetRelPathWrapper 29 from pants.util.dirutil import fast_relpath 30 from pants.util.objects import datatype 31 32 33 logger = logging.getLogger(__name__) 34 35 36 def target_types_from_symbol_table(symbol_table): 37 """Given a LegacySymbolTable, return the concrete target types constructed for each alias.""" 38 aliases = symbol_table.aliases() 39 target_types = dict(aliases.target_types) 40 for alias, factory in aliases.target_macro_factories.items(): 41 target_type, = factory.target_types 42 target_types[alias] = target_type 43 return target_types 44 45 46 class _DestWrapper(datatype('DestWrapper', 
['target_types'])): 47 """A wrapper for dest field of RemoteSources target. 48 49 This is only used when instantiating RemoteSources target. 50 """ 51 52 53 class LegacyBuildGraph(BuildGraph): 54 """A directed acyclic graph of Targets and dependencies. Not necessarily connected. 55 56 This implementation is backed by a Scheduler that is able to resolve TransitiveHydratedTargets. 57 """ 58 59 class InvalidCommandLineSpecError(AddressLookupError): 60 """Raised when command line spec is not a valid directory""" 61 62 @classmethod 63 def create(cls, scheduler, symbol_table): 64 """Construct a graph given a Scheduler, Engine, and a SymbolTable class.""" 65 return cls(scheduler, target_types_from_symbol_table(symbol_table)) 66 67 def __init__(self, scheduler, target_types): 68 """Construct a graph given a Scheduler, Engine, and a SymbolTable class. 69 70 :param scheduler: A Scheduler that is configured to be able to resolve TransitiveHydratedTargets. 71 :param symbol_table: A SymbolTable instance used to instantiate Target objects. Must match 72 the symbol table installed in the scheduler (TODO: see comment in `_instantiate_target`). 73 """ 74 self._scheduler = scheduler 75 self._target_types = target_types 76 super(LegacyBuildGraph, self).__init__() 77 78 def clone_new(self): 79 """Returns a new BuildGraph instance of the same type and with the same __init__ params.""" 80 return LegacyBuildGraph(self._scheduler, self._target_types) 81 82 def _index(self, roots): 83 """Index from the given roots into the storage provided by the base class. 84 85 This is an additive operation: any existing connections involving these nodes are preserved. 86 """ 87 all_addresses = set() 88 new_targets = list() 89 90 # Index the ProductGraph. 91 for product in roots: 92 # We have a successful TransitiveHydratedTargets value (for a particular input Spec). 93 for hydrated_target in product.dependencies: 94 target_adaptor = hydrated_target.adaptor 95 address = target_adaptor.address 96 all_addresses.add(address) 97 if address not in self._target_by_address: 98 new_targets.append(self._index_target(target_adaptor)) 99 100 # Once the declared dependencies of all targets are indexed, inject their 101 # additional "traversable_(dependency_)?specs". 102 deps_to_inject = OrderedSet() 103 addresses_to_inject = set() 104 def inject(target, dep_spec, is_dependency): 105 address = Address.parse(dep_spec, relative_to=target.address.spec_path) 106 if not any(address == t.address for t in target.dependencies): 107 addresses_to_inject.add(address) 108 if is_dependency: 109 deps_to_inject.add((target.address, address)) 110 111 self.apply_injectables(new_targets) 112 113 for target in new_targets: 114 for spec in target.compute_dependency_specs(payload=target.payload): 115 inject(target, spec, is_dependency=True) 116 117 for spec in target.compute_injectable_specs(payload=target.payload): 118 inject(target, spec, is_dependency=False) 119 120 # Inject all addresses, then declare injected dependencies. 121 self.inject_addresses_closure(addresses_to_inject) 122 for target_address, dep_address in deps_to_inject: 123 self.inject_dependency(dependent=target_address, dependency=dep_address) 124 125 return all_addresses 126 127 def _index_target(self, target_adaptor): 128 """Instantiate the given TargetAdaptor, index it in the graph, and return a Target.""" 129 # Instantiate the target. 
130 address = target_adaptor.address 131 target = self._instantiate_target(target_adaptor) 132 self._target_by_address[address] = target 133 134 for dependency in target_adaptor.dependencies: 135 if dependency in self._target_dependencies_by_address[address]: 136 raise self.DuplicateAddressError( 137 'Addresses in dependencies must be unique. ' 138 "'{spec}' is referenced more than once by target '{target}'." 139 .format(spec=dependency.spec, target=address.spec) 140 ) 141 # Link its declared dependencies, which will be indexed independently. 142 self._target_dependencies_by_address[address].add(dependency) 143 self._target_dependees_by_address[dependency].add(address) 144 return target 145 146 def _instantiate_target(self, target_adaptor): 147 """Given a TargetAdaptor struct previously parsed from a BUILD file, instantiate a Target. 148 149 TODO: This assumes that the SymbolTable used for parsing matches the SymbolTable passed 150 to this graph. Would be good to make that more explicit, but it might be better to nuke 151 the Target subclassing pattern instead, and lean further into the "configuration composition" 152 model explored in the `exp` package. 153 """ 154 target_cls = self._target_types[target_adaptor.type_alias] 155 try: 156 # Pop dependencies, which were already consumed during construction. 157 kwargs = target_adaptor.kwargs() 158 kwargs.pop('dependencies') 159 160 # Instantiate. 161 if target_cls is JvmApp: 162 return self._instantiate_jvm_app(kwargs) 163 elif target_cls is RemoteSources: 164 return self._instantiate_remote_sources(kwargs) 165 return target_cls(build_graph=self, **kwargs) 166 except TargetDefinitionException: 167 raise 168 except Exception as e: 169 raise TargetDefinitionException( 170 target_adaptor.address, 171 'Failed to instantiate Target with type {}: {}'.format(target_cls, e)) 172 173 def _instantiate_jvm_app(self, kwargs): 174 """For JvmApp target, convert BundleAdaptor to BundleProps.""" 175 parse_context = ParseContext(kwargs['address'].spec_path, dict()) 176 bundleprops_factory = Bundle(parse_context) 177 kwargs['bundles'] = [ 178 bundleprops_factory.create_bundle_props(bundle) 179 for bundle in kwargs['bundles'] 180 ] 181 182 return JvmApp(build_graph=self, **kwargs) 183 184 def _instantiate_remote_sources(self, kwargs): 185 """For RemoteSources target, convert "dest" field to its real target type.""" 186 kwargs['dest'] = _DestWrapper((self._target_types[kwargs['dest']],)) 187 return RemoteSources(build_graph=self, **kwargs) 188 189 def inject_synthetic_target(self, 190 address, 191 target_type, 192 dependencies=None, 193 derived_from=None, 194 **kwargs): 195 target = target_type(name=address.target_name, 196 address=address, 197 build_graph=self, 198 **kwargs) 199 self.inject_target(target, 200 dependencies=dependencies, 201 derived_from=derived_from, 202 synthetic=True) 203 204 def inject_address_closure(self, address): 205 self.inject_addresses_closure([address]) 206 207 def inject_addresses_closure(self, addresses): 208 addresses = set(addresses) - set(self._target_by_address.keys()) 209 if not addresses: 210 return 211 matched = set(self._inject_specs([SingleAddress(a.spec_path, a.target_name) for a in addresses])) 212 missing = addresses - matched 213 if missing: 214 # TODO: When SingleAddress resolution converted from projection of a directory 215 # and name to a match for PathGlobs, we lost our useful AddressLookupError formatting. 
216 raise AddressLookupError('Addresses were not matched: {}'.format(missing)) 217 218 def inject_roots_closure(self, target_roots, fail_fast=None): 219 if type(target_roots) is ChangedTargetRoots: 220 for address in self._inject_addresses(target_roots.addresses): 221 yield address 222 elif type(target_roots) is LiteralTargetRoots: 223 for address in self._inject_specs(target_roots.specs): 224 yield address 225 else: 226 raise ValueError('Unrecognized TargetRoots type: `{}`.'.format(target_roots)) 227 228 def inject_specs_closure(self, specs, fail_fast=None): 229 # Request loading of these specs. 230 for address in self._inject_specs(specs): 231 yield address 232 233 def resolve_address(self, address): 234 if not self.contains_address(address): 235 self.inject_address_closure(address) 236 return self.get_target(address) 237 238 @contextmanager 239 def _resolve_context(self): 240 try: 241 yield 242 except ResolveError as e: 243 # NB: ResolveError means that a target was not found, which is a common user facing error. 244 raise AddressLookupError(str(e)) 245 except Exception as e: 246 raise AddressLookupError( 247 'Build graph construction failed: {} {}'.format(type(e).__name__, str(e)) 248 ) 249 250 def _inject_addresses(self, subjects): 251 """Injects targets into the graph for each of the given `Address` objects, and then yields them. 252 253 TODO: See #4533 about unifying "collection of literal Addresses" with the `Spec` types, which 254 would avoid the need for the independent `_inject_addresses` and `_inject_specs` codepaths. 255 """ 256 logger.debug('Injecting addresses to %s: %s', self, subjects) 257 with self._resolve_context(): 258 addresses = tuple(subjects) 259 hydrated_targets = self._scheduler.product_request(TransitiveHydratedTargets, 260 [BuildFileAddresses(addresses)]) 261 262 self._index(hydrated_targets) 263 264 yielded_addresses = set() 265 for address in subjects: 266 if address not in yielded_addresses: 267 yielded_addresses.add(address) 268 yield address 269 270 def _inject_specs(self, subjects): 271 """Injects targets into the graph for each of the given `Spec` objects. 272 273 Yields the resulting addresses. 274 """ 275 logger.debug('Injecting specs to %s: %s', self, subjects) 276 with self._resolve_context(): 277 product_results = self._scheduler.products_request([TransitiveHydratedTargets, BuildFileAddresses], 278 subjects) 279 280 self._index(product_results[TransitiveHydratedTargets]) 281 282 yielded_addresses = set() 283 for subject, product in zip(subjects, product_results[BuildFileAddresses]): 284 if not product.dependencies: 285 raise self.InvalidCommandLineSpecError( 286 'Spec {} does not match any targets.'.format(subject)) 287 for address in product.dependencies: 288 if address not in yielded_addresses: 289 yielded_addresses.add(address) 290 yield address 291 292 293 class HydratedTarget(datatype('HydratedTarget', ['address', 'adaptor', 'dependencies'])): 294 """A wrapper for a fully hydrated TargetAdaptor object. 295 296 Transitive graph walks collect ordered sets of TransitiveHydratedTargets which involve a huge amount 297 of hashing: we implement eq/hash via direct usage of an Address field to speed that up. 
298 """ 299 300 @property 301 def addresses(self): 302 return self.dependencies 303 304 def __eq__(self, other): 305 if type(self) != type(other): 306 return False 307 return self.address == other.address 308 309 def __ne__(self, other): 310 return not (self == other) 311 312 def __hash__(self): 313 return hash(self.address) 314 315 316 class TransitiveHydratedTargets(Collection.of(HydratedTarget)): 317 """A transitive set of HydratedTarget objects.""" 318 319 320 class HydratedTargets(Collection.of(HydratedTarget)): 321 """An intransitive set of HydratedTarget objects.""" 322 323 324 @rule(TransitiveHydratedTargets, [SelectTransitive(HydratedTarget, 325 BuildFileAddresses, 326 field_types=(Address,), 327 field='addresses')]) 328 def transitive_hydrated_targets(targets): 329 """Recursively requests HydratedTarget instances, which will result in an eager, transitive graph walk.""" 330 return TransitiveHydratedTargets(targets) 331 332 333 @rule(HydratedTargets, [SelectDependencies(HydratedTarget, 334 BuildFileAddresses, 335 field_types=(Address,), 336 field='addresses')]) 337 def hydrated_targets(targets): 338 """Requests HydratedTarget instances.""" 339 return HydratedTargets(targets) 340 341 342 class HydratedField(datatype('HydratedField', ['name', 'value'])): 343 """A wrapper for a fully constructed replacement kwarg for a HydratedTarget.""" 344 345 346 def hydrate_target(target_adaptor, hydrated_fields): 347 """Construct a HydratedTarget from a TargetAdaptor and hydrated versions of its adapted fields.""" 348 # Hydrate the fields of the adaptor and re-construct it. 349 kwargs = target_adaptor.kwargs() 350 for field in hydrated_fields: 351 kwargs[field.name] = field.value 352 return HydratedTarget(target_adaptor.address, 353 TargetAdaptor(**kwargs), 354 tuple(target_adaptor.dependencies)) 355 356 357 def _eager_fileset_with_spec(spec_path, filespec, snapshot, include_dirs=False): 358 fds = snapshot.path_stats if include_dirs else snapshot.files 359 files = tuple(fast_relpath(fd.path, spec_path) for fd in fds) 360 361 relpath_adjusted_filespec = FilesetRelPathWrapper.to_filespec(filespec['globs'], spec_path) 362 if filespec.has_key('exclude'): 363 relpath_adjusted_filespec['exclude'] = [FilesetRelPathWrapper.to_filespec(e['globs'], spec_path) 364 for e in filespec['exclude']] 365 366 return EagerFilesetWithSpec(spec_path, 367 relpath_adjusted_filespec, 368 files=files, 369 files_hash=snapshot.fingerprint) 370 371 372 @rule(HydratedField, 373 [Select(SourcesField), 374 SelectProjection(Snapshot, PathGlobs, 'path_globs', SourcesField)]) 375 def hydrate_sources(sources_field, snapshot): 376 """Given a SourcesField and a Snapshot for its path_globs, create an EagerFilesetWithSpec.""" 377 fileset_with_spec = _eager_fileset_with_spec(sources_field.address.spec_path, 378 sources_field.filespecs, 379 snapshot) 380 return HydratedField(sources_field.arg, fileset_with_spec) 381 382 383 @rule(HydratedField, 384 [Select(BundlesField), 385 SelectDependencies(Snapshot, BundlesField, 'path_globs_list', field_types=(PathGlobs,))]) 386 def hydrate_bundles(bundles_field, snapshot_list): 387 """Given a BundlesField and a Snapshot for each of its filesets create a list of BundleAdaptors.""" 388 bundles = [] 389 zipped = zip(bundles_field.bundles, 390 bundles_field.filespecs_list, 391 snapshot_list) 392 for bundle, filespecs, snapshot in zipped: 393 spec_path = bundles_field.address.spec_path 394 kwargs = bundle.kwargs() 395 # NB: We `include_dirs=True` because bundle filesets frequently specify directories 
in order 396 # to trigger a (deprecated) default inclusion of their recursive contents. See the related 397 # deprecation in `pants.backend.jvm.tasks.bundle_create`. 398 kwargs['fileset'] = _eager_fileset_with_spec(getattr(bundle, 'rel_path', spec_path), 399 filespecs, 400 snapshot, 401 include_dirs=True) 402 bundles.append(BundleAdaptor(**kwargs)) 403 return HydratedField('bundles', bundles) 404 405 406 def create_legacy_graph_tasks(symbol_table): 407 """Create tasks to recursively parse the legacy graph.""" 408 symbol_table_constraint = symbol_table.constraint() 409 return [ 410 transitive_hydrated_targets, 411 hydrated_targets, 412 TaskRule( 413 HydratedTarget, 414 [Select(symbol_table_constraint), 415 SelectDependencies(HydratedField, 416 symbol_table_constraint, 417 'field_adaptors', 418 field_types=(SourcesField, BundlesField,))], 419 hydrate_target 420 ), 421 hydrate_sources, 422 hydrate_bundles, 423 ] 424 [end of src/python/pants/engine/legacy/graph.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
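The `HydratedTarget` docstring in the listing above notes that transitive graph walks involve a huge amount of hashing, which is why equality and hashing are delegated to the address field alone. Below is a minimal, standalone sketch of that pattern; the class and field names are illustrative, not the real pants types.

```python
from collections import namedtuple


class FakeHydratedTarget(namedtuple('FakeHydratedTarget', ['address', 'adaptor', 'dependencies'])):
  """Illustrative stand-in: identity is the address, not the (large) adaptor payload."""

  def __eq__(self, other):
    if type(self) != type(other):
      return False
    return self.address == other.address

  def __ne__(self, other):
    return not (self == other)

  def __hash__(self):
    return hash(self.address)


a = FakeHydratedTarget('src/foo:foo', {'sources': ('a.py',)}, ())
b = FakeHydratedTarget('src/foo:foo', {'sources': ('a.py', 'b.py')}, ())
# Same address, so a set collapses them without ever comparing the payloads.
assert len({a, b}) == 1
```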
pantsbuild/pants
2d4e61753d3896ee969ba239cb0a66c36d781f84
Engine executions for multiple SingleAddresses are redundant

In many cases we're aware of literal addresses that may or may not exist (i.e., they came in from the command line or via `--target-spec-file`). Currently those are represented as independent `SingleAddress` objects, each of which ends up being a root in the engine. While the work will not be completely redundant, requesting transitive `HydratedTargets` for each of many independent roots will result in many independent dependency sets, which then need to be merged for output. To preserve semantics while improving performance, we should request transitive `HydratedTargets` for a single container holding all of the roots, which will dedupe across the entire set.
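A minimal sketch of the deduplication being asked for, assuming a plain in-memory dependency mapping rather than the real engine API: walking all roots through one shared visited set expands each shared dependency once, instead of once per root. The function and parameter names here are illustrative only.

```python
from collections import deque


def shared_transitive_closure(roots, dependencies_of):
  """Breadth-first walk of all roots with a single shared visited set.

  `dependencies_of` maps an address to its direct dependencies; both names
  are illustrative, not part of the pants engine API.
  """
  visited = set()
  to_visit = deque(roots)
  while to_visit:
    address = to_visit.popleft()
    if address in visited:
      continue
    visited.add(address)
    to_visit.extend(dependencies_of.get(address, ()))
  return visited


deps = {'a': ('common',), 'b': ('common',), 'common': ()}
# 'common' is expanded once for the whole set of roots, not once per root.
assert shared_transitive_closure(['a', 'b'], deps) == {'a', 'b', 'common'}
```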
Hm... it's also possible that by fixing #4358 (i.e., removing `SelectTransitive`) we would allow for more memoization here, which would mean that the API did not need to change. Looking at this one today.
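One way to read that hint, sketched outside the engine with a hand-rolled memo dict (the `DEPS` table and function name are illustrative only): if the per-node transitive expansion is itself memoized, independent roots that share subgraphs reuse the cached sub-results instead of re-walking them, so a per-root API could in principle stay as-is.

```python
DEPS = {'a': ('common',), 'b': ('common',), 'common': ()}
_memo = {}


def node_closure(address):
  """Transitive closure of one address, memoized per node (assumes an acyclic graph)."""
  if address not in _memo:
    result = {address}
    for dep in DEPS[address]:
      result |= node_closure(dep)
    _memo[address] = frozenset(result)
  return _memo[address]


# Both roots reuse the cached closure of 'common'; only three entries are ever computed.
assert node_closure('a') | node_closure('b') == {'a', 'b', 'common'}
assert set(_memo) == {'a', 'b', 'common'}
```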
2018-03-15T23:23:58Z
<patch> diff --git a/src/python/pants/bin/engine_initializer.py b/src/python/pants/bin/engine_initializer.py --- a/src/python/pants/bin/engine_initializer.py +++ b/src/python/pants/bin/engine_initializer.py @@ -11,7 +11,7 @@ from pants.base.build_environment import get_buildroot, get_scm from pants.base.file_system_project_tree import FileSystemProjectTree from pants.base.target_roots import ChangedTargetRoots, LiteralTargetRoots -from pants.engine.build_files import BuildFileAddresses, create_graph_rules +from pants.engine.build_files import BuildFileAddresses, Specs, create_graph_rules from pants.engine.fs import create_fs_rules from pants.engine.isolated_process import create_process_rules from pants.engine.legacy.address_mapper import LegacyAddressMapper @@ -84,7 +84,7 @@ def warm_product_graph(self, target_roots): if type(target_roots) is ChangedTargetRoots: subjects = [BuildFileAddresses(target_roots.addresses)] elif type(target_roots) is LiteralTargetRoots: - subjects = target_roots.specs + subjects = [Specs(tuple(target_roots.specs))] else: raise ValueError('Unexpected TargetRoots type: `{}`.'.format(target_roots)) request = self.scheduler.execution_request([TransitiveHydratedTargets], subjects) diff --git a/src/python/pants/engine/build_files.py b/src/python/pants/engine/build_files.py --- a/src/python/pants/engine/build_files.py +++ b/src/python/pants/engine/build_files.py @@ -12,25 +12,21 @@ from pants.base.project_tree import Dir from pants.base.specs import (AscendantAddresses, DescendantAddresses, SiblingAddresses, - SingleAddress) + SingleAddress, Spec) from pants.build_graph.address import Address, BuildFileAddress +from pants.build_graph.address_lookup_error import AddressLookupError from pants.engine.addressable import (AddressableDescriptor, BuildFileAddresses, Collection, - Exactly, TypeConstraintError) + TypeConstraintError) from pants.engine.fs import FilesContent, PathGlobs, Snapshot from pants.engine.mapper import AddressFamily, AddressMap, AddressMapper, ResolveError from pants.engine.objects import Locatable, SerializableFactory, Validatable from pants.engine.rules import RootRule, SingletonRule, TaskRule, rule from pants.engine.selectors import Select, SelectDependencies, SelectProjection from pants.engine.struct import Struct +from pants.util.dirutil import fast_relpath_optional from pants.util.objects import datatype -_SPECS_CONSTRAINT = Exactly(SingleAddress, - SiblingAddresses, - DescendantAddresses, - AscendantAddresses) - - class ResolvedTypeMismatchError(ResolveError): """Indicates a resolved object was not of the expected type.""" @@ -52,6 +48,10 @@ class BuildFileGlobs(datatype('BuildFilesGlobs', ['path_globs'])): """A wrapper around PathGlobs that are known to match a build file pattern.""" +class Specs(Collection.of(Spec)): + """A collection of Spec subclasses.""" + + @rule(BuildFiles, [SelectProjection(FilesContent, PathGlobs, 'path_globs', BuildFileGlobs)]) def build_files(files_content): @@ -60,8 +60,10 @@ def build_files(files_content): @rule(BuildFileGlobs, [Select(AddressMapper), Select(Dir)]) def buildfile_path_globs_for_dir(address_mapper, directory): - patterns = address_mapper.build_patterns - return BuildFileGlobs(PathGlobs.create(directory.path, include=patterns, exclude=())) + patterns = tuple(join(directory.path, p) for p in address_mapper.build_patterns) + return BuildFileGlobs(PathGlobs.create('', + include=patterns, + exclude=address_mapper.build_ignore_patterns)) @rule(AddressFamily, [Select(AddressMapper), Select(Dir), 
Select(BuildFiles)]) @@ -74,11 +76,7 @@ def parse_address_family(address_mapper, path, build_files): if not files_content: raise ResolveError('Directory "{}" does not contain build files.'.format(path)) address_maps = [] - paths = (f.path for f in files_content) - ignored_paths = set(address_mapper.build_ignore_patterns.match_files(paths)) for filecontent_product in files_content: - if filecontent_product.path in ignored_paths: - continue address_maps.append(AddressMap.parse(filecontent_product.path, filecontent_product.content, address_mapper.parser)) @@ -232,18 +230,24 @@ def _hydrate(item_type, spec_path, **kwargs): @rule(BuildFileAddresses, [Select(AddressMapper), SelectDependencies(AddressFamily, BuildDirs, field_types=(Dir,)), - Select(_SPECS_CONSTRAINT)]) -def addresses_from_address_families(address_mapper, address_families, spec): - """Given a list of AddressFamilies and a Spec, return matching Addresses. + Select(Specs)]) +def addresses_from_address_families(address_mapper, address_families, specs): + """Given a list of AddressFamilies matching a list of Specs, return matching Addresses. - Raises a ResolveError if: + Raises a AddressLookupError if: - there were no matching AddressFamilies, or - the Spec matches no addresses for SingleAddresses. """ - def raise_if_empty_address_families(): - if not address_families: - raise ResolveError('Path "{}" contains no BUILD files.'.format(spec.directory)) + # NB: `@memoized` does not work on local functions. + def by_directory(): + if by_directory.cached is None: + by_directory.cached = {af.namespace: af for af in address_families} + return by_directory.cached + by_directory.cached = None + + def raise_empty_address_family(spec): + raise ResolveError('Path "{}" contains no BUILD files.'.format(spec.directory)) def exclude_address(address): if address_mapper.exclude_patterns: @@ -251,24 +255,49 @@ def exclude_address(address): return any(p.search(address_str) is not None for p in address_mapper.exclude_patterns) return False - def all_included_addresses(): - return (a - for af in address_families - for a in af.addressables.keys() - if not exclude_address(a)) - - if type(spec) in (DescendantAddresses, SiblingAddresses): - raise_if_empty_address_families() - addresses = tuple(all_included_addresses()) - elif type(spec) is SingleAddress: - raise_if_empty_address_families() - addresses = tuple(a for a in all_included_addresses() if a.target_name == spec.name) - if not addresses and len(address_families) == 1: - _raise_did_you_mean(address_families[0], spec.name) - elif type(spec) is AscendantAddresses: - addresses = tuple(all_included_addresses()) - else: - raise ValueError('Unrecognized Spec type: {}'.format(spec)) + addresses = [] + included = set() + def include(address_families, predicate=None): + matched = False + for af in address_families: + for a in af.addressables.keys(): + if a in included: + continue + if not exclude_address(a) and (predicate is None or predicate(a)): + matched = True + addresses.append(a) + included.add(a) + return matched + + for spec in specs.dependencies: + if type(spec) is DescendantAddresses: + matched = include( + af + for af in address_families + if fast_relpath_optional(af.namespace, spec.directory) is not None + ) + if not matched: + raise AddressLookupError( + 'Spec {} does not match any targets.'.format(spec)) + elif type(spec) is SiblingAddresses: + address_family = by_directory().get(spec.directory) + if not address_family: + raise_empty_address_family(spec) + include([address_family]) + elif type(spec) 
is SingleAddress: + address_family = by_directory().get(spec.directory) + if not address_family: + raise_empty_address_family(spec) + if not include([address_family], predicate=lambda a: a.target_name == spec.name): + _raise_did_you_mean(address_family, spec.name) + elif type(spec) is AscendantAddresses: + include( + af + for af in address_families + if fast_relpath_optional(spec.directory, af.namespace) is not None + ) + else: + raise ValueError('Unrecognized Spec type: {}'.format(spec)) return BuildFileAddresses(addresses) @@ -277,30 +306,28 @@ def all_included_addresses(): def filter_build_dirs(address_mapper, snapshot): """Given a Snapshot matching a build pattern, return parent directories as BuildDirs.""" dirnames = set(dirname(f.stat.path) for f in snapshot.files) - ignored_dirnames = address_mapper.build_ignore_patterns.match_files('{}/'.format(dirname) for dirname in dirnames) - ignored_dirnames = set(d.rstrip('/') for d in ignored_dirnames) - return BuildDirs(tuple(Dir(d) for d in dirnames if d not in ignored_dirnames)) - - -@rule(PathGlobs, [Select(AddressMapper), Select(_SPECS_CONSTRAINT)]) -def spec_to_globs(address_mapper, spec): - """Given a Spec object, return a PathGlobs object for the build files that it matches.""" - if type(spec) is DescendantAddresses: - directory = spec.directory - patterns = [join('**', pattern) for pattern in address_mapper.build_patterns] - elif type(spec) in (SiblingAddresses, SingleAddress): - directory = spec.directory - patterns = address_mapper.build_patterns - elif type(spec) is AscendantAddresses: - directory = '' - patterns = [ - join(f, pattern) - for pattern in address_mapper.build_patterns - for f in _recursive_dirname(spec.directory) - ] - else: - raise ValueError('Unrecognized Spec type: {}'.format(spec)) - return PathGlobs.create(directory, include=patterns, exclude=[]) + return BuildDirs(tuple(Dir(d) for d in dirnames)) + + +@rule(PathGlobs, [Select(AddressMapper), Select(Specs)]) +def spec_to_globs(address_mapper, specs): + """Given a Spec object, return a PathGlobs object for the build files that it matches. 
+ """ + patterns = set() + for spec in specs.dependencies: + if type(spec) is DescendantAddresses: + patterns.update(join(spec.directory, '**', pattern) + for pattern in address_mapper.build_patterns) + elif type(spec) in (SiblingAddresses, SingleAddress): + patterns.update(join(spec.directory, pattern) + for pattern in address_mapper.build_patterns) + elif type(spec) is AscendantAddresses: + patterns.update(join(f, pattern) + for pattern in address_mapper.build_patterns + for f in _recursive_dirname(spec.directory)) + else: + raise ValueError('Unrecognized Spec type: {}'.format(spec)) + return PathGlobs.create('', include=patterns, exclude=address_mapper.build_ignore_patterns) def _recursive_dirname(f): @@ -370,8 +397,5 @@ def create_graph_rules(address_mapper, symbol_table): RootRule(Address), RootRule(BuildFileAddress), RootRule(BuildFileAddresses), - RootRule(AscendantAddresses), - RootRule(DescendantAddresses), - RootRule(SiblingAddresses), - RootRule(SingleAddress), + RootRule(Specs), ] diff --git a/src/python/pants/engine/legacy/address_mapper.py b/src/python/pants/engine/legacy/address_mapper.py --- a/src/python/pants/engine/legacy/address_mapper.py +++ b/src/python/pants/engine/legacy/address_mapper.py @@ -10,9 +10,10 @@ from pants.base.build_file import BuildFile from pants.base.specs import DescendantAddresses, SiblingAddresses +from pants.build_graph.address_lookup_error import AddressLookupError from pants.build_graph.address_mapper import AddressMapper from pants.engine.addressable import BuildFileAddresses -from pants.engine.build_files import BuildFilesCollection +from pants.engine.build_files import BuildFilesCollection, Specs from pants.engine.mapper import ResolveError from pants.engine.nodes import Throw from pants.util.dirutil import fast_relpath @@ -32,16 +33,12 @@ def __init__(self, scheduler, build_root): self._build_root = build_root def scan_build_files(self, base_path): - request = self._scheduler.execution_request([BuildFilesCollection], [(DescendantAddresses(base_path))]) - - result = self._scheduler.execute(request) - if result.error: - raise result.error + specs = (DescendantAddresses(base_path),) + build_files_collection, = self._scheduler.product_request(BuildFilesCollection, [Specs(specs)]) build_files_set = set() - for _, state in result.root_products: - for build_files in state.value.dependencies: - build_files_set.update(f.path for f in build_files.files_content.dependencies) + for build_files in build_files_collection.dependencies: + build_files_set.update(f.path for f in build_files.files_content.dependencies) return build_files_set @@ -68,30 +65,34 @@ def addresses_in_spec_path(self, spec_path): def scan_specs(self, specs, fail_fast=True): return self._internal_scan_specs(specs, fail_fast=fail_fast, missing_is_fatal=True) + def _specs_string(self, specs): + return ', '.join(s.to_spec_string() for s in specs) + def _internal_scan_specs(self, specs, fail_fast=True, missing_is_fatal=True): - request = self._scheduler.execution_request([BuildFileAddresses], specs) + # TODO: This should really use `product_request`, but on the other hand, we need to + # deprecate the entire `AddressMapper` interface anyway. See #4769. 
+ request = self._scheduler.execution_request([BuildFileAddresses], [Specs(tuple(specs))]) result = self._scheduler.execute(request) if result.error: raise self.BuildFileScanError(str(result.error)) - - addresses = set() - for (spec, _), state in result.root_products: - if isinstance(state, Throw): - if isinstance(state.exc, ResolveError): - if missing_is_fatal: - raise self.BuildFileScanError( - 'Spec `{}` does not match any targets.\n{}'.format(spec.to_spec_string(), str(state.exc))) - else: - # NB: ignore Throws containing ResolveErrors because they are due to missing targets / files - continue + (_, state), = result.root_products + + if isinstance(state, Throw): + if isinstance(state.exc, (AddressLookupError, ResolveError)): + if missing_is_fatal: + raise self.BuildFileScanError( + 'Spec `{}` does not match any targets.\n{}'.format( + self._specs_string(specs), str(state.exc))) else: - raise self.BuildFileScanError(str(state.exc)) - elif missing_is_fatal and not state.value.dependencies: - raise self.BuildFileScanError( - 'Spec `{}` does not match any targets.'.format(spec.to_spec_string())) - - addresses.update(state.value.dependencies) - return addresses + # NB: ignore Throws containing ResolveErrors because they are due to missing targets / files + return set() + else: + raise self.BuildFileScanError(str(state.exc)) + elif missing_is_fatal and not state.value.dependencies: + raise self.BuildFileScanError( + 'Spec `{}` does not match any targets.'.format(self._specs_string(specs))) + + return set(state.value.dependencies) def scan_addresses(self, root=None): if root: diff --git a/src/python/pants/engine/legacy/graph.py b/src/python/pants/engine/legacy/graph.py --- a/src/python/pants/engine/legacy/graph.py +++ b/src/python/pants/engine/legacy/graph.py @@ -6,6 +6,7 @@ unicode_literals, with_statement) import logging +from collections import deque from contextlib import contextmanager from twitter.common.collections import OrderedSet @@ -20,11 +21,11 @@ from pants.build_graph.build_graph import BuildGraph from pants.build_graph.remote_sources import RemoteSources from pants.engine.addressable import BuildFileAddresses, Collection +from pants.engine.build_files import Specs from pants.engine.fs import PathGlobs, Snapshot from pants.engine.legacy.structs import BundleAdaptor, BundlesField, SourcesField, TargetAdaptor -from pants.engine.mapper import ResolveError from pants.engine.rules import TaskRule, rule -from pants.engine.selectors import Select, SelectDependencies, SelectProjection, SelectTransitive +from pants.engine.selectors import Select, SelectDependencies, SelectProjection from pants.source.wrapped_globs import EagerFilesetWithSpec, FilesetRelPathWrapper from pants.util.dirutil import fast_relpath from pants.util.objects import datatype @@ -56,9 +57,6 @@ class LegacyBuildGraph(BuildGraph): This implementation is backed by a Scheduler that is able to resolve TransitiveHydratedTargets. """ - class InvalidCommandLineSpecError(AddressLookupError): - """Raised when command line spec is not a valid directory""" - @classmethod def create(cls, scheduler, symbol_table): """Construct a graph given a Scheduler, Engine, and a SymbolTable class.""" @@ -79,7 +77,7 @@ def clone_new(self): """Returns a new BuildGraph instance of the same type and with the same __init__ params.""" return LegacyBuildGraph(self._scheduler, self._target_types) - def _index(self, roots): + def _index(self, hydrated_targets): """Index from the given roots into the storage provided by the base class. 
This is an additive operation: any existing connections involving these nodes are preserved. @@ -88,14 +86,12 @@ def _index(self, roots): new_targets = list() # Index the ProductGraph. - for product in roots: - # We have a successful TransitiveHydratedTargets value (for a particular input Spec). - for hydrated_target in product.dependencies: - target_adaptor = hydrated_target.adaptor - address = target_adaptor.address - all_addresses.add(address) - if address not in self._target_by_address: - new_targets.append(self._index_target(target_adaptor)) + for hydrated_target in hydrated_targets: + target_adaptor = hydrated_target.adaptor + address = target_adaptor.address + all_addresses.add(address) + if address not in self._target_by_address: + new_targets.append(self._index_target(target_adaptor)) # Once the declared dependencies of all targets are indexed, inject their # additional "traversable_(dependency_)?specs". @@ -208,12 +204,8 @@ def inject_addresses_closure(self, addresses): addresses = set(addresses) - set(self._target_by_address.keys()) if not addresses: return - matched = set(self._inject_specs([SingleAddress(a.spec_path, a.target_name) for a in addresses])) - missing = addresses - matched - if missing: - # TODO: When SingleAddress resolution converted from projection of a directory - # and name to a match for PathGlobs, we lost our useful AddressLookupError formatting. - raise AddressLookupError('Addresses were not matched: {}'.format(missing)) + for _ in self._inject_specs([SingleAddress(a.spec_path, a.target_name) for a in addresses]): + pass def inject_roots_closure(self, target_roots, fail_fast=None): if type(target_roots) is ChangedTargetRoots: @@ -239,9 +231,6 @@ def resolve_address(self, address): def _resolve_context(self): try: yield - except ResolveError as e: - # NB: ResolveError means that a target was not found, which is a common user facing error. - raise AddressLookupError(str(e)) except Exception as e: raise AddressLookupError( 'Build graph construction failed: {} {}'.format(type(e).__name__, str(e)) @@ -250,16 +239,15 @@ def _resolve_context(self): def _inject_addresses(self, subjects): """Injects targets into the graph for each of the given `Address` objects, and then yields them. - TODO: See #4533 about unifying "collection of literal Addresses" with the `Spec` types, which - would avoid the need for the independent `_inject_addresses` and `_inject_specs` codepaths. + TODO: See #5606 about undoing the split between `_inject_addresses` and `_inject_specs`. 
""" logger.debug('Injecting addresses to %s: %s', self, subjects) with self._resolve_context(): addresses = tuple(subjects) - hydrated_targets = self._scheduler.product_request(TransitiveHydratedTargets, - [BuildFileAddresses(addresses)]) + thts, = self._scheduler.product_request(TransitiveHydratedTargets, + [BuildFileAddresses(addresses)]) - self._index(hydrated_targets) + self._index(thts.closure) yielded_addresses = set() for address in subjects: @@ -274,20 +262,14 @@ def _inject_specs(self, subjects): """ logger.debug('Injecting specs to %s: %s', self, subjects) with self._resolve_context(): - product_results = self._scheduler.products_request([TransitiveHydratedTargets, BuildFileAddresses], - subjects) + specs = tuple(subjects) + thts, = self._scheduler.product_request(TransitiveHydratedTargets, + [Specs(specs)]) - self._index(product_results[TransitiveHydratedTargets]) + self._index(thts.closure) - yielded_addresses = set() - for subject, product in zip(subjects, product_results[BuildFileAddresses]): - if not product.dependencies: - raise self.InvalidCommandLineSpecError( - 'Spec {} does not match any targets.'.format(subject)) - for address in product.dependencies: - if address not in yielded_addresses: - yielded_addresses.add(address) - yield address + for hydrated_target in thts.roots: + yield hydrated_target.address class HydratedTarget(datatype('HydratedTarget', ['address', 'adaptor', 'dependencies'])): @@ -313,21 +295,50 @@ def __hash__(self): return hash(self.address) -class TransitiveHydratedTargets(Collection.of(HydratedTarget)): - """A transitive set of HydratedTarget objects.""" +class TransitiveHydratedTarget(datatype('TransitiveHydratedTarget', ['root', 'dependencies'])): + """A recursive structure wrapping a HydratedTarget root and TransitiveHydratedTarget deps.""" + + +class TransitiveHydratedTargets(datatype('TransitiveHydratedTargets', ['roots', 'closure'])): + """A set of HydratedTarget roots, and their transitive, flattened, de-duped closure.""" class HydratedTargets(Collection.of(HydratedTarget)): """An intransitive set of HydratedTarget objects.""" -@rule(TransitiveHydratedTargets, [SelectTransitive(HydratedTarget, - BuildFileAddresses, - field_types=(Address,), - field='addresses')]) -def transitive_hydrated_targets(targets): - """Recursively requests HydratedTarget instances, which will result in an eager, transitive graph walk.""" - return TransitiveHydratedTargets(targets) +@rule(TransitiveHydratedTargets, [SelectDependencies(TransitiveHydratedTarget, + BuildFileAddresses, + field_types=(Address,), + field='addresses')]) +def transitive_hydrated_targets(transitive_hydrated_targets): + """Kicks off recursion on expansion of TransitiveHydratedTarget objects. + + The TransitiveHydratedTarget struct represents a structure-shared graph, which we walk + and flatten here. The engine memoizes the computation of TransitiveHydratedTarget, so + when multiple TransitiveHydratedTargets objects are being constructed for multiple + roots, their structure will be shared. 
+ """ + closure = set() + to_visit = deque(transitive_hydrated_targets) + + while to_visit: + tht = to_visit.popleft() + if tht.root in closure: + continue + closure.add(tht.root) + to_visit.extend(tht.dependencies) + + return TransitiveHydratedTargets(tuple(tht.root for tht in transitive_hydrated_targets), closure) + + +@rule(TransitiveHydratedTarget, [Select(HydratedTarget), + SelectDependencies(TransitiveHydratedTarget, + HydratedTarget, + field_types=(Address,), + field='addresses')]) +def transitive_hydrated_target(root, dependencies): + return TransitiveHydratedTarget(root, dependencies) @rule(HydratedTargets, [SelectDependencies(HydratedTarget, @@ -408,6 +419,7 @@ def create_legacy_graph_tasks(symbol_table): symbol_table_constraint = symbol_table.constraint() return [ transitive_hydrated_targets, + transitive_hydrated_target, hydrated_targets, TaskRule( HydratedTarget, diff --git a/src/python/pants/engine/legacy/source_mapper.py b/src/python/pants/engine/legacy/source_mapper.py --- a/src/python/pants/engine/legacy/source_mapper.py +++ b/src/python/pants/engine/legacy/source_mapper.py @@ -12,6 +12,7 @@ from pants.base.specs import AscendantAddresses, SingleAddress from pants.build_graph.address import parse_spec from pants.build_graph.source_mapper import SourceMapper +from pants.engine.build_files import Specs from pants.engine.legacy.address_mapper import LegacyAddressMapper from pants.engine.legacy.graph import HydratedTargets from pants.source.filespec import any_matches_filespec @@ -79,14 +80,14 @@ def iter_target_addresses_for_sources(self, sources): """Bulk, iterable form of `target_addresses_for_source`.""" # Walk up the buildroot looking for targets that would conceivably claim changed sources. sources_set = set(sources) - subjects = [AscendantAddresses(directory=d) for d in self._unique_dirs_for_sources(sources_set)] + specs = tuple(AscendantAddresses(directory=d) for d in self._unique_dirs_for_sources(sources_set)) # Uniqify all transitive hydrated targets. hydrated_target_to_address = {} - for hydrated_targets in self._scheduler.product_request(HydratedTargets, subjects): - for hydrated_target in hydrated_targets.dependencies: - if hydrated_target not in hydrated_target_to_address: - hydrated_target_to_address[hydrated_target] = hydrated_target.adaptor.address + hydrated_targets, = self._scheduler.product_request(HydratedTargets, [Specs(specs)]) + for hydrated_target in hydrated_targets.dependencies: + if hydrated_target not in hydrated_target_to_address: + hydrated_target_to_address[hydrated_target] = hydrated_target.adaptor.address for hydrated_target, legacy_address in six.iteritems(hydrated_target_to_address): # Handle BUILD files. diff --git a/src/python/pants/engine/mapper.py b/src/python/pants/engine/mapper.py --- a/src/python/pants/engine/mapper.py +++ b/src/python/pants/engine/mapper.py @@ -8,9 +8,6 @@ import re from collections import OrderedDict -from pathspec import PathSpec -from pathspec.patterns.gitwildmatch import GitWildMatchPattern - from pants.build_graph.address import BuildFileAddress from pants.engine.objects import Serializable from pants.util.memo import memoized_property @@ -184,8 +181,8 @@ def __init__(self, :param list exclude_target_regexps: A list of regular expressions for excluding targets. 
""" self.parser = parser - self.build_patterns = build_patterns or (b'BUILD', b'BUILD.*') - self.build_ignore_patterns = PathSpec.from_lines(GitWildMatchPattern, build_ignore_patterns or []) + self.build_patterns = tuple(build_patterns or [b'BUILD', b'BUILD.*']) + self.build_ignore_patterns = tuple(build_ignore_patterns or []) self._exclude_target_regexps = exclude_target_regexps or [] self.exclude_patterns = [re.compile(pattern) for pattern in self._exclude_target_regexps] self.subproject_roots = subproject_roots or [] diff --git a/src/python/pants/engine/native.py b/src/python/pants/engine/native.py --- a/src/python/pants/engine/native.py +++ b/src/python/pants/engine/native.py @@ -162,7 +162,6 @@ void tasks_add_select(Tasks*, TypeConstraint); void tasks_add_select_variant(Tasks*, TypeConstraint, Buffer); void tasks_add_select_dependencies(Tasks*, TypeConstraint, TypeConstraint, Buffer, TypeIdBuffer); -void tasks_add_select_transitive(Tasks*, TypeConstraint, TypeConstraint, Buffer, TypeIdBuffer); void tasks_add_select_projection(Tasks*, TypeConstraint, TypeId, Buffer, TypeConstraint); void tasks_task_end(Tasks*); void tasks_singleton_add(Tasks*, Value, TypeConstraint); diff --git a/src/python/pants/engine/scheduler.py b/src/python/pants/engine/scheduler.py --- a/src/python/pants/engine/scheduler.py +++ b/src/python/pants/engine/scheduler.py @@ -20,8 +20,8 @@ from pants.engine.native import Function, TypeConstraint, TypeId from pants.engine.nodes import Return, State, Throw from pants.engine.rules import RuleIndex, SingletonRule, TaskRule -from pants.engine.selectors import (Select, SelectDependencies, SelectProjection, SelectTransitive, - SelectVariant, constraint_for) +from pants.engine.selectors import (Select, SelectDependencies, SelectProjection, SelectVariant, + constraint_for) from pants.engine.struct import HasProducts, Variants from pants.util.contextutil import temporary_file_path from pants.util.objects import datatype @@ -208,12 +208,6 @@ def _register_task(self, output_constraint, rule): self._to_constraint(selector.dep_product), self._to_utf8_buf(selector.field), self._to_ids_buf(selector.field_types)) - elif selector_type is SelectTransitive: - self._native.lib.tasks_add_select_transitive(self._tasks, - product_constraint, - self._to_constraint(selector.dep_product), - self._to_utf8_buf(selector.field), - self._to_ids_buf(selector.field_types)) elif selector_type is SelectProjection: self._native.lib.tasks_add_select_projection(self._tasks, self._to_constraint(selector.product), diff --git a/src/python/pants/engine/selectors.py b/src/python/pants/engine/selectors.py --- a/src/python/pants/engine/selectors.py +++ b/src/python/pants/engine/selectors.py @@ -127,26 +127,6 @@ def __repr__(self): field_types_portion) -class SelectTransitive(datatype('Transitive', ['product', 'dep_product', 'field', 'field_types']), - Selector): - """A variation of `SelectDependencies` that is used to recursively request a product. - - One use case is to eagerly walk the entire graph. - - It first selects for dep_product then recursively requests products with the `product` type, expanding each by its - `field`. - - Requires that both the dep_product and product have the same field `field` that contains the same `field_types`. 
- """ - - DEFAULT_FIELD = 'dependencies' - - optional = False - - def __new__(cls, product, dep_product, field=DEFAULT_FIELD, field_types=tuple()): - return super(SelectTransitive, cls).__new__(cls, product, dep_product, field, field_types) - - class SelectProjection(datatype('Projection', ['product', 'projected_subject', 'field', 'input_product']), Selector): """Selects a field of the given Subject to produce a Subject, Product dependency from. diff --git a/src/python/pants/scm/change_calculator.py b/src/python/pants/scm/change_calculator.py --- a/src/python/pants/scm/change_calculator.py +++ b/src/python/pants/scm/change_calculator.py @@ -13,7 +13,7 @@ from pants.base.build_environment import get_scm from pants.base.specs import DescendantAddresses from pants.build_graph.address import Address -from pants.engine.build_files import HydratedStructs +from pants.engine.build_files import HydratedStructs, Specs from pants.engine.legacy.graph import target_types_from_symbol_table from pants.engine.legacy.source_mapper import EngineSourceMapper from pants.goal.workspace import ScmWorkspace @@ -147,9 +147,10 @@ def iter_changed_target_addresses(self, changed_request): # For dependee finding, we need to parse all build files to collect all structs. But we # don't need to fully hydrate targets (ie, expand their source globs), and so we use # the `HydratedStructs` product. See #4535 for more info. + specs = (DescendantAddresses(''),) adaptor_iter = (t for targets in self._scheduler.product_request(HydratedStructs, - [DescendantAddresses('')]) + [Specs(specs)]) for t in targets.dependencies) graph = _DependentGraph.from_iterable(target_types_from_symbol_table(self._symbol_table), adaptor_iter) </patch>
[]
[]
pantsbuild__pants-16808
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> mypy output is limited to 80 characters wide and no colours **Describe the bug** Mypy has various nifty affordances to make it easier to understand its errors, e.g. colours and (with `--pretty`) printing the relevant line. It seems like pants currently gets in the way of these, presumably because mypy is detecting the non-tty output buffering. For instance, https://gist.github.com/ce2e03d8788848ad6db27df6d66cde6d ```python a_really_really_really_long_variable_name = 12345678901234567890 + "this makes the line really long" ``` Running mypy via `./pants check ::` gives output: ![image](https://user-images.githubusercontent.com/1203825/183544558-131305c6-5eee-4ae0-a49a-bb9af6e6832f.png) While running it normally `mypy --pretty broken.py` (with the same version 0.961 installed via pip) gives output: ![image](https://user-images.githubusercontent.com/1203825/183544573-365dcce1-650d-4551-9f84-b47928cd4c75.png) The first output is worse: - line truncation despite (much) wider terminal (if the main error message was longer, it'd be wrapped, too, which makes it extra hard to read) - lack of colours/bold **Pants version** - 2.12.0 - 2.14.0.dev5 **OS** macOS **Additional info** Exact reproduction steps: ```shell git clone https://gist.github.com/ce2e03d8788848ad6db27df6d66cde6d cd ce2e03d8788848ad6db27df6d66cde6d ./pants check :: mypy --version mypy --pretty broken.py ``` </issue> <code> [start of README.md] 1 # Pants Build System 2 3 Pants is a scalable build system for _monorepos_: codebases containing 4 multiple projects, often using multiple programming languages and frameworks, 5 in a single unified code repository. 6 7 Some noteworthy features include: 8 9 * Explicit dependency modeling. 10 * Fine-grained invalidation. 11 * Shared result caching. 12 * Concurrent execution. 13 * Remote execution. 14 * Unified interface for multiple tools and languages. 15 * Extensibility and customizability via a plugin API. 16 17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/). 18 19 # Requirements 20 21 To run Pants, you need: 22 23 * Linux or macOS. 24 * Python 3.7+ discoverable on your `PATH`. 25 * A C compiler, system headers and Python headers (to compile native Python modules). 26 * Internet access (so that Pants can fully bootstrap itself). 27 28 # Credits 29 30 We release to [PyPI](https://pypi.org/pypi) 31 32 [![version](https://img.shields.io/pypi/v/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 33 [![license](https://img.shields.io/pypi/l/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 34 35 <img width="150" height="61" src="https://uploads-ssl.webflow.com/5ac3c046c82724970fc60918/5c019d917bba312af7553b49_MacStadium-developerlogo.png"> 36 [end of README.md] [start of src/python/pants/backend/python/typecheck/mypy/rules.py] 1 # Copyright 2020 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 
3 4 from __future__ import annotations 5 6 import dataclasses 7 import itertools 8 from dataclasses import dataclass 9 from hashlib import sha256 10 from textwrap import dedent 11 from typing import Iterable, Optional, Tuple 12 13 import packaging 14 15 from pants.backend.python.subsystems.setup import PythonSetup 16 from pants.backend.python.target_types import PythonSourceField 17 from pants.backend.python.typecheck.mypy.skip_field import SkipMyPyField 18 from pants.backend.python.typecheck.mypy.subsystem import ( 19 MyPy, 20 MyPyConfigFile, 21 MyPyFirstPartyPlugins, 22 ) 23 from pants.backend.python.util_rules import partition, pex_from_targets 24 from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints 25 from pants.backend.python.util_rules.pex import ( 26 Pex, 27 PexRequest, 28 PexResolveInfo, 29 VenvPex, 30 VenvPexProcess, 31 ) 32 from pants.backend.python.util_rules.pex_from_targets import RequirementsPexRequest 33 from pants.backend.python.util_rules.python_sources import ( 34 PythonSourceFiles, 35 PythonSourceFilesRequest, 36 ) 37 from pants.base.build_root import BuildRoot 38 from pants.core.goals.check import REPORT_DIR, CheckRequest, CheckResult, CheckResults 39 from pants.core.util_rules.source_files import SourceFiles, SourceFilesRequest 40 from pants.core.util_rules.system_binaries import CpBinary, MkdirBinary, MvBinary 41 from pants.engine.collection import Collection 42 from pants.engine.fs import CreateDigest, Digest, FileContent, MergeDigests, RemovePrefix 43 from pants.engine.process import FallibleProcessResult, Process 44 from pants.engine.rules import Get, MultiGet, collect_rules, rule, rule_helper 45 from pants.engine.target import CoarsenedTargets, FieldSet, Target 46 from pants.engine.unions import UnionRule 47 from pants.util.logging import LogLevel 48 from pants.util.ordered_set import FrozenOrderedSet, OrderedSet 49 from pants.util.strutil import pluralize, shell_quote 50 51 52 @dataclass(frozen=True) 53 class MyPyFieldSet(FieldSet): 54 required_fields = (PythonSourceField,) 55 56 sources: PythonSourceField 57 58 @classmethod 59 def opt_out(cls, tgt: Target) -> bool: 60 return tgt.get(SkipMyPyField).value 61 62 63 @dataclass(frozen=True) 64 class MyPyPartition: 65 root_field_sets: FrozenOrderedSet[MyPyFieldSet] 66 closure: FrozenOrderedSet[Target] 67 resolve_description: str | None 68 interpreter_constraints: InterpreterConstraints 69 70 def description(self) -> str: 71 ics = str(sorted(str(c) for c in self.interpreter_constraints)) 72 return f"{self.resolve_description}, {ics}" if self.resolve_description else ics 73 74 75 class MyPyPartitions(Collection[MyPyPartition]): 76 pass 77 78 79 class MyPyRequest(CheckRequest): 80 field_set_type = MyPyFieldSet 81 name = MyPy.options_scope 82 83 84 @rule_helper 85 async def _generate_argv( 86 mypy: MyPy, 87 *, 88 pex: VenvPex, 89 cache_dir: str, 90 venv_python: str, 91 file_list_path: str, 92 python_version: Optional[str], 93 ) -> Tuple[str, ...]: 94 args = [pex.pex.argv0, f"--python-executable={venv_python}", *mypy.args] 95 if mypy.config: 96 args.append(f"--config-file={mypy.config}") 97 if python_version: 98 args.append(f"--python-version={python_version}") 99 100 mypy_pex_info = await Get(PexResolveInfo, VenvPex, pex) 101 mypy_info = mypy_pex_info.find("mypy") 102 assert mypy_info is not None 103 if mypy_info.version > packaging.version.Version("0.700") and python_version is not None: 104 # Skip mtime checks because we don't propogate mtime when materialzing the sandbox, so the 
105 # mtime checks will always fail otherwise. 106 args.append("--skip-cache-mtime-check") 107 # See "__run_wrapper.sh" below for explanation 108 args.append("--sqlite-cache") # Added in v 0.660 109 args.extend(("--cache-dir", cache_dir)) 110 else: 111 # Don't bother caching 112 args.append("--cache-dir=/dev/null") 113 args.append(f"@{file_list_path}") 114 return tuple(args) 115 116 117 def determine_python_files(files: Iterable[str]) -> Tuple[str, ...]: 118 """We run over all .py and .pyi files, but .pyi files take precedence. 119 120 MyPy will error if we say to run over the same module with both its .py and .pyi files, so we 121 must be careful to only use the .pyi stub. 122 """ 123 result: OrderedSet[str] = OrderedSet() 124 for f in files: 125 if f.endswith(".pyi"): 126 py_file = f[:-1] # That is, strip the `.pyi` suffix to be `.py`. 127 result.discard(py_file) 128 result.add(f) 129 elif f.endswith(".py"): 130 pyi_file = f + "i" 131 if pyi_file not in result: 132 result.add(f) 133 return tuple(result) 134 135 136 @rule 137 async def mypy_typecheck_partition( 138 partition: MyPyPartition, 139 config_file: MyPyConfigFile, 140 first_party_plugins: MyPyFirstPartyPlugins, 141 build_root: BuildRoot, 142 mypy: MyPy, 143 python_setup: PythonSetup, 144 mkdir: MkdirBinary, 145 cp: CpBinary, 146 mv: MvBinary, 147 ) -> CheckResult: 148 # MyPy requires 3.5+ to run, but uses the typed-ast library to work with 2.7, 3.4, 3.5, 3.6, 149 # and 3.7. However, typed-ast does not understand 3.8+, so instead we must run MyPy with 150 # Python 3.8+ when relevant. We only do this if <3.8 can't be used, as we don't want a 151 # loose requirement like `>=3.6` to result in requiring Python 3.8+, which would error if 152 # 3.8+ is not installed on the machine. 153 tool_interpreter_constraints = ( 154 partition.interpreter_constraints 155 if ( 156 mypy.options.is_default("interpreter_constraints") 157 and partition.interpreter_constraints.requires_python38_or_newer( 158 python_setup.interpreter_versions_universe 159 ) 160 ) 161 else mypy.interpreter_constraints 162 ) 163 164 closure_sources_get = Get(PythonSourceFiles, PythonSourceFilesRequest(partition.closure)) 165 roots_sources_get = Get( 166 SourceFiles, 167 SourceFilesRequest(fs.sources for fs in partition.root_field_sets), 168 ) 169 170 # See `requirements_venv_pex` for how this will get wrapped in a `VenvPex`. 171 requirements_pex_get = Get( 172 Pex, 173 RequirementsPexRequest( 174 (fs.address for fs in partition.root_field_sets), 175 hardcoded_interpreter_constraints=partition.interpreter_constraints, 176 ), 177 ) 178 extra_type_stubs_pex_get = Get( 179 Pex, PexRequest, mypy.extra_type_stubs_pex_request(partition.interpreter_constraints) 180 ) 181 182 mypy_pex_get = Get( 183 VenvPex, 184 PexRequest, 185 mypy.to_pex_request( 186 interpreter_constraints=tool_interpreter_constraints, 187 extra_requirements=first_party_plugins.requirement_strings, 188 ), 189 ) 190 191 ( 192 closure_sources, 193 roots_sources, 194 mypy_pex, 195 extra_type_stubs_pex, 196 requirements_pex, 197 ) = await MultiGet( 198 closure_sources_get, 199 roots_sources_get, 200 mypy_pex_get, 201 extra_type_stubs_pex_get, 202 requirements_pex_get, 203 ) 204 205 python_files = determine_python_files(roots_sources.snapshot.files) 206 file_list_path = "__files.txt" 207 file_list_digest_request = Get( 208 Digest, 209 CreateDigest([FileContent(file_list_path, "\n".join(python_files).encode())]), 210 ) 211 212 # This creates a venv with all the 3rd-party requirements used by the code. 
We tell MyPy to 213 # use this venv by setting `--python-executable`. Note that this Python interpreter is 214 # different than what we run MyPy with. 215 # 216 # We could have directly asked the `PexFromTargetsRequest` to return a `VenvPex`, rather than 217 # `Pex`, but that would mean missing out on sharing a cache with other goals like `test` and 218 # `run`. 219 requirements_venv_pex_request = Get( 220 VenvPex, 221 PexRequest( 222 output_filename="requirements_venv.pex", 223 internal_only=True, 224 pex_path=[requirements_pex, extra_type_stubs_pex], 225 interpreter_constraints=partition.interpreter_constraints, 226 ), 227 ) 228 229 requirements_venv_pex, file_list_digest = await MultiGet( 230 requirements_venv_pex_request, file_list_digest_request 231 ) 232 233 py_version = config_file.python_version_to_autoset( 234 partition.interpreter_constraints, python_setup.interpreter_versions_universe 235 ) 236 named_cache_dir = f".cache/mypy_cache/{sha256(build_root.path.encode()).hexdigest()}" 237 run_cache_dir = ".tmp_cache/mypy_cache" 238 argv = await _generate_argv( 239 mypy, 240 pex=mypy_pex, 241 venv_python=requirements_venv_pex.python.argv0, 242 cache_dir=run_cache_dir, 243 file_list_path=file_list_path, 244 python_version=py_version, 245 ) 246 247 script_runner_digest = await Get( 248 Digest, 249 CreateDigest( 250 [ 251 FileContent( 252 "__mypy_runner.sh", 253 dedent( 254 f"""\ 255 # We want to leverage the MyPy cache for fast incremental runs of MyPy. 256 # Pants exposes "append_only_caches" we can leverage, but with the caveat 257 # that it requires either only appending files, or multiprocess-safe access. 258 # 259 # MyPy guarantees neither, but there's workarounds! 260 # 261 # By default, MyPy uses 2 cache files per source file, which introduces a 262 # whole slew of race conditions. We can minimize the race conditions by 263 # using MyPy's SQLite cache. MyPy still has race conditions when using the 264 # db, as it issues at least 2 single-row queries per source file at different 265 # points in time (therefore SQLite's own safety guarantees don't apply). 266 # 267 # To workaround this we make a copy of the db from the append_only_cache, 268 # run MyPy on it, then move the updated cache back to the append_only_cache. 269 # This is multiprocess-safe as mv on the same filesystem is an atomic "rename", 270 # and any processes copying the "old" file will still have valid file 271 # descriptors for the "old" file. 272 # 273 # There is a chance of multiple processes thrashing on the cache, leaving 274 # it in a state that doesn't reflect reality at the current point in time, 275 # and forcing other processes to do potentially done work. This strategy 276 # still provides a net benefit because the cache is generally _mostly_ 277 # valid (it includes entries for the standard library, and 3rdparty deps, 278 # among 1stparty sources), and even in the worst case 279 # (every single file has changed) the overhead of missing the cache each 280 # query should be small when compared to the work being done of typechecking. 281 # 282 # Lastly, we expect that since this is run through Pants which attempts 283 # to partition MyPy runs by python version (which the DB is independent 284 # for different versions) and uses a one-process-at-a-time daemon by default, 285 # multuple MyPy processes operating on a single db cache should be rare. 
286 287 {mkdir.path} -p {run_cache_dir}/{py_version} > /dev/null 2>&1 || true 288 {cp.path} {named_cache_dir}/{py_version}/cache.db {run_cache_dir}/{py_version}/cache.db > /dev/null 2>&1 || true 289 {' '.join((shell_quote(arg) for arg in argv))} 290 EXIT_CODE=$? 291 {mkdir.path} -p {named_cache_dir}/{py_version} > /dev/null 2>&1 || true 292 {mv.path} {run_cache_dir}/{py_version}/cache.db {named_cache_dir}/{py_version}/cache.db > /dev/null 2>&1 || true 293 exit $EXIT_CODE 294 """ 295 ).encode(), 296 is_executable=True, 297 ) 298 ] 299 ), 300 ) 301 302 merged_input_files = await Get( 303 Digest, 304 MergeDigests( 305 [ 306 file_list_digest, 307 first_party_plugins.sources_digest, 308 closure_sources.source_files.snapshot.digest, 309 requirements_venv_pex.digest, 310 config_file.digest, 311 script_runner_digest, 312 ] 313 ), 314 ) 315 316 all_used_source_roots = sorted( 317 set(itertools.chain(first_party_plugins.source_roots, closure_sources.source_roots)) 318 ) 319 env = { 320 "PEX_EXTRA_SYS_PATH": ":".join(all_used_source_roots), 321 "MYPYPATH": ":".join(all_used_source_roots), 322 # Force a fixed terminal width. This is effectively infinite, disabling mypy's 323 # builtin truncation and line wrapping. Terminals do an acceptable job of soft-wrapping 324 # diagnostic text and source code is typically already hard-wrapped to a limited width. 325 # (Unique random number to make it easier to search for the source of this setting.) 326 "MYPY_FORCE_TERMINAL_WIDTH": "642092230765939", 327 } 328 329 process = await Get( 330 Process, 331 VenvPexProcess( 332 mypy_pex, 333 input_digest=merged_input_files, 334 extra_env=env, 335 output_directories=(REPORT_DIR,), 336 description=f"Run MyPy on {pluralize(len(python_files), 'file')}.", 337 level=LogLevel.DEBUG, 338 append_only_caches={"mypy_cache": named_cache_dir}, 339 ), 340 ) 341 process = dataclasses.replace(process, argv=("__mypy_runner.sh",)) 342 result = await Get(FallibleProcessResult, Process, process) 343 report = await Get(Digest, RemovePrefix(result.output_digest, REPORT_DIR)) 344 return CheckResult.from_fallible_process_result( 345 result, partition_description=partition.description(), report=report 346 ) 347 348 349 @rule(desc="Determine if necessary to partition MyPy input", level=LogLevel.DEBUG) 350 async def mypy_determine_partitions( 351 request: MyPyRequest, mypy: MyPy, python_setup: PythonSetup 352 ) -> MyPyPartitions: 353 354 resolve_and_interpreter_constraints_to_coarsened_targets = ( 355 await partition._by_interpreter_constraints_and_resolve(request.field_sets, python_setup) 356 ) 357 358 return MyPyPartitions( 359 MyPyPartition( 360 FrozenOrderedSet(roots), 361 FrozenOrderedSet(CoarsenedTargets(root_cts).closure()), 362 resolve if len(python_setup.resolves) > 1 else None, 363 interpreter_constraints or mypy.interpreter_constraints, 364 ) 365 for (resolve, interpreter_constraints), (roots, root_cts) in sorted( 366 resolve_and_interpreter_constraints_to_coarsened_targets.items() 367 ) 368 ) 369 370 371 # TODO(#10864): Improve performance, e.g. by leveraging the MyPy cache. 
372 @rule(desc="Typecheck using MyPy", level=LogLevel.DEBUG) 373 async def mypy_typecheck(request: MyPyRequest, mypy: MyPy) -> CheckResults: 374 if mypy.skip: 375 return CheckResults([], checker_name=request.name) 376 377 partitions = await Get(MyPyPartitions, MyPyRequest, request) 378 partitioned_results = await MultiGet( 379 Get(CheckResult, MyPyPartition, partition) for partition in partitions 380 ) 381 return CheckResults(partitioned_results, checker_name=request.name) 382 383 384 def rules(): 385 return [ 386 *collect_rules(), 387 UnionRule(CheckRequest, MyPyRequest), 388 *pex_from_targets.rules(), 389 ] 390 [end of src/python/pants/backend/python/typecheck/mypy/rules.py] [start of src/python/pants/backend/python/typecheck/mypy/subsystem.py] 1 # Copyright 2019 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 from __future__ import annotations 5 6 import itertools 7 import logging 8 from dataclasses import dataclass 9 from typing import Iterable 10 11 from pants.backend.python.goals import lockfile 12 from pants.backend.python.goals.export import ExportPythonTool, ExportPythonToolSentinel 13 from pants.backend.python.goals.lockfile import ( 14 GeneratePythonLockfile, 15 GeneratePythonToolLockfileSentinel, 16 ) 17 from pants.backend.python.subsystems.python_tool_base import ExportToolOption, PythonToolBase 18 from pants.backend.python.subsystems.setup import PythonSetup 19 from pants.backend.python.target_types import ( 20 ConsoleScript, 21 InterpreterConstraintsField, 22 PythonRequirementsField, 23 PythonSourceField, 24 ) 25 from pants.backend.python.typecheck.mypy.skip_field import SkipMyPyField 26 from pants.backend.python.util_rules import partition 27 from pants.backend.python.util_rules.interpreter_constraints import InterpreterConstraints 28 from pants.backend.python.util_rules.partition import _find_all_unique_interpreter_constraints 29 from pants.backend.python.util_rules.pex import PexRequest 30 from pants.backend.python.util_rules.pex_requirements import ( 31 EntireLockfile, 32 PexRequirements, 33 ToolCustomLockfile, 34 ) 35 from pants.backend.python.util_rules.python_sources import ( 36 PythonSourceFiles, 37 PythonSourceFilesRequest, 38 ) 39 from pants.core.goals.generate_lockfiles import NO_TOOL_LOCKFILE, GenerateToolLockfileSentinel 40 from pants.core.util_rules.config_files import ConfigFiles, ConfigFilesRequest 41 from pants.core.util_rules.lockfile_metadata import calculate_invalidation_digest 42 from pants.engine.addresses import Addresses, UnparsedAddressInputs 43 from pants.engine.fs import EMPTY_DIGEST, Digest, DigestContents, FileContent 44 from pants.engine.rules import Get, collect_rules, rule, rule_helper 45 from pants.engine.target import ( 46 AllTargets, 47 AllTargetsRequest, 48 FieldSet, 49 Target, 50 TransitiveTargets, 51 TransitiveTargetsRequest, 52 ) 53 from pants.engine.unions import UnionRule 54 from pants.option.option_types import ( 55 ArgsListOption, 56 BoolOption, 57 FileOption, 58 SkipOption, 59 StrListOption, 60 StrOption, 61 TargetListOption, 62 ) 63 from pants.util.docutil import bin_name, doc_url, git_url 64 from pants.util.logging import LogLevel 65 from pants.util.ordered_set import FrozenOrderedSet 66 from pants.util.strutil import softwrap 67 68 logger = logging.getLogger(__name__) 69 70 71 @dataclass(frozen=True) 72 class MyPyFieldSet(FieldSet): 73 required_fields = (PythonSourceField,) 74 75 sources: PythonSourceField 76 interpreter_constraints: 
InterpreterConstraintsField 77 78 @classmethod 79 def opt_out(cls, tgt: Target) -> bool: 80 return tgt.get(SkipMyPyField).value 81 82 83 # -------------------------------------------------------------------------------------- 84 # Subsystem 85 # -------------------------------------------------------------------------------------- 86 87 88 class MyPy(PythonToolBase): 89 options_scope = "mypy" 90 name = "MyPy" 91 help = "The MyPy Python type checker (http://mypy-lang.org/)." 92 93 default_version = "mypy==0.961" 94 default_main = ConsoleScript("mypy") 95 96 # See `mypy/rules.py`. We only use these default constraints in some situations. 97 register_interpreter_constraints = True 98 default_interpreter_constraints = ["CPython>=3.7,<4"] 99 100 register_lockfile = True 101 default_lockfile_resource = ("pants.backend.python.typecheck.mypy", "mypy.lock") 102 default_lockfile_path = "src/python/pants/backend/python/typecheck/mypy/mypy.lock" 103 default_lockfile_url = git_url(default_lockfile_path) 104 uses_requirements_from_source_plugins = True 105 106 skip = SkipOption("check") 107 args = ArgsListOption(example="--python-version 3.7 --disallow-any-expr") 108 export = ExportToolOption() 109 config = FileOption( 110 default=None, 111 advanced=True, 112 help=lambda cls: softwrap( 113 f""" 114 Path to a config file understood by MyPy 115 (https://mypy.readthedocs.io/en/stable/config_file.html). 116 117 Setting this option will disable `[{cls.options_scope}].config_discovery`. Use 118 this option if the config is located in a non-standard location. 119 """ 120 ), 121 ) 122 config_discovery = BoolOption( 123 default=True, 124 advanced=True, 125 help=lambda cls: softwrap( 126 f""" 127 If true, Pants will include any relevant config files during runs 128 (`mypy.ini`, `.mypy.ini`, and `setup.cfg`). 129 130 Use `[{cls.options_scope}].config` instead if your config is in a non-standard location. 131 """ 132 ), 133 ) 134 _source_plugins = TargetListOption( 135 advanced=True, 136 help=softwrap( 137 """ 138 An optional list of `python_sources` target addresses to load first-party plugins. 139 140 You must also set `plugins = path.to.module` in your `mypy.ini`, and 141 set the `[mypy].config` option in your `pants.toml`. 142 143 To instead load third-party plugins, set the option `[mypy].extra_requirements` 144 and set the `plugins` option in `mypy.ini`. 145 Tip: it's often helpful to define a dedicated 'resolve' via 146 `[python].resolves` for your MyPy plugins such as 'mypy-plugins' 147 so that the third-party requirements used by your plugin, like `mypy`, do not 148 mix with the rest of your project. Read that option's help message for more info 149 on resolves. 150 """ 151 ), 152 ) 153 extra_type_stubs = StrListOption( 154 advanced=True, 155 help=softwrap( 156 """ 157 Extra type stub requirements to install when running MyPy. 158 159 Normally, type stubs can be installed as typical requirements, such as putting 160 them in `requirements.txt` or using a `python_requirement` target. 161 Alternatively, you can use this option so that the dependencies are solely 162 used when running MyPy and are not runtime dependencies. 163 164 Expects a list of pip-style requirement strings, like 165 `['types-requests==2.25.9']`. 166 167 We recommend also enabling `[mypy].extra_type_stubs_lockfile` for a more reproducible 168 build and less supply-chain security risk. 
169 """ 170 ), 171 ) 172 extra_type_stubs_lockfile = StrOption( 173 advanced=True, 174 # Note that there is no default lockfile, as by default, extra_type_stubs is empty. 175 default=NO_TOOL_LOCKFILE, 176 help=softwrap( 177 f""" 178 Path to a lockfile for the option `[mypy].extra_type_stubs`. 179 180 Set to the string `{NO_TOOL_LOCKFILE}` to opt out of using a lockfile. We 181 do not recommend this if you use `[mypy].extra_type_stubs`, though, as lockfiles are 182 essential for reproducible builds and supply-chain security. 183 184 To use a lockfile, set this option to a file path relative to the 185 build root, then run `{bin_name()} generate-lockfiles --resolve=mypy-extra-type-stubs`. 186 """ 187 ), 188 ) 189 190 @property 191 def config_request(self) -> ConfigFilesRequest: 192 # Refer to https://mypy.readthedocs.io/en/stable/config_file.html. 193 return ConfigFilesRequest( 194 specified=self.config, 195 specified_option_name=f"{self.options_scope}.config", 196 discovery=self.config_discovery, 197 check_existence=["mypy.ini", ".mypy.ini"], 198 check_content={"setup.cfg": b"[mypy", "pyproject.toml": b"[tool.mypy"}, 199 ) 200 201 @property 202 def source_plugins(self) -> UnparsedAddressInputs: 203 return UnparsedAddressInputs( 204 self._source_plugins, 205 owning_address=None, 206 description_of_origin=f"the option `[{self.options_scope}].source_plugins`", 207 ) 208 209 def extra_type_stubs_pex_request( 210 self, interpreter_constraints: InterpreterConstraints 211 ) -> PexRequest: 212 requirements: PexRequirements | EntireLockfile 213 if self.extra_type_stubs_lockfile == NO_TOOL_LOCKFILE: 214 requirements = PexRequirements(self.extra_type_stubs) 215 else: 216 tool_lockfile = ToolCustomLockfile( 217 file_path=self.extra_type_stubs_lockfile, 218 file_path_description_of_origin=( 219 f"the option `[{self.options_scope}].extra_type_stubs_lockfile`" 220 ), 221 lockfile_hex_digest=calculate_invalidation_digest(self.extra_type_stubs), 222 resolve_name=MyPyExtraTypeStubsLockfileSentinel.resolve_name, 223 uses_project_interpreter_constraints=True, 224 uses_source_plugins=False, 225 ) 226 requirements = EntireLockfile(tool_lockfile, complete_req_strings=self.extra_type_stubs) 227 return PexRequest( 228 output_filename="extra_type_stubs.pex", 229 internal_only=True, 230 requirements=requirements, 231 interpreter_constraints=interpreter_constraints, 232 ) 233 234 def check_and_warn_if_python_version_configured(self, config: FileContent | None) -> bool: 235 """Determine if we can dynamically set `--python-version` and warn if not.""" 236 configured = [] 237 if config and b"python_version" in config.content: 238 configured.append( 239 softwrap( 240 f""" 241 `python_version` in {config.path} (which is used because of either config 242 discovery or the `[mypy].config` option) 243 """ 244 ) 245 ) 246 if "--py2" in self.args: 247 configured.append("`--py2` in the `--mypy-args` option") 248 if any(arg.startswith("--python-version") for arg in self.args): 249 configured.append("`--python-version` in the `--mypy-args` option") 250 if configured: 251 formatted_configured = " and you set ".join(configured) 252 logger.warning( 253 softwrap( 254 f""" 255 You set {formatted_configured}. Normally, Pants would automatically set this 256 for you based on your code's interpreter constraints 257 ({doc_url('python-interpreter-compatibility')}). Instead, it will 258 use what you set. 
259 260 (Allowing Pants to automatically set the option allows Pants to partition your 261 targets by their constraints, so that, for example, you can run MyPy on 262 Python 2-only code and Python 3-only code at the same time. It also allows Pants 263 to leverage MyPy's cache, making subsequent runs of MyPy very fast. 264 In the future, this feature may no longer work.) 265 """ 266 ) 267 ) 268 return bool(configured) 269 270 271 # -------------------------------------------------------------------------------------- 272 # Config files 273 # -------------------------------------------------------------------------------------- 274 275 276 @dataclass(frozen=True) 277 class MyPyConfigFile: 278 digest: Digest 279 _python_version_configured: bool 280 281 def python_version_to_autoset( 282 self, interpreter_constraints: InterpreterConstraints, interpreter_universe: Iterable[str] 283 ) -> str | None: 284 """If the user did not already set `--python-version`, return the major.minor version to 285 use.""" 286 if self._python_version_configured: 287 return None 288 return interpreter_constraints.minimum_python_version(interpreter_universe) 289 290 291 @rule 292 async def setup_mypy_config(mypy: MyPy) -> MyPyConfigFile: 293 config_files = await Get(ConfigFiles, ConfigFilesRequest, mypy.config_request) 294 digest_contents = await Get(DigestContents, Digest, config_files.snapshot.digest) 295 python_version_configured = mypy.check_and_warn_if_python_version_configured( 296 digest_contents[0] if digest_contents else None 297 ) 298 return MyPyConfigFile(config_files.snapshot.digest, python_version_configured) 299 300 301 # -------------------------------------------------------------------------------------- 302 # First party plugins 303 # -------------------------------------------------------------------------------------- 304 305 306 @dataclass(frozen=True) 307 class MyPyFirstPartyPlugins: 308 requirement_strings: FrozenOrderedSet[str] 309 sources_digest: Digest 310 source_roots: tuple[str, ...] 
311 312 313 @rule("Prepare [mypy].source_plugins", level=LogLevel.DEBUG) 314 async def mypy_first_party_plugins( 315 mypy: MyPy, 316 ) -> MyPyFirstPartyPlugins: 317 if not mypy.source_plugins: 318 return MyPyFirstPartyPlugins(FrozenOrderedSet(), EMPTY_DIGEST, ()) 319 320 plugin_target_addresses = await Get(Addresses, UnparsedAddressInputs, mypy.source_plugins) 321 transitive_targets = await Get( 322 TransitiveTargets, TransitiveTargetsRequest(plugin_target_addresses) 323 ) 324 325 requirements = PexRequirements.req_strings_from_requirement_fields( 326 ( 327 plugin_tgt[PythonRequirementsField] 328 for plugin_tgt in transitive_targets.closure 329 if plugin_tgt.has_field(PythonRequirementsField) 330 ), 331 ) 332 333 sources = await Get(PythonSourceFiles, PythonSourceFilesRequest(transitive_targets.closure)) 334 return MyPyFirstPartyPlugins( 335 requirement_strings=requirements, 336 sources_digest=sources.source_files.snapshot.digest, 337 source_roots=sources.source_roots, 338 ) 339 340 341 # -------------------------------------------------------------------------------------- 342 # Interpreter constraints 343 # -------------------------------------------------------------------------------------- 344 345 346 @rule_helper 347 async def _mypy_interpreter_constraints( 348 mypy: MyPy, python_setup: PythonSetup 349 ) -> InterpreterConstraints: 350 constraints = mypy.interpreter_constraints 351 if mypy.options.is_default("interpreter_constraints"): 352 code_constraints = await _find_all_unique_interpreter_constraints( 353 python_setup, MyPyFieldSet 354 ) 355 if code_constraints.requires_python38_or_newer(python_setup.interpreter_versions_universe): 356 constraints = code_constraints 357 return constraints 358 359 360 # -------------------------------------------------------------------------------------- 361 # Lockfiles 362 # -------------------------------------------------------------------------------------- 363 364 365 class MyPyLockfileSentinel(GeneratePythonToolLockfileSentinel): 366 resolve_name = MyPy.options_scope 367 368 369 @rule( 370 desc="Determine MyPy interpreter constraints (for lockfile generation)", 371 level=LogLevel.DEBUG, 372 ) 373 async def setup_mypy_lockfile( 374 _: MyPyLockfileSentinel, 375 first_party_plugins: MyPyFirstPartyPlugins, 376 mypy: MyPy, 377 python_setup: PythonSetup, 378 ) -> GeneratePythonLockfile: 379 if not mypy.uses_custom_lockfile: 380 return GeneratePythonLockfile.from_tool( 381 mypy, use_pex=python_setup.generate_lockfiles_with_pex 382 ) 383 384 constraints = await _mypy_interpreter_constraints(mypy, python_setup) 385 return GeneratePythonLockfile.from_tool( 386 mypy, 387 constraints, 388 extra_requirements=first_party_plugins.requirement_strings, 389 use_pex=python_setup.generate_lockfiles_with_pex, 390 ) 391 392 393 class MyPyExtraTypeStubsLockfileSentinel(GeneratePythonToolLockfileSentinel): 394 resolve_name = "mypy-extra-type-stubs" 395 396 397 @rule(desc="Set up lockfile request for [mypy].extra_type_stubs", level=LogLevel.DEBUG) 398 async def setup_mypy_extra_type_stubs_lockfile( 399 request: MyPyExtraTypeStubsLockfileSentinel, 400 mypy: MyPy, 401 python_setup: PythonSetup, 402 ) -> GeneratePythonLockfile: 403 use_pex = python_setup.generate_lockfiles_with_pex 404 if mypy.extra_type_stubs_lockfile == NO_TOOL_LOCKFILE: 405 return GeneratePythonLockfile( 406 requirements=FrozenOrderedSet(), 407 interpreter_constraints=InterpreterConstraints(), 408 resolve_name=request.resolve_name, 409 lockfile_dest=mypy.extra_type_stubs_lockfile, 410 
use_pex=use_pex, 411 ) 412 413 # While MyPy will run in partitions, we need a set of constraints that works with every 414 # partition. 415 # 416 # This first finds the ICs of each partition. Then, it ORs all unique resulting interpreter 417 # constraints. The net effect is that every possible Python interpreter used will be covered. 418 all_tgts = await Get(AllTargets, AllTargetsRequest()) 419 all_field_sets = [ 420 MyPyFieldSet.create(tgt) for tgt in all_tgts if MyPyFieldSet.is_applicable(tgt) 421 ] 422 resolve_and_interpreter_constraints_to_coarsened_targets = ( 423 await partition._by_interpreter_constraints_and_resolve(all_field_sets, python_setup) 424 ) 425 unique_constraints = { 426 ics for resolve, ics in resolve_and_interpreter_constraints_to_coarsened_targets.keys() 427 } 428 interpreter_constraints = InterpreterConstraints( 429 itertools.chain.from_iterable(unique_constraints) 430 ) or InterpreterConstraints(python_setup.interpreter_constraints) 431 return GeneratePythonLockfile( 432 requirements=FrozenOrderedSet(mypy.extra_type_stubs), 433 interpreter_constraints=interpreter_constraints, 434 resolve_name=request.resolve_name, 435 lockfile_dest=mypy.extra_type_stubs_lockfile, 436 use_pex=use_pex, 437 ) 438 439 440 # -------------------------------------------------------------------------------------- 441 # Export 442 # -------------------------------------------------------------------------------------- 443 444 445 class MyPyExportSentinel(ExportPythonToolSentinel): 446 pass 447 448 449 @rule(desc="Determine MyPy interpreter constraints (for `export` goal)", level=LogLevel.DEBUG) 450 async def mypy_export( 451 _: MyPyExportSentinel, 452 mypy: MyPy, 453 python_setup: PythonSetup, 454 first_party_plugins: MyPyFirstPartyPlugins, 455 ) -> ExportPythonTool: 456 if not mypy.export: 457 return ExportPythonTool(resolve_name=mypy.options_scope, pex_request=None) 458 constraints = await _mypy_interpreter_constraints(mypy, python_setup) 459 return ExportPythonTool( 460 resolve_name=mypy.options_scope, 461 pex_request=mypy.to_pex_request( 462 interpreter_constraints=constraints, 463 extra_requirements=first_party_plugins.requirement_strings, 464 ), 465 ) 466 467 468 def rules(): 469 return ( 470 *collect_rules(), 471 *lockfile.rules(), 472 UnionRule(GenerateToolLockfileSentinel, MyPyLockfileSentinel), 473 UnionRule(GenerateToolLockfileSentinel, MyPyExtraTypeStubsLockfileSentinel), 474 UnionRule(ExportPythonToolSentinel, MyPyExportSentinel), 475 ) 476 [end of src/python/pants/backend/python/typecheck/mypy/subsystem.py] [start of src/python/pants/util/strutil.py] 1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 
3 4 from __future__ import annotations 5 6 import re 7 import shlex 8 import textwrap 9 from typing import Iterable 10 11 12 def ensure_binary(text_or_binary: bytes | str) -> bytes: 13 if isinstance(text_or_binary, bytes): 14 return text_or_binary 15 elif isinstance(text_or_binary, str): 16 return text_or_binary.encode("utf8") 17 else: 18 raise TypeError(f"Argument is neither text nor binary type.({type(text_or_binary)})") 19 20 21 def ensure_text(text_or_binary: bytes | str) -> str: 22 if isinstance(text_or_binary, bytes): 23 return text_or_binary.decode() 24 elif isinstance(text_or_binary, str): 25 return text_or_binary 26 else: 27 raise TypeError(f"Argument is neither text nor binary type ({type(text_or_binary)})") 28 29 30 def safe_shlex_split(text_or_binary: bytes | str) -> list[str]: 31 """Split a string using shell-like syntax. 32 33 Safe even on python versions whose shlex.split() method doesn't accept unicode. 34 """ 35 value = ensure_text(text_or_binary) 36 return shlex.split(value) 37 38 39 # `_shell_unsafe_chars_pattern` and `shell_quote` are modified from the CPython 3.6 source: 40 # https://github.com/python/cpython/blob/142e3c08a40c75b5788474b0defe7d5c0671f675/Lib/shlex.py#L308 41 _shell_unsafe_chars_pattern = re.compile(r"[^\w@%+=:,./-]").search 42 43 44 def shell_quote(s: str) -> str: 45 """Return a shell-escaped version of the string *s*.""" 46 if not s: 47 return "''" 48 if _shell_unsafe_chars_pattern(s) is None: 49 return s 50 51 # use single quotes, and put single quotes into double quotes 52 # the string $'b is then quoted as '$'"'"'b' 53 return "'" + s.replace("'", "'\"'\"'") + "'" 54 55 56 def safe_shlex_join(arg_list: Iterable[str]) -> str: 57 """Join a list of strings into a shlex-able string. 58 59 Shell-quotes each argument with `shell_quote()`. 60 """ 61 return " ".join(shell_quote(arg) for arg in arg_list) 62 63 64 def create_path_env_var( 65 new_entries: Iterable[str], 66 env: dict[str, str] | None = None, 67 env_var: str = "PATH", 68 delimiter: str = ":", 69 prepend: bool = False, 70 ): 71 """Join path entries, combining with an environment variable if specified.""" 72 if env is None: 73 env = {} 74 75 prev_path = env.get(env_var, None) 76 if prev_path is None: 77 path_dirs: list[str] = [] 78 else: 79 path_dirs = list(prev_path.split(delimiter)) 80 81 new_entries_list = list(new_entries) 82 83 if prepend: 84 path_dirs = new_entries_list + path_dirs 85 else: 86 path_dirs += new_entries_list 87 88 return delimiter.join(path_dirs) 89 90 91 def pluralize(count: int, item_type: str, include_count: bool = True) -> str: 92 """Pluralizes the item_type if the count does not equal one. 93 94 For example `pluralize(1, 'apple')` returns '1 apple', 95 while `pluralize(0, 'apple') returns '0 apples'. 96 97 When `include_count=False` does not add the count in front of the pluralized `item_type`. 98 99 :return The count and inflected item_type together as a string 100 """ 101 102 def pluralize_string(x: str) -> str: 103 if x.endswith("s"): 104 return x + "es" 105 elif x.endswith("y"): 106 return x[:-1] + "ies" 107 else: 108 return x + "s" 109 110 pluralized_item = item_type if count == 1 else pluralize_string(item_type) 111 if not include_count: 112 return pluralized_item 113 else: 114 text = f"{count} {pluralized_item}" 115 return text 116 117 118 def strip_prefix(string: str, prefix: str) -> str: 119 """Returns a copy of the string from which the multi-character prefix has been stripped. 
120 121 Use strip_prefix() instead of lstrip() to remove a substring (instead of individual characters) 122 from the beginning of a string, if the substring is present. lstrip() does not match substrings 123 but rather treats a substring argument as a set of characters. 124 125 :param string: The string from which to strip the specified prefix. 126 :param prefix: The substring to strip from the left of string, if present. 127 :return: The string with prefix stripped from the left, if present. 128 """ 129 if string.startswith(prefix): 130 return string[len(prefix) :] 131 else: 132 return string 133 134 135 # NB: We allow bytes because `ProcessResult.std{err,out}` uses bytes. 136 def strip_v2_chroot_path(v: bytes | str) -> str: 137 """Remove all instances of the chroot tmpdir path from the str so that it only uses relative 138 paths. 139 140 This is useful when a tool that is run with the V2 engine outputs absolute paths. It is 141 confusing for the user to see the absolute path in the final output because it is an 142 implementation detail that Pants copies their source code into a chroot. 143 """ 144 if isinstance(v, bytes): 145 v = v.decode() 146 return re.sub(r"/.*/pants-sandbox-[a-zA-Z0-9]+/", "", v) 147 148 149 def hard_wrap(s: str, *, indent: int = 0, width: int = 96) -> list[str]: 150 """Hard wrap a string while still preserving any prior hard wrapping (new lines). 151 152 This works well when the input uses soft wrapping, e.g. via Python's implicit string 153 concatenation. 154 155 Usually, you will want to join the lines together with "\n".join(). 156 """ 157 # wrap() returns [] for an empty line, but we want to emit those, hence the `or [line]`. 158 return [ 159 f"{' ' * indent}{wrapped_line}" 160 for line in s.splitlines() 161 for wrapped_line in textwrap.wrap(line, width=width - indent) or [line] 162 ] 163 164 165 def bullet_list(elements: Iterable[str], max_elements: int = -1) -> str: 166 """Format a bullet list with padding. 167 168 Callers should normally use `\n\n` before and (if relevant) after this so that the bullets 169 appear as a distinct section. 170 171 The `max_elements` may be used to limit the number of bullet rows to output, and instead leave a 172 last bullet item with "* ... and N more". 173 """ 174 if not elements: 175 return "" 176 177 if max_elements > 0: 178 elements = tuple(elements) 179 if len(elements) > max_elements: 180 elements = elements[: max_elements - 1] + ( 181 f"... and {len(elements)-max_elements+1} more", 182 ) 183 184 sep = "\n * " 185 return f" * {sep.join(elements)}" 186 187 188 def first_paragraph(s: str) -> str: 189 """Get the first paragraph, where paragraphs are separated by blank lines.""" 190 lines = s.splitlines() 191 first_blank_line_index = next( 192 (i for i, line in enumerate(lines) if line.strip() == ""), len(lines) 193 ) 194 return " ".join(lines[:first_blank_line_index]) 195 196 197 # This is more conservative that it necessarily need be. In practice POSIX filesystems 198 # support any printable character except the path separator (forward slash), but it's 199 # better to be over-cautious. 200 201 # TODO: <> may not be safe in Windows paths. When we support Windows we will probably 202 # want to detect that here and be more restrictive on that platform. But we do want 203 # to support <> where possible, because they appear in target partition descriptions 204 # (e.g., "CPython>=2.7,<3") and those are sometimes converted to paths. 
205 _non_path_safe_re = re.compile(r"[^a-zA-Z0-9_\-.()<>,= ]") 206 207 208 def path_safe(s: str) -> str: 209 return _non_path_safe_re.sub("_", s) 210 211 212 # TODO: This may be a bit too eager. Some strings might want to preserve multiple spaces in them 213 # (e.g. a Python code block which has a comment in it would have 2 spaces before the "#", which 214 # would be squashed by this eager regex). The challenge is that there's some overlap between prose 215 # (which shouldn't need multiple spaces) and code (which might) for non-alphanumeric characters. 216 # We can tighten as necessary. 217 _super_space_re = re.compile(r"(\S) +(\S)") 218 _more_than_2_newlines = re.compile(r"\n{2}\n+") 219 _leading_whitespace_re = re.compile(r"(^[ ]*)(?:[^ \n])", re.MULTILINE) 220 221 222 def softwrap(text: str) -> str: 223 """Turns a multiline-ish string into a softwrapped string. 224 225 This is primarily used to turn strings in source code, which often have a single paragraph 226 span multiple source lines, into consistently formatted blocks for hardwrapping later. 227 228 Applies the following rules: 229 - Dedents the text (you also don't need to start your string with a backslash) 230 (The algorithm used for dedention simply looks at the first indented line and 231 unambiguously tries to strip that much indentation from every indented line thereafter.) 232 - Replaces all occurrences of multiple spaces in a sentence with a single space 233 - Replaces all occurrences of multiple newlines with exactly 2 newlines 234 - Replaces singular newlines with a space (to turn a paragraph into one long line) 235 - Unless the following line is indented, in which case the newline and indentation 236 are preserved. 237 - Double-newlines are preserved 238 - Extra indentation is preserved, and also preserves the indented line's ending 239 (If your indented line needs to be continued due to it being longer than the suggested 240 width, use trailing backlashes to line-continue the line. Because we squash multiple 241 spaces, this will "just work".) 242 """ 243 if not text: 244 return text 245 # If callers didn't use a leading "\" thats OK. 246 if text[0] == "\n": 247 text = text[1:] 248 249 text = _more_than_2_newlines.sub("\n\n", text) 250 margin = _leading_whitespace_re.search(text) 251 if margin: 252 text = re.sub(r"(?m)^" + margin[1], "", text) 253 254 lines = text.splitlines(keepends=True) 255 # NB: collecting a list of strs and `"".join` is more performant than calling `+=` repeatedly. 
256 result_strs = [] 257 for i, line in enumerate(lines): 258 line = _super_space_re.sub(r"\1 \2", line) 259 next_line = lines[i + 1] if i + 1 < len(lines) else "" 260 if "\n" in (line, next_line) or line.startswith(" ") or next_line.startswith(" "): 261 result_strs.append(line) 262 else: 263 result_strs.append(line.rstrip()) 264 result_strs.append(" ") 265 266 return "".join(result_strs).rstrip() 267 268 269 _MEMORY_UNITS = ["B", "KiB", "MiB", "GiB"] 270 271 272 def fmt_memory_size(value: int, *, units: Iterable[str] = _MEMORY_UNITS) -> str: 273 """Formats a numeric value as amount of bytes alongside the biggest byte-based unit from the 274 list that represents the same amount without using decimals.""" 275 276 if not units: 277 return str(value) 278 279 amount = value 280 unit_idx = 0 281 282 units = tuple(units) 283 while (amount >= 1024 and amount % 1024 == 0) and unit_idx < len(units) - 1: 284 amount = int(amount / 1024) 285 unit_idx += 1 286 287 return f"{int(amount)}{units[unit_idx]}" 288 [end of src/python/pants/util/strutil.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
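As a small aside on the `pants.util.strutil` helpers quoted earlier in this record: their docstrings describe the contract, and a few direct calls make it concrete. This is an illustrative sketch only; it assumes the module is importable as `pants.util.strutil`, exactly as listed above.

```python
from pants.util.strutil import fmt_memory_size, hard_wrap, pluralize, softwrap

# pluralize: inflects the noun unless the count is exactly one.
assert pluralize(1, "file") == "1 file"
assert pluralize(3, "directory") == "3 directories"

# fmt_memory_size: only climbs to a larger unit when the value divides evenly by 1024.
assert fmt_memory_size(2048) == "2KiB"
assert fmt_memory_size(1536) == "1536B"

# softwrap: collapses source-level line breaks into one logical paragraph,
# which hard_wrap can then re-wrap to a fixed width for terminal output.
paragraph = softwrap(
    """
    These words are split across source lines
    purely for readability in the source file.
    """
)
print(paragraph)                              # one logical line
print("\n".join(hard_wrap(paragraph, width=30)))  # re-wrapped to 30 columns
```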
pantsbuild/pants
7270296ab3d08faf8e38bb089caf2f9eca414855
mypy output is limited to 80 characters wide and no colours **Describe the bug** Mypy has various nifty affordances to make it easier to understand its errors, e.g. colours and (with `--pretty`) printing the relevant line. It seems like pants currently gets in the way of these, presumably because mypy is detecting the non-tty output buffering. For instance, https://gist.github.com/ce2e03d8788848ad6db27df6d66cde6d ```python a_really_really_really_long_variable_name = 12345678901234567890 + "this makes the line really long" ``` Running mypy via `./pants check ::` gives output: ![image](https://user-images.githubusercontent.com/1203825/183544558-131305c6-5eee-4ae0-a49a-bb9af6e6832f.png) While running it normally `mypy --pretty broken.py` (with the same version 0.961 installed via pip) gives output: ![image](https://user-images.githubusercontent.com/1203825/183544573-365dcce1-650d-4551-9f84-b47928cd4c75.png) The first output is worse: - line truncation despite (much) wider terminal (if the main error message was longer, it'd be wrapped, too, which makes it extra hard to read) - lack of colours/bold **Pants version** - 2.12.0 - 2.14.0.dev5 **OS** macOS **Additional info** Exact reproduction steps: ```shell git clone https://gist.github.com/ce2e03d8788848ad6db27df6d66cde6d cd ce2e03d8788848ad6db27df6d66cde6d ./pants check :: mypy --version mypy --pretty broken.py ```
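For context on the behaviour described above: mypy decides on colour and line wrapping by checking whether its output is a terminal, and it honours environment variables that override that check. The snippet below is one illustrative way to confirm this outside of Pants, mirroring the variables used elsewhere in this record (`MYPY_FORCE_TERMINAL_WIDTH` in the existing rule, `MYPY_FORCE_COLOR`/`TERM` in the fix recorded in the patch field below); `broken.py` is just the reproduction file from the gist, so this is a sketch rather than anything Pants itself runs.

```python
import os
import subprocess

env = dict(os.environ)
env.update(
    {
        # Pretend a colour-capable terminal is attached even though output is piped.
        "MYPY_FORCE_COLOR": "1",
        "TERM": "ansi",
        # Disable mypy's default 80-column truncation/wrapping of error messages.
        "MYPY_FORCE_TERMINAL_WIDTH": "200",
    }
)

result = subprocess.run(
    ["mypy", "--pretty", "broken.py"],
    env=env,
    capture_output=True,
    text=True,
)
# The captured output now retains ANSI colour codes and full-width source
# excerpts even though stdout is not a tty.
print(result.stdout)
```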
2022-09-09T21:38:57Z
<patch> diff --git a/src/python/pants/backend/python/typecheck/mypy/rules.py b/src/python/pants/backend/python/typecheck/mypy/rules.py --- a/src/python/pants/backend/python/typecheck/mypy/rules.py +++ b/src/python/pants/backend/python/typecheck/mypy/rules.py @@ -44,6 +44,7 @@ from pants.engine.rules import Get, MultiGet, collect_rules, rule, rule_helper from pants.engine.target import CoarsenedTargets, FieldSet, Target from pants.engine.unions import UnionRule +from pants.option.global_options import GlobalOptions from pants.util.logging import LogLevel from pants.util.ordered_set import FrozenOrderedSet, OrderedSet from pants.util.strutil import pluralize, shell_quote @@ -144,6 +145,7 @@ async def mypy_typecheck_partition( mkdir: MkdirBinary, cp: CpBinary, mv: MvBinary, + global_options: GlobalOptions, ) -> CheckResult: # MyPy requires 3.5+ to run, but uses the typed-ast library to work with 2.7, 3.4, 3.5, 3.6, # and 3.7. However, typed-ast does not understand 3.8+, so instead we must run MyPy with @@ -319,6 +321,13 @@ async def mypy_typecheck_partition( env = { "PEX_EXTRA_SYS_PATH": ":".join(all_used_source_roots), "MYPYPATH": ":".join(all_used_source_roots), + # Always emit colors to improve cache hit rates, the results are post-processed to match the + # global setting + "MYPY_FORCE_COLOR": "1", + # Mypy needs to know the terminal so it can use appropriate escape sequences. ansi is a + # reasonable lowest common denominator for the sort of escapes mypy uses (NB. TERM=xterm + # uses some additional codes that colors.strip_color doesn't remove). + "TERM": "ansi", # Force a fixed terminal width. This is effectively infinite, disabling mypy's # builtin truncation and line wrapping. Terminals do an acceptable job of soft-wrapping # diagnostic text and source code is typically already hard-wrapped to a limited width. @@ -342,7 +351,10 @@ async def mypy_typecheck_partition( result = await Get(FallibleProcessResult, Process, process) report = await Get(Digest, RemovePrefix(result.output_digest, REPORT_DIR)) return CheckResult.from_fallible_process_result( - result, partition_description=partition.description(), report=report + result, + partition_description=partition.description(), + report=report, + strip_formatting=not global_options.colors, ) diff --git a/src/python/pants/core/goals/check.py b/src/python/pants/core/goals/check.py --- a/src/python/pants/core/goals/check.py +++ b/src/python/pants/core/goals/check.py @@ -7,6 +7,8 @@ from dataclasses import dataclass from typing import Any, Iterable, cast +import colors + from pants.core.goals.lint import REPORT_DIR as REPORT_DIR # noqa: F401 from pants.core.goals.style_request import ( StyleRequest, @@ -46,10 +48,13 @@ def from_fallible_process_result( *, partition_description: str | None = None, strip_chroot_path: bool = False, + strip_formatting: bool = False, report: Digest = EMPTY_DIGEST, ) -> CheckResult: def prep_output(s: bytes) -> str: - return strip_v2_chroot_path(s) if strip_chroot_path else s.decode() + chroot = strip_v2_chroot_path(s) if strip_chroot_path else s.decode() + formatting = cast(str, colors.strip_color(chroot)) if strip_formatting else chroot + return formatting return CheckResult( exit_code=process_result.exit_code, </patch>
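The patch above always forces colour in the mypy subprocess and then strips the escape codes afterwards when the user has colours disabled, so the subprocess output stays identical across both settings. A minimal standalone illustration of that strip step, assuming the same `ansicolors` package the patch imports as `colors`:

```python
import colors  # the "ansicolors" distribution, imported as `colors` in the patch above

raw = "\x1b[1m\x1b[31merror:\x1b[0m Unsupported operand types"


def present(output: str, use_colors: bool) -> str:
    """Return the tool output as-is, or with ANSI formatting removed."""
    return output if use_colors else colors.strip_color(output)


assert present(raw, use_colors=True) == raw
assert present(raw, use_colors=False) == "error: Unsupported operand types"
```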
[]
[]
pantsbuild__pants-16284
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> google_cloud_function backend: __path__ attribute not found on 'main' while trying to find 'main.handler' **Describe the bug** Pants version: 2.12.0 package build with `pants.backend.google_cloud_function.python` does not work. ``` python_google_cloud_function( name="cloud_function", runtime="python39", handler="function.py:main", type="http", ) ``` Error when deploy into GCP cloud function: ``` ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation. ``` Checking run log: ``` Traceback (most recent call last): File "/opt/python3.9/lib/python3.9/importlib/util.py", line 96, in find_spec parent_path = parent.__path__ AttributeError: module 'main' has no attribute '__path__' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/python3.9/lib/python3.9/pkgutil.py", line 495, in find_loader spec = importlib.util.find_spec(fullname) File "/opt/python3.9/lib/python3.9/importlib/util.py", line 98, in find_spec raise ModuleNotFoundError( ModuleNotFoundError: __path__ attribute not found on 'main' while trying to find 'main.handler' The above exception was the direct cause of the following exception: [...] Line 481, in get_loader return find_loader(fullname) File "/opt/python3.9/lib/python3.9/pkgutil.py", line 501, in find_loader raise ImportError(msg.format(fullname, type(ex), ex)) from ex ImportError: Error while finding loader for 'main.handler' (<class 'ModuleNotFoundError'>: __path__ attribute not found on 'main' while trying to find 'main.handler') ``` A minimal reproducible repo: https://github.com/stephentt42/rp-issue-pantsbuild-01 You can see deploy script in `deploy.sh` I really appreciate any help. **OS** * Linux </issue> <code> [start of README.md] 1 # Pants Build System 2 3 Pants is a scalable build system for _monorepos_: codebases containing 4 multiple projects, often using multiple programming languages and frameworks, 5 in a single unified code repository. 6 7 Some noteworthy features include: 8 9 * Explicit dependency modeling. 10 * Fine-grained invalidation. 11 * Shared result caching. 12 * Concurrent execution. 13 * Remote execution. 14 * Unified interface for multiple tools and languages. 15 * Extensibility and customizability via a plugin API. 16 17 Documentation: [www.pantsbuild.org](https://www.pantsbuild.org/). 18 19 # Requirements 20 21 To run Pants, you need: 22 23 * Linux or macOS. 24 * Python 3.7+ discoverable on your `PATH`. 25 * A C compiler, system headers and Python headers (to compile native Python modules). 26 * Internet access (so that Pants can fully bootstrap itself). 
27 28 # Credits 29 30 We release to [PyPI](https://pypi.org/pypi) 31 32 [![version](https://img.shields.io/pypi/v/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 33 [![license](https://img.shields.io/pypi/l/pantsbuild.pants.svg)](https://pypi.org/pypi/pantsbuild.pants) 34 35 <img width="150" height="61" src="https://uploads-ssl.webflow.com/5ac3c046c82724970fc60918/5c019d917bba312af7553b49_MacStadium-developerlogo.png"> 36 [end of README.md] [start of build-support/bin/generate_docs.py] 1 # Copyright 2020 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 """Generates and uploads the Pants reference documentation. 5 6 Dry run: 7 8 ./pants run build-support/bin/generate_docs.py 9 10 Live run: 11 12 ./pants run build-support/bin/generate_docs.py -- --sync --api-key=<API_KEY> 13 14 where API_KEY is your readme.io API Key, found here: 15 https://dash.readme.com/project/pants/v2.6/api-key 16 """ 17 18 from __future__ import annotations 19 20 import argparse 21 import html 22 import json 23 import logging 24 import os 25 import pkgutil 26 import re 27 import subprocess 28 import textwrap 29 from html.parser import HTMLParser 30 from pathlib import Path, PosixPath 31 from typing import Any, Dict, Iterable, cast 32 33 import chevron 34 import requests 35 from common import die 36 from readme_api import DocRef, ReadmeAPI 37 38 from pants.base.build_environment import get_buildroot, get_pants_cachedir 39 from pants.help.help_info_extracter import to_help_str 40 from pants.util.strutil import softwrap 41 from pants.version import MAJOR_MINOR 42 43 logger = logging.getLogger(__name__) 44 45 DOC_URL_RE = re.compile( 46 r"https://www.pantsbuild.org/v(\d+\.[^/]+)/docs/(?P<slug>[a-zA-Z0-9_-]+)(?P<anchor>#[a-zA-Z0-9_-]+)?" 47 ) 48 49 50 def main() -> None: 51 logging.basicConfig(format="[%(levelname)s]: %(message)s", level=logging.INFO) 52 args = create_parser().parse_args() 53 54 if args.sync and not args.api_key: 55 raise Exception("You specified --sync so you must also specify --api-key") 56 57 version = determine_pants_version(args.no_prompt) 58 help_info = run_pants_help_all() 59 doc_urls = find_doc_urls(value_strs_iter(help_info)) 60 logger.info("Found the following docsite URLs:") 61 for url in sorted(doc_urls): 62 logger.info(f" {url}") 63 64 if not args.skip_check_urls: 65 logger.info("Fetching titles...") 66 slug_to_title = get_titles(doc_urls) 67 logger.info("Found the following titles:") 68 for slug, title in sorted(slug_to_title.items()): 69 logger.info(f" {slug}: {title}") 70 71 help_info = rewrite_value_strs(help_info, slug_to_title) 72 73 generator = ReferenceGenerator(args, version, help_info) 74 if args.sync: 75 generator.sync() 76 else: 77 generator.render() 78 79 80 def determine_pants_version(no_prompt: bool) -> str: 81 version = MAJOR_MINOR 82 if no_prompt: 83 logger.info(f"Generating docs for Pants {version}.") 84 return version 85 86 key_confirmation = input( 87 f"Generating docs for Pants {version}. Is this the correct version? [Y/n]: " 88 ) 89 if key_confirmation and key_confirmation.lower() != "y": 90 die( 91 softwrap( 92 """ 93 Please either `git checkout` to the appropriate branch (e.g. 2.1.x), or change 94 src/python/pants/VERSION. 95 """ 96 ) 97 ) 98 return version 99 100 101 # Code to replace doc urls with appropriate markdown, for rendering on the docsite. 
102 103 104 def get_doc_slug(url: str) -> str: 105 mo = DOC_URL_RE.match(url) 106 if not mo: 107 raise ValueError(f"Not a docsite URL: {url}") 108 return cast(str, mo.group("slug")) 109 110 111 def find_doc_urls(strs: Iterable[str]) -> set[str]: 112 """Find all the docsite urls in the given strings.""" 113 return {mo.group(0) for s in strs for mo in DOC_URL_RE.finditer(s)} 114 115 116 class DocUrlRewriter: 117 def __init__(self, slug_to_title: dict[str, str]): 118 self._slug_to_title = slug_to_title 119 120 def _rewrite_url(self, mo: re.Match) -> str: 121 # The docsite injects the version automatically at markdown rendering time, so we 122 # must not also do so, or it will be doubled, and the resulting links will be broken. 123 slug = mo.group("slug") 124 anchor = mo.group("anchor") or "" 125 title = self._slug_to_title.get(slug) 126 if not title: 127 raise ValueError(f"Found empty or no title for {mo.group(0)}") 128 return f"[{title}](doc:{slug}{anchor})" 129 130 def rewrite(self, s: str) -> str: 131 return DOC_URL_RE.sub(self._rewrite_url, s) 132 133 134 class TitleFinder(HTMLParser): 135 """Grabs the page title out of a docsite page.""" 136 137 def __init__(self): 138 super().__init__() 139 self._in_title: bool = False 140 self._title: str | None = None 141 142 def handle_starttag(self, tag, attrs): 143 if tag == "title": 144 self._in_title = True 145 146 def handle_endtag(self, tag): 147 if tag == "title": 148 self._in_title = False 149 150 def handle_data(self, data): 151 if self._in_title: 152 self._title = data.strip() 153 154 @property 155 def title(self) -> str: 156 return self._title or "" 157 158 159 def get_url(url: str): 160 response = requests.get(url) 161 if response.status_code != 200: 162 die( 163 softwrap( 164 f""" 165 Error getting URL: {url} 166 167 If the URL is pantsbuild.org, a `doc_url` link might be using the wrong slug or the 168 docs for this version might be unpublished. Otherwise, the link might be dead. 169 170 You can use `--skip-check-urls` to skip. 171 """ 172 ) 173 ) 174 return response 175 176 177 def get_titles(urls: set[str]) -> dict[str, str]: 178 """Return map from slug->title for each given docsite URL.""" 179 180 # TODO: Parallelize the http requests. 181 # E.g., by turning generate_docs.py into a plugin goal and using the engine. 182 ret = {} 183 for url in urls: 184 title_finder = TitleFinder() 185 title_finder.feed(get_url(url).text) 186 ret[get_doc_slug(url)] = title_finder.title 187 return ret 188 189 190 def create_parser() -> argparse.ArgumentParser: 191 parser = argparse.ArgumentParser(description="Generate the Pants reference markdown files.") 192 parser.add_argument( 193 "--no-prompt", 194 action="store_true", 195 default=False, 196 help="Don't prompt the user, accept defaults for all questions.", 197 ) 198 parser.add_argument( 199 "--sync", 200 action="store_true", 201 default=False, 202 help=softwrap( 203 """ 204 Whether to sync the generated reference docs to the docsite. 205 If unset, will generate markdown files to the path in --output 206 instead. If set, --api-key must be set. 207 """ 208 ), 209 ) 210 parser.add_argument( 211 "--output", 212 default=PosixPath(os.path.sep) / "tmp" / "pants_docs" / "help" / "option", 213 type=Path, 214 help=softwrap( 215 """ 216 Path to a directory under which we generate the markdown files. 217 Useful for viewing the files locally when testing and debugging 218 the renderer. 219 """ 220 ), 221 ) 222 parser.add_argument("--api-key", help="The readme.io API key to use. 
Required for --sync.") 223 parser.add_argument( 224 "--skip-check-urls", 225 action="store_true", 226 default=False, 227 help="Skip checking URLs (including pantsbuild.org ones).", 228 ) 229 return parser 230 231 232 def run_pants_help_all() -> dict[str, Any]: 233 # List all (stable enough) backends here. 234 backends = [ 235 "pants.backend.awslambda.python", 236 "pants.backend.codegen.protobuf.lint.buf", 237 "pants.backend.codegen.protobuf.python", 238 "pants.backend.codegen.thrift.apache.python", 239 "pants.backend.docker", 240 "pants.backend.docker.lint.hadolint", 241 "pants.backend.experimental.codegen.protobuf.go", 242 "pants.backend.experimental.codegen.protobuf.java", 243 "pants.backend.experimental.codegen.protobuf.scala", 244 "pants.backend.experimental.go", 245 "pants.backend.experimental.helm", 246 "pants.backend.experimental.java", 247 "pants.backend.experimental.java.lint.google_java_format", 248 "pants.backend.experimental.kotlin", 249 "pants.backend.experimental.kotlin.lint.ktlint", 250 "pants.backend.experimental.python", 251 "pants.backend.experimental.python.lint.autoflake", 252 "pants.backend.experimental.python.lint.pyupgrade", 253 "pants.backend.experimental.python.packaging.pyoxidizer", 254 "pants.backend.experimental.scala", 255 "pants.backend.experimental.scala.lint.scalafmt", 256 "pants.backend.experimental.terraform", 257 "pants.backend.google_cloud_function.python", 258 "pants.backend.plugin_development", 259 "pants.backend.python", 260 "pants.backend.python.lint.bandit", 261 "pants.backend.python.lint.black", 262 "pants.backend.python.lint.docformatter", 263 "pants.backend.python.lint.flake8", 264 "pants.backend.python.lint.isort", 265 "pants.backend.python.lint.pylint", 266 "pants.backend.python.lint.yapf", 267 "pants.backend.python.mixed_interpreter_constraints", 268 "pants.backend.python.typecheck.mypy", 269 "pants.backend.shell", 270 "pants.backend.shell.lint.shellcheck", 271 "pants.backend.shell.lint.shfmt", 272 ] 273 argv = [ 274 "./pants", 275 "--concurrent", 276 "--plugins=[]", 277 f"--backend-packages={repr(backends)}", 278 "--no-verify-config", 279 "--remote-auth-plugin= ", 280 "help-all", 281 ] 282 run = subprocess.run(argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding="utf-8") 283 try: 284 run.check_returncode() 285 except subprocess.CalledProcessError: 286 logger.error( 287 softwrap( 288 f""" 289 Running {argv} failed with exit code {run.returncode}. 
290 291 stdout: 292 {textwrap.indent(run.stdout, " " * 4)} 293 294 stderr: 295 {textwrap.indent(run.stderr, " " * 4)} 296 """ 297 ) 298 ) 299 raise 300 return cast("dict[str, Any]", json.loads(run.stdout)) 301 302 303 def value_strs_iter(help_info: dict[str, Any]) -> Iterable[str]: 304 def _recurse(val: Any) -> Iterable[str]: 305 if isinstance(val, str): 306 yield val 307 if isinstance(val, dict): 308 for v in val.values(): 309 yield from _recurse(v) 310 if isinstance(val, list): 311 for v in val: 312 yield from _recurse(v) 313 314 yield from _recurse(help_info) 315 316 317 def rewrite_value_strs(help_info: dict[str, Any], slug_to_title: dict[str, str]) -> dict[str, Any]: 318 """Return a copy of the argument with rewritten docsite URLs.""" 319 rewriter = DocUrlRewriter(slug_to_title) 320 321 def _recurse(val: Any) -> Any: 322 if isinstance(val, str): 323 return rewriter.rewrite(val) 324 if isinstance(val, dict): 325 return {k: _recurse(v) for k, v in val.items()} 326 if isinstance(val, list): 327 return [_recurse(x) for x in val] 328 return val 329 330 return cast("dict[str, Any]", _recurse(help_info)) 331 332 333 class ReferenceGenerator: 334 def __init__(self, args: argparse.Namespace, version: str, help_info: dict[str, Any]) -> None: 335 self._args = args 336 337 self._readme_api = ReadmeAPI(api_key=self._args.api_key, version=version) 338 339 def get_tpl(name: str) -> str: 340 # Note that loading relative to __name__ may not always work when __name__=='__main__'. 341 buf = pkgutil.get_data("generate_docs", f"docs_templates/{name}") 342 if buf is None: 343 raise ValueError(f"No such template: {name}") 344 return buf.decode() 345 346 options_scope_tpl = get_tpl("options_scope_reference.md.mustache") 347 single_option_tpl = get_tpl("single_option_reference.md.mustache") 348 target_tpl = get_tpl("target_reference.md.mustache") 349 self._renderer_args = { 350 "partials_dict": { 351 "scoped_options": options_scope_tpl, 352 "single_option": single_option_tpl, 353 "target": target_tpl, 354 } 355 } 356 self._category_id: str | None = None # Fetched lazily. 357 358 # Load the data. 359 self._options_info = self.process_options_input(help_info, sync=self._args.sync) 360 self._targets_info = self.process_targets_input(help_info) 361 362 @staticmethod 363 def _link(scope: str, *, sync: bool) -> str: 364 # docsite pages link to the slug, local pages to the .md source. 365 url_safe_scope = scope.replace(".", "-") 366 return f"reference-{url_safe_scope}" if sync else f"{url_safe_scope}.md" 367 368 @classmethod 369 def process_options_input(cls, help_info: dict[str, Any], *, sync: bool) -> dict: 370 scope_to_help_info = help_info["scope_to_help_info"] 371 372 # Process the list of consumed_scopes into a comma-separated list, and add it to the option 373 # info for the goal's scope, to make it easy to render in the goal's options page. 374 375 for goal, goal_info in help_info["name_to_goal_info"].items(): 376 assert isinstance(goal_info, dict) 377 consumed_scopes = sorted(goal_info["consumed_scopes"]) 378 linked_consumed_scopes = [ 379 f"[{cs}]({cls._link(cs, sync=sync)})" 380 for cs in consumed_scopes 381 if cs and cs != goal_info["name"] 382 ] 383 comma_separated_consumed_scopes = ", ".join(linked_consumed_scopes) 384 scope_to_help_info[goal][ 385 "comma_separated_consumed_scopes" 386 ] = comma_separated_consumed_scopes 387 388 # Process the option data. 
389 390 def munge_option(option_data): 391 # Munge the default so we can display it nicely when it's multiline, while 392 # still displaying it inline if it's not. 393 default_help_repr = option_data.get("default_help_repr") 394 if default_help_repr is None: 395 default_str = to_help_str(option_data["default"]) 396 else: 397 # It should already be a string, but might as well be safe. 398 default_str = to_help_str(default_help_repr) 399 # Some option defaults are paths under the buildroot, and we don't want the paths 400 # of the environment in which we happened to run the doc generator to leak into the 401 # published docs. So we replace with a placeholder string. 402 default_str = default_str.replace(get_buildroot(), "<buildroot>") 403 # Similarly, some option defaults are paths under the user's cache dir, so we replace 404 # with a placeholder for the same reason. Using $XDG_CACHE_HOME as the placeholder is 405 # a useful hint to how the cache dir may be set, even though in practice the user may 406 # not have this env var set. But googling XDG_CACHE_HOME will bring up documentation 407 # of the ~/.cache fallback, so this seems like a sensible placeholder. 408 default_str = default_str.replace(get_pants_cachedir(), "$XDG_CACHE_HOME") 409 escaped_default_str = ( 410 html.escape(default_str, quote=False).replace("*", "&ast;").replace("_", "&lowbar;") 411 ) 412 if "\n" in default_str: 413 option_data["marked_up_default"] = f"<pre>{escaped_default_str}</pre>" 414 else: 415 option_data["marked_up_default"] = f"<code>{escaped_default_str}</code>" 416 417 for shi in scope_to_help_info.values(): 418 for opt in shi["basic"]: 419 munge_option(opt) 420 for opt in shi["advanced"]: 421 munge_option(opt) 422 for opt in shi["deprecated"]: 423 munge_option(opt) 424 425 return help_info 426 427 @classmethod 428 def process_targets_input(cls, help_info: dict[str, Any]) -> dict[str, dict[str, Any]]: 429 target_info = help_info["name_to_target_type_info"] 430 for target in target_info.values(): 431 for field in target["fields"]: 432 # Combine the `default` and `required` properties. 433 default_str = ( 434 html.escape(str(field["default"])) 435 .replace("*", "&ast;") 436 .replace("_", "&lowbar;") 437 ) 438 field["default_or_required"] = ( 439 "required" if field["required"] else f"default: <code>{default_str}</code>" 440 ) 441 field["description"] = str(field["description"]) 442 target["fields"] = sorted( 443 target["fields"], key=lambda fld: (-fld["required"], cast(str, fld["alias"])) 444 ) 445 target["description"] = str(target["description"]) 446 447 return cast(Dict[str, Dict[str, Any]], target_info) 448 449 @property 450 def category_id(self) -> str: 451 """The id of the "Reference" category on the docsite.""" 452 if self._category_id is None: 453 self._category_id = self._readme_api.get_category("reference").id 454 return self._category_id 455 456 def _create(self, parent_doc_id: str | None, slug_suffix: str, title: str, body: str) -> None: 457 """Create a new docsite reference page. 458 459 Operates by creating a placeholder page, and then populating it via _update(). 460 461 This works around a quirk of the readme.io API: You cannot set the page slug when you 462 create a page. Instead it is derived from the title. 463 In fact there is no way to set or modify the slug via the API at all, which makes sense 464 since the API references the page via the slug. When you change the slug in the UI 465 it is likely deleting and recreating the page under the covers. 
466 467 This is a problem if you want the slug to be different than the human-readable title, 468 as we do in this case. Specifically, we want the human-readable page title to be just 469 the scope name, e.g., `test` (so it appears that way in the sidebar). But we want the 470 slug to be `reference-test`, so that it doesn't collide with any other, non-generated page 471 that happens to occupy the slug `test`. 472 473 To solve this we create the placeholder page with a title from which to derive the slug, 474 and when we update the page to set its content, we update the title to be the 475 one we want humans to see (this will not change the slug, see above). 476 """ 477 slug = f"reference-{slug_suffix}" 478 self._readme_api.create_doc( 479 title=slug, category=self.category_id, parentDoc=parent_doc_id, hidden=False 480 ) 481 482 # Placeholder page exists, now update it with the real title and body. 483 self._readme_api.update_doc(slug=slug, title=title, category=self.category_id, body=body) 484 485 def _render_target(self, alias: str) -> str: 486 return cast( 487 str, chevron.render("{{> target}}", self._targets_info[alias], **self._renderer_args) 488 ) 489 490 def _render_options_body(self, scope_help_info: dict) -> str: 491 """Renders the body of a single options help page.""" 492 return cast( 493 str, chevron.render("{{> scoped_options}}", scope_help_info, **self._renderer_args) 494 ) 495 496 @classmethod 497 def _render_parent_page_body(cls, items: Iterable[str], *, sync: bool) -> str: 498 """Returns the body of a parent page for the given items.""" 499 # The page just lists the items, with links to the page for each one. 500 lines = [f"- [{item}]({cls._link(item, sync=sync)})" for item in items] 501 return "\n".join(lines) 502 503 def render(self) -> None: 504 """Renders the pages to local disk. 505 506 Useful for debugging and iterating on the markdown. 507 """ 508 output_dir = Path(self._args.output) 509 output_dir.mkdir(parents=True, exist_ok=True) 510 511 goals = [ 512 scope 513 for scope, shi in self._options_info["scope_to_help_info"].items() 514 if shi["is_goal"] 515 ] 516 subsystems = [ 517 scope 518 for scope, shi in self._options_info["scope_to_help_info"].items() 519 if scope and not shi["is_goal"] 520 ] 521 522 def write(filename: str, content: str) -> None: 523 path = output_dir / filename 524 path.write_text(content) 525 logger.info(f"Wrote {path}") 526 527 write("goals-index.md", self._render_parent_page_body(sorted(goals), sync=False)) 528 write("subsystems-index.md", self._render_parent_page_body(sorted(subsystems), sync=False)) 529 for shi in self._options_info["scope_to_help_info"].values(): 530 write(f"{shi['scope'] or 'GLOBAL'}.md", self._render_options_body(shi)) 531 532 write( 533 "targets-index.md", 534 self._render_parent_page_body(sorted(self._targets_info.keys()), sync=False), 535 ) 536 for alias in self._targets_info.keys(): 537 write(f"{alias}.md", self._render_target(alias)) 538 539 def sync(self) -> None: 540 """Render the pages and sync them to the live docsite. 541 542 All pages live under the "reference" category. 543 544 There are four top-level pages under that category: 545 - Global options 546 - The Goals parent page 547 - The Subsystems parent page 548 - The Targets parent page 549 550 The individual reference pages are nested under these parent pages. 551 """ 552 # Docs appear on the site in creation order. If we only create new docs 553 # that don't already exist then they will appear at the end, instead of in 554 # alphabetical order. 
So we first delete all previous docs, then recreate them. 555 # 556 # TODO: Instead of deleting and recreating, we can set the order explicitly. 557 # 558 # Note that deleting a non-empty parent will fail, so we delete children first. 559 def do_delete(docref: DocRef): 560 for child in docref.children: 561 do_delete(child) 562 self._readme_api.delete_doc(docref.slug) 563 564 docrefs = self._readme_api.get_docs_for_category("reference") 565 566 for docref in docrefs: 567 do_delete(docref) 568 569 # Partition the scopes into goals and subsystems. 570 goals = {} 571 subsystems = {} 572 for scope, shi in self._options_info["scope_to_help_info"].items(): 573 if scope == "": 574 continue # We handle the global scope separately. 575 if shi["is_goal"]: 576 goals[scope] = shi 577 else: 578 subsystems[scope] = shi 579 580 # Create the top-level docs in order. 581 self._create( 582 parent_doc_id=None, 583 slug_suffix="global", 584 title="Global options", 585 body=self._render_options_body(self._options_info["scope_to_help_info"][""]), 586 ) 587 self._create( 588 parent_doc_id=None, 589 slug_suffix="all-goals", 590 title="Goals", 591 body=self._render_parent_page_body(sorted(goals.keys()), sync=True), 592 ) 593 self._create( 594 parent_doc_id=None, 595 slug_suffix="all-subsystems", 596 title="Subsystems", 597 body=self._render_parent_page_body(sorted(subsystems.keys()), sync=True), 598 ) 599 self._create( 600 parent_doc_id=None, 601 slug_suffix="all-targets", 602 title="Targets", 603 body=self._render_parent_page_body(sorted(self._targets_info.keys()), sync=True), 604 ) 605 606 # Create the individual goal/subsystem/target docs. 607 all_goals_doc_id = self._readme_api.get_doc("reference-all-goals").id 608 for scope, shi in sorted(goals.items()): 609 self._create( 610 parent_doc_id=all_goals_doc_id, 611 slug_suffix=scope, 612 title=scope, 613 body=self._render_options_body(shi), 614 ) 615 616 all_subsystems_doc_id = self._readme_api.get_doc("reference-all-subsystems").id 617 for scope, shi in sorted(subsystems.items()): 618 self._create( 619 parent_doc_id=all_subsystems_doc_id, 620 slug_suffix=scope.replace(".", "-"), 621 title=scope, 622 body=self._render_options_body(shi), 623 ) 624 625 all_targets_doc_id = self._readme_api.get_doc("reference-all-targets").id 626 for alias, data in sorted(self._targets_info.items()): 627 self._create( 628 parent_doc_id=all_targets_doc_id, 629 slug_suffix=alias, 630 title=alias, 631 body=self._render_target(alias), 632 ) 633 634 635 if __name__ == "__main__": 636 main() 637 [end of build-support/bin/generate_docs.py] [start of src/python/pants/backend/google_cloud_function/python/rules.py] 1 # Copyright 2019 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 
3 4 from __future__ import annotations 5 6 import logging 7 from dataclasses import dataclass 8 9 from pants.backend.google_cloud_function.python.target_types import ( 10 PythonGoogleCloudFunctionHandlerField, 11 PythonGoogleCloudFunctionRuntime, 12 PythonGoogleCloudFunctionType, 13 ResolvedPythonGoogleHandler, 14 ResolvePythonGoogleHandlerRequest, 15 ) 16 from pants.backend.python.subsystems.lambdex import Lambdex 17 from pants.backend.python.target_types import PexCompletePlatformsField 18 from pants.backend.python.util_rules import pex_from_targets 19 from pants.backend.python.util_rules.pex import ( 20 CompletePlatforms, 21 Pex, 22 PexPlatforms, 23 PexRequest, 24 VenvPex, 25 VenvPexProcess, 26 ) 27 from pants.backend.python.util_rules.pex_from_targets import PexFromTargetsRequest 28 from pants.core.goals.package import ( 29 BuiltPackage, 30 BuiltPackageArtifact, 31 OutputPathField, 32 PackageFieldSet, 33 ) 34 from pants.core.target_types import FileSourceField 35 from pants.engine.platform import Platform 36 from pants.engine.process import ProcessResult 37 from pants.engine.rules import Get, MultiGet, collect_rules, rule 38 from pants.engine.target import ( 39 TransitiveTargets, 40 TransitiveTargetsRequest, 41 targets_with_sources_types, 42 ) 43 from pants.engine.unions import UnionMembership, UnionRule 44 from pants.util.docutil import bin_name, doc_url 45 from pants.util.logging import LogLevel 46 47 logger = logging.getLogger(__name__) 48 49 50 @dataclass(frozen=True) 51 class PythonGoogleCloudFunctionFieldSet(PackageFieldSet): 52 required_fields = (PythonGoogleCloudFunctionHandlerField,) 53 54 handler: PythonGoogleCloudFunctionHandlerField 55 runtime: PythonGoogleCloudFunctionRuntime 56 complete_platforms: PexCompletePlatformsField 57 type: PythonGoogleCloudFunctionType 58 output_path: OutputPathField 59 60 61 @rule(desc="Create Python Google Cloud Function", level=LogLevel.DEBUG) 62 async def package_python_google_cloud_function( 63 field_set: PythonGoogleCloudFunctionFieldSet, 64 lambdex: Lambdex, 65 platform: Platform, 66 union_membership: UnionMembership, 67 ) -> BuiltPackage: 68 if platform.is_macos: 69 logger.warning( 70 "Google Cloud Functions built on macOS may fail to build. If your function uses any" 71 " third-party dependencies without binary wheels (bdist) for Linux available, it will" 72 " fail to build. If this happens, you will either need to update your dependencies to" 73 " only use dependencies with pre-built wheels, or find a Linux environment to run" 74 f" {bin_name()} package. (See https://realpython.com/python-wheels/ for more about" 75 " wheels.)\n\n(If the build does not raise an exception, it's safe to use macOS.)" 76 ) 77 78 output_filename = field_set.output_path.value_or_default( 79 # Cloud Functions typically use the .zip suffix, so we use that instead of .pex. 80 file_ending="zip", 81 ) 82 83 # We hardcode the platform value to the appropriate one for each Google Cloud Function runtime. 84 # (Running the "hello world" cloud function in the example code will report the platform, and can be 85 # used to verify correctness of these platform strings.) 
86 pex_platforms = [] 87 interpreter_version = field_set.runtime.to_interpreter_version() 88 if interpreter_version: 89 py_major, py_minor = interpreter_version 90 platform_str = f"linux_x86_64-cp-{py_major}{py_minor}-cp{py_major}{py_minor}" 91 # set pymalloc ABI flag - this was removed in python 3.8 https://bugs.python.org/issue36707 92 if py_major <= 3 and py_minor < 8: 93 platform_str += "m" 94 pex_platforms.append(platform_str) 95 96 additional_pex_args = ( 97 # Ensure we can resolve manylinux wheels in addition to any AMI-specific wheels. 98 "--manylinux=manylinux2014", 99 # When we're executing Pex on Linux, allow a local interpreter to be resolved if 100 # available and matching the AMI platform. 101 "--resolve-local-platforms", 102 ) 103 104 complete_platforms = await Get( 105 CompletePlatforms, PexCompletePlatformsField, field_set.complete_platforms 106 ) 107 108 pex_request = PexFromTargetsRequest( 109 addresses=[field_set.address], 110 internal_only=False, 111 output_filename=output_filename, 112 platforms=PexPlatforms(pex_platforms), 113 complete_platforms=complete_platforms, 114 additional_args=additional_pex_args, 115 additional_lockfile_args=additional_pex_args, 116 ) 117 118 lambdex_request = PexRequest( 119 output_filename="lambdex.pex", 120 internal_only=True, 121 requirements=lambdex.pex_requirements(), 122 interpreter_constraints=lambdex.interpreter_constraints, 123 main=lambdex.main, 124 ) 125 126 lambdex_pex, pex_result, handler, transitive_targets = await MultiGet( 127 Get(VenvPex, PexRequest, lambdex_request), 128 Get(Pex, PexFromTargetsRequest, pex_request), 129 Get(ResolvedPythonGoogleHandler, ResolvePythonGoogleHandlerRequest(field_set.handler)), 130 Get(TransitiveTargets, TransitiveTargetsRequest([field_set.address])), 131 ) 132 133 # Warn if users depend on `files` targets, which won't be included in the PEX and is a common 134 # gotcha. 135 file_tgts = targets_with_sources_types( 136 [FileSourceField], transitive_targets.dependencies, union_membership 137 ) 138 if file_tgts: 139 files_addresses = sorted(tgt.address.spec for tgt in file_tgts) 140 logger.warning( 141 f"The `python_google_cloud_function` target {field_set.address} transitively depends " 142 "on the below `files` targets, but Pants will not include them in the built Cloud " 143 "Function. Filesystem APIs like `open()` are not able to load files within the binary " 144 "itself; instead, they read from the current working directory." 145 f"\n\nInstead, use `resources` targets. See {doc_url('resources')}." 146 f"\n\nFiles targets dependencies: {files_addresses}" 147 ) 148 149 # NB: Lambdex modifies its input pex in-place, so the input file is also the output file. 150 result = await Get( 151 ProcessResult, 152 VenvPexProcess( 153 lambdex_pex, 154 argv=("build", "-M", "main.py", "-e", handler.val, output_filename), 155 input_digest=pex_result.digest, 156 output_files=(output_filename,), 157 description=f"Setting up handler in {output_filename}", 158 ), 159 ) 160 161 extra_log_data: list[tuple[str, str]] = [] 162 if field_set.runtime.value: 163 extra_log_data.append(("Runtime", field_set.runtime.value)) 164 extra_log_data.extend(("Complete platform", path) for path in complete_platforms) 165 # The GCP-facing handler function is always main.handler, which is the 166 # wrapper injected by lambdex that manages invocation of the actual handler. 
167 extra_log_data.append(("Handler", "main.handler")) 168 169 first_column_width = 4 + max(len(header) for header, _ in extra_log_data) 170 artifact = BuiltPackageArtifact( 171 output_filename, 172 extra_log_lines=tuple( 173 f"{header.rjust(first_column_width, ' ')}: {data}" for header, data in extra_log_data 174 ), 175 ) 176 return BuiltPackage(digest=result.output_digest, artifacts=(artifact,)) 177 178 179 def rules(): 180 return [ 181 *collect_rules(), 182 UnionRule(PackageFieldSet, PythonGoogleCloudFunctionFieldSet), 183 *pex_from_targets.rules(), 184 ] 185 [end of src/python/pants/backend/google_cloud_function/python/rules.py] [start of src/python/pants/bsp/protocol.py] 1 # Copyright 2022 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 from __future__ import annotations 4 5 import logging 6 from concurrent.futures import Future 7 from typing import Any, BinaryIO, ClassVar 8 9 from pylsp_jsonrpc.endpoint import Endpoint # type: ignore[import] 10 from pylsp_jsonrpc.exceptions import ( # type: ignore[import] 11 JsonRpcException, 12 JsonRpcInvalidRequest, 13 JsonRpcMethodNotFound, 14 ) 15 from pylsp_jsonrpc.streams import JsonRpcStreamReader, JsonRpcStreamWriter # type: ignore[import] 16 17 from pants.bsp.context import BSPContext 18 from pants.bsp.spec.notification import BSPNotification 19 from pants.engine.fs import Workspace 20 from pants.engine.internals.scheduler import SchedulerSession 21 from pants.engine.internals.selectors import Params 22 from pants.engine.unions import UnionMembership, union 23 24 try: 25 from typing import Protocol # Python 3.8+ 26 except ImportError: 27 # See https://github.com/python/mypy/issues/4427 re the ignore 28 from typing_extensions import Protocol # type: ignore 29 30 _logger = logging.getLogger(__name__) 31 32 33 class BSPRequestTypeProtocol(Protocol): 34 @classmethod 35 def from_json_dict(cls, d: dict[str, Any]) -> Any: 36 ... 37 38 39 class BSPResponseTypeProtocol(Protocol): 40 def to_json_dict(self) -> dict[str, Any]: 41 ... 42 43 44 @union 45 class BSPHandlerMapping: 46 """Union type for rules to register handlers for BSP methods.""" 47 48 # Name of the JSON-RPC method to be handled. 49 method_name: ClassVar[str] 50 51 # Type requested from the engine. This will be provided as the "subject" of an engine query. 52 # Must implement class method `from_json_dict`. 53 request_type: type[BSPRequestTypeProtocol] 54 55 # Type produced by the handler rule. This will be requested as the "product" of the engine query. 56 # Must implement instance method `to_json_dict`. 57 response_type: type[BSPResponseTypeProtocol] 58 59 # True if this handler is for a notification. 60 # TODO: Consider how to pass notifications (which do not have responses) to the engine rules. 
61 is_notification: bool = False 62 63 64 def _make_error_future(exc: Exception) -> Future: 65 fut: Future = Future() 66 fut.set_exception(exc) 67 return fut 68 69 70 class BSPConnection: 71 _INITIALIZE_METHOD_NAME = "build/initialize" 72 _SHUTDOWN_METHOD_NAME = "build/shutdown" 73 _EXIT_NOTIFCATION_NAME = "build/exit" 74 75 def __init__( 76 self, 77 scheduler_session: SchedulerSession, 78 union_membership: UnionMembership, 79 context: BSPContext, 80 inbound: BinaryIO, 81 outbound: BinaryIO, 82 max_workers: int = 5, 83 ) -> None: 84 self._scheduler_session = scheduler_session 85 self._inbound = JsonRpcStreamReader(inbound) 86 self._outbound = JsonRpcStreamWriter(outbound) 87 self._context: BSPContext = context 88 self._endpoint = Endpoint(self, self._send_outbound_message, max_workers=max_workers) 89 90 self._handler_mappings: dict[str, type[BSPHandlerMapping]] = {} 91 impls = union_membership.get(BSPHandlerMapping) 92 for impl in impls: 93 self._handler_mappings[impl.method_name] = impl 94 95 def run(self) -> None: 96 """Run the listener for inbound JSON-RPC messages.""" 97 self._inbound.listen(self._received_inbound_message) 98 99 def _received_inbound_message(self, msg): 100 """Process each inbound JSON-RPC message.""" 101 _logger.info(f"_received_inbound_message: msg={msg}") 102 self._endpoint.consume(msg) 103 104 def _send_outbound_message(self, msg): 105 _logger.info(f"_send_outbound_message: msg={msg}") 106 self._outbound.write(msg) 107 108 # TODO: Figure out how to run this on the `Endpoint`'s thread pool by returing a callable. For now, we 109 # need to return errors as futures given that `Endpoint` only handles exceptions returned that way versus using a try ... except block. 110 def _handle_inbound_message(self, *, method_name: str, params: Any): 111 # If the connection is not yet initialized and this is not the initialization request, BSP requires 112 # returning an error for methods (and to discard all notifications). 113 # 114 # Concurrency: This method can be invoked from multiple threads (for each individual request). By returning 115 # an error for all other requests, only the thread running the initialization RPC should be able to proceed. 116 # This ensures that we can safely call `initialize_connection` on the BSPContext with the client-supplied 117 # init parameters without worrying about multiple threads. (Not entirely true though as this does not handle 118 # the client making multiple concurrent initialization RPCs, but which would violate the protocol in any case.) 119 if ( 120 not self._context.is_connection_initialized 121 and method_name != self._INITIALIZE_METHOD_NAME 122 ): 123 return _make_error_future( 124 JsonRpcException( 125 code=-32002, message=f"Client must first call `{self._INITIALIZE_METHOD_NAME}`." 126 ) 127 ) 128 129 # Handle the `build/shutdown` method and `build/exit` notification. 130 if method_name == self._SHUTDOWN_METHOD_NAME: 131 # Return no-op success for the `build/shutdown` method. This doesn't actually cause the server to 132 # exit. That will occur once the client sends the `build/exit` notification. 133 return None 134 elif method_name == self._EXIT_NOTIFCATION_NAME: 135 # The `build/exit` notification directs the BSP server to immediately exit. 136 # The read-dispatch loop will exit once it notices that the inbound handle is closed. So close the 137 # inbound handle (and outbound handle for completeness) and then return to the dispatch loop 138 # to trigger the exit. 
139 self._inbound.close() 140 self._outbound.close() 141 return None 142 143 method_mapping = self._handler_mappings.get(method_name) 144 if not method_mapping: 145 return _make_error_future(JsonRpcMethodNotFound.of(method_name)) 146 147 try: 148 request = method_mapping.request_type.from_json_dict(params) 149 except Exception: 150 return _make_error_future(JsonRpcInvalidRequest()) 151 152 # TODO: This should not be necessary: see https://github.com/pantsbuild/pants/issues/15435. 153 self._scheduler_session.new_run_id() 154 155 workspace = Workspace(self._scheduler_session) 156 params = Params(request, workspace) 157 execution_request = self._scheduler_session.execution_request( 158 requests=[(method_mapping.response_type, params)], 159 ) 160 (result,) = self._scheduler_session.execute(execution_request) 161 # Initialize the BSPContext with the client-supplied init parameters. See earlier comment on why this 162 # call to `BSPContext.initialize_connection` is safe. 163 if method_name == self._INITIALIZE_METHOD_NAME: 164 self._context.initialize_connection(request, self.notify_client) 165 return result.to_json_dict() 166 167 # Called by `Endpoint` to dispatch requests and notifications. 168 # TODO: Should probably vendor `Endpoint` so we can detect notifications versus method calls, which 169 # matters when ignoring unknown notifications versus erroring for unknown methods. 170 def __getitem__(self, method_name): 171 def handler(params): 172 return self._handle_inbound_message(method_name=method_name, params=params) 173 174 return handler 175 176 def notify_client(self, notification: BSPNotification) -> None: 177 try: 178 self._endpoint.notify(notification.notification_name, notification.to_json_dict()) 179 except Exception as ex: 180 _logger.warning(f"Received exception while notifying BSP client: {ex}") 181 [end of src/python/pants/bsp/protocol.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pantsbuild/pants
bf10c3792c5c3b74d40968f0d2e806fa6d47c8bb
google_cloud_function backend: __path__ attribute not found on 'main' while trying to find 'main.handler' **Describe the bug** Pants version: 2.12.0 package build with `pants.backend.google_cloud_function.python` does not work. ``` python_google_cloud_function( name="cloud_function", runtime="python39", handler="function.py:main", type="http", ) ``` Error when deploy into GCP cloud function: ``` ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation. ``` Checking run log: ``` Traceback (most recent call last): File "/opt/python3.9/lib/python3.9/importlib/util.py", line 96, in find_spec parent_path = parent.__path__ AttributeError: module 'main' has no attribute '__path__' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/python3.9/lib/python3.9/pkgutil.py", line 495, in find_loader spec = importlib.util.find_spec(fullname) File "/opt/python3.9/lib/python3.9/importlib/util.py", line 98, in find_spec raise ModuleNotFoundError( ModuleNotFoundError: __path__ attribute not found on 'main' while trying to find 'main.handler' The above exception was the direct cause of the following exception: [...] Line 481, in get_loader return find_loader(fullname) File "/opt/python3.9/lib/python3.9/pkgutil.py", line 501, in find_loader raise ImportError(msg.format(fullname, type(ex), ex)) from ex ImportError: Error while finding loader for 'main.handler' (<class 'ModuleNotFoundError'>: __path__ attribute not found on 'main' while trying to find 'main.handler') ``` A minimal reproducible repo: https://github.com/stephentt42/rp-issue-pantsbuild-01 You can see deploy script in `deploy.sh` I really appreciate any help. **OS** * Linux
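The traceback arises because the string given to `--entry-point` ends up being resolved as a dotted module name: `main.handler` is interpreted as a submodule `handler` inside a package `main`, and a plain `main.py` module has no `__path__`, hence the error. A rough illustration of that lookup (the helper name here is ours, not part of any framework):

```python
import importlib.util


def can_resolve(dotted_name: str) -> bool:
    """True if the dotted name resolves to an importable module or submodule."""
    try:
        return importlib.util.find_spec(dotted_name) is not None
    except ModuleNotFoundError:
        # Raised, e.g., when the parent exists only as a plain module without __path__.
        return False


if __name__ == "__main__":
    print(can_resolve("json.decoder"))  # True: json is a package with __path__
    print(can_resolve("main.handler"))  # False: no package "main" with a "handler" submodule
```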
Thanks for providing the example repository - that is a huge help. I repro: ``` $ GCP_PROJECT=pants-issues-16242 GCS_BUCKET=rp-issue-pantsbuild-01 ./deploy.sh 17:58:32.53 [INFO] Wrote dist/cloud_function.zip Runtime: python39 Handler: main.handler Copying file://dist/cloud_function.zip [Content-Type=application/zip]... / [1 files][472.5 KiB/472.5 KiB] Operation completed over 1 objects/472.5 KiB. API [cloudfunctions.googleapis.com] not enabled on project [1032983878945]. Would you like to enable and retry (this will take a few minutes)? (y/N)? y Enabling service [cloudfunctions.googleapis.com] on project [1032983878945]... Operation "operations/acf.p2-1032983878945-37f800c3-13db-4cd3-b047-867978183111" finished successfully. API [cloudbuild.googleapis.com] not enabled on project [pants-issues-16242]. Would you like to enable and retry (this will take a few minutes)? (y/N)? y Enabling service [cloudbuild.googleapis.com] on project [pants-issues-16242]... Operation "operations/acf.p2-1032983878945-0a042946-234e-4a5d-8a79-e694d04b9f80" finished successfully. Deploying function (may take a while - up to 2 minutes)...⠧ For Cloud Build Logs, visit: https://console.cloud.google.com/cloud-build/builds;region=us-central1/97bcdcb1-6990-4c90-a2cc-a555a92d71f6?project=1032983878945 Deploying function (may take a while - up to 2 minutes)...failed. ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation. 
``` And the logs show: ``` 2022-07-21 18:00:43.887 MDT topstops6o0bhe82pw0h Traceback (most recent call last): File "/layers/google.python.pip/pip/bin/functions-framework", line 8, in <module> sys.exit(_cli()) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py", line 1128, in __call__ return self.main(*args, **kwargs) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py", line 1053, in main rv = self.invoke(ctx) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py", line 1395, in invoke return ctx.invoke(self.callback, **ctx.params) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py", line 754, in invoke return __callback(*args, **kwargs) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/_cli.py", line 37, in _cli app = create_app(target, source, signature_type) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/__init__.py", line 263, in create_app _app = flask.Flask(target, template_folder=template_folder) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 397, in __init__ super().__init__( File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/scaffold.py", line 113, in __init__ root_path = get_root_path(self.import_name) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/helpers.py", line 721, in get_root_path loader = pkgutil.get_loader(import_name) File "/opt/python3.9/lib/python3.9/pkgutil.py", line 481, in get_loader return find_loader(fullname) File "/opt/python3.9/lib/python3.9/pkgutil.py", line 501, in find_loader raise ImportError(msg.format(fullname, type(ex), ex)) from ex ImportError: Error while finding loader for 'main.handler' (<class 'ModuleNotFoundError'>: __path__ attribute not found on 'main' while trying to find 'main.handler') Traceback (most recent call last): File "/layers/google.python.pip/pip/bin/functions-framework", line 8, in <module> sys.exit(_cli()) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py", line 1128, in __call__ return self.main(*args, **kwargs) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py", line 1053, in main rv = self.invoke(ctx) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py", line 1395, in invoke return ctx.invoke(self.callback, **ctx.params) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/click/core.py", line 754, in invoke return __callback(*args, **kwargs) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/_cli.py", line 37, in _cli app = create_app(target, source, signature_type) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/__init__.py", line 263, in create_app _app = flask.Flask(target, template_folder=template_folder) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py", line 397, in __init__ super().__init__( File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/scaffold.py", line 113, in __init__ root_path = get_root_path(self.import_name) File "/layers/google.python.pip/pip/lib/python3.9/site-packages/flask/helpers.py", line 721, in get_root_path loader = pkgutil.get_loader(import_name) File "/opt/python3.9/lib/python3.9/pkgutil.py", line 481, in get_loader return find_loader(fullname) File "/opt/python3.9/lib/python3.9/pkgutil.py", line 501, in find_loader raise 
ImportError(msg.format(fullname, type(ex), ex)) from ex ImportError: Error while finding loader for 'main.handler' (<class 'ModuleNotFoundError'>: __path__ attribute not found on 'main' while trying to find 'main.handler') ``` But with this diff: ```diff diff --git a/deploy.sh b/deploy.sh index 66e6cd4..6c7c747 100755 --- a/deploy.sh +++ b/deploy.sh @@ -6,6 +6,6 @@ set -o pipefail gsutil cp dist/cloud_function.zip gs://$GCS_BUCKET/ gcloud functions deploy topstops \ --source gs://$GCS_BUCKET/cloud_function.zip \ - --entry-point main.handler \ + --entry-point handler \ --runtime python39 --trigger-http \ --allow-unauthenticated --project $GCP_PROJECT ``` Success: ``` $ GCP_PROJECT=pants-issues-16242 GCS_BUCKET=rp-issue-pantsbuild-01 ./deploy.sh 18:14:17.74 [INFO] Wrote dist/cloud_function.zip Runtime: python39 Handler: main.handler Copying file://dist/cloud_function.zip [Content-Type=application/zip]... / [1 files][472.5 KiB/472.5 KiB] Operation completed over 1 objects/472.5 KiB. Deploying function (may take a while - up to 2 minutes)...⠛ For Cloud Build Logs, visit: https://console.cloud.google.com/cloud-build/builds;region=us-central1/9e05c598-4298-43c1-b110-9dfc83c51ee2?project=1032983878945 Deploying function (may take a while - up to 2 minutes)...done. availableMemoryMb: 256 buildId: 9e05c598-4298-43c1-b110-9dfc83c51ee2 buildName: projects/1032983878945/locations/us-central1/builds/9e05c598-4298-43c1-b110-9dfc83c51ee2 dockerRegistry: CONTAINER_REGISTRY entryPoint: handler httpsTrigger: securityLevel: SECURE_OPTIONAL url: https://us-central1-pants-issues-16242.cloudfunctions.net/topstops ingressSettings: ALLOW_ALL labels: deployment-tool: cli-gcloud name: projects/pants-issues-16242/locations/us-central1/functions/topstops runtime: python39 serviceAccountEmail: [email protected] sourceArchiveUrl: gs://rp-issue-pantsbuild-01/cloud_function.zip status: ACTIVE timeout: 60s updateTime: '2022-07-22T00:16:01.829Z' versionId: '4' ``` So the issue is in your `--entry-point` argument. It appears GCF for Python mandates the entrypoint will live in main.py and so the `main` module is assumed and should be left out. Just give your function name and all is well. @stephentt42 I'm going to close this as answered, but please speak up if you have further questions or difficulties getting this working. Aha, @stephentt42 is it this bit of documentation that led you astray?: https://www.pantsbuild.org/docs/google-cloud-function-python#step-4-upload-to-google-cloud Thank you for your help. It works as expected. Maybe in [this docs](https://www.pantsbuild.org/docs/google-cloud-function-python) the line: > You must specify the handler as `main.handler`. create some confusion. But I am truly grateful for your help. Thanks a lot. :D Hm, so what should the docs be? I'm going to re-open this issue because it sounds like we still have a "docs bug" Per my investigation above it seems like `s/main.handler/handler/`. That said, my investigation was narrow and directly tied to @stephentt42's sample repo and as a GCF total noob. What we need is someone who actually knows the toolsets to definitively suss this. Barring that, it needs one of us noobs to invest more time playing with GCF and making sure this is actually generally correct advice. A continual pitfall in Pants is not knowing the tools we support well and / or dogfooding that support we add. OK. This is all pretty baroque, but the function need not be in a top-level `main.py`, it just defaults that way. It can be in any file iff: 1. 
You configure a `GOOGLE_FUNCTION_SOURCE` build environment variable (`gcloud functions deploy {--build-env-vars-file,--set-build-env-vars}`). 2. That file's parent directory is acceptable as the `sys.path` entry for locating other requirements in the deployment zip. Either way, using the default top-level zip file of `main.py` or some other file in the zip pointed to by the `GOOGLE_FUNCTION_SOURCE` build env var, the handler is expected to be just the function name within that file. The practical upshot is you really need to be using a top-level `main.py` which is what Pants Lambdex GCF integration sets up; so this is a simple doc change in the end.
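In concrete terms, the zip built by Pants/Lambdex exposes a top-level `main.py`, and the value passed to `gcloud functions deploy --entry-point` should be just the function name inside it (`handler`), not `main.handler`. A minimal sketch of the kind of module GCF expects at the root of the archive (illustrative, not Lambdex's actual generated wrapper):

```python
# main.py at the root of the deployment zip; --entry-point names this function.
def handler(request):
    """HTTP Cloud Function entry point; `request` is a flask.Request."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```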
2022-07-23T17:31:36Z
<patch> diff --git a/src/python/pants/backend/google_cloud_function/python/rules.py b/src/python/pants/backend/google_cloud_function/python/rules.py --- a/src/python/pants/backend/google_cloud_function/python/rules.py +++ b/src/python/pants/backend/google_cloud_function/python/rules.py @@ -151,7 +151,7 @@ async def package_python_google_cloud_function( ProcessResult, VenvPexProcess( lambdex_pex, - argv=("build", "-M", "main.py", "-e", handler.val, output_filename), + argv=("build", "-M", "main.py", "-H", "handler", "-e", handler.val, output_filename), input_digest=pex_result.digest, output_files=(output_filename,), description=f"Setting up handler in {output_filename}", @@ -162,9 +162,15 @@ async def package_python_google_cloud_function( if field_set.runtime.value: extra_log_data.append(("Runtime", field_set.runtime.value)) extra_log_data.extend(("Complete platform", path) for path in complete_platforms) - # The GCP-facing handler function is always main.handler, which is the - # wrapper injected by lambdex that manages invocation of the actual handler. - extra_log_data.append(("Handler", "main.handler")) + # The GCP-facing handler function is always `main.handler` (We pass `-M main.py -H handler` to + # Lambdex to ensure this), which is the wrapper injected by Lambdex that manages invocation of + # the actual user-supplied handler function. This arrangement works well since GCF assumes the + # handler function is housed in `main.py` in the root of the zip (you can re-direct this by + # setting a `GOOGLE_FUNCTION_SOURCE` Google Cloud build environment variable; e.g.: + # `gcloud functions deploy {--build-env-vars-file,--set-build-env-vars}`, but it's non-trivial + # to do this right or with intended effect) and the handler name you configure GCF with is just + # the unqualified function name, which we log here. + extra_log_data.append(("Handler", "handler")) first_column_width = 4 + max(len(header) for header, _ in extra_log_data) artifact = BuiltPackageArtifact( </patch>
[]
[]
huggingface__transformers-7456
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> import error in version 3.3.0, conflict with local directory "datasets" ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.0 - Platform: Google Colab Model I am using :Bert ## To reproduce Steps to reproduce the behavior: Traceback (most recent call last): File "train.py", line 19, in <module> from mydataset import load_data,dist_load_data,load_data2 File "/content/drive/My Drive/mrc4ner/mydataset.py", line 5, in <module> from transformers import BertTokenizer File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 42, in <module> from .trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 6, in <module> from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 72, in <module> logger.debug(f"Succesfully imported datasets version {datasets.__version__}") AttributeError: module 'datasets' has no attribute '__version__' ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> My code works well before, and there is a "datasets" folder in my working directory. When my transformers version upgraded to 3.3.0, I get this error. If I change the name of the folder "datasets" or downgrade transformers to version 3.2.0, the error is get fixed. Is this a bug? Because it doesn't allow me to use "datasets" as a folder name. </issue> <code> [start of README.md] 1 <p align="center"> 2 <br> 3 <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/> 4 <br> 5 <p> 6 <p align="center"> 7 <a href="https://circleci.com/gh/huggingface/transformers"> 8 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"> 9 </a> 10 <a href="https://github.com/huggingface/transformers/blob/master/LICENSE"> 11 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 12 </a> 13 <a href="https://huggingface.co/transformers/index.html"> 14 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/transformers/index.html.svg?down_color=red&down_message=offline&up_message=online"> 15 </a> 16 <a href="https://github.com/huggingface/transformers/releases"> 17 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 18 </a> 19 </p> 20 21 <h3 align="center"> 22 <p>State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0 23 </h3> 24 25 🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation, text generation, etc in 100+ languages. Its aim is to make cutting-edge NLP easier to use for everyone. 26 27 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets then share them with the community on our [model hub](https://huggingface.co/models). 
At the same time, each python module defining an architecture can be used as a standalone and modified to enable quick research experiments. 28 29 🤗 Transformers is backed by the two most popular deep learning libraries, [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/), with a seamless integration between them, allowing you to train your models with one then load it for inference with the other. 30 31 ### Recent contributors 32 [![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/0)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/0)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/1)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/1)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/2)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/2)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/3)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/3)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/4)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/4)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/5)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/5)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/6)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/6)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/7)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/7) 33 34 ## Online demos 35 36 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer an [inference API](https://huggingface.co/pricing) to use those models. 
37 38 Here are a few examples: 39 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 40 - [Name Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 41 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 42 - [Natural Langugage Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 43 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 44 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 45 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 46 47 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities. 48 49 ## Quick tour 50 51 To immediately use a model on a given text, we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model training. 
Here is how to quickly use a pipeline to classify positive versus negative texts 52 53 ```python 54 >>> from transformers import pipeline 55 56 # Allocate a pipeline for sentiment-analysis 57 >>> classifier = pipeline('sentiment-analysis') 58 >>> classifier('We are very happy to include pipeline into the transformers repository.') 59 [{'label': 'POSITIVE', 'score': 0.9978193640708923}] 60 ``` 61 62 The second line of code downloads and caches the pretrained model used by the pipeline, the third line evaluates it on the given text. Here the answer is "positive" with a confidence of 99.8%. 63 64 This is another example of pipeline used for that can extract question answers from some context: 65 66 ``` python 67 >>> from transformers import pipeline 68 69 # Allocate a pipeline for question-answering 70 >>> question_answerer = pipeline('question-answering') 71 >>> question_answerer({ 72 ... 'question': 'What is the name of the repository ?', 73 ... 'context': 'Pipeline have been included in the huggingface/transformers repository' 74 ... }) 75 {'score': 0.5135612454720828, 'start': 35, 'end': 59, 'answer': 'huggingface/transformers'} 76 77 ``` 78 79 On top of the answer, the pretrained model used here returned its confidence score, along with the start position and its end position in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/transformers/task_summary.html). 80 81 To download and use any of the pretrained models on your given task, you just need to use those three lines of codes (PyTorch verison): 82 ```python 83 >>> from transformers import AutoTokenizer, AutoModel 84 85 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 86 >>> model = AutoModel.from_pretrained("bert-base-uncased") 87 88 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 89 >>> outputs = model(**inputs) 90 ``` 91 or for TensorFlow: 92 ```python 93 >>> from transformers import AutoTokenizer, TFAutoModel 94 95 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 96 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 97 98 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 99 >>> outputs = model(**inputs) 100 ``` 101 102 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on one (or list) of texts (as we can see on the fourth line of both code examples). It will output a dictionary you can directly pass to your model (which is done on the fifth line). 103 104 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use normally. For instance, [this tutorial](https://huggingface.co/transformers/training.html) explains how to integrate such a model in classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune the on a new dataset. 105 106 ## Why should I use transformers? 107 108 1. Easy-to-use state-of-the-art models: 109 - High performance on NLU and NLG tasks. 110 - Low barrier to entry for educators and practitioners. 111 - Few user-facing abastractions with just three classes to learn. 112 - A unified API for using all our pretrained models. 113 114 1. Lower compute costs, smaller carbon footprint: 115 - Researchers can share trained models instead of always retraining. 
116 - Practitioners can reduce compute time and production costs. 117 - Dozens of architectures with over 2,000 pretrained models, some in more than 100 languages. 118 119 1. Choose the right framework for every part of a model's lifetime: 120 - Train state-of-the-art models in 3 lines of code. 121 - Move a single model between TF2.0/PyTorch frameworks at will. 122 - Seamlessly pick the right framework for training, evaluation, production. 123 124 1. Easily customize a model or an example to your needs: 125 - Examples for each architecture to reproduce the results by the official authors of said architecture. 126 - Expose the models internal as consistently as possible. 127 - Model files can be used independently of the library for quick experiments. 128 129 ## Why shouldn't I use transformers? 130 131 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving in additional abstractions/files. 132 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library. 133 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/master/examples) are just that: examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. 134 135 ## Installation 136 137 This repository is tested on Python 3.6+, PyTorch 1.0.0+ (PyTorch 1.3.1+ for [examples](https://github.com/huggingface/transformers/tree/master/examples)) and TensorFlow 2.0. 138 139 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). 140 141 First, create a virtual environment with the version of Python you're going to use and activate it. 142 143 Then, you will need to install one of, or both, TensorFlow 2.0 and PyTorch. 144 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/pip#tensorflow-2.0-rc-is-available) and/or [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform. 145 146 When TensorFlow 2.0 and/or PyTorch has been installed, 🤗 Transformers can be installed using pip as follows: 147 148 ```bash 149 pip install transformers 150 ``` 151 152 If you'd like to play with the examples, you must [install the library from source](https://huggingface.co/transformers/installation.html#installing-from-source). 153 154 ## Models architectures 155 156 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/transformers/model_summary.html) for a high-level summary of each them): 157 158 1. **[BERT](https://huggingface.co/transformers/model_doc/bert.html)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 159 2. 
**[GPT](https://huggingface.co/transformers/model_doc/gpt.html)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 160 3. **[GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 161 4. **[Transformer-XL](https://huggingface.co/transformers/model_doc/transformerxl.html)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 162 5. **[XLNet](https://huggingface.co/transformers/model_doc/xlnet.html)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 163 6. **[XLM](https://huggingface.co/transformers/model_doc/xlm.html)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 164 7. **[RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 165 8. **[DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT. 166 9. **[CTRL](https://huggingface.co/transformers/model_doc/ctrl.html)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 167 10. **[CamemBERT](https://huggingface.co/transformers/model_doc/camembert.html)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 168 11. 
**[ALBERT](https://huggingface.co/transformers/model_doc/albert.html)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 169 12. **[T5](https://huggingface.co/transformers/model_doc/t5.html)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 170 13. **[XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 171 14. **[MMBT](https://github.com/facebookresearch/mmbt/)** (from Facebook), released together with the paper a [Supervised Multimodal Bitransformers for Classifying Images and Text](https://arxiv.org/pdf/1909.02950.pdf) by Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Davide Testuggine. 172 15. **[FlauBERT](https://huggingface.co/transformers/model_doc/flaubert.html)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 173 16. **[BART](https://huggingface.co/transformers/model_doc/bart.html)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 174 17. **[ELECTRA](https://huggingface.co/transformers/model_doc/electra.html)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 175 18. **[DialoGPT](https://huggingface.co/transformers/model_doc/dialogpt.html)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 176 19. **[Reformer](https://huggingface.co/transformers/model_doc/reformer.html)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 177 20. **[MarianMT](https://huggingface.co/transformers/model_doc/marian.html)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 178 21. 
**[Longformer](https://huggingface.co/transformers/model_doc/longformer.html)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 179 22. **[DPR](https://github.com/facebookresearch/DPR)** (from Facebook) released with the paper [Dense Passage Retrieval 180 for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon 181 Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 182 23. **[Pegasus](https://github.com/google-research/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777)> by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 183 24. **[MBart](https://github.com/pytorch/fairseq/tree/master/examples/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 184 25. **[LXMERT](https://github.com/airsplay/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 185 26. **[Funnel Transformer](https://github.com/laiguokun/Funnel-Transformer)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 186 27. **[LayoutLM](https://github.com/microsoft/unilm/tree/master/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 187 28. **[Other community models](https://huggingface.co/models)**, contributed by the [community](https://huggingface.co/users). 188 29. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedbacks before starting your PR. 189 190 These implementations have been tested on several datasets (see the example scripts) and should match the performances of the original implementations. You can find more details on the performances in the Examples section of the [documentation](https://huggingface.co/transformers/examples.html). 
191 192 193 ## Learn more 194 195 | Section | Description | 196 |-|-| 197 | [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials | 198 | [Task summary](https://huggingface.co/transformers/task_summary.html) | Tasks supported by 🤗 Transformers | 199 | [Preprocessing tutorial](https://huggingface.co/transformers/preprocessing.html) | Using the `Tokenizer` class to prepare data for the models | 200 | [Training and fine-tuning](https://huggingface.co/transformers/training.html) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API | 201 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/master/examples) | Example scripts for fine-tuning models on a wide range of tasks | 202 | [Model sharing and uploading](https://huggingface.co/transformers/model_sharing.html) | Upload and share your fine-tuned models with the community | 203 | [Migration](https://huggingface.co/transformers/migration.html) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` | 204 205 ## Citation 206 207 We now have a [paper](https://arxiv.org/abs/1910.03771) you can cite for the 🤗 Transformers library: 208 ```bibtex 209 @article{Wolf2019HuggingFacesTS, 210 title={HuggingFace's Transformers: State-of-the-art Natural Language Processing}, 211 author={Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush}, 212 journal={ArXiv}, 213 year={2019}, 214 volume={abs/1910.03771} 215 } 216 ``` 217 [end of README.md] [start of setup.py] 1 """ 2 Simple check list from AllenNLP repo: https://github.com/allenai/allennlp/blob/master/setup.py 3 4 To create the package for pypi. 5 6 1. Change the version in __init__.py, setup.py as well as docs/source/conf.py. 7 8 2. Unpin specific versions from setup.py (like isort). 9 10 2. Commit these changes with the message: "Release: VERSION" 11 12 3. Add a tag in git to mark the release: "git tag VERSION -m'Adds tag VERSION for pypi' " 13 Push the tag to git: git push --tags origin master 14 15 4. Build both the sources and the wheel. Do not change anything in setup.py between 16 creating the wheel and the source distribution (obviously). 17 18 For the wheel, run: "python setup.py bdist_wheel" in the top level directory. 19 (this will build a wheel for the python version you use to build it). 20 21 For the sources, run: "python setup.py sdist" 22 You should now have a /dist directory with both .whl and .tar.gz source versions. 23 24 5. Check that everything looks correct by uploading the package to the pypi test server: 25 26 twine upload dist/* -r pypitest 27 (pypi suggest using twine as other methods upload files via plaintext.) 28 You may have to specify the repository url, use the following command then: 29 twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/ 30 31 Check that you can install it in a virtualenv by running: 32 pip install -i https://testpypi.python.org/pypi transformers 33 34 6. Upload the final version to actual pypi: 35 twine upload dist/* -r pypi 36 37 7. Copy the release notes from RELEASE.md to the tag in github once everything is looking hunky-dory. 38 39 8. 
Add the release version to docs/source/_static/js/custom.js and .circleci/deploy.sh 40 41 9. Update README.md to redirect to correct documentation. 42 """ 43 44 import shutil 45 from pathlib import Path 46 47 from setuptools import find_packages, setup 48 49 50 # Remove stale transformers.egg-info directory to avoid https://github.com/pypa/pip/issues/5466 51 stale_egg_info = Path(__file__).parent / "transformers.egg-info" 52 if stale_egg_info.exists(): 53 print( 54 ( 55 "Warning: {} exists.\n\n" 56 "If you recently updated transformers to 3.0 or later, this is expected,\n" 57 "but it may prevent transformers from installing in editable mode.\n\n" 58 "This directory is automatically generated by Python's packaging tools.\n" 59 "I will remove it now.\n\n" 60 "See https://github.com/pypa/pip/issues/5466 for details.\n" 61 ).format(stale_egg_info) 62 ) 63 shutil.rmtree(stale_egg_info) 64 65 66 extras = {} 67 68 extras["ja"] = ["fugashi>=1.0", "ipadic>=1.0.0,<2.0", "unidic_lite>=1.0.7", "unidic>=1.0.2"] 69 extras["sklearn"] = ["scikit-learn"] 70 71 # keras2onnx and onnxconverter-common version is specific through a commit until 1.7.0 lands on pypi 72 extras["tf"] = [ 73 "tensorflow>=2.0", 74 "onnxconverter-common", 75 "keras2onnx" 76 # "onnxconverter-common @ git+git://github.com/microsoft/onnxconverter-common.git@f64ca15989b6dc95a1f3507ff6e4c395ba12dff5#egg=onnxconverter-common", 77 # "keras2onnx @ git+git://github.com/onnx/keras-onnx.git@cbdc75cb950b16db7f0a67be96a278f8d2953b48#egg=keras2onnx", 78 ] 79 extras["tf-cpu"] = [ 80 "tensorflow-cpu>=2.0", 81 "onnxconverter-common", 82 "keras2onnx" 83 # "onnxconverter-common @ git+git://github.com/microsoft/onnxconverter-common.git@f64ca15989b6dc95a1f3507ff6e4c395ba12dff5#egg=onnxconverter-common", 84 # "keras2onnx @ git+git://github.com/onnx/keras-onnx.git@cbdc75cb950b16db7f0a67be96a278f8d2953b48#egg=keras2onnx", 85 ] 86 extras["torch"] = ["torch>=1.0"] 87 extras["onnxruntime"] = ["onnxruntime>=1.4.0", "onnxruntime-tools>=1.4.2"] 88 89 extras["serving"] = ["pydantic", "uvicorn", "fastapi", "starlette"] 90 extras["all"] = extras["serving"] + ["tensorflow", "torch"] 91 92 extras["retrieval"] = ["faiss-cpu", "datasets"] 93 extras["testing"] = ["pytest", "pytest-xdist", "timeout-decorator", "parameterized", "psutil"] + extras["retrieval"] 94 # sphinx-rtd-theme==0.5.0 introduced big changes in the style. 
95 extras["docs"] = ["recommonmark", "sphinx", "sphinx-markdown-tables", "sphinx-rtd-theme==0.4.3", "sphinx-copybutton"] 96 extras["quality"] = ["black >= 20.8b1", "isort >= 5", "flake8 >= 3.8.3"] 97 extras["dev"] = extras["testing"] + extras["quality"] + extras["ja"] + ["scikit-learn", "tensorflow", "torch"] 98 99 setup( 100 name="transformers", 101 version="3.3.0", 102 author="Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Sam Shleifer, Patrick von Platen, Sylvain Gugger, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors", 103 author_email="[email protected]", 104 description="State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch", 105 long_description=open("README.md", "r", encoding="utf-8").read(), 106 long_description_content_type="text/markdown", 107 keywords="NLP deep learning transformer pytorch tensorflow BERT GPT GPT-2 google openai CMU", 108 license="Apache", 109 url="https://github.com/huggingface/transformers", 110 package_dir={"": "src"}, 111 packages=find_packages("src"), 112 install_requires=[ 113 "numpy", 114 "tokenizers == 0.8.1.rc2", 115 # dataclasses for Python versions that don't have it 116 "dataclasses;python_version<'3.7'", 117 # utilities from PyPA to e.g. compare versions 118 "packaging", 119 # filesystem locks e.g. to prevent parallel downloads 120 "filelock", 121 # for downloading models over HTTPS 122 "requests", 123 # progress bars in model download and training scripts 124 "tqdm >= 4.27", 125 # for OpenAI GPT 126 "regex != 2019.12.17", 127 # for XLNet 128 "sentencepiece != 0.1.92", 129 # for XLM 130 "sacremoses", 131 ], 132 extras_require=extras, 133 entry_points={ 134 "console_scripts": ["transformers-cli=transformers.commands.transformers_cli:main"] 135 }, 136 python_requires=">=3.6.0", 137 classifiers=[ 138 "Development Status :: 5 - Production/Stable", 139 "Intended Audience :: Developers", 140 "Intended Audience :: Education", 141 "Intended Audience :: Science/Research", 142 "License :: OSI Approved :: Apache Software License", 143 "Operating System :: OS Independent", 144 "Programming Language :: Python :: 3", 145 "Programming Language :: Python :: 3.6", 146 "Programming Language :: Python :: 3.7", 147 "Topic :: Scientific/Engineering :: Artificial Intelligence", 148 ], 149 ) 150 [end of setup.py] [start of src/transformers/trainer_utils.py] 1 import dataclasses 2 import json 3 import random 4 from dataclasses import dataclass 5 from typing import Any, Dict, List, NamedTuple, Optional, Tuple, Union 6 7 import numpy as np 8 9 from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available 10 from .tokenization_utils_base import ExplicitEnum 11 12 13 if is_torch_available(): 14 import torch 15 16 17 def set_seed(seed: int): 18 """ 19 Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch`` and/or ``tf`` 20 (if installed). 21 22 Args: 23 seed (:obj:`int`): The seed to set. 24 """ 25 random.seed(seed) 26 np.random.seed(seed) 27 if is_torch_available(): 28 import torch 29 30 torch.manual_seed(seed) 31 torch.cuda.manual_seed_all(seed) 32 # ^^ safe to call this function even if cuda is not available 33 if is_tf_available(): 34 import tensorflow as tf 35 36 tf.random.set_seed(seed) 37 38 39 class EvalPrediction(NamedTuple): 40 """ 41 Evaluation output (always contains labels), to be used to compute metrics. 42 43 Parameters: 44 predictions (:obj:`np.ndarray`): Predictions of the model. 
45 label_ids (:obj:`np.ndarray`): Targets to be matched. 46 """ 47 48 predictions: Union[np.ndarray, Tuple[np.ndarray]] 49 label_ids: np.ndarray 50 51 52 class PredictionOutput(NamedTuple): 53 predictions: Union[np.ndarray, Tuple[np.ndarray]] 54 label_ids: Optional[np.ndarray] 55 metrics: Optional[Dict[str, float]] 56 57 58 class TrainOutput(NamedTuple): 59 global_step: int 60 training_loss: float 61 62 63 PREFIX_CHECKPOINT_DIR = "checkpoint" 64 65 66 class EvaluationStrategy(ExplicitEnum): 67 NO = "no" 68 STEPS = "steps" 69 EPOCH = "epoch" 70 71 72 class BestRun(NamedTuple): 73 """ 74 The best run found by an hyperparameter search (see :class:`~transformers.Trainer.hyperparameter_search`). 75 76 Parameters: 77 run_id (:obj:`str`): 78 The id of the best run (if models were saved, the corresponding checkpoint will be in the folder ending 79 with run-{run_id}). 80 objective (:obj:`float`): 81 The objective that was obtained for this run. 82 hyperparameters (:obj:`Dict[str, Any]`): 83 The hyperparameters picked to get this run. 84 """ 85 86 run_id: str 87 objective: float 88 hyperparameters: Dict[str, Any] 89 90 91 def default_compute_objective(metrics: Dict[str, float]) -> float: 92 """ 93 The default objective to maximize/minimize when doing an hyperparameter search. It is the evaluation loss if no 94 metrics are provided to the :class:`~transformers.Trainer`, the sum of all metrics otherwise. 95 96 Args: 97 metrics (:obj:`Dict[str, float]`): The metrics returned by the evaluate method. 98 99 Return: 100 :obj:`float`: The objective to minimize or maximize 101 """ 102 loss = metrics.pop("eval_loss", None) 103 _ = metrics.pop("epoch", None) 104 return loss if len(metrics) == 0 else sum(metrics.values()) 105 106 107 def default_hp_space_optuna(trial) -> Dict[str, float]: 108 from .integrations import is_optuna_available 109 110 assert is_optuna_available(), "This function needs Optuna installed: `pip install optuna`" 111 return { 112 "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True), 113 "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5), 114 "seed": trial.suggest_int("seed", 1, 40), 115 "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [4, 8, 16, 32, 64]), 116 } 117 118 119 def default_hp_space_ray(trial) -> Dict[str, float]: 120 from .integrations import is_ray_available 121 122 assert is_ray_available(), "This function needs ray installed: `pip install ray[tune]`" 123 from ray import tune 124 125 return { 126 "learning_rate": tune.loguniform(1e-6, 1e-4), 127 "num_train_epochs": tune.choice(list(range(1, 6))), 128 "seed": tune.uniform(1, 40), 129 "per_device_train_batch_size": tune.choice([4, 8, 16, 32, 64]), 130 } 131 132 133 class HPSearchBackend(ExplicitEnum): 134 OPTUNA = "optuna" 135 RAY = "ray" 136 137 138 default_hp_space = { 139 HPSearchBackend.OPTUNA: default_hp_space_optuna, 140 HPSearchBackend.RAY: default_hp_space_ray, 141 } 142 143 144 def nested_concat(tensors, new_tensors, dim=0): 145 "Concat the `new_tensors` to `tensors` on `dim`. Works for tensors or nested list/tuples of tensors." 146 if is_torch_available(): 147 assert type(tensors) == type( 148 new_tensors 149 ), f"Expected `tensors` and `new_tensors` to have the same type but found {type(tensors)} and {type(new_tensors)}." 
150 if isinstance(tensors, (list, tuple)): 151 return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) 152 return torch.cat((tensors, new_tensors), dim=dim) 153 else: 154 raise ImportError("Torch must be installed to use `nested_concat`") 155 156 157 def nested_numpify(tensors): 158 "Numpify `tensors` (even if it's a nested list/tuple of tensors)." 159 if isinstance(tensors, (list, tuple)): 160 return type(tensors)(nested_numpify(t) for t in tensors) 161 return tensors.cpu().numpy() 162 163 164 def nested_detach(tensors): 165 "Detach `tensors` (even if it's a nested list/tuple of tensors)." 166 if isinstance(tensors, (list, tuple)): 167 return type(tensors)(nested_detach(t) for t in tensors) 168 return tensors.detach() 169 170 171 def nested_xla_mesh_reduce(tensors, name): 172 if is_torch_tpu_available(): 173 import torch_xla.core.xla_model as xm 174 175 if isinstance(tensors, (list, tuple)): 176 return type(tensors)(nested_xla_mesh_reduce(t, f"{name}_{i}") for i, t in enumerate(tensors)) 177 return xm.mesh_reduce(name, tensors, torch.cat) 178 else: 179 raise ImportError("Torch xla must be installed to use `nested_xla_mesh_reduce`") 180 181 182 def distributed_concat(tensor: "torch.Tensor", num_total_examples: Optional[int] = None) -> "torch.Tensor": 183 if is_torch_available(): 184 try: 185 if isinstance(tensor, (tuple, list)): 186 return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) 187 output_tensors = [tensor.clone() for _ in range(torch.distributed.get_world_size())] 188 torch.distributed.all_gather(output_tensors, tensor) 189 concat = torch.cat(output_tensors, dim=0) 190 191 # truncate the dummy elements added by SequentialDistributedSampler 192 if num_total_examples is not None: 193 concat = concat[:num_total_examples] 194 return concat 195 except AssertionError: 196 raise AssertionError("Not currently using distributed training") 197 else: 198 raise ImportError("Torch must be installed to use `distributed_concat`") 199 200 201 def distributed_broadcast_scalars( 202 scalars: List[Union[int, float]], num_total_examples: Optional[int] = None 203 ) -> "torch.Tensor": 204 if is_torch_available(): 205 try: 206 tensorized_scalar = torch.Tensor(scalars).cuda() 207 output_tensors = [tensorized_scalar.clone() for _ in range(torch.distributed.get_world_size())] 208 torch.distributed.all_gather(output_tensors, tensorized_scalar) 209 concat = torch.cat(output_tensors, dim=0) 210 211 # truncate the dummy elements added by SequentialDistributedSampler 212 if num_total_examples is not None: 213 concat = concat[:num_total_examples] 214 return concat 215 except AssertionError: 216 raise AssertionError("Not currently using distributed training") 217 else: 218 raise ImportError("Torch must be installed to use `distributed_broadcast_scalars`") 219 220 221 @dataclass 222 class TrainerState: 223 """ 224 A class containing the `Trainer` fields that will be saved along the model and optimizer. 
225 """ 226 227 best_metric: Optional[float] = None 228 best_model_checkpoint: Optional[str] = None 229 230 def save_to_json(self, json_path: str): 231 """ Save the content of this instance in JSON format inside :obj:`json_path`.""" 232 json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n" 233 with open(json_path, "w", encoding="utf-8") as f: 234 f.write(json_string) 235 236 @classmethod 237 def load_from_json(cls, json_path: str): 238 """ Create an instance from the content of :obj:`json_path`.""" 239 with open(json_path, "r", encoding="utf-8") as f: 240 text = f.read() 241 return cls(**json.loads(text)) 242 [end of src/transformers/trainer_utils.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
huggingface/transformers
9e9a1fb8c75e2ef00fea9c4c0dc511fc0178081c
import error in version 3.3.0, conflict with local directory "datasets" ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.0 - Platform: Google Colab Model I am using :Bert ## To reproduce Steps to reproduce the behavior: Traceback (most recent call last): File "train.py", line 19, in <module> from mydataset import load_data,dist_load_data,load_data2 File "/content/drive/My Drive/mrc4ner/mydataset.py", line 5, in <module> from transformers import BertTokenizer File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 42, in <module> from .trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 6, in <module> from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 72, in <module> logger.debug(f"Succesfully imported datasets version {datasets.__version__}") AttributeError: module 'datasets' has no attribute '__version__' ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> My code works well before, and there is a "datasets" folder in my working directory. When my transformers version upgraded to 3.3.0, I get this error. If I change the name of the folder "datasets" or downgrade transformers to version 3.2.0, the error is get fixed. Is this a bug? Because it doesn't allow me to use "datasets" as a folder name.
Sadly, that is how Python works: it will try to import the datasets library from a local folder if you have a folder named like this in the path you are working in. However, this should only happen if there is an `__init__.py` in your folder named datasets. Removing that file should then solve the bug. This change just broke [DeepChem](https://github.com/deepchem/deepchem). In the short term we can work around it by pinning to an older version, but that's not a reasonable long-term solution. Directories called "datasets" are very common, and this will impact a lot of people. Using a common, generic word as the top-level package name violates the [PEP 423](https://www.python.org/dev/peps/pep-0423/) guidelines for package naming. Indeed, we are working on a fix and will release soon.
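To make the shadowing mechanism described above concrete, here is a minimal, self-contained sketch (not part of either repository; the scratch-directory setup is an assumption used only for illustration) showing how a sibling `datasets/` package hides the installed 🤗 `datasets` library and therefore lacks `__version__`, which is exactly what the traceback in the issue reports:

```python
import os
import sys

# Create a local package named "datasets" in the current scratch directory
# (hypothetical setup used only for this demonstration).
os.makedirs("datasets", exist_ok=True)
with open(os.path.join("datasets", "__init__.py"), "w"):
    pass  # an empty __init__.py is enough to make this a regular package

# The directory containing the running script leads sys.path, so the local
# folder is found before the copy installed in site-packages.
sys.path.insert(0, os.getcwd())

import datasets  # noqa: E402  - resolves to the local folder, not the library

print(hasattr(datasets, "__version__"))  # False -> the AttributeError in file_utils.py
```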
2020-09-29T17:31:49Z
<patch> diff --git a/src/transformers/file_utils.py b/src/transformers/file_utils.py --- a/src/transformers/file_utils.py +++ b/src/transformers/file_utils.py @@ -68,8 +68,12 @@ try: import datasets # noqa: F401 - _datasets_available = True - logger.debug(f"Succesfully imported datasets version {datasets.__version__}") + # Check we're not importing a "datasets" directory somewhere + _datasets_available = hasattr(datasets, "__version__") and hasattr(datasets, "load_dataset") + if _datasets_available: + logger.debug(f"Succesfully imported datasets version {datasets.__version__}") + else: + logger.debug("Imported a datasets object but this doesn't seem to be the 🤗 datasets library.") except ImportError: _datasets_available = False </patch>
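The patch above follows a general defensive-import pattern: when a dependency's name is likely to collide with a user's local directory, check for a few attributes that only the real library provides before trusting the import. Below is a standalone sketch of that idea; the `safe_import` helper and its attribute list are illustrative assumptions, not transformers API.

```python
import importlib
from types import ModuleType
from typing import Optional, Sequence


def safe_import(name: str, required_attrs: Sequence[str]) -> Optional[ModuleType]:
    """Return the imported module only if it exposes the expected attributes.

    Returns None when the module is missing or appears to be shadowed by a
    local directory that lacks those attributes.
    """
    try:
        module = importlib.import_module(name)
    except ImportError:
        return None
    return module if all(hasattr(module, attr) for attr in required_attrs) else None


# Mirrors the check added in the patch: the real 🤗 datasets library exposes
# both a version string and the load_dataset entry point.
datasets = safe_import("datasets", ("__version__", "load_dataset"))
print("datasets available:", datasets is not None)
```

This keeps the `_datasets_available` flag accurate while degrading gracefully instead of crashing at import time when a local `datasets` directory shadows the library.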
[]
[]
numpy__numpy-11850
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> MAINT: _polybase __div__ method Line 420 - 422 of `numpy/polynomial/_polybase.py` has the following method: ``` def __div__(self, other): # set to __floordiv__, /, for now. return self.__floordiv__(other) ``` This implies the original author may have intended to come back and implement some other `__div__` method here. Is there a consensus on whether we would want to do that, remove the comment, or kick the can down the road? </issue> <code> [start of README.md] 1 # <img alt="NumPy" src="https://cdn.rawgit.com/numpy/numpy/master/branding/icons/numpylogo.svg" height="60"> 2 3 [![Travis](https://img.shields.io/travis/numpy/numpy/master.svg?label=Travis%20CI)](https://travis-ci.org/numpy/numpy) 4 [![AppVeyor](https://img.shields.io/appveyor/ci/charris/numpy/master.svg?label=AppVeyor)](https://ci.appveyor.com/project/charris/numpy) 5 [![codecov](https://codecov.io/gh/numpy/numpy/branch/master/graph/badge.svg)](https://codecov.io/gh/numpy/numpy) 6 7 NumPy is the fundamental package needed for scientific computing with Python. 8 9 - **Website (including documentation):** https://www.numpy.org 10 - **Mailing list:** https://mail.python.org/mailman/listinfo/numpy-discussion 11 - **Source:** https://github.com/numpy/numpy 12 - **Bug reports:** https://github.com/numpy/numpy/issues 13 14 It provides: 15 16 - a powerful N-dimensional array object 17 - sophisticated (broadcasting) functions 18 - tools for integrating C/C++ and Fortran code 19 - useful linear algebra, Fourier transform, and random number capabilities 20 21 Testing: 22 23 - NumPy versions &ge; 1.15 require `pytest` 24 - NumPy versions &lt; 1.15 require `nose` 25 26 Tests can then be run after installation with: 27 28 python -c 'import numpy; numpy.test()' 29 30 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org) 31 [end of README.md] [start of numpy/lib/mixins.py] 1 """Mixin classes for custom array types that don't inherit from ndarray.""" 2 from __future__ import division, absolute_import, print_function 3 4 import sys 5 6 from numpy.core import umath as um 7 8 # Nothing should be exposed in the top-level NumPy module. 
9 __all__ = [] 10 11 12 def _disables_array_ufunc(obj): 13 """True when __array_ufunc__ is set to None.""" 14 try: 15 return obj.__array_ufunc__ is None 16 except AttributeError: 17 return False 18 19 20 def _binary_method(ufunc, name): 21 """Implement a forward binary method with a ufunc, e.g., __add__.""" 22 def func(self, other): 23 if _disables_array_ufunc(other): 24 return NotImplemented 25 return ufunc(self, other) 26 func.__name__ = '__{}__'.format(name) 27 return func 28 29 30 def _reflected_binary_method(ufunc, name): 31 """Implement a reflected binary method with a ufunc, e.g., __radd__.""" 32 def func(self, other): 33 if _disables_array_ufunc(other): 34 return NotImplemented 35 return ufunc(other, self) 36 func.__name__ = '__r{}__'.format(name) 37 return func 38 39 40 def _inplace_binary_method(ufunc, name): 41 """Implement an in-place binary method with a ufunc, e.g., __iadd__.""" 42 def func(self, other): 43 return ufunc(self, other, out=(self,)) 44 func.__name__ = '__i{}__'.format(name) 45 return func 46 47 48 def _numeric_methods(ufunc, name): 49 """Implement forward, reflected and inplace binary methods with a ufunc.""" 50 return (_binary_method(ufunc, name), 51 _reflected_binary_method(ufunc, name), 52 _inplace_binary_method(ufunc, name)) 53 54 55 def _unary_method(ufunc, name): 56 """Implement a unary special method with a ufunc.""" 57 def func(self): 58 return ufunc(self) 59 func.__name__ = '__{}__'.format(name) 60 return func 61 62 63 class NDArrayOperatorsMixin(object): 64 """Mixin defining all operator special methods using __array_ufunc__. 65 66 This class implements the special methods for almost all of Python's 67 builtin operators defined in the `operator` module, including comparisons 68 (``==``, ``>``, etc.) and arithmetic (``+``, ``*``, ``-``, etc.), by 69 deferring to the ``__array_ufunc__`` method, which subclasses must 70 implement. 71 72 This class does not yet implement the special operators corresponding 73 to ``matmul`` (``@``), because ``np.matmul`` is not yet a NumPy ufunc. 74 75 It is useful for writing classes that do not inherit from `numpy.ndarray`, 76 but that should support arithmetic and numpy universal functions like 77 arrays as described in `A Mechanism for Overriding Ufuncs 78 <../../neps/nep-0013-ufunc-overrides.html>`_. 79 80 As an trivial example, consider this implementation of an ``ArrayLike`` 81 class that simply wraps a NumPy array and ensures that the result of any 82 arithmetic operation is also an ``ArrayLike`` object:: 83 84 class ArrayLike(np.lib.mixins.NDArrayOperatorsMixin): 85 def __init__(self, value): 86 self.value = np.asarray(value) 87 88 # One might also consider adding the built-in list type to this 89 # list, to support operations like np.add(array_like, list) 90 _HANDLED_TYPES = (np.ndarray, numbers.Number) 91 92 def __array_ufunc__(self, ufunc, method, *inputs, **kwargs): 93 out = kwargs.get('out', ()) 94 for x in inputs + out: 95 # Only support operations with instances of _HANDLED_TYPES. 96 # Use ArrayLike instead of type(self) for isinstance to 97 # allow subclasses that don't override __array_ufunc__ to 98 # handle ArrayLike objects. 99 if not isinstance(x, self._HANDLED_TYPES + (ArrayLike,)): 100 return NotImplemented 101 102 # Defer to the implementation of the ufunc on unwrapped values. 
103 inputs = tuple(x.value if isinstance(x, ArrayLike) else x 104 for x in inputs) 105 if out: 106 kwargs['out'] = tuple( 107 x.value if isinstance(x, ArrayLike) else x 108 for x in out) 109 result = getattr(ufunc, method)(*inputs, **kwargs) 110 111 if type(result) is tuple: 112 # multiple return values 113 return tuple(type(self)(x) for x in result) 114 elif method == 'at': 115 # no return value 116 return None 117 else: 118 # one return value 119 return type(self)(result) 120 121 def __repr__(self): 122 return '%s(%r)' % (type(self).__name__, self.value) 123 124 In interactions between ``ArrayLike`` objects and numbers or numpy arrays, 125 the result is always another ``ArrayLike``: 126 127 >>> x = ArrayLike([1, 2, 3]) 128 >>> x - 1 129 ArrayLike(array([0, 1, 2])) 130 >>> 1 - x 131 ArrayLike(array([ 0, -1, -2])) 132 >>> np.arange(3) - x 133 ArrayLike(array([-1, -1, -1])) 134 >>> x - np.arange(3) 135 ArrayLike(array([1, 1, 1])) 136 137 Note that unlike ``numpy.ndarray``, ``ArrayLike`` does not allow operations 138 with arbitrary, unrecognized types. This ensures that interactions with 139 ArrayLike preserve a well-defined casting hierarchy. 140 141 .. versionadded:: 1.13 142 """ 143 # Like np.ndarray, this mixin class implements "Option 1" from the ufunc 144 # overrides NEP. 145 146 # comparisons don't have reflected and in-place versions 147 __lt__ = _binary_method(um.less, 'lt') 148 __le__ = _binary_method(um.less_equal, 'le') 149 __eq__ = _binary_method(um.equal, 'eq') 150 __ne__ = _binary_method(um.not_equal, 'ne') 151 __gt__ = _binary_method(um.greater, 'gt') 152 __ge__ = _binary_method(um.greater_equal, 'ge') 153 154 # numeric methods 155 __add__, __radd__, __iadd__ = _numeric_methods(um.add, 'add') 156 __sub__, __rsub__, __isub__ = _numeric_methods(um.subtract, 'sub') 157 __mul__, __rmul__, __imul__ = _numeric_methods(um.multiply, 'mul') 158 if sys.version_info.major < 3: 159 # Python 3 uses only __truediv__ and __floordiv__ 160 __div__, __rdiv__, __idiv__ = _numeric_methods(um.divide, 'div') 161 __truediv__, __rtruediv__, __itruediv__ = _numeric_methods( 162 um.true_divide, 'truediv') 163 __floordiv__, __rfloordiv__, __ifloordiv__ = _numeric_methods( 164 um.floor_divide, 'floordiv') 165 __mod__, __rmod__, __imod__ = _numeric_methods(um.remainder, 'mod') 166 __divmod__ = _binary_method(um.divmod, 'divmod') 167 __rdivmod__ = _reflected_binary_method(um.divmod, 'divmod') 168 # __idivmod__ does not exist 169 # TODO: handle the optional third argument for __pow__? 170 __pow__, __rpow__, __ipow__ = _numeric_methods(um.power, 'pow') 171 __lshift__, __rlshift__, __ilshift__ = _numeric_methods( 172 um.left_shift, 'lshift') 173 __rshift__, __rrshift__, __irshift__ = _numeric_methods( 174 um.right_shift, 'rshift') 175 __and__, __rand__, __iand__ = _numeric_methods(um.bitwise_and, 'and') 176 __xor__, __rxor__, __ixor__ = _numeric_methods(um.bitwise_xor, 'xor') 177 __or__, __ror__, __ior__ = _numeric_methods(um.bitwise_or, 'or') 178 179 # unary methods 180 __neg__ = _unary_method(um.negative, 'neg') 181 __pos__ = _unary_method(um.positive, 'pos') 182 __abs__ = _unary_method(um.absolute, 'abs') 183 __invert__ = _unary_method(um.invert, 'invert') 184 [end of numpy/lib/mixins.py] [start of numpy/polynomial/_polybase.py] 1 """ 2 Abstract base class for the various polynomial Classes. 3 4 The ABCPolyBase class provides the methods needed to implement the common API 5 for the various polynomial classes. 
It operates as a mixin, but uses the 6 abc module from the stdlib, hence it is only available for Python >= 2.6. 7 8 """ 9 from __future__ import division, absolute_import, print_function 10 11 from abc import ABCMeta, abstractmethod, abstractproperty 12 import numbers 13 14 import numpy as np 15 from . import polyutils as pu 16 17 __all__ = ['ABCPolyBase'] 18 19 class ABCPolyBase(object): 20 """An abstract base class for immutable series classes. 21 22 ABCPolyBase provides the standard Python numerical methods 23 '+', '-', '*', '//', '%', 'divmod', '**', and '()' along with the 24 methods listed below. 25 26 .. versionadded:: 1.9.0 27 28 Parameters 29 ---------- 30 coef : array_like 31 Series coefficients in order of increasing degree, i.e., 32 ``(1, 2, 3)`` gives ``1*P_0(x) + 2*P_1(x) + 3*P_2(x)``, where 33 ``P_i`` is the basis polynomials of degree ``i``. 34 domain : (2,) array_like, optional 35 Domain to use. The interval ``[domain[0], domain[1]]`` is mapped 36 to the interval ``[window[0], window[1]]`` by shifting and scaling. 37 The default value is the derived class domain. 38 window : (2,) array_like, optional 39 Window, see domain for its use. The default value is the 40 derived class window. 41 42 Attributes 43 ---------- 44 coef : (N,) ndarray 45 Series coefficients in order of increasing degree. 46 domain : (2,) ndarray 47 Domain that is mapped to window. 48 window : (2,) ndarray 49 Window that domain is mapped to. 50 51 Class Attributes 52 ---------------- 53 maxpower : int 54 Maximum power allowed, i.e., the largest number ``n`` such that 55 ``p(x)**n`` is allowed. This is to limit runaway polynomial size. 56 domain : (2,) ndarray 57 Default domain of the class. 58 window : (2,) ndarray 59 Default window of the class. 60 61 """ 62 __metaclass__ = ABCMeta 63 64 # Not hashable 65 __hash__ = None 66 67 # Opt out of numpy ufuncs and Python ops with ndarray subclasses. 68 __array_ufunc__ = None 69 70 # Limit runaway size. T_n^m has degree n*m 71 maxpower = 100 72 73 @abstractproperty 74 def domain(self): 75 pass 76 77 @abstractproperty 78 def window(self): 79 pass 80 81 @abstractproperty 82 def nickname(self): 83 pass 84 85 @abstractproperty 86 def basis_name(self): 87 pass 88 89 @abstractmethod 90 def _add(self): 91 pass 92 93 @abstractmethod 94 def _sub(self): 95 pass 96 97 @abstractmethod 98 def _mul(self): 99 pass 100 101 @abstractmethod 102 def _div(self): 103 pass 104 105 @abstractmethod 106 def _pow(self): 107 pass 108 109 @abstractmethod 110 def _val(self): 111 pass 112 113 @abstractmethod 114 def _int(self): 115 pass 116 117 @abstractmethod 118 def _der(self): 119 pass 120 121 @abstractmethod 122 def _fit(self): 123 pass 124 125 @abstractmethod 126 def _line(self): 127 pass 128 129 @abstractmethod 130 def _roots(self): 131 pass 132 133 @abstractmethod 134 def _fromroots(self): 135 pass 136 137 def has_samecoef(self, other): 138 """Check if coefficients match. 139 140 .. versionadded:: 1.6.0 141 142 Parameters 143 ---------- 144 other : class instance 145 The other class must have the ``coef`` attribute. 146 147 Returns 148 ------- 149 bool : boolean 150 True if the coefficients are the same, False otherwise. 151 152 """ 153 if len(self.coef) != len(other.coef): 154 return False 155 elif not np.all(self.coef == other.coef): 156 return False 157 else: 158 return True 159 160 def has_samedomain(self, other): 161 """Check if domains match. 162 163 .. 
versionadded:: 1.6.0 164 165 Parameters 166 ---------- 167 other : class instance 168 The other class must have the ``domain`` attribute. 169 170 Returns 171 ------- 172 bool : boolean 173 True if the domains are the same, False otherwise. 174 175 """ 176 return np.all(self.domain == other.domain) 177 178 def has_samewindow(self, other): 179 """Check if windows match. 180 181 .. versionadded:: 1.6.0 182 183 Parameters 184 ---------- 185 other : class instance 186 The other class must have the ``window`` attribute. 187 188 Returns 189 ------- 190 bool : boolean 191 True if the windows are the same, False otherwise. 192 193 """ 194 return np.all(self.window == other.window) 195 196 def has_sametype(self, other): 197 """Check if types match. 198 199 .. versionadded:: 1.7.0 200 201 Parameters 202 ---------- 203 other : object 204 Class instance. 205 206 Returns 207 ------- 208 bool : boolean 209 True if other is same class as self 210 211 """ 212 return isinstance(other, self.__class__) 213 214 def _get_coefficients(self, other): 215 """Interpret other as polynomial coefficients. 216 217 The `other` argument is checked to see if it is of the same 218 class as self with identical domain and window. If so, 219 return its coefficients, otherwise return `other`. 220 221 .. versionadded:: 1.9.0 222 223 Parameters 224 ---------- 225 other : anything 226 Object to be checked. 227 228 Returns 229 ------- 230 coef 231 The coefficients of`other` if it is a compatible instance, 232 of ABCPolyBase, otherwise `other`. 233 234 Raises 235 ------ 236 TypeError 237 When `other` is an incompatible instance of ABCPolyBase. 238 239 """ 240 if isinstance(other, ABCPolyBase): 241 if not isinstance(other, self.__class__): 242 raise TypeError("Polynomial types differ") 243 elif not np.all(self.domain == other.domain): 244 raise TypeError("Domains differ") 245 elif not np.all(self.window == other.window): 246 raise TypeError("Windows differ") 247 return other.coef 248 return other 249 250 def __init__(self, coef, domain=None, window=None): 251 [coef] = pu.as_series([coef], trim=False) 252 self.coef = coef 253 254 if domain is not None: 255 [domain] = pu.as_series([domain], trim=False) 256 if len(domain) != 2: 257 raise ValueError("Domain has wrong number of elements.") 258 self.domain = domain 259 260 if window is not None: 261 [window] = pu.as_series([window], trim=False) 262 if len(window) != 2: 263 raise ValueError("Window has wrong number of elements.") 264 self.window = window 265 266 def __repr__(self): 267 format = "%s(%s, domain=%s, window=%s)" 268 coef = repr(self.coef)[6:-1] 269 domain = repr(self.domain)[6:-1] 270 window = repr(self.window)[6:-1] 271 name = self.__class__.__name__ 272 return format % (name, coef, domain, window) 273 274 def __str__(self): 275 format = "%s(%s)" 276 coef = str(self.coef) 277 name = self.nickname 278 return format % (name, coef) 279 280 @classmethod 281 def _repr_latex_term(cls, i, arg_str, needs_parens): 282 if cls.basis_name is None: 283 raise NotImplementedError( 284 "Subclasses must define either a basis name, or override " 285 "_repr_latex_term(i, arg_str, needs_parens)") 286 # since we always add parens, we don't care if the expression needs them 287 return "{{{basis}}}_{{{i}}}({arg_str})".format( 288 basis=cls.basis_name, i=i, arg_str=arg_str 289 ) 290 291 @staticmethod 292 def _repr_latex_scalar(x): 293 # TODO: we're stuck with disabling math formatting until we handle 294 # exponents in this function 295 return r'\text{{{}}}'.format(x) 296 297 def _repr_latex_(self): 
298 # get the scaled argument string to the basis functions 299 off, scale = self.mapparms() 300 if off == 0 and scale == 1: 301 term = 'x' 302 needs_parens = False 303 elif scale == 1: 304 term = '{} + x'.format( 305 self._repr_latex_scalar(off) 306 ) 307 needs_parens = True 308 elif off == 0: 309 term = '{}x'.format( 310 self._repr_latex_scalar(scale) 311 ) 312 needs_parens = True 313 else: 314 term = '{} + {}x'.format( 315 self._repr_latex_scalar(off), 316 self._repr_latex_scalar(scale) 317 ) 318 needs_parens = True 319 320 # filter out uninteresting coefficients 321 filtered_coeffs = [ 322 (i, c) 323 for i, c in enumerate(self.coef) 324 # if not (c == 0) # handle NaN 325 ] 326 327 mute = r"\color{{LightGray}}{{{}}}".format 328 329 parts = [] 330 for i, c in enumerate(self.coef): 331 # prevent duplication of + and - signs 332 if i == 0: 333 coef_str = '{}'.format(self._repr_latex_scalar(c)) 334 elif not isinstance(c, numbers.Real): 335 coef_str = ' + ({})'.format(self._repr_latex_scalar(c)) 336 elif not np.signbit(c): 337 coef_str = ' + {}'.format(self._repr_latex_scalar(c)) 338 else: 339 coef_str = ' - {}'.format(self._repr_latex_scalar(-c)) 340 341 # produce the string for the term 342 term_str = self._repr_latex_term(i, term, needs_parens) 343 if term_str == '1': 344 part = coef_str 345 else: 346 part = r'{}\,{}'.format(coef_str, term_str) 347 348 if c == 0: 349 part = mute(part) 350 351 parts.append(part) 352 353 if parts: 354 body = ''.join(parts) 355 else: 356 # in case somehow there are no coefficients at all 357 body = '0' 358 359 return r'$x \mapsto {}$'.format(body) 360 361 362 363 # Pickle and copy 364 365 def __getstate__(self): 366 ret = self.__dict__.copy() 367 ret['coef'] = self.coef.copy() 368 ret['domain'] = self.domain.copy() 369 ret['window'] = self.window.copy() 370 return ret 371 372 def __setstate__(self, dict): 373 self.__dict__ = dict 374 375 # Call 376 377 def __call__(self, arg): 378 off, scl = pu.mapparms(self.domain, self.window) 379 arg = off + scl*arg 380 return self._val(arg, self.coef) 381 382 def __iter__(self): 383 return iter(self.coef) 384 385 def __len__(self): 386 return len(self.coef) 387 388 # Numeric properties. 389 390 def __neg__(self): 391 return self.__class__(-self.coef, self.domain, self.window) 392 393 def __pos__(self): 394 return self 395 396 def __add__(self, other): 397 othercoef = self._get_coefficients(other) 398 try: 399 coef = self._add(self.coef, othercoef) 400 except Exception: 401 return NotImplemented 402 return self.__class__(coef, self.domain, self.window) 403 404 def __sub__(self, other): 405 othercoef = self._get_coefficients(other) 406 try: 407 coef = self._sub(self.coef, othercoef) 408 except Exception: 409 return NotImplemented 410 return self.__class__(coef, self.domain, self.window) 411 412 def __mul__(self, other): 413 othercoef = self._get_coefficients(other) 414 try: 415 coef = self._mul(self.coef, othercoef) 416 except Exception: 417 return NotImplemented 418 return self.__class__(coef, self.domain, self.window) 419 420 def __div__(self, other): 421 # set to __floordiv__, /, for now. 422 return self.__floordiv__(other) 423 424 def __truediv__(self, other): 425 # there is no true divide if the rhs is not a Number, although it 426 # could return the first n elements of an infinite series. 427 # It is hard to see where n would come from, though. 
428 if not isinstance(other, numbers.Number) or isinstance(other, bool): 429 form = "unsupported types for true division: '%s', '%s'" 430 raise TypeError(form % (type(self), type(other))) 431 return self.__floordiv__(other) 432 433 def __floordiv__(self, other): 434 res = self.__divmod__(other) 435 if res is NotImplemented: 436 return res 437 return res[0] 438 439 def __mod__(self, other): 440 res = self.__divmod__(other) 441 if res is NotImplemented: 442 return res 443 return res[1] 444 445 def __divmod__(self, other): 446 othercoef = self._get_coefficients(other) 447 try: 448 quo, rem = self._div(self.coef, othercoef) 449 except ZeroDivisionError as e: 450 raise e 451 except Exception: 452 return NotImplemented 453 quo = self.__class__(quo, self.domain, self.window) 454 rem = self.__class__(rem, self.domain, self.window) 455 return quo, rem 456 457 def __pow__(self, other): 458 coef = self._pow(self.coef, other, maxpower=self.maxpower) 459 res = self.__class__(coef, self.domain, self.window) 460 return res 461 462 def __radd__(self, other): 463 try: 464 coef = self._add(other, self.coef) 465 except Exception: 466 return NotImplemented 467 return self.__class__(coef, self.domain, self.window) 468 469 def __rsub__(self, other): 470 try: 471 coef = self._sub(other, self.coef) 472 except Exception: 473 return NotImplemented 474 return self.__class__(coef, self.domain, self.window) 475 476 def __rmul__(self, other): 477 try: 478 coef = self._mul(other, self.coef) 479 except Exception: 480 return NotImplemented 481 return self.__class__(coef, self.domain, self.window) 482 483 def __rdiv__(self, other): 484 # set to __floordiv__ /. 485 return self.__rfloordiv__(other) 486 487 def __rtruediv__(self, other): 488 # An instance of ABCPolyBase is not considered a 489 # Number. 490 return NotImplemented 491 492 def __rfloordiv__(self, other): 493 res = self.__rdivmod__(other) 494 if res is NotImplemented: 495 return res 496 return res[0] 497 498 def __rmod__(self, other): 499 res = self.__rdivmod__(other) 500 if res is NotImplemented: 501 return res 502 return res[1] 503 504 def __rdivmod__(self, other): 505 try: 506 quo, rem = self._div(other, self.coef) 507 except ZeroDivisionError as e: 508 raise e 509 except Exception: 510 return NotImplemented 511 quo = self.__class__(quo, self.domain, self.window) 512 rem = self.__class__(rem, self.domain, self.window) 513 return quo, rem 514 515 def __eq__(self, other): 516 res = (isinstance(other, self.__class__) and 517 np.all(self.domain == other.domain) and 518 np.all(self.window == other.window) and 519 (self.coef.shape == other.coef.shape) and 520 np.all(self.coef == other.coef)) 521 return res 522 523 def __ne__(self, other): 524 return not self.__eq__(other) 525 526 # 527 # Extra methods. 528 # 529 530 def copy(self): 531 """Return a copy. 532 533 Returns 534 ------- 535 new_series : series 536 Copy of self. 537 538 """ 539 return self.__class__(self.coef, self.domain, self.window) 540 541 def degree(self): 542 """The degree of the series. 543 544 .. versionadded:: 1.5.0 545 546 Returns 547 ------- 548 degree : int 549 Degree of the series, one less than the number of coefficients. 550 551 """ 552 return len(self) - 1 553 554 def cutdeg(self, deg): 555 """Truncate series to the given degree. 556 557 Reduce the degree of the series to `deg` by discarding the 558 high order terms. If `deg` is greater than the current degree a 559 copy of the current series is returned. 
This can be useful in least 560 squares where the coefficients of the high degree terms may be very 561 small. 562 563 .. versionadded:: 1.5.0 564 565 Parameters 566 ---------- 567 deg : non-negative int 568 The series is reduced to degree `deg` by discarding the high 569 order terms. The value of `deg` must be a non-negative integer. 570 571 Returns 572 ------- 573 new_series : series 574 New instance of series with reduced degree. 575 576 """ 577 return self.truncate(deg + 1) 578 579 def trim(self, tol=0): 580 """Remove trailing coefficients 581 582 Remove trailing coefficients until a coefficient is reached whose 583 absolute value greater than `tol` or the beginning of the series is 584 reached. If all the coefficients would be removed the series is set 585 to ``[0]``. A new series instance is returned with the new 586 coefficients. The current instance remains unchanged. 587 588 Parameters 589 ---------- 590 tol : non-negative number. 591 All trailing coefficients less than `tol` will be removed. 592 593 Returns 594 ------- 595 new_series : series 596 Contains the new set of coefficients. 597 598 """ 599 coef = pu.trimcoef(self.coef, tol) 600 return self.__class__(coef, self.domain, self.window) 601 602 def truncate(self, size): 603 """Truncate series to length `size`. 604 605 Reduce the series to length `size` by discarding the high 606 degree terms. The value of `size` must be a positive integer. This 607 can be useful in least squares where the coefficients of the 608 high degree terms may be very small. 609 610 Parameters 611 ---------- 612 size : positive int 613 The series is reduced to length `size` by discarding the high 614 degree terms. The value of `size` must be a positive integer. 615 616 Returns 617 ------- 618 new_series : series 619 New instance of series with truncated coefficients. 620 621 """ 622 isize = int(size) 623 if isize != size or isize < 1: 624 raise ValueError("size must be a positive integer") 625 if isize >= len(self.coef): 626 coef = self.coef 627 else: 628 coef = self.coef[:isize] 629 return self.__class__(coef, self.domain, self.window) 630 631 def convert(self, domain=None, kind=None, window=None): 632 """Convert series to a different kind and/or domain and/or window. 633 634 Parameters 635 ---------- 636 domain : array_like, optional 637 The domain of the converted series. If the value is None, 638 the default domain of `kind` is used. 639 kind : class, optional 640 The polynomial series type class to which the current instance 641 should be converted. If kind is None, then the class of the 642 current instance is used. 643 window : array_like, optional 644 The window of the converted series. If the value is None, 645 the default window of `kind` is used. 646 647 Returns 648 ------- 649 new_series : series 650 The returned class can be of different type than the current 651 instance and/or have a different domain and/or different 652 window. 653 654 Notes 655 ----- 656 Conversion between domains and class types can result in 657 numerically ill defined series. 658 659 Examples 660 -------- 661 662 """ 663 if kind is None: 664 kind = self.__class__ 665 if domain is None: 666 domain = kind.domain 667 if window is None: 668 window = kind.window 669 return self(kind.identity(domain, window=window)) 670 671 def mapparms(self): 672 """Return the mapping parameters. 673 674 The returned values define a linear map ``off + scl*x`` that is 675 applied to the input arguments before the series is evaluated. 
The 676 map depends on the ``domain`` and ``window``; if the current 677 ``domain`` is equal to the ``window`` the resulting map is the 678 identity. If the coefficients of the series instance are to be 679 used by themselves outside this class, then the linear function 680 must be substituted for the ``x`` in the standard representation of 681 the base polynomials. 682 683 Returns 684 ------- 685 off, scl : float or complex 686 The mapping function is defined by ``off + scl*x``. 687 688 Notes 689 ----- 690 If the current domain is the interval ``[l1, r1]`` and the window 691 is ``[l2, r2]``, then the linear mapping function ``L`` is 692 defined by the equations:: 693 694 L(l1) = l2 695 L(r1) = r2 696 697 """ 698 return pu.mapparms(self.domain, self.window) 699 700 def integ(self, m=1, k=[], lbnd=None): 701 """Integrate. 702 703 Return a series instance that is the definite integral of the 704 current series. 705 706 Parameters 707 ---------- 708 m : non-negative int 709 The number of integrations to perform. 710 k : array_like 711 Integration constants. The first constant is applied to the 712 first integration, the second to the second, and so on. The 713 list of values must less than or equal to `m` in length and any 714 missing values are set to zero. 715 lbnd : Scalar 716 The lower bound of the definite integral. 717 718 Returns 719 ------- 720 new_series : series 721 A new series representing the integral. The domain is the same 722 as the domain of the integrated series. 723 724 """ 725 off, scl = self.mapparms() 726 if lbnd is None: 727 lbnd = 0 728 else: 729 lbnd = off + scl*lbnd 730 coef = self._int(self.coef, m, k, lbnd, 1./scl) 731 return self.__class__(coef, self.domain, self.window) 732 733 def deriv(self, m=1): 734 """Differentiate. 735 736 Return a series instance of that is the derivative of the current 737 series. 738 739 Parameters 740 ---------- 741 m : non-negative int 742 Find the derivative of order `m`. 743 744 Returns 745 ------- 746 new_series : series 747 A new series representing the derivative. The domain is the same 748 as the domain of the differentiated series. 749 750 """ 751 off, scl = self.mapparms() 752 coef = self._der(self.coef, m, scl) 753 return self.__class__(coef, self.domain, self.window) 754 755 def roots(self): 756 """Return the roots of the series polynomial. 757 758 Compute the roots for the series. Note that the accuracy of the 759 roots decrease the further outside the domain they lie. 760 761 Returns 762 ------- 763 roots : ndarray 764 Array containing the roots of the series. 765 766 """ 767 roots = self._roots(self.coef) 768 return pu.mapdomain(roots, self.window, self.domain) 769 770 def linspace(self, n=100, domain=None): 771 """Return x, y values at equally spaced points in domain. 772 773 Returns the x, y values at `n` linearly spaced points across the 774 domain. Here y is the value of the polynomial at the points x. By 775 default the domain is the same as that of the series instance. 776 This method is intended mostly as a plotting aid. 777 778 .. versionadded:: 1.5.0 779 780 Parameters 781 ---------- 782 n : int, optional 783 Number of point pairs to return. The default value is 100. 784 domain : {None, array_like}, optional 785 If not None, the specified domain is used instead of that of 786 the calling instance. It should be of the form ``[beg,end]``. 787 The default is None which case the class domain is used. 
788 789 Returns 790 ------- 791 x, y : ndarray 792 x is equal to linspace(self.domain[0], self.domain[1], n) and 793 y is the series evaluated at element of x. 794 795 """ 796 if domain is None: 797 domain = self.domain 798 x = np.linspace(domain[0], domain[1], n) 799 y = self(x) 800 return x, y 801 802 @classmethod 803 def fit(cls, x, y, deg, domain=None, rcond=None, full=False, w=None, 804 window=None): 805 """Least squares fit to data. 806 807 Return a series instance that is the least squares fit to the data 808 `y` sampled at `x`. The domain of the returned instance can be 809 specified and this will often result in a superior fit with less 810 chance of ill conditioning. 811 812 Parameters 813 ---------- 814 x : array_like, shape (M,) 815 x-coordinates of the M sample points ``(x[i], y[i])``. 816 y : array_like, shape (M,) or (M, K) 817 y-coordinates of the sample points. Several data sets of sample 818 points sharing the same x-coordinates can be fitted at once by 819 passing in a 2D-array that contains one dataset per column. 820 deg : int or 1-D array_like 821 Degree(s) of the fitting polynomials. If `deg` is a single integer 822 all terms up to and including the `deg`'th term are included in the 823 fit. For NumPy versions >= 1.11.0 a list of integers specifying the 824 degrees of the terms to include may be used instead. 825 domain : {None, [beg, end], []}, optional 826 Domain to use for the returned series. If ``None``, 827 then a minimal domain that covers the points `x` is chosen. If 828 ``[]`` the class domain is used. The default value was the 829 class domain in NumPy 1.4 and ``None`` in later versions. 830 The ``[]`` option was added in numpy 1.5.0. 831 rcond : float, optional 832 Relative condition number of the fit. Singular values smaller 833 than this relative to the largest singular value will be 834 ignored. The default value is len(x)*eps, where eps is the 835 relative precision of the float type, about 2e-16 in most 836 cases. 837 full : bool, optional 838 Switch determining nature of return value. When it is False 839 (the default) just the coefficients are returned, when True 840 diagnostic information from the singular value decomposition is 841 also returned. 842 w : array_like, shape (M,), optional 843 Weights. If not None the contribution of each point 844 ``(x[i],y[i])`` to the fit is weighted by `w[i]`. Ideally the 845 weights are chosen so that the errors of the products 846 ``w[i]*y[i]`` all have the same variance. The default value is 847 None. 848 849 .. versionadded:: 1.5.0 850 window : {[beg, end]}, optional 851 Window to use for the returned series. The default 852 value is the default class domain 853 854 .. versionadded:: 1.6.0 855 856 Returns 857 ------- 858 new_series : series 859 A series that represents the least squares fit to the data and 860 has the domain and window specified in the call. If the 861 coefficients for the unscaled and unshifted basis polynomials are 862 of interest, do ``new_series.convert().coef``. 863 864 [resid, rank, sv, rcond] : list 865 These values are only returned if `full` = True 866 867 resid -- sum of squared residuals of the least squares fit 868 rank -- the numerical rank of the scaled Vandermonde matrix 869 sv -- singular values of the scaled Vandermonde matrix 870 rcond -- value of `rcond`. 871 872 For more details, see `linalg.lstsq`. 
873 874 """ 875 if domain is None: 876 domain = pu.getdomain(x) 877 elif type(domain) is list and len(domain) == 0: 878 domain = cls.domain 879 880 if window is None: 881 window = cls.window 882 883 xnew = pu.mapdomain(x, domain, window) 884 res = cls._fit(xnew, y, deg, w=w, rcond=rcond, full=full) 885 if full: 886 [coef, status] = res 887 return cls(coef, domain=domain, window=window), status 888 else: 889 coef = res 890 return cls(coef, domain=domain, window=window) 891 892 @classmethod 893 def fromroots(cls, roots, domain=[], window=None): 894 """Return series instance that has the specified roots. 895 896 Returns a series representing the product 897 ``(x - r[0])*(x - r[1])*...*(x - r[n-1])``, where ``r`` is a 898 list of roots. 899 900 Parameters 901 ---------- 902 roots : array_like 903 List of roots. 904 domain : {[], None, array_like}, optional 905 Domain for the resulting series. If None the domain is the 906 interval from the smallest root to the largest. If [] the 907 domain is the class domain. The default is []. 908 window : {None, array_like}, optional 909 Window for the returned series. If None the class window is 910 used. The default is None. 911 912 Returns 913 ------- 914 new_series : series 915 Series with the specified roots. 916 917 """ 918 [roots] = pu.as_series([roots], trim=False) 919 if domain is None: 920 domain = pu.getdomain(roots) 921 elif type(domain) is list and len(domain) == 0: 922 domain = cls.domain 923 924 if window is None: 925 window = cls.window 926 927 deg = len(roots) 928 off, scl = pu.mapparms(domain, window) 929 rnew = off + scl*roots 930 coef = cls._fromroots(rnew) / scl**deg 931 return cls(coef, domain=domain, window=window) 932 933 @classmethod 934 def identity(cls, domain=None, window=None): 935 """Identity function. 936 937 If ``p`` is the returned series, then ``p(x) == x`` for all 938 values of x. 939 940 Parameters 941 ---------- 942 domain : {None, array_like}, optional 943 If given, the array must be of the form ``[beg, end]``, where 944 ``beg`` and ``end`` are the endpoints of the domain. If None is 945 given then the class domain is used. The default is None. 946 window : {None, array_like}, optional 947 If given, the resulting array must be if the form 948 ``[beg, end]``, where ``beg`` and ``end`` are the endpoints of 949 the window. If None is given then the class window is used. The 950 default is None. 951 952 Returns 953 ------- 954 new_series : series 955 Series of representing the identity. 956 957 """ 958 if domain is None: 959 domain = cls.domain 960 if window is None: 961 window = cls.window 962 off, scl = pu.mapparms(window, domain) 963 coef = cls._line(off, scl) 964 return cls(coef, domain, window) 965 966 @classmethod 967 def basis(cls, deg, domain=None, window=None): 968 """Series basis polynomial of degree `deg`. 969 970 Returns the series representing the basis polynomial of degree `deg`. 971 972 .. versionadded:: 1.7.0 973 974 Parameters 975 ---------- 976 deg : int 977 Degree of the basis polynomial for the series. Must be >= 0. 978 domain : {None, array_like}, optional 979 If given, the array must be of the form ``[beg, end]``, where 980 ``beg`` and ``end`` are the endpoints of the domain. If None is 981 given then the class domain is used. The default is None. 982 window : {None, array_like}, optional 983 If given, the resulting array must be if the form 984 ``[beg, end]``, where ``beg`` and ``end`` are the endpoints of 985 the window. If None is given then the class window is used. The 986 default is None. 
987 988 Returns 989 ------- 990 new_series : series 991 A series with the coefficient of the `deg` term set to one and 992 all others zero. 993 994 """ 995 if domain is None: 996 domain = cls.domain 997 if window is None: 998 window = cls.window 999 ideg = int(deg) 1000 1001 if ideg != deg or ideg < 0: 1002 raise ValueError("deg must be non-negative integer") 1003 return cls([0]*ideg + [1], domain, window) 1004 1005 @classmethod 1006 def cast(cls, series, domain=None, window=None): 1007 """Convert series to series of this class. 1008 1009 The `series` is expected to be an instance of some polynomial 1010 series of one of the types supported by by the numpy.polynomial 1011 module, but could be some other class that supports the convert 1012 method. 1013 1014 .. versionadded:: 1.7.0 1015 1016 Parameters 1017 ---------- 1018 series : series 1019 The series instance to be converted. 1020 domain : {None, array_like}, optional 1021 If given, the array must be of the form ``[beg, end]``, where 1022 ``beg`` and ``end`` are the endpoints of the domain. If None is 1023 given then the class domain is used. The default is None. 1024 window : {None, array_like}, optional 1025 If given, the resulting array must be if the form 1026 ``[beg, end]``, where ``beg`` and ``end`` are the endpoints of 1027 the window. If None is given then the class window is used. The 1028 default is None. 1029 1030 Returns 1031 ------- 1032 new_series : series 1033 A series of the same kind as the calling class and equal to 1034 `series` when evaluated. 1035 1036 See Also 1037 -------- 1038 convert : similar instance method 1039 1040 """ 1041 if domain is None: 1042 domain = cls.domain 1043 if window is None: 1044 window = cls.window 1045 return series.convert(domain, cls, window) 1046 [end of numpy/polynomial/_polybase.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
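Note that the `convert` docstring in the listing above leaves its Examples section empty. As a rough usage sketch (not part of the original file), the `ABCPolyBase` API is normally exercised through concrete subclasses such as `numpy.polynomial.Polynomial` and `Chebyshev`, which are assumed here and are not shown in the listing:

```python
# Illustrative sketch (not from _polybase.py): exercising ABCPolyBase methods
# through the concrete Polynomial and Chebyshev subclasses.
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

p = Polynomial([1, 2, 3])            # 1 + 2x + 3x**2, default domain/window
print(p.degree())                    # 2
print(p.mapparms())                  # (0.0, 1.0) since domain == window
print(p(np.array([0.0, 1.0])))       # evaluate via __call__ -> [1., 6.]

c = p.convert(kind=Chebyshev)        # re-express the same function in a new basis
print(np.allclose(p(0.5), c(0.5)))   # True

x = np.linspace(-1, 1, 50)
y = 1 + 2 * x + 3 * x**2
fitted = Polynomial.fit(x, y, deg=2)  # least squares fit (see the fit docstring)
print(fitted.convert().coef)          # ~[1., 2., 3.] in the unscaled basis
```

Evaluating either representation gives the same values because `convert` re-expresses the series in the new basis over the same domain.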
numpy/numpy
6c1b6e4e5f58d3501a0d2fbf33e7e506d42a0fbb
MAINT: _polybase __div__ method
Lines 420-422 of `numpy/polynomial/_polybase.py` contain the following method:
```
def __div__(self, other):
    # set to __floordiv__, /, for now.
    return self.__floordiv__(other)
```
This implies the original author may have intended to come back and implement some other `__div__` method here. Is there a consensus on whether we would want to do that, remove the comment, or kick the can down the road?
Python 2 compatibility. There is no true divide for polynomials, at least without going to infinite series, so this operator could be deprecated at some future time after Python 2 is dropped. The `//` operator was new in Python 2.2.
Thanks for the context!

> this operator could be deprecated at some future time after Python 2 is dropped

It can be removed entirely - `__div__` is never called in Python 3. Feel free to create a PR that updates this comment accordingly.
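As a minimal illustration of the point above (a sketch, not taken from the original thread): Python 3 dispatches `/` to `__truediv__` and `//` to `__floordiv__`, so a `__div__` method defined on a class is never looked up.

```python
# Minimal sketch: in Python 3 the interpreter uses __truediv__ for "/"
# and __floordiv__ for "//"; __div__ is never invoked.
class Demo:
    def __div__(self, other):
        return "__div__"        # dead code on Python 3

    def __truediv__(self, other):
        return "__truediv__"    # used for /

    def __floordiv__(self, other):
        return "__floordiv__"   # used for //

d = Demo()
print(d / 2)     # -> "__truediv__"
print(d // 2)    # -> "__floordiv__"
```

This is consistent with the eventual change (the patch further below), which only rewords the comment rather than implementing a new method.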
2018-08-31T20:08:35Z
<patch> diff --git a/numpy/polynomial/_polybase.py b/numpy/polynomial/_polybase.py --- a/numpy/polynomial/_polybase.py +++ b/numpy/polynomial/_polybase.py @@ -418,7 +418,7 @@ def __mul__(self, other): return self.__class__(coef, self.domain, self.window) def __div__(self, other): - # set to __floordiv__, /, for now. + # this can be removed when python 2 support is dropped. return self.__floordiv__(other) def __truediv__(self, other): </patch>
[]
[]
huggingface__transformers-18018
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Calling `generate` on a `T5ForConditionalGeneration` returns `n` tokens but `n-1` scores ### System Info ```shell - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? @patrickvonplaten, @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch if __name__ == '__main__': torch.manual_seed(0) tokenizer = AutoTokenizer.from_pretrained('t5-small') model = AutoModelForSeq2SeqLM.from_pretrained('t5-small') input = tokenizer.encode("I enjoy walking with my cute dog", return_tensors='pt') result = model.generate( input, max_new_tokens=15, do_sample=True, return_dict_in_generate=True, output_scores=True, ) print(len(result["scores"])) for sequence in result["sequences"]: print(len(sequence)) print(tokenizer.decode(sequence)) ``` Output: ``` 15 16 <pad> Ich, liebe es, mes lustig beim laufen ``` ### Expected behavior I would have expected to have up to 15 tokens (as `max_new_tokens=15`) and `len(result["scores"]) == len(result["sequences"][0])`. However, the size of the returned sequence of tokens is always `len(result["scores"]) + 1`. In addition, if `max_new_tokens` is reached we have `len(result["sequences"][0]) == max_new_tokens + 1`. When looking at the decoded sequence, there is always a pad token at the beginning. I don't know if this is necessarily a bug but this behaviour is somewhat confusing, especially when trying to compute the probability of the sequence given scores. </issue> <code> [start of README.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 
15 --> 16 17 <p align="center"> 18 <br> 19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 20 <br> 21 <p> 22 <p align="center"> 23 <a href="https://circleci.com/gh/huggingface/transformers"> 24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 25 </a> 26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 28 </a> 29 <a href="https://huggingface.co/docs/transformers/index"> 30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 31 </a> 32 <a href="https://github.com/huggingface/transformers/releases"> 33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 34 </a> 35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 37 </a> 38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 39 </p> 40 41 <h4 align="center"> 42 <p> 43 <b>English</b> | 44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | 45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | 46 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> 47 <p> 48 </h4> 49 50 <h3 align="center"> 51 <p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p> 52 </h3> 53 54 <h3 align="center"> 55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 56 </h3> 57 58 🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. 59 60 These models can be applied on: 61 62 * 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages. 63 * 🖼️ Images, for tasks like image classification, object detection, and segmentation. 64 * 🗣️ Audio, for tasks like speech recognition and audio classification. 65 66 Transformer models can also perform tasks on **several modalities combined**, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering. 67 68 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments. 69 70 🤗 Transformers is backed by the three most popular deep learning libraries — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other. 
71 72 ## Online demos 73 74 You can test most of our models directly on their pages from the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, & an inference API](https://huggingface.co/pricing) for public and private models. 75 76 Here are a few examples: 77 78 In Natural Language Processing: 79 - [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 80 - [Name Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 81 - [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 82 - [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 83 - [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 84 - [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 85 - [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 86 87 In Computer Vision: 88 - [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224) 89 - [Object Detection with DETR](https://huggingface.co/facebook/detr-resnet-50) 90 - [Image Segmentation with DETR](https://huggingface.co/facebook/detr-resnet-50-panoptic) 91 92 In Audio: 93 - [Automatic Speech Recognition with 
Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) 94 - [Keyword Spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) 95 96 **[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities. 97 98 ## If you are looking for custom support from the Hugging Face team 99 100 <a target="_blank" href="https://huggingface.co/support"> 101 <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 102 </a><br> 103 104 ## Quick tour 105 106 To immediately use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts: 107 108 ```python 109 >>> from transformers import pipeline 110 111 # Allocate a pipeline for sentiment-analysis 112 >>> classifier = pipeline('sentiment-analysis') 113 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 114 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 115 ``` 116 117 The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%. 118 119 Many NLP tasks have a pre-trained `pipeline` ready to go. For example, we can easily extract question answers given context: 120 121 ``` python 122 >>> from transformers import pipeline 123 124 # Allocate a pipeline for question-answering 125 >>> question_answerer = pipeline('question-answering') 126 >>> question_answerer({ 127 ... 'question': 'What is the name of the repository ?', 128 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 129 ... }) 130 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 131 132 ``` 133 134 In addition to the answer, the pretrained model used here returned its confidence score, along with the start position and end position of the answer in the tokenized sentence. You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_summary). 135 136 To download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version: 137 ```python 138 >>> from transformers import AutoTokenizer, AutoModel 139 140 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 141 >>> model = AutoModel.from_pretrained("bert-base-uncased") 142 143 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 144 >>> outputs = model(**inputs) 145 ``` 146 And here is the equivalent code for TensorFlow: 147 ```python 148 >>> from transformers import AutoTokenizer, TFAutoModel 149 150 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 151 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 152 153 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 154 >>> outputs = model(**inputs) 155 ``` 156 157 The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. 
It will output a dictionary that you can use in downstream code or simply directly pass to your model using the ** argument unpacking operator. 158 159 The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use normally. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset. 160 161 ## Why should I use transformers? 162 163 1. Easy-to-use state-of-the-art models: 164 - High performance on natural language understanding & generation, computer vision, and audio tasks. 165 - Low barrier to entry for educators and practitioners. 166 - Few user-facing abstractions with just three classes to learn. 167 - A unified API for using all our pretrained models. 168 169 1. Lower compute costs, smaller carbon footprint: 170 - Researchers can share trained models instead of always retraining. 171 - Practitioners can reduce compute time and production costs. 172 - Dozens of architectures with over 20,000 pretrained models, some in more than 100 languages. 173 174 1. Choose the right framework for every part of a model's lifetime: 175 - Train state-of-the-art models in 3 lines of code. 176 - Move a single model between TF2.0/PyTorch/JAX frameworks at will. 177 - Seamlessly pick the right framework for training, evaluation and production. 178 179 1. Easily customize a model or an example to your needs: 180 - We provide examples for each architecture to reproduce the results published by its original authors. 181 - Model internals are exposed as consistently as possible. 182 - Model files can be used independently of the library for quick experiments. 183 184 ## Why shouldn't I use transformers? 185 186 - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files. 187 - The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library. 188 - While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. 189 190 ## Installation 191 192 ### With pip 193 194 This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+. 195 196 You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). 197 198 First, create a virtual environment with the version of Python you're going to use and activate it. 199 200 Then, you will need to install at least one of Flax, PyTorch or TensorFlow. 
201 Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/), [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages regarding the specific install command for your platform. 202 203 When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows: 204 205 ```bash 206 pip install transformers 207 ``` 208 209 If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source). 210 211 ### With conda 212 213 Since Transformers version v4.0.0, we now have a conda channel: `huggingface`. 214 215 🤗 Transformers can be installed using conda as follows: 216 217 ```shell script 218 conda install -c huggingface transformers 219 ``` 220 221 Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda. 222 223 ## Model architectures 224 225 **[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co) where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations). 226 227 Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 228 229 🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each them): 230 231 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 232 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 233 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 234 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 235 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 236 1. 
**[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 237 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 238 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 239 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 240 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 241 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 242 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 243 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigSicence Workshop](https://bigscience.huggingface.co/). 244 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 245 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 246 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 247 1. 
**[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 248 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 249 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 250 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 251 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 252 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 253 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 254 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 255 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 256 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 257 1. 
**[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 258 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 259 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 260 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 261 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 262 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT. 263 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 264 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 265 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 266 1. 
**[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 267 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 268 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 269 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 270 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 271 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 272 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 273 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 274 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 275 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 276 1. 
**[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 277 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 278 1. **[GroupViT](https://huggingface.co/docs/transformers/main/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 279 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 280 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 281 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 282 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 283 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 284 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 285 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 286 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 287 1. 
**[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. 288 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 289 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 290 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 291 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 292 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 293 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 294 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 295 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 296 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 297 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 298 1. 
**[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 299 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 300 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 301 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 302 1. **[MobileViT](https://huggingface.co/docs/transformers/main/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 303 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 304 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 305 1. **[MVP](https://huggingface.co/docs/transformers/main/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 306 1. **[Nezha](https://huggingface.co/docs/transformers/main/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 307 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 308 1. 
**[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 309 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 310 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 311 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 312 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 313 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 314 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 315 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 316 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. 317 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 318 1. 
**[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 319 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 320 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 321 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 322 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 323 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 324 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 325 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 326 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 327 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 328 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 329 1. 
**[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 330 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 331 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 332 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 333 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 334 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 335 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 336 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 337 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 338 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 339 1. 
**[UL2](https://huggingface.co/docs/transformers/main/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 340 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 341 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 342 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 343 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 344 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 345 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 346 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 347 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 348 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 349 1. 
**[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 350 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 351 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 352 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 353 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 354 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 355 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 356 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 357 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 358 1. 
**[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 359 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 360 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. 361 1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR. 362 363 To check if each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks). 364 365 These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://huggingface.co/docs/transformers/examples).
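As a minimal illustration of that cross-framework support, the sketch below (an illustrative example assuming the `bert-base-uncased` checkpoint and that PyTorch, TensorFlow and Flax are all installed) loads the same checkpoint with each framework's `Auto` class:

```python
>>> from transformers import AutoTokenizer, AutoModel, TFAutoModel, FlaxAutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Load the same checkpoint with the PyTorch, TensorFlow and Flax auto classes
>>> pt_model = AutoModel.from_pretrained("bert-base-uncased")
>>> tf_model = TFAutoModel.from_pretrained("bert-base-uncased")
>>> flax_model = FlaxAutoModel.from_pretrained("bert-base-uncased")

# The tokenizer can emit tensors for each backend ("pt", "tf" or "np")
>>> pt_outputs = pt_model(**tokenizer("Hello world!", return_tensors="pt"))
>>> tf_outputs = tf_model(**tokenizer("Hello world!", return_tensors="tf"))
>>> flax_outputs = flax_model(**tokenizer("Hello world!", return_tensors="np"))
```

Because the tokenizer handles the preprocessing identically in all three cases, the same code path can be reused whether a model is served from PyTorch, TensorFlow or Flax.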
366 367 368 ## Learn more 369 370 | Section | Description | 371 |-|-| 372 | [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and tutorials | 373 | [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers | 374 | [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models | 375 | [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the `Trainer` API | 376 | [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks | 377 | [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community | 378 | [Migration](https://huggingface.co/docs/transformers/migration) | Migrate to 🤗 Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` | 379 380 ## Citation 381 382 We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library: 383 ```bibtex 384 @inproceedings{wolf-etal-2020-transformers, 385 title = "Transformers: State-of-the-Art Natural Language Processing", 386 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 387 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 388 month = oct, 389 year = "2020", 390 address = "Online", 391 publisher = "Association for Computational Linguistics", 392 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 393 pages = "38--45" 394 } 395 ``` 396 [end of README.md] [start of README_ko.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 
15 --> 16 17 <p align="center"> 18 <br> 19 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 20 <br> 21 <p> 22 <p align="center"> 23 <a href="https://circleci.com/gh/huggingface/transformers"> 24 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 25 </a> 26 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 27 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 28 </a> 29 <a href="https://huggingface.co/docs/transformers/index"> 30 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 31 </a> 32 <a href="https://github.com/huggingface/transformers/releases"> 33 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 34 </a> 35 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 36 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 37 </a> 38 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 39 </p> 40 41 <h4 align="center"> 42 <p> 43 <a href="https://github.com/huggingface/transformers/">English</a> | 44 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | 45 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | 46 <b>한국어</b> 47 <p> 48 </h4> 49 50 <h3 align="center"> 51 <p> Jax, Pytorch, TensorFlow를 위한 최첨단 자연어처리</p> 52 </h3> 53 54 <h3 align="center"> 55 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 56 </h3> 57 58 🤗 Transformers는 분류, 정보 추출, 질문 답변, 요약, 번역, 문장 생성 등을 100개 이상의 언어로 수행할 수 있는 수천개의 사전학습된 모델을 제공합니다. 우리의 목표는 모두가 최첨단의 NLP 기술을 쉽게 사용하는 것입니다. 59 60 🤗 Transformers는 이러한 사전학습 모델을 빠르게 다운로드해 특정 텍스트에 사용하고, 원하는 데이터로 fine-tuning해 커뮤니티나 우리의 [모델 허브](https://huggingface.co/models)에 공유할 수 있도록 API를 제공합니다. 또한, 모델 구조를 정의하는 각 파이썬 모듈은 완전히 독립적이여서 연구 실험을 위해 손쉽게 수정할 수 있습니다. 61 62 🤗 Transformers는 가장 유명한 3개의 딥러닝 라이브러리를 지원합니다. 이들은 서로 완벽히 연동됩니다 — [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/). 간단하게 이 라이브러리 중 하나로 모델을 학습하고, 또 다른 라이브러리로 추론을 위해 모델을 불러올 수 있습니다. 63 64 ## 온라인 데모 65 66 대부분의 모델을 [모델 허브](https://huggingface.co/models) 페이지에서 바로 테스트해볼 수 있습니다. 공개 및 비공개 모델을 위한 [비공개 모델 호스팅, 버전 관리, 추론 API](https://huggingface.co/pricing)도 제공합니다. 
67 68 예시: 69 - [BERT로 마스킹된 단어 완성하기](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 70 - [Electra를 이용한 개체명 인식](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 71 - [GPT-2로 텍스트 생성하기](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 72 - [RoBERTa로 자연어 추론하기](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 73 - [BART를 이용한 요약](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 74 - [DistilBERT를 이용한 질문 답변](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 75 - [T5로 번역하기](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 76 77 **[Transformer와 글쓰기](https://transformer.huggingface.co)** 는 이 저장소의 텍스트 생성 능력에 관한 Hugging Face 팀의 공식 데모입니다. 78 79 ## Hugging Face 팀의 커스텀 지원을 원한다면 80 81 <a target="_blank" href="https://huggingface.co/support"> 82 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 83 </a><br> 84 85 ## 퀵 투어 86 87 원하는 텍스트에 바로 모델을 사용할 수 있도록, 우리는 `pipeline` API를 제공합니다. Pipeline은 사전학습 모델과 그 모델을 학습할 때 적용한 전처리 방식을 하나로 합칩니다. 
다음은 긍정적인 텍스트와 부정적인 텍스트를 분류하기 위해 pipeline을 사용한 간단한 예시입니다: 88 89 ```python 90 >>> from transformers import pipeline 91 92 # Allocate a pipeline for sentiment-analysis 93 >>> classifier = pipeline('sentiment-analysis') 94 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 95 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 96 ``` 97 98 코드의 두번째 줄은 pipeline이 사용하는 사전학습 모델을 다운로드하고 캐시로 저장합니다. 세번째 줄에선 그 모델이 주어진 텍스트를 평가합니다. 여기서 모델은 99.97%의 확률로 텍스트가 긍정적이라고 평가했습니다. 99 100 많은 NLP 과제들을 `pipeline`으로 바로 수행할 수 있습니다. 예를 들어, 질문과 문맥이 주어지면 손쉽게 답변을 추출할 수 있습니다: 101 102 ``` python 103 >>> from transformers import pipeline 104 105 # Allocate a pipeline for question-answering 106 >>> question_answerer = pipeline('question-answering') 107 >>> question_answerer({ 108 ... 'question': 'What is the name of the repository ?', 109 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 110 ... }) 111 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 112 113 ``` 114 115 답변뿐만 아니라, 여기에 사용된 사전학습 모델은 확신도와 토크나이즈된 문장 속 답변의 시작점, 끝점까지 반환합니다. [이 튜토리얼](https://huggingface.co/docs/transformers/task_summary)에서 `pipeline` API가 지원하는 다양한 과제를 확인할 수 있습니다. 116 117 코드 3줄로 원하는 과제에 맞게 사전학습 모델을 다운로드 받고 사용할 수 있습니다. 다음은 PyTorch 버전입니다: 118 ```python 119 >>> from transformers import AutoTokenizer, AutoModel 120 121 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 122 >>> model = AutoModel.from_pretrained("bert-base-uncased") 123 124 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 125 >>> outputs = model(**inputs) 126 ``` 127 다음은 TensorFlow 버전입니다: 128 ```python 129 >>> from transformers import AutoTokenizer, TFAutoModel 130 131 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 132 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 133 134 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 135 >>> outputs = model(**inputs) 136 ``` 137 138 토크나이저는 사전학습 모델의 모든 전처리를 책임집니다. 그리고 (위의 예시처럼) 1개의 스트링이나 리스트도 처리할 수 있습니다. 토크나이저는 딕셔너리를 반환하는데, 이는 다운스트림 코드에 사용하거나 언패킹 연산자 ** 를 이용해 모델에 바로 전달할 수도 있습니다. 139 140 모델 자체는 일반적으로 사용되는 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)나 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)입니다. [이 튜토리얼](https://huggingface.co/transformers/training.html)은 이러한 모델을 표준적인 PyTorch나 TensorFlow 학습 과정에서 사용하는 방법, 또는 새로운 데이터로 fine-tune하기 위해 `Trainer` API를 사용하는 방법을 설명해줍니다. 141 142 ## 왜 transformers를 사용해야 할까요? 143 144 1. 손쉽게 사용할 수 있는 최첨단 모델: 145 - NLU와 NLG 과제에서 뛰어난 성능을 보입니다. 146 - 교육자 실무자에게 진입 장벽이 낮습니다. 147 - 3개의 클래스만 배우면 바로 사용할 수 있습니다. 148 - 하나의 API로 모든 사전학습 모델을 사용할 수 있습니다. 149 150 1. 더 적은 계산 비용, 더 적은 탄소 발자국: 151 - 연구자들은 모델을 계속 다시 학습시키는 대신 학습된 모델을 공유할 수 있습니다. 152 - 실무자들은 학습에 필요한 시간과 비용을 절약할 수 있습니다. 153 - 수십개의 모델 구조, 2,000개 이상의 사전학습 모델, 100개 이상의 언어로 학습된 모델 등. 154 155 1. 모델의 각 생애주기에 적합한 프레임워크: 156 - 코드 3줄로 최첨단 모델을 학습하세요. 157 - 자유롭게 모델을 TF2.0나 PyTorch 프레임워크로 변환하세요. 158 - 학습, 평가, 공개 등 각 단계에 맞는 프레임워크를 원하는대로 선택하세요. 159 160 1. 필요한 대로 모델이나 예시를 커스터마이즈하세요: 161 - 우리는 저자가 공개한 결과를 재현하기 위해 각 모델 구조의 예시를 제공합니다. 162 - 모델 내부 구조는 가능한 일관적으로 공개되어 있습니다. 163 - 빠른 실험을 위해 모델 파일은 라이브러리와 독립적으로 사용될 수 있습니다. 164 165 ## 왜 transformers를 사용하지 말아야 할까요? 166 167 - 이 라이브러리는 신경망 블록을 만들기 위한 모듈이 아닙니다. 연구자들이 여러 파일을 살펴보지 않고 바로 각 모델을 사용할 수 있도록, 모델 파일 코드의 추상화 수준을 적정하게 유지했습니다. 168 - 학습 API는 모든 모델에 적용할 수 있도록 만들어지진 않았지만, 라이브러리가 제공하는 모델들에 적용할 수 있도록 최적화되었습니다. 일반적인 머신 러닝을 위해선, 다른 라이브러리를 사용하세요. 
169 - 가능한 많은 사용 예시를 보여드리고 싶어서, [예시 폴더](https://github.com/huggingface/transformers/tree/main/examples)의 스크립트를 준비했습니다. 이 스크립트들을 수정 없이 특정한 문제에 바로 적용하지 못할 수 있습니다. 필요에 맞게 일부 코드를 수정해야 할 수 있습니다. 170 171 ## 설치 172 173 ### pip로 설치하기 174 175 이 저장소는 Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+, TensorFlow 2.3+에서 테스트 되었습니다. 176 177 [가상 환경](https://docs.python.org/3/library/venv.html)에 🤗 Transformers를 설치하세요. Python 가상 환경에 익숙하지 않다면, [사용자 가이드](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)를 확인하세요. 178 179 우선, 사용할 Python 버전으로 가상 환경을 만들고 실행하세요. 180 181 그 다음, Flax, PyTorch, TensorFlow 중 적어도 하나는 설치해야 합니다. 182 플랫폼에 맞는 설치 명령어를 확인하기 위해 [TensorFlow 설치 페이지](https://www.tensorflow.org/install/), [PyTorch 설치 페이지](https://pytorch.org/get-started/locally/#start-locally), [Flax 설치 페이지](https://github.com/google/flax#quick-install)를 확인하세요. 183 184 이들 중 적어도 하나가 설치되었다면, 🤗 Transformers는 다음과 같이 pip을 이용해 설치할 수 있습니다: 185 186 ```bash 187 pip install transformers 188 ``` 189 190 예시들을 체험해보고 싶거나, 최최최첨단 코드를 원하거나, 새로운 버전이 나올 때까지 기다릴 수 없다면 [라이브러리를 소스에서 바로 설치](https://huggingface.co/docs/transformers/installation#installing-from-source)하셔야 합니다. 191 192 ### conda로 설치하기 193 194 Transformers 버전 v4.0.0부터, conda 채널이 생겼습니다: `huggingface`. 195 196 🤗 Transformers는 다음과 같이 conda로 설치할 수 있습니다: 197 198 ```shell script 199 conda install -c huggingface transformers 200 ``` 201 202 Flax, PyTorch, TensorFlow 설치 페이지에서 이들을 conda로 설치하는 방법을 확인하세요. 203 204 ## 모델 구조 205 206 **🤗 Transformers가 제공하는 [모든 모델 체크포인트](https://huggingface.co/models)** 는 huggingface.co [모델 허브](https://huggingface.co)에 완벽히 연동되어 있습니다. [개인](https://huggingface.co/users)과 [기관](https://huggingface.co/organizations)이 모델 허브에 직접 업로드할 수 있습니다. 207 208 현재 사용 가능한 모델 체크포인트의 개수: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 209 210 🤗 Transformers는 다음 모델들을 제공합니다 (각 모델의 요약은 [여기](https://huggingface.co/docs/transformers/model_summary)서 확인하세요): 211 212 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 213 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 214 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 215 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 216 1. 
**[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 217 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 218 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 219 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen. 220 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 221 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 222 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 223 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston. 224 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/). 225 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry. 226 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. 227 1.
**[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot. 228 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 229 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 230 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 231 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 232 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 233 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 234 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 235 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 236 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 237 1. 
**[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 238 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 239 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 240 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 241 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 242 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 243 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT. 244 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 245 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 246 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 247 1. 
**[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 248 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 249 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 250 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 251 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 252 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 253 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 254 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 255 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 256 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 257 1. 
**[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 258 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 259 1. **[GroupViT](https://huggingface.co/docs/transformers/main/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 260 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 261 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 262 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 263 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 264 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 265 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 266 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 267 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 268 1. 
**[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. 269 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 270 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 271 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 272 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 273 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 274 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 275 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 276 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. 277 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 278 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 279 1. 
**[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 280 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 281 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 282 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 283 1. **[MobileViT](https://huggingface.co/docs/transformers/main/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 284 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 285 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 286 1. **[MVP](https://huggingface.co/docs/transformers/main/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 287 1. **[Nezha](https://huggingface.co/docs/transformers/main/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 288 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 289 1. 
**[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 290 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 291 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 292 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 293 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 294 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 295 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 296 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 297 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. 298 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 299 1. 
**[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 300 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 301 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 302 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 303 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 304 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 305 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 306 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 307 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 308 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 309 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 310 1. 
**[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 311 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 312 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 313 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 314 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 315 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 316 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 317 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 318 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 319 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 320 1. 
**[UL2](https://huggingface.co/docs/transformers/main/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 321 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 322 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 323 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 324 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 325 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 326 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 327 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 328 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 329 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 330 1. 
**[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 331 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 332 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 333 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 334 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 335 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 336 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 337 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 338 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 339 1. 
**[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 340 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 341 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. 342 1. 새로운 모델을 올리고 싶나요? 우리가 **상세한 가이드와 템플릿** 으로 새로운 모델을 올리도록 도와드릴게요. 가이드와 템플릿은 이 저장소의 [`templates`](./templates) 폴더에서 확인하실 수 있습니다. [컨트리뷰션 가이드라인](./CONTRIBUTING.md)을 꼭 확인해주시고, PR을 올리기 전에 메인테이너에게 연락하거나 이슈를 오픈해 피드백을 받으시길 바랍니다. 343 344 각 모델이 Flax, PyTorch, TensorFlow으로 구현되었는지 또는 🤗 Tokenizers 라이브러리가 지원하는 토크나이저를 사용하는지 확인하려면, [이 표](https://huggingface.co/docs/transformers/index#supported-frameworks)를 확인하세요. 345 346 이 구현은 여러 데이터로 검증되었고 (예시 스크립트를 참고하세요) 오리지널 구현의 성능과 같아야 합니다. [도큐먼트](https://huggingface.co/docs/transformers/examples)의 Examples 섹션에서 성능에 대한 자세한 설명을 확인할 수 있습니다. 347 348 ## 더 알아보기 349 350 | 섹션 | 설명 | 351 |-|-| 352 | [도큐먼트](https://huggingface.co/transformers/) | 전체 API 도큐먼트와 튜토리얼 | 353 | [과제 요약](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers가 지원하는 과제들 | 354 | [전처리 튜토리얼](https://huggingface.co/docs/transformers/preprocessing) | `Tokenizer` 클래스를 이용해 모델을 위한 데이터 준비하기 | 355 | [학습과 fine-tuning](https://huggingface.co/docs/transformers/training) | 🤗 Transformers가 제공하는 모델 PyTorch/TensorFlow 학습 과정과 `Trainer` API에서 사용하기 | 356 | [퀵 투어: Fine-tuning/사용 스크립트](https://github.com/huggingface/transformers/tree/main/examples) | 다양한 과제에서 모델 fine-tuning하는 예시 스크립트 | 357 | [모델 공유 및 업로드](https://huggingface.co/docs/transformers/model_sharing) | 커뮤니티에 fine-tune된 모델을 업로드 및 공유하기 | 358 | [마이그레이션](https://huggingface.co/docs/transformers/migration) | `pytorch-transformers`나 `pytorch-pretrained-bert`에서 🤗 Transformers로 이동하기| 359 360 ## 인용 361 362 🤗 Transformers 라이브러리를 인용하고 싶다면, 이 [논문](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)을 인용해 주세요: 363 ```bibtex 364 @inproceedings{wolf-etal-2020-transformers, 365 title = "Transformers: State-of-the-Art Natural Language Processing", 366 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 367 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 368 month = oct, 369 year = "2020", 370 address = "Online", 371 publisher = "Association for Computational Linguistics", 372 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 373 pages = "38--45" 374 } 375 ``` 376 [end of README_ko.md] [start of README_zh-hans.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved.
3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 15 --> 16 17 <!--- 18 A useful guide for English-Chinese translation of Hugging Face documentation 19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多种语言; 使用 transformers 库。 20 - Use square quotes, e.g.,「引用」 21 22 Dictionary 23 24 Hugging Face: 抱抱脸 25 token: 词符(并用括号标注原英文) 26 tokenize: 词符化(并用括号标注原英文) 27 tokenizer: 词符化器(并用括号标注原英文) 28 transformer: transformer(不翻译) 29 pipeline: 流水线 30 API: API (不翻译) 31 inference: 推理 32 Trainer: 训练器。当作为类名出现时不翻译。 33 pretrained/pretrain: 预训练 34 finetune: 微调 35 community: 社区 36 example: 当特指仓库中 example 目录时翻译为「用例」 37 Python data structures (e.g., list, set, dict): 翻译为列表,集合,词典,并用括号标注原英文 38 NLP/Natural Language Processing: 以 NLP 出现时不翻译,以 Natural Language Processing 出现时翻译为自然语言处理 39 checkpoint: 检查点 40 --> 41 42 <p align="center"> 43 <br> 44 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 45 <br> 46 <p> 47 <p align="center"> 48 <a href="https://circleci.com/gh/huggingface/transformers"> 49 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 50 </a> 51 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 52 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 53 </a> 54 <a href="https://huggingface.co/docs/transformers/index"> 55 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 56 </a> 57 <a href="https://github.com/huggingface/transformers/releases"> 58 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 59 </a> 60 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 61 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 62 </a> 63 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 64 </p> 65 66 <h4 align="center"> 67 <p> 68 <a href="https://github.com/huggingface/transformers/">English</a> | 69 <b>简体中文</b> | 70 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | 71 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> 72 <p> 73 </h4> 74 75 <h3 align="center"> 76 <p>为 Jax、PyTorch 和 TensorFlow 打造的先进的自然语言处理</p> 77 </h3> 78 79 <h3 align="center"> 80 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 81 </h3> 82 83 🤗 Transformers 提供了数以千计的预训练模型,支持 100 多种语言的文本分类、信息抽取、问答、摘要、翻译、文本生成。它的宗旨让最先进的 NLP 技术人人易用。 84 85 🤗 Transformers 提供了便于快速下载和使用的API,让你可以把预训练模型用在给定文本、在你的数据集上微调然后通过 [model hub](https://huggingface.co/models) 与社区共享。同时,每个定义的 Python 模块均完全独立,方便修改和快速研究实验。 86 87 🤗 Transformers 支持三个最热门的深度学习库: 
[Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — 并与之无缝整合。你可以直接使用一个框架训练你的模型然后用另一个加载和推理。 88 89 ## 在线演示 90 91 你可以直接在模型页面上测试大多数 [model hub](https://huggingface.co/models) 上的模型。 我们也提供了 [私有模型托管、模型版本管理以及推理API](https://huggingface.co/pricing)。 92 93 这里是一些例子: 94 - [用 BERT 做掩码填词](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 95 - [用 Electra 做命名实体识别](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 96 - [用 GPT-2 做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 97 - [用 RoBERTa 做自然语言推理](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 98 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 99 - [用 DistilBERT 做问答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 100 - [用 T5 做翻译](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 101 102 **[Write With Transformer](https://transformer.huggingface.co)**,由抱抱脸团队打造,是一个文本生成的官方 demo。 103 104 ## 如果你在寻找由抱抱脸团队提供的定制化支持服务 105 106 <a target="_blank" href="https://huggingface.co/support"> 107 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 108 </a><br> 109 110 ## 快速上手 111 112 我们为快速使用模型提供了 `pipeline` 
(流水线)API。流水线聚合了预训练模型和对应的文本预处理。下面是一个快速使用流水线去判断正负面情绪的例子: 113 114 ```python 115 >>> from transformers import pipeline 116 117 # 使用情绪分析流水线 118 >>> classifier = pipeline('sentiment-analysis') 119 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 120 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 121 ``` 122 123 第二行代码下载并缓存了流水线使用的预训练模型,而第三行代码则在给定的文本上进行了评估。这里的答案“正面” (positive) 具有 99.97% 的置信度。 124 125 许多的 NLP 任务都有开箱即用的预训练流水线。比如说,我们可以轻松的从给定文本中抽取问题答案: 126 127 ``` python 128 >>> from transformers import pipeline 129 130 # 使用问答流水线 131 >>> question_answerer = pipeline('question-answering') 132 >>> question_answerer({ 133 ... 'question': 'What is the name of the repository ?', 134 ... 'context': 'Pipeline has been included in the huggingface/transformers repository' 135 ... }) 136 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} 137 138 ``` 139 140 除了给出答案,预训练模型还给出了对应的置信度分数、答案在词符化 (tokenized) 后的文本中开始和结束的位置。你可以从[这个教程](https://huggingface.co/docs/transformers/task_summary)了解更多流水线API支持的任务。 141 142 要在你的任务上下载和使用任意预训练模型也很简单,只需三行代码。这里是 PyTorch 版的示例: 143 ```python 144 >>> from transformers import AutoTokenizer, AutoModel 145 146 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 147 >>> model = AutoModel.from_pretrained("bert-base-uncased") 148 149 >>> inputs = tokenizer("Hello world!", return_tensors="pt") 150 >>> outputs = model(**inputs) 151 ``` 152 这里是等效的 TensorFlow 代码: 153 ```python 154 >>> from transformers import AutoTokenizer, TFAutoModel 155 156 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") 157 >>> model = TFAutoModel.from_pretrained("bert-base-uncased") 158 159 >>> inputs = tokenizer("Hello world!", return_tensors="tf") 160 >>> outputs = model(**inputs) 161 ``` 162 163 词符化器 (tokenizer) 为所有的预训练模型提供了预处理,并可以直接对单个字符串进行调用(比如上面的例子)或对列表 (list) 调用。它会输出一个你可以在下游代码里使用或直接通过 `**` 解包表达式传给模型的词典 (dict)。 164 165 模型本身是一个常规的 [PyTorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取决于你的后端),可以常规方式使用。 [这个教程](https://huggingface.co/transformers/training.html)解释了如何将这样的模型整合到经典的 PyTorch 或 TensorFlow 训练循环中,或是如何使用我们的 `Trainer`(训练器)API 来在一个新的数据集上快速微调。
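词符化器返回的词典可以按原样通过 `**` 解包传入模型;对句子列表调用时,配合 `padding` 即可得到批量张量。下面是一个最小的示意性例子(其中的示例句子和 `AutoModelForSequenceClassification` 这一模型类仅为演示而假设选用):

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

>>> # 对列表 (list) 调用词符化器;padding=True 会把一批句子补齐成同样长度
>>> batch = tokenizer(
...     ["Hello world!", "Transformers is great."],
...     padding=True,
...     truncation=True,
...     return_tensors="pt",
... )

>>> # 返回的词典 (dict) 直接用 ** 解包传给模型;模型就是普通的 PyTorch nn.Module
>>> with torch.no_grad():
...     outputs = model(**batch)
>>> outputs.logits.shape
torch.Size([2, 2])
```

注意 `padding=True` 和 `truncation=True` 只是为了把不同长度的句子对齐成一个张量批量,单句调用时可以省略。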
166 167 ## 为什么要用 transformers? 168 169 1. 便于使用的先进模型: 170 - NLU 和 NLG 上表现优越 171 - 对教学和实践友好且低门槛 172 - 高级抽象,只需了解三个类 173 - 对所有模型统一的API 174 175 1. 更低计算开销,更少的碳排放: 176 - 研究人员可以分享已训练的模型而非次次从头开始训练 177 - 工程师可以减少计算用时和生产环境开销 178 - 数十种模型架构、两千多个预训练模型、100多种语言支持 179 180 1. 对于模型生命周期的每一个部分都面面俱到: 181 - 训练先进的模型,只需 3 行代码 182 - 模型在不同深度学习框架间任意转移,随你心意 183 - 为训练、评估和生产选择最适合的框架,衔接无缝 184 185 1. 为你的需求轻松定制专属模型和用例: 186 - 我们为每种模型架构提供了多个用例来复现原论文结果 187 - 模型内部结构保持透明一致 188 - 模型文件可单独使用,方便魔改和快速实验 189 190 ## 什么情况下我不该用 transformers? 191 192 - 本库并不是模块化的神经网络工具箱。模型文件中的代码特意呈若璞玉,未经额外抽象封装,以便研究人员快速迭代魔改而不致溺于抽象和文件跳转之中。 193 - `Trainer` API 并非兼容任何模型,只为本库之模型优化。若是在寻找适用于通用机器学习的训练循环实现,请另觅他库。 194 - 尽管我们已尽力而为,[examples 目录](https://github.com/huggingface/transformers/tree/main/examples)中的脚本也仅为用例而已。对于你的特定问题,它们并不一定开箱即用,可能需要改几行代码以适之。 195 196 ## 安装 197 198 ### 使用 pip 199 200 这个仓库已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下经过测试。 201 202 你可以在[虚拟环境](https://docs.python.org/3/library/venv.html)中安装 🤗 Transformers。如果你还不熟悉 Python 的虚拟环境,请阅此[用户说明](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。 203 204 首先,用你打算使用的版本的 Python 创建一个虚拟环境并激活。 205 206 然后,你需要安装 Flax、PyTorch 或 TensorFlow 其中之一。关于在你使用的平台上安装这些框架,请参阅 [TensorFlow 安装页](https://www.tensorflow.org/install/), [PyTorch 安装页](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安装页](https://github.com/google/flax#quick-install)。 207 208 当这些后端之一安装成功后, 🤗 Transformers 可依此安装: 209 210 ```bash 211 pip install transformers 212 ``` 213 214 如果你想要试试用例或者想在正式发布前使用最新的开发中代码,你得[从源代码安装](https://huggingface.co/docs/transformers/installation#installing-from-source)。 215 216 ### 使用 conda 217 218 自 Transformers 4.0.0 版始,我们有了一个 conda 频道: `huggingface`。 219 220 🤗 Transformers 可以通过 conda 依此安装: 221 222 ```shell script 223 conda install -c huggingface transformers 224 ``` 225 226 要通过 conda 安装 Flax、PyTorch 或 TensorFlow 其中之一,请参阅它们各自安装页的说明。 227 228 ## 模型架构 229 230 🤗 Transformers 支持的[**所有的模型检查点**](https://huggingface.co/models)由[用户](https://huggingface.co/users)和[组织](https://huggingface.co/organizations)上传,均与 huggingface.co [model hub](https://huggingface.co) 无缝整合。 231 232 目前的检查点数量: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 233 234 🤗 Transformers 目前支持如下的架构(模型概述请阅[这里](https://huggingface.co/docs/transformers/model_summary)): 235 236 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (来自 Google Research and the Toyota Technological Institute at Chicago) 伴随论文 [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), 由 Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut 发布。 237 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (来自 Facebook) 伴随论文 [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) 由 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer 发布。 238 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (来自 École polytechnique) 伴随论文 [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) 由 Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis 发布。 239 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (来自 VinAI Research) 伴随论文 [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) 由 Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen 发布。 240 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (来自 Microsoft) 伴随论文 [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) 由 Hangbo Bao, Li Dong, Furu Wei 发布。 241 1.
**[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (来自 Google) 伴随论文 [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) 由 Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova 发布。 242 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (来自 Google) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。 243 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (来自 VinAI Research) 伴随论文 [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) 由 Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen 发布。 244 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。 245 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (来自 Google Research) 伴随论文 [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) 由 Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed 发布。 246 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。 247 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (来自 Facebook) 伴随论文 [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) 由 Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston 发布。 248 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigSicence Workshop](https://bigscience.huggingface.co/). 249 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (来自 Alexa) 伴随论文 [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) 由 Adrian de Wynter and Daniel J. Perry 发布。 250 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (来自 Google Research) 伴随论文 [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) 由 Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel 发布。 251 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (来自 Inria/Facebook/Sorbonne) 伴随论文 [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) 由 Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot 发布。 252 1. 
**[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (来自 Google Research) 伴随论文 [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) 由 Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting 发布。 253 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (来自 OpenAI) 伴随论文 [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) 由 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever 发布。 254 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (来自 Salesforce) 伴随论文 [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) 由 Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong 发布。 255 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (来自 YituTech) 伴随论文 [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) 由 Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan 发布。 256 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (来自 Facebook AI) 伴随论文 [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) 由 Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie 发布。 257 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (来自 Tsinghua University) 伴随论文 [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) 由 Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun 发布。 258 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (来自 Salesforce) 伴随论文 [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) 由 Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher 发布。 259 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (来自 Microsoft) 伴随论文 [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) 由 Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang 发布。 260 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (来自 Facebook) 伴随论文 [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) 由 Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli 发布。 261 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。 262 1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (来自 Microsoft) 伴随论文 [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) 由 Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen 发布。 263 1. 
**[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (来自 Berkeley/Facebook/Google) 伴随论文 [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) 由 Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch 发布。 264 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (来自 Facebook) 伴随论文 [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) 由 Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou 发布。 265 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (来自 Facebook) 伴随论文 [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) 由 Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko 发布。 266 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (来自 Microsoft Research) 伴随论文 [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) 由 Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan 发布。 267 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (来自 HuggingFace), 伴随论文 [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) 由 Victor Sanh, Lysandre Debut and Thomas Wolf 发布。 同样的方法也应用于压缩 GPT-2 到 [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa 到 [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT 到 [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) 和德语版 DistilBERT。 268 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (来自 Microsoft Research) 伴随论文 [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) 由 Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei 发布。 269 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (来自 Facebook) 伴随论文 [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) 由 Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih 发布。 270 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (来自 Intel Labs) 伴随论文 [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) 由 René Ranftl, Alexey Bochkovskiy, Vladlen Koltun 发布。 271 1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (来自 Google Research/Stanford University) 伴随论文 [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) 由 Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning 发布。 272 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (来自 Google Research) 伴随论文 [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) 由 Sascha Rothe, Shashi Narayan, Aliaksei Severyn 发布。 273 1. 
**[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (来自 CNRS) 伴随论文 [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) 由 Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab 发布。 274 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (来自 Facebook AI) 伴随论文 [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) 由 Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela 发布。 275 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (来自 Google Research) 伴随论文 [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) 由 James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon 发布。 276 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (来自 CMU/Google Brain) 伴随论文 [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) 由 Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le 发布。 277 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (来自 KAIST) 伴随论文 [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) 由 Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim 发布。 278 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (来自 OpenAI) 伴随论文 [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) 由 Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever 发布。 279 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (来自 EleutherAI) 随仓库 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) 发布。作者为 Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy 发布。 280 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 281 1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (来自 OpenAI) 伴随论文 [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) 由 Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever** 发布。 282 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (来自 EleutherAI) 伴随论文 [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) 由 Ben Wang and Aran Komatsuzaki 发布。 283 1. **[GroupViT](https://huggingface.co/docs/transformers/main/model_doc/groupvit)** (来自 UCSD, NVIDIA) 伴随论文 [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) 由 Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang 发布。 284 1. 
**[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (来自 Facebook) 伴随论文 [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) 由 Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed 发布。 285 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (来自 Berkeley) 伴随论文 [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) 由 Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer 发布。 286 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (来自 OpenAI) 伴随论文 [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) 由 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever 发布。 287 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) 由 Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou 发布。 288 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) 由 Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou 发布。 289 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (来自 Microsoft Research Asia) 伴随论文 [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) 由 Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei 发布。 290 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (来自 Microsoft Research Asia) 伴随论文 [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) 由 Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei 发布。 291 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。 292 1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (来自 Meta AI) 伴随论文 [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) 由 Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze 发布。 293 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (来自 AllenAI) 伴随论文 [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) 由 Iz Beltagy, Matthew E. Peters, Arman Cohan 发布。 294 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (来自 Google AI) released 伴随论文 [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) 由 Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang 发布。 295 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (来自 Studio Ousia) 伴随论文 [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) 由 Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto 发布。 296 1. 
**[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (来自 UNC Chapel Hill) 伴随论文 [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) 由 Hao Tan and Mohit Bansal 发布。 297 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (来自 Facebook) 伴随论文 [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) 由 Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert 发布。 298 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (来自 Facebook) 伴随论文 [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) 由 Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin 发布。 299 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** 用 [OPUS](http://opus.nlpl.eu/) 数据训练的机器翻译模型由 Jörg Tiedemann 发布。[Marian Framework](https://marian-nmt.github.io/) 由微软翻译团队开发。 300 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov 301 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) 由 Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer 发布。 302 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (来自 Facebook) 伴随论文 [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) 由 Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan 发布。 303 1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。 304 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (来自 NVIDIA) 伴随论文 [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) 由 Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro 发布。 305 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (来自 Studio Ousia) 伴随论文 [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) 由 Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka 发布。 306 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (来自 CMU/Google Brain) 伴随论文 [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) 由 Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou 发布。 307 1. 
**[MobileViT](https://huggingface.co/docs/transformers/main/model_doc/mobilevit)** (来自 Apple) 伴随论文 [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) 由 Sachin Mehta and Mohammad Rastegari 发布。 308 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (来自 Microsoft Research) 伴随论文 [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) 由 Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu 发布。 309 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (来自 Google AI) 伴随论文 [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) 由 Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel 发布。 310 1. **[MVP](https://huggingface.co/docs/transformers/main/model_doc/mvp)** (来自 中国人民大学 AI Box) 伴随论文 [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) 由 Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen 发布。 311 1. **[Nezha](https://huggingface.co/docs/transformers/main/model_doc/nezha)** (来自华为诺亚方舟实验室) 伴随论文 [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) 由 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu 发布。 312 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (来自 the University of Wisconsin - Madison) 伴随论文 [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) 由 Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh 发布。 313 1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (来自 Meta AI) 伴随论文 [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) 由 Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al 发布。 314 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (来自 Google) 伴随论文 [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) 由 Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu 发布。 315 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (来自 Deepmind) 伴随论文 [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) 由 Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira 发布。 316 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (来自 VinAI Research) 伴随论文 [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) 由 Dat Quoc Nguyen and Anh Tuan Nguyen 发布。 317 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (来自 UCLA NLP) 伴随论文 [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) 由 Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang 发布。 318 1. 
**[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (来自 Sea AI Labs) 伴随论文 [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) 由 Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng 发布。 319 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。 320 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (来自 NVIDIA) 伴随论文 [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) 由 Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius 发布。 321 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (来自 Facebook) 伴随论文 [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) 由 Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela 发布。 322 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (来自 Google Research) 伴随论文 [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) 由 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang 发布。 323 1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (来自 Google Research) 伴随论文 [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) 由 Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya 发布。 324 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 325 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (来自 Google Research) 伴随论文 [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) 由 Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder 发布。 326 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 327 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (来自 Facebook), 伴随论文 [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) 由 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov 发布。 328 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (来自 ZhuiyiTechnology), 伴随论文 [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) 由 Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu 发布。 329 1. 
**[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (来自 NVIDIA) 伴随论文 [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) 由 Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo 发布。 330 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。 331 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (来自 ASAPP) 伴随论文 [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) 由 Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi 发布。 332 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (来自 Facebook), 伴随论文 [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino 发布。 333 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (来自 Facebook) 伴随论文 [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) 由 Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau 发布。 334 1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (来自 Tel Aviv University) 伴随论文 [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) 由 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy 发布。 335 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (来自 Berkeley) 伴随论文 [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) 由 Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer 发布。 336 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (来自 Microsoft) 伴随论文 [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) 由 Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo 发布。 337 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI) 伴随论文 [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。 338 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (来自 Google AI) 伴随论文 [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) 由 Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu 发布。 339 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (来自 Google AI) 伴随论文 [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) 由 Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos 发布。 340 1. 
**[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (来自 Microsoft Research) 伴随论文 [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) 由 Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou 发布。 341 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 342 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (来自 Google/CMU) 伴随论文 [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) 由 Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov 发布。 343 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (来自 Microsoft) 伴随论文 [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) 由 Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei 发布。 344 1. **[UL2](https://huggingface.co/docs/transformers/main/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 345 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (来自 Microsoft Research) 伴随论文 [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) 由 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang 发布。 346 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (来自 Microsoft Research) 伴随论文 [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) 由 Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu 发布。 347 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (来自 Tsinghua University and Nankai University) 伴随论文 [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) 由 Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu 发布。 348 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (来自 NAVER AI Lab/Kakao Enterprise/Kakao Brain) 伴随论文 [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) 由 Wonjae Kim, Bokyung Son, Ildoo Kim 发布。 349 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (来自 Google AI) 伴随论文 [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) 由 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby 发布。 350 1. 
**[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (来自 UCLA NLP) 伴随论文 [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) 由 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang 发布。 351 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (来自 Meta AI) 伴随论文 [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) 由 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick 发布。 352 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (来自 Facebook AI) 伴随论文 [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) 由 Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli 发布。 353 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (来自 Facebook AI) 伴随论文 [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) 由 Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino 发布。 354 1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (来自 Facebook AI) 伴随论文 [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) 由 Qiantong Xu, Alexei Baevski, Michael Auli 发布。 355 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 356 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 357 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (来自 Facebook) 伴随论文 [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) 由 Guillaume Lample and Alexis Conneau 发布。 358 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (来自 Microsoft Research) 伴随论文 [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) 由 Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou 发布。 359 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (来自 Facebook AI), 伴随论文 [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) 由 Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov 发布。 360 1. 
**[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (来自 Facebook AI) 伴随论文 [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) 由 Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau 发布。
361 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (来自 Google/CMU) 伴随论文 [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) 由 Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le 发布。
362 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (来自 Facebook AI) 伴随论文 [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) 由 Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli 发布。
363 1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (来自 Facebook AI) 伴随论文 [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) 由 Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli 发布。
364 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (来自 Huazhong University of Science & Technology) 伴随论文 [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) 由 Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu 发布。
365 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (来自 the University of Wisconsin - Madison) 伴随论文 [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) 由 Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh 发布。
366 1. 
想要贡献新的模型?我们这里有一份**详细指引和模板**来引导你添加新的模型。你可以在 [`templates`](./templates) 目录中找到他们。记得查看 [贡献指南](./CONTRIBUTING.md) 并在开始写 PR 前联系维护人员或开一个新的 issue 来获得反馈。 367 368 要检查某个模型是否已有 Flax、PyTorch 或 TensorFlow 的实现,或其是否在 🤗 Tokenizers 库中有对应词符化器(tokenizer),敬请参阅[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。 369 370 这些实现均已于多个数据集测试(请参看用例脚本)并应于原版实现表现相当。你可以在用例文档的[此节](https://huggingface.co/docs/transformers/examples)中了解表现的细节。 371 372 373 ## 了解更多 374 375 | 章节 | 描述 | 376 |-|-| 377 | [文档](https://huggingface.co/transformers/) | 完整的 API 文档和教程 | 378 | [任务总结](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支持的任务 | 379 | [预处理教程](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 来为模型准备数据 | 380 | [训练和微调](https://huggingface.co/docs/transformers/training) | 在 PyTorch/TensorFlow 的训练循环或 `Trainer` API 中使用 🤗 Transformers 提供的模型 | 381 | [快速上手:微调和用例脚本](https://github.com/huggingface/transformers/tree/main/examples) | 为各种任务提供的用例脚本 | 382 | [模型分享和上传](https://huggingface.co/docs/transformers/model_sharing) | 和社区上传和分享你微调的模型 | 383 | [迁移](https://huggingface.co/docs/transformers/migration) | 从 `pytorch-transformers` 或 `pytorch-pretrained-bert` 迁移到 🤗 Transformers | 384 385 ## 引用 386 387 我们已将此库的[论文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式发表,如果你使用了 🤗 Transformers 库,请引用: 388 ```bibtex 389 @inproceedings{wolf-etal-2020-transformers, 390 title = "Transformers: State-of-the-Art Natural Language Processing", 391 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 392 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 393 month = oct, 394 year = "2020", 395 address = "Online", 396 publisher = "Association for Computational Linguistics", 397 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 398 pages = "38--45" 399 } 400 ``` 401 [end of README_zh-hans.md] [start of README_zh-hant.md] 1 <!--- 2 Copyright 2020 The HuggingFace Team. All rights reserved. 3 4 Licensed under the Apache License, Version 2.0 (the "License"); 5 you may not use this file except in compliance with the License. 6 You may obtain a copy of the License at 7 8 http://www.apache.org/licenses/LICENSE-2.0 9 10 Unless required by applicable law or agreed to in writing, software 11 distributed under the License is distributed on an "AS IS" BASIS, 12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 See the License for the specific language governing permissions and 14 limitations under the License. 15 --> 16 17 <!--- 18 A useful guide for English-Traditional Chinese translation of Hugging Face documentation 19 - Add space around English words and numbers when they appear between Chinese characters. E.g., 共 100 多種語言; 使用 transformers 函式庫。 20 - Use square quotes, e.g.,「引用」 21 - Some of terms in the file can be found at National Academy for Educational Research (https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese. 
22 23 Dictionary 24 25 API: API (不翻譯) 26 add: 加入 27 checkpoint: 檢查點 28 code: 程式碼 29 community: 社群 30 confidence: 信賴度 31 dataset: 資料集 32 documentation: 文件 33 example: 基本翻譯為「範例」,或依語意翻為「例子」 34 finetune: 微調 35 Hugging Face: Hugging Face(不翻譯) 36 implementation: 實作 37 inference: 推論 38 library: 函式庫 39 module: 模組 40 NLP/Natural Language Processing: 以 NLP 出現時不翻譯,以 Natural Language Processing 出現時翻譯為自然語言處理 41 online demos: 線上Demo 42 pipeline: pipeline(不翻譯) 43 pretrained/pretrain: 預訓練 44 Python data structures (e.g., list, set, dict): 翻譯為串列,集合,字典,並用括號標註原英文 45 repository: repository(不翻譯) 46 summary: 概覽 47 token-: token-(不翻譯) 48 Trainer: Trainer(不翻譯) 49 transformer: transformer(不翻譯) 50 tutorial: 教學 51 user: 使用者 52 --> 53 54 <p align="center"> 55 <br> 56 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> 57 <br> 58 <p> 59 <p align="center"> 60 <a href="https://circleci.com/gh/huggingface/transformers"> 61 <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> 62 </a> 63 <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> 64 <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> 65 </a> 66 <a href="https://huggingface.co/docs/transformers/index"> 67 <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> 68 </a> 69 <a href="https://github.com/huggingface/transformers/releases"> 70 <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> 71 </a> 72 <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> 73 <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> 74 </a> 75 <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> 76 </p> 77 78 <h4 align="center"> 79 <p> 80 <a href="https://github.com/huggingface/transformers/">English</a> | 81 <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | 82 <b>繁體中文</b> | 83 <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> 84 <p> 85 </h4> 86 87 <h3 align="center"> 88 <p>為 Jax、PyTorch 以及 TensorFlow 打造的先進自然語言處理函式庫</p> 89 </h3> 90 91 <h3 align="center"> 92 <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> 93 </h3> 94 95 🤗 Transformers 提供了數以千計的預訓練模型,支援 100 多種語言的文本分類、資訊擷取、問答、摘要、翻譯、文本生成。它的宗旨是讓最先進的 NLP 技術人人易用。 96 97 🤗 Transformers 提供了便於快速下載和使用的API,讓你可以將預訓練模型用在給定文本、在你的資料集上微調然後經由 [model hub](https://huggingface.co/models) 與社群共享。同時,每個定義的 Python 模組架構均完全獨立,方便修改和快速研究實驗。 98 99 🤗 Transformers 支援三個最熱門的深度學習函式庫: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) 以及 [TensorFlow](https://www.tensorflow.org/) — 並與之完美整合。你可以直接使用其中一個框架訓練你的模型,然後用另一個載入和推論。 100 101 ## 線上Demo 102 103 你可以直接在 [model hub](https://huggingface.co/models) 上測試大多數的模型。我們也提供了 [私有模型託管、模型版本管理以及推論API](https://huggingface.co/pricing)。 104 105 這裡是一些範例: 106 - [用 BERT 做遮蓋填詞](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) 107 - [用 Electra 做專有名詞辨識](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) 108 - [用 GPT-2 
做文本生成](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) 109 - [用 RoBERTa 做自然語言推論](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) 110 - [用 BART 做文本摘要](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) 111 - [用 DistilBERT 做問答](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) 112 - [用 T5 做翻譯](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) 113 114 **[Write With Transformer](https://transformer.huggingface.co)**,由 Hugging Face 團隊所打造,是一個文本生成的官方 demo。 115 116 ## 如果你在尋找由 Hugging Face 團隊所提供的客製化支援服務 117 118 <a target="_blank" href="https://huggingface.co/support"> 119 <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> 120 </a><br> 121 122 ## 快速上手 123 124 我們為快速使用模型提供了 `pipeline` API。 Pipeline 包含了預訓練模型和對應的文本預處理。下面是一個快速使用 pipeline 去判斷正負面情緒的例子: 125 126 ```python 127 >>> from transformers import pipeline 128 129 # 使用情緒分析 pipeline 130 >>> classifier = pipeline('sentiment-analysis') 131 >>> classifier('We are very happy to introduce pipeline to the transformers repository.') 132 [{'label': 'POSITIVE', 'score': 0.9996980428695679}] 133 ``` 134 135 第二行程式碼下載並快取 pipeline 使用的預訓練模型,而第三行程式碼則在給定的文本上進行了評估。這裡的答案“正面” (positive) 具有 99.97% 的信賴度。 136 137 許多的 NLP 任務都有隨選即用的預訓練 `pipeline`。例如,我們可以輕鬆地從給定文本中擷取問題答案: 138 139 ``` python 140 >>> from transformers import pipeline 141 
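# 小提示:pipeline() 會自動下載並快取此任務的預設模型;
# 也可以透過 model 參數指定其他檢查點,例如沿用前面線上Demo 所使用的檢查點:
# pipeline('question-answering', model='distilbert-base-uncased-distilled-squad')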
142 # 使用問答 pipeline
143 >>> question_answerer = pipeline('question-answering')
144 >>> question_answerer({
145 ...     'question': 'What is the name of the repository ?',
146 ...     'context': 'Pipeline has been included in the huggingface/transformers repository'
147 ... })
148 {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'}
149 
150 ```
151 
152 除了提供問題解答,預訓練模型還提供了對應的信賴度分數以及解答在 tokenized 後的文本中開始和結束的位置。你可以從[這個教學](https://huggingface.co/docs/transformers/task_summary)了解更多 `pipeline` API 支援的任務。
153 
154 要在你的任務中下載和使用任何預訓練模型很簡單,只需三行程式碼。這裡是 PyTorch 版的範例:
155 ```python
156 >>> from transformers import AutoTokenizer, AutoModel
157 
158 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
159 >>> model = AutoModel.from_pretrained("bert-base-uncased")
160 
161 >>> inputs = tokenizer("Hello world!", return_tensors="pt")
162 >>> outputs = model(**inputs)
163 ```
164 這裡是對應的 TensorFlow 程式碼:
165 ```python
166 >>> from transformers import AutoTokenizer, TFAutoModel
167 
168 >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
169 >>> model = TFAutoModel.from_pretrained("bert-base-uncased")
170 
171 >>> inputs = tokenizer("Hello world!", return_tensors="tf")
172 >>> outputs = model(**inputs)
173 ```
174 
175 Tokenizer 為所有的預訓練模型提供了預處理,並可以直接轉換單一字串(比如上面的例子)或串列 (list)。它會輸出一個字典 (dict) 讓你可以在下游程式碼裡使用或直接藉由 `**` 運算式傳給模型。
176 
177 模型本身是一個常規的 [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) 或 [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)(取決於你的後端),可依常規方式使用。 [這個教學](https://huggingface.co/transformers/training.html)解釋了如何將這樣的模型整合到一般的 PyTorch 或 TensorFlow 訓練迴圈中,或是如何使用我們的 `Trainer` API 在一個新的資料集上快速進行微調。
178 
179 ## 為什麼要用 transformers?
180 
181 1. 便於使用的先進模型:
182     - NLU 和 NLG 上性能卓越
183     - 對教學和實作友好且低門檻
184     - 高度抽象,使用者只須學習 3 個類別
185     - 對所有模型使用的制式化 API
186 
187 1. 更低的運算成本,更少的碳排放:
188     - 研究人員可以分享預訓練的模型而非從頭開始訓練
189     - 工程師可以減少計算時間以及生產成本
190     - 數十種模型架構、兩千多個預訓練模型、100 多種語言支援
191 
192 1. 對於模型生命週期的每一個部分都面面俱到:
193     - 訓練先進的模型,只需 3 行程式碼
194     - 模型可以在不同深度學習框架之間任意轉換
195     - 為訓練、評估和生產選擇最適合的框架,並完美銜接
196 
197 1. 為你的需求輕鬆客製化專屬模型和範例:
198     - 我們為每種模型架構提供了多個範例來重現原論文結果
199     - 一致的模型內部架構
200     - 模型檔案可單獨使用,便於修改和快速實驗
201 
202 ## 什麼情況下我不該用 transformers? 
203 204 - 本函式庫並不是模組化的神經網絡工具箱。模型文件中的程式碼並未做額外的抽象封裝,以便研究人員快速地翻閱及修改程式碼,而不會深陷複雜的類別包裝之中。 205 - `Trainer` API 並非相容任何模型,它只為本函式庫中的模型最佳化。對於一般的機器學習用途,請使用其他函式庫。 206 - 儘管我們已盡力而為,[examples 目錄](https://github.com/huggingface/transformers/tree/main/examples)中的腳本也僅為範例而已。對於特定問題,它們並不一定隨選即用,可能需要修改幾行程式碼以符合需求。 207 208 ## 安裝 209 210 ### 使用 pip 211 212 這個 Repository 已在 Python 3.6+、Flax 0.3.2+、PyTorch 1.3.1+ 和 TensorFlow 2.3+ 下經過測試。 213 214 你可以在[虛擬環境](https://docs.python.org/3/library/venv.html)中安裝 🤗 Transformers。如果你還不熟悉 Python 的虛擬環境,請閱此[使用者指引](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/)。 215 216 首先,用你打算使用的版本的 Python 創建一個虛擬環境並進入。 217 218 然後,你需要安裝 Flax、PyTorch 或 TensorFlow 其中之一。對於該如何在你使用的平台上安裝這些框架,請參閱 [TensorFlow 安裝頁面](https://www.tensorflow.org/install/), [PyTorch 安裝頁面](https://pytorch.org/get-started/locally/#start-locally) 或 [Flax 安裝頁面](https://github.com/google/flax#quick-install)。 219 220 當其中一個後端安裝成功後,🤗 Transformers 可依此安裝: 221 222 ```bash 223 pip install transformers 224 ``` 225 226 如果你想要試試範例或者想在正式發布前使用最新開發中的程式碼,你必須[從原始碼安裝](https://huggingface.co/docs/transformers/installation#installing-from-source)。 227 228 ### 使用 conda 229 230 自 Transformers 4.0.0 版始,我們有了一個 conda channel: `huggingface`。 231 232 🤗 Transformers 可以藉由 conda 依此安裝: 233 234 ```shell script 235 conda install -c huggingface transformers 236 ``` 237 238 要藉由 conda 安裝 Flax、PyTorch 或 TensorFlow 其中之一,請參閱它們各自安裝頁面的說明。 239 240 ## 模型架構 241 242 **🤗 Transformers 支援的[所有的模型檢查點](https://huggingface.co/models)**,由[使用者](https://huggingface.co/users)和[組織](https://huggingface.co/organizations)上傳,均與 huggingface.co [model hub](https://huggingface.co) 完美結合。 243 244 目前的檢查點數量: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 245 246 🤗 Transformers 目前支援以下的架構(模型概覽請參閱[這裡](https://huggingface.co/docs/transformers/model_summary)): 247 248 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 249 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 250 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 251 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 252 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 253 1. 
**[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
254 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
255 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
256 1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
257 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
258 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
259 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
260 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
261 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
262 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
263 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
264 1. 
**[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. 265 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. 266 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. 267 1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. 268 1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. 269 1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. 270 1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. 271 1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang. 272 1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli. 273 1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 274 1. 
**[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 275 1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. 276 1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. 277 1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. 278 1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. 279 1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT. 280 1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. 281 1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 282 1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. 283 1. 
**[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning. 284 1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. 285 1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab. 286 1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 287 1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. 288 1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. 289 1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. 290 1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. 291 1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. 292 1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach 293 1. 
**[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**. 294 1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released with the paper [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki. 295 1. **[GroupViT](https://huggingface.co/docs/transformers/main/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. 296 1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. 297 1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer. 298 1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. 299 1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. 300 1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. 301 1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei. 302 1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. 303 1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 304 1. 
**[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. 305 1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. 306 1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang. 307 1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto. 308 1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal. 309 1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. 310 1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin. 311 1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team. 312 1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov 313 1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer. 314 1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan. 315 1. 
**[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 316 1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. 317 1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 318 1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 319 1. **[MobileViT](https://huggingface.co/docs/transformers/main/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari. 320 1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 321 1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. 322 1. **[MVP](https://huggingface.co/docs/transformers/main/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. 323 1. **[Nezha](https://huggingface.co/docs/transformers/main/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu. 324 1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. 325 1. 
**[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. 326 1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. 327 1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira. 328 1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen. 329 1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 330 1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. 331 1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 332 1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. 333 1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. 334 1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. 335 1. 
**[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. 336 1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. 337 1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder. 338 1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 339 1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper a [Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. 340 1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper a [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. 341 1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. 342 1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 343 1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. 344 1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. 345 1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook) released with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau. 346 1. 
**[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University) released with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. 347 1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer. 348 1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo. 349 1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 350 1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released with the paper [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 351 1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. 352 1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 353 1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 354 1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 355 1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 356 1. 
**[UL2](https://huggingface.co/docs/transformers/main/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 357 1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 358 1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 359 1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 360 1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 361 1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 362 1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 363 1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. 364 1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 365 1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 366 1. 
**[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 367 1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 368 1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 369 1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 370 1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 371 1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 372 1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 373 1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 374 1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 375 1. 
**[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 376 1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 377 1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. 378 1. 想要貢獻新的模型?我們這裡有一份**詳細指引和模板**來引導你加入新的模型。你可以在 [`templates`](./templates) 目錄中找到它們。記得查看[貢獻指引](./CONTRIBUTING.md)並在開始寫 PR 前聯繫維護人員或開一個新的 issue 來獲得 feedbacks。 379 380 要檢查某個模型是否已有 Flax、PyTorch 或 TensorFlow 的實作,或其是否在🤗 Tokenizers 函式庫中有對應的 tokenizer,敬請參閱[此表](https://huggingface.co/docs/transformers/index#supported-frameworks)。 381 382 這些實作均已於多個資料集測試(請參閱範例腳本)並應與原版實作表現相當。你可以在範例文件的[此節](https://huggingface.co/docs/transformers/examples)中了解實作的細節。 383 384 385 ## 了解更多 386 387 | 章節 | 描述 | 388 |-|-| 389 | [文件](https://huggingface.co/transformers/) | 完整的 API 文件和教學 | 390 | [任務概覽](https://huggingface.co/docs/transformers/task_summary) | 🤗 Transformers 支援的任務 | 391 | [預處理教學](https://huggingface.co/docs/transformers/preprocessing) | 使用 `Tokenizer` 來為模型準備資料 | 392 | [訓練和微調](https://huggingface.co/docs/transformers/training) | 使用 PyTorch/TensorFlow 的內建的訓練方式或於 `Trainer` API 中使用 🤗 Transformers 提供的模型 | 393 | [快速上手:微調和範例腳本](https://github.com/huggingface/transformers/tree/main/examples) | 為各種任務提供的範例腳本 | 394 | [模型分享和上傳](https://huggingface.co/docs/transformers/model_sharing) | 上傳並與社群分享你微調的模型 | 395 | [遷移](https://huggingface.co/docs/transformers/migration) | 從 `pytorch-transformers` 或 `pytorch-pretrained-bert` 遷移到 🤗 Transformers | 396 397 ## 引用 398 399 我們已將此函式庫的[論文](https://www.aclweb.org/anthology/2020.emnlp-demos.6/)正式發表。如果你使用了 🤗 Transformers 函式庫,可以引用: 400 ```bibtex 401 @inproceedings{wolf-etal-2020-transformers, 402 title = "Transformers: State-of-the-Art Natural Language Processing", 403 author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", 404 booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", 405 month = oct, 406 year = "2020", 407 address = "Online", 408 publisher = "Association for Computational Linguistics", 409 url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", 410 pages = "38--45" 411 } 412 ``` 413 [end of README_zh-hant.md] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
huggingface/transformers
a462fc9232eb9e04f9bc9c710d8c9a3ac21056de
Calling `generate` on a `T5ForConditionalGeneration` returns `n` tokens but `n-1` scores

### System Info

```shell
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```

### Who can help?

@patrickvonplaten, @Narsil

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

if __name__ == '__main__':
    torch.manual_seed(0)
    tokenizer = AutoTokenizer.from_pretrained('t5-small')
    model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')

    input = tokenizer.encode("I enjoy walking with my cute dog", return_tensors='pt')
    result = model.generate(
        input,
        max_new_tokens=15,
        do_sample=True,
        return_dict_in_generate=True,
        output_scores=True,
    )

    print(len(result["scores"]))
    for sequence in result["sequences"]:
        print(len(sequence))
        print(tokenizer.decode(sequence))
```

Output:
```
15
16
<pad> Ich, liebe es, mes lustig beim laufen
```

### Expected behavior

I would have expected to have up to 15 tokens (as `max_new_tokens=15`) and `len(result["scores"]) == len(result["sequences"][0])`. However, the size of the returned sequence of tokens is always `len(result["scores"]) + 1`. In addition, if `max_new_tokens` is reached we have `len(result["sequences"][0]) == max_new_tokens + 1`.

When looking at the decoded sequence, there is always a pad token at the beginning.

I don't know if this is necessarily a bug but this behaviour is somewhat confusing, especially when trying to compute the probability of the sequence given scores.
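As the discussion below explains, the extra leading token is the decoder start token that encoder-decoder models are seeded with before the first generation step. The following is a minimal editorial sketch (not part of the original report; it assumes the same `t5-small` checkpoint and the `generate` behaviour described above) that makes the relationship explicit:

```python
# Illustrative check, assuming the same `t5-small` setup as the report above:
# the first id in `sequences` is the decoder start token (== pad token for T5),
# so `scores` holds one entry per *generated* token, i.e. len(sequences[0]) - 1.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

torch.manual_seed(0)
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("I enjoy walking with my cute dog", return_tensors="pt")
result = model.generate(
    **inputs,
    max_new_tokens=15,
    do_sample=True,
    return_dict_in_generate=True,
    output_scores=True,
)

sequences = result.sequences  # shape: (batch_size, 1 + number_of_generated_tokens)
assert sequences[0, 0].item() == model.config.decoder_start_token_id  # the leading <pad>
assert len(result.scores) == sequences.shape[1] - 1  # one score tensor per generated token
```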
Hi, @ClementRomac

If you look at the [config.json](https://huggingface.co/t5-small/blob/main/config.json) file of the `t5-small` model, you will see it uses `pad_token_id` as `decoder_start_token_id` (both are `0`).

The `scores` having length `len(sequence) - 1` is expected. Think of it this way:

```python
generated sequence = [decoder_start_token_id, token_1, token_2]
```

The scores are:

- score for generating `token_1` while we have `[decoder_start_token_id]`
- score for generating `token_2` while we have `[decoder_start_token_id, token_1]`

This is also documented in [generation_utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py), for example (`SampleEncoderDecoderOutput`)

https://github.com/huggingface/transformers/blob/afb71b672679e57449085e4955a321db8e5705b9/src/transformers/generation_utils.py#L172

or (`GreedySearchEncoderDecoderOutput`)

https://github.com/huggingface/transformers/blob/afb71b672679e57449085e4955a321db8e5705b9/src/transformers/generation_utils.py#L101

etc.

Hey @ydshieh,

Thanks for your answer, it makes sense!

Could we consider documenting it a little bit more somewhere? I don't have any clear idea on where to put it, but to be honest this behaviour can appear a bit confusing when looking at the documentation.

For instance, in [generation_utils.py](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py), it is mentioned (both for `SampleEncoderDecoderOutput` and `GreedySearchEncoderDecoderOutput`):
1. that `sequence_length` should be up to `max_length` (however we get `max_length + 1` in the above example)
2. that `scores` will have size `max_length - 1` (however we get `max_length` scores in the above example)

https://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/generation_utils.py#L169-L175

@ClementRomac, I think it is because you use `max_new_tokens=15` instead of the argument `max_length`. See

https://github.com/huggingface/transformers/blob/1dfa03f12b3748dc7e9c2b5ada40c3401ada23a5/src/transformers/generation_utils.py#L925-L929

I think it is quite well documented. It is possible to make it even more explicit to include `max_new_tokens` regarding the output format.

@patrickvonplaten Do you think we should add this in `GreedySearchEncoderDecoderOutput` etc.?

Always happy to make the generate docs more explicit! Also gently pinging @gante here for feedback :-)

Note: Some docstrings associated with `scores` have
```
`(max_length-1,)`-shaped tuple of `torch.FloatTensor`
```
while others have
```
`(max_length-input_ids.shape[-1],)`-shaped tuple of `torch.FloatTensor`
```
depending on whether the model is an encoder-decoder or a decoder-only (respectively)

______________________

I see two minor problems with the current docstrings:
1. Generation may stop before we generate `max_length` tokens (or `max_new_tokens` new tokens);
2. We are pushing away from `max_length` towards `max_new_tokens`.

As such, it would be nice to improve the docs to address these two issues! Since the previous sentence in the docstring contains `(...) at each generation step`, perhaps something like this:

```
Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element per generation step),
```

The complete docstring would be:

```
scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`):
    Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
    at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element per
    generation step), with each tensor of shape `(batch_size, config.vocab_size)`).
```

WDYT?

@gante Looks good to me, as long as we keep `batch_size*num_return_sequences` instead of `batch_size` wherever it applies.

Very much agree with @gante here! Assigned to me to update the docstring for all three frameworks
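Following the explanation above, here is a short editorial sketch of how one might line up `scores` with the generated tokens to obtain per-token log-probabilities. It is an illustration only: it assumes a `result` returned by `generate(..., return_dict_in_generate=True, output_scores=True)` from an encoder-decoder model, and note that `scores` are the *processed* logits, so the values reflect the distribution actually sampled from at each step.

```python
import torch

# Stack the per-step score tensors: (batch_size, generated_steps, vocab_size)
step_logits = torch.stack(result.scores, dim=1)
log_probs = torch.log_softmax(step_logits, dim=-1)

# Drop the decoder start token so that position t in `generated` matches scores[t]
generated = result.sequences[:, 1:]
token_log_probs = log_probs.gather(-1, generated.unsqueeze(-1)).squeeze(-1)

# Sum over steps for a rough sequence log-probability; with batched inputs,
# positions after the eos token are padding and would need to be masked out.
sequence_log_prob = token_log_probs.sum(dim=-1)
```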
2022-07-04T19:34:34Z
<patch> diff --git a/src/transformers/generation_flax_utils.py b/src/transformers/generation_flax_utils.py --- a/src/transformers/generation_flax_utils.py +++ b/src/transformers/generation_flax_utils.py @@ -15,6 +15,7 @@ # limitations under the License. +import warnings from functools import partial from typing import Dict, Optional @@ -163,6 +164,7 @@ def generate( self, input_ids: jnp.ndarray, max_length: Optional[int] = None, + max_new_tokens: Optional[int] = None, pad_token_id: Optional[int] = None, bos_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, @@ -209,8 +211,12 @@ def generate( input_ids (`jnp.ndarray` of shape `(batch_size, sequence_length)`): The sequence used as a prompt for the generation. - max_length (`int`, *optional*, defaults to 20): - The maximum length of the sequence to be generated. + max_length (`int`, *optional*, defaults to `model.config.max_length`): + The maximum length the generated tokens can have. Corresponds to the length of the input prompt + + `max_new_tokens`. In general, prefer the use of `max_new_tokens`, which ignores the number of tokens in + the prompt. + max_new_tokens (`int`, *optional*): + The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt. do_sample (`bool`, *optional*, defaults to `False`): Whether or not to use sampling ; use greedy decoding otherwise. temperature (`float`, *optional*, defaults to 1.0): @@ -258,8 +264,6 @@ def generate( >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ```""" # set init values - max_length = max_length if max_length is not None else self.config.max_length - min_length = min_length if min_length is not None else self.config.min_length bos_token_id = bos_token_id if bos_token_id is not None else self.config.bos_token_id pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id @@ -270,11 +274,6 @@ def generate( if decoder_start_token_id is None and self.config.is_encoder_decoder: raise ValueError("`decoder_start_token_id` has to be defined for encoder-decoder generation.") - if min_length is not None and min_length > max_length: - raise ValueError( - f"Unfeasable length constraints: the minimum length ({min_length}) is larger than the maximum " - f"length ({max_length})" - ) if self.config.is_encoder_decoder: # add encoder_outputs to model_kwargs @@ -283,6 +282,42 @@ def generate( # prepare decoder_input_ids for generation input_ids = jnp.ones((input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id + # Prepare `max_length` depending on other stopping criteria. + input_ids_seq_length = input_ids.shape[-1] + if max_length is None and max_new_tokens is None: + warnings.warn( + "Neither `max_length` nor `max_new_tokens` have been set, `max_length` will default to " + f"{self.config.max_length} (`self.config.max_length`). Controlling `max_length` via the config is " + "deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend " + "using `max_new_tokens` to control the maximum length of the generation.", + UserWarning, + ) + elif max_length is None and max_new_tokens is not None: + max_length = max_new_tokens + input_ids_seq_length + elif max_length is not None and max_new_tokens is not None: + raise ValueError( + "Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a" + " limit to the generated output length. Remove one of those arguments. 
Please refer to the" + " documentation for more information. " + "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)" + ) + # default to config if still None + max_length = max_length if max_length is not None else self.config.max_length + min_length = min_length if min_length is not None else self.config.min_length + + if min_length is not None and min_length > max_length: + raise ValueError( + f"Unfeasable length constraints: the minimum length ({min_length}) is larger than the maximum " + f"length ({max_length})" + ) + if input_ids_seq_length >= max_length: + input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids" + logger.warning( + f"Input length of {input_ids_string} is {input_ids_seq_length}, but `max_length` is set to" + f" {max_length}. This can lead to unexpected behavior. You should consider increasing" + "`max_new_tokens`." + ) + do_sample = do_sample if do_sample is not None else self.config.do_sample num_beams = num_beams if num_beams is not None else self.config.num_beams diff --git a/src/transformers/generation_tf_utils.py b/src/transformers/generation_tf_utils.py --- a/src/transformers/generation_tf_utils.py +++ b/src/transformers/generation_tf_utils.py @@ -15,6 +15,7 @@ # limitations under the License. import inspect +import warnings from dataclasses import dataclass from typing import Any, Dict, List, Optional, Tuple, Union @@ -53,8 +54,8 @@ class TFGreedySearchDecoderOnlyOutput(ModelOutput): if all batches finished early due to the `eos_token_id`. scores (`tuple(tf.Tensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) - at each generation step. `(max_length-input_ids.shape[-1],)`-shaped tuple of `tf.Tensor` with each tensor - of shape `(batch_size, config.vocab_size)`). + at each generation step. Tuple of `tf.Tensor` with up to `max_new_tokens` elements (one element for each + generated token), with each tensor of shape `(batch_size, config.vocab_size)`. attentions (`tuple(tuple(tf.Tensor))`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `tf.Tensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`. @@ -83,8 +84,8 @@ class TFGreedySearchEncoderDecoderOutput(ModelOutput): if all batches finished early due to the `eos_token_id`. scores (`tuple(tf.Tensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) - at each generation step. `(max_length-1,)`-shaped tuple of `tf.Tensor` with each tensor of shape - `(batch_size, config.vocab_size)`). + at each generation step. Tuple of `tf.Tensor` with up to `max_new_tokens` elements (one element for each + generated token), with each tensor of shape `(batch_size, config.vocab_size)`. encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple of `tf.Tensor` (one for each layer of the decoder) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. @@ -123,8 +124,8 @@ class TFSampleDecoderOnlyOutput(ModelOutput): if all batches finished early due to the `eos_token_id`. 
scores (`tuple(tf.Tensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) - at each generation step. `(max_length-input_ids.shape[-1],)`-shaped tuple of `tf.Tensor` with each tensor - of shape `(batch_size*num_return_sequences, config.vocab_size)`). + at each generation step. Tuple of `tf.Tensor` with up to `max_new_tokens` elements (one element for each + generated token), with each tensor of shape `(batch_size*num_return_sequences, config.vocab_size)`. attentions (`tuple(tuple(tf.Tensor))`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `tf.Tensor` of shape `(num_return_sequences*batch_size, num_heads, generated_length, sequence_length)`. @@ -153,8 +154,8 @@ class TFSampleEncoderDecoderOutput(ModelOutput): if all batches finished early due to the `eos_token_id`. scores (`tuple(tf.Tensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) - at each generation step. `(max_length-1,)`-shaped tuple of `tf.Tensor` with each tensor of shape - `(batch_size*num_return_sequences, config.vocab_size)`). + at each generation step. Tuple of `tf.Tensor` with up to `max_new_tokens` elements (one element for each + generated token), with each tensor of shape `(batch_size*num_return_sequences, config.vocab_size)`. encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple of `tf.Tensor` (one for each layer of the decoder) of shape `(batch_size*num_return_sequences, num_heads, sequence_length, sequence_length)`. @@ -194,9 +195,9 @@ class TFBeamSearchDecoderOnlyOutput(ModelOutput): Final beam scores of the generated `sequences`. scores (`tuple(tf.Tensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log - softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam - . `(max_length-input_ids.shape[-1],)`-shaped tuple of `tf.Tensor` with each tensor of shape - `(batch_size*num_beams*num_return_sequences, config.vocab_size)`). + softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this + beam. Tuple of `tf.Tensor` with up to `max_new_tokens` elements (one element for each generated token), + with each tensor of shape `(batch_size*num_beams*num_return_sequences, config.vocab_size)`. attentions (`tuple(tuple(tf.Tensor))`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `tf.Tensor` of shape `(batch_size*num_beams, num_heads, generated_length, sequence_length)`. @@ -227,9 +228,9 @@ class TFBeamSearchEncoderDecoderOutput(ModelOutput): Final beam scores of the generated `sequences`. scores (`tuple(tf.Tensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed beam scores for each vocabulary token at each generation step. 
Beam scores consisting of log - softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam - . `(max_length-1,)`-shaped tuple of `tf.Tensor` with each tensor of shape `(batch_size*num_beams, - config.vocab_size)`). + softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this + beam. `Tuple of `tf.Tensor` with up to `max_new_tokens` elements (one element for each generated token), + with each tensor of shape `(batch_size*num_beams, config.vocab_size)`. attentions (`tuple(tuple(tf.Tensor))`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple of `tf.Tensor` (one for each layer of the decoder) of shape `(batch_size, num_heads, sequence_length, @@ -272,9 +273,9 @@ class TFBeamSampleDecoderOnlyOutput(ModelOutput): Final beam scores of the generated `sequences`. scores (`tuple(tf.Tensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log - softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam - . `(max_length-input_ids.shape[-1],)`-shaped tuple of `tf.Tensor` with each tensor of shape - `(batch_size*num_beams*num_return_sequences, config.vocab_size)`). + softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this + beam. Tuple of `tf.Tensor` with up to `max_new_tokens` elements (one element for each generated token), + with each tensor of shape `(batch_size*num_beams*num_return_sequences, config.vocab_size)`. attentions (`tuple(tuple(tf.Tensor))`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `tf.Tensor` of shape `(batch_size*num_beams, num_heads, generated_length, sequence_length)`. @@ -305,9 +306,9 @@ class TFBeamSampleEncoderDecoderOutput(ModelOutput): Final beam scores of the generated `sequences`. scores (`tuple(tf.Tensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed beam scores for each vocabulary token at each generation step. Beam scores consisting of log - softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this beam - . `(max_length-1,)`-shaped tuple of `tf.Tensor` with each tensor of shape `(batch_size*num_beams, - config.vocab_size)`). + softmax scores for each vocabulary token and sum of log softmax of previously generated tokens in this + beam. Tuple of `tf.Tensor` with up to `max_new_tokens` elements (one element for each generated token), + with each tensor of shape `(batch_size*num_beams, config.vocab_size)`. encoder_attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple of `tf.Tensor` (one for each layer of the decoder) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. 
@@ -375,6 +376,7 @@ def generate( self, input_ids=None, max_length=None, + max_new_tokens=None, min_length=None, do_sample=None, early_stopping=None, @@ -423,8 +425,12 @@ def generate( method initializes it with `bos_token_id` and a batch size of 1. For decoder-only models `inputs` should of in the format of `input_ids`. For encoder-decoder models *inputs* can represent any of `input_ids`, `input_values`, `input_features`, or `pixel_values`. - max_length (`int`, *optional*, defaults to 20): - The maximum length of the sequence to be generated. + max_length (`int`, *optional*, defaults to `model.config.max_length`): + The maximum length the generated tokens can have. Corresponds to the length of the input prompt + + `max_new_tokens`. In general, prefer the use of `max_new_tokens`, which ignores the number of tokens in + the prompt. + max_new_tokens (`int`, *optional*): + The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt. min_length (`int`, *optional*, defaults to 10): The minimum length of the sequence to be generated. do_sample (`bool`, *optional*, defaults to `False`): @@ -577,6 +583,7 @@ def generate( return self._generate( input_ids=input_ids, max_length=max_length, + max_new_tokens=max_new_tokens, min_length=min_length, do_sample=do_sample, early_stopping=early_stopping, @@ -1286,6 +1293,7 @@ def _generate( self, input_ids=None, max_length=None, + max_new_tokens=None, min_length=None, do_sample=None, early_stopping=None, @@ -1332,8 +1340,12 @@ def _generate( input_ids (`tf.Tensor` of `dtype=tf.int32` and shape `(batch_size, sequence_length)`, *optional*): The sequence used as a prompt for the generation. If `None` the method initializes it with `bos_token_id` and a batch size of 1. - max_length (`int`, *optional*, defaults to 20): - The maximum length of the sequence to be generated. + max_length (`int`, *optional*, defaults to `model.config.max_length`): + The maximum length the generated tokens can have. Corresponds to the length of the input prompt + + `max_new_tokens`. In general, prefer the use of `max_new_tokens`, which ignores the number of tokens in + the prompt. + max_new_tokens (`int`, *optional*): + The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt. min_length (`int`, *optional*, defaults to 10): The minimum length of the sequence to be generated. do_sample (`bool`, *optional*, defaults to `False`): @@ -1474,8 +1486,6 @@ def _generate( outputs = model.generate(input_ids=input_ids, max_length=100, do_sample=True, bad_words_ids=bad_words_ids) ```""" # 1. 
Set generation parameters if not already defined - max_length = max_length if max_length is not None else self.config.max_length - min_length = min_length if min_length is not None else self.config.min_length length_penalty = length_penalty if length_penalty is not None else self.config.length_penalty early_stopping = early_stopping if early_stopping is not None else self.config.early_stopping @@ -1514,12 +1524,6 @@ def _generate( logger.warning(f"Setting `pad_token_id` to {eos_token_id} (first `eos_token_id`) to generate sequence") pad_token_id = eos_token_id - if min_length is not None and min_length > max_length: - raise ValueError( - f"Unfeasable length constraints: the minimum length ({min_length}) is larger than the maximum " - f"length ({max_length})" - ) - use_xla = not tf.executing_eagerly() if use_xla and not self.supports_xla_generation: raise ValueError( @@ -1561,21 +1565,49 @@ def _generate( model_kwargs=model_kwargs, ) - if input_ids.shape[-1] >= max_length: + # 5. Prepare `max_length` depending on other stopping criteria. + input_ids_seq_length = input_ids.shape[-1] + if max_length is None and max_new_tokens is None: + warnings.warn( + "Neither `max_length` nor `max_new_tokens` have been set, `max_length` will default to " + f"{self.config.max_length} (`self.config.max_length`). Controlling `max_length` via the config is " + "deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend " + "using `max_new_tokens` to control the maximum length of the generation.", + UserWarning, + ) + elif max_length is None and max_new_tokens is not None: + max_length = max_new_tokens + input_ids_seq_length + elif max_length is not None and max_new_tokens is not None: raise ValueError( - f"The context has {input_ids.shape[-1]} number of tokens, " - f"but `max_length` is only {max_length}. " - "Please make sure that `max_length` is bigger than the number of tokens, " - "by setting either `generate(max_length=...,...)` or `config.max_length = ...`" + "Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a" + " limit to the generated output length. Remove one of those arguments. Please refer to the" + " documentation for more information. " + "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)" + ) + # default to config if still None + max_length = max_length if max_length is not None else self.config.max_length + min_length = min_length if min_length is not None else self.config.min_length + + if min_length is not None and min_length > max_length: + raise ValueError( + f"Unfeasable length constraints: the minimum length ({min_length}) is larger than the maximum " + f"length ({max_length})" + ) + if input_ids_seq_length >= max_length: + input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids" + logger.warning( + f"Input length of {input_ids_string} is {input_ids_seq_length}, but `max_length` is set to" + f" {max_length}. This can lead to unexpected behavior. You should consider increasing" + "`max_new_tokens`." ) - # 5. determine generation mode + # 6. determine generation mode # TODO(Matt, Joao, Patrick) - add more use cases here is_greedy_gen_mode = (num_beams == 1) and do_sample is False is_sample_gen_mode = (num_beams == 1) and do_sample is True is_beam_gen_mode = (num_beams > 1) and do_sample is False - # 6. prepare distribution pre_processing samplers + # 7. 
prepare distribution pre_processing samplers logits_processor = self._get_logits_processor( repetition_penalty=repetition_penalty, no_repeat_ngram_size=no_repeat_ngram_size, @@ -1587,13 +1619,13 @@ def _generate( forced_eos_token_id=forced_eos_token_id, ) - # 7. go into different generation modes + # 8. go into different generation modes if is_greedy_gen_mode: if num_return_sequences > 1: raise ValueError( f"num_return_sequences has to be 1, but is {num_return_sequences} when doing greedy search." ) - # 8. run greedy search + # 9. run greedy search return self.greedy_search( input_ids, max_length=max_length, @@ -1605,10 +1637,10 @@ def _generate( **model_kwargs, ) elif is_sample_gen_mode: - # 8. prepare logits warper + # 9. prepare logits warper logits_warper = self._get_logits_warper(top_k=top_k, top_p=top_p, temperature=temperature) - # 9. expand input_ids with `num_return_sequences` additional sequences per batch + # 10. expand input_ids with `num_return_sequences` additional sequences per batch input_ids, model_kwargs = self._expand_inputs_for_generation( input_ids, expand_size=num_return_sequences, @@ -1616,7 +1648,7 @@ def _generate( **model_kwargs, ) - # 10. run sample + # 11. run sample return self.sample( input_ids, logits_processor=logits_processor, @@ -1637,7 +1669,7 @@ def _generate( f"num_beams >= num_return_sequences, got {num_beams} and {num_return_sequences} (respectivelly)" ) - # 8. broadcast inputs to the desired number of beams + # 9. broadcast inputs to the desired number of beams input_ids = self._expand_to_num_beams(input_ids, num_beams=num_beams) if "encoder_outputs" in model_kwargs: @@ -1650,7 +1682,7 @@ def _generate( model_kwargs["attention_mask"], num_beams=num_beams ) - # 9. run beam search + # 10. run beam search return self.beam_search( input_ids, max_length=max_length, diff --git a/src/transformers/generation_utils.py b/src/transformers/generation_utils.py --- a/src/transformers/generation_utils.py +++ b/src/transformers/generation_utils.py @@ -70,8 +70,8 @@ class GreedySearchDecoderOnlyOutput(ModelOutput): if all batches finished early due to the `eos_token_id`. scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) - at each generation step. `(max_length-input_ids.shape[-1],)`-shaped tuple of `torch.FloatTensor` with each - tensor of shape `(batch_size, config.vocab_size)`). + at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for + each generated token), with each tensor of shape `(batch_size, config.vocab_size)`. attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`. @@ -100,8 +100,8 @@ class GreedySearchEncoderDecoderOutput(ModelOutput): if all batches finished early due to the `eos_token_id`. scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) - at each generation step. 
`(max_length-1,)`-shaped tuple of `torch.FloatTensor` with each tensor of shape - `(batch_size, config.vocab_size)`). + at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for + each generated token), with each tensor of shape `(batch_size, config.vocab_size)`. encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer of the decoder) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. @@ -140,8 +140,8 @@ class SampleDecoderOnlyOutput(ModelOutput): if all batches finished early due to the `eos_token_id`. scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) - at each generation step. `(max_length-input_ids.shape[-1],)`-shaped tuple of `torch.FloatTensor` with each - tensor of shape `(batch_size*num_return_sequences, config.vocab_size)`). + at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for + each generated token), with each tensor of shape `(batch_size*num_return_sequences, config.vocab_size)`. attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(num_return_sequences*batch_size, num_heads, generated_length, @@ -171,8 +171,8 @@ class SampleEncoderDecoderOutput(ModelOutput): if all batches finished early due to the `eos_token_id`. scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) - at each generation step. `(max_length-1,)`-shaped tuple of `torch.FloatTensor` with each tensor of shape - `(batch_size*num_return_sequences, config.vocab_size)`). + at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for + each generated token), with each tensor of shape `(batch_size*num_return_sequences, config.vocab_size)`. encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer of the decoder) of shape `(batch_size*num_return_sequences, num_heads, sequence_length, sequence_length)`. @@ -214,8 +214,8 @@ class BeamSearchDecoderOnlyOutput(ModelOutput): scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. - `(max_length-input_ids.shape[-1],)`-shaped tuple of `torch.FloatTensor` with each tensor of shape - `(batch_size*num_beams*num_return_sequences, config.vocab_size)`). + Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), + with each tensor of shape `(batch_size*num_beams*num_return_sequences, config.vocab_size)`. 
beam_indices (`tuple(tuple(torch.LongTensor))`, *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Beam indices of generated token id at each generation step. `torch.LongTensor` of shape `(batch_size*num_return_sequences, input_ids.shape[-1])`. @@ -251,8 +251,8 @@ class BeamSearchEncoderDecoderOutput(ModelOutput): scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. - `(max_length-1,)`-shaped tuple of `torch.FloatTensor` with each tensor of shape `(batch_size*num_beams, - config.vocab_size)`). + Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), + with each tensor of shape `(batch_size*num_beams, config.vocab_size)`. beam_indices (`tuple(tuple(torch.LongTensor))`, *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Beam indices of generated token id at each generation step. `torch.LongTensor` of shape `(batch_size*num_return_sequences, max_length-1)`. @@ -300,8 +300,8 @@ class BeamSampleDecoderOnlyOutput(ModelOutput): scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. - `(max_length-input_ids.shape[-1],)`-shaped tuple of `torch.FloatTensor` with each tensor of shape - `(batch_size*num_beams*num_return_sequences, config.vocab_size)`). + Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), + with each tensor of shape `(batch_size*num_beams*num_return_sequences, config.vocab_size)`. beam_indices (`tuple(tuple(torch.LongTensor))`, *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Beam indices of generated token id at each generation step. `torch.LongTensor` of shape `(batch_size*num_return_sequences, input_ids.shape[-1])`. @@ -337,8 +337,8 @@ class BeamSampleEncoderDecoderOutput(ModelOutput): scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. - `(max_length-1,)`-shaped tuple of `torch.FloatTensor` with each tensor of shape `(batch_size*num_beams, - config.vocab_size)`). + Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), + with each tensor of shape `(batch_size*num_beams, config.vocab_size)`). beam_indices (`torch.LongTensor`, *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`): Beam indices of generated token id at each generation step. `torch.LongTensor` of shape `(batch_size*num_return_sequences, max_length-1)`. @@ -923,10 +923,11 @@ def generate( should of in the format of `input_ids`. For encoder-decoder models *inputs* can represent any of `input_ids`, `input_values`, `input_features`, or `pixel_values`. 
max_length (`int`, *optional*, defaults to `model.config.max_length`): - The maximum length of the sequence to be generated. - max_new_tokens (`int`, *optional*, defaults to None): - The maximum numbers of tokens to generate, ignore the current number of tokens. Use either - `max_new_tokens` or `max_length` but not both, they serve the same purpose. + The maximum length the generated tokens can have. Corresponds to the length of the input prompt + + `max_new_tokens`. In general, prefer the use of `max_new_tokens`, which ignores the number of tokens in + the prompt. + max_new_tokens (`int`, *optional*): + The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt. min_length (`int`, *optional*, defaults to 10): The minimum length of the sequence to be generated. do_sample (`bool`, *optional*, defaults to `False`): @@ -974,7 +975,7 @@ def generate( where one can allow different forms of each word. num_return_sequences(`int`, *optional*, defaults to 1): The number of independently computed returned sequences for each element in the batch. - max_time(`float`, *optional*, defaults to None): + max_time(`float`, *optional*): The maximum amount of time you allow the computation to run for in seconds. generation will still finish the current pass after allocated time has been passed. attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): @@ -1195,20 +1196,25 @@ def generate( # if decoder-only then inputs_tensor has to be `input_ids` input_ids = inputs_tensor + # 5. Prepare `max_length` depending on other stopping criteria. input_ids_seq_length = input_ids.shape[-1] - - # 5. Prepare `max_length` depending on other stopping criteria - # if `max_new_tokens` is passed, but not `max_length` -> set `max_length = max_new_tokens` - if max_length is None and max_new_tokens is not None: - max_length = max_new_tokens + input_ids_seq_length - elif max_length is not None and max_new_tokens is not None: - # Both are set, this is odd, raise a warning + if max_length is None and max_new_tokens is None: warnings.warn( - "Both `max_length` and `max_new_tokens` have been set " - f"but they serve the same purpose. `max_length` {max_length} " - f"will take priority over `max_new_tokens` {max_new_tokens}.", + "Neither `max_length` nor `max_new_tokens` have been set, `max_length` will default to " + f"{self.config.max_length} (`self.config.max_length`). Controlling `max_length` via the config is " + "deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend " + "using `max_new_tokens` to control the maximum length of the generation.", UserWarning, ) + elif max_length is None and max_new_tokens is not None: + max_length = max_new_tokens + input_ids_seq_length + elif max_length is not None and max_new_tokens is not None: + raise ValueError( + "Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a" + " limit to the generated output length. Remove one of those arguments. Please refer to the" + " documentation for more information. 
" + "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)" + ) # default to config if still None max_length = max_length if max_length is not None else self.config.max_length min_length = min_length if min_length is not None else self.config.min_length @@ -1221,9 +1227,9 @@ def generate( if input_ids_seq_length >= max_length: input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids" logger.warning( - f"Input length of {input_ids_string} is {input_ids_seq_length}, but ``max_length`` is set to" - f" {max_length}. This can lead to unexpected behavior. You should consider increasing" - " ``config.max_length`` or ``max_length``." + f"Input length of {input_ids_string} is {input_ids_seq_length}, but `max_length` is set to" + f" {max_length}. This can lead to unexpected behavior. You should consider increasing " + "`max_new_tokens`." ) # 6. determine generation mode </patch>
[]
[]
mesonbuild__meson-4354
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Cross-drive build is not supported on Windows ``` Stdout: Traceback (most recent call last): File "D:\dev\meson\mesonbuild\mesonmain.py", line 368, in run app.generate() File "D:\dev\meson\mesonbuild\mesonmain.py", line 150, in generate self._generate(env) File "D:\dev\meson\mesonbuild\mesonmain.py", line 168, in _generate g = ninjabackend.NinjaBackend(b) File "D:\dev\meson\mesonbuild\backend\ninjabackend.py", line 143, in __init__ super().__init__(build) File "D:\dev\meson\mesonbuild\backend\backends.py", line 111, in __init__ self.environment.get_build_dir()) File "C:\Python36\lib\ntpath.py", line 585, in relpath path_drive, start_drive)) ValueError: path is on mount 'D:', start on mount 'C:' ``` A little bit of debugging shows that this happens in `Backend.__init__()`, ```python self.build_to_src = os.path.relpath(self.environment.get_source_dir(), # D:\dev\meson\test cases\common\5 linkstatic self.environment.get_build_dir()) # C:\Users\$NAME\AppData\Local\Temp\tmp1iii95om ``` </issue> <code> [start of README.md] 1 <p align="center"> 2 <img src="http://mesonbuild.com/assets/images/meson_logo.png"> 3 </p> 4 Meson® is a project to create the best possible next-generation 5 build system. 6 7 #### Status 8 9 [![PyPI](https://img.shields.io/pypi/v/meson.svg)](https://pypi.python.org/pypi/meson) 10 [![Travis](https://travis-ci.org/mesonbuild/meson.svg?branch=master)](https://travis-ci.org/mesonbuild/meson) 11 [![Appveyor](https://ci.appveyor.com/api/projects/status/7jfaotriu8d8ncov?svg=true)](https://ci.appveyor.com/project/mesonbuild/meson) 12 [![Codecov](https://codecov.io/gh/mesonbuild/meson/coverage.svg?branch=master)](https://codecov.io/gh/mesonbuild/meson/branch/master) 13 [![Code Quality: Python](https://img.shields.io/lgtm/grade/python/g/mesonbuild/meson.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/mesonbuild/meson/context:python) 14 [![Total Alerts](https://img.shields.io/lgtm/alerts/g/mesonbuild/meson.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/mesonbuild/meson/alerts) 15 16 #### Dependencies 17 18 - [Python](http://python.org) (version 3.5 or newer) 19 - [Ninja](https://ninja-build.org) (version 1.5 or newer) 20 21 #### Installing from source 22 23 You can run Meson directly from a revision control checkout or an 24 extracted tarball. If you wish you can install it locally with the 25 standard Python distutils command `python3 setup.py install <your 26 options here>`. 27 28 Meson is also available from 29 [PyPi](https://pypi.python.org/pypi/meson), so it can be installed 30 with `pip3 install meson` (this does not require a source checkout, 31 pip will download the package automatically). The exact command to 32 type to install with pip can vary between systems, be sure to use the 33 Python 3 version of pip. 34 35 #### Running 36 37 Meson requires that you have a source directory and a build directory 38 and that these two are different. In your source root must exist a file 39 called 'meson.build'. To generate the build system run this command: 40 41 `meson <source directory> <build directory>` 42 43 Depending on how you obtained Meson the command might also be called 44 `meson.py` instead of plain `meson`. In the rest of this document we 45 are going to use the latter form. 46 47 You can omit either of the two directories, and Meson will substitute 48 the current directory and autodetect what you mean. 
This allows you to 49 do things like this: 50 51 `cd source_root; mkdir builddir; cd builddir; meson ..` 52 53 or 54 55 `cd source_root; mkdir builddir; meson builddir` 56 57 To compile, cd into your build directory and type `ninja`. To run unit 58 tests, type `ninja test`. 59 60 Install is the same but it can take an extra argument: 61 62 `DESTDIR=/destdir/path ninja install` 63 64 `DESTDIR` can be omitted. If you are installing to system directories, 65 you may need to run this command with sudo. 66 67 68 #### Contributing 69 70 We love code contributions. See the [contributing.md](contributing.md) file for 71 details. 72 73 74 #### IRC 75 76 The irc channel for Meson is `#mesonbuild` over at Freenode. 77 78 You can use [FreeNode's official webchat](https://webchat.freenode.net/#mesonbuild) 79 to connect to this channel. 80 81 82 #### Further info 83 84 More information about the Meson build system can be found at the 85 [project's home page](http://mesonbuild.com). 86 87 Meson is a registered trademark of Jussi Pakkanen. 88 [end of README.md] [start of mesonbuild/mesonmain.py] 1 # Copyright 2012-2016 The Meson development team 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 7 # http://www.apache.org/licenses/LICENSE-2.0 8 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import sys 16 import os.path 17 import importlib 18 import traceback 19 import argparse 20 21 from . import mesonlib 22 from . import mlog 23 from . 
import mconf, minit, minstall, mintro, msetup, mtest, rewriter 24 from .mesonlib import MesonException 25 from .environment import detect_msys2_arch 26 from .wrap import wraptool 27 28 29 class CommandLineParser: 30 def __init__(self): 31 self.commands = {} 32 self.hidden_commands = [] 33 self.parser = argparse.ArgumentParser(prog='meson') 34 self.subparsers = self.parser.add_subparsers(title='Commands', 35 description='If no command is specified it defaults to setup command.') 36 self.add_command('setup', msetup.add_arguments, msetup.run, 37 help='Configure the project') 38 self.add_command('configure', mconf.add_arguments, mconf.run, 39 help='Change project options',) 40 self.add_command('install', minstall.add_arguments, minstall.run, 41 help='Install the project') 42 self.add_command('introspect', mintro.add_arguments, mintro.run, 43 help='Introspect project') 44 self.add_command('init', minit.add_arguments, minit.run, 45 help='Create a new project') 46 self.add_command('test', mtest.add_arguments, mtest.run, 47 help='Run tests') 48 self.add_command('wrap', wraptool.add_arguments, wraptool.run, 49 help='Wrap tools') 50 self.add_command('help', self.add_help_arguments, self.run_help_command, 51 help='Print help of a subcommand') 52 53 # Hidden commands 54 self.add_command('rewrite', rewriter.add_arguments, rewriter.run, 55 help=argparse.SUPPRESS) 56 self.add_command('runpython', self.add_runpython_arguments, self.run_runpython_command, 57 help=argparse.SUPPRESS) 58 59 def add_command(self, name, add_arguments_func, run_func, help): 60 # FIXME: Cannot have hidden subparser: 61 # https://bugs.python.org/issue22848 62 if help == argparse.SUPPRESS: 63 p = argparse.ArgumentParser(prog='meson ' + name) 64 self.hidden_commands.append(name) 65 else: 66 p = self.subparsers.add_parser(name, help=help) 67 add_arguments_func(p) 68 p.set_defaults(run_func=run_func) 69 self.commands[name] = p 70 71 def add_runpython_arguments(self, parser): 72 parser.add_argument('script_file') 73 parser.add_argument('script_args', nargs=argparse.REMAINDER) 74 75 def run_runpython_command(self, options): 76 import runpy 77 sys.argv[1:] = options.script_args 78 runpy.run_path(options.script_file, run_name='__main__') 79 return 0 80 81 def add_help_arguments(self, parser): 82 parser.add_argument('command', nargs='?') 83 84 def run_help_command(self, options): 85 if options.command: 86 self.commands[options.command].print_help() 87 else: 88 self.parser.print_help() 89 return 0 90 91 def run(self, args): 92 # If first arg is not a known command, assume user wants to run the setup 93 # command. 
94 known_commands = list(self.commands.keys()) + ['-h', '--help'] 95 if len(args) == 0 or args[0] not in known_commands: 96 args = ['setup'] + args 97 98 # Hidden commands have their own parser instead of using the global one 99 if args[0] in self.hidden_commands: 100 parser = self.commands[args[0]] 101 args = args[1:] 102 else: 103 parser = self.parser 104 105 args = mesonlib.expand_arguments(args) 106 options = parser.parse_args(args) 107 108 try: 109 return options.run_func(options) 110 except MesonException as e: 111 mlog.exception(e) 112 logfile = mlog.shutdown() 113 if logfile is not None: 114 mlog.log("\nA full log can be found at", mlog.bold(logfile)) 115 if os.environ.get('MESON_FORCE_BACKTRACE'): 116 raise 117 return 1 118 except Exception as e: 119 if os.environ.get('MESON_FORCE_BACKTRACE'): 120 raise 121 traceback.print_exc() 122 return 2 123 finally: 124 mlog.shutdown() 125 126 def run_script_command(script_name, script_args): 127 # Map script name to module name for those that doesn't match 128 script_map = {'exe': 'meson_exe', 129 'install': 'meson_install', 130 'delsuffix': 'delwithsuffix', 131 'gtkdoc': 'gtkdochelper', 132 'hotdoc': 'hotdochelper', 133 'regencheck': 'regen_checker'} 134 module_name = script_map.get(script_name, script_name) 135 136 try: 137 module = importlib.import_module('mesonbuild.scripts.' + module_name) 138 except ModuleNotFoundError as e: 139 mlog.exception(e) 140 return 1 141 142 try: 143 return module.run(script_args) 144 except MesonException as e: 145 mlog.error('Error in {} helper script:'.format(script_name)) 146 mlog.exception(e) 147 return 1 148 149 def run(original_args, mainfile): 150 if sys.version_info < (3, 5): 151 print('Meson works correctly only with python 3.5+.') 152 print('You have python %s.' % sys.version) 153 print('Please update your environment') 154 return 1 155 156 # https://github.com/mesonbuild/meson/issues/3653 157 if sys.platform.lower() == 'msys': 158 mlog.error('This python3 seems to be msys/python on MSYS2 Windows, which is known to have path semantics incompatible with Meson') 159 msys2_arch = detect_msys2_arch() 160 if msys2_arch: 161 mlog.error('Please install and use mingw-w64-i686-python3 and/or mingw-w64-x86_64-python3 with Pacman') 162 else: 163 mlog.error('Please download and use Python as detailed at: https://mesonbuild.com/Getting-meson.html') 164 return 2 165 166 # Set the meson command that will be used to run scripts and so on 167 mesonlib.set_meson_command(mainfile) 168 169 args = original_args[:] 170 171 # Special handling of internal commands called from backends, they don't 172 # need to go through argparse. 173 if len(args) >= 2 and args[0] == '--internal': 174 if args[1] == 'regenerate': 175 # Rewrite "meson --internal regenerate" command line to 176 # "meson --reconfigure" 177 args = ['--reconfigure'] + args[2:] 178 else: 179 return run_script_command(args[1], args[2:]) 180 181 return CommandLineParser().run(args) 182 183 def main(): 184 # Always resolve the command path so Ninja can find it for regen, tests, etc. 
185 if 'meson.exe' in sys.executable: 186 assert(os.path.isabs(sys.executable)) 187 launcher = sys.executable 188 else: 189 launcher = os.path.realpath(sys.argv[0]) 190 return run(sys.argv[1:], launcher) 191 192 if __name__ == '__main__': 193 sys.exit(main()) 194 [end of mesonbuild/mesonmain.py] [start of mesonbuild/mtest.py] 1 # Copyright 2016-2017 The Meson development team 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 7 # http://www.apache.org/licenses/LICENSE-2.0 8 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 # A tool to run tests in many different ways. 16 17 import shlex 18 import subprocess, sys, os, argparse 19 import pickle 20 from mesonbuild import build 21 from mesonbuild import environment 22 from mesonbuild.dependencies import ExternalProgram 23 from mesonbuild.mesonlib import substring_is_in_list, MesonException 24 from mesonbuild import mlog 25 26 import time, datetime, multiprocessing, json 27 import concurrent.futures as conc 28 import platform 29 import signal 30 import random 31 from copy import deepcopy 32 import enum 33 34 # GNU autotools interprets a return code of 77 from tests it executes to 35 # mean that the test should be skipped. 36 GNU_SKIP_RETURNCODE = 77 37 38 def is_windows(): 39 platname = platform.system().lower() 40 return platname == 'windows' or 'mingw' in platname 41 42 def is_cygwin(): 43 platname = platform.system().lower() 44 return 'cygwin' in platname 45 46 def determine_worker_count(): 47 varname = 'MESON_TESTTHREADS' 48 if varname in os.environ: 49 try: 50 num_workers = int(os.environ[varname]) 51 except ValueError: 52 print('Invalid value in %s, using 1 thread.' % varname) 53 num_workers = 1 54 else: 55 try: 56 # Fails in some weird environments such as Debian 57 # reproducible build. 58 num_workers = multiprocessing.cpu_count() 59 except Exception: 60 num_workers = 1 61 return num_workers 62 63 def add_arguments(parser): 64 parser.add_argument('--repeat', default=1, dest='repeat', type=int, 65 help='Number of times to run the tests.') 66 parser.add_argument('--no-rebuild', default=False, action='store_true', 67 help='Do not rebuild before running tests.') 68 parser.add_argument('--gdb', default=False, dest='gdb', action='store_true', 69 help='Run test under gdb.') 70 parser.add_argument('--list', default=False, dest='list', action='store_true', 71 help='List available tests.') 72 parser.add_argument('--wrapper', default=None, dest='wrapper', type=shlex.split, 73 help='wrapper to run tests with (e.g. 
Valgrind)') 74 parser.add_argument('-C', default='.', dest='wd', 75 help='directory to cd into before running') 76 parser.add_argument('--suite', default=[], dest='include_suites', action='append', metavar='SUITE', 77 help='Only run tests belonging to the given suite.') 78 parser.add_argument('--no-suite', default=[], dest='exclude_suites', action='append', metavar='SUITE', 79 help='Do not run tests belonging to the given suite.') 80 parser.add_argument('--no-stdsplit', default=True, dest='split', action='store_false', 81 help='Do not split stderr and stdout in test logs.') 82 parser.add_argument('--print-errorlogs', default=False, action='store_true', 83 help="Whether to print failing tests' logs.") 84 parser.add_argument('--benchmark', default=False, action='store_true', 85 help="Run benchmarks instead of tests.") 86 parser.add_argument('--logbase', default='testlog', 87 help="Base name for log file.") 88 parser.add_argument('--num-processes', default=determine_worker_count(), type=int, 89 help='How many parallel processes to use.') 90 parser.add_argument('-v', '--verbose', default=False, action='store_true', 91 help='Do not redirect stdout and stderr') 92 parser.add_argument('-q', '--quiet', default=False, action='store_true', 93 help='Produce less output to the terminal.') 94 parser.add_argument('-t', '--timeout-multiplier', type=float, default=None, 95 help='Define a multiplier for test timeout, for example ' 96 ' when running tests in particular conditions they might take' 97 ' more time to execute.') 98 parser.add_argument('--setup', default=None, dest='setup', 99 help='Which test setup to use.') 100 parser.add_argument('--test-args', default=[], type=shlex.split, 101 help='Arguments to pass to the specified test(s) or all tests') 102 parser.add_argument('args', nargs='*', 103 help='Optional list of tests to run') 104 105 106 def returncode_to_status(retcode): 107 # Note: We can't use `os.WIFSIGNALED(result.returncode)` and the related 108 # functions here because the status returned by subprocess is munged. It 109 # returns a negative value if the process was killed by a signal rather than 110 # the raw status returned by `wait()`. Also, If a shell sits between Meson 111 # the the actual unit test that shell is likely to convert a termination due 112 # to a signal into an exit status of 128 plus the signal number. 
113 if retcode < 0: 114 signum = -retcode 115 try: 116 signame = signal.Signals(signum).name 117 except ValueError: 118 signame = 'SIGinvalid' 119 return '(killed by signal %d %s)' % (signum, signame) 120 121 if retcode <= 128: 122 return '(exit status %d)' % (retcode,) 123 124 signum = retcode - 128 125 try: 126 signame = signal.Signals(signum).name 127 except ValueError: 128 signame = 'SIGinvalid' 129 return '(exit status %d or signal %d %s)' % (retcode, signum, signame) 130 131 def env_tuple_to_str(env): 132 return ''.join(["%s='%s' " % (k, v) for k, v in env]) 133 134 135 class TestException(MesonException): 136 pass 137 138 139 @enum.unique 140 class TestResult(enum.Enum): 141 142 OK = 'OK' 143 TIMEOUT = 'TIMEOUT' 144 SKIP = 'SKIP' 145 FAIL = 'FAIL' 146 147 148 class TestRun: 149 def __init__(self, res, returncode, should_fail, duration, stdo, stde, cmd, 150 env): 151 assert isinstance(res, TestResult) 152 self.res = res 153 self.returncode = returncode 154 self.duration = duration 155 self.stdo = stdo 156 self.stde = stde 157 self.cmd = cmd 158 self.env = env 159 self.should_fail = should_fail 160 161 def get_log(self): 162 res = '--- command ---\n' 163 if self.cmd is None: 164 res += 'NONE\n' 165 else: 166 test_only_env = set(self.env.items()) - set(os.environ.items()) 167 res += '{}{}\n'.format(env_tuple_to_str(test_only_env), ' '.join(self.cmd)) 168 if self.stdo: 169 res += '--- stdout ---\n' 170 res += self.stdo 171 if self.stde: 172 if res[-1:] != '\n': 173 res += '\n' 174 res += '--- stderr ---\n' 175 res += self.stde 176 if res[-1:] != '\n': 177 res += '\n' 178 res += '-------\n\n' 179 return res 180 181 def decode(stream): 182 if stream is None: 183 return '' 184 try: 185 return stream.decode('utf-8') 186 except UnicodeDecodeError: 187 return stream.decode('iso-8859-1', errors='ignore') 188 189 def write_json_log(jsonlogfile, test_name, result): 190 jresult = {'name': test_name, 191 'stdout': result.stdo, 192 'result': result.res.value, 193 'duration': result.duration, 194 'returncode': result.returncode, 195 'command': result.cmd} 196 if isinstance(result.env, dict): 197 jresult['env'] = result.env 198 else: 199 jresult['env'] = result.env.get_env(os.environ) 200 if result.stde: 201 jresult['stderr'] = result.stde 202 jsonlogfile.write(json.dumps(jresult) + '\n') 203 204 def run_with_mono(fname): 205 if fname.endswith('.exe') and not (is_windows() or is_cygwin()): 206 return True 207 return False 208 209 def load_benchmarks(build_dir): 210 datafile = os.path.join(build_dir, 'meson-private', 'meson_benchmark_setup.dat') 211 if not os.path.isfile(datafile): 212 raise TestException('Directory ${!r} does not seem to be a Meson build directory.'.format(build_dir)) 213 with open(datafile, 'rb') as f: 214 obj = pickle.load(f) 215 return obj 216 217 def load_tests(build_dir): 218 datafile = os.path.join(build_dir, 'meson-private', 'meson_test_setup.dat') 219 if not os.path.isfile(datafile): 220 raise TestException('Directory ${!r} does not seem to be a Meson build directory.'.format(build_dir)) 221 with open(datafile, 'rb') as f: 222 obj = pickle.load(f) 223 return obj 224 225 226 class SingleTestRunner: 227 228 def __init__(self, test, env, options): 229 self.test = test 230 self.env = env 231 self.options = options 232 233 def _get_cmd(self): 234 if self.test.fname[0].endswith('.jar'): 235 return ['java', '-jar'] + self.test.fname 236 elif not self.test.is_cross_built and run_with_mono(self.test.fname[0]): 237 return ['mono'] + self.test.fname 238 else: 239 if 
self.test.is_cross_built: 240 if self.test.exe_runner is None: 241 # Can not run test on cross compiled executable 242 # because there is no execute wrapper. 243 return None 244 else: 245 if not self.test.exe_runner.found(): 246 msg = 'The exe_wrapper defined in the cross file {!r} was not ' \ 247 'found. Please check the command and/or add it to PATH.' 248 raise TestException(msg.format(self.test.exe_runner.name)) 249 return self.test.exe_runner.get_command() + self.test.fname 250 else: 251 return self.test.fname 252 253 def run(self): 254 cmd = self._get_cmd() 255 if cmd is None: 256 skip_stdout = 'Not run because can not execute cross compiled binaries.' 257 return TestRun(res=TestResult.SKIP, returncode=GNU_SKIP_RETURNCODE, 258 should_fail=self.test.should_fail, duration=0.0, 259 stdo=skip_stdout, stde=None, cmd=None, env=self.test.env) 260 else: 261 wrap = TestHarness.get_wrapper(self.options) 262 if self.options.gdb: 263 self.test.timeout = None 264 return self._run_cmd(wrap + cmd + self.test.cmd_args + self.options.test_args) 265 266 def _run_cmd(self, cmd): 267 starttime = time.time() 268 269 if len(self.test.extra_paths) > 0: 270 self.env['PATH'] = os.pathsep.join(self.test.extra_paths + ['']) + self.env['PATH'] 271 if substring_is_in_list('wine', cmd): 272 wine_paths = ['Z:' + p for p in self.test.extra_paths] 273 wine_path = ';'.join(wine_paths) 274 # Don't accidentally end with an `;` because that will add the 275 # current directory and might cause unexpected behaviour 276 if 'WINEPATH' in self.env: 277 self.env['WINEPATH'] = wine_path + ';' + self.env['WINEPATH'] 278 else: 279 self.env['WINEPATH'] = wine_path 280 281 # If MALLOC_PERTURB_ is not set, or if it is set to an empty value, 282 # (i.e., the test or the environment don't explicitly set it), set 283 # it ourselves. We do this unconditionally for regular tests 284 # because it is extremely useful to have. 285 # Setting MALLOC_PERTURB_="0" will completely disable this feature. 286 if ('MALLOC_PERTURB_' not in self.env or not self.env['MALLOC_PERTURB_']) and not self.options.benchmark: 287 self.env['MALLOC_PERTURB_'] = str(random.randint(1, 255)) 288 289 stdout = None 290 stderr = None 291 if not self.options.verbose: 292 stdout = subprocess.PIPE 293 stderr = subprocess.PIPE if self.options and self.options.split else subprocess.STDOUT 294 295 # Let gdb handle ^C instead of us 296 if self.options.gdb: 297 previous_sigint_handler = signal.getsignal(signal.SIGINT) 298 # Make the meson executable ignore SIGINT while gdb is running. 299 signal.signal(signal.SIGINT, signal.SIG_IGN) 300 301 def preexec_fn(): 302 if self.options.gdb: 303 # Restore the SIGINT handler for the child process to 304 # ensure it can handle it. 305 signal.signal(signal.SIGINT, signal.SIG_DFL) 306 else: 307 # We don't want setsid() in gdb because gdb needs the 308 # terminal in order to handle ^C and not show tcsetpgrp() 309 # errors avoid not being able to use the terminal. 
310 os.setsid() 311 312 p = subprocess.Popen(cmd, 313 stdout=stdout, 314 stderr=stderr, 315 env=self.env, 316 cwd=self.test.workdir, 317 preexec_fn=preexec_fn if not is_windows() else None) 318 timed_out = False 319 kill_test = False 320 if self.test.timeout is None: 321 timeout = None 322 elif self.options.timeout_multiplier is not None: 323 timeout = self.test.timeout * self.options.timeout_multiplier 324 else: 325 timeout = self.test.timeout 326 try: 327 (stdo, stde) = p.communicate(timeout=timeout) 328 except subprocess.TimeoutExpired: 329 if self.options.verbose: 330 print('%s time out (After %d seconds)' % (self.test.name, timeout)) 331 timed_out = True 332 except KeyboardInterrupt: 333 mlog.warning('CTRL-C detected while running %s' % (self.test.name)) 334 kill_test = True 335 finally: 336 if self.options.gdb: 337 # Let us accept ^C again 338 signal.signal(signal.SIGINT, previous_sigint_handler) 339 340 if kill_test or timed_out: 341 # Python does not provide multiplatform support for 342 # killing a process and all its children so we need 343 # to roll our own. 344 if is_windows(): 345 subprocess.call(['taskkill', '/F', '/T', '/PID', str(p.pid)]) 346 else: 347 try: 348 # Kill the process group that setsid() created. 349 os.killpg(p.pid, signal.SIGKILL) 350 except ProcessLookupError: 351 # Sometimes (e.g. with Wine) this happens. 352 # There's nothing we can do (maybe the process 353 # already died) so carry on. 354 pass 355 try: 356 (stdo, stde) = p.communicate(timeout=1) 357 except subprocess.TimeoutExpired: 358 # An earlier kill attempt has not worked for whatever reason. 359 # Try to kill it one last time with a direct call. 360 # If the process has spawned children, they will remain around. 361 p.kill() 362 try: 363 (stdo, stde) = p.communicate(timeout=1) 364 except subprocess.TimeoutExpired: 365 stdo = b'Test process could not be killed.' 366 stde = b'' 367 except ValueError: 368 stdo = b'Could not read output. Maybe the process has redirected its stdout/stderr?' 369 stde = b'' 370 endtime = time.time() 371 duration = endtime - starttime 372 stdo = decode(stdo) 373 if stde: 374 stde = decode(stde) 375 if timed_out: 376 res = TestResult.TIMEOUT 377 elif p.returncode == GNU_SKIP_RETURNCODE: 378 res = TestResult.SKIP 379 elif self.test.should_fail == bool(p.returncode): 380 res = TestResult.OK 381 else: 382 res = TestResult.FAIL 383 return TestRun(res, p.returncode, self.test.should_fail, duration, stdo, stde, cmd, self.test.env) 384 385 386 class TestHarness: 387 def __init__(self, options): 388 self.options = options 389 self.collected_logs = [] 390 self.fail_count = 0 391 self.success_count = 0 392 self.skip_count = 0 393 self.timeout_count = 0 394 self.is_run = False 395 self.tests = None 396 self.suites = None 397 self.logfilename = None 398 self.logfile = None 399 self.jsonlogfile = None 400 if self.options.benchmark: 401 self.tests = load_benchmarks(options.wd) 402 else: 403 self.tests = load_tests(options.wd) 404 self.load_suites() 405 406 def __del__(self): 407 if self.logfile: 408 self.logfile.close() 409 if self.jsonlogfile: 410 self.jsonlogfile.close() 411 412 def merge_suite_options(self, options, test): 413 if ':' in options.setup: 414 if options.setup not in self.build_data.test_setups: 415 sys.exit("Unknown test setup '%s'." 
% options.setup) 416 current = self.build_data.test_setups[options.setup] 417 else: 418 full_name = test.project_name + ":" + options.setup 419 if full_name not in self.build_data.test_setups: 420 sys.exit("Test setup '%s' not found from project '%s'." % (options.setup, test.project_name)) 421 current = self.build_data.test_setups[full_name] 422 if not options.gdb: 423 options.gdb = current.gdb 424 if options.timeout_multiplier is None: 425 options.timeout_multiplier = current.timeout_multiplier 426 # if options.env is None: 427 # options.env = current.env # FIXME, should probably merge options here. 428 if options.wrapper is not None and current.exe_wrapper is not None: 429 sys.exit('Conflict: both test setup and command line specify an exe wrapper.') 430 if options.wrapper is None: 431 options.wrapper = current.exe_wrapper 432 return current.env.get_env(os.environ.copy()) 433 434 def get_test_runner(self, test): 435 options = deepcopy(self.options) 436 if options.setup: 437 env = self.merge_suite_options(options, test) 438 else: 439 env = os.environ.copy() 440 if isinstance(test.env, build.EnvironmentVariables): 441 test.env = test.env.get_env(env) 442 env.update(test.env) 443 return SingleTestRunner(test, env, options) 444 445 def process_test_result(self, result): 446 if result.res is TestResult.TIMEOUT: 447 self.timeout_count += 1 448 self.fail_count += 1 449 elif result.res is TestResult.SKIP: 450 self.skip_count += 1 451 elif result.res is TestResult.OK: 452 self.success_count += 1 453 elif result.res is TestResult.FAIL: 454 self.fail_count += 1 455 else: 456 sys.exit('Unknown test result encountered: {}'.format(result.res)) 457 458 def print_stats(self, numlen, tests, name, result, i): 459 startpad = ' ' * (numlen - len('%d' % (i + 1))) 460 num = '%s%d/%d' % (startpad, i + 1, len(tests)) 461 padding1 = ' ' * (38 - len(name)) 462 padding2 = ' ' * (8 - len(result.res.value)) 463 status = '' 464 465 if result.res is TestResult.FAIL: 466 status = returncode_to_status(result.returncode) 467 result_str = '%s %s %s%s%s%5.2f s %s' % \ 468 (num, name, padding1, result.res.value, padding2, result.duration, 469 status) 470 if not self.options.quiet or result.res is not TestResult.OK: 471 if result.res is not TestResult.OK and mlog.colorize_console: 472 if result.res in (TestResult.FAIL, TestResult.TIMEOUT): 473 decorator = mlog.red 474 elif result.res is TestResult.SKIP: 475 decorator = mlog.yellow 476 else: 477 sys.exit('Unreachable code was ... well ... 
reached.') 478 print(decorator(result_str).get_text(True)) 479 else: 480 print(result_str) 481 result_str += "\n\n" + result.get_log() 482 if (result.returncode != GNU_SKIP_RETURNCODE) \ 483 and (result.returncode != 0) != result.should_fail: 484 if self.options.print_errorlogs: 485 self.collected_logs.append(result_str) 486 if self.logfile: 487 self.logfile.write(result_str) 488 if self.jsonlogfile: 489 write_json_log(self.jsonlogfile, name, result) 490 491 def print_summary(self): 492 msg = ''' 493 OK: %4d 494 FAIL: %4d 495 SKIP: %4d 496 TIMEOUT: %4d 497 ''' % (self.success_count, self.fail_count, self.skip_count, self.timeout_count) 498 print(msg) 499 if self.logfile: 500 self.logfile.write(msg) 501 502 def print_collected_logs(self): 503 if len(self.collected_logs) > 0: 504 if len(self.collected_logs) > 10: 505 print('\nThe output from 10 first failed tests:\n') 506 else: 507 print('\nThe output from the failed tests:\n') 508 for log in self.collected_logs[:10]: 509 lines = log.splitlines() 510 if len(lines) > 104: 511 print('\n'.join(lines[0:4])) 512 print('--- Listing only the last 100 lines from a long log. ---') 513 lines = lines[-100:] 514 for line in lines: 515 print(line) 516 517 def doit(self): 518 if self.is_run: 519 raise RuntimeError('Test harness object can only be used once.') 520 self.is_run = True 521 tests = self.get_tests() 522 if not tests: 523 return 0 524 self.run_tests(tests) 525 return self.fail_count 526 527 @staticmethod 528 def split_suite_string(suite): 529 if ':' in suite: 530 return suite.split(':', 1) 531 else: 532 return suite, "" 533 534 @staticmethod 535 def test_in_suites(test, suites): 536 for suite in suites: 537 (prj_match, st_match) = TestHarness.split_suite_string(suite) 538 for prjst in test.suite: 539 (prj, st) = TestHarness.split_suite_string(prjst) 540 541 # the SUITE can be passed as 542 # suite_name 543 # or 544 # project_name:suite_name 545 # so we need to select only the test belonging to project_name 546 547 # this if hanlde the first case (i.e., SUITE == suite_name) 548 549 # in this way we can run tests belonging to different 550 # (sub)projects which share the same suite_name 551 if not st_match and st == prj_match: 552 return True 553 554 # these two conditions are needed to handle the second option 555 # i.e., SUITE == project_name:suite_name 556 557 # in this way we select the only the tests of 558 # project_name with suite_name 559 if prj_match and prj != prj_match: 560 continue 561 if st_match and st != st_match: 562 continue 563 return True 564 return False 565 566 def test_suitable(self, test): 567 return (not self.options.include_suites or TestHarness.test_in_suites(test, self.options.include_suites)) \ 568 and not TestHarness.test_in_suites(test, self.options.exclude_suites) 569 570 def load_suites(self): 571 ss = set() 572 for t in self.tests: 573 for s in t.suite: 574 ss.add(s) 575 self.suites = list(ss) 576 577 def get_tests(self): 578 if not self.tests: 579 print('No tests defined.') 580 return [] 581 582 if len(self.options.include_suites) or len(self.options.exclude_suites): 583 tests = [] 584 for tst in self.tests: 585 if self.test_suitable(tst): 586 tests.append(tst) 587 else: 588 tests = self.tests 589 590 if self.options.args: 591 tests = [t for t in tests if t.name in self.options.args] 592 593 if not tests: 594 print('No suitable tests defined.') 595 return [] 596 597 for test in tests: 598 test.rebuilt = False 599 600 return tests 601 602 def open_log_files(self): 603 if not self.options.logbase or 
self.options.verbose: 604 return None, None, None, None 605 606 namebase = None 607 logfile_base = os.path.join(self.options.wd, 'meson-logs', self.options.logbase) 608 609 if self.options.wrapper: 610 namebase = os.path.basename(self.get_wrapper(self.options)[0]) 611 elif self.options.setup: 612 namebase = self.options.setup.replace(":", "_") 613 614 if namebase: 615 logfile_base += '-' + namebase.replace(' ', '_') 616 self.logfilename = logfile_base + '.txt' 617 self.jsonlogfilename = logfile_base + '.json' 618 619 self.jsonlogfile = open(self.jsonlogfilename, 'w', encoding='utf-8') 620 self.logfile = open(self.logfilename, 'w', encoding='utf-8') 621 622 self.logfile.write('Log of Meson test suite run on %s\n\n' 623 % datetime.datetime.now().isoformat()) 624 inherit_env = env_tuple_to_str(os.environ.items()) 625 self.logfile.write('Inherited environment: {}\n\n'.format(inherit_env)) 626 627 @staticmethod 628 def get_wrapper(options): 629 wrap = [] 630 if options.gdb: 631 wrap = ['gdb', '--quiet', '--nh'] 632 if options.repeat > 1: 633 wrap += ['-ex', 'run', '-ex', 'quit'] 634 # Signal the end of arguments to gdb 635 wrap += ['--args'] 636 if options.wrapper: 637 wrap += options.wrapper 638 assert(isinstance(wrap, list)) 639 return wrap 640 641 def get_pretty_suite(self, test): 642 if len(self.suites) > 1: 643 rv = TestHarness.split_suite_string(test.suite[0])[0] 644 s = "+".join(TestHarness.split_suite_string(s)[1] for s in test.suite) 645 if len(s): 646 rv += ":" 647 return rv + s + " / " + test.name 648 else: 649 return test.name 650 651 def run_tests(self, tests): 652 executor = None 653 futures = [] 654 numlen = len('%d' % len(tests)) 655 self.open_log_files() 656 startdir = os.getcwd() 657 if self.options.wd: 658 os.chdir(self.options.wd) 659 self.build_data = build.load(os.getcwd()) 660 661 try: 662 for _ in range(self.options.repeat): 663 for i, test in enumerate(tests): 664 visible_name = self.get_pretty_suite(test) 665 666 if not test.is_parallel or self.options.gdb: 667 self.drain_futures(futures) 668 futures = [] 669 single_test = self.get_test_runner(test) 670 res = single_test.run() 671 self.process_test_result(res) 672 self.print_stats(numlen, tests, visible_name, res, i) 673 else: 674 if not executor: 675 executor = conc.ThreadPoolExecutor(max_workers=self.options.num_processes) 676 single_test = self.get_test_runner(test) 677 f = executor.submit(single_test.run) 678 futures.append((f, numlen, tests, visible_name, i)) 679 if self.options.repeat > 1 and self.fail_count: 680 break 681 if self.options.repeat > 1 and self.fail_count: 682 break 683 684 self.drain_futures(futures) 685 self.print_summary() 686 self.print_collected_logs() 687 688 if self.logfilename: 689 print('Full log written to %s' % self.logfilename) 690 finally: 691 os.chdir(startdir) 692 693 def drain_futures(self, futures): 694 for i in futures: 695 (result, numlen, tests, name, i) = i 696 if self.options.repeat > 1 and self.fail_count: 697 result.cancel() 698 if self.options.verbose: 699 result.result() 700 self.process_test_result(result.result()) 701 self.print_stats(numlen, tests, name, result.result(), i) 702 703 def run_special(self): 704 '''Tests run by the user, usually something like "under gdb 1000 times".''' 705 if self.is_run: 706 raise RuntimeError('Can not use run_special after a full run.') 707 tests = self.get_tests() 708 if not tests: 709 return 0 710 self.run_tests(tests) 711 return self.fail_count 712 713 714 def list_tests(th): 715 tests = th.get_tests() 716 for t in tests: 717 
print(th.get_pretty_suite(t)) 718 719 def rebuild_all(wd): 720 if not os.path.isfile(os.path.join(wd, 'build.ninja')): 721 print('Only ninja backend is supported to rebuild tests before running them.') 722 return True 723 724 ninja = environment.detect_ninja() 725 if not ninja: 726 print("Can't find ninja, can't rebuild test.") 727 return False 728 729 p = subprocess.Popen([ninja, '-C', wd]) 730 p.communicate() 731 732 if p.returncode != 0: 733 print('Could not rebuild') 734 return False 735 736 return True 737 738 def run(options): 739 if options.benchmark: 740 options.num_processes = 1 741 742 if options.verbose and options.quiet: 743 print('Can not be both quiet and verbose at the same time.') 744 return 1 745 746 check_bin = None 747 if options.gdb: 748 options.verbose = True 749 if options.wrapper: 750 print('Must not specify both a wrapper and gdb at the same time.') 751 return 1 752 check_bin = 'gdb' 753 754 if options.wrapper: 755 check_bin = options.wrapper[0] 756 757 if check_bin is not None: 758 exe = ExternalProgram(check_bin, silent=True) 759 if not exe.found(): 760 print('Could not find requested program: {!r}'.format(check_bin)) 761 return 1 762 options.wd = os.path.abspath(options.wd) 763 764 if not options.list and not options.no_rebuild: 765 if not rebuild_all(options.wd): 766 return 1 767 768 try: 769 th = TestHarness(options) 770 if options.list: 771 list_tests(th) 772 return 0 773 if not options.args: 774 return th.doit() 775 return th.run_special() 776 except TestException as e: 777 print('Meson test encountered an error:\n') 778 if os.environ.get('MESON_FORCE_BACKTRACE'): 779 raise e 780 else: 781 print(e) 782 return 1 783 784 def run_with_args(args): 785 parser = argparse.ArgumentParser(prog='meson test') 786 add_arguments(parser) 787 options = parser.parse_args(args) 788 return run(options) 789 [end of mesonbuild/mtest.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
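For the issue quoted at the top of this instance, the traceback comes down to `ntpath.relpath` refusing to relate two paths that live on different Windows drives. A small, hypothetical reproduction is sketched below; the drive letters and directories are placeholders taken from the report.

```python
import ntpath  # the Windows flavour of os.path; importable on any platform

try:
    # Source tree on D:, build directory on C:, as in the reported traceback.
    print(ntpath.relpath(r"D:\dev\meson\test cases\common\5 linkstatic",
                         start=r"C:\Users\name\AppData\Local\Temp\tmp1iii95om"))
except ValueError as exc:
    # ntpath raises because no relative path can cross a drive boundary.
    print("relpath failed:", exc)
```

Any fix therefore has to catch this `ValueError` (or avoid assuming a relative path always exists); the metadata and the patch recorded for this instance follow below.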
mesonbuild/meson
25fef3d1facecfb64108ef0a1439d24100593935
Cross-drive build is not supported on Windows ``` Stdout: Traceback (most recent call last): File "D:\dev\meson\mesonbuild\mesonmain.py", line 368, in run app.generate() File "D:\dev\meson\mesonbuild\mesonmain.py", line 150, in generate self._generate(env) File "D:\dev\meson\mesonbuild\mesonmain.py", line 168, in _generate g = ninjabackend.NinjaBackend(b) File "D:\dev\meson\mesonbuild\backend\ninjabackend.py", line 143, in __init__ super().__init__(build) File "D:\dev\meson\mesonbuild\backend\backends.py", line 111, in __init__ self.environment.get_build_dir()) File "C:\Python36\lib\ntpath.py", line 585, in relpath path_drive, start_drive)) ValueError: path is on mount 'D:', start on mount 'C:' ``` A little bit of debugging shows that this happens in `Backend.__init__()`, ```python self.build_to_src = os.path.relpath(self.environment.get_source_dir(), # D:\dev\meson\test cases\common\5 linkstatic self.environment.get_build_dir()) # C:\Users\$NAME\AppData\Local\Temp\tmp1iii95om ```
2018-10-09T19:46:45Z
<patch> diff --git a/mesonbuild/backend/backends.py b/mesonbuild/backend/backends.py --- a/mesonbuild/backend/backends.py +++ b/mesonbuild/backend/backends.py @@ -132,8 +132,8 @@ def __init__(self, build): self.build = build self.environment = build.environment self.processed_targets = {} - self.build_to_src = os.path.relpath(self.environment.get_source_dir(), - self.environment.get_build_dir()) + self.build_to_src = mesonlib.relpath(self.environment.get_source_dir(), + self.environment.get_build_dir()) def get_target_filename(self, t): if isinstance(t, build.CustomTarget): diff --git a/mesonbuild/interpreter.py b/mesonbuild/interpreter.py --- a/mesonbuild/interpreter.py +++ b/mesonbuild/interpreter.py @@ -1540,8 +1540,8 @@ def method_call(self, method_name, args, kwargs): # because the Build object contains dicts and lists. num_targets = len(self.interpreter.build.targets) state = ModuleState( - build_to_src=os.path.relpath(self.interpreter.environment.get_source_dir(), - self.interpreter.environment.get_build_dir()), + build_to_src=mesonlib.relpath(self.interpreter.environment.get_source_dir(), + self.interpreter.environment.get_build_dir()), subproject=self.interpreter.subproject, subdir=self.interpreter.subdir, current_lineno=self.interpreter.current_lineno, @@ -2182,14 +2182,7 @@ def run_command_impl(self, node, args, kwargs, in_builddir=False): raise InterpreterException('Program or command {!r} not found ' 'or not executable'.format(cmd)) cmd = prog - try: - cmd_path = os.path.relpath(cmd.get_path(), start=srcdir) - except ValueError: - # On Windows a relative path can't be evaluated for - # paths on two different drives (i.e. c:\foo and f:\bar). - # The only thing left to is is to use the original absolute - # path. - cmd_path = cmd.get_path() + cmd_path = mesonlib.relpath(cmd.get_path(), start=srcdir) if not cmd_path.startswith('..') and cmd_path not in self.build_def_files: self.build_def_files.append(cmd_path) expanded_args = [] @@ -2206,7 +2199,7 @@ def run_command_impl(self, node, args, kwargs, in_builddir=False): if not os.path.isabs(a): a = os.path.join(builddir if in_builddir else srcdir, self.subdir, a) if os.path.isfile(a): - a = os.path.relpath(a, start=srcdir) + a = mesonlib.relpath(a, start=srcdir) if not a.startswith('..'): if a not in self.build_def_files: self.build_def_files.append(a) diff --git a/mesonbuild/mesonlib.py b/mesonbuild/mesonlib.py --- a/mesonbuild/mesonlib.py +++ b/mesonbuild/mesonlib.py @@ -1143,3 +1143,12 @@ def __exit__(self, *args): elif have_msvcrt: msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_UNLCK, 1) self.lockfile.close() + +def relpath(path, start): + # On Windows a relative path can't be evaluated for paths on two different + # drives (i.e. c:\foo and f:\bar). The only thing left to do is to use the + # original absolute path. + try: + return os.path.relpath(path, start) + except ValueError: + return path </patch>
[]
[]
mesonbuild__meson-11951
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> meson subprojects command does not consider subproject_dir option **Describe the bug** When a project configures an alternative directory for subprojects `project( ... , subproject_dir: 'lib', ...)`, `meson subprojects ...` fails with `Directory . does not seem to have subprojects` Looking at the source, looks like the subproject dir is hard-coded in: https://github.com/mesonbuild/meson/blob/8369dbbfecafa87629f0624e6dc7c9cd235043a4/mesonbuild/msubprojects.py#L692-L696 This looks like a fairly easy fix (keyword looks, I'm not familiar with the code), I should be able to tackle it if the maintainers wish/need, let me know I'll note that I realize `subproject_dir` is supplied as a compatibility option, and ideally the projects build-system would be "fixed" to use the recommended directory, but that's not always ideal, and I think this is undesired behavior and warrants a fix **To Reproduce** A very simple project with a [any] subproject, and alternate `subproject_dir` should be enough ``` project/ ├── meson.build └── lib/ └── libusb.wrap ``` `meson.build` ``` project('project', subproject_dir: 'lib') libusb = subproject('libusb') ``` `libusb.wrap` ``` [wrap-git] url = https://github.com/dragonCodecs/libusb revision = blackmagic/meson clone-recursive = false ``` Running the following commands ```bash cd project/ meson subprojects download ``` **Expected behavior** `meson subprojects ...` should either parse the `subproject_dir` from the source meson.build, or more simply, support an argument for an alternative directory like `--subproject-dir='lib'`, and behave exactly like it does for the default `subprojects` i.e. when I rename `lib` to `subprojects` ``` bash $ meson subprojects download Cloning into 'libusb'... remote: Enumerating objects: 16939, done. remote: Counting objects: 100% (3107/3107), done. remote: Compressing objects: 100% (434/434), done. remote: Total 16939 (delta 2714), reused 2707 (delta 2670), pack-reused 13832 Receiving objects: 100% (16939/16939), 5.11 MiB | 1.28 MiB/s, done. Resolving deltas: 100% (12218/12218), done. branch 'blackmagic/meson' set up to track 'origin/blackmagic/meson'. Switched to a new branch 'blackmagic/meson' Download libusb... -> done ``` **system parameters** * meson version `1.1.1` I don't think anything matters here other than the meson version but for the sake of completeness * operating system `Arch Linux` * python version `Python 3.11.3` * ninja version `1.11.1` </issue> <code> [start of README.md] 1 <p align="center"> 2 <img src="https://mesonbuild.com/assets/images/meson_logo.png"> 3 </p> 4 Meson® is a project to create the best possible next-generation 5 build system. 6 7 #### Status 8 9 [![PyPI](https://img.shields.io/pypi/v/meson.svg)](https://pypi.python.org/pypi/meson) 10 [![Build Status](https://dev.azure.com/jussi0947/jussi/_apis/build/status/mesonbuild.meson)](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1) 11 [![Codecov](https://codecov.io/gh/mesonbuild/meson/coverage.svg?branch=master)](https://codecov.io/gh/mesonbuild/meson/branch/master) 12 13 #### Dependencies 14 15 - [Python](https://python.org) (version 3.7 or newer) 16 - [Ninja](https://ninja-build.org) (version 1.8.2 or newer) 17 18 #### Installing from source 19 20 Meson is available on [PyPi](https://pypi.python.org/pypi/meson), so 21 it can be installed with `pip3 install meson`. 
The exact command to 22 type to install with `pip` can vary between systems, be sure to use 23 the Python 3 version of `pip`. 24 25 If you wish you can install it locally with the standard Python command: 26 27 ```console 28 python3 -m pip install meson 29 ``` 30 31 For builds using Ninja, Ninja can be downloaded directly from Ninja 32 [GitHub release page](https://github.com/ninja-build/ninja/releases) 33 or via [PyPi](https://pypi.python.org/pypi/ninja) 34 35 ```console 36 python3 -m pip install ninja 37 ``` 38 39 More on Installing Meson build can be found at the 40 [getting meson page](https://mesonbuild.com/Getting-meson.html). 41 42 #### Creating a standalone script 43 44 Meson can be run as a [Python zip 45 app](https://docs.python.org/3/library/zipapp.html). To generate the 46 executable run the following command: 47 48 ./packaging/create_zipapp.py --outfile meson.pyz --interpreter '/usr/bin/env python3' <source checkout> 49 50 #### Running 51 52 Meson requires that you have a source directory and a build directory 53 and that these two are different. In your source root must exist a 54 file called `meson.build`. To generate the build system run this 55 command: 56 57 `meson setup <source directory> <build directory>` 58 59 Depending on how you obtained Meson the command might also be called 60 `meson.py` instead of plain `meson`. In the rest of this document we 61 are going to use the latter form. 62 63 You can omit either of the two directories, and Meson will substitute 64 the current directory and autodetect what you mean. This allows you to 65 do things like this: 66 67 ```console 68 cd <source root> 69 meson setup builddir 70 ``` 71 72 To compile, cd into your build directory and type `ninja`. To run unit 73 tests, type `ninja test`. 74 75 More on running Meson build system commands can be found at the 76 [running meson page](https://mesonbuild.com/Running-Meson.html) 77 or by typing `meson --help`. 78 79 #### Contributing 80 81 We love code contributions. See the [contribution 82 page](https://mesonbuild.com/Contributing.html) on the website for 83 details. 84 85 86 #### IRC 87 88 The channel to use is `#mesonbuild` either via Matrix ([web 89 interface][matrix_web]) or [OFTC IRC][oftc_irc]. 90 91 [matrix_web]: https://app.element.io/#/room/#mesonbuild:matrix.org 92 [oftc_irc]: https://www.oftc.net/ 93 94 #### Further info 95 96 More information about the Meson build system can be found at the 97 [project's home page](https://mesonbuild.com). 98 99 Meson is a registered trademark of ***Jussi Pakkanen***. 100 [end of README.md] [start of mesonbuild/wrap/wrap.py] 1 # Copyright 2015 The Meson development team 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 7 # http://www.apache.org/licenses/LICENSE-2.0 8 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 from __future__ import annotations 15 16 from .. 
import mlog 17 import contextlib 18 from dataclasses import dataclass 19 import urllib.request 20 import urllib.error 21 import urllib.parse 22 import os 23 import hashlib 24 import shutil 25 import tempfile 26 import stat 27 import subprocess 28 import sys 29 import configparser 30 import time 31 import typing as T 32 import textwrap 33 import json 34 35 from base64 import b64encode 36 from netrc import netrc 37 from pathlib import Path, PurePath 38 39 from . import WrapMode 40 from .. import coredata 41 from ..mesonlib import quiet_git, GIT, ProgressBar, MesonException, windows_proof_rmtree, Popen_safe 42 from ..interpreterbase import FeatureNew 43 from ..interpreterbase import SubProject 44 from .. import mesonlib 45 46 if T.TYPE_CHECKING: 47 import http.client 48 49 try: 50 # Importing is just done to check if SSL exists, so all warnings 51 # regarding 'imported but unused' can be safely ignored 52 import ssl # noqa 53 has_ssl = True 54 except ImportError: 55 has_ssl = False 56 57 REQ_TIMEOUT = 600.0 58 WHITELIST_SUBDOMAIN = 'wrapdb.mesonbuild.com' 59 60 ALL_TYPES = ['file', 'git', 'hg', 'svn'] 61 62 PATCH = shutil.which('patch') 63 64 def whitelist_wrapdb(urlstr: str) -> urllib.parse.ParseResult: 65 """ raises WrapException if not whitelisted subdomain """ 66 url = urllib.parse.urlparse(urlstr) 67 if not url.hostname: 68 raise WrapException(f'{urlstr} is not a valid URL') 69 if not url.hostname.endswith(WHITELIST_SUBDOMAIN): 70 raise WrapException(f'{urlstr} is not a whitelisted WrapDB URL') 71 if has_ssl and not url.scheme == 'https': 72 raise WrapException(f'WrapDB did not have expected SSL https url, instead got {urlstr}') 73 return url 74 75 def open_wrapdburl(urlstring: str, allow_insecure: bool = False, have_opt: bool = False) -> 'http.client.HTTPResponse': 76 if have_opt: 77 insecure_msg = '\n\n To allow connecting anyway, pass `--allow-insecure`.' 78 else: 79 insecure_msg = '' 80 81 url = whitelist_wrapdb(urlstring) 82 if has_ssl: 83 try: 84 return T.cast('http.client.HTTPResponse', urllib.request.urlopen(urllib.parse.urlunparse(url), timeout=REQ_TIMEOUT)) 85 except urllib.error.URLError as excp: 86 msg = f'WrapDB connection failed to {urlstring} with error {excp}.' 
87 if isinstance(excp.reason, ssl.SSLCertVerificationError): 88 if allow_insecure: 89 mlog.warning(f'{msg}\n\n Proceeding without authentication.') 90 else: 91 raise WrapException(f'{msg}{insecure_msg}') 92 else: 93 raise WrapException(msg) 94 elif not allow_insecure: 95 raise WrapException(f'SSL module not available in {sys.executable}: Cannot contact the WrapDB.{insecure_msg}') 96 else: 97 # following code is only for those without Python SSL 98 mlog.warning(f'SSL module not available in {sys.executable}: WrapDB traffic not authenticated.', once=True) 99 100 # If we got this far, allow_insecure was manually passed 101 nossl_url = url._replace(scheme='http') 102 try: 103 return T.cast('http.client.HTTPResponse', urllib.request.urlopen(urllib.parse.urlunparse(nossl_url), timeout=REQ_TIMEOUT)) 104 except urllib.error.URLError as excp: 105 raise WrapException(f'WrapDB connection failed to {urlstring} with error {excp}') 106 107 def get_releases_data(allow_insecure: bool) -> bytes: 108 url = open_wrapdburl('https://wrapdb.mesonbuild.com/v2/releases.json', allow_insecure, True) 109 return url.read() 110 111 def get_releases(allow_insecure: bool) -> T.Dict[str, T.Any]: 112 data = get_releases_data(allow_insecure) 113 return T.cast('T.Dict[str, T.Any]', json.loads(data.decode())) 114 115 def update_wrap_file(wrapfile: str, name: str, new_version: str, new_revision: str, allow_insecure: bool) -> None: 116 url = open_wrapdburl(f'https://wrapdb.mesonbuild.com/v2/{name}_{new_version}-{new_revision}/{name}.wrap', 117 allow_insecure, True) 118 with open(wrapfile, 'wb') as f: 119 f.write(url.read()) 120 121 def parse_patch_url(patch_url: str) -> T.Tuple[str, str]: 122 u = urllib.parse.urlparse(patch_url) 123 if u.netloc != 'wrapdb.mesonbuild.com': 124 raise WrapException(f'URL {patch_url} does not seems to be a wrapdb patch') 125 arr = u.path.strip('/').split('/') 126 if arr[0] == 'v1': 127 # e.g. https://wrapdb.mesonbuild.com/v1/projects/zlib/1.2.11/5/get_zip 128 return arr[-3], arr[-2] 129 elif arr[0] == 'v2': 130 # e.g. 
https://wrapdb.mesonbuild.com/v2/zlib_1.2.11-5/get_patch 131 tag = arr[-2] 132 _, version = tag.rsplit('_', 1) 133 version, revision = version.rsplit('-', 1) 134 return version, revision 135 else: 136 raise WrapException(f'Invalid wrapdb URL {patch_url}') 137 138 class WrapException(MesonException): 139 pass 140 141 class WrapNotFoundException(WrapException): 142 pass 143 144 class PackageDefinition: 145 def __init__(self, fname: str, subproject: str = ''): 146 self.filename = fname 147 self.subproject = SubProject(subproject) 148 self.type = None # type: T.Optional[str] 149 self.values = {} # type: T.Dict[str, str] 150 self.provided_deps = {} # type: T.Dict[str, T.Optional[str]] 151 self.provided_programs = [] # type: T.List[str] 152 self.diff_files = [] # type: T.List[Path] 153 self.basename = os.path.basename(fname) 154 self.has_wrap = self.basename.endswith('.wrap') 155 self.name = self.basename[:-5] if self.has_wrap else self.basename 156 # must be lowercase for consistency with dep=variable assignment 157 self.provided_deps[self.name.lower()] = None 158 # What the original file name was before redirection 159 self.original_filename = fname 160 self.redirected = False 161 if self.has_wrap: 162 self.parse_wrap() 163 with open(fname, 'r', encoding='utf-8') as file: 164 self.wrapfile_hash = hashlib.sha256(file.read().encode('utf-8')).hexdigest() 165 self.directory = self.values.get('directory', self.name) 166 if os.path.dirname(self.directory): 167 raise WrapException('Directory key must be a name and not a path') 168 if self.type and self.type not in ALL_TYPES: 169 raise WrapException(f'Unknown wrap type {self.type!r}') 170 self.filesdir = os.path.join(os.path.dirname(self.filename), 'packagefiles') 171 172 def parse_wrap(self) -> None: 173 try: 174 config = configparser.ConfigParser(interpolation=None) 175 config.read(self.filename, encoding='utf-8') 176 except configparser.Error as e: 177 raise WrapException(f'Failed to parse {self.basename}: {e!s}') 178 self.parse_wrap_section(config) 179 if self.type == 'redirect': 180 # [wrap-redirect] have a `filename` value pointing to the real wrap 181 # file we should parse instead. It must be relative to the current 182 # wrap file location and must be in the form foo/subprojects/bar.wrap. 
183 dirname = Path(self.filename).parent 184 fname = Path(self.values['filename']) 185 for i, p in enumerate(fname.parts): 186 if i % 2 == 0: 187 if p == '..': 188 raise WrapException('wrap-redirect filename cannot contain ".."') 189 else: 190 if p != 'subprojects': 191 raise WrapException('wrap-redirect filename must be in the form foo/subprojects/bar.wrap') 192 if fname.suffix != '.wrap': 193 raise WrapException('wrap-redirect filename must be a .wrap file') 194 fname = dirname / fname 195 if not fname.is_file(): 196 raise WrapException(f'wrap-redirect {fname} filename does not exist') 197 self.filename = str(fname) 198 self.parse_wrap() 199 self.redirected = True 200 else: 201 self.parse_provide_section(config) 202 if 'patch_directory' in self.values: 203 FeatureNew('Wrap files with patch_directory', '0.55.0').use(self.subproject) 204 for what in ['patch', 'source']: 205 if f'{what}_filename' in self.values and f'{what}_url' not in self.values: 206 FeatureNew(f'Local wrap patch files without {what}_url', '0.55.0').use(self.subproject) 207 208 def parse_wrap_section(self, config: configparser.ConfigParser) -> None: 209 if len(config.sections()) < 1: 210 raise WrapException(f'Missing sections in {self.basename}') 211 self.wrap_section = config.sections()[0] 212 if not self.wrap_section.startswith('wrap-'): 213 raise WrapException(f'{self.wrap_section!r} is not a valid first section in {self.basename}') 214 self.type = self.wrap_section[5:] 215 self.values = dict(config[self.wrap_section]) 216 if 'diff_files' in self.values: 217 FeatureNew('Wrap files with diff_files', '0.63.0').use(self.subproject) 218 for s in self.values['diff_files'].split(','): 219 path = Path(s.strip()) 220 if path.is_absolute(): 221 raise WrapException('diff_files paths cannot be absolute') 222 if '..' in path.parts: 223 raise WrapException('diff_files paths cannot contain ".."') 224 self.diff_files.append(path) 225 226 def parse_provide_section(self, config: configparser.ConfigParser) -> None: 227 if config.has_section('provides'): 228 raise WrapException('Unexpected "[provides]" section, did you mean "[provide]"?') 229 if config.has_section('provide'): 230 for k, v in config['provide'].items(): 231 if k == 'dependency_names': 232 # A comma separated list of dependency names that does not 233 # need a variable name; must be lowercase for consistency with 234 # dep=variable assignment 235 names_dict = {n.strip().lower(): None for n in v.split(',')} 236 self.provided_deps.update(names_dict) 237 continue 238 if k == 'program_names': 239 # A comma separated list of program names 240 names_list = [n.strip() for n in v.split(',')] 241 self.provided_programs += names_list 242 continue 243 if not v: 244 m = (f'Empty dependency variable name for {k!r} in {self.basename}. 
' 245 'If the subproject uses meson.override_dependency() ' 246 'it can be added in the "dependency_names" special key.') 247 raise WrapException(m) 248 self.provided_deps[k] = v 249 250 def get(self, key: str) -> str: 251 try: 252 return self.values[key] 253 except KeyError: 254 raise WrapException(f'Missing key {key!r} in {self.basename}') 255 256 def get_hashfile(self, subproject_directory: str) -> str: 257 return os.path.join(subproject_directory, '.meson-subproject-wrap-hash.txt') 258 259 def update_hash_cache(self, subproject_directory: str) -> None: 260 if self.has_wrap: 261 with open(self.get_hashfile(subproject_directory), 'w', encoding='utf-8') as file: 262 file.write(self.wrapfile_hash + '\n') 263 264 def get_directory(subdir_root: str, packagename: str) -> str: 265 fname = os.path.join(subdir_root, packagename + '.wrap') 266 if os.path.isfile(fname): 267 wrap = PackageDefinition(fname) 268 return wrap.directory 269 return packagename 270 271 def verbose_git(cmd: T.List[str], workingdir: str, check: bool = False) -> bool: 272 ''' 273 Wrapper to convert GitException to WrapException caught in interpreter. 274 ''' 275 try: 276 return mesonlib.verbose_git(cmd, workingdir, check=check) 277 except mesonlib.GitException as e: 278 raise WrapException(str(e)) 279 280 @dataclass(eq=False) 281 class Resolver: 282 source_dir: str 283 subdir: str 284 subproject: str = '' 285 wrap_mode: WrapMode = WrapMode.default 286 wrap_frontend: bool = False 287 allow_insecure: bool = False 288 silent: bool = False 289 290 def __post_init__(self) -> None: 291 self.subdir_root = os.path.join(self.source_dir, self.subdir) 292 self.cachedir = os.path.join(self.subdir_root, 'packagecache') 293 self.wraps = {} # type: T.Dict[str, PackageDefinition] 294 self.netrc: T.Optional[netrc] = None 295 self.provided_deps = {} # type: T.Dict[str, PackageDefinition] 296 self.provided_programs = {} # type: T.Dict[str, PackageDefinition] 297 self.wrapdb: T.Dict[str, T.Any] = {} 298 self.wrapdb_provided_deps: T.Dict[str, str] = {} 299 self.wrapdb_provided_programs: T.Dict[str, str] = {} 300 self.load_wraps() 301 self.load_netrc() 302 self.load_wrapdb() 303 304 def load_netrc(self) -> None: 305 try: 306 self.netrc = netrc() 307 except FileNotFoundError: 308 return 309 except Exception as e: 310 mlog.warning(f'failed to process netrc file: {e}.', fatal=False) 311 312 def load_wraps(self) -> None: 313 if not os.path.isdir(self.subdir_root): 314 return 315 root, dirs, files = next(os.walk(self.subdir_root)) 316 ignore_dirs = {'packagecache', 'packagefiles'} 317 for i in files: 318 if not i.endswith('.wrap'): 319 continue 320 fname = os.path.join(self.subdir_root, i) 321 wrap = PackageDefinition(fname, self.subproject) 322 self.wraps[wrap.name] = wrap 323 ignore_dirs |= {wrap.directory, wrap.name} 324 # Add dummy package definition for directories not associated with a wrap file. 
325 for i in dirs: 326 if i in ignore_dirs: 327 continue 328 fname = os.path.join(self.subdir_root, i) 329 wrap = PackageDefinition(fname, self.subproject) 330 self.wraps[wrap.name] = wrap 331 332 for wrap in self.wraps.values(): 333 self.add_wrap(wrap) 334 335 def add_wrap(self, wrap: PackageDefinition) -> None: 336 for k in wrap.provided_deps.keys(): 337 if k in self.provided_deps: 338 prev_wrap = self.provided_deps[k] 339 m = f'Multiple wrap files provide {k!r} dependency: {wrap.basename} and {prev_wrap.basename}' 340 raise WrapException(m) 341 self.provided_deps[k] = wrap 342 for k in wrap.provided_programs: 343 if k in self.provided_programs: 344 prev_wrap = self.provided_programs[k] 345 m = f'Multiple wrap files provide {k!r} program: {wrap.basename} and {prev_wrap.basename}' 346 raise WrapException(m) 347 self.provided_programs[k] = wrap 348 349 def load_wrapdb(self) -> None: 350 try: 351 with Path(self.subdir_root, 'wrapdb.json').open('r', encoding='utf-8') as f: 352 self.wrapdb = json.load(f) 353 except FileNotFoundError: 354 return 355 for name, info in self.wrapdb.items(): 356 self.wrapdb_provided_deps.update({i: name for i in info.get('dependency_names', [])}) 357 self.wrapdb_provided_programs.update({i: name for i in info.get('program_names', [])}) 358 359 def get_from_wrapdb(self, subp_name: str) -> PackageDefinition: 360 info = self.wrapdb.get(subp_name) 361 if not info: 362 return None 363 self.check_can_download() 364 latest_version = info['versions'][0] 365 version, revision = latest_version.rsplit('-', 1) 366 url = urllib.request.urlopen(f'https://wrapdb.mesonbuild.com/v2/{subp_name}_{version}-{revision}/{subp_name}.wrap') 367 fname = Path(self.subdir_root, f'{subp_name}.wrap') 368 with fname.open('wb') as f: 369 f.write(url.read()) 370 mlog.log(f'Installed {subp_name} version {version} revision {revision}') 371 wrap = PackageDefinition(str(fname)) 372 self.wraps[wrap.name] = wrap 373 self.add_wrap(wrap) 374 return wrap 375 376 def merge_wraps(self, other_resolver: 'Resolver') -> None: 377 for k, v in other_resolver.wraps.items(): 378 self.wraps.setdefault(k, v) 379 for k, v in other_resolver.provided_deps.items(): 380 self.provided_deps.setdefault(k, v) 381 for k, v in other_resolver.provided_programs.items(): 382 self.provided_programs.setdefault(k, v) 383 384 def find_dep_provider(self, packagename: str) -> T.Tuple[T.Optional[str], T.Optional[str]]: 385 # Python's ini parser converts all key values to lowercase. 386 # Thus the query name must also be in lower case. 
387 packagename = packagename.lower() 388 wrap = self.provided_deps.get(packagename) 389 if wrap: 390 dep_var = wrap.provided_deps.get(packagename) 391 return wrap.name, dep_var 392 wrap_name = self.wrapdb_provided_deps.get(packagename) 393 return wrap_name, None 394 395 def get_varname(self, subp_name: str, depname: str) -> T.Optional[str]: 396 wrap = self.wraps.get(subp_name) 397 return wrap.provided_deps.get(depname) if wrap else None 398 399 def find_program_provider(self, names: T.List[str]) -> T.Optional[str]: 400 for name in names: 401 wrap = self.provided_programs.get(name) 402 if wrap: 403 return wrap.name 404 wrap_name = self.wrapdb_provided_programs.get(name) 405 if wrap_name: 406 return wrap_name 407 return None 408 409 def resolve(self, packagename: str, method: str) -> str: 410 self.packagename = packagename 411 self.directory = packagename 412 self.wrap = self.wraps.get(packagename) 413 if not self.wrap: 414 self.wrap = self.get_from_wrapdb(packagename) 415 if not self.wrap: 416 m = f'Neither a subproject directory nor a {self.packagename}.wrap file was found.' 417 raise WrapNotFoundException(m) 418 self.directory = self.wrap.directory 419 420 if self.wrap.has_wrap: 421 # We have a .wrap file, use directory relative to the location of 422 # the wrap file if it exists, otherwise source code will be placed 423 # into main project's subproject_dir even if the wrap file comes 424 # from another subproject. 425 self.dirname = os.path.join(os.path.dirname(self.wrap.filename), self.wrap.directory) 426 if not os.path.exists(self.dirname): 427 self.dirname = os.path.join(self.subdir_root, self.directory) 428 # Check if the wrap comes from the main project. 429 main_fname = os.path.join(self.subdir_root, self.wrap.basename) 430 if self.wrap.filename != main_fname: 431 rel = os.path.relpath(self.wrap.filename, self.source_dir) 432 mlog.log('Using', mlog.bold(rel)) 433 # Write a dummy wrap file in main project that redirect to the 434 # wrap we picked. 435 with open(main_fname, 'w', encoding='utf-8') as f: 436 f.write(textwrap.dedent(f'''\ 437 [wrap-redirect] 438 filename = {PurePath(os.path.relpath(self.wrap.filename, self.subdir_root)).as_posix()} 439 ''')) 440 else: 441 # No wrap file, it's a dummy package definition for an existing 442 # directory. Use the source code in place. 443 self.dirname = self.wrap.filename 444 rel_path = os.path.relpath(self.dirname, self.source_dir) 445 446 if method == 'meson': 447 buildfile = os.path.join(self.dirname, 'meson.build') 448 elif method == 'cmake': 449 buildfile = os.path.join(self.dirname, 'CMakeLists.txt') 450 else: 451 raise WrapException('Only the methods "meson" and "cmake" are supported') 452 453 # The directory is there and has meson.build? Great, use it. 
454 if os.path.exists(buildfile): 455 self.validate() 456 return rel_path 457 458 # Check if the subproject is a git submodule 459 self.resolve_git_submodule() 460 461 if os.path.exists(self.dirname): 462 if not os.path.isdir(self.dirname): 463 raise WrapException('Path already exists but is not a directory') 464 else: 465 if self.wrap.type == 'file': 466 self.get_file() 467 else: 468 self.check_can_download() 469 if self.wrap.type == 'git': 470 self.get_git() 471 elif self.wrap.type == "hg": 472 self.get_hg() 473 elif self.wrap.type == "svn": 474 self.get_svn() 475 else: 476 raise WrapException(f'Unknown wrap type {self.wrap.type!r}') 477 try: 478 self.apply_patch() 479 self.apply_diff_files() 480 except Exception: 481 windows_proof_rmtree(self.dirname) 482 raise 483 484 # A meson.build or CMakeLists.txt file is required in the directory 485 if not os.path.exists(buildfile): 486 raise WrapException(f'Subproject exists but has no {os.path.basename(buildfile)} file') 487 488 # At this point, the subproject has been successfully resolved for the 489 # first time so save off the hash of the entire wrap file for future 490 # reference. 491 self.wrap.update_hash_cache(self.dirname) 492 493 return rel_path 494 495 def check_can_download(self) -> None: 496 # Don't download subproject data based on wrap file if requested. 497 # Git submodules are ok (see above)! 498 if self.wrap_mode is WrapMode.nodownload: 499 m = 'Automatic wrap-based subproject downloading is disabled' 500 raise WrapException(m) 501 502 def resolve_git_submodule(self) -> bool: 503 # Is git installed? If not, we're probably not in a git repository and 504 # definitely cannot try to conveniently set up a submodule. 505 if not GIT: 506 return False 507 # Does the directory exist? Even uninitialised submodules checkout an 508 # empty directory to work in 509 if not os.path.isdir(self.dirname): 510 return False 511 # Are we in a git repository? 512 ret, out = quiet_git(['rev-parse'], Path(self.dirname).parent) 513 if not ret: 514 return False 515 # Is `dirname` a submodule? 516 ret, out = quiet_git(['submodule', 'status', '.'], self.dirname) 517 if not ret: 518 return False 519 # Submodule has not been added, add it 520 if out.startswith('+'): 521 mlog.warning('git submodule might be out of date') 522 return True 523 elif out.startswith('U'): 524 raise WrapException('git submodule has merge conflicts') 525 # Submodule exists, but is deinitialized or wasn't initialized 526 elif out.startswith('-'): 527 if verbose_git(['submodule', 'update', '--init', '.'], self.dirname): 528 return True 529 raise WrapException('git submodule failed to init') 530 # Submodule looks fine, but maybe it wasn't populated properly. Do a checkout. 531 elif out.startswith(' '): 532 verbose_git(['submodule', 'update', '.'], self.dirname) 533 verbose_git(['checkout', '.'], self.dirname) 534 # Even if checkout failed, try building it anyway and let the user 535 # handle any problems manually. 536 return True 537 elif out == '': 538 # It is not a submodule, just a folder that exists in the main repository. 539 return False 540 raise WrapException(f'Unknown git submodule output: {out!r}') 541 542 def get_file(self) -> None: 543 path = self.get_file_internal('source') 544 extract_dir = self.subdir_root 545 # Some upstreams ship packages that do not have a leading directory. 546 # Create one for them. 
547 if 'lead_directory_missing' in self.wrap.values: 548 os.mkdir(self.dirname) 549 extract_dir = self.dirname 550 shutil.unpack_archive(path, extract_dir) 551 552 def get_git(self) -> None: 553 if not GIT: 554 raise WrapException(f'Git program not found, cannot download {self.packagename}.wrap via git.') 555 revno = self.wrap.get('revision') 556 checkout_cmd = ['-c', 'advice.detachedHead=false', 'checkout', revno, '--'] 557 is_shallow = False 558 depth_option = [] # type: T.List[str] 559 if self.wrap.values.get('depth', '') != '': 560 is_shallow = True 561 depth_option = ['--depth', self.wrap.values.get('depth')] 562 # for some reason git only allows commit ids to be shallowly fetched by fetch not with clone 563 if is_shallow and self.is_git_full_commit_id(revno): 564 # git doesn't support directly cloning shallowly for commits, 565 # so we follow https://stackoverflow.com/a/43136160 566 verbose_git(['-c', 'init.defaultBranch=meson-dummy-branch', 'init', self.directory], self.subdir_root, check=True) 567 verbose_git(['remote', 'add', 'origin', self.wrap.get('url')], self.dirname, check=True) 568 revno = self.wrap.get('revision') 569 verbose_git(['fetch', *depth_option, 'origin', revno], self.dirname, check=True) 570 verbose_git(checkout_cmd, self.dirname, check=True) 571 if self.wrap.values.get('clone-recursive', '').lower() == 'true': 572 verbose_git(['submodule', 'update', '--init', '--checkout', 573 '--recursive', *depth_option], self.dirname, check=True) 574 push_url = self.wrap.values.get('push-url') 575 if push_url: 576 verbose_git(['remote', 'set-url', '--push', 'origin', push_url], self.dirname, check=True) 577 else: 578 if not is_shallow: 579 verbose_git(['clone', self.wrap.get('url'), self.directory], self.subdir_root, check=True) 580 if revno.lower() != 'head': 581 if not verbose_git(checkout_cmd, self.dirname): 582 verbose_git(['fetch', self.wrap.get('url'), revno], self.dirname, check=True) 583 verbose_git(checkout_cmd, self.dirname, check=True) 584 else: 585 args = ['-c', 'advice.detachedHead=false', 'clone', *depth_option] 586 if revno.lower() != 'head': 587 args += ['--branch', revno] 588 args += [self.wrap.get('url'), self.directory] 589 verbose_git(args, self.subdir_root, check=True) 590 if self.wrap.values.get('clone-recursive', '').lower() == 'true': 591 verbose_git(['submodule', 'update', '--init', '--checkout', '--recursive', *depth_option], 592 self.dirname, check=True) 593 push_url = self.wrap.values.get('push-url') 594 if push_url: 595 verbose_git(['remote', 'set-url', '--push', 'origin', push_url], self.dirname, check=True) 596 597 def validate(self) -> None: 598 # This check is only for subprojects with wraps. 599 if not self.wrap.has_wrap: 600 return 601 602 # Retrieve original hash, if it exists. 603 hashfile = self.wrap.get_hashfile(self.dirname) 604 if os.path.isfile(hashfile): 605 with open(hashfile, 'r', encoding='utf-8') as file: 606 expected_hash = file.read().strip() 607 else: 608 # If stored hash doesn't exist then don't warn. 609 return 610 611 actual_hash = self.wrap.wrapfile_hash 612 613 # Compare hashes and warn the user if they don't match. 
614 if expected_hash != actual_hash: 615 mlog.warning(f'Subproject {self.wrap.name}\'s revision may be out of date; its wrap file has changed since it was first configured') 616 617 def is_git_full_commit_id(self, revno: str) -> bool: 618 result = False 619 if len(revno) in {40, 64}: # 40 for sha1, 64 for upcoming sha256 620 result = all(ch in '0123456789AaBbCcDdEeFf' for ch in revno) 621 return result 622 623 def get_hg(self) -> None: 624 revno = self.wrap.get('revision') 625 hg = shutil.which('hg') 626 if not hg: 627 raise WrapException('Mercurial program not found.') 628 subprocess.check_call([hg, 'clone', self.wrap.get('url'), 629 self.directory], cwd=self.subdir_root) 630 if revno.lower() != 'tip': 631 subprocess.check_call([hg, 'checkout', revno], 632 cwd=self.dirname) 633 634 def get_svn(self) -> None: 635 revno = self.wrap.get('revision') 636 svn = shutil.which('svn') 637 if not svn: 638 raise WrapException('SVN program not found.') 639 subprocess.check_call([svn, 'checkout', '-r', revno, self.wrap.get('url'), 640 self.directory], cwd=self.subdir_root) 641 642 def get_netrc_credentials(self, netloc: str) -> T.Optional[T.Tuple[str, str]]: 643 if self.netrc is None or netloc not in self.netrc.hosts: 644 return None 645 646 login, account, password = self.netrc.authenticators(netloc) 647 if account is not None: 648 login = account 649 650 return login, password 651 652 def get_data(self, urlstring: str) -> T.Tuple[str, str]: 653 blocksize = 10 * 1024 654 h = hashlib.sha256() 655 tmpfile = tempfile.NamedTemporaryFile(mode='wb', dir=self.cachedir, delete=False) 656 url = urllib.parse.urlparse(urlstring) 657 if url.hostname and url.hostname.endswith(WHITELIST_SUBDOMAIN): 658 resp = open_wrapdburl(urlstring, allow_insecure=self.allow_insecure, have_opt=self.wrap_frontend) 659 elif WHITELIST_SUBDOMAIN in urlstring: 660 raise WrapException(f'{urlstring} may be a WrapDB-impersonating URL') 661 else: 662 headers = {'User-Agent': f'mesonbuild/{coredata.version}'} 663 creds = self.get_netrc_credentials(url.netloc) 664 665 if creds is not None and '@' not in url.netloc: 666 login, password = creds 667 if url.scheme == 'https': 668 enc_creds = b64encode(f'{login}:{password}'.encode()).decode() 669 headers.update({'Authorization': f'Basic {enc_creds}'}) 670 elif url.scheme == 'ftp': 671 urlstring = urllib.parse.urlunparse(url._replace(netloc=f'{login}:{password}@{url.netloc}')) 672 else: 673 mlog.warning('Meson is not going to use netrc credentials for protocols other than https/ftp', 674 fatal=False) 675 676 try: 677 req = urllib.request.Request(urlstring, headers=headers) 678 resp = urllib.request.urlopen(req, timeout=REQ_TIMEOUT) 679 except urllib.error.URLError as e: 680 mlog.log(str(e)) 681 raise WrapException(f'could not get {urlstring} is the internet available?') 682 with contextlib.closing(resp) as resp, tmpfile as tmpfile: 683 try: 684 dlsize = int(resp.info()['Content-Length']) 685 except TypeError: 686 dlsize = None 687 if dlsize is None: 688 print('Downloading file of unknown size.') 689 while True: 690 block = resp.read(blocksize) 691 if block == b'': 692 break 693 h.update(block) 694 tmpfile.write(block) 695 hashvalue = h.hexdigest() 696 return hashvalue, tmpfile.name 697 sys.stdout.flush() 698 progress_bar = ProgressBar(bar_type='download', total=dlsize, 699 desc='Downloading', 700 disable=(self.silent or None)) 701 while True: 702 block = resp.read(blocksize) 703 if block == b'': 704 break 705 h.update(block) 706 tmpfile.write(block) 707 progress_bar.update(len(block)) 708 
progress_bar.close() 709 hashvalue = h.hexdigest() 710 return hashvalue, tmpfile.name 711 712 def check_hash(self, what: str, path: str, hash_required: bool = True) -> None: 713 if what + '_hash' not in self.wrap.values and not hash_required: 714 return 715 expected = self.wrap.get(what + '_hash').lower() 716 h = hashlib.sha256() 717 with open(path, 'rb') as f: 718 h.update(f.read()) 719 dhash = h.hexdigest() 720 if dhash != expected: 721 raise WrapException(f'Incorrect hash for {what}:\n {expected} expected\n {dhash} actual.') 722 723 def get_data_with_backoff(self, urlstring: str) -> T.Tuple[str, str]: 724 delays = [1, 2, 4, 8, 16] 725 for d in delays: 726 try: 727 return self.get_data(urlstring) 728 except Exception as e: 729 mlog.warning(f'failed to download with error: {e}. Trying after a delay...', fatal=False) 730 time.sleep(d) 731 return self.get_data(urlstring) 732 733 def download(self, what: str, ofname: str, fallback: bool = False) -> None: 734 self.check_can_download() 735 srcurl = self.wrap.get(what + ('_fallback_url' if fallback else '_url')) 736 mlog.log('Downloading', mlog.bold(self.packagename), what, 'from', mlog.bold(srcurl)) 737 try: 738 dhash, tmpfile = self.get_data_with_backoff(srcurl) 739 expected = self.wrap.get(what + '_hash').lower() 740 if dhash != expected: 741 os.remove(tmpfile) 742 raise WrapException(f'Incorrect hash for {what}:\n {expected} expected\n {dhash} actual.') 743 except WrapException: 744 if not fallback: 745 if what + '_fallback_url' in self.wrap.values: 746 return self.download(what, ofname, fallback=True) 747 mlog.log('A fallback URL could be specified using', 748 mlog.bold(what + '_fallback_url'), 'key in the wrap file') 749 raise 750 os.rename(tmpfile, ofname) 751 752 def get_file_internal(self, what: str) -> str: 753 filename = self.wrap.get(what + '_filename') 754 if what + '_url' in self.wrap.values: 755 cache_path = os.path.join(self.cachedir, filename) 756 757 if os.path.exists(cache_path): 758 self.check_hash(what, cache_path) 759 mlog.log('Using', mlog.bold(self.packagename), what, 'from cache.') 760 return cache_path 761 762 os.makedirs(self.cachedir, exist_ok=True) 763 self.download(what, cache_path) 764 return cache_path 765 else: 766 path = Path(self.wrap.filesdir) / filename 767 768 if not path.exists(): 769 raise WrapException(f'File "{path}" does not exist') 770 self.check_hash(what, path.as_posix(), hash_required=False) 771 772 return path.as_posix() 773 774 def apply_patch(self) -> None: 775 if 'patch_filename' in self.wrap.values and 'patch_directory' in self.wrap.values: 776 m = f'Wrap file {self.wrap.basename!r} must not have both "patch_filename" and "patch_directory"' 777 raise WrapException(m) 778 if 'patch_filename' in self.wrap.values: 779 path = self.get_file_internal('patch') 780 try: 781 shutil.unpack_archive(path, self.subdir_root) 782 except Exception: 783 with tempfile.TemporaryDirectory() as workdir: 784 shutil.unpack_archive(path, workdir) 785 self.copy_tree(workdir, self.subdir_root) 786 elif 'patch_directory' in self.wrap.values: 787 patch_dir = self.wrap.values['patch_directory'] 788 src_dir = os.path.join(self.wrap.filesdir, patch_dir) 789 if not os.path.isdir(src_dir): 790 raise WrapException(f'patch directory does not exist: {patch_dir}') 791 self.copy_tree(src_dir, self.dirname) 792 793 def apply_diff_files(self) -> None: 794 for filename in self.wrap.diff_files: 795 mlog.log(f'Applying diff file "{filename}"') 796 path = Path(self.wrap.filesdir) / filename 797 if not path.exists(): 798 raise 
WrapException(f'Diff file "{path}" does not exist') 799 relpath = os.path.relpath(str(path), self.dirname) 800 if PATCH: 801 # Always pass a POSIX path to patch, because on Windows it's MSYS 802 cmd = [PATCH, '-f', '-p1', '-i', str(Path(relpath).as_posix())] 803 elif GIT: 804 # If the `patch` command is not available, fall back to `git 805 # apply`. The `--work-tree` is necessary in case we're inside a 806 # Git repository: by default, Git will try to apply the patch to 807 # the repository root. 808 cmd = [GIT, '--work-tree', '.', 'apply', '-p1', relpath] 809 else: 810 raise WrapException('Missing "patch" or "git" commands to apply diff files') 811 812 p, out, _ = Popen_safe(cmd, cwd=self.dirname, stderr=subprocess.STDOUT) 813 if p.returncode != 0: 814 mlog.log(out.strip()) 815 raise WrapException(f'Failed to apply diff file "{filename}"') 816 817 def copy_tree(self, root_src_dir: str, root_dst_dir: str) -> None: 818 """ 819 Copy directory tree. Overwrites also read only files. 820 """ 821 for src_dir, _, files in os.walk(root_src_dir): 822 dst_dir = src_dir.replace(root_src_dir, root_dst_dir, 1) 823 if not os.path.exists(dst_dir): 824 os.makedirs(dst_dir) 825 for file_ in files: 826 src_file = os.path.join(src_dir, file_) 827 dst_file = os.path.join(dst_dir, file_) 828 if os.path.exists(dst_file): 829 try: 830 os.remove(dst_file) 831 except PermissionError: 832 os.chmod(dst_file, stat.S_IWUSR) 833 os.remove(dst_file) 834 shutil.copy2(src_file, dst_dir) 835 [end of mesonbuild/wrap/wrap.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
mesonbuild/meson
ecbba0c45b05b53563b23b84191a0acccdfcc291
meson subprojects command does not consider subproject_dir option **Describe the bug** When a project configures an alternative directory for subprojects `project( ... , subproject_dir: 'lib', ...)`, `meson subprojects ...` fails with `Directory . does not seem to have subprojects` Looking at the source, looks like the subproject dir is hard-coded in: https://github.com/mesonbuild/meson/blob/8369dbbfecafa87629f0624e6dc7c9cd235043a4/mesonbuild/msubprojects.py#L692-L696 This looks like a fairly easy fix (keyword looks, I'm not familiar with the code), I should be able to tackle it if the maintainers wish/need, let me know I'll note that I realize `subproject_dir` is supplied as a compatibility option, and ideally the projects build-system would be "fixed" to use the recommended directory, but that's not always ideal, and I think this is undesired behavior and warrants a fix **To Reproduce** A very simple project with a [any] subproject, and alternate `subproject_dir` should be enough ``` project/ ├── meson.build └── lib/ └── libusb.wrap ``` `meson.build` ``` project('project', subproject_dir: 'lib') libusb = subproject('libusb') ``` `libusb.wrap` ``` [wrap-git] url = https://github.com/dragonCodecs/libusb revision = blackmagic/meson clone-recursive = false ``` Running the following commands ```bash cd project/ meson subprojects download ``` **Expected behavior** `meson subprojects ...` should either parse the `subproject_dir` from the source meson.build, or more simply, support an argument for an alternative directory like `--subproject-dir='lib'`, and behave exactly like it does for the default `subprojects` i.e. when I rename `lib` to `subprojects` ``` bash $ meson subprojects download Cloning into 'libusb'... remote: Enumerating objects: 16939, done. remote: Counting objects: 100% (3107/3107), done. remote: Compressing objects: 100% (434/434), done. remote: Total 16939 (delta 2714), reused 2707 (delta 2670), pack-reused 13832 Receiving objects: 100% (16939/16939), 5.11 MiB | 1.28 MiB/s, done. Resolving deltas: 100% (12218/12218), done. branch 'blackmagic/meson' set up to track 'origin/blackmagic/meson'. Switched to a new branch 'blackmagic/meson' Download libusb... -> done ``` **system parameters** * meson version `1.1.1` I don't think anything matters here other than the meson version but for the sake of completeness * operating system `Arch Linux` * python version `Python 3.11.3` * ninja version `1.11.1`
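Editor's sketch, not from the original report or the merged fix: of the two options suggested above, the simpler one (an explicit flag on `meson subprojects`) would look roughly like the snippet below. The flag name and default are assumptions, and the fix that was eventually merged (see the patch further down) instead parses the `project()` call to discover the directory.

```python
import argparse

# Hypothetical illustration of the --subproject-dir flag idea (not the merged fix).
parser = argparse.ArgumentParser(prog='meson subprojects download')
parser.add_argument(
    '--subproject-dir',
    default='subprojects',
    help='Directory holding the .wrap files, matching the subproject_dir: '
         'kwarg of project() (default: subprojects)',
)
args = parser.parse_args(['--subproject-dir', 'lib'])
print(args.subproject_dir)  # -> 'lib'
```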
> This looks like a fairly easy fix

Quote of the year ahah :D

More seriously, the difficulty is that `meson subprojects` works without a builddir, which means it does not have the `subproject_dir` value without parsing the `meson.build` file, or at least the first `project()` function call. Nothing impossible of course, PR welcome :)

Yep, I realize that, which is why I mentioned an argument/flag: it keeps things simple and is probably more than enough for the purpose. That is also probably the extent of what I can contribute; someone more familiar with the code might see a path to "properly" fixing this by parsing the `meson.build` :)

`meson configure` parses the project() call. I would look through the code in that path

I think `meson configure` does way too much, it will parse and interpret the whole project. Some pointers if you want to work on this:

- See `InterpreterBase.load_root_meson_file()` to parse the root `meson.build` file into an AST.
- See `Interpreter.handle_meson_version_from_ast()` to extract a kwarg from the top `project()` function.

At the core, this is just running an "introspection" interpreter, which is like running a real `meson setup` except that all functions are stubbed out to do no real work, and depending on the function it will generate some lightweight metadata.

As noted:

> I think `meson configure` does way too much, it will parse and interpret the whole project.

For `configure` / `introspect` / `rewrite`, we do need that to an extent as we need to check subprojects too, or generally print/manipulate a full AST. For `meson subprojects download`, definitely not. :D

The implementation for the AST interpreter is simple enough:

```python
def analyze(self) -> None:
    self.load_root_meson_file()
    self.sanity_check_ast()
    self.parse_project()
    self.run()
```

Parsing the project should be enough here. We do not need the kind of hacks we do in `handle_meson_version_from_ast`, which exists solely to ensure that the `project()` function itself can generate version-based warnings and must therefore know the meson_version before running project(), a chicken-and-egg problem.
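Editor's sketch of the approach described above, which is also what the merged patch below ends up doing: read `subproject_dir` from the root `meson.build` without needing a build directory. It relies on Meson's internal `IntrospectionInterpreter` with the same calls the patch uses; the helper name here is illustrative.

```python
from mesonbuild.ast import IntrospectionInterpreter, AstIDGenerator


def detect_subproject_dir(source_dir: str) -> str:
    # Run the stubbed-out "introspection" interpreter just far enough to
    # evaluate the project() call, which is where subproject_dir is declared.
    intr = IntrospectionInterpreter(source_dir, '', 'none', visitors=[AstIDGenerator()])
    intr.load_root_meson_file()   # parse the root meson.build into an AST
    intr.sanity_check_ast()       # verify the file starts with project()
    intr.parse_project()          # evaluate project(); sets intr.subproject_dir
    return intr.subproject_dir    # 'subprojects' unless overridden by the kwarg
```

For the project in this issue, calling `detect_subproject_dir('project/')` would return `'lib'`, which is then used in place of the hard-coded `'subprojects'`.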
2023-07-07T03:21:55Z
<patch> diff --git a/mesonbuild/msubprojects.py b/mesonbuild/msubprojects.py --- a/mesonbuild/msubprojects.py +++ b/mesonbuild/msubprojects.py @@ -14,6 +14,7 @@ import zipfile from . import mlog +from .ast import IntrospectionInterpreter, AstIDGenerator from .mesonlib import quiet_git, GitException, Popen_safe, MesonException, windows_proof_rmtree from .wrap.wrap import (Resolver, WrapException, ALL_TYPES, PackageDefinition, parse_patch_url, update_wrap_file, get_releases) @@ -685,15 +686,20 @@ def add_arguments(parser: argparse.ArgumentParser) -> None: p.set_defaults(subprojects_func=Runner.packagefiles) def run(options: 'Arguments') -> int: - src_dir = os.path.relpath(os.path.realpath(options.sourcedir)) - if not os.path.isfile(os.path.join(src_dir, 'meson.build')): - mlog.error('Directory', mlog.bold(src_dir), 'does not seem to be a Meson source directory.') + source_dir = os.path.relpath(os.path.realpath(options.sourcedir)) + if not os.path.isfile(os.path.join(source_dir, 'meson.build')): + mlog.error('Directory', mlog.bold(source_dir), 'does not seem to be a Meson source directory.') return 1 - subprojects_dir = os.path.join(src_dir, 'subprojects') - if not os.path.isdir(subprojects_dir): - mlog.log('Directory', mlog.bold(src_dir), 'does not seem to have subprojects.') + with mlog.no_logging(): + intr = IntrospectionInterpreter(source_dir, '', 'none', visitors = [AstIDGenerator()]) + intr.load_root_meson_file() + intr.sanity_check_ast() + intr.parse_project() + subproject_dir = intr.subproject_dir + if not os.path.isdir(os.path.join(source_dir, subproject_dir)): + mlog.log('Directory', mlog.bold(source_dir), 'does not seem to have subprojects.') return 0 - r = Resolver(src_dir, 'subprojects', wrap_frontend=True, allow_insecure=options.allow_insecure, silent=True) + r = Resolver(source_dir, subproject_dir, wrap_frontend=True, allow_insecure=options.allow_insecure, silent=True) if options.subprojects: wraps = [wrap for name, wrap in r.wraps.items() if name in options.subprojects] else: @@ -714,7 +720,7 @@ def run(options: 'Arguments') -> int: pre_func(options) logger = Logger(len(wraps)) for wrap in wraps: - dirname = Path(subprojects_dir, wrap.directory).as_posix() + dirname = Path(subproject_dir, wrap.directory).as_posix() runner = Runner(logger, r, wrap, dirname, options) task = loop.run_in_executor(executor, runner.run) tasks.append(task) </patch>
[]
[]
dagster-io__dagster-6986
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> computing config for a partitioned asset job fails when it includes an asset downstream of a non-partitioned asset This test fails: ``` def test_access_partition_keys_from_context_only_one_asset_partitioned(): upstream_partitions_def = StaticPartitionsDefinition(["a", "b", "c"]) class MyIOManager(IOManager): def handle_output(self, context, obj): if context.op_def.name == "upstream_asset": assert context.asset_partition_key == "b" elif context.op_def.name == "downstream_asset": assert not context.has_asset_partitions with pytest.raises(Exception): # TODO: better error message assert context.asset_partition_key_range else: assert False def load_input(self, context): assert not context.has_asset_partitions @asset(partitions_def=upstream_partitions_def) def upstream_asset(context): assert context.output_asset_partition_key() == "b" @asset def downstream_asset(upstream_asset): assert upstream_asset is None @asset def double_downstream_asset(downstream_asset): assert upstream_asset is None my_job = build_assets_job( "my_job", assets=[upstream_asset, downstream_asset, double_downstream_asset], resource_defs={"io_manager": IOManagerDefinition.hardcoded_io_manager(MyIOManager())}, ) result = my_job.execute_in_process(partition_key="b") assert result.asset_materializations_for_node("upstream_asset") == [ AssetMaterialization(asset_key=AssetKey(["upstream_asset"]), partition="b") ] ``` with ``` partition_key = 'b' def run_config_for_partition_fn(partition_key: str) -> Dict[str, Any]: ops_config: Dict[str, Any] = {} asset_partitions_by_asset_key = asset_partitions_for_job_partition(partition_key) for assets_def in assets: outputs_dict: Dict[str, Dict[str, Any]] = {} if assets_def.partitions_def is not None: for asset_key, output_def in assets_def.output_defs_by_asset_key.items(): asset_partition_key_range = asset_partitions_by_asset_key[asset_key] outputs_dict[output_def.name] = { "start": asset_partition_key_range.start, "end": asset_partition_key_range.end, } inputs_dict: Dict[str, Dict[str, Any]] = {} for in_asset_key, input_def in assets_def.input_defs_by_asset_key.items(): > upstream_partitions_def = partitions_defs_by_asset_key[in_asset_key] E KeyError: AssetKey(['downstream_asset']) python_modules/dagster/dagster/core/asset_defs/assets_job.py:161: KeyError ``` </issue> <code> [start of README.md] 1 <p align="center"> 2 <img src="assets/dagster-logo.png" /> 3 <br /><br /> 4 <a href="https://badge.fury.io/py/dagster"><img src="https://badge.fury.io/py/dagster.svg"></> 5 <a href="https://coveralls.io/github/dagster-io/dagster?branch=master"><img src="https://coveralls.io/repos/github/dagster-io/dagster/badge.svg?branch=master"></a> 6 <a href="https://buildkite.com/dagster/dagster"><img src="https://badge.buildkite.com/888545beab829e41e5d7303db15525a2bc3b0f0e33a72759ac.svg?branch=master"></a> 7 <a href="https://dagster-slackin.herokuapp.com/"><img src="https://dagster-slackin.herokuapp.com/badge.svg"></a> 8 </p> 9 10 # Dagster 11 12 An orchestration platform for the development, production, and observation of data assets. 13 14 Dagster lets you define jobs in terms of the data flow between reusable, logical components, then test locally and run anywhere. With a unified view of jobs and the assets they produce, Dagster can schedule and orchestrate Pandas, Spark, SQL, or anything else that Python can invoke. 
15 16 Dagster is designed for data platform engineers, data engineers, and full-stack data scientists. Building a data platform with Dagster makes your stakeholders more independent and your systems more robust. Developing data pipelines with Dagster makes testing easier and deploying faster. 17 18 ### Develop and test locally, then deploy anywhere 19 20 With Dagster’s pluggable execution, the same computations can run in-process against your local file system, or on a distributed work queue against your production data lake. You can set up Dagster’s web interface in a minute on your laptop, deploy it on-premise, or in any cloud. 21 22 ### Model and type the data produced and consumed by each step 23 24 Dagster models data dependencies between steps in your orchestration graph and handles passing data between them. Optional typing on inputs and outputs helps catch bugs early. 25 26 ### Link data to computations 27 28 Dagster’s Asset Manager tracks the data sets and ML models produced by your jobs, so you can understand how they were generated and trace issues when they don’t look how you expect. 29 30 ### Build a self-service data platform 31 32 Dagster helps platform teams build systems for data practitioners. Jobs are built from shared, reusable, configurable data processing and infrastructure components. Dagit, Dagster’s web interface, lets anyone inspect these objects and discover how to use them. 33 34 ### Avoid dependency nightmares 35 36 Dagster’s repository model lets you isolate codebases so that problems in one job don’t bring down the rest. Each job can have its own package dependencies and Python version. Jobs are run in isolated processes so user code issues can't bring the system down. 37 38 ### Debug pipelines from a rich UI 39 40 Dagit, Dagster’s web interface, includes expansive facilities for understanding the jobs it orchestrates. When inspecting a run of your job, you can query over logs, discover the most time consuming tasks via a Gantt chart, re-execute subsets of steps, and more. 41 42 ## Getting Started 43 44 ### Installation 45 46 Dagster is available on PyPI, and officially supports Python 3.6+. 47 48 ```bash 49 $ pip install dagster dagit 50 ``` 51 52 This installs two modules: 53 54 - **Dagster**: the core programming model and abstraction stack; stateless, single-node, 55 single-process and multi-process execution engines; and a CLI tool for driving those engines. 56 - **Dagit**: the UI for developing and operating Dagster pipelines, including a DAG browser, a 57 type-aware config editor, and a live execution interface. 58 59 ### Learn 60 61 Next, jump right into our [tutorial](https://docs.dagster.io/tutorial/), or read our [complete 62 documentation](https://docs.dagster.io). If you're actively using Dagster or have questions on 63 getting started, we'd love to hear from you: 64 65 <br /> 66 <p align="center"> 67 <a href="https://join.slack.com/t/dagster/shared_invite/enQtNjEyNjkzNTA2OTkzLTI0MzdlNjU0ODVhZjQyOTMyMGM1ZDUwZDQ1YjJmYjI3YzExZGViMDI1ZDlkNTY5OThmYWVlOWM1MWVjN2I3NjU"><img src="https://user-images.githubusercontent.com/609349/63558739-f60a7e00-c502-11e9-8434-c8a95b03ce62.png" width=160px; /></a> 68 </p> 69 70 ## Contributing 71 72 For details on contributing or running the project for development, check out our [contributing 73 guide](https://docs.dagster.io/community/contributing/). 
<br /> 74 75 ## Integrations 76 77 Dagster works with the tools and systems that you're already using with your data, including: 78 79 <table> 80 <thead> 81 <tr style="background-color: #ddd" align="center"> 82 <td colspan=2><b>Integration</b></td> 83 <td><b>Dagster Library</b></td> 84 </tr> 85 </thead> 86 <tbody> 87 <tr> 88 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/57987547-a7e36b80-7a37-11e9-95ae-4c4de2618e87.png"></td> 89 <td style="border-left: 0px"> <b>Apache Airflow</b></td> 90 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagster-airflow" />dagster-airflow</a><br />Allows Dagster pipelines to be scheduled and executed, either containerized or uncontainerized, as <a href="https://github.com/apache/airflow">Apache Airflow DAGs</a>.</td> 91 </tr> 92 <tr> 93 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/57987976-5ccc5700-7a3d-11e9-9fa5-1a51299b1ccb.png"></td> 94 <td style="border-left: 0px"> <b>Apache Spark</b></td> 95 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagster-spark" />dagster-spark</a> &middot; <a href="https://docs.dagster.io/_apidocs/libraries/dagster-pyspark" />dagster-pyspark</a> 96 <br />Libraries for interacting with Apache Spark and PySpark. 97 </td> 98 </tr> 99 <tr> 100 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/58348728-48f66b80-7e16-11e9-9e9f-1a0fea9a49b4.png"></td> 101 <td style="border-left: 0px"> <b>Dask</b></td> 102 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagster-dask" />dagster-dask</a> 103 <br />Provides a Dagster integration with Dask / Dask.Distributed. 104 </td> 105 </tr> 106 <tr> 107 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/58349731-f36f8e00-7e18-11e9-8a2e-86e086caab66.png"></td> 108 <td style="border-left: 0px"> <b>Datadog</b></td> 109 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagster-datadog" />dagster-datadog</a> 110 <br />Provides a Dagster resource for publishing metrics to Datadog. 111 </td> 112 </tr> 113 <tr> 114 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/57987809-bf245800-7a3b-11e9-8905-494ed99d0852.png" /> 115 &nbsp;/&nbsp; <img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/57987827-fa268b80-7a3b-11e9-8a18-b675d76c19aa.png"> 116 </td> 117 <td style="border-left: 0px"> <b>Jupyter / Papermill</b></td> 118 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagstermill" />dagstermill</a><br />Built on the <a href="https://github.com/nteract/papermill">papermill library</a>, dagstermill is meant for integrating productionized Jupyter notebooks into dagster pipelines.</td> 119 </tr> 120 <tr> 121 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/57988016-f431aa00-7a3d-11e9-8cb6-1309d4246b27.png"></td> 122 <td style="border-left: 0px"> <b>PagerDuty</b></td> 123 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagster-pagerduty" />dagster-pagerduty</a> 124 <br />A library for creating PagerDuty alerts from Dagster workflows. 
125 </td> 126 </tr> 127 <tr> 128 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/58349397-fcac2b00-7e17-11e9-900c-9ab8cf7cb64a.png"></td> 129 <td style="border-left: 0px"> <b>Snowflake</b></td> 130 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagster-snowflake" />dagster-snowflake</a> 131 <br />A library for interacting with the Snowflake Data Warehouse. 132 </td> 133 </tr> 134 <tr style="background-color: #ddd"> 135 <td colspan=2 align="center"><b>Cloud Providers</b></td> 136 <td><b></b></td> 137 </tr> 138 <tr> 139 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/57987557-c2b5e000-7a37-11e9-9310-c274481a4682.png"> </td> 140 <td style="border-left: 0px"><b>AWS</b></td> 141 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagster-aws" />dagster-aws</a> 142 <br />A library for interacting with Amazon Web Services. Provides integrations with Cloudwatch, S3, EMR, and Redshift. 143 </td> 144 </tr> 145 <tr> 146 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/84176312-0bbb4680-aa36-11ea-9580-a70758b12161.png"> </td> 147 <td style="border-left: 0px"><b>Azure</b></td> 148 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagster-azure" />dagster-azure</a> 149 <br />A library for interacting with Microsoft Azure. 150 </td> 151 </tr> 152 <tr> 153 <td align="center" style="border-right: 0px"><img style="vertical-align:middle" src="https://user-images.githubusercontent.com/609349/57987566-f98bf600-7a37-11e9-81fa-b8ca1ea6cc1e.png"> </td> 154 <td style="border-left: 0px"><b>GCP</b></td> 155 <td><a href="https://docs.dagster.io/_apidocs/libraries/dagster-gcp" />dagster-gcp</a> 156 <br />A library for interacting with Google Cloud Platform. Provides integrations with GCS, BigQuery, and Cloud Dataproc. 157 </td> 158 </tr> 159 </tbody> 160 </table> 161 162 This list is growing as we are actively building more integrations, and we welcome contributions! 
163 [end of README.md] [start of examples/docs_snippets/docs_snippets/concepts/assets/asset_group.py] 1 # pylint: disable=redefined-outer-name 2 # start_marker 3 from dagster import AssetGroup, asset 4 5 6 @asset 7 def upstream_asset(): 8 return [1, 2, 3] 9 10 11 @asset 12 def downstream_asset(upstream_asset): 13 return upstream_asset + [4] 14 15 16 asset_group = AssetGroup([upstream_asset, downstream_asset]) 17 # end_marker 18 [end of examples/docs_snippets/docs_snippets/concepts/assets/asset_group.py] [start of examples/docs_snippets/docs_snippets/concepts/assets/asset_io_manager.py] 1 # pylint: disable=redefined-outer-name 2 # start_marker 3 from dagster_aws.s3 import s3_pickle_asset_io_manager, s3_resource 4 5 from dagster import AssetGroup, asset 6 7 8 @asset 9 def upstream_asset(): 10 return [1, 2, 3] 11 12 13 @asset 14 def downstream_asset(upstream_asset): 15 return upstream_asset + [4] 16 17 18 asset_group = AssetGroup( 19 [upstream_asset, downstream_asset], 20 resource_defs={"io_manager": s3_pickle_asset_io_manager, "s3": s3_resource}, 21 ) 22 23 # end_marker 24 [end of examples/docs_snippets/docs_snippets/concepts/assets/asset_io_manager.py] [start of examples/docs_snippets/docs_snippets/concepts/assets/asset_io_manager_prod_local.py] 1 # pylint: disable=redefined-outer-name 2 # start_marker 3 from dagster_aws.s3 import s3_pickle_asset_io_manager, s3_resource 4 5 from dagster import AssetGroup, asset, fs_asset_io_manager 6 7 8 @asset 9 def upstream_asset(): 10 return [1, 2, 3] 11 12 13 @asset 14 def downstream_asset(upstream_asset): 15 return upstream_asset + [4] 16 17 18 prod_asset_group = AssetGroup( 19 [upstream_asset, downstream_asset], 20 resource_defs={"io_manager": s3_pickle_asset_io_manager, "s3": s3_resource}, 21 ) 22 23 local_asset_group = AssetGroup( 24 [upstream_asset, downstream_asset], 25 resource_defs={"io_manager": fs_asset_io_manager}, 26 ) 27 28 # end_marker 29 [end of examples/docs_snippets/docs_snippets/concepts/assets/asset_io_manager_prod_local.py] [start of examples/docs_snippets/docs_snippets/concepts/partitions_schedules_sensors/sensors/sensors.py] 1 """isort:skip_file""" 2 3 from dagster import repository, DefaultSensorStatus, SkipReason 4 5 6 # start_sensor_job_marker 7 from dagster import op, job 8 9 10 @op(config_schema={"filename": str}) 11 def process_file(context): 12 filename = context.op_config["filename"] 13 context.log.info(filename) 14 15 16 @job 17 def log_file_job(): 18 process_file() 19 20 21 # end_sensor_job_marker 22 23 MY_DIRECTORY = "./" 24 25 # start_directory_sensor_marker 26 import os 27 from dagster import sensor, RunRequest 28 29 30 @sensor(job=log_file_job) 31 def my_directory_sensor(): 32 for filename in os.listdir(MY_DIRECTORY): 33 filepath = os.path.join(MY_DIRECTORY, filename) 34 if os.path.isfile(filepath): 35 yield RunRequest( 36 run_key=filename, 37 run_config={ 38 "ops": {"process_file": {"config": {"filename": filename}}} 39 }, 40 ) 41 42 43 # end_directory_sensor_marker 44 45 # start_running_in_code 46 @sensor(job=log_file_job, default_status=DefaultSensorStatus.RUNNING) 47 def my_running_sensor(): 48 ... 
49 50 51 # end_running_in_code 52 53 54 # start_sensor_testing_no 55 from dagster import validate_run_config 56 57 58 @sensor(job=log_file_job) 59 def sensor_to_test(): 60 yield RunRequest( 61 run_key="foo", 62 run_config={"ops": {"process_file": {"config": {"filename": "foo"}}}}, 63 ) 64 65 66 def test_sensor(): 67 for run_request in sensor_to_test(): 68 assert validate_run_config(log_file_job, run_request.run_config) 69 70 71 # end_sensor_testing_no 72 73 74 @job 75 def my_job(): 76 pass 77 78 79 # start_interval_sensors_maker 80 81 82 @sensor(job=my_job, minimum_interval_seconds=30) 83 def sensor_A(): 84 yield RunRequest(run_key=None, run_config={}) 85 86 87 @sensor(job=my_job, minimum_interval_seconds=45) 88 def sensor_B(): 89 yield RunRequest(run_key=None, run_config={}) 90 91 92 # end_interval_sensors_maker 93 94 95 # start_cursor_sensors_marker 96 @sensor(job=log_file_job) 97 def my_directory_sensor_cursor(context): 98 last_mtime = float(context.cursor) if context.cursor else 0 99 100 max_mtime = last_mtime 101 for filename in os.listdir(MY_DIRECTORY): 102 filepath = os.path.join(MY_DIRECTORY, filename) 103 if os.path.isfile(filepath): 104 fstats = os.stat(filepath) 105 file_mtime = fstats.st_mtime 106 if file_mtime <= last_mtime: 107 continue 108 109 # the run key should include mtime if we want to kick off new runs based on file modifications 110 run_key = f"{filename}:{str(file_mtime)}" 111 run_config = {"ops": {"process_file": {"config": {"filename": filename}}}} 112 yield RunRequest(run_key=run_key, run_config=run_config) 113 max_mtime = max(max_mtime, file_mtime) 114 115 context.update_cursor(str(max_mtime)) 116 117 118 # end_cursor_sensors_marker 119 120 # start_sensor_testing_with_context 121 from dagster import build_sensor_context 122 123 124 def test_my_directory_sensor_cursor(): 125 context = build_sensor_context(cursor="0") 126 for run_request in my_directory_sensor_cursor(context): 127 assert validate_run_config(log_file_job, run_request.run_config) 128 129 130 # end_sensor_testing_with_context 131 132 133 # start_skip_sensors_marker 134 @sensor(job=log_file_job) 135 def my_directory_sensor_with_skip_reasons(): 136 has_files = False 137 for filename in os.listdir(MY_DIRECTORY): 138 filepath = os.path.join(MY_DIRECTORY, filename) 139 if os.path.isfile(filepath): 140 yield RunRequest( 141 run_key=filename, 142 run_config={ 143 "ops": {"process_file": {"config": {"filename": filename}}} 144 }, 145 ) 146 has_files = True 147 if not has_files: 148 yield SkipReason(f"No files found in {MY_DIRECTORY}.") 149 150 151 # end_skip_sensors_marker 152 153 # start_asset_sensor_marker 154 from dagster import AssetKey, asset_sensor 155 156 157 @asset_sensor(asset_key=AssetKey("my_table"), job=my_job) 158 def my_asset_sensor(context, asset_event): 159 yield RunRequest( 160 run_key=context.cursor, 161 run_config={ 162 "ops": { 163 "read_materialization": { 164 "config": { 165 "asset_key": asset_event.dagster_event.asset_key.path, 166 } 167 } 168 } 169 }, 170 ) 171 172 173 # end_asset_sensor_marker 174 175 # start_multi_asset_sensor_marker 176 import json 177 from dagster import EventRecordsFilter, DagsterEventType 178 179 180 @sensor(job=my_job) 181 def multi_asset_sensor(context): 182 cursor_dict = json.loads(context.cursor) if context.cursor else {} 183 a_cursor = cursor_dict.get("a") 184 b_cursor = cursor_dict.get("b") 185 186 a_event_records = context.instance.get_event_records( 187 EventRecordsFilter( 188 event_type=DagsterEventType.ASSET_MATERIALIZATION, 189 
asset_key=AssetKey("table_a"), 190 after_cursor=a_cursor, 191 ), 192 ascending=False, 193 limit=1, 194 ) 195 b_event_records = context.instance.get_event_records( 196 EventRecordsFilter( 197 event_type=DagsterEventType.ASSET_MATERIALIZATION, 198 asset_key=AssetKey("table_a"), 199 after_cursor=b_cursor, 200 ), 201 ascending=False, 202 limit=1, 203 ) 204 205 if not a_event_records or not b_event_records: 206 return 207 208 # make sure we only generate events if both table_a and table_b have been materialized since 209 # the last evaluation. 210 yield RunRequest(run_key=None) 211 212 # update the sensor cursor by combining the individual event cursors from the two separate 213 # asset event streams 214 context.update_cursor( 215 json.dumps( 216 { 217 "a": a_event_records[0].storage_id, 218 "b": b_event_records[0].storage_id, 219 } 220 ) 221 ) 222 223 224 # end_multi_asset_sensor_marker 225 226 227 # start_s3_sensors_marker 228 from dagster_aws.s3.sensor import get_s3_keys 229 230 231 @sensor(job=my_job) 232 def my_s3_sensor(context): 233 new_s3_keys = get_s3_keys("my_s3_bucket", since_key=context.last_run_key) 234 if not new_s3_keys: 235 yield SkipReason("No new s3 files found for bucket my_s3_bucket.") 236 return 237 for s3_key in new_s3_keys: 238 yield RunRequest(run_key=s3_key, run_config={}) 239 240 241 # end_s3_sensors_marker 242 243 244 @job 245 def the_job(): 246 ... 247 248 249 def get_the_db_connection(_): 250 ... 251 252 253 # pylint: disable=unused-variable,reimported 254 # start_build_resources_example 255 from dagster import resource, build_resources, sensor 256 257 258 @resource 259 def the_credentials(): 260 ... 261 262 263 @resource(required_resource_keys={"credentials"}) 264 def the_db_connection(init_context): 265 get_the_db_connection(init_context.resources.credentials) 266 267 268 @sensor(job=the_job) 269 def uses_db_connection(): 270 with build_resources( 271 {"db_connection": the_db_connection, "credentials": the_credentials} 272 ) as resources: 273 conn = resources.db_connection 274 ... 
275 276 277 # end_build_resources_example 278 279 280 @repository 281 def my_repository(): 282 return [my_job, log_file_job, my_directory_sensor, sensor_A, sensor_B] 283 [end of examples/docs_snippets/docs_snippets/concepts/partitions_schedules_sensors/sensors/sensors.py] [start of python_modules/dagster/dagster/core/asset_defs/assets_job.py] 1 import itertools 2 from typing import AbstractSet, Any, Dict, List, Mapping, Optional, Sequence, Tuple, Union, cast 3 4 from dagster import check 5 from dagster.core.definitions.config import ConfigMapping 6 from dagster.core.definitions.decorators.op import op 7 from dagster.core.definitions.dependency import ( 8 DependencyDefinition, 9 IDependencyDefinition, 10 NodeInvocation, 11 ) 12 from dagster.core.definitions.events import AssetKey 13 from dagster.core.definitions.executor_definition import ExecutorDefinition 14 from dagster.core.definitions.graph_definition import GraphDefinition 15 from dagster.core.definitions.job_definition import JobDefinition 16 from dagster.core.definitions.op_definition import OpDefinition 17 from dagster.core.definitions.output import Out, OutputDefinition 18 from dagster.core.definitions.partition import PartitionedConfig, PartitionsDefinition 19 from dagster.core.definitions.partition_key_range import PartitionKeyRange 20 from dagster.core.definitions.resource_definition import ResourceDefinition 21 from dagster.core.errors import DagsterInvalidDefinitionError 22 from dagster.core.execution.context.input import InputContext, build_input_context 23 from dagster.core.execution.context.output import build_output_context 24 from dagster.core.storage.fs_asset_io_manager import fs_asset_io_manager 25 from dagster.core.storage.root_input_manager import RootInputManagerDefinition, root_input_manager 26 from dagster.utils.backcompat import experimental 27 from dagster.utils.merger import merge_dicts 28 29 from .asset import AssetsDefinition 30 from .asset_partitions import get_upstream_partitions_for_partition_range 31 from .source_asset import SourceAsset 32 33 34 @experimental 35 def build_assets_job( 36 name: str, 37 assets: List[AssetsDefinition], 38 source_assets: Optional[Sequence[Union[SourceAsset, AssetsDefinition]]] = None, 39 resource_defs: Optional[Dict[str, ResourceDefinition]] = None, 40 description: Optional[str] = None, 41 config: Optional[Union[ConfigMapping, Dict[str, Any], PartitionedConfig]] = None, 42 tags: Optional[Dict[str, Any]] = None, 43 executor_def: Optional[ExecutorDefinition] = None, 44 ) -> JobDefinition: 45 """Builds a job that materializes the given assets. 46 47 The dependencies between the ops in the job are determined by the asset dependencies defined 48 in the metadata on the provided asset nodes. 49 50 Args: 51 name (str): The name of the job. 52 assets (List[AssetsDefinition]): A list of assets or 53 multi-assets - usually constructed using the :py:func:`@asset` or :py:func:`@multi_asset` 54 decorator. 55 source_assets (Optional[Sequence[Union[SourceAsset, AssetsDefinition]]]): A list of 56 assets that are not materialized by this job, but that assets in this job depend on. 57 resource_defs (Optional[Dict[str, ResourceDefinition]]): Resource defs to be included in 58 this job. 59 description (Optional[str]): A description of the job. 60 61 Examples: 62 63 .. 
code-block:: python 64 65 @asset 66 def asset1(): 67 return 5 68 69 @asset 70 def asset2(asset1): 71 return my_upstream_asset + 1 72 73 my_assets_job = build_assets_job("my_assets_job", assets=[asset1, asset2]) 74 75 Returns: 76 JobDefinition: A job that materializes the given assets. 77 """ 78 check.str_param(name, "name") 79 check.list_param(assets, "assets", of_type=AssetsDefinition) 80 check.opt_list_param(source_assets, "source_assets", of_type=(SourceAsset, AssetsDefinition)) 81 check.opt_str_param(description, "description") 82 source_assets_by_key = build_source_assets_by_key(source_assets) 83 84 op_defs = build_op_deps(assets, source_assets_by_key.keys()) 85 root_manager = build_root_manager(source_assets_by_key) 86 partitioned_config = build_job_partitions_from_assets(assets, source_assets or []) 87 88 return GraphDefinition( 89 name=name, 90 node_defs=[asset.op for asset in assets], 91 dependencies=op_defs, 92 description=description, 93 input_mappings=None, 94 output_mappings=None, 95 config=None, 96 ).to_job( 97 resource_defs=merge_dicts( 98 {"io_manager": fs_asset_io_manager}, resource_defs or {}, {"root_manager": root_manager} 99 ), 100 config=config or partitioned_config, 101 tags=tags, 102 executor_def=executor_def, 103 ) 104 105 106 def build_job_partitions_from_assets( 107 assets: Sequence[AssetsDefinition], 108 source_assets: Sequence[Union[SourceAsset, AssetsDefinition]], 109 ) -> Optional[PartitionedConfig]: 110 assets_with_partitions_defs = [assets_def for assets_def in assets if assets_def.partitions_def] 111 112 if len(assets_with_partitions_defs) == 0: 113 return None 114 115 first_assets_with_partitions_def: AssetsDefinition = assets_with_partitions_defs[0] 116 for assets_def in assets_with_partitions_defs: 117 if assets_def.partitions_def != first_assets_with_partitions_def.partitions_def: 118 first_asset_key = next(iter(assets_def.asset_keys)).to_string() 119 second_asset_key = next(iter(first_assets_with_partitions_def.asset_keys)).to_string() 120 raise DagsterInvalidDefinitionError( 121 "When an assets job contains multiple partitions assets, they must have the " 122 f"same partitions definitions, but asset '{first_asset_key}' and asset " 123 f"'{second_asset_key}' have different partitions definitions. 
" 124 ) 125 126 partitions_defs_by_asset_key: Dict[AssetKey, PartitionsDefinition] = {} 127 asset: Union[AssetsDefinition, SourceAsset] 128 for asset in itertools.chain.from_iterable([assets, source_assets]): 129 if isinstance(asset, AssetsDefinition) and asset.partitions_def is not None: 130 for asset_key in asset.asset_keys: 131 partitions_defs_by_asset_key[asset_key] = asset.partitions_def 132 elif isinstance(asset, SourceAsset) and asset.partitions_def is not None: 133 partitions_defs_by_asset_key[asset.key] = asset.partitions_def 134 135 def asset_partitions_for_job_partition( 136 job_partition_key: str, 137 ) -> Mapping[AssetKey, PartitionKeyRange]: 138 return { 139 asset_key: PartitionKeyRange(job_partition_key, job_partition_key) 140 for assets_def in assets 141 for asset_key in assets_def.asset_keys 142 if assets_def.partitions_def 143 } 144 145 def run_config_for_partition_fn(partition_key: str) -> Dict[str, Any]: 146 ops_config: Dict[str, Any] = {} 147 asset_partitions_by_asset_key = asset_partitions_for_job_partition(partition_key) 148 149 for assets_def in assets: 150 outputs_dict: Dict[str, Dict[str, Any]] = {} 151 if assets_def.partitions_def is not None: 152 for asset_key, output_def in assets_def.output_defs_by_asset_key.items(): 153 asset_partition_key_range = asset_partitions_by_asset_key[asset_key] 154 outputs_dict[output_def.name] = { 155 "start": asset_partition_key_range.start, 156 "end": asset_partition_key_range.end, 157 } 158 159 inputs_dict: Dict[str, Dict[str, Any]] = {} 160 for in_asset_key, input_def in assets_def.input_defs_by_asset_key.items(): 161 upstream_partitions_def = partitions_defs_by_asset_key[in_asset_key] 162 if assets_def.partitions_def is not None and upstream_partitions_def is not None: 163 upstream_partition_key_range = get_upstream_partitions_for_partition_range( 164 assets_def, upstream_partitions_def, in_asset_key, asset_partition_key_range 165 ) 166 inputs_dict[input_def.name] = { 167 "start": upstream_partition_key_range.start, 168 "end": upstream_partition_key_range.end, 169 } 170 171 ops_config[assets_def.op.name] = { 172 "config": { 173 "assets": { 174 "input_partitions": inputs_dict, 175 "output_partitions": outputs_dict, 176 } 177 } 178 } 179 180 return {"ops": ops_config} 181 182 return PartitionedConfig( 183 partitions_def=cast(PartitionsDefinition, first_assets_with_partitions_def.partitions_def), 184 run_config_for_partition_fn=lambda p: run_config_for_partition_fn(p.name), 185 ) 186 187 188 def build_source_assets_by_key( 189 source_assets: Optional[Sequence[Union[SourceAsset, AssetsDefinition]]] 190 ) -> Mapping[AssetKey, Union[SourceAsset, OutputDefinition]]: 191 source_assets_by_key: Dict[AssetKey, Union[SourceAsset, OutputDefinition]] = {} 192 for asset_source in source_assets or []: 193 if isinstance(asset_source, SourceAsset): 194 source_assets_by_key[asset_source.key] = asset_source 195 elif isinstance(asset_source, AssetsDefinition): 196 for asset_key, output_def in asset_source.output_defs_by_asset_key.items(): 197 if asset_key: 198 source_assets_by_key[asset_key] = output_def 199 200 return source_assets_by_key 201 202 203 def build_op_deps( 204 multi_asset_defs: List[AssetsDefinition], source_paths: AbstractSet[AssetKey] 205 ) -> Dict[Union[str, NodeInvocation], Dict[str, IDependencyDefinition]]: 206 op_outputs_by_asset: Dict[AssetKey, Tuple[OpDefinition, str]] = {} 207 for multi_asset_def in multi_asset_defs: 208 for asset_key, output_def in multi_asset_def.output_defs_by_asset_key.items(): 209 if asset_key in 
op_outputs_by_asset: 210 raise DagsterInvalidDefinitionError( 211 f"The same asset key was included for two definitions: '{asset_key.to_string()}'" 212 ) 213 214 op_outputs_by_asset[asset_key] = (multi_asset_def.op, output_def.name) 215 216 op_deps: Dict[Union[str, NodeInvocation], Dict[str, IDependencyDefinition]] = {} 217 for multi_asset_def in multi_asset_defs: 218 op_name = multi_asset_def.op.name 219 op_deps[op_name] = {} 220 for asset_key, input_def in multi_asset_def.input_defs_by_asset_key.items(): 221 if asset_key in op_outputs_by_asset: 222 op_def, output_name = op_outputs_by_asset[asset_key] 223 op_deps[op_name][input_def.name] = DependencyDefinition(op_def.name, output_name) 224 elif asset_key not in source_paths and not input_def.dagster_type.is_nothing: 225 raise DagsterInvalidDefinitionError( 226 f"Input asset '{asset_key.to_string()}' for asset '{op_name}' is not " 227 "produced by any of the provided asset ops and is not one of the provided " 228 "sources" 229 ) 230 231 return op_deps 232 233 234 def build_root_manager( 235 source_assets_by_key: Mapping[AssetKey, Union[SourceAsset, OutputDefinition]] 236 ) -> RootInputManagerDefinition: 237 source_asset_io_manager_keys = { 238 source_asset.io_manager_key for source_asset in source_assets_by_key.values() 239 } 240 241 @root_input_manager(required_resource_keys=source_asset_io_manager_keys) 242 def _root_manager(input_context: InputContext) -> Any: 243 source_asset_key = cast(AssetKey, input_context.asset_key) 244 source_asset = source_assets_by_key[source_asset_key] 245 246 @op(out={source_asset_key.path[-1]: Out(asset_key=source_asset_key)}) 247 def _op(): 248 pass 249 250 output_context = build_output_context( 251 name=source_asset_key.path[-1], 252 step_key="none", 253 solid_def=_op, 254 metadata=source_asset.metadata, 255 ) 256 input_context_with_upstream = build_input_context( 257 name=input_context.name, 258 metadata=input_context.metadata, 259 config=input_context.config, 260 dagster_type=input_context.dagster_type, 261 upstream_output=output_context, 262 op_def=input_context.op_def, 263 step_context=input_context.step_context, 264 ) 265 266 io_manager = getattr(cast(Any, input_context.resources), source_asset.io_manager_key) 267 return io_manager.load_input(input_context_with_upstream) 268 269 return _root_manager 270 [end of python_modules/dagster/dagster/core/asset_defs/assets_job.py] [start of python_modules/dagster/dagster/core/execution/context/input.py] 1 from typing import TYPE_CHECKING, Any, Dict, Optional, Union, cast 2 3 from dagster import check 4 from dagster.core.definitions.events import AssetKey 5 from dagster.core.definitions.op_definition import OpDefinition 6 from dagster.core.definitions.partition_key_range import PartitionKeyRange 7 from dagster.core.definitions.solid_definition import SolidDefinition 8 from dagster.core.definitions.time_window_partitions import ( 9 TimeWindow, 10 TimeWindowPartitionsDefinition, 11 ) 12 from dagster.core.errors import DagsterInvariantViolationError 13 14 if TYPE_CHECKING: 15 from dagster.core.definitions.resource_definition import Resources 16 from dagster.core.execution.context.system import StepExecutionContext 17 from dagster.core.log_manager import DagsterLogManager 18 from dagster.core.types.dagster_type import DagsterType 19 20 from .output import OutputContext 21 22 23 class InputContext: 24 """ 25 The ``context`` object available to the load_input method of :py:class:`RootInputManager`. 
26 27 Attributes: 28 name (Optional[str]): The name of the input that we're loading. 29 pipeline_name (Optional[str]): The name of the pipeline. 30 solid_def (Optional[SolidDefinition]): The definition of the solid that's loading the input. 31 config (Optional[Any]): The config attached to the input that we're loading. 32 metadata (Optional[Dict[str, Any]]): A dict of metadata that is assigned to the 33 InputDefinition that we're loading for. 34 upstream_output (Optional[OutputContext]): Info about the output that produced the object 35 we're loading. 36 dagster_type (Optional[DagsterType]): The type of this input. 37 log (Optional[DagsterLogManager]): The log manager to use for this input. 38 resource_config (Optional[Dict[str, Any]]): The config associated with the resource that 39 initializes the RootInputManager. 40 resources (Optional[Resources]): The resources required by the resource that initializes the 41 input manager. If using the :py:func:`@root_input_manager` decorator, these resources 42 correspond to those requested with the `required_resource_keys` parameter. 43 op_def (Optional[OpDefinition]): The definition of the op that's loading the input. 44 """ 45 46 def __init__( 47 self, 48 name: Optional[str] = None, 49 pipeline_name: Optional[str] = None, 50 solid_def: Optional["SolidDefinition"] = None, 51 config: Optional[Any] = None, 52 metadata: Optional[Dict[str, Any]] = None, 53 upstream_output: Optional["OutputContext"] = None, 54 dagster_type: Optional["DagsterType"] = None, 55 log_manager: Optional["DagsterLogManager"] = None, 56 resource_config: Optional[Dict[str, Any]] = None, 57 resources: Optional[Union["Resources", Dict[str, Any]]] = None, 58 step_context: Optional["StepExecutionContext"] = None, 59 op_def: Optional["OpDefinition"] = None, 60 ): 61 from dagster.core.definitions.resource_definition import IContainsGenerator, Resources 62 from dagster.core.execution.build_resources import build_resources 63 64 self._name = name 65 self._pipeline_name = pipeline_name 66 check.invariant( 67 solid_def is None or op_def is None, "Can't provide both a solid_def and an op_def arg" 68 ) 69 self._solid_def = solid_def or op_def 70 self._config = config 71 self._metadata = metadata 72 self._upstream_output = upstream_output 73 self._dagster_type = dagster_type 74 self._log = log_manager 75 self._resource_config = resource_config 76 self._step_context = step_context 77 78 if isinstance(resources, Resources): 79 self._resources_cm = None 80 self._resources = resources 81 else: 82 self._resources_cm = build_resources( 83 check.opt_dict_param(resources, "resources", key_type=str) 84 ) 85 self._resources = self._resources_cm.__enter__() # pylint: disable=no-member 86 self._resources_contain_cm = isinstance(self._resources, IContainsGenerator) 87 self._cm_scope_entered = False 88 89 def __enter__(self): 90 if self._resources_cm: 91 self._cm_scope_entered = True 92 return self 93 94 def __exit__(self, *exc): 95 if self._resources_cm: 96 self._resources_cm.__exit__(*exc) # pylint: disable=no-member 97 98 def __del__(self): 99 if self._resources_cm and self._resources_contain_cm and not self._cm_scope_entered: 100 self._resources_cm.__exit__(None, None, None) # pylint: disable=no-member 101 102 @property 103 def has_input_name(self) -> bool: 104 """If we're the InputContext is being used to load the result of a run from outside the run, 105 then it won't have an input name.""" 106 return self._name is not None 107 108 @property 109 def name(self) -> str: 110 if self._name is None: 111 
raise DagsterInvariantViolationError( 112 "Attempting to access name, " 113 "but it was not provided when constructing the InputContext" 114 ) 115 116 return self._name 117 118 @property 119 def pipeline_name(self) -> str: 120 if self._pipeline_name is None: 121 raise DagsterInvariantViolationError( 122 "Attempting to access pipeline_name, " 123 "but it was not provided when constructing the InputContext" 124 ) 125 126 return self._pipeline_name 127 128 @property 129 def solid_def(self) -> "SolidDefinition": 130 if self._solid_def is None: 131 raise DagsterInvariantViolationError( 132 "Attempting to access solid_def, " 133 "but it was not provided when constructing the InputContext" 134 ) 135 136 return self._solid_def 137 138 @property 139 def op_def(self) -> "OpDefinition": 140 if self._solid_def is None: 141 raise DagsterInvariantViolationError( 142 "Attempting to access op_def, " 143 "but it was not provided when constructing the InputContext" 144 ) 145 146 return cast(OpDefinition, self._solid_def) 147 148 @property 149 def config(self) -> Any: 150 return self._config 151 152 @property 153 def metadata(self) -> Optional[Dict[str, Any]]: 154 return self._metadata 155 156 @property 157 def upstream_output(self) -> Optional["OutputContext"]: 158 return self._upstream_output 159 160 @property 161 def dagster_type(self) -> "DagsterType": 162 if self._dagster_type is None: 163 raise DagsterInvariantViolationError( 164 "Attempting to access dagster_type, " 165 "but it was not provided when constructing the InputContext" 166 ) 167 168 return self._dagster_type 169 170 @property 171 def log(self) -> "DagsterLogManager": 172 if self._log is None: 173 raise DagsterInvariantViolationError( 174 "Attempting to access log, " 175 "but it was not provided when constructing the InputContext" 176 ) 177 178 return self._log 179 180 @property 181 def resource_config(self) -> Optional[Dict[str, Any]]: 182 return self._resource_config 183 184 @property 185 def resources(self) -> Any: 186 if self._resources is None: 187 raise DagsterInvariantViolationError( 188 "Attempting to access resources, " 189 "but it was not provided when constructing the InputContext" 190 ) 191 192 if self._resources_cm and self._resources_contain_cm and not self._cm_scope_entered: 193 raise DagsterInvariantViolationError( 194 "At least one provided resource is a generator, but attempting to access " 195 "resources outside of context manager scope. You can use the following syntax to " 196 "open a context manager: `with build_input_context(...) as context:`" 197 ) 198 return self._resources 199 200 @property 201 def asset_key(self) -> Optional[AssetKey]: 202 matching_input_defs = [ 203 input_def 204 for input_def in cast(SolidDefinition, self._solid_def).input_defs 205 if input_def.name == self.name 206 ] 207 check.invariant(len(matching_input_defs) == 1) 208 return matching_input_defs[0].get_asset_key(self) 209 210 @property 211 def step_context(self) -> "StepExecutionContext": 212 if self._step_context is None: 213 raise DagsterInvariantViolationError( 214 "Attempting to access step_context, " 215 "but it was not provided when constructing the InputContext" 216 ) 217 218 return self._step_context 219 220 @property 221 def has_partition_key(self) -> bool: 222 """Whether the current run is a partitioned run""" 223 return self.step_context.has_partition_key 224 225 @property 226 def partition_key(self) -> str: 227 """The partition key for the current run. 228 229 Raises an error if the current run is not a partitioned run. 
230 """ 231 return self.step_context.partition_key 232 233 @property 234 def has_asset_partitions(self) -> bool: 235 if self._step_context is not None: 236 return self._step_context.has_asset_partitions_for_input(self.name) 237 else: 238 return False 239 240 @property 241 def asset_partition_key(self) -> str: 242 """The partition key for input asset. 243 244 Raises an error if the input asset has no partitioning, or if the run covers a partition 245 range for the input asset. 246 """ 247 return self.step_context.asset_partition_key_for_input(self.name) 248 249 @property 250 def asset_partition_key_range(self) -> PartitionKeyRange: 251 """The partition key range for input asset. 252 253 Raises an error if the input asset has no partitioning. 254 """ 255 return self.step_context.asset_partition_key_range_for_input(self.name) 256 257 @property 258 def asset_partitions_time_window(self) -> TimeWindow: 259 """The time window for the partitions of the input asset. 260 261 Raises an error if either of the following are true: 262 - The input asset has no partitioning. 263 - The input asset is not partitioned with a TimeWindowPartitionsDefinition. 264 """ 265 if self.upstream_output is None: 266 check.failed("InputContext needs upstream_output to get asset_partitions_time_window") 267 268 partitions_def = self.upstream_output.solid_def.output_def_named( 269 self.upstream_output.name 270 ).asset_partitions_def 271 272 if not partitions_def: 273 raise ValueError( 274 "Tried to get asset partitions for an output that does not correspond to a " 275 "partitioned asset." 276 ) 277 278 if not isinstance(partitions_def, TimeWindowPartitionsDefinition): 279 raise ValueError( 280 "Tried to get asset partitions for an input that correponds to a partitioned " 281 "asset that is not partitioned with a TimeWindowPartitionsDefinition." 282 ) 283 284 partition_key_range = self.asset_partition_key_range 285 return TimeWindow( 286 partitions_def.time_window_for_partition_key(partition_key_range.start).start, 287 partitions_def.time_window_for_partition_key(partition_key_range.end).end, 288 ) 289 290 291 def build_input_context( 292 name: Optional[str] = None, 293 config: Optional[Any] = None, 294 metadata: Optional[Dict[str, Any]] = None, 295 upstream_output: Optional["OutputContext"] = None, 296 dagster_type: Optional["DagsterType"] = None, 297 resource_config: Optional[Dict[str, Any]] = None, 298 resources: Optional[Dict[str, Any]] = None, 299 op_def: Optional[OpDefinition] = None, 300 step_context: Optional["StepExecutionContext"] = None, 301 ) -> "InputContext": 302 """Builds input context from provided parameters. 303 304 ``build_input_context`` can be used as either a function, or a context manager. If resources 305 that are also context managers are provided, then ``build_input_context`` must be used as a 306 context manager. 307 308 Args: 309 name (Optional[str]): The name of the input that we're loading. 310 config (Optional[Any]): The config attached to the input that we're loading. 311 metadata (Optional[Dict[str, Any]]): A dict of metadata that is assigned to the 312 InputDefinition that we're loading for. 313 upstream_output (Optional[OutputContext]): Info about the output that produced the object 314 we're loading. 315 dagster_type (Optional[DagsterType]): The type of this input. 316 resource_config (Optional[Dict[str, Any]]): The resource config to make available from the 317 input context. This usually corresponds to the config provided to the resource that 318 loads the input manager. 
319 resources (Optional[Dict[str, Any]]): The resources to make available from the context. 320 For a given key, you can provide either an actual instance of an object, or a resource 321 definition. 322 asset_key (Optional[AssetKey]): The asset key attached to the InputDefinition. 323 op_def (Optional[OpDefinition]): The definition of the op that's loading the input. 324 step_context (Optional[StepExecutionContext]): For internal use. 325 326 Examples: 327 328 .. code-block:: python 329 330 build_input_context() 331 332 with build_input_context(resources={"foo": context_manager_resource}) as context: 333 do_something 334 """ 335 from dagster.core.execution.context.output import OutputContext 336 from dagster.core.execution.context.system import StepExecutionContext 337 from dagster.core.execution.context_creation_pipeline import initialize_console_manager 338 from dagster.core.types.dagster_type import DagsterType 339 340 name = check.opt_str_param(name, "name") 341 metadata = check.opt_dict_param(metadata, "metadata", key_type=str) 342 upstream_output = check.opt_inst_param(upstream_output, "upstream_output", OutputContext) 343 dagster_type = check.opt_inst_param(dagster_type, "dagster_type", DagsterType) 344 resource_config = check.opt_dict_param(resource_config, "resource_config", key_type=str) 345 resources = check.opt_dict_param(resources, "resources", key_type=str) 346 op_def = check.opt_inst_param(op_def, "op_def", OpDefinition) 347 step_context = check.opt_inst_param(step_context, "step_context", StepExecutionContext) 348 349 return InputContext( 350 name=name, 351 pipeline_name=None, 352 config=config, 353 metadata=metadata, 354 upstream_output=upstream_output, 355 dagster_type=dagster_type, 356 log_manager=initialize_console_manager(None), 357 resource_config=resource_config, 358 resources=resources, 359 step_context=step_context, 360 op_def=op_def, 361 ) 362 [end of python_modules/dagster/dagster/core/execution/context/input.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
dagster-io/dagster
a242d6b43b40a4507b0449af3136f42b6deaa77d
computing config for a partitioned asset job fails when it includes an asset downstream of a non-partitioned asset This test fails: ``` def test_access_partition_keys_from_context_only_one_asset_partitioned(): upstream_partitions_def = StaticPartitionsDefinition(["a", "b", "c"]) class MyIOManager(IOManager): def handle_output(self, context, obj): if context.op_def.name == "upstream_asset": assert context.asset_partition_key == "b" elif context.op_def.name == "downstream_asset": assert not context.has_asset_partitions with pytest.raises(Exception): # TODO: better error message assert context.asset_partition_key_range else: assert False def load_input(self, context): assert not context.has_asset_partitions @asset(partitions_def=upstream_partitions_def) def upstream_asset(context): assert context.output_asset_partition_key() == "b" @asset def downstream_asset(upstream_asset): assert upstream_asset is None @asset def double_downstream_asset(downstream_asset): assert upstream_asset is None my_job = build_assets_job( "my_job", assets=[upstream_asset, downstream_asset, double_downstream_asset], resource_defs={"io_manager": IOManagerDefinition.hardcoded_io_manager(MyIOManager())}, ) result = my_job.execute_in_process(partition_key="b") assert result.asset_materializations_for_node("upstream_asset") == [ AssetMaterialization(asset_key=AssetKey(["upstream_asset"]), partition="b") ] ``` with ``` partition_key = 'b' def run_config_for_partition_fn(partition_key: str) -> Dict[str, Any]: ops_config: Dict[str, Any] = {} asset_partitions_by_asset_key = asset_partitions_for_job_partition(partition_key) for assets_def in assets: outputs_dict: Dict[str, Dict[str, Any]] = {} if assets_def.partitions_def is not None: for asset_key, output_def in assets_def.output_defs_by_asset_key.items(): asset_partition_key_range = asset_partitions_by_asset_key[asset_key] outputs_dict[output_def.name] = { "start": asset_partition_key_range.start, "end": asset_partition_key_range.end, } inputs_dict: Dict[str, Dict[str, Any]] = {} for in_asset_key, input_def in assets_def.input_defs_by_asset_key.items(): > upstream_partitions_def = partitions_defs_by_asset_key[in_asset_key] E KeyError: AssetKey(['downstream_asset']) python_modules/dagster/dagster/core/asset_defs/assets_job.py:161: KeyError ```
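The KeyError in the traceback above boils down to a dictionary lookup: `partitions_defs_by_asset_key` is only populated with assets that were given a `partitions_def`, so indexing it with the key of an unpartitioned upstream asset fails. The snippet below is a minimal, self-contained sketch of that failure mode and of a defensive `dict.get` lookup; the dictionary contents and asset-key strings are invented for illustration and are not taken from the dagster code base.

```python
# Minimal sketch (not dagster code) of the KeyError shown in the traceback:
# only assets that have a partitions_def are registered in the mapping, so a
# plain indexed lookup on an unpartitioned upstream asset raises KeyError.
# The asset-key strings below are hypothetical.
partitions_defs_by_asset_key = {
    "upstream_asset": ["a", "b", "c"],  # stand-in for a partitions definition
    # "downstream_asset" is deliberately absent: it has no partitions_def
}

in_asset_key = "downstream_asset"

try:
    partitions_defs_by_asset_key[in_asset_key]  # reproduces the KeyError
except KeyError as err:
    print(f"KeyError: {err}")

# A .get() lookup returns None instead, which the caller can branch on.
upstream_partitions_def = partitions_defs_by_asset_key.get(in_asset_key)
if upstream_partitions_def is None:
    print("unpartitioned upstream asset: skip partition config for this input")
```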
2022-03-07T23:19:54Z
<patch> diff --git a/python_modules/dagster/dagster/core/asset_defs/assets_job.py b/python_modules/dagster/dagster/core/asset_defs/assets_job.py --- a/python_modules/dagster/dagster/core/asset_defs/assets_job.py +++ b/python_modules/dagster/dagster/core/asset_defs/assets_job.py @@ -158,7 +158,7 @@ def run_config_for_partition_fn(partition_key: str) -> Dict[str, Any]: inputs_dict: Dict[str, Dict[str, Any]] = {} for in_asset_key, input_def in assets_def.input_defs_by_asset_key.items(): - upstream_partitions_def = partitions_defs_by_asset_key[in_asset_key] + upstream_partitions_def = partitions_defs_by_asset_key.get(in_asset_key) if assets_def.partitions_def is not None and upstream_partitions_def is not None: upstream_partition_key_range = get_upstream_partitions_for_partition_range( assets_def, upstream_partitions_def, in_asset_key, asset_partition_key_range </patch>
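The one-line patch above replaces direct indexing with `dict.get`, so an upstream asset that was never registered (because it has no `partitions_def`) yields `None` and falls through the existing `is not None` guard instead of raising. Below is a rough, self-contained sketch of that control flow under those assumptions; `inputs_config` and the asset-key names are hypothetical helpers for illustration, not dagster APIs.

```python
from typing import Any, Dict, Iterable, Optional

def inputs_config(
    input_asset_keys: Iterable[str],
    partitions_defs_by_asset_key: Dict[str, Any],
    partition_key: str,
) -> Dict[str, Dict[str, str]]:
    """Build per-input partition config, skipping unpartitioned upstream assets."""
    inputs: Dict[str, Dict[str, str]] = {}
    for in_asset_key in input_asset_keys:
        # Mirrors the patched line: .get() returns None for unregistered keys.
        upstream_partitions_def: Optional[Any] = partitions_defs_by_asset_key.get(in_asset_key)
        if upstream_partitions_def is not None:
            # The real code maps the job partition onto the upstream partition
            # range here; a single-key range stands in for that mapping.
            inputs[in_asset_key] = {"start": partition_key, "end": partition_key}
    return inputs

# "upstream_asset" is partitioned, "downstream_asset" is not (names are invented).
defs = {"upstream_asset": object()}
print(inputs_config(["upstream_asset"], defs, "b"))    # {'upstream_asset': {'start': 'b', 'end': 'b'}}
print(inputs_config(["downstream_asset"], defs, "b"))  # {} -- no KeyError for the unpartitioned input
```

In this sketch, inputs fed by unpartitioned assets simply get no partition entry in the generated config, which is in line with what the failing test asserts for `downstream_asset`.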
[]
[]
ipython__ipython-11330
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> %%writefile can't handle white space in file path/filename It is as the title suggests. In ipython, I have tried `%%writefile "test out.txt"` `%%writefile 'test out.txt'` `%%writefile test out.txt` `%%writefile test\ out.txt` and none gives me the expected behavior. The first one leads to creating a file literally named "test out.txt" (yes, with quotes). The third one just leads to > unrecognized arguments: out.txt Is this a bug or am I just missing something obvious? Using IPython 6.5.0 with Python 3.6.5. %%writefile can't handle white space in file path/filename It is as the title suggests. In ipython, I have tried `%%writefile "test out.txt"` `%%writefile 'test out.txt'` `%%writefile test out.txt` `%%writefile test\ out.txt` and none gives me the expected behavior. The first one leads to creating a file literally named "test out.txt" (yes, with quotes). The third one just leads to > unrecognized arguments: out.txt Is this a bug or am I just missing something obvious? Using IPython 6.5.0 with Python 3.6.5. </issue> <code> [start of README.rst] 1 .. image:: https://codecov.io/github/ipython/ipython/coverage.svg?branch=master 2 :target: https://codecov.io/github/ipython/ipython?branch=master 3 4 .. image:: https://img.shields.io/pypi/v/IPython.svg 5 :target: https://pypi.python.org/pypi/ipython 6 7 .. image:: https://img.shields.io/travis/ipython/ipython.svg 8 :target: https://travis-ci.org/ipython/ipython 9 10 .. image:: https://www.codetriage.com/ipython/ipython/badges/users.svg 11 :target: https://www.codetriage.com/ipython/ipython/ 12 13 =========================================== 14 IPython: Productive Interactive Computing 15 =========================================== 16 17 Overview 18 ======== 19 20 Welcome to IPython. Our full documentation is available on `ipython.readthedocs.io 21 <https://ipython.readthedocs.io/en/stable/>`_ and contains information on how to install, use and 22 contribute to the project. 23 24 **IPython versions and Python Support** 25 26 **IPython 7.0** requires Python version 3.4 and above. 27 28 **IPython 6.x** requires Python version 3.3 and above. 29 30 **IPython 5.x LTS** is the compatible release for Python 2.7. 31 If you require Python 2 support, you **must** use IPython 5.x LTS. Please 32 update your project configurations and requirements as necessary. 33 34 35 The Notebook, Qt console and a number of other pieces are now parts of *Jupyter*. 36 See the `Jupyter installation docs <https://jupyter.readthedocs.io/en/latest/install.html>`__ 37 if you want to use these. 38 39 40 41 42 Development and Instant running 43 =============================== 44 45 You can find the latest version of the development documentation on `readthedocs 46 <https://ipython.readthedocs.io/en/latest/>`_. 47 48 You can run IPython from this directory without even installing it system-wide 49 by typing at the terminal:: 50 51 $ python -m IPython 52 53 Or see the `development installation docs 54 <https://ipython.readthedocs.io/en/latest/install/install.html#installing-the-development-version>`_ 55 for the latest revision on read the docs. 
56 57 Documentation and installation instructions for older version of IPython can be 58 found on the `IPython website <https://ipython.org/documentation.html>`_ 59 60 61 62 IPython requires Python version 3 or above 63 ========================================== 64 65 Starting with version 6.0, IPython does not support Python 2.7, 3.0, 3.1, or 66 3.2. 67 68 For a version compatible with Python 2.7, please install the 5.x LTS Long Term 69 Support version. 70 71 If you are encountering this error message you are likely trying to install or 72 use IPython from source. You need to checkout the remote 5.x branch. If you are 73 using git the following should work:: 74 75 $ git fetch origin 76 $ git checkout 5.x 77 78 If you encounter this error message with a regular install of IPython, then you 79 likely need to update your package manager, for example if you are using `pip` 80 check the version of pip with:: 81 82 $ pip --version 83 84 You will need to update pip to the version 9.0.1 or greater. If you are not using 85 pip, please inquiry with the maintainers of the package for your package 86 manager. 87 88 For more information see one of our blog posts: 89 90 https://blog.jupyter.org/2016/07/08/ipython-5-0-released/ 91 92 As well as the following Pull-Request for discussion: 93 94 https://github.com/ipython/ipython/pull/9900 95 96 This error does also occur if you are invoking ``setup.py`` directly – which you 97 should not – or are using ``easy_install`` If this is the case, use ``pip 98 install .`` (instead of ``setup.py install`` , and ``pip install -e .`` instead 99 of ``setup.py develop`` If you are depending on IPython as a dependency you may 100 also want to have a conditional dependency on IPython depending on the Python 101 version:: 102 103 install_req = ['ipython'] 104 if sys.version_info[0] < 3 and 'bdist_wheel' not in sys.argv: 105 install_req.remove('ipython') 106 install_req.append('ipython<6') 107 108 setup( 109 ... 110 install_requires=install_req 111 ) 112 [end of README.rst] [start of IPython/utils/_process_win32_controller.py] 1 """Windows-specific implementation of process utilities with direct WinAPI. 2 3 This file is meant to be used by process.py 4 """ 5 6 #----------------------------------------------------------------------------- 7 # Copyright (C) 2010-2011 The IPython Development Team 8 # 9 # Distributed under the terms of the BSD License. The full license is in 10 # the file COPYING, distributed as part of this software. 
11 #----------------------------------------------------------------------------- 12 13 14 # stdlib 15 import os, sys, threading 16 import ctypes, msvcrt 17 18 # Win32 API types needed for the API calls 19 from ctypes import POINTER 20 from ctypes.wintypes import HANDLE, HLOCAL, LPVOID, WORD, DWORD, BOOL, \ 21 ULONG, LPCWSTR 22 LPDWORD = POINTER(DWORD) 23 LPHANDLE = POINTER(HANDLE) 24 ULONG_PTR = POINTER(ULONG) 25 class SECURITY_ATTRIBUTES(ctypes.Structure): 26 _fields_ = [("nLength", DWORD), 27 ("lpSecurityDescriptor", LPVOID), 28 ("bInheritHandle", BOOL)] 29 LPSECURITY_ATTRIBUTES = POINTER(SECURITY_ATTRIBUTES) 30 class STARTUPINFO(ctypes.Structure): 31 _fields_ = [("cb", DWORD), 32 ("lpReserved", LPCWSTR), 33 ("lpDesktop", LPCWSTR), 34 ("lpTitle", LPCWSTR), 35 ("dwX", DWORD), 36 ("dwY", DWORD), 37 ("dwXSize", DWORD), 38 ("dwYSize", DWORD), 39 ("dwXCountChars", DWORD), 40 ("dwYCountChars", DWORD), 41 ("dwFillAttribute", DWORD), 42 ("dwFlags", DWORD), 43 ("wShowWindow", WORD), 44 ("cbReserved2", WORD), 45 ("lpReserved2", LPVOID), 46 ("hStdInput", HANDLE), 47 ("hStdOutput", HANDLE), 48 ("hStdError", HANDLE)] 49 LPSTARTUPINFO = POINTER(STARTUPINFO) 50 class PROCESS_INFORMATION(ctypes.Structure): 51 _fields_ = [("hProcess", HANDLE), 52 ("hThread", HANDLE), 53 ("dwProcessId", DWORD), 54 ("dwThreadId", DWORD)] 55 LPPROCESS_INFORMATION = POINTER(PROCESS_INFORMATION) 56 57 # Win32 API constants needed 58 ERROR_HANDLE_EOF = 38 59 ERROR_BROKEN_PIPE = 109 60 ERROR_NO_DATA = 232 61 HANDLE_FLAG_INHERIT = 0x0001 62 STARTF_USESTDHANDLES = 0x0100 63 CREATE_SUSPENDED = 0x0004 64 CREATE_NEW_CONSOLE = 0x0010 65 CREATE_NO_WINDOW = 0x08000000 66 STILL_ACTIVE = 259 67 WAIT_TIMEOUT = 0x0102 68 WAIT_FAILED = 0xFFFFFFFF 69 INFINITE = 0xFFFFFFFF 70 DUPLICATE_SAME_ACCESS = 0x00000002 71 ENABLE_ECHO_INPUT = 0x0004 72 ENABLE_LINE_INPUT = 0x0002 73 ENABLE_PROCESSED_INPUT = 0x0001 74 75 # Win32 API functions needed 76 GetLastError = ctypes.windll.kernel32.GetLastError 77 GetLastError.argtypes = [] 78 GetLastError.restype = DWORD 79 80 CreateFile = ctypes.windll.kernel32.CreateFileW 81 CreateFile.argtypes = [LPCWSTR, DWORD, DWORD, LPVOID, DWORD, DWORD, HANDLE] 82 CreateFile.restype = HANDLE 83 84 CreatePipe = ctypes.windll.kernel32.CreatePipe 85 CreatePipe.argtypes = [POINTER(HANDLE), POINTER(HANDLE), 86 LPSECURITY_ATTRIBUTES, DWORD] 87 CreatePipe.restype = BOOL 88 89 CreateProcess = ctypes.windll.kernel32.CreateProcessW 90 CreateProcess.argtypes = [LPCWSTR, LPCWSTR, LPSECURITY_ATTRIBUTES, 91 LPSECURITY_ATTRIBUTES, BOOL, DWORD, LPVOID, LPCWSTR, LPSTARTUPINFO, 92 LPPROCESS_INFORMATION] 93 CreateProcess.restype = BOOL 94 95 GetExitCodeProcess = ctypes.windll.kernel32.GetExitCodeProcess 96 GetExitCodeProcess.argtypes = [HANDLE, LPDWORD] 97 GetExitCodeProcess.restype = BOOL 98 99 GetCurrentProcess = ctypes.windll.kernel32.GetCurrentProcess 100 GetCurrentProcess.argtypes = [] 101 GetCurrentProcess.restype = HANDLE 102 103 ResumeThread = ctypes.windll.kernel32.ResumeThread 104 ResumeThread.argtypes = [HANDLE] 105 ResumeThread.restype = DWORD 106 107 ReadFile = ctypes.windll.kernel32.ReadFile 108 ReadFile.argtypes = [HANDLE, LPVOID, DWORD, LPDWORD, LPVOID] 109 ReadFile.restype = BOOL 110 111 WriteFile = ctypes.windll.kernel32.WriteFile 112 WriteFile.argtypes = [HANDLE, LPVOID, DWORD, LPDWORD, LPVOID] 113 WriteFile.restype = BOOL 114 115 GetConsoleMode = ctypes.windll.kernel32.GetConsoleMode 116 GetConsoleMode.argtypes = [HANDLE, LPDWORD] 117 GetConsoleMode.restype = BOOL 118 119 SetConsoleMode = 
ctypes.windll.kernel32.SetConsoleMode 120 SetConsoleMode.argtypes = [HANDLE, DWORD] 121 SetConsoleMode.restype = BOOL 122 123 FlushConsoleInputBuffer = ctypes.windll.kernel32.FlushConsoleInputBuffer 124 FlushConsoleInputBuffer.argtypes = [HANDLE] 125 FlushConsoleInputBuffer.restype = BOOL 126 127 WaitForSingleObject = ctypes.windll.kernel32.WaitForSingleObject 128 WaitForSingleObject.argtypes = [HANDLE, DWORD] 129 WaitForSingleObject.restype = DWORD 130 131 DuplicateHandle = ctypes.windll.kernel32.DuplicateHandle 132 DuplicateHandle.argtypes = [HANDLE, HANDLE, HANDLE, LPHANDLE, 133 DWORD, BOOL, DWORD] 134 DuplicateHandle.restype = BOOL 135 136 SetHandleInformation = ctypes.windll.kernel32.SetHandleInformation 137 SetHandleInformation.argtypes = [HANDLE, DWORD, DWORD] 138 SetHandleInformation.restype = BOOL 139 140 CloseHandle = ctypes.windll.kernel32.CloseHandle 141 CloseHandle.argtypes = [HANDLE] 142 CloseHandle.restype = BOOL 143 144 CommandLineToArgvW = ctypes.windll.shell32.CommandLineToArgvW 145 CommandLineToArgvW.argtypes = [LPCWSTR, POINTER(ctypes.c_int)] 146 CommandLineToArgvW.restype = POINTER(LPCWSTR) 147 148 LocalFree = ctypes.windll.kernel32.LocalFree 149 LocalFree.argtypes = [HLOCAL] 150 LocalFree.restype = HLOCAL 151 152 class AvoidUNCPath(object): 153 """A context manager to protect command execution from UNC paths. 154 155 In the Win32 API, commands can't be invoked with the cwd being a UNC path. 156 This context manager temporarily changes directory to the 'C:' drive on 157 entering, and restores the original working directory on exit. 158 159 The context manager returns the starting working directory *if* it made a 160 change and None otherwise, so that users can apply the necessary adjustment 161 to their system calls in the event of a change. 162 163 Examples 164 -------- 165 :: 166 cmd = 'dir' 167 with AvoidUNCPath() as path: 168 if path is not None: 169 cmd = '"pushd %s &&"%s' % (path, cmd) 170 os.system(cmd) 171 """ 172 def __enter__(self): 173 self.path = os.getcwd() 174 self.is_unc_path = self.path.startswith(r"\\") 175 if self.is_unc_path: 176 # change to c drive (as cmd.exe cannot handle UNC addresses) 177 os.chdir("C:") 178 return self.path 179 else: 180 # We return None to signal that there was no change in the working 181 # directory 182 return None 183 184 def __exit__(self, exc_type, exc_value, traceback): 185 if self.is_unc_path: 186 os.chdir(self.path) 187 188 189 class Win32ShellCommandController(object): 190 """Runs a shell command in a 'with' context. 191 192 This implementation is Win32-specific. 193 194 Example: 195 # Runs the command interactively with default console stdin/stdout 196 with ShellCommandController('python -i') as scc: 197 scc.run() 198 199 # Runs the command using the provided functions for stdin/stdout 200 def my_stdout_func(s): 201 # print or save the string 's' 202 write_to_stdout(s) 203 def my_stdin_func(): 204 # If input is available, return it as a string. 205 if input_available(): 206 return get_input() 207 # If no input available, return None after a short delay to 208 # keep from blocking. 209 else: 210 time.sleep(0.01) 211 return None 212 213 with ShellCommandController('python -i') as scc: 214 scc.run(my_stdout_func, my_stdin_func) 215 """ 216 217 def __init__(self, cmd, mergeout = True): 218 """Initializes the shell command controller. 219 220 The cmd is the program to execute, and mergeout is 221 whether to blend stdout and stderr into one output 222 in stdout. 
Merging them together in this fashion more 223 reliably keeps stdout and stderr in the correct order 224 especially for interactive shell usage. 225 """ 226 self.cmd = cmd 227 self.mergeout = mergeout 228 229 def __enter__(self): 230 cmd = self.cmd 231 mergeout = self.mergeout 232 233 self.hstdout, self.hstdin, self.hstderr = None, None, None 234 self.piProcInfo = None 235 try: 236 p_hstdout, c_hstdout, p_hstderr, \ 237 c_hstderr, p_hstdin, c_hstdin = [None]*6 238 239 # SECURITY_ATTRIBUTES with inherit handle set to True 240 saAttr = SECURITY_ATTRIBUTES() 241 saAttr.nLength = ctypes.sizeof(saAttr) 242 saAttr.bInheritHandle = True 243 saAttr.lpSecurityDescriptor = None 244 245 def create_pipe(uninherit): 246 """Creates a Windows pipe, which consists of two handles. 247 248 The 'uninherit' parameter controls which handle is not 249 inherited by the child process. 250 """ 251 handles = HANDLE(), HANDLE() 252 if not CreatePipe(ctypes.byref(handles[0]), 253 ctypes.byref(handles[1]), ctypes.byref(saAttr), 0): 254 raise ctypes.WinError() 255 if not SetHandleInformation(handles[uninherit], 256 HANDLE_FLAG_INHERIT, 0): 257 raise ctypes.WinError() 258 return handles[0].value, handles[1].value 259 260 p_hstdout, c_hstdout = create_pipe(uninherit=0) 261 # 'mergeout' signals that stdout and stderr should be merged. 262 # We do that by using one pipe for both of them. 263 if mergeout: 264 c_hstderr = HANDLE() 265 if not DuplicateHandle(GetCurrentProcess(), c_hstdout, 266 GetCurrentProcess(), ctypes.byref(c_hstderr), 267 0, True, DUPLICATE_SAME_ACCESS): 268 raise ctypes.WinError() 269 else: 270 p_hstderr, c_hstderr = create_pipe(uninherit=0) 271 c_hstdin, p_hstdin = create_pipe(uninherit=1) 272 273 # Create the process object 274 piProcInfo = PROCESS_INFORMATION() 275 siStartInfo = STARTUPINFO() 276 siStartInfo.cb = ctypes.sizeof(siStartInfo) 277 siStartInfo.hStdInput = c_hstdin 278 siStartInfo.hStdOutput = c_hstdout 279 siStartInfo.hStdError = c_hstderr 280 siStartInfo.dwFlags = STARTF_USESTDHANDLES 281 dwCreationFlags = CREATE_SUSPENDED | CREATE_NO_WINDOW # | CREATE_NEW_CONSOLE 282 283 if not CreateProcess(None, 284 u"cmd.exe /c " + cmd, 285 None, None, True, dwCreationFlags, 286 None, None, ctypes.byref(siStartInfo), 287 ctypes.byref(piProcInfo)): 288 raise ctypes.WinError() 289 290 # Close this process's versions of the child handles 291 CloseHandle(c_hstdin) 292 c_hstdin = None 293 CloseHandle(c_hstdout) 294 c_hstdout = None 295 if c_hstderr is not None: 296 CloseHandle(c_hstderr) 297 c_hstderr = None 298 299 # Transfer ownership of the parent handles to the object 300 self.hstdin = p_hstdin 301 p_hstdin = None 302 self.hstdout = p_hstdout 303 p_hstdout = None 304 if not mergeout: 305 self.hstderr = p_hstderr 306 p_hstderr = None 307 self.piProcInfo = piProcInfo 308 309 finally: 310 if p_hstdin: 311 CloseHandle(p_hstdin) 312 if c_hstdin: 313 CloseHandle(c_hstdin) 314 if p_hstdout: 315 CloseHandle(p_hstdout) 316 if c_hstdout: 317 CloseHandle(c_hstdout) 318 if p_hstderr: 319 CloseHandle(p_hstderr) 320 if c_hstderr: 321 CloseHandle(c_hstderr) 322 323 return self 324 325 def _stdin_thread(self, handle, hprocess, func, stdout_func): 326 exitCode = DWORD() 327 bytesWritten = DWORD(0) 328 while True: 329 #print("stdin thread loop start") 330 # Get the input string (may be bytes or unicode) 331 data = func() 332 333 # None signals to poll whether the process has exited 334 if data is None: 335 #print("checking for process completion") 336 if not GetExitCodeProcess(hprocess, ctypes.byref(exitCode)): 337 
raise ctypes.WinError() 338 if exitCode.value != STILL_ACTIVE: 339 return 340 # TESTING: Does zero-sized writefile help? 341 if not WriteFile(handle, "", 0, 342 ctypes.byref(bytesWritten), None): 343 raise ctypes.WinError() 344 continue 345 #print("\nGot str %s\n" % repr(data), file=sys.stderr) 346 347 # Encode the string to the console encoding 348 if isinstance(data, unicode): #FIXME: Python3 349 data = data.encode('utf_8') 350 351 # What we have now must be a string of bytes 352 if not isinstance(data, str): #FIXME: Python3 353 raise RuntimeError("internal stdin function string error") 354 355 # An empty string signals EOF 356 if len(data) == 0: 357 return 358 359 # In a windows console, sometimes the input is echoed, 360 # but sometimes not. How do we determine when to do this? 361 stdout_func(data) 362 # WriteFile may not accept all the data at once. 363 # Loop until everything is processed 364 while len(data) != 0: 365 #print("Calling writefile") 366 if not WriteFile(handle, data, len(data), 367 ctypes.byref(bytesWritten), None): 368 # This occurs at exit 369 if GetLastError() == ERROR_NO_DATA: 370 return 371 raise ctypes.WinError() 372 #print("Called writefile") 373 data = data[bytesWritten.value:] 374 375 def _stdout_thread(self, handle, func): 376 # Allocate the output buffer 377 data = ctypes.create_string_buffer(4096) 378 while True: 379 bytesRead = DWORD(0) 380 if not ReadFile(handle, data, 4096, 381 ctypes.byref(bytesRead), None): 382 le = GetLastError() 383 if le == ERROR_BROKEN_PIPE: 384 return 385 else: 386 raise ctypes.WinError() 387 # FIXME: Python3 388 s = data.value[0:bytesRead.value] 389 #print("\nv: %s" % repr(s), file=sys.stderr) 390 func(s.decode('utf_8', 'replace')) 391 392 def run(self, stdout_func = None, stdin_func = None, stderr_func = None): 393 """Runs the process, using the provided functions for I/O. 394 395 The function stdin_func should return strings whenever a 396 character or characters become available. 397 The functions stdout_func and stderr_func are called whenever 398 something is printed to stdout or stderr, respectively. 399 These functions are called from different threads (but not 400 concurrently, because of the GIL). 
401 """ 402 if stdout_func is None and stdin_func is None and stderr_func is None: 403 return self._run_stdio() 404 405 if stderr_func is not None and self.mergeout: 406 raise RuntimeError("Shell command was initiated with " 407 "merged stdin/stdout, but a separate stderr_func " 408 "was provided to the run() method") 409 410 # Create a thread for each input/output handle 411 stdin_thread = None 412 threads = [] 413 if stdin_func: 414 stdin_thread = threading.Thread(target=self._stdin_thread, 415 args=(self.hstdin, self.piProcInfo.hProcess, 416 stdin_func, stdout_func)) 417 threads.append(threading.Thread(target=self._stdout_thread, 418 args=(self.hstdout, stdout_func))) 419 if not self.mergeout: 420 if stderr_func is None: 421 stderr_func = stdout_func 422 threads.append(threading.Thread(target=self._stdout_thread, 423 args=(self.hstderr, stderr_func))) 424 # Start the I/O threads and the process 425 if ResumeThread(self.piProcInfo.hThread) == 0xFFFFFFFF: 426 raise ctypes.WinError() 427 if stdin_thread is not None: 428 stdin_thread.start() 429 for thread in threads: 430 thread.start() 431 # Wait for the process to complete 432 if WaitForSingleObject(self.piProcInfo.hProcess, INFINITE) == \ 433 WAIT_FAILED: 434 raise ctypes.WinError() 435 # Wait for the I/O threads to complete 436 for thread in threads: 437 thread.join() 438 439 # Wait for the stdin thread to complete 440 if stdin_thread is not None: 441 stdin_thread.join() 442 443 def _stdin_raw_nonblock(self): 444 """Use the raw Win32 handle of sys.stdin to do non-blocking reads""" 445 # WARNING: This is experimental, and produces inconsistent results. 446 # It's possible for the handle not to be appropriate for use 447 # with WaitForSingleObject, among other things. 448 handle = msvcrt.get_osfhandle(sys.stdin.fileno()) 449 result = WaitForSingleObject(handle, 100) 450 if result == WAIT_FAILED: 451 raise ctypes.WinError() 452 elif result == WAIT_TIMEOUT: 453 print(".", end='') 454 return None 455 else: 456 data = ctypes.create_string_buffer(256) 457 bytesRead = DWORD(0) 458 print('?', end='') 459 460 if not ReadFile(handle, data, 256, 461 ctypes.byref(bytesRead), None): 462 raise ctypes.WinError() 463 # This ensures the non-blocking works with an actual console 464 # Not checking the error, so the processing will still work with 465 # other handle types 466 FlushConsoleInputBuffer(handle) 467 468 data = data.value 469 data = data.replace('\r\n', '\n') 470 data = data.replace('\r', '\n') 471 print(repr(data) + " ", end='') 472 return data 473 474 def _stdin_raw_block(self): 475 """Use a blocking stdin read""" 476 # The big problem with the blocking read is that it doesn't 477 # exit when it's supposed to in all contexts. An extra 478 # key-press may be required to trigger the exit. 479 try: 480 data = sys.stdin.read(1) 481 data = data.replace('\r', '\n') 482 return data 483 except WindowsError as we: 484 if we.winerror == ERROR_NO_DATA: 485 # This error occurs when the pipe is closed 486 return None 487 else: 488 # Otherwise let the error propagate 489 raise we 490 491 def _stdout_raw(self, s): 492 """Writes the string to stdout""" 493 print(s, end='', file=sys.stdout) 494 sys.stdout.flush() 495 496 def _stderr_raw(self, s): 497 """Writes the string to stdout""" 498 print(s, end='', file=sys.stderr) 499 sys.stderr.flush() 500 501 def _run_stdio(self): 502 """Runs the process using the system standard I/O. 503 504 IMPORTANT: stdin needs to be asynchronous, so the Python 505 sys.stdin object is not used. 
Instead, 506 msvcrt.kbhit/getwch are used asynchronously. 507 """ 508 # Disable Line and Echo mode 509 #lpMode = DWORD() 510 #handle = msvcrt.get_osfhandle(sys.stdin.fileno()) 511 #if GetConsoleMode(handle, ctypes.byref(lpMode)): 512 # set_console_mode = True 513 # if not SetConsoleMode(handle, lpMode.value & 514 # ~(ENABLE_ECHO_INPUT | ENABLE_LINE_INPUT | ENABLE_PROCESSED_INPUT)): 515 # raise ctypes.WinError() 516 517 if self.mergeout: 518 return self.run(stdout_func = self._stdout_raw, 519 stdin_func = self._stdin_raw_block) 520 else: 521 return self.run(stdout_func = self._stdout_raw, 522 stdin_func = self._stdin_raw_block, 523 stderr_func = self._stderr_raw) 524 525 # Restore the previous console mode 526 #if set_console_mode: 527 # if not SetConsoleMode(handle, lpMode.value): 528 # raise ctypes.WinError() 529 530 def __exit__(self, exc_type, exc_value, traceback): 531 if self.hstdin: 532 CloseHandle(self.hstdin) 533 self.hstdin = None 534 if self.hstdout: 535 CloseHandle(self.hstdout) 536 self.hstdout = None 537 if self.hstderr: 538 CloseHandle(self.hstderr) 539 self.hstderr = None 540 if self.piProcInfo is not None: 541 CloseHandle(self.piProcInfo.hProcess) 542 CloseHandle(self.piProcInfo.hThread) 543 self.piProcInfo = None 544 545 546 def system(cmd): 547 """Win32 version of os.system() that works with network shares. 548 549 Note that this implementation returns None, as meant for use in IPython. 550 551 Parameters 552 ---------- 553 cmd : str 554 A command to be executed in the system shell. 555 556 Returns 557 ------- 558 None : we explicitly do NOT return the subprocess status code, as this 559 utility is meant to be used extensively in IPython, where any return value 560 would trigger :func:`sys.displayhook` calls. 561 """ 562 with AvoidUNCPath() as path: 563 if path is not None: 564 cmd = '"pushd %s &&"%s' % (path, cmd) 565 with Win32ShellCommandController(cmd) as scc: 566 scc.run() 567 568 569 if __name__ == "__main__": 570 print("Test starting!") 571 #system("cmd") 572 system("python -i") 573 print("Test finished!") 574 [end of IPython/utils/_process_win32_controller.py] [start of IPython/utils/path.py] 1 # encoding: utf-8 2 """ 3 Utilities for path handling. 4 """ 5 6 # Copyright (c) IPython Development Team. 7 # Distributed under the terms of the Modified BSD License. 8 9 import os 10 import sys 11 import errno 12 import shutil 13 import random 14 import glob 15 from warnings import warn 16 17 from IPython.utils.process import system 18 from IPython.utils import py3compat 19 from IPython.utils.decorators import undoc 20 21 #----------------------------------------------------------------------------- 22 # Code 23 #----------------------------------------------------------------------------- 24 25 fs_encoding = sys.getfilesystemencoding() 26 27 def _writable_dir(path): 28 """Whether `path` is a directory, to which the user has write access.""" 29 return os.path.isdir(path) and os.access(path, os.W_OK) 30 31 if sys.platform == 'win32': 32 def _get_long_path_name(path): 33 """Get a long path name (expand ~) on Windows using ctypes. 
34 35 Examples 36 -------- 37 38 >>> get_long_path_name('c:\\docume~1') 39 'c:\\\\Documents and Settings' 40 41 """ 42 try: 43 import ctypes 44 except ImportError: 45 raise ImportError('you need to have ctypes installed for this to work') 46 _GetLongPathName = ctypes.windll.kernel32.GetLongPathNameW 47 _GetLongPathName.argtypes = [ctypes.c_wchar_p, ctypes.c_wchar_p, 48 ctypes.c_uint ] 49 50 buf = ctypes.create_unicode_buffer(260) 51 rv = _GetLongPathName(path, buf, 260) 52 if rv == 0 or rv > 260: 53 return path 54 else: 55 return buf.value 56 else: 57 def _get_long_path_name(path): 58 """Dummy no-op.""" 59 return path 60 61 62 63 def get_long_path_name(path): 64 """Expand a path into its long form. 65 66 On Windows this expands any ~ in the paths. On other platforms, it is 67 a null operation. 68 """ 69 return _get_long_path_name(path) 70 71 72 def unquote_filename(name, win32=(sys.platform=='win32')): 73 """ On Windows, remove leading and trailing quotes from filenames. 74 75 This function has been deprecated and should not be used any more: 76 unquoting is now taken care of by :func:`IPython.utils.process.arg_split`. 77 """ 78 warn("'unquote_filename' is deprecated since IPython 5.0 and should not " 79 "be used anymore", DeprecationWarning, stacklevel=2) 80 if win32: 81 if name.startswith(("'", '"')) and name.endswith(("'", '"')): 82 name = name[1:-1] 83 return name 84 85 86 def compress_user(path): 87 """Reverse of :func:`os.path.expanduser` 88 """ 89 home = os.path.expanduser('~') 90 if path.startswith(home): 91 path = "~" + path[len(home):] 92 return path 93 94 def get_py_filename(name, force_win32=None): 95 """Return a valid python filename in the current directory. 96 97 If the given name is not a file, it adds '.py' and searches again. 98 Raises IOError with an informative message if the file isn't found. 99 """ 100 101 name = os.path.expanduser(name) 102 if force_win32 is not None: 103 warn("The 'force_win32' argument to 'get_py_filename' is deprecated " 104 "since IPython 5.0 and should not be used anymore", 105 DeprecationWarning, stacklevel=2) 106 if not os.path.isfile(name) and not name.endswith('.py'): 107 name += '.py' 108 if os.path.isfile(name): 109 return name 110 else: 111 raise IOError('File `%r` not found.' % name) 112 113 114 def filefind(filename, path_dirs=None): 115 """Find a file by looking through a sequence of paths. 116 117 This iterates through a sequence of paths looking for a file and returns 118 the full, absolute path of the first occurrence of the file. If no set of 119 path dirs is given, the filename is tested as is, after running through 120 :func:`expandvars` and :func:`expanduser`. Thus a simple call:: 121 122 filefind('myfile.txt') 123 124 will find the file in the current working dir, but:: 125 126 filefind('~/myfile.txt') 127 128 Will find the file in the users home directory. This function does not 129 automatically try any paths, such as the cwd or the user's home directory. 130 131 Parameters 132 ---------- 133 filename : str 134 The filename to look for. 135 path_dirs : str, None or sequence of str 136 The sequence of paths to look for the file in. If None, the filename 137 need to be absolute or be in the cwd. If a string, the string is 138 put into a sequence and the searched. If a sequence, walk through 139 each element and join with ``filename``, calling :func:`expandvars` 140 and :func:`expanduser` before testing for existence. 141 142 Returns 143 ------- 144 Raises :exc:`IOError` or returns absolute path to file. 
145 """ 146 147 # If paths are quoted, abspath gets confused, strip them... 148 filename = filename.strip('"').strip("'") 149 # If the input is an absolute path, just check it exists 150 if os.path.isabs(filename) and os.path.isfile(filename): 151 return filename 152 153 if path_dirs is None: 154 path_dirs = ("",) 155 elif isinstance(path_dirs, str): 156 path_dirs = (path_dirs,) 157 158 for path in path_dirs: 159 if path == '.': path = os.getcwd() 160 testname = expand_path(os.path.join(path, filename)) 161 if os.path.isfile(testname): 162 return os.path.abspath(testname) 163 164 raise IOError("File %r does not exist in any of the search paths: %r" % 165 (filename, path_dirs) ) 166 167 168 class HomeDirError(Exception): 169 pass 170 171 172 def get_home_dir(require_writable=False): 173 """Return the 'home' directory, as a unicode string. 174 175 Uses os.path.expanduser('~'), and checks for writability. 176 177 See stdlib docs for how this is determined. 178 $HOME is first priority on *ALL* platforms. 179 180 Parameters 181 ---------- 182 183 require_writable : bool [default: False] 184 if True: 185 guarantees the return value is a writable directory, otherwise 186 raises HomeDirError 187 if False: 188 The path is resolved, but it is not guaranteed to exist or be writable. 189 """ 190 191 homedir = os.path.expanduser('~') 192 # Next line will make things work even when /home/ is a symlink to 193 # /usr/home as it is on FreeBSD, for example 194 homedir = os.path.realpath(homedir) 195 196 if not _writable_dir(homedir) and os.name == 'nt': 197 # expanduser failed, use the registry to get the 'My Documents' folder. 198 try: 199 try: 200 import winreg as wreg # Py 3 201 except ImportError: 202 import _winreg as wreg # Py 2 203 key = wreg.OpenKey( 204 wreg.HKEY_CURRENT_USER, 205 r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" 206 ) 207 homedir = wreg.QueryValueEx(key,'Personal')[0] 208 key.Close() 209 except: 210 pass 211 212 if (not require_writable) or _writable_dir(homedir): 213 return py3compat.cast_unicode(homedir, fs_encoding) 214 else: 215 raise HomeDirError('%s is not a writable dir, ' 216 'set $HOME environment variable to override' % homedir) 217 218 def get_xdg_dir(): 219 """Return the XDG_CONFIG_HOME, if it is defined and exists, else None. 220 221 This is only for non-OS X posix (Linux,Unix,etc.) systems. 222 """ 223 224 env = os.environ 225 226 if os.name == 'posix' and sys.platform != 'darwin': 227 # Linux, Unix, AIX, etc. 228 # use ~/.config if empty OR not set 229 xdg = env.get("XDG_CONFIG_HOME", None) or os.path.join(get_home_dir(), '.config') 230 if xdg and _writable_dir(xdg): 231 return py3compat.cast_unicode(xdg, fs_encoding) 232 233 return None 234 235 236 def get_xdg_cache_dir(): 237 """Return the XDG_CACHE_HOME, if it is defined and exists, else None. 238 239 This is only for non-OS X posix (Linux,Unix,etc.) systems. 240 """ 241 242 env = os.environ 243 244 if os.name == 'posix' and sys.platform != 'darwin': 245 # Linux, Unix, AIX, etc. 
246 # use ~/.cache if empty OR not set 247 xdg = env.get("XDG_CACHE_HOME", None) or os.path.join(get_home_dir(), '.cache') 248 if xdg and _writable_dir(xdg): 249 return py3compat.cast_unicode(xdg, fs_encoding) 250 251 return None 252 253 254 @undoc 255 def get_ipython_dir(): 256 warn("get_ipython_dir has moved to the IPython.paths module since IPython 4.0.", stacklevel=2) 257 from IPython.paths import get_ipython_dir 258 return get_ipython_dir() 259 260 @undoc 261 def get_ipython_cache_dir(): 262 warn("get_ipython_cache_dir has moved to the IPython.paths module since IPython 4.0.", stacklevel=2) 263 from IPython.paths import get_ipython_cache_dir 264 return get_ipython_cache_dir() 265 266 @undoc 267 def get_ipython_package_dir(): 268 warn("get_ipython_package_dir has moved to the IPython.paths module since IPython 4.0.", stacklevel=2) 269 from IPython.paths import get_ipython_package_dir 270 return get_ipython_package_dir() 271 272 @undoc 273 def get_ipython_module_path(module_str): 274 warn("get_ipython_module_path has moved to the IPython.paths module since IPython 4.0.", stacklevel=2) 275 from IPython.paths import get_ipython_module_path 276 return get_ipython_module_path(module_str) 277 278 @undoc 279 def locate_profile(profile='default'): 280 warn("locate_profile has moved to the IPython.paths module since IPython 4.0.", stacklevel=2) 281 from IPython.paths import locate_profile 282 return locate_profile(profile=profile) 283 284 def expand_path(s): 285 """Expand $VARS and ~names in a string, like a shell 286 287 :Examples: 288 289 In [2]: os.environ['FOO']='test' 290 291 In [3]: expand_path('variable FOO is $FOO') 292 Out[3]: 'variable FOO is test' 293 """ 294 # This is a pretty subtle hack. When expand user is given a UNC path 295 # on Windows (\\server\share$\%username%), os.path.expandvars, removes 296 # the $ to get (\\server\share\%username%). I think it considered $ 297 # alone an empty var. But, we need the $ to remains there (it indicates 298 # a hidden share). 299 if os.name=='nt': 300 s = s.replace('$\\', 'IPYTHON_TEMP') 301 s = os.path.expandvars(os.path.expanduser(s)) 302 if os.name=='nt': 303 s = s.replace('IPYTHON_TEMP', '$\\') 304 return s 305 306 307 def unescape_glob(string): 308 """Unescape glob pattern in `string`.""" 309 def unescape(s): 310 for pattern in '*[]!?': 311 s = s.replace(r'\{0}'.format(pattern), pattern) 312 return s 313 return '\\'.join(map(unescape, string.split('\\\\'))) 314 315 316 def shellglob(args): 317 """ 318 Do glob expansion for each element in `args` and return a flattened list. 319 320 Unmatched glob pattern will remain as-is in the returned list. 321 322 """ 323 expanded = [] 324 # Do not unescape backslash in Windows as it is interpreted as 325 # path separator: 326 unescape = unescape_glob if sys.platform != 'win32' else lambda x: x 327 for a in args: 328 expanded.extend(glob.glob(a) or [unescape(a)]) 329 return expanded 330 331 332 def target_outdated(target,deps): 333 """Determine whether a target is out of date. 334 335 target_outdated(target,deps) -> 1/0 336 337 deps: list of filenames which MUST exist. 338 target: single filename which may or may not exist. 339 340 If target doesn't exist or is older than any file listed in deps, return 341 true, otherwise return false. 
342 """ 343 try: 344 target_time = os.path.getmtime(target) 345 except os.error: 346 return 1 347 for dep in deps: 348 dep_time = os.path.getmtime(dep) 349 if dep_time > target_time: 350 #print "For target",target,"Dep failed:",dep # dbg 351 #print "times (dep,tar):",dep_time,target_time # dbg 352 return 1 353 return 0 354 355 356 def target_update(target,deps,cmd): 357 """Update a target with a given command given a list of dependencies. 358 359 target_update(target,deps,cmd) -> runs cmd if target is outdated. 360 361 This is just a wrapper around target_outdated() which calls the given 362 command if target is outdated.""" 363 364 if target_outdated(target,deps): 365 system(cmd) 366 367 368 ENOLINK = 1998 369 370 def link(src, dst): 371 """Hard links ``src`` to ``dst``, returning 0 or errno. 372 373 Note that the special errno ``ENOLINK`` will be returned if ``os.link`` isn't 374 supported by the operating system. 375 """ 376 377 if not hasattr(os, "link"): 378 return ENOLINK 379 link_errno = 0 380 try: 381 os.link(src, dst) 382 except OSError as e: 383 link_errno = e.errno 384 return link_errno 385 386 387 def link_or_copy(src, dst): 388 """Attempts to hardlink ``src`` to ``dst``, copying if the link fails. 389 390 Attempts to maintain the semantics of ``shutil.copy``. 391 392 Because ``os.link`` does not overwrite files, a unique temporary file 393 will be used if the target already exists, then that file will be moved 394 into place. 395 """ 396 397 if os.path.isdir(dst): 398 dst = os.path.join(dst, os.path.basename(src)) 399 400 link_errno = link(src, dst) 401 if link_errno == errno.EEXIST: 402 if os.stat(src).st_ino == os.stat(dst).st_ino: 403 # dst is already a hard link to the correct file, so we don't need 404 # to do anything else. If we try to link and rename the file 405 # anyway, we get duplicate files - see http://bugs.python.org/issue21876 406 return 407 408 new_dst = dst + "-temp-%04X" %(random.randint(1, 16**4), ) 409 try: 410 link_or_copy(src, new_dst) 411 except: 412 try: 413 os.remove(new_dst) 414 except OSError: 415 pass 416 raise 417 os.rename(new_dst, dst) 418 elif link_errno != 0: 419 # Either link isn't supported, or the filesystem doesn't support 420 # linking, or 'src' and 'dst' are on different filesystems. 421 shutil.copy(src, dst) 422 423 def ensure_dir_exists(path, mode=0o755): 424 """ensure that a directory exists 425 426 If it doesn't exist, try to create it and protect against a race condition 427 if another process is doing the same. 428 429 The default permissions are 755, which differ from os.makedirs default of 777. 430 """ 431 if not os.path.exists(path): 432 try: 433 os.makedirs(path, mode=mode) 434 except OSError as e: 435 if e.errno != errno.EEXIST: 436 raise 437 elif not os.path.isdir(path): 438 raise IOError("%r exists but is not a directory" % path) 439 [end of IPython/utils/path.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
ipython/ipython
032cc8c92986762204a801eed36c57822b4a6345
%%writefile can't handle white space in file path/filename It is as the title suggests. In ipython, I have tried `%%writefile "test out.txt"` `%%writefile 'test out.txt'` `%%writefile test out.txt` `%%writefile test\ out.txt` and none gives me the expected behavior. The first one leads to creating a file literally named "test out.txt" (yes, with quotes). The third one just leads to > unrecognized arguments: out.txt Is this a bug or am I just missing something obvious? Using IPython 6.5.0 with Python 3.6.5.
This is probably a bug, it shouldn't be too hard to fix, it will just take some time to figure out exactly how to parse the line. Here is where the [code of writefile is](https://github.com/ipython/ipython/blob/032cc8c92986762204a801eed36c57822b4a6345/IPython/core/magics/osm.py#L763-L791). The parsing is based on argparse.
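A minimal illustration of the whitespace-splitting behaviour the hint describes — plain `argparse` only, not IPython's actual `parse_argstring` code path; the `%%writefile` prog name and the `filename` argument are stand-ins:

```python
import argparse

# Stand-in for the %%writefile argument spec -- not IPython code.
parser = argparse.ArgumentParser(prog="%%writefile")
parser.add_argument("filename")

# Split on whitespace: `test out.txt` becomes two tokens. parse_args() would
# abort with "unrecognized arguments: out.txt"; parse_known_args() is used
# here only so the script keeps running and the leftover token is visible.
ns, leftover = parser.parse_known_args("test out.txt".split())
print(ns.filename, leftover)        # test ['out.txt']

# If the surrounding quotes survive tokenisation they become part of the
# parsed value, which is how a file literally named "test out.txt"
# (quotes and all) ends up on disk.
ns, _ = parser.parse_known_args(['"test out.txt"'])
print(ns.filename)                  # "test out.txt"
```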
2018-09-25T14:27:27Z
<patch> diff --git a/IPython/core/magics/osm.py b/IPython/core/magics/osm.py --- a/IPython/core/magics/osm.py +++ b/IPython/core/magics/osm.py @@ -777,8 +777,11 @@ def writefile(self, line, cell): The file will be overwritten unless the -a (--append) flag is specified. """ args = magic_arguments.parse_argstring(self.writefile, line) - filename = os.path.expanduser(args.filename) - + if re.match(r'[\'*\']|["*"]', args.filename): + filename = os.path.expanduser(args.filename[1:-1]) + else: + filename = os.path.expanduser(args.filename) + if os.path.exists(filename): if args.append: print("Appending to %s" % filename) </patch>
[]
[]
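As a quick, self-contained check of the quote-stripping branch in the %%writefile patch above: this runs as a plain script, with `args.filename` simulated as a string rather than produced by `magic_arguments.parse_argstring`.

```python
import os
import re

# Simulated parse result; in the magic this value comes from
# magic_arguments.parse_argstring(self.writefile, line).
args_filename = '"test out.txt"'

# Same condition and slicing as the patched writefile() above.
if re.match(r'[\'*\']|["*"]', args_filename):
    filename = os.path.expanduser(args_filename[1:-1])
else:
    filename = os.path.expanduser(args_filename)

print(filename)                     # test out.txt
```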
numpy__numpy-20934
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: AttributeError: 'Extension' object has no attribute 'extra_c_compile_args' ### Describe the issue: It seems https://github.com/numpy/numpy/pull/19713 included in numpy >=1.22 is preventing to build some extensions such as in the assimulo package with the error: 'Extension' object has no attribute 'extra_c_compile_args' cc @serge-sans-paille ### Reproduce the code example: ```python curl -fSsL https://github.com/modelon-community/Assimulo/archive/Assimulo-3.2.9.tar.gz | tar xz cd Assimulo-Assimulo-3.2.9 python3 setup.py install --extra-fortran-link-flags="-shared" --sundials-home=/usr/local --lapack-home=/usr/lib64 --blas-home=/usr/lib64 ``` ``` ### Error message: ```shell Traceback (most recent call last): File "/usr/local/src/Assimulo-Assimulo-3.2.9/setup.py", line 691, in <module> ndc.setup(name=NAME, File "/usr/local/lib/python3.9/site-packages/numpy/distutils/core.py", line 169, in setup return old_setup(**new_attr) File "/usr/local/lib/python3.9/site-packages/setuptools/__init__.py", line 155, in setup return distutils.core.setup(**attrs) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup return run_commands(dist) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands dist.run_commands() File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/local/lib/python3.9/site-packages/numpy/distutils/command/install.py", line 60, in run r = self.setuptools_run() File "/usr/local/lib/python3.9/site-packages/numpy/distutils/command/install.py", line 54, in setuptools_run self.do_egg_install() File "/usr/local/lib/python3.9/site-packages/setuptools/command/install.py", line 116, in do_egg_install self.run_command('bdist_egg') File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/local/lib/python3.9/site-packages/setuptools/command/bdist_egg.py", line 164, in run cmd = self.call_command('install_lib', warn_dir=0) File "/usr/local/lib/python3.9/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command self.run_command(cmdname) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/local/lib/python3.9/site-packages/setuptools/command/install_lib.py", line 11, in run self.build() File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/command/install_lib.py", line 107, in build self.run_command('build_ext') File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/local/lib/python3.9/site-packages/numpy/distutils/command/build_ext.py", line 316, in run self.build_extensions() File 
"/usr/local/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/usr/local/lib/python3.9/site-packages/numpy/distutils/command/build_ext.py", line 380, in build_extension extra_cflags = ext.extra_c_compile_args or [] AttributeError: 'Extension' object has no attribute 'extra_c_compile_args' ``` ``` ### NumPy/Python version information: numpy 1.22.x </issue> <code> [start of README.md] 1 # <a href="https://numpy.org/"><img alt="NumPy" src="/branding/logo/primary/numpylogo.svg" height="60"></a> 2 3 <!--[![Azure Pipelines](https://dev.azure.com/numpy/numpy/_apis/build/status/numpy.numpy?branchName=main)](--> 4 <!--https://dev.azure.com/numpy/numpy/_build/latest?definitionId=1?branchName=main)--> 5 <!--[![Actions build_test](https://github.com/numpy/numpy/actions/workflows/build_test.yml/badge.svg)](--> 6 <!--https://github.com/numpy/numpy/actions/workflows/build_test.yml)--> 7 <!--[![TravisCI](https://app.travis-ci.com/numpy/numpy.svg?branch=main)](--> 8 <!--https://app.travis-ci.com/numpy/numpy)--> 9 <!--[![CircleCI](https://img.shields.io/circleci/project/github/numpy/numpy/main.svg?label=CircleCI)](--> 10 <!--https://circleci.com/gh/numpy/numpy)--> 11 <!--[![Codecov](https://codecov.io/gh/numpy/numpy/branch/main/graph/badge.svg)](--> 12 <!--https://codecov.io/gh/numpy/numpy)--> 13 14 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)]( 15 https://numfocus.org) 16 [![PyPI Downloads](https://img.shields.io/pypi/dm/numpy.svg?label=PyPI%20downloads)]( 17 https://pypi.org/project/numpy/) 18 [![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/numpy.svg?label=Conda%20downloads)]( 19 https://anaconda.org/conda-forge/numpy) 20 [![Stack Overflow](https://img.shields.io/badge/stackoverflow-Ask%20questions-blue.svg)]( 21 https://stackoverflow.com/questions/tagged/numpy) 22 [![Nature Paper](https://img.shields.io/badge/DOI-10.1038%2Fs41592--019--0686--2-blue)]( 23 https://doi.org/10.1038/s41586-020-2649-2) 24 25 NumPy is the fundamental package for scientific computing with Python. 26 27 - **Website:** https://www.numpy.org 28 - **Documentation:** https://numpy.org/doc 29 - **Mailing list:** https://mail.python.org/mailman/listinfo/numpy-discussion 30 - **Source code:** https://github.com/numpy/numpy 31 - **Contributing:** https://www.numpy.org/devdocs/dev/index.html 32 - **Bug reports:** https://github.com/numpy/numpy/issues 33 - **Report a security vulnerability:** https://tidelift.com/docs/security 34 35 It provides: 36 37 - a powerful N-dimensional array object 38 - sophisticated (broadcasting) functions 39 - tools for integrating C/C++ and Fortran code 40 - useful linear algebra, Fourier transform, and random number capabilities 41 42 Testing: 43 44 NumPy requires `pytest` and `hypothesis`. Tests can then be run after installation with: 45 46 python -c 'import numpy; numpy.test()' 47 48 Code of Conduct 49 ---------------------- 50 51 NumPy is a community-driven open source project developed by a diverse group of 52 [contributors](https://numpy.org/teams/). The NumPy leadership has made a strong 53 commitment to creating an open, inclusive, and positive community. 
Please read the 54 [NumPy Code of Conduct](https://numpy.org/code-of-conduct/) for guidance on how to interact 55 with others in a way that makes our community thrive. 56 57 Call for Contributions 58 ---------------------- 59 60 The NumPy project welcomes your expertise and enthusiasm! 61 62 Small improvements or fixes are always appreciated; issues labeled as ["good 63 first issue"](https://github.com/numpy/numpy/labels/good%20first%20issue) 64 may be a good starting point. If you are considering larger contributions 65 to the source code, please contact us through the [mailing 66 list](https://mail.python.org/mailman/listinfo/numpy-discussion) first. 67 68 Writing code isn’t the only way to contribute to NumPy. You can also: 69 - review pull requests 70 - help us stay on top of new and old issues 71 - develop tutorials, presentations, and other educational materials 72 - maintain and improve [our website](https://github.com/numpy/numpy.org) 73 - develop graphic design for our brand assets and promotional materials 74 - translate website content 75 - help with outreach and onboard new contributors 76 - write grant proposals and help with other fundraising efforts 77 78 For more information about the ways you can contribute to NumPy, visit [our website](https://numpy.org/contribute/). 79 If you’re unsure where to start or how your skills fit in, reach out! You can 80 ask on the mailing list or here, on GitHub, by opening a new issue or leaving a 81 comment on a relevant issue that is already open. 82 83 Our preferred channels of communication are all public, but if you’d like to 84 speak to us in private first, contact our community coordinators at 85 [email protected] or on Slack (write [email protected] for 86 an invitation). 87 88 We also have a biweekly community call, details of which are announced on the 89 mailing list. You are very welcome to join. 90 91 If you are new to contributing to open source, [this 92 guide](https://opensource.guide/how-to-contribute/) helps explain why, what, 93 and how to successfully get involved. 94 [end of README.md] [start of numpy/distutils/command/install.py] 1 import sys 2 if 'setuptools' in sys.modules: 3 import setuptools.command.install as old_install_mod 4 have_setuptools = True 5 else: 6 import distutils.command.install as old_install_mod 7 have_setuptools = False 8 from distutils.file_util import write_file 9 10 old_install = old_install_mod.install 11 12 class install(old_install): 13 14 # Always run install_clib - the command is cheap, so no need to bypass it; 15 # but it's not run by setuptools -- so it's run again in install_data 16 sub_commands = old_install.sub_commands + [ 17 ('install_clib', lambda x: True) 18 ] 19 20 def finalize_options (self): 21 old_install.finalize_options(self) 22 self.install_lib = self.install_libbase 23 24 def setuptools_run(self): 25 """ The setuptools version of the .run() method. 26 27 We must pull in the entire code so we can override the level used in the 28 _getframe() call since we wrap this call by one more level. 29 """ 30 from distutils.command.install import install as distutils_install 31 32 # Explicit request for old-style install? Just do it 33 if self.old_and_unmanageable or self.single_version_externally_managed: 34 return distutils_install.run(self) 35 36 # Attempt to detect whether we were called from setup() or by another 37 # command. If we were called by setup(), our caller will be the 38 # 'run_command' method in 'distutils.dist', and *its* caller will be 39 # the 'run_commands' method. 
If we were called any other way, our 40 # immediate caller *might* be 'run_command', but it won't have been 41 # called by 'run_commands'. This is slightly kludgy, but seems to 42 # work. 43 # 44 caller = sys._getframe(3) 45 caller_module = caller.f_globals.get('__name__', '') 46 caller_name = caller.f_code.co_name 47 48 if caller_module != 'distutils.dist' or caller_name!='run_commands': 49 # We weren't called from the command line or setup(), so we 50 # should run in backward-compatibility mode to support bdist_* 51 # commands. 52 distutils_install.run(self) 53 else: 54 self.do_egg_install() 55 56 def run(self): 57 if not have_setuptools: 58 r = old_install.run(self) 59 else: 60 r = self.setuptools_run() 61 if self.record: 62 # bdist_rpm fails when INSTALLED_FILES contains 63 # paths with spaces. Such paths must be enclosed 64 # with double-quotes. 65 with open(self.record, 'r') as f: 66 lines = [] 67 need_rewrite = False 68 for l in f: 69 l = l.rstrip() 70 if ' ' in l: 71 need_rewrite = True 72 l = '"%s"' % (l) 73 lines.append(l) 74 if need_rewrite: 75 self.execute(write_file, 76 (self.record, lines), 77 "re-writing list of installed files to '%s'" % 78 self.record) 79 return r 80 [end of numpy/distutils/command/install.py] [start of numpy/distutils/core.py] 1 import sys 2 from distutils.core import Distribution 3 4 if 'setuptools' in sys.modules: 5 have_setuptools = True 6 from setuptools import setup as old_setup 7 # easy_install imports math, it may be picked up from cwd 8 from setuptools.command import easy_install 9 try: 10 # very old versions of setuptools don't have this 11 from setuptools.command import bdist_egg 12 except ImportError: 13 have_setuptools = False 14 else: 15 from distutils.core import setup as old_setup 16 have_setuptools = False 17 18 import warnings 19 import distutils.core 20 import distutils.dist 21 22 from numpy.distutils.extension import Extension # noqa: F401 23 from numpy.distutils.numpy_distribution import NumpyDistribution 24 from numpy.distutils.command import config, config_compiler, \ 25 build, build_py, build_ext, build_clib, build_src, build_scripts, \ 26 sdist, install_data, install_headers, install, bdist_rpm, \ 27 install_clib 28 from numpy.distutils.misc_util import is_sequence, is_string 29 30 numpy_cmdclass = {'build': build.build, 31 'build_src': build_src.build_src, 32 'build_scripts': build_scripts.build_scripts, 33 'config_cc': config_compiler.config_cc, 34 'config_fc': config_compiler.config_fc, 35 'config': config.config, 36 'build_ext': build_ext.build_ext, 37 'build_py': build_py.build_py, 38 'build_clib': build_clib.build_clib, 39 'sdist': sdist.sdist, 40 'install_data': install_data.install_data, 41 'install_headers': install_headers.install_headers, 42 'install_clib': install_clib.install_clib, 43 'install': install.install, 44 'bdist_rpm': bdist_rpm.bdist_rpm, 45 } 46 if have_setuptools: 47 # Use our own versions of develop and egg_info to ensure that build_src is 48 # handled appropriately. 
49 from numpy.distutils.command import develop, egg_info 50 numpy_cmdclass['bdist_egg'] = bdist_egg.bdist_egg 51 numpy_cmdclass['develop'] = develop.develop 52 numpy_cmdclass['easy_install'] = easy_install.easy_install 53 numpy_cmdclass['egg_info'] = egg_info.egg_info 54 55 def _dict_append(d, **kws): 56 for k, v in kws.items(): 57 if k not in d: 58 d[k] = v 59 continue 60 dv = d[k] 61 if isinstance(dv, tuple): 62 d[k] = dv + tuple(v) 63 elif isinstance(dv, list): 64 d[k] = dv + list(v) 65 elif isinstance(dv, dict): 66 _dict_append(dv, **v) 67 elif is_string(dv): 68 d[k] = dv + v 69 else: 70 raise TypeError(repr(type(dv))) 71 72 def _command_line_ok(_cache=None): 73 """ Return True if command line does not contain any 74 help or display requests. 75 """ 76 if _cache: 77 return _cache[0] 78 elif _cache is None: 79 _cache = [] 80 ok = True 81 display_opts = ['--'+n for n in Distribution.display_option_names] 82 for o in Distribution.display_options: 83 if o[1]: 84 display_opts.append('-'+o[1]) 85 for arg in sys.argv: 86 if arg.startswith('--help') or arg=='-h' or arg in display_opts: 87 ok = False 88 break 89 _cache.append(ok) 90 return ok 91 92 def get_distribution(always=False): 93 dist = distutils.core._setup_distribution 94 # XXX Hack to get numpy installable with easy_install. 95 # The problem is easy_install runs it's own setup(), which 96 # sets up distutils.core._setup_distribution. However, 97 # when our setup() runs, that gets overwritten and lost. 98 # We can't use isinstance, as the DistributionWithoutHelpCommands 99 # class is local to a function in setuptools.command.easy_install 100 if dist is not None and \ 101 'DistributionWithoutHelpCommands' in repr(dist): 102 dist = None 103 if always and dist is None: 104 dist = NumpyDistribution() 105 return dist 106 107 def setup(**attr): 108 109 cmdclass = numpy_cmdclass.copy() 110 111 new_attr = attr.copy() 112 if 'cmdclass' in new_attr: 113 cmdclass.update(new_attr['cmdclass']) 114 new_attr['cmdclass'] = cmdclass 115 116 if 'configuration' in new_attr: 117 # To avoid calling configuration if there are any errors 118 # or help request in command in the line. 
119 configuration = new_attr.pop('configuration') 120 121 old_dist = distutils.core._setup_distribution 122 old_stop = distutils.core._setup_stop_after 123 distutils.core._setup_distribution = None 124 distutils.core._setup_stop_after = "commandline" 125 try: 126 dist = setup(**new_attr) 127 finally: 128 distutils.core._setup_distribution = old_dist 129 distutils.core._setup_stop_after = old_stop 130 if dist.help or not _command_line_ok(): 131 # probably displayed help, skip running any commands 132 return dist 133 134 # create setup dictionary and append to new_attr 135 config = configuration() 136 if hasattr(config, 'todict'): 137 config = config.todict() 138 _dict_append(new_attr, **config) 139 140 # Move extension source libraries to libraries 141 libraries = [] 142 for ext in new_attr.get('ext_modules', []): 143 new_libraries = [] 144 for item in ext.libraries: 145 if is_sequence(item): 146 lib_name, build_info = item 147 _check_append_ext_library(libraries, lib_name, build_info) 148 new_libraries.append(lib_name) 149 elif is_string(item): 150 new_libraries.append(item) 151 else: 152 raise TypeError("invalid description of extension module " 153 "library %r" % (item,)) 154 ext.libraries = new_libraries 155 if libraries: 156 if 'libraries' not in new_attr: 157 new_attr['libraries'] = [] 158 for item in libraries: 159 _check_append_library(new_attr['libraries'], item) 160 161 # sources in ext_modules or libraries may contain header files 162 if ('ext_modules' in new_attr or 'libraries' in new_attr) \ 163 and 'headers' not in new_attr: 164 new_attr['headers'] = [] 165 166 # Use our custom NumpyDistribution class instead of distutils' one 167 new_attr['distclass'] = NumpyDistribution 168 169 return old_setup(**new_attr) 170 171 def _check_append_library(libraries, item): 172 for libitem in libraries: 173 if is_sequence(libitem): 174 if is_sequence(item): 175 if item[0]==libitem[0]: 176 if item[1] is libitem[1]: 177 return 178 warnings.warn("[0] libraries list contains %r with" 179 " different build_info" % (item[0],), 180 stacklevel=2) 181 break 182 else: 183 if item==libitem[0]: 184 warnings.warn("[1] libraries list contains %r with" 185 " no build_info" % (item[0],), 186 stacklevel=2) 187 break 188 else: 189 if is_sequence(item): 190 if item[0]==libitem: 191 warnings.warn("[2] libraries list contains %r with" 192 " no build_info" % (item[0],), 193 stacklevel=2) 194 break 195 else: 196 if item==libitem: 197 return 198 libraries.append(item) 199 200 def _check_append_ext_library(libraries, lib_name, build_info): 201 for item in libraries: 202 if is_sequence(item): 203 if item[0]==lib_name: 204 if item[1] is build_info: 205 return 206 warnings.warn("[3] libraries list contains %r with" 207 " different build_info" % (lib_name,), 208 stacklevel=2) 209 break 210 elif item==lib_name: 211 warnings.warn("[4] libraries list contains %r with" 212 " no build_info" % (lib_name,), 213 stacklevel=2) 214 break 215 libraries.append((lib_name, build_info)) 216 [end of numpy/distutils/core.py] [start of runtests.py] 1 #!/usr/bin/env python3 2 """ 3 runtests.py [OPTIONS] [-- ARGS] 4 5 Run tests, building the project first. 
6 7 Examples:: 8 9 $ python runtests.py 10 $ python runtests.py -s {SAMPLE_SUBMODULE} 11 $ # Run a standalone test function: 12 $ python runtests.py -t {SAMPLE_TEST} 13 $ # Run a test defined as a method of a TestXXX class: 14 $ python runtests.py -t {SAMPLE_TEST2} 15 $ python runtests.py --ipython 16 $ python runtests.py --python somescript.py 17 $ python runtests.py --bench 18 $ python runtests.py --durations 20 19 20 Run a debugger: 21 22 $ gdb --args python runtests.py [...other args...] 23 24 Disable pytest capturing of output by using its '-s' option: 25 26 $ python runtests.py -- -s 27 28 Generate C code coverage listing under build/lcov/: 29 (requires http://ltp.sourceforge.net/coverage/lcov.php) 30 31 $ python runtests.py --gcov [...other args...] 32 $ python runtests.py --lcov-html 33 34 Run lint checks. 35 Provide target branch name or `uncommitted` to check before committing: 36 37 $ python runtests.py --lint main 38 $ python runtests.py --lint uncommitted 39 40 """ 41 # 42 # This is a generic test runner script for projects using NumPy's test 43 # framework. Change the following values to adapt to your project: 44 # 45 46 PROJECT_MODULE = "numpy" 47 PROJECT_ROOT_FILES = ['numpy', 'LICENSE.txt', 'setup.py'] 48 SAMPLE_TEST = "numpy/linalg/tests/test_linalg.py::test_byteorder_check" 49 SAMPLE_TEST2 = "numpy/core/tests/test_memmap.py::TestMemmap::test_open_with_filename" 50 SAMPLE_SUBMODULE = "linalg" 51 52 EXTRA_PATH = ['/usr/lib/ccache', '/usr/lib/f90cache', 53 '/usr/local/lib/ccache', '/usr/local/lib/f90cache'] 54 55 # --------------------------------------------------------------------- 56 57 58 if __doc__ is None: 59 __doc__ = "Run without -OO if you want usage info" 60 else: 61 __doc__ = __doc__.format(**globals()) 62 63 64 import sys 65 import os, glob 66 67 # In case we are run from the source directory, we don't want to import the 68 # project from there: 69 sys.path.pop(0) 70 71 import shutil 72 import subprocess 73 import time 74 from argparse import ArgumentParser, REMAINDER 75 76 ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__))) 77 78 def main(argv): 79 parser = ArgumentParser(usage=__doc__.lstrip()) 80 parser.add_argument("--verbose", "-v", action="count", default=1, 81 help="Add one verbosity level to pytest. Default is 0") 82 parser.add_argument("--debug-info", action="store_true", 83 help=("Add --verbose-cfg to build_src to show " 84 "compiler configuration output while creating " 85 "_numpyconfig.h and config.h")) 86 parser.add_argument("--no-build", "-n", action="store_true", default=False, 87 help="Do not build the project (use system installed " 88 "version)") 89 parser.add_argument("--build-only", "-b", action="store_true", 90 default=False, help="Just build, do not run any tests") 91 parser.add_argument("--doctests", action="store_true", default=False, 92 help="Run doctests in module") 93 parser.add_argument("--refguide-check", action="store_true", default=False, 94 help="Run refguide (doctest) check (do not run " 95 "regular tests.)") 96 parser.add_argument("--coverage", action="store_true", default=False, 97 help=("Report coverage of project code. 
HTML output " 98 "goes under build/coverage")) 99 parser.add_argument("--lint", default=None, 100 help="'<Target Branch>' or 'uncommitted', passed to " 101 "tools/linter.py [--branch BRANCH] " 102 "[--uncommitted]") 103 parser.add_argument("--durations", action="store", default=-1, type=int, 104 help=("Time N slowest tests, time all if 0, time none " 105 "if < 0")) 106 parser.add_argument("--gcov", action="store_true", default=False, 107 help=("Enable C code coverage via gcov (requires " 108 "GCC). gcov output goes to build/**/*.gc*")) 109 parser.add_argument("--lcov-html", action="store_true", default=False, 110 help=("Produce HTML for C code coverage information " 111 "from a previous run with --gcov. " 112 "HTML output goes to build/lcov/")) 113 parser.add_argument("--mode", "-m", default="fast", 114 help="'fast', 'full', or something that could be " 115 "passed to nosetests -A [default: fast]") 116 parser.add_argument("--submodule", "-s", default=None, 117 help="Submodule whose tests to run (cluster, " 118 "constants, ...)") 119 parser.add_argument("--pythonpath", "-p", default=None, 120 help="Paths to prepend to PYTHONPATH") 121 parser.add_argument("--tests", "-t", action='append', 122 help="Specify tests to run") 123 parser.add_argument("--python", action="store_true", 124 help="Start a Python shell with PYTHONPATH set") 125 parser.add_argument("--ipython", "-i", action="store_true", 126 help="Start IPython shell with PYTHONPATH set") 127 parser.add_argument("--shell", action="store_true", 128 help="Start Unix shell with PYTHONPATH set") 129 parser.add_argument("--mypy", action="store_true", 130 help="Run mypy on files with NumPy on the MYPYPATH") 131 parser.add_argument("--debug", "-g", action="store_true", 132 help="Debug build") 133 parser.add_argument("--parallel", "-j", type=int, default=0, 134 help="Number of parallel jobs during build") 135 parser.add_argument("--warn-error", action="store_true", 136 help="Set -Werror to convert all compiler warnings to " 137 "errors") 138 parser.add_argument("--cpu-baseline", default=None, 139 help="Specify a list of enabled baseline CPU " 140 "optimizations"), 141 parser.add_argument("--cpu-dispatch", default=None, 142 help="Specify a list of dispatched CPU optimizations"), 143 parser.add_argument("--disable-optimization", action="store_true", 144 help="Disable CPU optimized code (dispatch, simd, " 145 "fast, ...)"), 146 parser.add_argument("--simd-test", default=None, 147 help="Specify a list of CPU optimizations to be " 148 "tested against NumPy SIMD interface"), 149 parser.add_argument("--show-build-log", action="store_true", 150 help="Show build output rather than using a log file") 151 parser.add_argument("--bench", action="store_true", 152 help="Run benchmark suite instead of test suite") 153 parser.add_argument("--bench-compare", action="store", metavar="COMMIT", 154 help=("Compare benchmark results of current HEAD to " 155 "BEFORE. Use an additional " 156 "--bench-compare=COMMIT to override HEAD with " 157 "COMMIT. 
Note that you need to commit your " 158 "changes first!")) 159 parser.add_argument("args", metavar="ARGS", default=[], nargs=REMAINDER, 160 help="Arguments to pass to pytest, asv, mypy, Python " 161 "or shell") 162 args = parser.parse_args(argv) 163 164 if args.durations < 0: 165 args.durations = -1 166 167 if args.bench_compare: 168 args.bench = True 169 args.no_build = True # ASV does the building 170 171 if args.lcov_html: 172 # generate C code coverage output 173 lcov_generate() 174 sys.exit(0) 175 176 if args.pythonpath: 177 for p in reversed(args.pythonpath.split(os.pathsep)): 178 sys.path.insert(0, p) 179 180 if args.gcov: 181 gcov_reset_counters() 182 183 if args.debug and args.bench: 184 print("*** Benchmarks should not be run against debug " 185 "version; remove -g flag ***") 186 187 if args.lint: 188 check_lint(args.lint) 189 190 if not args.no_build: 191 # we need the noarch path in case the package is pure python. 192 site_dir, site_dir_noarch = build_project(args) 193 sys.path.insert(0, site_dir) 194 sys.path.insert(0, site_dir_noarch) 195 os.environ['PYTHONPATH'] = \ 196 os.pathsep.join(( 197 site_dir, 198 site_dir_noarch, 199 os.environ.get('PYTHONPATH', '') 200 )) 201 else: 202 _temp = __import__(PROJECT_MODULE) 203 site_dir = os.path.sep.join(_temp.__file__.split(os.path.sep)[:-2]) 204 205 extra_argv = args.args[:] 206 if not args.bench: 207 # extra_argv may also lists selected benchmarks 208 if extra_argv and extra_argv[0] == '--': 209 extra_argv = extra_argv[1:] 210 211 if args.python: 212 # Debugging issues with warnings is much easier if you can see them 213 print("Enabling display of all warnings") 214 import warnings 215 import types 216 217 warnings.filterwarnings("always") 218 if extra_argv: 219 # Don't use subprocess, since we don't want to include the 220 # current path in PYTHONPATH. 221 sys.argv = extra_argv 222 with open(extra_argv[0], 'r') as f: 223 script = f.read() 224 sys.modules['__main__'] = types.ModuleType('__main__') 225 ns = dict(__name__='__main__', 226 __file__=extra_argv[0]) 227 exec(script, ns) 228 sys.exit(0) 229 else: 230 import code 231 code.interact() 232 sys.exit(0) 233 234 if args.ipython: 235 # Debugging issues with warnings is much easier if you can see them 236 print("Enabling display of all warnings and pre-importing numpy as np") 237 import warnings; warnings.filterwarnings("always") 238 import IPython 239 import numpy as np 240 IPython.embed(colors='neutral', user_ns={"np": np}) 241 sys.exit(0) 242 243 if args.shell: 244 shell = os.environ.get('SHELL', 'cmd' if os.name == 'nt' else 'sh') 245 print("Spawning a shell ({})...".format(shell)) 246 subprocess.call([shell] + extra_argv) 247 sys.exit(0) 248 249 if args.mypy: 250 try: 251 import mypy.api 252 except ImportError: 253 raise RuntimeError( 254 "Mypy not found. Please install it by running " 255 "pip install -r test_requirements.txt from the repo root" 256 ) 257 258 os.environ['MYPYPATH'] = site_dir 259 # By default mypy won't color the output since it isn't being 260 # invoked from a tty. 
261 os.environ['MYPY_FORCE_COLOR'] = '1' 262 263 config = os.path.join( 264 site_dir, 265 "numpy", 266 "typing", 267 "tests", 268 "data", 269 "mypy.ini", 270 ) 271 272 report, errors, status = mypy.api.run( 273 ['--config-file', config] + args.args 274 ) 275 print(report, end='') 276 print(errors, end='', file=sys.stderr) 277 sys.exit(status) 278 279 if args.coverage: 280 dst_dir = os.path.join(ROOT_DIR, 'build', 'coverage') 281 fn = os.path.join(dst_dir, 'coverage_html.js') 282 if os.path.isdir(dst_dir) and os.path.isfile(fn): 283 shutil.rmtree(dst_dir) 284 extra_argv += ['--cov-report=html:' + dst_dir] 285 286 if args.refguide_check: 287 cmd = [os.path.join(ROOT_DIR, 'tools', 'refguide_check.py'), 288 '--doctests'] 289 if args.submodule: 290 cmd += [args.submodule] 291 os.execv(sys.executable, [sys.executable] + cmd) 292 sys.exit(0) 293 294 if args.bench: 295 # Run ASV 296 for i, v in enumerate(extra_argv): 297 if v.startswith("--"): 298 items = extra_argv[:i] 299 if v == "--": 300 i += 1 # skip '--' indicating further are passed on. 301 bench_args = extra_argv[i:] 302 break 303 else: 304 items = extra_argv 305 bench_args = [] 306 307 if args.tests: 308 items += args.tests 309 if args.submodule: 310 items += [args.submodule] 311 for a in items: 312 bench_args.extend(['--bench', a]) 313 314 if not args.bench_compare: 315 cmd = ['asv', 'run', '-n', '-e', '--python=same'] + bench_args 316 ret = subprocess.call(cmd, cwd=os.path.join(ROOT_DIR, 'benchmarks')) 317 sys.exit(ret) 318 else: 319 commits = [x.strip() for x in args.bench_compare.split(',')] 320 if len(commits) == 1: 321 commit_a = commits[0] 322 commit_b = 'HEAD' 323 elif len(commits) == 2: 324 commit_a, commit_b = commits 325 else: 326 p.error("Too many commits to compare benchmarks for") 327 328 # Check for uncommitted files 329 if commit_b == 'HEAD': 330 r1 = subprocess.call(['git', 'diff-index', '--quiet', 331 '--cached', 'HEAD']) 332 r2 = subprocess.call(['git', 'diff-files', '--quiet']) 333 if r1 != 0 or r2 != 0: 334 print("*"*80) 335 print("WARNING: you have uncommitted changes --- " 336 "these will NOT be benchmarked!") 337 print("*"*80) 338 339 # Fix commit ids (HEAD is local to current repo) 340 out = subprocess.check_output(['git', 'rev-parse', commit_b]) 341 commit_b = out.strip().decode('ascii') 342 343 out = subprocess.check_output(['git', 'rev-parse', commit_a]) 344 commit_a = out.strip().decode('ascii') 345 346 # generate config file with the required build options 347 asv_cfpath = [ 348 '--config', asv_compare_config( 349 os.path.join(ROOT_DIR, 'benchmarks'), args, 350 # to clear the cache if the user changed build options 351 (commit_a, commit_b) 352 ) 353 ] 354 cmd = ['asv', 'continuous', '-e', '-f', '1.05', 355 commit_a, commit_b] + asv_cfpath + bench_args 356 ret = subprocess.call(cmd, cwd=os.path.join(ROOT_DIR, 'benchmarks')) 357 sys.exit(ret) 358 359 if args.build_only: 360 sys.exit(0) 361 else: 362 __import__(PROJECT_MODULE) 363 test = sys.modules[PROJECT_MODULE].test 364 365 if args.submodule: 366 tests = [PROJECT_MODULE + "." 
+ args.submodule] 367 elif args.tests: 368 tests = args.tests 369 else: 370 tests = None 371 372 373 # Run the tests under build/test 374 375 if not args.no_build: 376 test_dir = site_dir 377 else: 378 test_dir = os.path.join(ROOT_DIR, 'build', 'test') 379 if not os.path.isdir(test_dir): 380 os.makedirs(test_dir) 381 382 shutil.copyfile(os.path.join(ROOT_DIR, '.coveragerc'), 383 os.path.join(test_dir, '.coveragerc')) 384 385 cwd = os.getcwd() 386 try: 387 os.chdir(test_dir) 388 result = test(args.mode, 389 verbose=args.verbose, 390 extra_argv=extra_argv, 391 doctests=args.doctests, 392 coverage=args.coverage, 393 durations=args.durations, 394 tests=tests) 395 finally: 396 os.chdir(cwd) 397 398 if isinstance(result, bool): 399 sys.exit(0 if result else 1) 400 elif result.wasSuccessful(): 401 sys.exit(0) 402 else: 403 sys.exit(1) 404 405 def build_project(args): 406 """ 407 Build a dev version of the project. 408 409 Returns 410 ------- 411 site_dir 412 site-packages directory where it was installed 413 414 """ 415 416 import sysconfig 417 418 root_ok = [os.path.exists(os.path.join(ROOT_DIR, fn)) 419 for fn in PROJECT_ROOT_FILES] 420 if not all(root_ok): 421 print("To build the project, run runtests.py in " 422 "git checkout or unpacked source") 423 sys.exit(1) 424 425 dst_dir = os.path.join(ROOT_DIR, 'build', 'testenv') 426 427 env = dict(os.environ) 428 cmd = [sys.executable, 'setup.py'] 429 430 # Always use ccache, if installed 431 env['PATH'] = os.pathsep.join(EXTRA_PATH + env.get('PATH', '').split(os.pathsep)) 432 cvars = sysconfig.get_config_vars() 433 compiler = env.get('CC') or cvars.get('CC', '') 434 if 'gcc' in compiler: 435 # Check that this isn't clang masquerading as gcc. 436 if sys.platform != 'darwin' or 'gnu-gcc' in compiler: 437 # add flags used as werrors 438 warnings_as_errors = ' '.join([ 439 # from tools/travis-test.sh 440 '-Werror=vla', 441 '-Werror=nonnull', 442 '-Werror=pointer-arith', 443 '-Wlogical-op', 444 # from sysconfig 445 '-Werror=unused-function', 446 ]) 447 env['CFLAGS'] = warnings_as_errors + ' ' + env.get('CFLAGS', '') 448 if args.debug or args.gcov: 449 # assume everyone uses gcc/gfortran 450 env['OPT'] = '-O0 -ggdb' 451 env['FOPT'] = '-O0 -ggdb' 452 if args.gcov: 453 env['OPT'] = '-O0 -ggdb' 454 env['FOPT'] = '-O0 -ggdb' 455 env['CC'] = cvars['CC'] + ' --coverage' 456 env['CXX'] = cvars['CXX'] + ' --coverage' 457 env['F77'] = 'gfortran --coverage ' 458 env['F90'] = 'gfortran --coverage ' 459 env['LDSHARED'] = cvars['LDSHARED'] + ' --coverage' 460 env['LDFLAGS'] = " ".join(cvars['LDSHARED'].split()[1:]) + ' --coverage' 461 462 cmd += ["build"] 463 if args.parallel > 1: 464 cmd += ["-j", str(args.parallel)] 465 if args.warn_error: 466 cmd += ["--warn-error"] 467 if args.cpu_baseline: 468 cmd += ["--cpu-baseline", args.cpu_baseline] 469 if args.cpu_dispatch: 470 cmd += ["--cpu-dispatch", args.cpu_dispatch] 471 if args.disable_optimization: 472 cmd += ["--disable-optimization"] 473 if args.simd_test is not None: 474 cmd += ["--simd-test", args.simd_test] 475 if args.debug_info: 476 cmd += ["build_src", "--verbose-cfg"] 477 # Install; avoid producing eggs so numpy can be imported from dst_dir. 
478 cmd += ['install', '--prefix=' + dst_dir, 479 '--single-version-externally-managed', 480 '--record=' + dst_dir + 'tmp_install_log.txt'] 481 482 config_vars = dict(sysconfig.get_config_vars()) 483 config_vars["platbase"] = dst_dir 484 config_vars["base"] = dst_dir 485 486 site_dir_template = os.path.normpath(sysconfig.get_path( 487 'platlib', expand=False 488 )) 489 site_dir = site_dir_template.format(**config_vars) 490 noarch_template = os.path.normpath(sysconfig.get_path( 491 'purelib', expand=False 492 )) 493 site_dir_noarch = noarch_template.format(**config_vars) 494 495 # easy_install won't install to a path that Python by default cannot see 496 # and isn't on the PYTHONPATH. Plus, it has to exist. 497 if not os.path.exists(site_dir): 498 os.makedirs(site_dir) 499 if not os.path.exists(site_dir_noarch): 500 os.makedirs(site_dir_noarch) 501 env['PYTHONPATH'] = \ 502 os.pathsep.join((site_dir, site_dir_noarch, env.get('PYTHONPATH', ''))) 503 504 log_filename = os.path.join(ROOT_DIR, 'build.log') 505 506 if args.show_build_log: 507 ret = subprocess.call(cmd, env=env, cwd=ROOT_DIR) 508 else: 509 log_filename = os.path.join(ROOT_DIR, 'build.log') 510 print("Building, see build.log...") 511 with open(log_filename, 'w') as log: 512 p = subprocess.Popen(cmd, env=env, stdout=log, stderr=log, 513 cwd=ROOT_DIR) 514 try: 515 # Wait for it to finish, and print something to indicate the 516 # process is alive, but only if the log file has grown (to 517 # allow continuous integration environments kill a hanging 518 # process accurately if it produces no output) 519 last_blip = time.time() 520 last_log_size = os.stat(log_filename).st_size 521 while p.poll() is None: 522 time.sleep(0.5) 523 if time.time() - last_blip > 60: 524 log_size = os.stat(log_filename).st_size 525 if log_size > last_log_size: 526 print(" ... build in progress") 527 last_blip = time.time() 528 last_log_size = log_size 529 530 ret = p.wait() 531 except: 532 p.kill() 533 p.wait() 534 raise 535 536 if ret == 0: 537 print("Build OK") 538 else: 539 if not args.show_build_log: 540 with open(log_filename, 'r') as f: 541 print(f.read()) 542 print("Build failed!") 543 sys.exit(1) 544 545 return site_dir, site_dir_noarch 546 547 def asv_compare_config(bench_path, args, h_commits): 548 """ 549 Fill the required build options through custom variable 550 'numpy_build_options' and return the generated config path. 551 """ 552 conf_path = os.path.join(bench_path, "asv_compare.conf.json.tpl") 553 nconf_path = os.path.join(bench_path, "_asv_compare.conf.json") 554 555 # add custom build 556 build = [] 557 if args.parallel > 1: 558 build += ["-j", str(args.parallel)] 559 if args.cpu_baseline: 560 build += ["--cpu-baseline", args.cpu_baseline] 561 if args.cpu_dispatch: 562 build += ["--cpu-dispatch", args.cpu_dispatch] 563 if args.disable_optimization: 564 build += ["--disable-optimization"] 565 566 is_cached = asv_substitute_config(conf_path, nconf_path, 567 numpy_build_options = ' '.join([f'\\"{v}\\"' for v in build]), 568 numpy_global_options= ' '.join([f'--global-option=\\"{v}\\"' for v in ["build"] + build]) 569 ) 570 if not is_cached: 571 asv_clear_cache(bench_path, h_commits) 572 return nconf_path 573 574 def asv_clear_cache(bench_path, h_commits, env_dir="env"): 575 """ 576 Force ASV to clear the cache according to specified commit hashes. 
577 """ 578 # FIXME: only clear the cache from the current environment dir 579 asv_build_pattern = os.path.join(bench_path, env_dir, "*", "asv-build-cache") 580 for asv_build_cache in glob.glob(asv_build_pattern, recursive=True): 581 for c in h_commits: 582 try: shutil.rmtree(os.path.join(asv_build_cache, c)) 583 except OSError: pass 584 585 def asv_substitute_config(in_config, out_config, **custom_vars): 586 """ 587 A workaround to allow substituting custom tokens within 588 ASV configuration file since there's no official way to add custom 589 variables(e.g. env vars). 590 591 Parameters 592 ---------- 593 in_config : str 594 The path of ASV configuration file, e.g. '/path/to/asv.conf.json' 595 out_config : str 596 The path of generated configuration file, 597 e.g. '/path/to/asv_substituted.conf.json'. 598 599 The other keyword arguments represent the custom variables. 600 601 Returns 602 ------- 603 True(is cached) if 'out_config' is already generated with 604 the same '**custom_vars' and updated with latest 'in_config', 605 False otherwise. 606 607 Examples 608 -------- 609 See asv_compare_config(). 610 """ 611 assert in_config != out_config 612 assert len(custom_vars) > 0 613 614 def sdbm_hash(*factors): 615 chash = 0 616 for f in factors: 617 for char in str(f): 618 chash = ord(char) + (chash << 6) + (chash << 16) - chash 619 chash &= 0xFFFFFFFF 620 return chash 621 622 vars_hash = sdbm_hash(custom_vars, os.path.getmtime(in_config)) 623 try: 624 with open(out_config, "r") as wfd: 625 hash_line = wfd.readline().split('hash:') 626 if len(hash_line) > 1 and int(hash_line[1]) == vars_hash: 627 return True 628 except OSError: 629 pass 630 631 custom_vars = {f'{{{k}}}':v for k, v in custom_vars.items()} 632 with open(in_config, "r") as rfd, open(out_config, "w") as wfd: 633 wfd.write(f"// hash:{vars_hash}\n") 634 wfd.write("// This file is automatically generated by runtests.py\n") 635 for line in rfd: 636 for key, val in custom_vars.items(): 637 line = line.replace(key, val) 638 wfd.write(line) 639 return False 640 641 # 642 # GCOV support 643 # 644 def gcov_reset_counters(): 645 print("Removing previous GCOV .gcda files...") 646 build_dir = os.path.join(ROOT_DIR, 'build') 647 for dirpath, dirnames, filenames in os.walk(build_dir): 648 for fn in filenames: 649 if fn.endswith('.gcda') or fn.endswith('.da'): 650 pth = os.path.join(dirpath, fn) 651 os.unlink(pth) 652 653 # 654 # LCOV support 655 # 656 657 LCOV_OUTPUT_FILE = os.path.join(ROOT_DIR, 'build', 'lcov.out') 658 LCOV_HTML_DIR = os.path.join(ROOT_DIR, 'build', 'lcov') 659 660 def lcov_generate(): 661 try: os.unlink(LCOV_OUTPUT_FILE) 662 except OSError: pass 663 try: shutil.rmtree(LCOV_HTML_DIR) 664 except OSError: pass 665 666 print("Capturing lcov info...") 667 subprocess.call(['lcov', '-q', '-c', 668 '-d', os.path.join(ROOT_DIR, 'build'), 669 '-b', ROOT_DIR, 670 '--output-file', LCOV_OUTPUT_FILE]) 671 672 print("Generating lcov HTML output...") 673 ret = subprocess.call(['genhtml', '-q', LCOV_OUTPUT_FILE, 674 '--output-directory', LCOV_HTML_DIR, 675 '--legend', '--highlight']) 676 if ret != 0: 677 print("genhtml failed!") 678 else: 679 print("HTML output generated under build/lcov/") 680 681 def check_lint(lint_args): 682 """ 683 Adds ROOT_DIR to path and performs lint checks. 684 This functions exits the program with status code of lint check. 685 """ 686 sys.path.append(ROOT_DIR) 687 try: 688 from tools.linter import DiffLinter 689 except ModuleNotFoundError as e: 690 print(f"Error: {e.msg}. 
" 691 "Install using linter_requirements.txt.") 692 sys.exit(1) 693 694 uncommitted = lint_args == "uncommitted" 695 branch = "main" if uncommitted else lint_args 696 697 DiffLinter(branch).run_lint(uncommitted) 698 699 700 if __name__ == "__main__": 701 main(argv=sys.argv[1:]) 702 [end of runtests.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
numpy/numpy
7e76174901a17801dc21e832403a94ed20fc1f97
BUG: AttributeError: 'Extension' object has no attribute 'extra_c_compile_args' ### Describe the issue: It seems https://github.com/numpy/numpy/pull/19713 included in numpy >=1.22 is preventing to build some extensions such as in the assimulo package with the error: 'Extension' object has no attribute 'extra_c_compile_args' cc @serge-sans-paille ### Reproduce the code example: ```python curl -fSsL https://github.com/modelon-community/Assimulo/archive/Assimulo-3.2.9.tar.gz | tar xz cd Assimulo-Assimulo-3.2.9 python3 setup.py install --extra-fortran-link-flags="-shared" --sundials-home=/usr/local --lapack-home=/usr/lib64 --blas-home=/usr/lib64 ``` ``` ### Error message: ```shell Traceback (most recent call last): File "/usr/local/src/Assimulo-Assimulo-3.2.9/setup.py", line 691, in <module> ndc.setup(name=NAME, File "/usr/local/lib/python3.9/site-packages/numpy/distutils/core.py", line 169, in setup return old_setup(**new_attr) File "/usr/local/lib/python3.9/site-packages/setuptools/__init__.py", line 155, in setup return distutils.core.setup(**attrs) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup return run_commands(dist) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands dist.run_commands() File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/local/lib/python3.9/site-packages/numpy/distutils/command/install.py", line 60, in run r = self.setuptools_run() File "/usr/local/lib/python3.9/site-packages/numpy/distutils/command/install.py", line 54, in setuptools_run self.do_egg_install() File "/usr/local/lib/python3.9/site-packages/setuptools/command/install.py", line 116, in do_egg_install self.run_command('bdist_egg') File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/local/lib/python3.9/site-packages/setuptools/command/bdist_egg.py", line 164, in run cmd = self.call_command('install_lib', warn_dir=0) File "/usr/local/lib/python3.9/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command self.run_command(cmdname) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/local/lib/python3.9/site-packages/setuptools/command/install_lib.py", line 11, in run self.build() File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/command/install_lib.py", line 107, in build self.run_command('build_ext') File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command cmd_obj.run() File "/usr/local/lib/python3.9/site-packages/numpy/distutils/command/build_ext.py", line 316, in run self.build_extensions() File "/usr/local/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File 
"/usr/local/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/usr/local/lib/python3.9/site-packages/numpy/distutils/command/build_ext.py", line 380, in build_extension extra_cflags = ext.extra_c_compile_args or [] AttributeError: 'Extension' object has no attribute 'extra_c_compile_args' ``` ``` ### NumPy/Python version information: numpy 1.22.x
That's probably on me, I'll have a look this weekend.
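What the traceback boils down to: since the 1.22 change, `numpy.distutils.command.build_ext.build_extension` reads `ext.extra_c_compile_args` and `ext.extra_cxx_compile_args` unconditionally, so any extension object that never gained those attributes (which is apparently what Assimulo's setup hands it) fails with `AttributeError`. Below is a minimal, self-contained sketch of the defensive-access pattern; `PlainExtension` and `collect_extra_flags` are invented names for illustration, and only the attribute names and the `getattr(..., None) or []` idiom come from the traceback and the patch further down.

```python
# Illustrative sketch only -- PlainExtension and collect_extra_flags are not
# real numpy.distutils names; they just mimic the failing situation.

class PlainExtension:
    """An extension-like object that predates the new per-language attributes."""
    def __init__(self, name, sources):
        self.name = name
        self.sources = sources
        self.extra_compile_args = []   # long-standing attribute
        # deliberately *no* extra_c_compile_args / extra_cxx_compile_args


def collect_extra_flags(ext):
    # Direct attribute access (the pre-fix behaviour) raises AttributeError for
    # PlainExtension; getattr with a default degrades to an empty list instead.
    extra_args = ext.extra_compile_args or []
    extra_cflags = getattr(ext, 'extra_c_compile_args', None) or []
    extra_cxxflags = getattr(ext, 'extra_cxx_compile_args', None) or []
    return extra_args, extra_cflags, extra_cxxflags


ext = PlainExtension('assimulo.example', ['example.c'])
print(collect_extra_flags(ext))   # ([], [], []) rather than an AttributeError
```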
2022-01-28T18:48:00Z
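A short note on the design choice in the fix below: replacing the bare attribute reads with `getattr(ext, ..., None) or []` leaves numpy.distutils' own `Extension` objects, which do define the new attributes, untouched, and simply treats any other extension object as having no extra per-language flags. The `AttributeError` path disappears without changing behaviour for builds that were already succeeding.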
<patch> diff --git a/numpy/distutils/command/build_ext.py b/numpy/distutils/command/build_ext.py --- a/numpy/distutils/command/build_ext.py +++ b/numpy/distutils/command/build_ext.py @@ -393,8 +393,8 @@ def build_extension(self, ext): log.info("building '%s' extension", ext.name) extra_args = ext.extra_compile_args or [] - extra_cflags = ext.extra_c_compile_args or [] - extra_cxxflags = ext.extra_cxx_compile_args or [] + extra_cflags = getattr(ext, 'extra_c_compile_args', None) or [] + extra_cxxflags = getattr(ext, 'extra_cxx_compile_args', None) or [] macros = ext.define_macros[:] for undef in ext.undef_macros: </patch>
[]
[]
pandas-dev__pandas-23657
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> API/DEPR: replace raise_conflict-kwarg in df.update with errors From [review](https://github.com/pandas-dev/pandas/pull/23192/files#r225758408) in #23192: > pls create [an issue] to deprecate this arg and rename to `errors=` The idea is to replace the `raise_conflict=True|False` with `errors='ignore'|'raise'`. Same goes for `Panel`. </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 30 </a> 31 </tr> 32 <tr> 33 <td>License</td> 34 <td> 35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 37 </a> 38 </td> 39 </tr> 40 <tr> 41 <td>Build Status</td> 42 <td> 43 <a href="https://travis-ci.org/pandas-dev/pandas"> 44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 45 </a> 46 </td> 47 </tr> 48 <tr> 49 <td></td> 50 <td> 51 <a href="https://circleci.com/gh/pandas-dev/pandas"> 52 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" /> 53 </a> 54 </td> 55 </tr> 56 <tr> 57 <td></td> 58 <td> 59 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master"> 60 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" /> 61 </a> 62 </td> 63 </tr> 64 <tr> 65 <td>Coverage</td> 66  <td> 67 <a href="https://codecov.io/gh/pandas-dev/pandas"> 68 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 69 </a> 70 </td> 71 </tr> 72 <tr> 73 <td>Downloads</td> 74 <td> 75 <a href="https://pandas.pydata.org"> 76 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 77 </a> 78 </td> 79 </tr> 80 <tr> 81 <td>Gitter</td> 82 <td> 83 <a href="https://gitter.im/pydata/pandas"> 84 <img src="https://badges.gitter.im/Join%20Chat.svg" 85 </a> 86 </td> 87 </tr> 88 </table> 89 90 91 92 ## What is it? 93 94 **pandas** is a Python package providing fast, flexible, and expressive data 95 structures designed to make working with "relational" or "labeled" data both 96 easy and intuitive. It aims to be the fundamental high-level building block for 97 doing practical, **real world** data analysis in Python. Additionally, it has 98 the broader goal of becoming **the most powerful and flexible open source data 99 analysis / manipulation tool available in any language**. It is already well on 100 its way towards this goal. 
101 102 ## Main Features 103 Here are just a few of the things that pandas does well: 104 105 - Easy handling of [**missing data**][missing-data] (represented as 106 `NaN`) in floating point as well as non-floating point data 107 - Size mutability: columns can be [**inserted and 108 deleted**][insertion-deletion] from DataFrame and higher dimensional 109 objects 110 - Automatic and explicit [**data alignment**][alignment]: objects can 111 be explicitly aligned to a set of labels, or the user can simply 112 ignore the labels and let `Series`, `DataFrame`, etc. automatically 113 align the data for you in computations 114 - Powerful, flexible [**group by**][groupby] functionality to perform 115 split-apply-combine operations on data sets, for both aggregating 116 and transforming data 117 - Make it [**easy to convert**][conversion] ragged, 118 differently-indexed data in other Python and NumPy data structures 119 into DataFrame objects 120 - Intelligent label-based [**slicing**][slicing], [**fancy 121 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 122 large data sets 123 - Intuitive [**merging**][merging] and [**joining**][joining] data 124 sets 125 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 126 data sets 127 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 128 labels per tick) 129 - Robust IO tools for loading data from [**flat files**][flat-files] 130 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 131 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 132 - [**Time series**][timeseries]-specific functionality: date range 133 generation and frequency conversion, moving window statistics, 134 moving window linear regressions, date shifting and lagging, etc. 
135 136 137 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 138 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 139 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 140 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 141 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 142 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 143 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 144 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 145 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 146 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 147 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 148 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 149 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 150 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 151 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 152 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 153 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 154 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 155 156 ## Where to get it 157 The source code is currently hosted on GitHub at: 158 https://github.com/pandas-dev/pandas 159 160 Binary installers for the latest released version are available at the [Python 161 package index](https://pypi.org/project/pandas) and on conda. 162 163 ```sh 164 # conda 165 conda install pandas 166 ``` 167 168 ```sh 169 # or PyPI 170 pip install pandas 171 ``` 172 173 ## Dependencies 174 - [NumPy](https://www.numpy.org): 1.9.0 or higher 175 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 176 - [pytz](https://pythonhosted.org/pytz): 2011k or higher 177 178 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 179 for recommended and optional dependencies. 180 181 ## Installation from sources 182 To install pandas from source you need Cython in addition to the normal 183 dependencies above. Cython can be installed from pypi: 184 185 ```sh 186 pip install cython 187 ``` 188 189 In the `pandas` directory (same one where you found this file after 190 cloning the git repo), execute: 191 192 ```sh 193 python setup.py install 194 ``` 195 196 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 197 198 ```sh 199 python setup.py develop 200 ``` 201 202 Alternatively, you can use `pip` if you want all the dependencies pulled 203 in automatically (the `-e` option is for installing it in [development 204 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 205 206 ```sh 207 pip install -e . 208 ``` 209 210 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
211 212 ## License 213 [BSD 3](LICENSE) 214 215 ## Documentation 216 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 217 218 ## Background 219 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 220 has been under active development since then. 221 222 ## Getting Help 223 224 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 225 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 226 227 ## Discussion and Development 228 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 229 230 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 231 232 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 233 234 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)** 235 236 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 237 238 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 239 240 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 241 242 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 243 [end of README.md] [start of doc/source/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # pandas documentation build configuration file, created by 4 # 5 # This file is execfile()d with the current directory set to its containing 6 # dir. 7 # 8 # Note that not all possible configuration values are present in this 9 # autogenerated file. 10 # 11 # All configuration values have a default; values that are commented out 12 # serve to show the default. 
13 14 import sys 15 import os 16 import re 17 import inspect 18 import importlib 19 import logging 20 import warnings 21 from sphinx.ext.autosummary import _import_by_name 22 23 logger = logging.getLogger(__name__) 24 25 try: 26 raw_input # Python 2 27 except NameError: 28 raw_input = input # Python 3 29 30 # https://github.com/sphinx-doc/sphinx/pull/2325/files 31 # Workaround for sphinx-build recursion limit overflow: 32 # pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL) 33 # RuntimeError: maximum recursion depth exceeded while pickling an object 34 # 35 # Python's default allowed recursion depth is 1000. 36 sys.setrecursionlimit(5000) 37 38 # If extensions (or modules to document with autodoc) are in another directory, 39 # add these directories to sys.path here. If the directory is relative to the 40 # documentation root, use os.path.abspath to make it absolute, like shown here. 41 # sys.path.append(os.path.abspath('.')) 42 sys.path.insert(0, os.path.abspath('../sphinxext')) 43 sys.path.extend([ 44 45 # numpy standard doc extensions 46 os.path.join(os.path.dirname(__file__), 47 '..', '../..', 48 'sphinxext') 49 50 ]) 51 52 # numpydoc is available in the sphinxext directory, and can't be imported 53 # until sphinxext is available in the Python path 54 from numpydoc.docscrape import NumpyDocString 55 56 # -- General configuration ----------------------------------------------- 57 58 # Add any Sphinx extension module names here, as strings. They can be 59 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 60 # sphinxext. 61 62 extensions = ['sphinx.ext.autodoc', 63 'sphinx.ext.autosummary', 64 'sphinx.ext.doctest', 65 'sphinx.ext.extlinks', 66 'sphinx.ext.todo', 67 'numpydoc', 68 'IPython.sphinxext.ipython_directive', 69 'IPython.sphinxext.ipython_console_highlighting', 70 'matplotlib.sphinxext.plot_directive', 71 'sphinx.ext.intersphinx', 72 'sphinx.ext.coverage', 73 'sphinx.ext.mathjax', 74 'sphinx.ext.ifconfig', 75 'sphinx.ext.linkcode', 76 'nbsphinx', 77 'contributors', # custom pandas extension 78 ] 79 80 try: 81 import sphinxcontrib.spelling # noqa 82 except ImportError as err: 83 logger.warn(('sphinxcontrib.spelling failed to import with error "{}". ' 84 '`spellcheck` command is not available.'.format(err))) 85 else: 86 extensions.append('sphinxcontrib.spelling') 87 88 exclude_patterns = ['**.ipynb_checkpoints'] 89 90 spelling_word_list_filename = ['spelling_wordlist.txt', 'names_wordlist.txt'] 91 spelling_ignore_pypi_package_names = True 92 93 with open("index.rst") as f: 94 index_rst_lines = f.readlines() 95 96 # only include the slow autosummary feature if we're building the API section 97 # of the docs 98 99 # JP: added from sphinxdocs 100 autosummary_generate = False 101 102 if any(re.match(r"\s*api\s*", l) for l in index_rst_lines): 103 autosummary_generate = True 104 105 # numpydoc 106 # for now use old parameter listing (styling + **kwargs problem) 107 numpydoc_use_blockquotes = True 108 # use member listing for attributes 109 numpydoc_attributes_as_param_list = False 110 111 # matplotlib plot directive 112 plot_include_source = True 113 plot_formats = [("png", 90)] 114 plot_html_show_formats = False 115 plot_html_show_source_link = False 116 plot_pre_code = """import numpy as np 117 import pandas as pd""" 118 119 # Add any paths that contain templates here, relative to this directory. 120 templates_path = ['../_templates'] 121 122 # The suffix of source filenames. 123 source_suffix = [ 124 '.rst', 125 ] 126 127 # The encoding of source files. 
128 source_encoding = 'utf-8' 129 130 # The master toctree document. 131 master_doc = 'index' 132 133 # General information about the project. 134 project = u'pandas' 135 copyright = u'2008-2014, the pandas development team' 136 137 # The version info for the project you're documenting, acts as replacement for 138 # |version| and |release|, also used in various other places throughout the 139 # built documents. 140 # 141 # The short X.Y version. 142 import pandas 143 144 # version = '%s r%s' % (pandas.__version__, svn_version()) 145 version = str(pandas.__version__) 146 147 # The full version, including alpha/beta/rc tags. 148 release = version 149 150 # The language for content autogenerated by Sphinx. Refer to documentation 151 # for a list of supported languages. 152 # language = None 153 154 # There are two options for replacing |today|: either, you set today to some 155 # non-false value, then it is used: 156 # today = '' 157 # Else, today_fmt is used as the format for a strftime call. 158 # today_fmt = '%B %d, %Y' 159 160 # List of documents that shouldn't be included in the build. 161 # unused_docs = [] 162 163 # List of directories, relative to source directory, that shouldn't be searched 164 # for source files. 165 exclude_trees = [] 166 167 # The reST default role (used for this markup: `text`) to use for all 168 # documents. default_role = None 169 170 # If true, '()' will be appended to :func: etc. cross-reference text. 171 # add_function_parentheses = True 172 173 # If true, the current module name will be prepended to all description 174 # unit titles (such as .. function::). 175 # add_module_names = True 176 177 # If true, sectionauthor and moduleauthor directives will be shown in the 178 # output. They are ignored by default. 179 # show_authors = False 180 181 # The name of the Pygments (syntax highlighting) style to use. 182 pygments_style = 'sphinx' 183 184 # A list of ignored prefixes for module index sorting. 185 # modindex_common_prefix = [] 186 187 188 # -- Options for HTML output --------------------------------------------- 189 190 # The theme to use for HTML and HTML Help pages. Major themes that come with 191 # Sphinx are currently 'default' and 'sphinxdoc'. 192 html_theme = 'nature_with_gtoc' 193 194 # The style sheet to use for HTML and HTML Help pages. A file of that name 195 # must exist either in Sphinx' static/ path, or in one of the custom paths 196 # given in html_static_path. 197 # html_style = 'statsmodels.css' 198 199 # Theme options are theme-specific and customize the look and feel of a theme 200 # further. For a list of options available for each theme, see the 201 # documentation. 202 # html_theme_options = {} 203 204 # Add any paths that contain custom themes here, relative to this directory. 205 html_theme_path = ['themes'] 206 207 # The name for this set of Sphinx documents. If None, it defaults to 208 # "<project> v<release> documentation". 209 # html_title = None 210 211 # A shorter title for the navigation bar. Default is the same as html_title. 212 # html_short_title = None 213 214 # The name of an image file (relative to this directory) to place at the top 215 # of the sidebar. 216 # html_logo = None 217 218 # Add any paths that contain custom static files (such as style sheets) here, 219 # relative to this directory. They are copied after the builtin static files, 220 # so a file named "default.css" will overwrite the builtin "default.css". 
221 html_static_path = ['_static'] 222 223 # The name of an image file (within the static path) to use as favicon of the 224 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 225 # pixels large. 226 html_favicon = os.path.join(html_static_path[0], 'favicon.ico') 227 228 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 229 # using the given strftime format. 230 # html_last_updated_fmt = '%b %d, %Y' 231 232 # If true, SmartyPants will be used to convert quotes and dashes to 233 # typographically correct entities. 234 # html_use_smartypants = True 235 236 # Custom sidebar templates, maps document names to template names. 237 # html_sidebars = {} 238 239 # Additional templates that should be rendered to pages, maps page names to 240 # template names. 241 242 # Add redirect for previously existing API pages 243 # each item is like `(from_old, to_new)` 244 # To redirect a class and all its methods, see below 245 # https://github.com/pandas-dev/pandas/issues/16186 246 247 moved_api_pages = [ 248 ('pandas.core.common.isnull', 'pandas.isna'), 249 ('pandas.core.common.notnull', 'pandas.notna'), 250 ('pandas.core.reshape.get_dummies', 'pandas.get_dummies'), 251 ('pandas.tools.merge.concat', 'pandas.concat'), 252 ('pandas.tools.merge.merge', 'pandas.merge'), 253 ('pandas.tools.pivot.pivot_table', 'pandas.pivot_table'), 254 ('pandas.tseries.tools.to_datetime', 'pandas.to_datetime'), 255 ('pandas.io.clipboard.read_clipboard', 'pandas.read_clipboard'), 256 ('pandas.io.excel.ExcelFile.parse', 'pandas.ExcelFile.parse'), 257 ('pandas.io.excel.read_excel', 'pandas.read_excel'), 258 ('pandas.io.gbq.read_gbq', 'pandas.read_gbq'), 259 ('pandas.io.html.read_html', 'pandas.read_html'), 260 ('pandas.io.json.read_json', 'pandas.read_json'), 261 ('pandas.io.parsers.read_csv', 'pandas.read_csv'), 262 ('pandas.io.parsers.read_fwf', 'pandas.read_fwf'), 263 ('pandas.io.parsers.read_table', 'pandas.read_table'), 264 ('pandas.io.pickle.read_pickle', 'pandas.read_pickle'), 265 ('pandas.io.pytables.HDFStore.append', 'pandas.HDFStore.append'), 266 ('pandas.io.pytables.HDFStore.get', 'pandas.HDFStore.get'), 267 ('pandas.io.pytables.HDFStore.put', 'pandas.HDFStore.put'), 268 ('pandas.io.pytables.HDFStore.select', 'pandas.HDFStore.select'), 269 ('pandas.io.pytables.read_hdf', 'pandas.read_hdf'), 270 ('pandas.io.sql.read_sql', 'pandas.read_sql'), 271 ('pandas.io.sql.read_frame', 'pandas.read_frame'), 272 ('pandas.io.sql.write_frame', 'pandas.write_frame'), 273 ('pandas.io.stata.read_stata', 'pandas.read_stata'), 274 ] 275 276 # Again, tuples of (from_old, to_new) 277 moved_classes = [ 278 ('pandas.tseries.resample.Resampler', 'pandas.core.resample.Resampler'), 279 ('pandas.formats.style.Styler', 'pandas.io.formats.style.Styler'), 280 ] 281 282 for old, new in moved_classes: 283 # the class itself... 284 moved_api_pages.append((old, new)) 285 286 mod, classname = new.rsplit('.', 1) 287 klass = getattr(importlib.import_module(mod), classname) 288 methods = [x for x in dir(klass) 289 if not x.startswith('_') or x in ('__iter__', '__array__')] 290 291 for method in methods: 292 # ... and each of its public methods 293 moved_api_pages.append( 294 ("{old}.{method}".format(old=old, method=method), 295 "{new}.{method}".format(new=new, method=method)) 296 ) 297 298 html_additional_pages = { 299 'generated/' + page[0]: 'api_redirect.html' 300 for page in moved_api_pages 301 } 302 303 304 common_imports = """\ 305 .. currentmodule:: pandas 306 307 .. 
ipython:: python 308 :suppress: 309 310 import numpy as np 311 from pandas import * 312 import pandas as pd 313 randn = np.random.randn 314 np.set_printoptions(precision=4, suppress=True) 315 options.display.max_rows = 15 316 from pandas.compat import StringIO 317 """ 318 319 320 html_context = { 321 'redirects': {old: new for old, new in moved_api_pages}, 322 'common_imports': common_imports, 323 } 324 325 # If false, no module index is generated. 326 html_use_modindex = True 327 328 # If false, no index is generated. 329 # html_use_index = True 330 331 # If true, the index is split into individual pages for each letter. 332 # html_split_index = False 333 334 # If true, links to the reST sources are added to the pages. 335 # html_show_sourcelink = True 336 337 # If true, an OpenSearch description file will be output, and all pages will 338 # contain a <link> tag referring to it. The value of this option must be the 339 # base URL from which the finished HTML is served. 340 # html_use_opensearch = '' 341 342 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). 343 # html_file_suffix = '' 344 345 # Output file base name for HTML help builder. 346 htmlhelp_basename = 'pandas' 347 348 # -- Options for nbsphinx ------------------------------------------------ 349 350 nbsphinx_allow_errors = True 351 352 # -- Options for LaTeX output -------------------------------------------- 353 354 # The paper size ('letter' or 'a4'). 355 # latex_paper_size = 'letter' 356 357 # The font size ('10pt', '11pt' or '12pt'). 358 # latex_font_size = '10pt' 359 360 # Grouping the document tree into LaTeX files. List of tuples (source start 361 # file, target name, title, author, documentclass [howto/manual]). 362 latex_documents = [ 363 ('index', 'pandas.tex', 364 'pandas: powerful Python data analysis toolkit', 365 r'Wes McKinney\n\& PyData Development Team', 'manual'), 366 ] 367 368 # The name of an image file (relative to this directory) to place at the top of 369 # the title page. 370 # latex_logo = None 371 372 # For "manual" documents, if this is true, then toplevel headings are parts, 373 # not chapters. 374 # latex_use_parts = False 375 376 # Additional stuff for the LaTeX preamble. 377 # latex_preamble = '' 378 379 # Documents to append as an appendix to all manuals. 380 # latex_appendices = [] 381 382 # If false, no module index is generated. 
383 # latex_use_modindex = True 384 385 386 intersphinx_mapping = { 387 'statsmodels': ('http://www.statsmodels.org/devel/', None), 388 'matplotlib': ('http://matplotlib.org/', None), 389 'pandas-gbq': ('https://pandas-gbq.readthedocs.io/en/latest/', None), 390 'python': ('https://docs.python.org/3/', None), 391 'numpy': ('https://docs.scipy.org/doc/numpy/', None), 392 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None), 393 'py': ('https://pylib.readthedocs.io/en/latest/', None) 394 } 395 import glob 396 autosummary_generate = glob.glob("*.rst") 397 398 # extlinks alias 399 extlinks = {'issue': ('https://github.com/pandas-dev/pandas/issues/%s', 400 'GH'), 401 'wiki': ('https://github.com/pandas-dev/pandas/wiki/%s', 402 'wiki ')} 403 404 405 # ignore all deprecation warnings from Panel during doc build 406 # (to avoid the need to add :okwarning: in many places) 407 warnings.filterwarnings("ignore", message="\nPanel is deprecated", 408 category=FutureWarning) 409 410 411 ipython_warning_is_error = False 412 ipython_exec_lines = [ 413 'import numpy as np', 414 'import pandas as pd', 415 # This ensures correct rendering on system with console encoding != utf8 416 # (windows). It forces pandas to encode its output reprs using utf8 417 # wherever the docs are built. The docs' target is the browser, not 418 # the console, so this is fine. 419 'pd.options.display.encoding="utf8"' 420 ] 421 422 423 # Add custom Documenter to handle attributes/methods of an AccessorProperty 424 # eg pandas.Series.str and pandas.Series.dt (see GH9322) 425 426 import sphinx 427 from sphinx.util import rpartition 428 from sphinx.ext.autodoc import ( 429 Documenter, MethodDocumenter, AttributeDocumenter) 430 from sphinx.ext.autosummary import Autosummary 431 432 433 class AccessorDocumenter(MethodDocumenter): 434 """ 435 Specialized Documenter subclass for accessors. 436 """ 437 objtype = 'accessor' 438 directivetype = 'method' 439 440 # lower than MethodDocumenter so this is not chosen for normal methods 441 priority = 0.6 442 443 def format_signature(self): 444 # this method gives an error/warning for the accessors, therefore 445 # overriding it (accessor has no arguments) 446 return '' 447 448 449 class AccessorLevelDocumenter(Documenter): 450 """ 451 Specialized Documenter subclass for objects on accessor level (methods, 452 attributes). 453 """ 454 # This is the simple straightforward version 455 # modname is None, base the last elements (eg 'hour') 456 # and path the part before (eg 'Series.dt') 457 # def resolve_name(self, modname, parents, path, base): 458 # modname = 'pandas' 459 # mod_cls = path.rstrip('.') 460 # mod_cls = mod_cls.split('.') 461 # 462 # return modname, mod_cls + [base] 463 def resolve_name(self, modname, parents, path, base): 464 if modname is None: 465 if path: 466 mod_cls = path.rstrip('.') 467 else: 468 mod_cls = None 469 # if documenting a class-level object without path, 470 # there must be a current class, either from a parent 471 # auto directive ... 472 mod_cls = self.env.temp_data.get('autodoc:class') 473 # ... or from a class directive 474 if mod_cls is None: 475 mod_cls = self.env.temp_data.get('py:class') 476 # ... 
if still None, there's no way to know 477 if mod_cls is None: 478 return None, [] 479 # HACK: this is added in comparison to ClassLevelDocumenter 480 # mod_cls still exists of class.accessor, so an extra 481 # rpartition is needed 482 modname, accessor = rpartition(mod_cls, '.') 483 modname, cls = rpartition(modname, '.') 484 parents = [cls, accessor] 485 # if the module name is still missing, get it like above 486 if not modname: 487 modname = self.env.temp_data.get('autodoc:module') 488 if not modname: 489 if sphinx.__version__ > '1.3': 490 modname = self.env.ref_context.get('py:module') 491 else: 492 modname = self.env.temp_data.get('py:module') 493 # ... else, it stays None, which means invalid 494 return modname, parents + [base] 495 496 497 class AccessorAttributeDocumenter(AccessorLevelDocumenter, 498 AttributeDocumenter): 499 objtype = 'accessorattribute' 500 directivetype = 'attribute' 501 502 # lower than AttributeDocumenter so this is not chosen for normal 503 # attributes 504 priority = 0.6 505 506 507 class AccessorMethodDocumenter(AccessorLevelDocumenter, MethodDocumenter): 508 objtype = 'accessormethod' 509 directivetype = 'method' 510 511 # lower than MethodDocumenter so this is not chosen for normal methods 512 priority = 0.6 513 514 515 class AccessorCallableDocumenter(AccessorLevelDocumenter, MethodDocumenter): 516 """ 517 This documenter lets us removes .__call__ from the method signature for 518 callable accessors like Series.plot 519 """ 520 objtype = 'accessorcallable' 521 directivetype = 'method' 522 523 # lower than MethodDocumenter; otherwise the doc build prints warnings 524 priority = 0.5 525 526 def format_name(self): 527 return MethodDocumenter.format_name(self).rstrip('.__call__') 528 529 530 class PandasAutosummary(Autosummary): 531 """ 532 This alternative autosummary class lets us override the table summary for 533 Series.plot and DataFrame.plot in the API docs. 534 """ 535 def _replace_pandas_items(self, display_name, sig, summary, real_name): 536 # this a hack: ideally we should extract the signature from the 537 # .__call__ method instead of hard coding this 538 if display_name == 'DataFrame.plot': 539 sig = '([x, y, kind, ax, ....])' 540 summary = 'DataFrame plotting accessor and method' 541 elif display_name == 'Series.plot': 542 sig = '([kind, ax, figsize, ....])' 543 summary = 'Series plotting accessor and method' 544 return (display_name, sig, summary, real_name) 545 546 @staticmethod 547 def _is_deprecated(real_name): 548 try: 549 obj, parent, modname = _import_by_name(real_name) 550 except ImportError: 551 return False 552 doc = NumpyDocString(obj.__doc__ or '') 553 summary = ''.join(doc['Summary'] + doc['Extended Summary']) 554 return '.. 
deprecated::' in summary 555 556 def _add_deprecation_prefixes(self, items): 557 for item in items: 558 display_name, sig, summary, real_name = item 559 if self._is_deprecated(real_name): 560 summary = '(DEPRECATED) %s' % summary 561 yield display_name, sig, summary, real_name 562 563 def get_items(self, names): 564 items = Autosummary.get_items(self, names) 565 items = [self._replace_pandas_items(*item) for item in items] 566 items = list(self._add_deprecation_prefixes(items)) 567 return items 568 569 570 # based on numpy doc/source/conf.py 571 def linkcode_resolve(domain, info): 572 """ 573 Determine the URL corresponding to Python object 574 """ 575 if domain != 'py': 576 return None 577 578 modname = info['module'] 579 fullname = info['fullname'] 580 581 submod = sys.modules.get(modname) 582 if submod is None: 583 return None 584 585 obj = submod 586 for part in fullname.split('.'): 587 try: 588 obj = getattr(obj, part) 589 except: 590 return None 591 592 try: 593 # inspect.unwrap() was added in Python version 3.4 594 if sys.version_info >= (3, 5): 595 fn = inspect.getsourcefile(inspect.unwrap(obj)) 596 else: 597 fn = inspect.getsourcefile(obj) 598 except: 599 fn = None 600 if not fn: 601 return None 602 603 try: 604 source, lineno = inspect.getsourcelines(obj) 605 except: 606 lineno = None 607 608 if lineno: 609 linespec = "#L{:d}-L{:d}".format(lineno, lineno + len(source) - 1) 610 else: 611 linespec = "" 612 613 fn = os.path.relpath(fn, start=os.path.dirname(pandas.__file__)) 614 615 if '+' in pandas.__version__: 616 return ("http://github.com/pandas-dev/pandas/blob/master/pandas/" 617 "{}{}".format(fn, linespec)) 618 else: 619 return ("http://github.com/pandas-dev/pandas/blob/" 620 "v{}/pandas/{}{}".format(pandas.__version__, fn, linespec)) 621 622 623 # remove the docstring of the flags attribute (inherited from numpy ndarray) 624 # because these give doc build errors (see GH issue 5331) 625 def remove_flags_docstring(app, what, name, obj, options, lines): 626 if what == "attribute" and name.endswith(".flags"): 627 del lines[:] 628 629 630 def process_class_docstrings(app, what, name, obj, options, lines): 631 """ 632 For those classes for which we use :: 633 634 :template: autosummary/class_without_autosummary.rst 635 636 the documented attributes/methods have to be listed in the class 637 docstring. However, if one of those lists is empty, we use 'None', 638 which then generates warnings in sphinx / ugly html output. 639 This "autodoc-process-docstring" event connector removes that part 640 from the processed docstring. 641 642 """ 643 if what == "class": 644 joined = '\n'.join(lines) 645 646 templates = [ 647 """.. rubric:: Attributes 648 649 .. autosummary:: 650 :toctree: 651 652 None 653 """, 654 """.. rubric:: Methods 655 656 .. autosummary:: 657 :toctree: 658 659 None 660 """ 661 ] 662 663 for template in templates: 664 if template in joined: 665 joined = joined.replace(template, '') 666 lines[:] = joined.split('\n') 667 668 669 suppress_warnings = [ 670 # We "overwrite" autosummary with our PandasAutosummary, but 671 # still want the regular autosummary setup to run. So we just 672 # suppress this warning. 673 'app.add_directive' 674 ] 675 676 677 def rstjinja(app, docname, source): 678 """ 679 Render our pages as a jinja template for fancy templating goodness. 
680 """ 681 # http://ericholscher.com/blog/2016/jul/25/integrating-jinja-rst-sphinx/ 682 # Make sure we're outputting HTML 683 if app.builder.format != 'html': 684 return 685 src = source[0] 686 rendered = app.builder.templates.render_string( 687 src, app.config.html_context 688 ) 689 source[0] = rendered 690 691 692 def setup(app): 693 app.connect("source-read", rstjinja) 694 app.connect("autodoc-process-docstring", remove_flags_docstring) 695 app.connect("autodoc-process-docstring", process_class_docstrings) 696 app.add_autodocumenter(AccessorDocumenter) 697 app.add_autodocumenter(AccessorAttributeDocumenter) 698 app.add_autodocumenter(AccessorMethodDocumenter) 699 app.add_autodocumenter(AccessorCallableDocumenter) 700 app.add_directive('autosummary', PandasAutosummary) 701 [end of doc/source/conf.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
e413c491e090274aad78489cc17a2e29cbd8e269
API/DEPR: replace raise_conflict-kwarg in df.update with errors From [review](https://github.com/pandas-dev/pandas/pull/23192/files#r225758408) in #23192: > pls create [an issue] to deprecate this arg and rename to `errors=` The idea is to replace the `raise_conflict=True|False` with `errors='ignore'|'raise'`. Same goes for `Panel`.
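The migration route asked for here is a pattern pandas already applies elsewhere: keep accepting the old keyword for a deprecation cycle, translate it to the new one, and emit a `FutureWarning`. The sketch below shows that mechanism with `pandas.util._decorators.deprecate_kwarg` (the decorator and its arguments are exactly what the patch further down uses); the free-standing `update` function is only a stand-in for the real `DataFrame.update` / `Panel.update` methods.

```python
from pandas.util._decorators import deprecate_kwarg

# raise_conflict=False|True is mapped onto errors='ignore'|'raise';
# passing the old keyword still works but triggers a FutureWarning.
@deprecate_kwarg(old_arg_name='raise_conflict', new_arg_name='errors',
                 mapping={False: 'ignore', True: 'raise'})
def update(other, join='left', overwrite=True, filter_func=None,
           errors='ignore'):
    # stand-in body: just validate and report which mode was selected
    if errors not in ('ignore', 'raise'):
        raise ValueError("The parameter errors must be either "
                         "'ignore' or 'raise'")
    return errors

print(update(None))                        # 'ignore'  (new default)
print(update(None, errors='raise'))        # 'raise'   (new spelling)
print(update(None, raise_conflict=True))   # 'raise', plus a FutureWarning
```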
2018-11-12T23:24:48Z
<patch> diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst --- a/doc/source/whatsnew/v0.24.0.rst +++ b/doc/source/whatsnew/v0.24.0.rst @@ -981,6 +981,7 @@ Deprecations - The ``fastpath`` keyword of the different Index constructors is deprecated (:issue:`23110`). - :meth:`Timestamp.tz_localize`, :meth:`DatetimeIndex.tz_localize`, and :meth:`Series.tz_localize` have deprecated the ``errors`` argument in favor of the ``nonexistent`` argument (:issue:`8917`) - The class ``FrozenNDArray`` has been deprecated. When unpickling, ``FrozenNDArray`` will be unpickled to ``np.ndarray`` once this class is removed (:issue:`9031`) +- The methods :meth:`DataFrame.update` and :meth:`Panel.update` have deprecated the ``raise_conflict=False|True`` keyword in favor of ``errors='ignore'|'raise'`` (:issue:`23585`) - Deprecated the `nthreads` keyword of :func:`pandas.read_feather` in favor of `use_threads` to reflect the changes in pyarrow 0.11.0. (:issue:`23053`) - :func:`pandas.read_excel` has deprecated accepting ``usecols`` as an integer. Please pass in a list of ints from 0 to ``usecols`` inclusive instead (:issue:`23527`) diff --git a/pandas/core/frame.py b/pandas/core/frame.py --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -5213,8 +5213,10 @@ def combiner(x, y): return self.combine(other, combiner, overwrite=False) + @deprecate_kwarg(old_arg_name='raise_conflict', new_arg_name='errors', + mapping={False: 'ignore', True: 'raise'}) def update(self, other, join='left', overwrite=True, filter_func=None, - raise_conflict=False): + errors='ignore'): """ Modify in place using non-NA values from another DataFrame. @@ -5238,17 +5240,28 @@ def update(self, other, join='left', overwrite=True, filter_func=None, * False: only update values that are NA in the original DataFrame. - filter_func : callable(1d-array) -> boolean 1d-array, optional + filter_func : callable(1d-array) -> bool 1d-array, optional Can choose to replace values other than NA. Return True for values that should be updated. - raise_conflict : bool, default False - If True, will raise a ValueError if the DataFrame and `other` + errors : {'raise', 'ignore'}, default 'ignore' + If 'raise', will raise a ValueError if the DataFrame and `other` both contain non-NA data in the same place. + .. versionchanged :: 0.24.0 + Changed from `raise_conflict=False|True` + to `errors='ignore'|'raise'`. + + Returns + ------- + None : method directly changes calling object + Raises ------ ValueError - When `raise_conflict` is True and there's overlapping non-NA data. + * When `errors='raise'` and there's overlapping non-NA data. 
+ * When `errors` is not either `'ignore'` or `'raise'` + NotImplementedError + * If `join != 'left'` See Also -------- @@ -5319,6 +5332,9 @@ def update(self, other, join='left', overwrite=True, filter_func=None, # TODO: Support other joins if join != 'left': # pragma: no cover raise NotImplementedError("Only left join is supported") + if errors not in ['ignore', 'raise']: + raise ValueError("The parameter errors must be either " + "'ignore' or 'raise'") if not isinstance(other, DataFrame): other = DataFrame(other) @@ -5332,7 +5348,7 @@ def update(self, other, join='left', overwrite=True, filter_func=None, with np.errstate(all='ignore'): mask = ~filter_func(this) | isna(that) else: - if raise_conflict: + if errors == 'raise': mask_this = notna(that) mask_that = notna(this) if any(mask_this & mask_that): diff --git a/pandas/core/panel.py b/pandas/core/panel.py --- a/pandas/core/panel.py +++ b/pandas/core/panel.py @@ -32,7 +32,7 @@ create_block_manager_from_blocks) from pandas.core.series import Series from pandas.core.reshape.util import cartesian_product -from pandas.util._decorators import Appender, Substitution +from pandas.util._decorators import Appender, Substitution, deprecate_kwarg from pandas.util._validators import validate_axis_style_args _shared_doc_kwargs = dict( @@ -1235,7 +1235,12 @@ def reindex(self, *args, **kwargs): kwargs.update(axes) kwargs.pop('axis', None) kwargs.pop('labels', None) - return super(Panel, self).reindex(**kwargs) + + with warnings.catch_warnings(): + warnings.simplefilter("ignore", FutureWarning) + # do not warn about constructing Panel when reindexing + result = super(Panel, self).reindex(**kwargs) + return result @Substitution(**_shared_doc_kwargs) @Appender(NDFrame.rename.__doc__) @@ -1377,25 +1382,37 @@ def join(self, other, how='left', lsuffix='', rsuffix=''): return concat([self] + list(other), axis=0, join=how, join_axes=join_axes, verify_integrity=True) + @deprecate_kwarg(old_arg_name='raise_conflict', new_arg_name='errors', + mapping={False: 'ignore', True: 'raise'}) def update(self, other, join='left', overwrite=True, filter_func=None, - raise_conflict=False): + errors='ignore'): """ - Modify Panel in place using non-NA values from passed - Panel, or object coercible to Panel. Aligns on items + Modify Panel in place using non-NA values from other Panel. + + May also use object coercible to Panel. Will align on items. Parameters ---------- other : Panel, or object coercible to Panel - join : How to join individual DataFrames - {'left', 'right', 'outer', 'inner'}, default 'left' - overwrite : boolean, default True - If True then overwrite values for common keys in the calling panel - filter_func : callable(1d-array) -> 1d-array<boolean>, default None + The object from which the caller will be udpated. + join : {'left', 'right', 'outer', 'inner'}, default 'left' + How individual DataFrames are joined. + overwrite : bool, default True + If True then overwrite values for common keys in the calling Panel. + filter_func : callable(1d-array) -> 1d-array<bool>, default None Can choose to replace values other than NA. Return True for values - that should be updated - raise_conflict : bool - If True, will raise an error if a DataFrame and other both - contain data in the same place. + that should be updated. + errors : {'raise', 'ignore'}, default 'ignore' + If 'raise', will raise an error if a DataFrame and other both. + + .. versionchanged :: 0.24.0 + Changed from `raise_conflict=False|True` + to `errors='ignore'|'raise'`. 
+ + See Also + -------- + DataFrame.update : Similar method for DataFrames. + dict.update : Similar method for dictionaries. """ if not isinstance(other, self._constructor): @@ -1406,8 +1423,8 @@ def update(self, other, join='left', overwrite=True, filter_func=None, other = other.reindex(**{axis_name: axis_values}) for frame in axis_values: - self[frame].update(other[frame], join, overwrite, filter_func, - raise_conflict) + self[frame].update(other[frame], join=join, overwrite=overwrite, + filter_func=filter_func, errors=errors) def _get_join_index(self, other, how): if how == 'left': </patch>
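From the user's side the change looks like this; a small usage sketch, assuming a pandas build that already contains the patch and still accepts the old keyword during its deprecation cycle:

```python
import warnings
import pandas as pd

df = pd.DataFrame({'A': [1.0, None]})
other = pd.DataFrame({'A': [1.5, 3.0]})

# New spelling; 'ignore' is the default and matches the old raise_conflict=False.
df.update(other, errors='ignore')

# Old spelling keeps working for now but emits a FutureWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    df.update(other, raise_conflict=False)
assert any(issubclass(w.category, FutureWarning) for w in caught)
```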
[]
[]
open-mmlab__mmdetection-6279
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [BUG]bug in ConvFCBBoxHead's init cfg. https://github.com/open-mmlab/mmdetection/blob/c88509cb9a73d6bd1edcba64eb924d3cf3cfe85d/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py#L103 This line will override initializers for fc_cls and fc_reg because they are also nn.Linear. Or is it what's intended? But I see the old way to initialize fc_cls and fc_reg is using Normal. </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="resources/mmdet-logo.png" width="600"/> 3 </div> 4 5 [![PyPI](https://img.shields.io/pypi/v/mmdet)](https://pypi.org/project/mmdet) 6 [![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmdetection.readthedocs.io/en/latest/) 7 [![badge](https://github.com/open-mmlab/mmdetection/workflows/build/badge.svg)](https://github.com/open-mmlab/mmdetection/actions) 8 [![codecov](https://codecov.io/gh/open-mmlab/mmdetection/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmdetection) 9 [![license](https://img.shields.io/github/license/open-mmlab/mmdetection.svg)](https://github.com/open-mmlab/mmdetection/blob/master/LICENSE) 10 [![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmdetection.svg)](https://github.com/open-mmlab/mmdetection/issues) 11 12 Documentation: https://mmdetection.readthedocs.io/ 13 14 ## Introduction 15 16 English | [简体中文](README_zh-CN.md) 17 18 MMDetection is an open source object detection toolbox based on PyTorch. It is 19 a part of the [OpenMMLab](https://openmmlab.com/) project. 20 21 The master branch works with **PyTorch 1.3+**. 22 The old v1.x branch works with PyTorch 1.1 to 1.4, but v2.0 is strongly recommended for faster speed, higher performance, better design and more friendly usage. 23 24 ![demo image](resources/coco_test_12510.jpg) 25 26 ### Major features 27 28 - **Modular Design** 29 30 We decompose the detection framework into different components and one can easily construct a customized object detection framework by combining different modules. 31 32 - **Support of multiple frameworks out of box** 33 34 The toolbox directly supports popular and contemporary detection frameworks, *e.g.* Faster RCNN, Mask RCNN, RetinaNet, etc. 35 36 - **High efficiency** 37 38 All basic bbox and mask operations run on GPUs. The training speed is faster than or comparable to other codebases, including [Detectron2](https://github.com/facebookresearch/detectron2), [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) and [SimpleDet](https://github.com/TuSimple/simpledet). 39 40 - **State of the art** 41 42 The toolbox stems from the codebase developed by the *MMDet* team, who won [COCO Detection Challenge](http://cocodataset.org/#detection-leaderboard) in 2018, and we keep pushing it forward. 43 44 Apart from MMDetection, we also released a library [mmcv](https://github.com/open-mmlab/mmcv) for computer vision research, which is heavily depended on by this toolbox. 45 46 ## License 47 48 This project is released under the [Apache 2.0 license](LICENSE). 49 50 ## Changelog 51 52 v2.17.0 was released in 28/09/2021. 53 Please refer to [changelog.md](docs/changelog.md) for details and release history. 54 A comparison between v1.x and v2.0 codebases can be found in [compatibility.md](docs/compatibility.md). 55 56 ## Benchmark and model zoo 57 58 Results and models are available in the [model zoo](docs/model_zoo.md). 
59 60 Supported backbones: 61 62 - [x] ResNet (CVPR'2016) 63 - [x] ResNeXt (CVPR'2017) 64 - [x] VGG (ICLR'2015) 65 - [x] MobileNetV2 (CVPR'2018) 66 - [x] HRNet (CVPR'2019) 67 - [x] RegNet (CVPR'2020) 68 - [x] Res2Net (TPAMI'2020) 69 - [x] ResNeSt (ArXiv'2020) 70 - [X] Swin (CVPR'2021) 71 - [x] PVT (ICCV'2021) 72 - [x] PVTv2 (ArXiv'2021) 73 74 Supported methods: 75 76 - [x] [RPN (NeurIPS'2015)](configs/rpn) 77 - [x] [Fast R-CNN (ICCV'2015)](configs/fast_rcnn) 78 - [x] [Faster R-CNN (NeurIPS'2015)](configs/faster_rcnn) 79 - [x] [Mask R-CNN (ICCV'2017)](configs/mask_rcnn) 80 - [x] [Cascade R-CNN (CVPR'2018)](configs/cascade_rcnn) 81 - [x] [Cascade Mask R-CNN (CVPR'2018)](configs/cascade_rcnn) 82 - [x] [SSD (ECCV'2016)](configs/ssd) 83 - [x] [RetinaNet (ICCV'2017)](configs/retinanet) 84 - [x] [GHM (AAAI'2019)](configs/ghm) 85 - [x] [Mask Scoring R-CNN (CVPR'2019)](configs/ms_rcnn) 86 - [x] [Double-Head R-CNN (CVPR'2020)](configs/double_heads) 87 - [x] [Hybrid Task Cascade (CVPR'2019)](configs/htc) 88 - [x] [Libra R-CNN (CVPR'2019)](configs/libra_rcnn) 89 - [x] [Guided Anchoring (CVPR'2019)](configs/guided_anchoring) 90 - [x] [FCOS (ICCV'2019)](configs/fcos) 91 - [x] [RepPoints (ICCV'2019)](configs/reppoints) 92 - [x] [Foveabox (TIP'2020)](configs/foveabox) 93 - [x] [FreeAnchor (NeurIPS'2019)](configs/free_anchor) 94 - [x] [NAS-FPN (CVPR'2019)](configs/nas_fpn) 95 - [x] [ATSS (CVPR'2020)](configs/atss) 96 - [x] [FSAF (CVPR'2019)](configs/fsaf) 97 - [x] [PAFPN (CVPR'2018)](configs/pafpn) 98 - [x] [Dynamic R-CNN (ECCV'2020)](configs/dynamic_rcnn) 99 - [x] [PointRend (CVPR'2020)](configs/point_rend) 100 - [x] [CARAFE (ICCV'2019)](configs/carafe/README.md) 101 - [x] [DCNv2 (CVPR'2019)](configs/dcn/README.md) 102 - [x] [Group Normalization (ECCV'2018)](configs/gn/README.md) 103 - [x] [Weight Standardization (ArXiv'2019)](configs/gn+ws/README.md) 104 - [x] [OHEM (CVPR'2016)](configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py) 105 - [x] [Soft-NMS (ICCV'2017)](configs/faster_rcnn/faster_rcnn_r50_fpn_soft_nms_1x_coco.py) 106 - [x] [Generalized Attention (ICCV'2019)](configs/empirical_attention/README.md) 107 - [x] [GCNet (ICCVW'2019)](configs/gcnet/README.md) 108 - [x] [Mixed Precision (FP16) Training (ArXiv'2017)](configs/fp16/README.md) 109 - [x] [InstaBoost (ICCV'2019)](configs/instaboost/README.md) 110 - [x] [GRoIE (ICPR'2020)](configs/groie/README.md) 111 - [x] [DetectoRS (ArXiv'2020)](configs/detectors/README.md) 112 - [x] [Generalized Focal Loss (NeurIPS'2020)](configs/gfl/README.md) 113 - [x] [CornerNet (ECCV'2018)](configs/cornernet/README.md) 114 - [x] [Side-Aware Boundary Localization (ECCV'2020)](configs/sabl/README.md) 115 - [x] [YOLOv3 (ArXiv'2018)](configs/yolo/README.md) 116 - [x] [PAA (ECCV'2020)](configs/paa/README.md) 117 - [x] [YOLACT (ICCV'2019)](configs/yolact/README.md) 118 - [x] [CentripetalNet (CVPR'2020)](configs/centripetalnet/README.md) 119 - [x] [VFNet (ArXiv'2020)](configs/vfnet/README.md) 120 - [x] [DETR (ECCV'2020)](configs/detr/README.md) 121 - [x] [Deformable DETR (ICLR'2021)](configs/deformable_detr/README.md) 122 - [x] [CascadeRPN (NeurIPS'2019)](configs/cascade_rpn/README.md) 123 - [x] [SCNet (AAAI'2021)](configs/scnet/README.md) 124 - [x] [AutoAssign (ArXiv'2020)](configs/autoassign/README.md) 125 - [x] [YOLOF (CVPR'2021)](configs/yolof/README.md) 126 - [x] [Seasaw Loss (CVPR'2021)](configs/seesaw_loss/README.md) 127 - [x] [CenterNet (CVPR'2019)](configs/centernet/README.md) 128 - [x] [YOLOX (ArXiv'2021)](configs/yolox/README.md) 129 - [x] [SOLO 
(ECCV'2020)](configs/solo/README.md) 130 131 Some other methods are also supported in [projects using MMDetection](./docs/projects.md). 132 133 ## Installation 134 135 Please refer to [get_started.md](docs/get_started.md) for installation. 136 137 ## Getting Started 138 139 Please see [get_started.md](docs/get_started.md) for the basic usage of MMDetection. 140 We provide [colab tutorial](demo/MMDet_Tutorial.ipynb), and full guidance for quick run [with existing dataset](docs/1_exist_data_model.md) and [with new dataset](docs/2_new_data_model.md) for beginners. 141 There are also tutorials for [finetuning models](docs/tutorials/finetune.md), [adding new dataset](docs/tutorials/customize_dataset.md), [designing data pipeline](docs/tutorials/data_pipeline.md), [customizing models](docs/tutorials/customize_models.md), [customizing runtime settings](docs/tutorials/customize_runtime.md) and [useful tools](docs/useful_tools.md). 142 143 Please refer to [FAQ](docs/faq.md) for frequently asked questions. 144 145 ## Contributing 146 147 We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline. 148 149 ## Acknowledgement 150 151 MMDetection is an open source project that is contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedbacks. 152 We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new detectors. 153 154 ## Citation 155 156 If you use this toolbox or benchmark in your research, please cite this project. 157 158 ``` 159 @article{mmdetection, 160 title = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark}, 161 author = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and 162 Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and 163 Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and 164 Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and 165 Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong 166 and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua}, 167 journal= {arXiv preprint arXiv:1906.07155}, 168 year={2019} 169 } 170 ``` 171 172 ## Projects in OpenMMLab 173 174 - [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision. 175 - [MIM](https://github.com/open-mmlab/mim): MIM Installs OpenMMLab Packages. 176 - [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark. 177 - [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark. 178 - [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection. 179 - [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark. 180 - [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark. 181 - [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark. 182 - [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark. 
183 - [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox. 184 - [MMOCR](https://github.com/open-mmlab/mmocr): A Comprehensive Toolbox for Text Detection, Recognition and Understanding. 185 - [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox. 186 [end of README.md] [start of README_zh-CN.md] 1 <div align="center"> 2 <img src="resources/mmdet-logo.png" width="600"/> 3 </div> 4 5 **新闻**: 我们在 [ArXiv](https://arxiv.org/abs/1906.07155) 上公开了技术报告。 6 7 文档: https://mmdetection.readthedocs.io/ 8 9 ## 简介 10 11 [English](README.md) | 简体中文 12 13 MMDetection 是一个基于 PyTorch 的目标检测开源工具箱。它是 [OpenMMLab](https://openmmlab.com/) 项目的一部分。 14 15 主分支代码目前支持 PyTorch 1.3 以上的版本。 16 17 v1.x 的历史版本支持 PyTorch 1.1 到 1.4,但是我们强烈建议用户使用新的 2.x 的版本,新的版本速度更快,性能更高,有更优雅的代码设计,对用户使用也更加友好。 18 19 ![demo image](resources/coco_test_12510.jpg) 20 21 ### 主要特性 22 23 - **模块化设计** 24 25 MMDetection 将检测框架解耦成不同的模块组件,通过组合不同的模块组件,用户可以便捷地构建自定义的检测模型 26 27 - **丰富的即插即用的算法和模型** 28 29 MMDetection 支持了众多主流的和最新的检测算法,例如 Faster R-CNN,Mask R-CNN,RetinaNet 等。 30 31 - **速度快** 32 33 基本的框和 mask 操作都实现了 GPU 版本,训练速度比其他代码库更快或者相当,包括 [Detectron2](https://github.com/facebookresearch/detectron2), [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) 和 [SimpleDet](https://github.com/TuSimple/simpledet)。 34 35 - **性能高** 36 37 MMDetection 这个算法库源自于 COCO 2018 目标检测竞赛的冠军团队 *MMDet* 团队开发的代码,我们在之后持续进行了改进和提升。 38 39 除了 MMDetection 之外,我们还开源了计算机视觉基础库 [MMCV](https://github.com/open-mmlab/mmcv),MMCV 是 MMDetection 的主要依赖。 40 41 ## 开源许可证 42 43 该项目采用 [Apache 2.0 开源许可证](LICENSE)。 44 45 ## 更新日志 46 47 最新的月度版本 v2.17.0 在 2021.09.28 发布。 48 如果想了解更多版本更新细节和历史信息,请阅读[更新日志](docs/changelog.md)。 49 在[兼容性说明文档](docs_zh-CN/compatibility.md)中我们提供了 1.x 和 2.0 版本的详细比较。 50 51 ## 基准测试和模型库 52 53 测试结果和模型可以在[模型库](docs/model_zoo.md)中找到。 54 55 已支持的骨干网络: 56 57 - [x] ResNet (CVPR'2016) 58 - [x] ResNeXt (CVPR'2017) 59 - [x] VGG (ICLR'2015) 60 - [x] MobileNetV2 (CVPR'2018) 61 - [x] HRNet (CVPR'2019) 62 - [x] RegNet (CVPR'2020) 63 - [x] Res2Net (TPAMI'2020) 64 - [x] ResNeSt (ArXiv'2020) 65 - [X] Swin (CVPR'2021) 66 - [x] PVT (ICCV'2021) 67 - [x] PVTv2 (ArXiv'2021) 68 69 已支持的算法: 70 71 - [x] [RPN (NeurIPS'2015)](configs/rpn) 72 - [x] [Fast R-CNN (ICCV'2015)](configs/fast_rcnn) 73 - [x] [Faster R-CNN (NeurIPS'2015)](configs/faster_rcnn) 74 - [x] [Mask R-CNN (ICCV'2017)](configs/mask_rcnn) 75 - [x] [Cascade R-CNN (CVPR'2018)](configs/cascade_rcnn) 76 - [x] [Cascade Mask R-CNN (CVPR'2018)](configs/cascade_rcnn) 77 - [x] [SSD (ECCV'2016)](configs/ssd) 78 - [x] [RetinaNet (ICCV'2017)](configs/retinanet) 79 - [x] [GHM (AAAI'2019)](configs/ghm) 80 - [x] [Mask Scoring R-CNN (CVPR'2019)](configs/ms_rcnn) 81 - [x] [Double-Head R-CNN (CVPR'2020)](configs/double_heads) 82 - [x] [Hybrid Task Cascade (CVPR'2019)](configs/htc) 83 - [x] [Libra R-CNN (CVPR'2019)](configs/libra_rcnn) 84 - [x] [Guided Anchoring (CVPR'2019)](configs/guided_anchoring) 85 - [x] [FCOS (ICCV'2019)](configs/fcos) 86 - [x] [RepPoints (ICCV'2019)](configs/reppoints) 87 - [x] [Foveabox (TIP'2020)](configs/foveabox) 88 - [x] [FreeAnchor (NeurIPS'2019)](configs/free_anchor) 89 - [x] [NAS-FPN (CVPR'2019)](configs/nas_fpn) 90 - [x] [ATSS (CVPR'2020)](configs/atss) 91 - [x] [FSAF (CVPR'2019)](configs/fsaf) 92 - [x] [PAFPN (CVPR'2018)](configs/pafpn) 93 - [x] [Dynamic R-CNN (ECCV'2020)](configs/dynamic_rcnn) 94 - [x] [PointRend (CVPR'2020)](configs/point_rend) 95 - [x] [CARAFE (ICCV'2019)](configs/carafe/README.md) 96 - [x] [DCNv2 
(CVPR'2019)](configs/dcn/README.md) 97 - [x] [Group Normalization (ECCV'2018)](configs/gn/README.md) 98 - [x] [Weight Standardization (ArXiv'2019)](configs/gn+ws/README.md) 99 - [x] [OHEM (CVPR'2016)](configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py) 100 - [x] [Soft-NMS (ICCV'2017)](configs/faster_rcnn/faster_rcnn_r50_fpn_soft_nms_1x_coco.py) 101 - [x] [Generalized Attention (ICCV'2019)](configs/empirical_attention/README.md) 102 - [x] [GCNet (ICCVW'2019)](configs/gcnet/README.md) 103 - [x] [Mixed Precision (FP16) Training (ArXiv'2017)](configs/fp16/README.md) 104 - [x] [InstaBoost (ICCV'2019)](configs/instaboost/README.md) 105 - [x] [GRoIE (ICPR'2020)](configs/groie/README.md) 106 - [x] [DetectoRS (ArXiv'2020)](configs/detectors/README.md) 107 - [x] [Generalized Focal Loss (NeurIPS'2020)](configs/gfl/README.md) 108 - [x] [CornerNet (ECCV'2018)](configs/cornernet/README.md) 109 - [x] [Side-Aware Boundary Localization (ECCV'2020)](configs/sabl/README.md) 110 - [x] [YOLOv3 (ArXiv'2018)](configs/yolo/README.md) 111 - [x] [PAA (ECCV'2020)](configs/paa/README.md) 112 - [x] [YOLACT (ICCV'2019)](configs/yolact/README.md) 113 - [x] [CentripetalNet (CVPR'2020)](configs/centripetalnet/README.md) 114 - [x] [VFNet (ArXiv'2020)](configs/vfnet/README.md) 115 - [x] [DETR (ECCV'2020)](configs/detr/README.md) 116 - [x] [Deformable DETR (ICLR'2021)](configs/deformable_detr/README.md) 117 - [x] [CascadeRPN (NeurIPS'2019)](configs/cascade_rpn/README.md) 118 - [x] [SCNet (AAAI'2021)](configs/scnet/README.md) 119 - [x] [AutoAssign (ArXiv'2020)](configs/autoassign/README.md) 120 - [x] [YOLOF (CVPR'2021)](configs/yolof/README.md) 121 - [x] [Seasaw Loss (CVPR'2021)](configs/seesaw_loss/README.md) 122 - [x] [CenterNet (CVPR'2019)](configs/centernet/README.md) 123 - [x] [YOLOX (ArXiv'2021)](configs/yolox/README.md) 124 - [x] [SOLO (ECCV'2020)](configs/solo/README.md) 125 126 我们在[基于 MMDetection 的项目](./docs/projects.md)中列举了一些其他的支持的算法。 127 128 ## 安装 129 130 请参考[快速入门文档](docs/get_started.md)进行安装。 131 132 ## 快速入门 133 134 请参考[快速入门文档](docs/get_started.md)学习 MMDetection 的基本使用。 135 我们提供了 [colab 教程](demo/MMDet_Tutorial.ipynb),也为新手提供了完整的运行教程,分别针对[已有数据集](docs/1_exist_data_model.md)和[新数据集](docs/2_new_data_model.md) 完整的使用指南 136 137 我们也提供了一些进阶教程,内容覆盖了 [finetune 模型](docs/tutorials/finetune.md),[增加新数据集支持](docs/tutorials/new_dataset.md),[设计新的数据预处理流程](docs/tutorials/data_pipeline.md),[增加自定义模型](ocs/tutorials/customize_models.md),[增加自定义的运行时配置](docs/tutorials/customize_runtime.md),[常用工具和脚本](docs/useful_tools.md)。 138 139 如果遇到问题,请参考 [常见问题解答](docs_zh-CN/faq.md)。 140 141 ## 贡献指南 142 143 我们感谢所有的贡献者为改进和提升 MMDetection 所作出的努力。请参考[贡献指南](.github/CONTRIBUTING.md)来了解参与项目贡献的相关指引。 144 145 ## 致谢 146 147 MMDetection 是一款由来自不同高校和企业的研发人员共同参与贡献的开源项目。我们感谢所有为项目提供算法复现和新功能支持的贡献者,以及提供宝贵反馈的用户。 我们希望这个工具箱和基准测试可以为社区提供灵活的代码工具,供用户复现已有算法并开发自己的新模型,从而不断为开源社区提供贡献。 148 149 ## 引用 150 151 如果你在研究中使用了本项目的代码或者性能基准,请参考如下 bibtex 引用 MMDetection。 152 153 ``` 154 @article{mmdetection, 155 title = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark}, 156 author = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and 157 Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and 158 Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and 159 Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and 160 Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong 161 and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua}, 162 journal= {arXiv preprint arXiv:1906.07155}, 163 year={2019} 164 } 165 ``` 166 167 ## 
OpenMMLab 的其他项目 168 169 - [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab 计算机视觉基础库 170 - [MIM](https://github.com/open-mmlab/mim): MIM 是 OpenMMlab 项目、算法、模型的统一入口 171 - [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab 图像分类工具箱 172 - [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab 目标检测工具箱 173 - [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab 新一代通用 3D 目标检测平台 174 - [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab 语义分割工具箱 175 - [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab 新一代视频理解工具箱 176 - [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab 一体化视频目标感知平台 177 - [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab 姿态估计工具箱 178 - [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab 图像视频编辑工具箱 179 - [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab 全流程文字检测识别理解工具包 180 - [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab 图片视频生成模型工具箱 181 182 ## 欢迎加入 OpenMMLab 社区 183 184 扫描下方的二维码可关注 OpenMMLab 团队的 [知乎官方账号](https://www.zhihu.com/people/openmmlab),加入 OpenMMLab 团队的 [官方交流 QQ 群](https://jq.qq.com/?_wv=1027&k=aCvMxdr3) 185 186 <div align="center"> 187 <img src="/resources/zhihu_qrcode.jpg" height="400" /> <img src="/resources/qq_group_qrcode.jpg" height="400" /> 188 </div> 189 190 我们会在 OpenMMLab 社区为大家 191 192 - 📢 分享 AI 框架的前沿核心技术 193 - 💻 解读 PyTorch 常用模块源码 194 - 📰 发布 OpenMMLab 的相关新闻 195 - 🚀 介绍 OpenMMLab 开发的前沿算法 196 - 🏃 获取更高效的问题答疑和意见反馈 197 - 🔥 提供与各行各业开发者充分交流的平台 198 199 干货满满 📘,等你来撩 💗,OpenMMLab 社区期待您的加入 👬 200 [end of README_zh-CN.md] [start of mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py] 1 # Copyright (c) OpenMMLab. All rights reserved. 2 import torch.nn as nn 3 from mmcv.cnn import ConvModule 4 5 from mmdet.models.builder import HEADS 6 from mmdet.models.utils import build_linear_layer 7 from .bbox_head import BBoxHead 8 9 10 @HEADS.register_module() 11 class ConvFCBBoxHead(BBoxHead): 12 r"""More general bbox head, with shared conv and fc layers and two optional 13 separated branches. 14 15 .. 
code-block:: none 16 17 /-> cls convs -> cls fcs -> cls 18 shared convs -> shared fcs 19 \-> reg convs -> reg fcs -> reg 20 """ # noqa: W605 21 22 def __init__(self, 23 num_shared_convs=0, 24 num_shared_fcs=0, 25 num_cls_convs=0, 26 num_cls_fcs=0, 27 num_reg_convs=0, 28 num_reg_fcs=0, 29 conv_out_channels=256, 30 fc_out_channels=1024, 31 conv_cfg=None, 32 norm_cfg=None, 33 init_cfg=None, 34 *args, 35 **kwargs): 36 super(ConvFCBBoxHead, self).__init__( 37 *args, init_cfg=init_cfg, **kwargs) 38 assert (num_shared_convs + num_shared_fcs + num_cls_convs + 39 num_cls_fcs + num_reg_convs + num_reg_fcs > 0) 40 if num_cls_convs > 0 or num_reg_convs > 0: 41 assert num_shared_fcs == 0 42 if not self.with_cls: 43 assert num_cls_convs == 0 and num_cls_fcs == 0 44 if not self.with_reg: 45 assert num_reg_convs == 0 and num_reg_fcs == 0 46 self.num_shared_convs = num_shared_convs 47 self.num_shared_fcs = num_shared_fcs 48 self.num_cls_convs = num_cls_convs 49 self.num_cls_fcs = num_cls_fcs 50 self.num_reg_convs = num_reg_convs 51 self.num_reg_fcs = num_reg_fcs 52 self.conv_out_channels = conv_out_channels 53 self.fc_out_channels = fc_out_channels 54 self.conv_cfg = conv_cfg 55 self.norm_cfg = norm_cfg 56 57 # add shared convs and fcs 58 self.shared_convs, self.shared_fcs, last_layer_dim = \ 59 self._add_conv_fc_branch( 60 self.num_shared_convs, self.num_shared_fcs, self.in_channels, 61 True) 62 self.shared_out_channels = last_layer_dim 63 64 # add cls specific branch 65 self.cls_convs, self.cls_fcs, self.cls_last_dim = \ 66 self._add_conv_fc_branch( 67 self.num_cls_convs, self.num_cls_fcs, self.shared_out_channels) 68 69 # add reg specific branch 70 self.reg_convs, self.reg_fcs, self.reg_last_dim = \ 71 self._add_conv_fc_branch( 72 self.num_reg_convs, self.num_reg_fcs, self.shared_out_channels) 73 74 if self.num_shared_fcs == 0 and not self.with_avg_pool: 75 if self.num_cls_fcs == 0: 76 self.cls_last_dim *= self.roi_feat_area 77 if self.num_reg_fcs == 0: 78 self.reg_last_dim *= self.roi_feat_area 79 80 self.relu = nn.ReLU(inplace=True) 81 # reconstruct fc_cls and fc_reg since input channels are changed 82 if self.with_cls: 83 if self.custom_cls_channels: 84 cls_channels = self.loss_cls.get_cls_channels(self.num_classes) 85 else: 86 cls_channels = self.num_classes + 1 87 self.fc_cls = build_linear_layer( 88 self.cls_predictor_cfg, 89 in_features=self.cls_last_dim, 90 out_features=cls_channels) 91 if self.with_reg: 92 out_dim_reg = (4 if self.reg_class_agnostic else 4 * 93 self.num_classes) 94 self.fc_reg = build_linear_layer( 95 self.reg_predictor_cfg, 96 in_features=self.reg_last_dim, 97 out_features=out_dim_reg) 98 99 if init_cfg is None: 100 self.init_cfg += [ 101 dict( 102 type='Xavier', 103 layer='Linear', 104 override=[ 105 dict(name='shared_fcs'), 106 dict(name='cls_fcs'), 107 dict(name='reg_fcs') 108 ]) 109 ] 110 111 def _add_conv_fc_branch(self, 112 num_branch_convs, 113 num_branch_fcs, 114 in_channels, 115 is_shared=False): 116 """Add shared or separable branch. 
117 118 convs -> avg pool (optional) -> fcs 119 """ 120 last_layer_dim = in_channels 121 # add branch specific conv layers 122 branch_convs = nn.ModuleList() 123 if num_branch_convs > 0: 124 for i in range(num_branch_convs): 125 conv_in_channels = ( 126 last_layer_dim if i == 0 else self.conv_out_channels) 127 branch_convs.append( 128 ConvModule( 129 conv_in_channels, 130 self.conv_out_channels, 131 3, 132 padding=1, 133 conv_cfg=self.conv_cfg, 134 norm_cfg=self.norm_cfg)) 135 last_layer_dim = self.conv_out_channels 136 # add branch specific fc layers 137 branch_fcs = nn.ModuleList() 138 if num_branch_fcs > 0: 139 # for shared branch, only consider self.with_avg_pool 140 # for separated branches, also consider self.num_shared_fcs 141 if (is_shared 142 or self.num_shared_fcs == 0) and not self.with_avg_pool: 143 last_layer_dim *= self.roi_feat_area 144 for i in range(num_branch_fcs): 145 fc_in_channels = ( 146 last_layer_dim if i == 0 else self.fc_out_channels) 147 branch_fcs.append( 148 nn.Linear(fc_in_channels, self.fc_out_channels)) 149 last_layer_dim = self.fc_out_channels 150 return branch_convs, branch_fcs, last_layer_dim 151 152 def forward(self, x): 153 # shared part 154 if self.num_shared_convs > 0: 155 for conv in self.shared_convs: 156 x = conv(x) 157 158 if self.num_shared_fcs > 0: 159 if self.with_avg_pool: 160 x = self.avg_pool(x) 161 162 x = x.flatten(1) 163 164 for fc in self.shared_fcs: 165 x = self.relu(fc(x)) 166 # separate branches 167 x_cls = x 168 x_reg = x 169 170 for conv in self.cls_convs: 171 x_cls = conv(x_cls) 172 if x_cls.dim() > 2: 173 if self.with_avg_pool: 174 x_cls = self.avg_pool(x_cls) 175 x_cls = x_cls.flatten(1) 176 for fc in self.cls_fcs: 177 x_cls = self.relu(fc(x_cls)) 178 179 for conv in self.reg_convs: 180 x_reg = conv(x_reg) 181 if x_reg.dim() > 2: 182 if self.with_avg_pool: 183 x_reg = self.avg_pool(x_reg) 184 x_reg = x_reg.flatten(1) 185 for fc in self.reg_fcs: 186 x_reg = self.relu(fc(x_reg)) 187 188 cls_score = self.fc_cls(x_cls) if self.with_cls else None 189 bbox_pred = self.fc_reg(x_reg) if self.with_reg else None 190 return cls_score, bbox_pred 191 192 193 @HEADS.register_module() 194 class Shared2FCBBoxHead(ConvFCBBoxHead): 195 196 def __init__(self, fc_out_channels=1024, *args, **kwargs): 197 super(Shared2FCBBoxHead, self).__init__( 198 num_shared_convs=0, 199 num_shared_fcs=2, 200 num_cls_convs=0, 201 num_cls_fcs=0, 202 num_reg_convs=0, 203 num_reg_fcs=0, 204 fc_out_channels=fc_out_channels, 205 *args, 206 **kwargs) 207 208 209 @HEADS.register_module() 210 class Shared4Conv1FCBBoxHead(ConvFCBBoxHead): 211 212 def __init__(self, fc_out_channels=1024, *args, **kwargs): 213 super(Shared4Conv1FCBBoxHead, self).__init__( 214 num_shared_convs=4, 215 num_shared_fcs=1, 216 num_cls_convs=0, 217 num_cls_fcs=0, 218 num_reg_convs=0, 219 num_reg_fcs=0, 220 fc_out_channels=fc_out_channels, 221 *args, 222 **kwargs) 223 [end of mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py] [start of tools/model_converters/upgrade_model_version.py] 1 # Copyright (c) OpenMMLab. All rights reserved. 
2 import argparse 3 import re 4 import tempfile 5 from collections import OrderedDict 6 7 import torch 8 from mmcv import Config 9 10 11 def is_head(key): 12 valid_head_list = [ 13 'bbox_head', 'mask_head', 'semantic_head', 'grid_head', 'mask_iou_head' 14 ] 15 16 return any(key.startswith(h) for h in valid_head_list) 17 18 19 def parse_config(config_strings): 20 temp_file = tempfile.NamedTemporaryFile() 21 config_path = f'{temp_file.name}.py' 22 with open(config_path, 'w') as f: 23 f.write(config_strings) 24 25 config = Config.fromfile(config_path) 26 is_two_stage = True 27 is_ssd = False 28 is_retina = False 29 reg_cls_agnostic = False 30 if 'rpn_head' not in config.model: 31 is_two_stage = False 32 # check whether it is SSD 33 if config.model.bbox_head.type == 'SSDHead': 34 is_ssd = True 35 elif config.model.bbox_head.type == 'RetinaHead': 36 is_retina = True 37 elif isinstance(config.model['bbox_head'], list): 38 reg_cls_agnostic = True 39 elif 'reg_class_agnostic' in config.model.bbox_head: 40 reg_cls_agnostic = config.model.bbox_head \ 41 .reg_class_agnostic 42 temp_file.close() 43 return is_two_stage, is_ssd, is_retina, reg_cls_agnostic 44 45 46 def reorder_cls_channel(val, num_classes=81): 47 # bias 48 if val.dim() == 1: 49 new_val = torch.cat((val[1:], val[:1]), dim=0) 50 # weight 51 else: 52 out_channels, in_channels = val.shape[:2] 53 # conv_cls for softmax output 54 if out_channels != num_classes and out_channels % num_classes == 0: 55 new_val = val.reshape(-1, num_classes, in_channels, *val.shape[2:]) 56 new_val = torch.cat((new_val[:, 1:], new_val[:, :1]), dim=1) 57 new_val = new_val.reshape(val.size()) 58 # fc_cls 59 elif out_channels == num_classes: 60 new_val = torch.cat((val[1:], val[:1]), dim=0) 61 # agnostic | retina_cls | rpn_cls 62 else: 63 new_val = val 64 65 return new_val 66 67 68 def truncate_cls_channel(val, num_classes=81): 69 70 # bias 71 if val.dim() == 1: 72 if val.size(0) % num_classes == 0: 73 new_val = val[:num_classes - 1] 74 else: 75 new_val = val 76 # weight 77 else: 78 out_channels, in_channels = val.shape[:2] 79 # conv_logits 80 if out_channels % num_classes == 0: 81 new_val = val.reshape(num_classes, in_channels, *val.shape[2:])[1:] 82 new_val = new_val.reshape(-1, *val.shape[1:]) 83 # agnostic 84 else: 85 new_val = val 86 87 return new_val 88 89 90 def truncate_reg_channel(val, num_classes=81): 91 # bias 92 if val.dim() == 1: 93 # fc_reg | rpn_reg 94 if val.size(0) % num_classes == 0: 95 new_val = val.reshape(num_classes, -1)[:num_classes - 1] 96 new_val = new_val.reshape(-1) 97 # agnostic 98 else: 99 new_val = val 100 # weight 101 else: 102 out_channels, in_channels = val.shape[:2] 103 # fc_reg | rpn_reg 104 if out_channels % num_classes == 0: 105 new_val = val.reshape(num_classes, -1, in_channels, 106 *val.shape[2:])[1:] 107 new_val = new_val.reshape(-1, *val.shape[1:]) 108 # agnostic 109 else: 110 new_val = val 111 112 return new_val 113 114 115 def convert(in_file, out_file, num_classes): 116 """Convert keys in checkpoints. 117 118 There can be some breaking changes during the development of mmdetection, 119 and this tool is used for upgrading checkpoints trained with old versions 120 to the latest one. 
121 """ 122 checkpoint = torch.load(in_file) 123 in_state_dict = checkpoint.pop('state_dict') 124 out_state_dict = OrderedDict() 125 meta_info = checkpoint['meta'] 126 is_two_stage, is_ssd, is_retina, reg_cls_agnostic = parse_config( 127 '#' + meta_info['config']) 128 if meta_info['mmdet_version'] <= '0.5.3' and is_retina: 129 upgrade_retina = True 130 else: 131 upgrade_retina = False 132 133 # MMDetection v2.5.0 unifies the class order in RPN 134 # if the model is trained in version<v2.5.0 135 # The RPN model should be upgraded to be used in version>=2.5.0 136 if meta_info['mmdet_version'] < '2.5.0': 137 upgrade_rpn = True 138 else: 139 upgrade_rpn = False 140 141 for key, val in in_state_dict.items(): 142 new_key = key 143 new_val = val 144 if is_two_stage and is_head(key): 145 new_key = 'roi_head.{}'.format(key) 146 147 # classification 148 if upgrade_rpn: 149 m = re.search( 150 r'(conv_cls|retina_cls|rpn_cls|fc_cls|fcos_cls|' 151 r'fovea_cls).(weight|bias)', new_key) 152 else: 153 m = re.search( 154 r'(conv_cls|retina_cls|fc_cls|fcos_cls|' 155 r'fovea_cls).(weight|bias)', new_key) 156 if m is not None: 157 print(f'reorder cls channels of {new_key}') 158 new_val = reorder_cls_channel(val, num_classes) 159 160 # regression 161 if upgrade_rpn: 162 m = re.search(r'(fc_reg).(weight|bias)', new_key) 163 else: 164 m = re.search(r'(fc_reg|rpn_reg).(weight|bias)', new_key) 165 if m is not None and not reg_cls_agnostic: 166 print(f'truncate regression channels of {new_key}') 167 new_val = truncate_reg_channel(val, num_classes) 168 169 # mask head 170 m = re.search(r'(conv_logits).(weight|bias)', new_key) 171 if m is not None: 172 print(f'truncate mask prediction channels of {new_key}') 173 new_val = truncate_cls_channel(val, num_classes) 174 175 m = re.search(r'(cls_convs|reg_convs).\d.(weight|bias)', key) 176 # Legacy issues in RetinaNet since V1.x 177 # Use ConvModule instead of nn.Conv2d in RetinaNet 178 # cls_convs.0.weight -> cls_convs.0.conv.weight 179 if m is not None and upgrade_retina: 180 param = m.groups()[1] 181 new_key = key.replace(param, f'conv.{param}') 182 out_state_dict[new_key] = val 183 print(f'rename the name of {key} to {new_key}') 184 continue 185 186 m = re.search(r'(cls_convs).\d.(weight|bias)', key) 187 if m is not None and is_ssd: 188 print(f'reorder cls channels of {new_key}') 189 new_val = reorder_cls_channel(val, num_classes) 190 191 out_state_dict[new_key] = new_val 192 checkpoint['state_dict'] = out_state_dict 193 torch.save(checkpoint, out_file) 194 195 196 def main(): 197 parser = argparse.ArgumentParser(description='Upgrade model version') 198 parser.add_argument('in_file', help='input checkpoint file') 199 parser.add_argument('out_file', help='output checkpoint file') 200 parser.add_argument( 201 '--num-classes', 202 type=int, 203 default=81, 204 help='number of classes of the original model') 205 args = parser.parse_args() 206 convert(args.in_file, args.out_file, args.num_classes) 207 208 209 if __name__ == '__main__': 210 main() 211 [end of tools/model_converters/upgrade_model_version.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
open-mmlab/mmdetection
1a90fa80a761fe15e69111a625d82874ed783f7b
[BUG]bug in ConvFCBBoxHead's init cfg. https://github.com/open-mmlab/mmdetection/blob/c88509cb9a73d6bd1edcba64eb924d3cf3cfe85d/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py#L103 This line will override initializers for fc_cls and fc_reg because they are also nn.Linear. Or is it what's intended? But I see the old way to initialize fc_cls and fc_reg is using Normal.
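The snippet below is a minimal pure-PyTorch sketch of the behaviour described above, not mmcv's real `initialize()`: the `apply_cfg` helper, the simplified cfg dicts, and the std values (taken from the patch comment further down) are illustrative assumptions. It only mimics the presumed order of application — entries applied in list order, with an entry carrying `layer='Linear'` matching every `nn.Linear` in the head — to show why the appended Xavier entry can wipe out the Normal initialization intended for `fc_cls` and `fc_reg`.

```python
# Illustrative stand-in for how a list-style init_cfg is applied -- NOT mmcv code.
import torch.nn as nn

head = nn.Module()
head.fc_cls = nn.Linear(1024, 81)    # classifier, intended Normal(std=0.01)
head.fc_reg = nn.Linear(1024, 320)   # regressor, intended Normal(std=0.001)
head.shared_fcs = nn.ModuleList([nn.Linear(256 * 7 * 7, 1024),
                                 nn.Linear(1024, 1024)])

def apply_cfg(module, cfg):
    """Apply one simplified entry: 'layer' matches by class, 'override' by name."""
    targets = []
    if cfg.get('layer') == 'Linear':
        targets += [m for m in module.modules() if isinstance(m, nn.Linear)]
    for name in cfg.get('override', []):
        child = getattr(module, name)
        targets += [m for m in child.modules() if isinstance(m, nn.Linear)]
    for m in targets:
        cfg['init_fn'](m.weight)

cfgs = [
    # the two entries the parent BBoxHead is assumed to install for its predictors
    dict(init_fn=lambda w: nn.init.normal_(w, std=0.01), override=['fc_cls']),
    dict(init_fn=lambda w: nn.init.normal_(w, std=0.001), override=['fc_reg']),
    # the entry appended by ConvFCBBoxHead: layer='Linear' also matches
    # fc_cls and fc_reg, so this later pass undoes the two Normal passes above
    dict(init_fn=nn.init.xavier_uniform_, layer='Linear', override=['shared_fcs']),
]
for cfg in cfgs:
    apply_cfg(head, cfg)

print(float(head.fc_cls.weight.std()))  # roughly 0.04 (Xavier scale), not 0.01
```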
Can anyone tell me if it is an error or it is supposed to be like this? I really want to know, as tests on small dataset (voc07) shows a big difference. > Can anyone tell me if it is an error or it is supposed to be like this? I really want to know, as tests on small dataset (voc07) shows a big difference. I will check it > https://github.com/open-mmlab/mmdetection/blob/c88509cb9a73d6bd1edcba64eb924d3cf3cfe85d/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py#L103 > > This line will override initializers for fc_cls and fc_reg because they are also nn.Linear. Or is it what's intended? But I see the old way to initialize fc_cls and fc_reg is using Normal. Sorry for the late respondence, you are right, we will fix asap
2021-10-14T07:25:46Z
<patch> diff --git a/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py b/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py --- a/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py +++ b/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py @@ -97,10 +97,16 @@ def __init__(self, out_features=out_dim_reg) if init_cfg is None: + # when init_cfg is None, + # It has been set to + # [[dict(type='Normal', std=0.01, override=dict(name='fc_cls'))], + # [dict(type='Normal', std=0.001, override=dict(name='fc_reg'))] + # after `super(ConvFCBBoxHead, self).__init__()` + # we only need to append additional configuration + # for `shared_fcs`, `cls_fcs` and `reg_fcs` self.init_cfg += [ dict( type='Xavier', - layer='Linear', override=[ dict(name='shared_fcs'), dict(name='cls_fcs'), </patch>
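With `layer='Linear'` dropped from the appended entry, the Xavier pass only reaches the modules named in `override`, so the Normal entries that the parent class installs for `fc_cls` and `fc_reg` are no longer clobbered. A hypothetical reconstruction of the combined `init_cfg` after the fix, based purely on the defaults quoted in the patch comment (the exact nesting in the real class may differ):

```python
# Hypothetical end state of self.init_cfg after the fix, reconstructed from the
# patch comment above; the real class may nest or order these differently.
init_cfg = [
    [dict(type='Normal', std=0.01, override=dict(name='fc_cls'))],   # from BBoxHead
    [dict(type='Normal', std=0.001, override=dict(name='fc_reg'))],  # from BBoxHead
    dict(type='Xavier',                        # appended by ConvFCBBoxHead;
         override=[dict(name='shared_fcs'),    # without layer='Linear', only
                   dict(name='cls_fcs'),       # these named ModuleLists get
                   dict(name='reg_fcs')]),     # Xavier initialization
]
```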
[]
[]
Qiskit__qiskit-943
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> LaTeX Barriers are not centered between gates and measures on all columns <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues --> ### Informations - **Qiskit Terra version**: latest master - **Python version**: 3.7 - **Operating system**: linux ### What is the current behavior? @nonhermitian example in issue #938 shows a circuit with a barrier being drawn with a measure after a gate but the barrier is not centered for the gate and measure on q0_15: ![image](https://user-images.githubusercontent.com/2447371/46037725-0b024580-c0d7-11e8-8dc6-f3d15fadb04f.png) The barrier is being drawn from q0_0 (because the \barrier command in latex draws the barrier line down from the topmost bit) and the default horizontal offset is used because there is no measure immediately after the gate on that bit. This ends up being incorrect on q0_15 because because there is a measure immediately after the gate on that bit which moves the center line between those 2 boxes to the left. ### Steps to reproduce the problem Draw a circuit with a measure right after a gate on a bit covered by the barrier, but is not the bit where \barrier will be written in the latex. ### What is the expected behavior? The barrier line should centered between gates and measures on all bits covered by a barrier, not just the topmost bit (where the \barrier call is made from in the latex) ### Suggested solutions We need to adjust the check in the latex circuit drawer that looks for a measure directly after a gate on a bit to check all bits being covered by the barrier, not just the bit </issue> <code> [start of README.md] 1 # Quantum Information Science Kit (Qiskit) 2 3 [![PyPI](https://img.shields.io/pypi/v/qiskit.svg)](https://pypi.python.org/pypi/qiskit) 4 [![Build Status](https://travis-ci.org/Qiskit/qiskit-terra.svg?branch=master)](https://travis-ci.org/Qiskit/qiskit-terra) 5 [![Build Status IBM Q](https://travis-matrix-badges.herokuapp.com/repos/Qiskit/qiskit-terra/branches/master/8)](https://travis-ci.org/Qiskit/qiskit-terra) 6 7 The Quantum Information Science Kit (**Qiskit** for short) is a software development kit (SDK) for 8 working with [OpenQASM](https://github.com/Qiskit/qiskit-openqasm) and the 9 [IBM Q Experience (QX)](https://quantumexperience.ng.bluemix.net/). 10 11 Use **Qiskit** to create quantum computing programs, compile them, and execute them on one of 12 several backends (online Real quantum processors, online simulators, and local simulators). For 13 the online backends, Qiskit uses our [python API client](https://github.com/Qiskit/qiskit-api-py) 14 to connect to the IBM Q Experience. 15 16 **We use GitHub issues for tracking requests and bugs. Please see the** 17 [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) **for 18 questions and discussion.** 19 20 **If you'd like to contribute to Qiskit, please take a look at our** 21 [contribution guidelines](.github/CONTRIBUTING.rst). 22 23 Links to Sections: 24 25 * [Installation](#installation) 26 * [Creating your first Quantum Program](#creating-your-first-quantum-program) 27 * [More Information](#more-information) 28 * [Authors](#authors-alphabetical) 29 30 ## Installation 31 32 ### Dependencies 33 34 At least [Python 3.5 or later](https://www.python.org/downloads/) is needed for using Qiskit. 
In 35 addition, [Jupyter Notebook](https://jupyter.readthedocs.io/en/latest/install.html) is recommended 36 for interacting with the tutorials. 37 For this reason we recommend installing the [Anaconda 3](https://www.continuum.io/downloads) 38 python distribution, as it comes with all of these dependencies pre-installed. 39 40 In addition, a basic understanding of quantum information is very helpful when interacting with 41 Qiskit. If you're new to quantum, start with our 42 [User Guides](https://github.com/Qiskit/ibmqx-user-guides)! 43 44 ### Instructions 45 46 We encourage to install Qiskit via the PIP tool (a python package manager): 47 48 ```bash 49 pip install qiskit 50 ``` 51 52 PIP will handle all dependencies automatically for us and you will always install the latest (and well-tested) version. 53 54 PIP package comes with prebuilt binaries for these platforms: 55 56 * Linux x86_64 57 * Darwin 58 * Win64 59 60 If your platform is not in the list, PIP will try to build from the sources at installation time. It will require to have CMake 3.5 or higher pre-installed and at least one of the [build environments supported by CMake](https://cmake.org/cmake/help/v3.5/manual/cmake-generators.7.html). 61 62 If during the installation PIP doesn't succeed to build, don't worry, you will have Qiskit installed at the end but you probably couldn't take advantage of some of the high-performance components. Anyway, we always provide a python, not-so-fast alternative as a fallback. 63 64 #### Setup your environment 65 66 We recommend using python virtual environments to improve your experience. Refer to our 67 [Environment Setup documentation](doc/install.rst#3.1-Setup-the-environment) for more information. 68 69 ## Creating your first Quantum Program 70 71 Now that the SDK is installed, it's time to begin working with Qiskit. 72 73 We are ready to try out a quantum circuit example, which runs via the local simulator. 74 75 This is a simple example that makes an entangled state. 76 77 ```python 78 # Import the Qiskit SDK 79 from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister 80 from qiskit import available_backends, execute 81 82 # Create a Quantum Register with 2 qubits. 83 q = QuantumRegister(2) 84 # Create a Classical Register with 2 bits. 85 c = ClassicalRegister(2) 86 # Create a Quantum Circuit 87 qc = QuantumCircuit(q, c) 88 89 # Add a H gate on qubit 0, putting this qubit in superposition. 90 qc.h(q[0]) 91 # Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting 92 # the qubits in a Bell state. 93 qc.cx(q[0], q[1]) 94 # Add a Measure gate to see the state. 95 qc.measure(q, c) 96 97 # See a list of available local simulators 98 print("Local backends: ", available_backends({'local': True})) 99 100 # Compile and run the Quantum circuit on a simulator backend 101 job_sim = execute(qc, "local_qasm_simulator") 102 sim_result = job_sim.result() 103 104 # Show the results 105 print("simulation: ", sim_result) 106 print(sim_result.get_counts(qc)) 107 ``` 108 109 In this case, the output will be: 110 111 ```python 112 COMPLETED 113 {'counts': {'00': 512, '11': 512}} 114 ``` 115 116 This script is available [here](examples/python/hello_quantum.py), where we also show how to 117 run the same program on a real quantum computer. 118 119 ### Executing your code on a real Quantum chip 120 121 You can also use Qiskit to execute your code on a 122 [real quantum chip](https://github.com/Qiskit/ibmqx-backend-information). 
123 In order to do so, you need to configure the SDK for using the credentials in 124 your IBM Q Experience account: 125 126 #### Configure your API token and QX credentials 127 128 1. Create an _[IBM Q Experience](https://quantumexperience.ng.bluemix.net) > Account_ if you haven't already done so. 129 130 2. Get an API token from the IBM Q Experience website under _My Account > Advanced > API Token_. This API token allows you to execute your programs with the IBM Q Experience backends. See: [Example](doc/example_real_backend.rst). 131 132 3. We are now going to add the necessary credentials to QISKit. Take your token 133 from step 2, here called `MY_API_TOKEN`, and pass it to the 134 `store_credentials` function: 135 136 ```python 137 from qiskit import store_credentials 138 139 store_credentials('MY_API_TOKEN') 140 ``` 141 142 4. If you have access to the IBM Q Network features, you also need to pass the 143 url listed on your IBM Q account page to `store_credentials`. 144 145 After calling `store_credentials()`, your credentials will be stored into disk. 146 Once they are stored, Qiskit will automatically load and use them in your program 147 via: 148 149 ```python 150 from qiskit import register 151 152 register() 153 ``` 154 155 For more details on installing Qiskit and for alternative methods for passing 156 the IBM QX credentials, such as using environment variables, sending them 157 explicitly and support for the `Qconfig.py` method available in previous 158 versions, please check 159 [our Qiskit documentation](https://www.qiskit.org/documentation/). 160 161 ### Next Steps 162 163 Now you're set up and ready to check out some of the other examples from our 164 [Tutorial](https://github.com/Qiskit/qiskit-tutorial) repository. Start with the 165 [index tutorial](https://github.com/Qiskit/qiskit-tutorial/blob/master/index.ipynb) and then go to 166 the [‘Getting Started’ example](https://github.com/Qiskit/qiskit-tutorial/blob/master/reference/tools/getting_started.ipynb). 167 If you already have [Jupyter Notebooks installed](https://jupyter.readthedocs.io/en/latest/install.html), 168 you can copy and modify the notebooks to create your own experiments. 169 170 To install the tutorials as part of the Qiskit SDK, see the following 171 [installation details](doc/install.rst#Install-Jupyter-based-tutorials). Complete SDK 172 documentation can be found in the [*doc* directory](doc/qiskit.rst) and in 173 [the official Qiskit site](https://www.qiskit.org/documentation). 
174 175 ## More Information 176 177 For more information on how to use Qiskit, tutorial examples, and other helpful links, take a look 178 at these resources: 179 180 * **[User Guides](https://github.com/Qiskit/ibmqx-user-guides)**, 181 a good starting place for learning about quantum information and computing 182 * **[Tutorials](https://github.com/Qiskit/qiskit-tutorial)**, 183 for example notebooks, start with the [index](https://github.com/Qiskit/qiskit-tutorial/blob/master/index.ipynb) and [‘Getting Started’ Jupyter notebook](https://github.com/Qiskit/qiskit-tutorial/blob/002d054c72fc59fc5009bb9fa0ee393e15a69d07/1_introduction/getting_started.ipynb) 184 * **[OpenQASM](https://github.com/Qiskit/openqasm)**, 185 for additional information and examples of QASM code 186 * **[IBM Quantum Experience Composer](https://quantumexperience.ng.bluemix.net/qx/editor)**, 187 a GUI for interacting with real and simulated quantum computers 188 * **[QISkit Python API](https://github.com/Qiskit/qiskit-api-py)**, an API to use the IBM Quantum 189 Experience in Python 190 191 Qiskit was originally developed by researchers and developers on the 192 [IBM-Q](http://www.research.ibm.com/ibm-q/) Team at [IBM Research](http://www.research.ibm.com/), 193 with the aim of offering a high level development kit to work with quantum computers. 194 195 Visit the [IBM Q Experience community](https://quantumexperience.ng.bluemix.net/qx/community) for 196 questions and discussions on Qiskit and quantum computing more broadly. If you'd like to 197 contribute to Qiskit, please take a look at our [contribution guidelines](.github/CONTRIBUTING.rst). 198 199 ## Multilanguage guide 200 201 * **[Korean Translation](doc/ko/README.md)** - basic guide line written in Korean. 202 * **[Chinese Translation](doc/zh/README.md)** - basic guide line written in Chinese. 203 204 ## Authors (alphabetical) 205 206 Qiskit was originally authored by 207 Luciano Bello, Jim Challenger, Andrew Cross, Ismael Faro, Jay Gambetta, Juan Gomez, 208 Ali Javadi-Abhari, Paco Martin, Diego Moreda, Jesus Perez, Erick Winston and Chris Wood. 209 210 And continues to grow with the help and work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 211 to the project at different levels. 212 [end of README.md] [start of qiskit/backends/local/qasm_simulator_py.py] 1 # -*- coding: utf-8 -*- 2 3 # Copyright 2017, IBM. 4 # 5 # This source code is licensed under the Apache License, Version 2.0 found in 6 # the LICENSE.txt file in the root directory of this source tree. 7 8 # pylint: disable=invalid-name 9 10 """Contains a (slow) python simulator. 11 12 It simulates a qasm quantum circuit that has been compiled to run on the 13 simulator. It is exponential in the number of qubits. 14 15 We advise using the c++ simulator or online simulator for larger size systems. 16 17 The input is a qobj dictionary 18 19 and the output is a Results object 20 21 results['data']["counts"] where this is dict {"0000" : 454} 22 23 The simulator is run using 24 25 .. code-block:: python 26 27 QasmSimulatorPy(compiled_circuit,shots,seed).run(). 28 29 .. 
code-block:: guess 30 31 compiled_circuit = 32 { 33 "header": { 34 "number_of_qubits": 2, // int 35 "number_of_clbits": 2, // int 36 "qubit_labels": [["q", 0], ["v", 0]], // list[list[string, int]] 37 "clbit_labels": [["c", 2]], // list[list[string, int]] 38 } 39 "operations": // list[map] 40 [ 41 { 42 "name": , // required -- string 43 "params": , // optional -- list[double] 44 "qubits": , // required -- list[int] 45 "clbits": , // optional -- list[int] 46 "conditional": // optional -- map 47 { 48 "type": , // string 49 "mask": , // hex string 50 "val": , // bhex string 51 } 52 }, 53 ] 54 } 55 56 .. code-block:: python 57 58 result = 59 { 60 'data': { 61 'statevector': array([ 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j]), 62 'classical_state': 0 63 'counts': {'0000': 1} 64 'snapshots': { '0': {'statevector': array([1.+0.j, 0.+0.j, 65 0.+0.j, 0.+0.j])}} 66 } 67 } 68 'time_taken': 0.002 69 'status': 'DONE' 70 } 71 72 """ 73 import random 74 import uuid 75 import time 76 import logging 77 from collections import Counter 78 79 import numpy as np 80 81 from qiskit.result._utils import copy_qasm_from_qobj_into_result, result_from_old_style_dict 82 from qiskit.backends import BaseBackend 83 from qiskit.backends.local.localjob import LocalJob 84 from ._simulatorerror import SimulatorError 85 from ._simulatortools import single_gate_matrix 86 logger = logging.getLogger(__name__) 87 88 89 class QasmSimulatorPy(BaseBackend): 90 """Python implementation of a qasm simulator.""" 91 92 DEFAULT_CONFIGURATION = { 93 'name': 'local_qasm_simulator_py', 94 'url': 'https://github.com/QISKit/qiskit-terra', 95 'simulator': True, 96 'local': True, 97 'description': 'A python simulator for qasm files', 98 'coupling_map': 'all-to-all', 99 'basis_gates': 'u1,u2,u3,cx,id,snapshot' 100 } 101 102 def __init__(self, configuration=None): 103 """ 104 Args: 105 configuration (dict): backend configuration 106 """ 107 super().__init__(configuration or self.DEFAULT_CONFIGURATION.copy()) 108 109 self._local_random = random.Random() 110 111 # Define attributes in __init__. 112 self._classical_state = 0 113 self._statevector = 0 114 self._snapshots = {} 115 self._number_of_cbits = 0 116 self._number_of_qubits = 0 117 self._shots = 0 118 self._qobj_config = None 119 120 @staticmethod 121 def _index1(b, i, k): 122 """Magic index1 function. 123 124 Takes a bitstring k and inserts bit b as the ith bit, 125 shifting bits >= i over to make room. 126 """ 127 retval = k 128 lowbits = k & ((1 << i) - 1) # get the low i bits 129 130 retval >>= i 131 retval <<= 1 132 133 retval |= b 134 135 retval <<= i 136 retval |= lowbits 137 138 return retval 139 140 @staticmethod 141 def _index2(b1, i1, b2, i2, k): 142 """Magic index1 function. 143 144 Takes a bitstring k and inserts bits b1 as the i1th bit 145 and b2 as the i2th bit 146 """ 147 assert i1 != i2 148 149 if i1 > i2: 150 # insert as (i1-1)th bit, will be shifted left 1 by next line 151 retval = QasmSimulatorPy._index1(b1, i1-1, k) 152 retval = QasmSimulatorPy._index1(b2, i2, retval) 153 else: # i2>i1 154 # insert as (i2-1)th bit, will be shifted left 1 by next line 155 retval = QasmSimulatorPy._index1(b2, i2-1, k) 156 retval = QasmSimulatorPy._index1(b1, i1, retval) 157 return retval 158 159 def _add_qasm_single(self, gate, qubit): 160 """Apply an arbitary 1-qubit operator to a qubit. 161 162 Gate is the single qubit applied. 163 qubit is the qubit the gate is applied to. 
164 """ 165 psi = self._statevector 166 bit = 1 << qubit 167 for k1 in range(0, 1 << self._number_of_qubits, 1 << (qubit+1)): 168 for k2 in range(0, 1 << qubit, 1): 169 k = k1 | k2 170 cache0 = psi[k] 171 cache1 = psi[k | bit] 172 psi[k] = gate[0, 0] * cache0 + gate[0, 1] * cache1 173 psi[k | bit] = gate[1, 0] * cache0 + gate[1, 1] * cache1 174 175 def _add_qasm_cx(self, q0, q1): 176 """Optimized ideal CX on two qubits. 177 178 q0 is the first qubit (control) counts from 0. 179 q1 is the second qubit (target). 180 """ 181 psi = self._statevector 182 for k in range(0, 1 << (self._number_of_qubits - 2)): 183 # first bit is control, second is target 184 ind1 = self._index2(1, q0, 0, q1, k) 185 # swap target if control is 1 186 ind3 = self._index2(1, q0, 1, q1, k) 187 cache0 = psi[ind1] 188 cache1 = psi[ind3] 189 psi[ind3] = cache0 190 psi[ind1] = cache1 191 192 def _add_qasm_decision(self, qubit): 193 """Apply the decision of measurement/reset qubit gate. 194 195 qubit is the qubit that is measured/reset 196 """ 197 probability_zero = 0 198 random_number = self._local_random.random() 199 for ii in range(1 << self._number_of_qubits): 200 if ii & (1 << qubit) == 0: 201 probability_zero += np.abs(self._statevector[ii])**2 202 if random_number <= probability_zero: 203 outcome = '0' 204 norm = np.sqrt(probability_zero) 205 else: 206 outcome = '1' 207 norm = np.sqrt(1-probability_zero) 208 return (outcome, norm) 209 210 def _add_qasm_measure(self, qubit, cbit): 211 """Apply the measurement qubit gate. 212 213 qubit is the qubit measured. 214 cbit is the classical bit the measurement is assigned to. 215 """ 216 outcome, norm = self._add_qasm_decision(qubit) 217 for ii in range(1 << self._number_of_qubits): 218 # update quantum state 219 if (ii >> qubit) & 1 == int(outcome): 220 self._statevector[ii] = self._statevector[ii]/norm 221 else: 222 self._statevector[ii] = 0 223 # update classical state 224 bit = 1 << cbit 225 self._classical_state = (self._classical_state & (~bit)) | (int(outcome) << cbit) 226 227 def _add_qasm_reset(self, qubit): 228 """Apply the reset to the qubit. 229 230 This is done by doing a measruement and if 0 do nothing and 231 if 1 flip the qubit. 232 233 qubit is the qubit that is reset. 234 """ 235 # TODO: slow, refactor later 236 outcome, norm = self._add_qasm_decision(qubit) 237 temp = np.copy(self._statevector) 238 self._statevector.fill(0.0) 239 # measurement 240 for ii in range(1 << self._number_of_qubits): 241 if (ii >> qubit) & 1 == int(outcome): 242 temp[ii] = temp[ii]/norm 243 else: 244 temp[ii] = 0 245 # reset 246 if outcome == '1': 247 for ii in range(1 << self._number_of_qubits): 248 iip = (~ (1 << qubit)) & ii # bit number qubit set to zero 249 self._statevector[iip] += temp[ii] 250 else: 251 self._statevector = temp 252 253 def _add_qasm_snapshot(self, slot): 254 """Snapshot instruction to record simulator's internal representation 255 of quantum statevector. 256 257 slot is an integer indicating a snapshot slot number. 258 """ 259 self._snapshots.setdefault(str(int(slot)), 260 {}).setdefault("statevector", 261 []).append(np.copy(self._statevector)) 262 263 def run(self, qobj): 264 """Run qobj asynchronously. 
265 266 Args: 267 qobj (dict): job description 268 269 Returns: 270 LocalJob: derived from BaseJob 271 """ 272 local_job = LocalJob(self._run_job, qobj) 273 local_job.submit() 274 return local_job 275 276 def _run_job(self, qobj): 277 """Run circuits in qobj""" 278 self._validate(qobj) 279 result_list = [] 280 self._shots = qobj.config.shots 281 self._qobj_config = qobj.config 282 start = time.time() 283 284 for circuit in qobj.experiments: 285 result_list.append(self.run_circuit(circuit)) 286 end = time.time() 287 job_id = str(uuid.uuid4()) 288 result = {'backend': self._configuration['name'], 289 'id': qobj.qobj_id, 290 'job_id': job_id, 291 'result': result_list, 292 'status': 'COMPLETED', 293 'success': True, 294 'time_taken': (end - start)} 295 296 copy_qasm_from_qobj_into_result(qobj, result) 297 298 return result_from_old_style_dict( 299 result, [circuit.header.name for circuit in qobj.experiments]) 300 301 def run_circuit(self, circuit): 302 """Run a circuit and return a single Result. 303 304 Args: 305 circuit (QobjExperiment): experiment from qobj experiments list 306 307 Returns: 308 dict: A dictionary of results which looks something like:: 309 310 { 311 "data": 312 { #### DATA CAN BE A DIFFERENT DICTIONARY FOR EACH BACKEND #### 313 "counts": {'00000': XXXX, '00001': XXXXX}, 314 "time" : xx.xxxxxxxx 315 }, 316 "status": --status (string)-- 317 } 318 Raises: 319 SimulatorError: if an error occurred. 320 """ 321 self._number_of_qubits = circuit.header.number_of_qubits 322 self._number_of_cbits = circuit.header.number_of_clbits 323 self._statevector = 0 324 self._classical_state = 0 325 self._snapshots = {} 326 cl_reg_index = [] # starting bit index of classical register 327 cl_reg_nbits = [] # number of bits in classical register 328 cbit_index = 0 329 for cl_reg in circuit.header.clbit_labels: 330 cl_reg_nbits.append(cl_reg[1]) 331 cl_reg_index.append(cbit_index) 332 cbit_index += cl_reg[1] 333 334 # Get the seed looking in circuit, qobj, and then random. 
335 seed = getattr(circuit.config, 'seed', 336 getattr(self._qobj_config, 'seed', 337 random.getrandbits(32))) 338 self._local_random.seed(seed) 339 outcomes = [] 340 341 start = time.time() 342 for _ in range(self._shots): 343 self._statevector = np.zeros(1 << self._number_of_qubits, 344 dtype=complex) 345 self._statevector[0] = 1 346 self._classical_state = 0 347 for operation in circuit.instructions: 348 if getattr(operation, 'conditional', None): 349 mask = int(operation.conditional.mask, 16) 350 if mask > 0: 351 value = self._classical_state & mask 352 while (mask & 0x1) == 0: 353 mask >>= 1 354 value >>= 1 355 if value != int(operation.conditional.val, 16): 356 continue 357 # Check if single gate 358 if operation.name in ('U', 'u1', 'u2', 'u3'): 359 params = getattr(operation, 'params', None) 360 qubit = operation.qubits[0] 361 gate = single_gate_matrix(operation.name, params) 362 self._add_qasm_single(gate, qubit) 363 # Check if CX gate 364 elif operation.name in ('id', 'u0'): 365 pass 366 elif operation.name in ('CX', 'cx'): 367 qubit0 = operation.qubits[0] 368 qubit1 = operation.qubits[1] 369 self._add_qasm_cx(qubit0, qubit1) 370 # Check if measure 371 elif operation.name == 'measure': 372 qubit = operation.qubits[0] 373 cbit = operation.clbits[0] 374 self._add_qasm_measure(qubit, cbit) 375 # Check if reset 376 elif operation.name == 'reset': 377 qubit = operation.qubits[0] 378 self._add_qasm_reset(qubit) 379 # Check if barrier 380 elif operation.name == 'barrier': 381 pass 382 # Check if snapshot command 383 elif operation.name == 'snapshot': 384 params = operation.params 385 self._add_qasm_snapshot(params[0]) 386 else: 387 backend = self._configuration['name'] 388 err_msg = '{0} encountered unrecognized operation "{1}"' 389 raise SimulatorError(err_msg.format(backend, 390 operation.name)) 391 # Turn classical_state (int) into bit string 392 outcomes.append(bin(self._classical_state)[2:].zfill( 393 self._number_of_cbits)) 394 # Return the results 395 counts = dict(Counter(outcomes)) 396 data = { 397 'counts': self._format_result(counts, cl_reg_index, cl_reg_nbits), 398 'snapshots': self._snapshots 399 } 400 end = time.time() 401 return {'name': circuit.header.name, 402 'seed': seed, 403 'shots': self._shots, 404 'data': data, 405 'status': 'DONE', 406 'success': True, 407 'time_taken': (end-start)} 408 409 def _validate(self, qobj): 410 for experiment in qobj.experiments: 411 if 'measure' not in [op.name for 412 op in experiment.instructions]: 413 logger.warning("no measurements in circuit '%s', " 414 "classical register will remain all zeros.", 415 experiment.header.name) 416 417 def _format_result(self, counts, cl_reg_index, cl_reg_nbits): 418 """Format the result bit string. 419 420 This formats the result bit strings such that spaces are inserted 421 at register divisions. 422 423 Args: 424 counts (dict): dictionary of counts e.g. {'1111': 1000, '0000':5} 425 cl_reg_index (list): starting bit index of classical register 426 cl_reg_nbits (list): total amount of bits in classical register 427 Returns: 428 dict: spaces inserted into dictionary keys at register boundaries. 
429 """ 430 fcounts = {} 431 for key, value in counts.items(): 432 if cl_reg_nbits: 433 new_key = [key[-cl_reg_nbits[0]:]] 434 for index, nbits in zip(cl_reg_index[1:], 435 cl_reg_nbits[1:]): 436 new_key.insert(0, key[-(index+nbits):-index]) 437 fcounts[' '.join(new_key)] = value 438 return fcounts 439 [end of qiskit/backends/local/qasm_simulator_py.py] [start of qiskit/unroll/_jsonbackend.py] 1 # -*- coding: utf-8 -*- 2 3 # Copyright 2017, IBM. 4 # 5 # This source code is licensed under the Apache License, Version 2.0 found in 6 # the LICENSE.txt file in the root directory of this source tree. 7 8 """Backend for the unroller that composes qasm into json file. 9 10 The input is a AST and a basis set and returns a json memory object:: 11 12 { 13 "header": { 14 "number_of_qubits": 2, // int 15 "number_of_clbits": 2, // int 16 "qubit_labels": [["q", 0], ["v", 0]], // list[list[string, int]] 17 "clbit_labels": [["c", 2]], // list[list[string, int]] 18 } 19 "instructions": // list[map] 20 [ 21 { 22 "name": , // required -- string 23 "params": , // optional -- list[double] 24 "texparams": , // optional -- list[string] 25 "qubits": , // optional -- list[int] 26 "cbits": , //optional -- list[int] 27 "conditional": // optional -- map 28 { 29 "type": "equals", // string 30 "mask": "0xHexadecimalString", // big int 31 "val": "0xHexadecimalString", // big int 32 } 33 }, 34 ] 35 } 36 """ 37 from qiskit.unroll import BackendError 38 from qiskit.unroll import UnrollerBackend 39 from qiskit import QISKitError 40 41 42 class JsonBackend(UnrollerBackend): 43 """Backend for the unroller that makes a Json quantum circuit.""" 44 45 def __init__(self, basis=None): 46 """Setup this backend. 47 48 basis is a list of operation name strings. 49 The default basis is ["U", "CX"]. 50 """ 51 super().__init__(basis) 52 self.circuit = {} 53 self.circuit['instructions'] = [] 54 self.circuit['header'] = { 55 'number_of_qubits': 0, 56 'number_of_clbits': 0, 57 'qubit_labels': [], 58 'clbit_labels': [] 59 } 60 self._number_of_qubits = 0 61 self._number_of_cbits = 0 62 self._qubit_order = [] 63 self._cbit_order = [] 64 self._qubit_order_internal = {} 65 self._cbit_order_internal = {} 66 67 self.creg = None 68 self.cval = None 69 self.gates = {} 70 if basis: 71 self.basis = basis 72 else: 73 self.basis = [] # default, unroll to U, CX 74 self.listen = True 75 self.in_gate = "" 76 self.printed_gates = [] 77 78 def set_basis(self, basis): 79 """Declare the set of user-defined gates to emit. 80 81 basis is a list of operation name strings. 82 """ 83 self.basis = basis 84 85 def version(self, version): 86 """Print the version string. 87 88 v is a version number. 89 """ 90 pass 91 92 def new_qreg(self, name, size): 93 """Create a new quantum register. 94 95 name = name of the register 96 sz = size of the register 97 """ 98 assert size >= 0, "invalid qreg size" 99 100 for j in range(size): 101 self._qubit_order.append([name, j]) 102 self._qubit_order_internal[(name, j)] = self._number_of_qubits + j 103 self._number_of_qubits += size 104 self.circuit['header']['number_of_qubits'] = self._number_of_qubits 105 self.circuit['header']['qubit_labels'] = self._qubit_order 106 107 def new_creg(self, name, size): 108 """Create a new classical register. 
109 110 name = name of the register 111 sz = size of the register 112 """ 113 assert size >= 0, "invalid creg size" 114 self._cbit_order.append([name, size]) 115 for j in range(size): 116 self._cbit_order_internal[(name, j)] = self._number_of_cbits + j 117 self._number_of_cbits += size 118 self.circuit['header']['number_of_clbits'] = self._number_of_cbits 119 self.circuit['header']['clbit_labels'] = self._cbit_order 120 121 def define_gate(self, name, gatedata): 122 """Define a new quantum gate. 123 124 name is a string. 125 gatedata is the AST node for the gate. 126 """ 127 self.gates[name] = gatedata 128 129 def u(self, arg, qubit, nested_scope=None): 130 """Fundamental single-qubit gate. 131 132 arg is 3-tuple of Node expression objects. 133 qubit is (regname, idx) tuple. 134 nested_scope is a list of dictionaries mapping expression variables 135 to Node expression objects in order of increasing nesting depth. 136 """ 137 if self.listen: 138 if "U" not in self.basis: 139 self.basis.append("U") 140 qubit_indices = [self._qubit_order_internal.get(qubit)] 141 self.circuit['instructions'].append({ 142 'name': "U", 143 # TODO: keep these real for now, until a later time 144 'params': [float(arg[0].real(nested_scope)), 145 float(arg[1].real(nested_scope)), 146 float(arg[2].real(nested_scope))], 147 'texparams': [arg[0].latex(prec=8, nested_scope=nested_scope), 148 arg[1].latex(prec=8, nested_scope=nested_scope), 149 arg[2].latex(prec=8, nested_scope=nested_scope)], 150 'qubits': qubit_indices 151 }) 152 self._add_condition() 153 154 def _add_condition(self): 155 """Check for a condition (self.creg) and add fields if necessary. 156 157 Fields are added to the last operation in the circuit. 158 """ 159 if self.creg is not None: 160 mask = 0 161 for cbit, index in self._cbit_order_internal.items(): 162 if cbit[0] == self.creg: 163 mask |= (1 << index) 164 # Would be nicer to zero pad the mask, but we 165 # need to know the total number of cbits. 166 # format_spec = "{0:#0{%d}X}" % number_of_clbits 167 # format_spec.format(mask) 168 conditional = { 169 'type': "equals", 170 'mask': "0x%X" % mask, 171 'val': "0x%X" % self.cval 172 } 173 self.circuit['instructions'][-1]['conditional'] = conditional 174 175 def cx(self, qubit0, qubit1): 176 """Fundamental two-qubit gate. 177 178 qubit0 is (regname, idx) tuple for the control qubit. 179 qubit1 is (regname, idx) tuple for the target qubit. 180 """ 181 if self.listen: 182 if "CX" not in self.basis: 183 self.basis.append("CX") 184 qubit_indices = [self._qubit_order_internal.get(qubit0), 185 self._qubit_order_internal.get(qubit1)] 186 self.circuit['instructions'].append({ 187 'name': 'CX', 188 'qubits': qubit_indices, 189 }) 190 self._add_condition() 191 192 def measure(self, qubit, bit): 193 """Measurement operation. 194 195 qubit is (regname, idx) tuple for the input qubit. 196 bit is (regname, idx) tuple for the output bit. 197 """ 198 if "measure" not in self.basis: 199 self.basis.append("measure") 200 qubit_indices = [self._qubit_order_internal.get(qubit)] 201 clbit_indices = [self._cbit_order_internal.get(bit)] 202 self.circuit['instructions'].append({ 203 'name': 'measure', 204 'qubits': qubit_indices, 205 'clbits': clbit_indices, 206 'memory': clbit_indices.copy() 207 }) 208 self._add_condition() 209 210 def barrier(self, qubitlists): 211 """Barrier instruction. 212 213 qubitlists is a list of lists of (regname, idx) tuples. 
214 """ 215 if self.listen: 216 if "barrier" not in self.basis: 217 self.basis.append("barrier") 218 qubit_indices = [] 219 for qubitlist in qubitlists: 220 for qubits in qubitlist: 221 qubit_indices.append(self._qubit_order_internal.get(qubits)) 222 self.circuit['instructions'].append({ 223 'name': 'barrier', 224 'qubits': qubit_indices, 225 }) 226 # no conditions on barrier, even when it appears 227 # in body of conditioned gate 228 229 def reset(self, qubit): 230 """Reset instruction. 231 232 qubit is a (regname, idx) tuple. 233 """ 234 if "reset" not in self.basis: 235 self.basis.append("reset") 236 qubit_indices = [self._qubit_order_internal.get(qubit)] 237 self.circuit['instructions'].append({ 238 'name': 'reset', 239 'qubits': qubit_indices, 240 }) 241 self._add_condition() 242 243 def set_condition(self, creg, cval): 244 """Attach a current condition. 245 246 creg is a name string. 247 cval is the integer value for the test. 248 """ 249 self.creg = creg 250 self.cval = cval 251 252 def drop_condition(self): 253 """Drop the current condition.""" 254 self.creg = None 255 self.cval = None 256 257 def start_gate(self, name, args, qubits, nested_scope=None, extra_fields=None): 258 if self.listen and name not in self.basis \ 259 and self.gates[name]["opaque"]: 260 raise BackendError("opaque gate %s not in basis" % name) 261 if self.listen and name in self.basis: 262 self.in_gate = name 263 self.listen = False 264 qubit_indices = [self._qubit_order_internal.get(qubit) 265 for qubit in qubits] 266 gate_instruction = { 267 'name': name, 268 # TODO: keep these real for now, until a later time 269 'params': list(map(lambda x: float(x.real(nested_scope)), 270 args)), 271 'texparams': list(map(lambda x: 272 x.latex(prec=8, 273 nested_scope=nested_scope), 274 args)), 275 'qubits': qubit_indices, 276 } 277 if extra_fields is not None: 278 gate_instruction.update(extra_fields) 279 self.circuit['instructions'].append(gate_instruction) 280 self._add_condition() 281 282 def end_gate(self, name, args, qubits, nested_scope=None): 283 """End a custom gate. 284 285 name is name string. 286 args is list of Node expression objects. 287 qubits is list of (regname, idx) tuples. 288 nested_scope is a list of dictionaries mapping expression variables 289 to Node expression objects in order of increasing nesting depth. 290 """ 291 if name == self.in_gate: 292 self.in_gate = "" 293 self.listen = True 294 295 def get_output(self): 296 """Returns the generated circuit.""" 297 if not self._is_circuit_valid(): 298 raise QISKitError("Invalid circuit! Please check the syntax of your circuit." 299 "Has the Qasm parsing been called?. e.g: unroller.execute().") 300 return self.circuit 301 302 def _is_circuit_valid(self): 303 """Checks whether the circuit object is a valid one or not.""" 304 return (len(self.circuit['header']) > 0 and 305 len(self.circuit['instructions']) > 0) 306 [end of qiskit/unroll/_jsonbackend.py] [start of qiskit/unroll/_printerbackend.py] 1 # -*- coding: utf-8 -*- 2 3 # Copyright 2017, IBM. 4 # 5 # This source code is licensed under the Apache License, Version 2.0 found in 6 # the LICENSE.txt file in the root directory of this source tree. 7 8 """ 9 Backend for the unroller that prints OpenQASM. 10 """ 11 from ._backenderror import BackendError 12 from ._unrollerbackend import UnrollerBackend 13 14 15 class PrinterBackend(UnrollerBackend): 16 """Backend for the unroller that prints OpenQASM. 17 18 This backend also serves as an example class for other unroller backends. 
19 """ 20 21 def __init__(self, basis=None): 22 """Setup this backend. 23 24 basis is a list of operation name strings. 25 """ 26 super().__init__(basis) 27 self.prec = 15 28 self.creg = None 29 self.cval = None 30 self.gates = {} 31 self.comments = False 32 if basis: 33 self.basis = basis 34 else: 35 self.basis = [] 36 self.listen = True 37 self.in_gate = "" 38 self.printed_gates = [] 39 40 def set_comments(self, comments): 41 """Set comments to True to enable.""" 42 self.comments = comments 43 44 def set_basis(self, basis): 45 """Declare the set of user-defined gates to emit. 46 47 basis is a list of operation name strings. 48 """ 49 self.basis = basis 50 51 def version(self, version): 52 """Print the version string. 53 54 v is a version number. 55 """ 56 print("OPENQASM %s;" % version) 57 58 def new_qreg(self, name, size): 59 """Create a new quantum register. 60 61 name = name of the register 62 sz = size of the register 63 """ 64 assert size >= 0, "invalid qreg size" 65 print("qreg %s[%d];" % (name, size)) 66 67 def new_creg(self, name, size): 68 """Create a new classical register. 69 70 name = name of the register 71 sz = size of the register 72 """ 73 print("creg %s[%d];" % (name, size)) 74 75 def _gate_string(self, name): 76 """Print OPENQASM for the named gate.""" 77 out = "" 78 if self.gates[name]["opaque"]: 79 out = "opaque " + name 80 else: 81 out = "gate " + name 82 if self.gates[name]["n_args"] > 0: 83 out += "(" + ",".join(self.gates[name]["args"]) + ")" 84 out += " " + ",".join(self.gates[name]["bits"]) 85 if self.gates[name]["opaque"]: 86 out += ";" 87 else: 88 out += "\n{\n" + self.gates[name]["body"].qasm() + "}" 89 return out 90 91 def define_gate(self, name, gatedata): 92 """Define a new quantum gate. 93 94 name is a string. 95 gatedata is the AST node for the gate. 96 """ 97 atomics = ["U", "CX", "measure", "reset", "barrier"] 98 self.gates[name] = gatedata 99 # Print out the gate definition if it is in self.basis 100 if name in self.basis and name not in atomics: 101 # Print the hierarchy of gates this gate calls 102 if not self.gates[name]["opaque"]: 103 calls = self.gates[name]["body"].calls() 104 for call in calls: 105 if call not in self.printed_gates: 106 print(self._gate_string(call)) 107 self.printed_gates.append(call) 108 # Print the gate itself 109 if name not in self.printed_gates: 110 print(self._gate_string(name)) 111 self.printed_gates.append(name) 112 113 def u(self, arg, qubit, nested_scope=None): 114 """Fundamental single qubit gate. 115 116 arg is 3-tuple of Node expression objects. 117 qubit is (regname,idx) tuple. 118 nested_scope is a list of dictionaries mapping expression variables 119 to Node expression objects in order of increasing nesting depth. 120 """ 121 if self.listen: 122 if "U" not in self.basis: 123 self.basis.append("U") 124 if self.creg is not None: 125 print("if(%s==%d) " % (self.creg, self.cval), end="") 126 print("U(%s,%s,%s) %s[%d];" % (arg[0].sym(nested_scope), 127 arg[1].sym(nested_scope), 128 arg[2].sym(nested_scope), 129 qubit[0], 130 qubit[1])) 131 132 def cx(self, qubit0, qubit1): 133 """Fundamental two qubit gate. 134 135 qubit0 is (regname,idx) tuple for the control qubit. 136 qubit1 is (regname,idx) tuple for the target qubit. 
137 """ 138 if self.listen: 139 if "CX" not in self.basis: 140 self.basis.append("CX") 141 if self.creg is not None: 142 print("if(%s==%d) " % (self.creg, self.cval), end="") 143 print("CX %s[%d],%s[%d];" % (qubit0[0], qubit0[1], 144 qubit1[0], qubit1[1])) 145 146 def measure(self, qubit, bit): 147 """Measurement operation. 148 149 qubit is (regname, idx) tuple for the input qubit. 150 bit is (regname, idx) tuple for the output bit. 151 """ 152 if "measure" not in self.basis: 153 self.basis.append("measure") 154 if self.creg is not None: 155 print("if(%s==%d) " % (self.creg, self.cval), end="") 156 print("measure %s[%d] -> %s[%d];" % (qubit[0], qubit[1], 157 bit[0], bit[1])) 158 159 def barrier(self, qubitlists): 160 """Barrier instruction. 161 162 qubitlists is a list of lists of (regname, idx) tuples. 163 """ 164 if self.listen: 165 if "barrier" not in self.basis: 166 self.basis.append("barrier") 167 names = [] 168 for qubitlist in qubitlists: 169 if len(qubitlist) == 1: 170 names.append("%s[%d]" % (qubitlist[0][0], qubitlist[0][1])) 171 else: 172 names.append("%s" % qubitlist[0][0]) 173 print("barrier %s;" % ",".join(names)) 174 175 def reset(self, qubit): 176 """Reset instruction. 177 178 qubit is a (regname, idx) tuple. 179 """ 180 if "reset" not in self.basis: 181 self.basis.append("reset") 182 if self.creg is not None: 183 print("if(%s==%d) " % (self.creg, self.cval), end="") 184 print("reset %s[%d];" % (qubit[0], qubit[1])) 185 186 def set_condition(self, creg, cval): 187 """Attach a current condition. 188 189 creg is a name string. 190 cval is the integer value for the test. 191 """ 192 self.creg = creg 193 self.cval = cval 194 if self.comments: 195 print("// set condition %s, %s" % (creg, cval)) 196 197 def drop_condition(self): 198 """Drop the current condition.""" 199 self.creg = None 200 self.cval = None 201 if self.comments: 202 print("// drop condition") 203 204 def start_gate(self, name, args, qubits, nested_scope=None, extra_fields=None): 205 """Begin a custom gate. 206 207 name is name string. 208 args is list of Node expression objects. 209 qubits is list of (regname, idx) tuples. 210 nested_scope is a list of dictionaries mapping expression variables 211 to Node expression objects in order of increasing nesting depth. 212 """ 213 if self.listen and self.comments: 214 print("// start %s, %s, %s" % (name, 215 list(map(lambda x: 216 str(x.sym(nested_scope)), 217 args)), 218 qubits)) 219 if self.listen and name not in self.basis \ 220 and self.gates[name]["opaque"]: 221 raise BackendError("opaque gate %s not in basis" % name) 222 if self.listen and name in self.basis: 223 self.in_gate = name 224 self.listen = False 225 squbits = ["%s[%d]" % (x[0], x[1]) for x in qubits] 226 if self.creg is not None: 227 print("if(%s==%d) " % (self.creg, self.cval), end="") 228 print(name, end="") 229 if args: 230 print("(%s)" % ",".join(map(lambda x: 231 str(x.sym(nested_scope)), 232 args)), end="") 233 print(" %s;" % ",".join(squbits)) 234 235 def end_gate(self, name, args, qubits, nested_scope=None): 236 """End a custom gate. 237 238 name is name string. 239 args is list of Node expression objects. 240 qubits is list of (regname, idx) tuples. 241 nested_scope is a list of dictionaries mapping expression variables 242 to Node expression objects in order of increasing nesting depth. 
243 """ 244 if name == self.in_gate: 245 self.in_gate = "" 246 self.listen = True 247 if self.listen and self.comments: 248 print("// end %s, %s, %s" % (name, 249 list(map(lambda x: 250 str(x.sym(nested_scope)), 251 args)), 252 qubits)) 253 254 def get_output(self): 255 """This backend will return nothing, as the output has been directly 256 written to screen""" 257 pass 258 [end of qiskit/unroll/_printerbackend.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
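As a rough illustration of how a backend like the `JsonBackend` shown above is driven, the sketch below parses a small OpenQASM program and unrolls it into the JSON structure described in the module docstring. The import paths and the `Qasm(data=...)` / `Unroller(ast, backend)` signatures are assumptions based on the Qiskit API of this era, not details taken from the excerpt.

```python
# Hypothetical driver for the JsonBackend listed above (Qiskit ~0.6-era API;
# exact import paths and constructor signatures are assumptions).
from qiskit.qasm import Qasm
from qiskit.unroll import Unroller, JsonBackend

QASM = """
OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];
creg c[2];
h q[0];
cx q[0],q[1];
measure q -> c;
"""

ast = Qasm(data=QASM).parse()                 # build the OpenQASM AST
backend = JsonBackend(basis=['u1', 'u2', 'u3', 'cx', 'id'])
unroller = Unroller(ast, backend)
unroller.execute()                            # walk the AST, emitting into the backend
circuit_json = backend.get_output()           # dict with 'header' and 'instructions'
print(circuit_json['header']['qubit_labels'])
```

If the unroller is never executed, `get_output()` raises the `QISKitError` guarded by `_is_circuit_valid` in the listing above.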
Qiskit/qiskit
5de883f2de23a60e322fd890243b27163e2bbaae
LaTeX Barriers are not centered between gates and measures on all columns

<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->

### Information

- **Qiskit Terra version**: latest master
- **Python version**: 3.7
- **Operating system**: linux

### What is the current behavior?

@nonhermitian's example in issue #938 shows a circuit with a barrier being drawn with a measure after a gate, but the barrier is not centered between the gate and the measure on q0_15:

![image](https://user-images.githubusercontent.com/2447371/46037725-0b024580-c0d7-11e8-8dc6-f3d15fadb04f.png)

The barrier is drawn from q0_0 (because the \barrier command in LaTeX draws the barrier line down from the topmost bit), and the default horizontal offset is used because there is no measure immediately after the gate on that bit. This ends up being incorrect on q0_15, because there is a measure immediately after the gate on that bit, which moves the center line between those two boxes to the left.

### Steps to reproduce the problem

Draw a circuit with a measure right after a gate on a bit that is covered by the barrier but is not the bit where \barrier will be written in the LaTeX; a minimal sketch is given below.

### What is the expected behavior?

The barrier line should be centered between gates and measures on all bits covered by a barrier, not just the topmost bit (where the \barrier call is made from in the LaTeX).

### Suggested solutions

Adjust the check in the LaTeX circuit drawer that looks for a measure directly after a gate on a bit so that it checks all bits covered by the barrier, not just the bit the \barrier command is written from.
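For what it's worth, a minimal script along the lines below should reproduce the mis-centered barrier described above. The register sizes, gate choice and the `circuit_drawer(..., output='latex_source')` call are assumptions about the Terra API of this period, not details taken from the report.

```python
# Hypothetical reproduction, assuming the ~0.6-era Terra API.
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from qiskit.tools.visualization import circuit_drawer

q = QuantumRegister(16, 'q0')
c = ClassicalRegister(16, 'c0')
circ = QuantumCircuit(q, c)

circ.h(q[0])                 # gate on the topmost bit: no measure follows it
circ.h(q[15])                # gate on q0_15 ...
circ.barrier(q)              # barrier covering every bit, emitted from q0_0
circ.measure(q[15], c[15])   # ... with a measure immediately after the gate

# 'latex_source' output is an assumption about the drawer entry point of the
# time; the barrier offset on q0_15 should be visibly off in the rendered PDF.
print(circuit_drawer(circ, output='latex_source'))
```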
2018-09-25T21:04:15Z
<patch>
diff --git a/qiskit/tools/visualization/_circuit_visualization.py b/qiskit/tools/visualization/_circuit_visualization.py
--- a/qiskit/tools/visualization/_circuit_visualization.py
+++ b/qiskit/tools/visualization/_circuit_visualization.py
@@ -1257,10 +1257,16 @@ def _build_latex_array(self, aliases=None):
 
                     try:
                         self._latex[pos_1][columns] = "\\meter"
-                        prev_entry = self._latex[pos_1][columns - 1]
-                        if 'barrier' in prev_entry:
-                            self._latex[pos_1][columns - 1] = prev_entry.replace(
-                                '\\barrier{', '\\barrier[-1.15em]{')
+                        prev_column = [x[columns - 1] for x in self._latex]
+                        for item, prev_entry in enumerate(prev_column):
+                            if 'barrier' in prev_entry:
+                                span = re.search('barrier{(.*)}', prev_entry)
+                                if span and (
+                                        item + int(span.group(1))) - pos_1 >= 0:
+                                    self._latex[
+                                        item][columns - 1] = prev_entry.replace(
+                                            '\\barrier{', '\\barrier[-1.15em]{')
+
                         self._latex[pos_2][columns] = \
                             "\\cw \\cwx[-" + str(pos_2 - pos_1) + "]"
                     except Exception as e:
</patch>
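Read on its own, the new logic scans the whole previous column instead of only the measured wire: for every cell that carries a `\barrier{N}`, it checks whether the barrier's span (the wire index plus `N`) reaches down to the measured wire, and only then applies the `-1.15em` offset. Below is a minimal self-contained sketch of that scan; the cell table is made up for illustration, and only the regex and the offset string come from the patch.

```python
import re

def adjust_barriers(latex, columns, pos_1, offset='-1.15em'):
    """Shift any \\barrier in the previous column whose span covers wire pos_1.

    `latex` is a table of LaTeX cell strings indexed as latex[wire][column];
    this mirrors the patch's loop, but outside the drawer class.
    """
    prev_column = [row[columns - 1] for row in latex]
    for item, prev_entry in enumerate(prev_column):
        if 'barrier' in prev_entry:
            span = re.search(r'barrier{(.*)}', prev_entry)
            # The barrier starts on wire `item` and spans N wires below it,
            # so it covers the measured wire when item + N >= pos_1.
            if span and (item + int(span.group(1))) - pos_1 >= 0:
                latex[item][columns - 1] = prev_entry.replace(
                    '\\barrier{', '\\barrier[%s]{' % offset)
    return latex

# Made-up 2-wire, 2-column example: a barrier on wire 0 spanning 1 wire down,
# followed by a measure on wire 1 in the next column.
table = [[r'\gate{H} \barrier{1}', r'\qw'],
         [r'\gate{H}', r'\meter']]
adjust_barriers(table, columns=1, pos_1=1)
print(table[0][0])   # -> \gate{H} \barrier[-1.15em]{1}
```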
[]
[]
pandas-dev__pandas-22037
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Rank 'na_option="bottom"' Usage Clarification #### Code Sample, a copy-pastable example if possible ```python df = pd.DataFrame({'val': [2, np.nan, 2, 8, 2, np.nan, 6]}) # Works as documented - missing values are highest rank In []: df.rank(na_option='top') Out []: val 0 4.0 1 1.5 2 4.0 3 7.0 4 4.0 5 1.5 6 6.0 # Technically works - missing values are lowest rank In []: df.rank(na_option='bottom') Out []: val 0 2.0 1 6.5 2 2.0 3 5.0 4 2.0 5 6.5 6 4.0 # However, we could say anything besides 'foo' In []: df.rank(na_option='foo') Out []: val 0 2.0 1 6.5 2 2.0 3 5.0 4 2.0 5 6.5 6 4.0 ``` #### Problem description For the sake of being explicit it would be better to raise for an unknown `na_option`, or alternately update the documentation to reflect that any value outside of 'keep' and 'top' would trigger this behavior <details> INSTALLED VERSIONS ------------------ commit: d3f7d2a666aa824e2df98083aa5c1fd9bb63252e python: 3.6.3.final.0 python-bits: 64 OS: Darwin OS-release: 17.4.0 machine: x86_64 processor: i386 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 pandas: 0.23.0.dev0+169.gd3f7d2a66.dirty pytest: 3.2.1 pip: 9.0.1 setuptools: 36.5.0.post20170921 Cython: 0.26.1 numpy: 1.13.3 scipy: 1.0.0 pyarrow: 0.8.0 xarray: 0.10.0 IPython: 6.2.1 sphinx: 1.6.3 patsy: 0.4.1 dateutil: 2.6.1 pytz: 2017.2 blosc: None bottleneck: 1.2.1 tables: 3.4.2 numexpr: 2.6.4 feather: 0.4.0 matplotlib: 2.1.1 openpyxl: 2.5.0b1 xlrd: 1.1.0 xlwt: 1.3.0 xlsxwriter: 1.0.2 lxml: 4.1.1 bs4: 4.6.0 html5lib: 1.0.1 sqlalchemy: 1.1.13 pymysql: 0.7.11.None psycopg2: None jinja2: 2.10 s3fs: 0.1.2 fastparquet: 0.1.3 pandas_gbq: None pandas_datareader: None </details> </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 30 </a> 31 </tr> 32 <tr> 33 <td>License</td> 34 <td> 35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 37 </a> 38 </td> 39 </tr> 40 <tr> 41 <td>Build Status</td> 42 <td> 43 <a href="https://travis-ci.org/pandas-dev/pandas"> 44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 45 </a> 46 </td> 47 </tr> 48 <tr> 49 <td></td> 50 <td> 51 <a href="https://circleci.com/gh/pandas-dev/pandas"> 52 <img src="https://circleci.com/gh/circleci/mongofinil/tree/master.svg?style=shield&circle-token=223d8cafa7b02902c3e150242520af8944e34671" alt="circleci build status" /> 53 </a> 54 </td> 55 </tr> 56 <tr> 57 <td></td> 58 <td> 59 <a href="https://ci.appveyor.com/project/pandas-dev/pandas"> 60 <img src="https://ci.appveyor.com/api/projects/status/86vn83mxgnl4xf1s/branch/master?svg=true" alt="appveyor build status" /> 61 </a> 62 
</td> 63 </tr> 64 <tr> 65 <td>Coverage</td> 66  <td> 67 <a href="https://codecov.io/gh/pandas-dev/pandas"> 68 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 69 </a> 70 </td> 71 </tr> 72 <tr> 73 <td>Downloads</td> 74 <td> 75 <a href="https://pandas.pydata.org"> 76 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 77 </a> 78 </td> 79 </tr> 80 <tr> 81 <td>Gitter</td> 82 <td> 83 <a href="https://gitter.im/pydata/pandas"> 84 <img src="https://badges.gitter.im/Join%20Chat.svg" 85 </a> 86 </td> 87 </tr> 88 </table> 89 90 91 92 ## What is it 93 94 **pandas** is a Python package providing fast, flexible, and expressive data 95 structures designed to make working with "relational" or "labeled" data both 96 easy and intuitive. It aims to be the fundamental high-level building block for 97 doing practical, **real world** data analysis in Python. Additionally, it has 98 the broader goal of becoming **the most powerful and flexible open source data 99 analysis / manipulation tool available in any language**. It is already well on 100 its way toward this goal. 101 102 ## Main Features 103 Here are just a few of the things that pandas does well: 104 105 - Easy handling of [**missing data**][missing-data] (represented as 106 `NaN`) in floating point as well as non-floating point data 107 - Size mutability: columns can be [**inserted and 108 deleted**][insertion-deletion] from DataFrame and higher dimensional 109 objects 110 - Automatic and explicit [**data alignment**][alignment]: objects can 111 be explicitly aligned to a set of labels, or the user can simply 112 ignore the labels and let `Series`, `DataFrame`, etc. automatically 113 align the data for you in computations 114 - Powerful, flexible [**group by**][groupby] functionality to perform 115 split-apply-combine operations on data sets, for both aggregating 116 and transforming data 117 - Make it [**easy to convert**][conversion] ragged, 118 differently-indexed data in other Python and NumPy data structures 119 into DataFrame objects 120 - Intelligent label-based [**slicing**][slicing], [**fancy 121 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 122 large data sets 123 - Intuitive [**merging**][merging] and [**joining**][joining] data 124 sets 125 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 126 data sets 127 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 128 labels per tick) 129 - Robust IO tools for loading data from [**flat files**][flat-files] 130 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 131 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 132 - [**Time series**][timeseries]-specific functionality: date range 133 generation and frequency conversion, moving window statistics, 134 moving window linear regressions, date shifting and lagging, etc. 
135 136 137 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 138 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 139 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 140 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 141 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 142 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 143 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 144 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 145 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 146 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 147 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 148 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 149 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 150 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 151 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 152 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 153 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 154 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 155 156 ## Where to get it 157 The source code is currently hosted on GitHub at: 158 https://github.com/pandas-dev/pandas 159 160 Binary installers for the latest released version are available at the [Python 161 package index](https://pypi.org/project/pandas) and on conda. 162 163 ```sh 164 # conda 165 conda install pandas 166 ``` 167 168 ```sh 169 # or PyPI 170 pip install pandas 171 ``` 172 173 ## Dependencies 174 - [NumPy](https://www.numpy.org): 1.9.0 or higher 175 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 176 - [pytz](https://pythonhosted.org/pytz): 2011k or higher 177 178 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 179 for recommended and optional dependencies. 180 181 ## Installation from sources 182 To install pandas from source you need Cython in addition to the normal 183 dependencies above. Cython can be installed from pypi: 184 185 ```sh 186 pip install cython 187 ``` 188 189 In the `pandas` directory (same one where you found this file after 190 cloning the git repo), execute: 191 192 ```sh 193 python setup.py install 194 ``` 195 196 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 197 198 ```sh 199 python setup.py develop 200 ``` 201 202 Alternatively, you can use `pip` if you want all the dependencies pulled 203 in automatically (the `-e` option is for installing it in [development 204 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 205 206 ```sh 207 pip install -e . 208 ``` 209 210 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
211 212 ## License 213 [BSD 3](LICENSE) 214 215 ## Documentation 216 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 217 218 ## Background 219 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 220 has been under active development since then. 221 222 ## Getting Help 223 224 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 225 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 226 227 ## Discussion and Development 228 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 229 230 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 231 232 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 233 234 A detailed overview on how to contribute can be found in the **[contributing guide.](https://pandas.pydata.org/pandas-docs/stable/contributing.html)** 235 236 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub “issues” tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 237 238 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 239 240 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 241 242 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 243 [end of README.md] [start of pandas/core/reshape/tile.py] 1 """ 2 Quantilization functions and related stuff 3 """ 4 from functools import partial 5 6 from pandas.core.dtypes.missing import isna 7 from pandas.core.dtypes.common import ( 8 is_integer, 9 is_scalar, 10 is_categorical_dtype, 11 is_datetime64_dtype, 12 is_timedelta64_dtype, 13 is_datetime64tz_dtype, 14 is_datetime_or_timedelta_dtype, 15 ensure_int64) 16 17 import pandas.core.algorithms as algos 18 import pandas.core.nanops as nanops 19 from pandas._libs.lib import infer_dtype 20 from pandas import (to_timedelta, to_datetime, 21 Categorical, Timestamp, Timedelta, 22 Series, Index, Interval, IntervalIndex) 23 24 import numpy as np 25 26 27 def cut(x, bins, right=True, labels=None, retbins=False, precision=3, 28 include_lowest=False, duplicates='raise'): 29 """ 30 Bin values into discrete intervals. 31 32 Use `cut` when you need to segment and sort data values into bins. 
This 33 function is also useful for going from a continuous variable to a 34 categorical variable. For example, `cut` could convert ages to groups of 35 age ranges. Supports binning into an equal number of bins, or a 36 pre-specified array of bins. 37 38 Parameters 39 ---------- 40 x : array-like 41 The input array to be binned. Must be 1-dimensional. 42 bins : int, sequence of scalars, or pandas.IntervalIndex 43 The criteria to bin by. 44 45 * int : Defines the number of equal-width bins in the range of `x`. The 46 range of `x` is extended by .1% on each side to include the minimum 47 and maximum values of `x`. 48 * sequence of scalars : Defines the bin edges allowing for non-uniform 49 width. No extension of the range of `x` is done. 50 * IntervalIndex : Defines the exact bins to be used. 51 52 right : bool, default True 53 Indicates whether `bins` includes the rightmost edge or not. If 54 ``right == True`` (the default), then the `bins` ``[1, 2, 3, 4]`` 55 indicate (1,2], (2,3], (3,4]. This argument is ignored when 56 `bins` is an IntervalIndex. 57 labels : array or bool, optional 58 Specifies the labels for the returned bins. Must be the same length as 59 the resulting bins. If False, returns only integer indicators of the 60 bins. This affects the type of the output container (see below). 61 This argument is ignored when `bins` is an IntervalIndex. 62 retbins : bool, default False 63 Whether to return the bins or not. Useful when bins is provided 64 as a scalar. 65 precision : int, default 3 66 The precision at which to store and display the bins labels. 67 include_lowest : bool, default False 68 Whether the first interval should be left-inclusive or not. 69 duplicates : {default 'raise', 'drop'}, optional 70 If bin edges are not unique, raise ValueError or drop non-uniques. 71 72 .. versionadded:: 0.23.0 73 74 Returns 75 ------- 76 out : pandas.Categorical, Series, or ndarray 77 An array-like object representing the respective bin for each value 78 of `x`. The type depends on the value of `labels`. 79 80 * True (default) : returns a Series for Series `x` or a 81 pandas.Categorical for all other inputs. The values stored within 82 are Interval dtype. 83 84 * sequence of scalars : returns a Series for Series `x` or a 85 pandas.Categorical for all other inputs. The values stored within 86 are whatever the type in the sequence is. 87 88 * False : returns an ndarray of integers. 89 90 bins : numpy.ndarray or IntervalIndex. 91 The computed or specified bins. Only returned when `retbins=True`. 92 For scalar or sequence `bins`, this is an ndarray with the computed 93 bins. If set `duplicates=drop`, `bins` will drop non-unique bin. For 94 an IntervalIndex `bins`, this is equal to `bins`. 95 96 See Also 97 -------- 98 qcut : Discretize variable into equal-sized buckets based on rank 99 or based on sample quantiles. 100 pandas.Categorical : Array type for storing data that come from a 101 fixed set of values. 102 Series : One-dimensional array with axis labels (including time series). 103 pandas.IntervalIndex : Immutable Index implementing an ordered, 104 sliceable set. 105 106 Notes 107 ----- 108 Any NA values will be NA in the result. Out of bounds values will be NA in 109 the resulting Series or pandas.Categorical object. 110 111 Examples 112 -------- 113 Discretize into three equal-sized bins. 114 115 >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3) 116 ... # doctest: +ELLIPSIS 117 [(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ... 
118 Categories (3, interval[float64]): [(0.994, 3.0] < (3.0, 5.0] ... 119 120 >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 3, retbins=True) 121 ... # doctest: +ELLIPSIS 122 ([(0.994, 3.0], (5.0, 7.0], (3.0, 5.0], (3.0, 5.0], (5.0, 7.0], ... 123 Categories (3, interval[float64]): [(0.994, 3.0] < (3.0, 5.0] ... 124 array([0.994, 3. , 5. , 7. ])) 125 126 Discovers the same bins, but assign them specific labels. Notice that 127 the returned Categorical's categories are `labels` and is ordered. 128 129 >>> pd.cut(np.array([1, 7, 5, 4, 6, 3]), 130 ... 3, labels=["bad", "medium", "good"]) 131 [bad, good, medium, medium, good, bad] 132 Categories (3, object): [bad < medium < good] 133 134 ``labels=False`` implies you just want the bins back. 135 136 >>> pd.cut([0, 1, 1, 2], bins=4, labels=False) 137 array([0, 1, 1, 3]) 138 139 Passing a Series as an input returns a Series with categorical dtype: 140 141 >>> s = pd.Series(np.array([2, 4, 6, 8, 10]), 142 ... index=['a', 'b', 'c', 'd', 'e']) 143 >>> pd.cut(s, 3) 144 ... # doctest: +ELLIPSIS 145 a (1.992, 4.667] 146 b (1.992, 4.667] 147 c (4.667, 7.333] 148 d (7.333, 10.0] 149 e (7.333, 10.0] 150 dtype: category 151 Categories (3, interval[float64]): [(1.992, 4.667] < (4.667, ... 152 153 Passing a Series as an input returns a Series with mapping value. 154 It is used to map numerically to intervals based on bins. 155 156 >>> s = pd.Series(np.array([2, 4, 6, 8, 10]), 157 ... index=['a', 'b', 'c', 'd', 'e']) 158 >>> pd.cut(s, [0, 2, 4, 6, 8, 10], labels=False, retbins=True, right=False) 159 ... # doctest: +ELLIPSIS 160 (a 0.0 161 b 1.0 162 c 2.0 163 d 3.0 164 e 4.0 165 dtype: float64, array([0, 2, 4, 6, 8])) 166 167 Use `drop` optional when bins is not unique 168 169 >>> pd.cut(s, [0, 2, 4, 6, 10, 10], labels=False, retbins=True, 170 ... right=False, duplicates='drop') 171 ... # doctest: +ELLIPSIS 172 (a 0.0 173 b 1.0 174 c 2.0 175 d 3.0 176 e 3.0 177 dtype: float64, array([0, 2, 4, 6, 8])) 178 179 Passing an IntervalIndex for `bins` results in those categories exactly. 180 Notice that values not covered by the IntervalIndex are set to NaN. 0 181 is to the left of the first bin (which is closed on the right), and 1.5 182 falls between two bins. 
183 184 >>> bins = pd.IntervalIndex.from_tuples([(0, 1), (2, 3), (4, 5)]) 185 >>> pd.cut([0, 0.5, 1.5, 2.5, 4.5], bins) 186 [NaN, (0, 1], NaN, (2, 3], (4, 5]] 187 Categories (3, interval[int64]): [(0, 1] < (2, 3] < (4, 5]] 188 """ 189 # NOTE: this binning code is changed a bit from histogram for var(x) == 0 190 191 # for handling the cut for datetime and timedelta objects 192 x_is_series, series_index, name, x = _preprocess_for_cut(x) 193 x, dtype = _coerce_to_type(x) 194 195 if not np.iterable(bins): 196 if is_scalar(bins) and bins < 1: 197 raise ValueError("`bins` should be a positive integer.") 198 199 try: # for array-like 200 sz = x.size 201 except AttributeError: 202 x = np.asarray(x) 203 sz = x.size 204 205 if sz == 0: 206 raise ValueError('Cannot cut empty array') 207 208 rng = (nanops.nanmin(x), nanops.nanmax(x)) 209 mn, mx = [mi + 0.0 for mi in rng] 210 211 if mn == mx: # adjust end points before binning 212 mn -= .001 * abs(mn) if mn != 0 else .001 213 mx += .001 * abs(mx) if mx != 0 else .001 214 bins = np.linspace(mn, mx, bins + 1, endpoint=True) 215 else: # adjust end points after binning 216 bins = np.linspace(mn, mx, bins + 1, endpoint=True) 217 adj = (mx - mn) * 0.001 # 0.1% of the range 218 if right: 219 bins[0] -= adj 220 else: 221 bins[-1] += adj 222 223 elif isinstance(bins, IntervalIndex): 224 pass 225 else: 226 bins = np.asarray(bins) 227 bins = _convert_bin_to_numeric_type(bins, dtype) 228 if (np.diff(bins) < 0).any(): 229 raise ValueError('bins must increase monotonically.') 230 231 fac, bins = _bins_to_cuts(x, bins, right=right, labels=labels, 232 precision=precision, 233 include_lowest=include_lowest, 234 dtype=dtype, 235 duplicates=duplicates) 236 237 return _postprocess_for_cut(fac, bins, retbins, x_is_series, 238 series_index, name, dtype) 239 240 241 def qcut(x, q, labels=None, retbins=False, precision=3, duplicates='raise'): 242 """ 243 Quantile-based discretization function. Discretize variable into 244 equal-sized buckets based on rank or based on sample quantiles. For example 245 1000 values for 10 quantiles would produce a Categorical object indicating 246 quantile membership for each data point. 247 248 Parameters 249 ---------- 250 x : 1d ndarray or Series 251 q : integer or array of quantiles 252 Number of quantiles. 10 for deciles, 4 for quartiles, etc. Alternately 253 array of quantiles, e.g. [0, .25, .5, .75, 1.] for quartiles 254 labels : array or boolean, default None 255 Used as labels for the resulting bins. Must be of the same length as 256 the resulting bins. If False, return only integer indicators of the 257 bins. 258 retbins : bool, optional 259 Whether to return the (bins, labels) or not. Can be useful if bins 260 is given as a scalar. 261 precision : int, optional 262 The precision at which to store and display the bins labels 263 duplicates : {default 'raise', 'drop'}, optional 264 If bin edges are not unique, raise ValueError or drop non-uniques. 265 266 .. versionadded:: 0.20.0 267 268 Returns 269 ------- 270 out : Categorical or Series or array of integers if labels is False 271 The return type (Categorical or Series) depends on the input: a Series 272 of type category if input is a Series else Categorical. Bins are 273 represented as categories when categorical data is returned. 274 bins : ndarray of floats 275 Returned only if `retbins` is True. 276 277 Notes 278 ----- 279 Out of bounds values will be NA in the resulting Categorical object 280 281 Examples 282 -------- 283 >>> pd.qcut(range(5), 4) 284 ... 
# doctest: +ELLIPSIS 285 [(-0.001, 1.0], (-0.001, 1.0], (1.0, 2.0], (2.0, 3.0], (3.0, 4.0]] 286 Categories (4, interval[float64]): [(-0.001, 1.0] < (1.0, 2.0] ... 287 288 >>> pd.qcut(range(5), 3, labels=["good", "medium", "bad"]) 289 ... # doctest: +SKIP 290 [good, good, medium, bad, bad] 291 Categories (3, object): [good < medium < bad] 292 293 >>> pd.qcut(range(5), 4, labels=False) 294 array([0, 0, 1, 2, 3]) 295 """ 296 x_is_series, series_index, name, x = _preprocess_for_cut(x) 297 298 x, dtype = _coerce_to_type(x) 299 300 if is_integer(q): 301 quantiles = np.linspace(0, 1, q + 1) 302 else: 303 quantiles = q 304 bins = algos.quantile(x, quantiles) 305 fac, bins = _bins_to_cuts(x, bins, labels=labels, 306 precision=precision, include_lowest=True, 307 dtype=dtype, duplicates=duplicates) 308 309 return _postprocess_for_cut(fac, bins, retbins, x_is_series, 310 series_index, name, dtype) 311 312 313 def _bins_to_cuts(x, bins, right=True, labels=None, 314 precision=3, include_lowest=False, 315 dtype=None, duplicates='raise'): 316 317 if duplicates not in ['raise', 'drop']: 318 raise ValueError("invalid value for 'duplicates' parameter, " 319 "valid options are: raise, drop") 320 321 if isinstance(bins, IntervalIndex): 322 # we have a fast-path here 323 ids = bins.get_indexer(x) 324 result = algos.take_nd(bins, ids) 325 result = Categorical(result, categories=bins, ordered=True) 326 return result, bins 327 328 unique_bins = algos.unique(bins) 329 if len(unique_bins) < len(bins) and len(bins) != 2: 330 if duplicates == 'raise': 331 raise ValueError("Bin edges must be unique: {bins!r}.\nYou " 332 "can drop duplicate edges by setting " 333 "the 'duplicates' kwarg".format(bins=bins)) 334 else: 335 bins = unique_bins 336 337 side = 'left' if right else 'right' 338 ids = ensure_int64(bins.searchsorted(x, side=side)) 339 340 if include_lowest: 341 # Numpy 1.9 support: ensure this mask is a Numpy array 342 ids[np.asarray(x == bins[0])] = 1 343 344 na_mask = isna(x) | (ids == len(bins)) | (ids == 0) 345 has_nas = na_mask.any() 346 347 if labels is not False: 348 if labels is None: 349 labels = _format_labels(bins, precision, right=right, 350 include_lowest=include_lowest, 351 dtype=dtype) 352 else: 353 if len(labels) != len(bins) - 1: 354 raise ValueError('Bin labels must be one fewer than ' 355 'the number of bin edges') 356 if not is_categorical_dtype(labels): 357 labels = Categorical(labels, categories=labels, ordered=True) 358 359 np.putmask(ids, na_mask, 0) 360 result = algos.take_nd(labels, ids - 1) 361 362 else: 363 result = ids - 1 364 if has_nas: 365 result = result.astype(np.float64) 366 np.putmask(result, na_mask, np.nan) 367 368 return result, bins 369 370 371 def _trim_zeros(x): 372 while len(x) > 1 and x[-1] == '0': 373 x = x[:-1] 374 if len(x) > 1 and x[-1] == '.': 375 x = x[:-1] 376 return x 377 378 379 def _coerce_to_type(x): 380 """ 381 if the passed data is of datetime/timedelta type, 382 this method converts it to numeric so that cut method can 383 handle it 384 """ 385 dtype = None 386 387 if is_datetime64tz_dtype(x): 388 dtype = x.dtype 389 elif is_datetime64_dtype(x): 390 x = to_datetime(x) 391 dtype = np.datetime64 392 elif is_timedelta64_dtype(x): 393 x = to_timedelta(x) 394 dtype = np.timedelta64 395 396 if dtype is not None: 397 # GH 19768: force NaT to NaN during integer conversion 398 x = np.where(x.notna(), x.view(np.int64), np.nan) 399 400 return x, dtype 401 402 403 def _convert_bin_to_numeric_type(bins, dtype): 404 """ 405 if the passed bin is of datetime/timedelta 
type, 406 this method converts it to integer 407 408 Parameters 409 ---------- 410 bins : list-like of bins 411 dtype : dtype of data 412 413 Raises 414 ------ 415 ValueError if bins are not of a compat dtype to dtype 416 """ 417 bins_dtype = infer_dtype(bins) 418 if is_timedelta64_dtype(dtype): 419 if bins_dtype in ['timedelta', 'timedelta64']: 420 bins = to_timedelta(bins).view(np.int64) 421 else: 422 raise ValueError("bins must be of timedelta64 dtype") 423 elif is_datetime64_dtype(dtype) or is_datetime64tz_dtype(dtype): 424 if bins_dtype in ['datetime', 'datetime64']: 425 bins = to_datetime(bins).view(np.int64) 426 else: 427 raise ValueError("bins must be of datetime64 dtype") 428 429 return bins 430 431 432 def _convert_bin_to_datelike_type(bins, dtype): 433 """ 434 Convert bins to a DatetimeIndex or TimedeltaIndex if the orginal dtype is 435 datelike 436 437 Parameters 438 ---------- 439 bins : list-like of bins 440 dtype : dtype of data 441 442 Returns 443 ------- 444 bins : Array-like of bins, DatetimeIndex or TimedeltaIndex if dtype is 445 datelike 446 """ 447 if is_datetime64tz_dtype(dtype) or is_datetime_or_timedelta_dtype(dtype): 448 bins = Index(bins.astype(np.int64), dtype=dtype) 449 return bins 450 451 452 def _format_labels(bins, precision, right=True, 453 include_lowest=False, dtype=None): 454 """ based on the dtype, return our labels """ 455 456 closed = 'right' if right else 'left' 457 458 if is_datetime64tz_dtype(dtype): 459 formatter = partial(Timestamp, tz=dtype.tz) 460 adjust = lambda x: x - Timedelta('1ns') 461 elif is_datetime64_dtype(dtype): 462 formatter = Timestamp 463 adjust = lambda x: x - Timedelta('1ns') 464 elif is_timedelta64_dtype(dtype): 465 formatter = Timedelta 466 adjust = lambda x: x - Timedelta('1ns') 467 else: 468 precision = _infer_precision(precision, bins) 469 formatter = lambda x: _round_frac(x, precision) 470 adjust = lambda x: x - 10 ** (-precision) 471 472 breaks = [formatter(b) for b in bins] 473 labels = IntervalIndex.from_breaks(breaks, closed=closed) 474 475 if right and include_lowest: 476 # we will adjust the left hand side by precision to 477 # account that we are all right closed 478 v = adjust(labels[0].left) 479 480 i = IntervalIndex([Interval(v, labels[0].right, closed='right')]) 481 labels = i.append(labels[1:]) 482 483 return labels 484 485 486 def _preprocess_for_cut(x): 487 """ 488 handles preprocessing for cut where we convert passed 489 input to array, strip the index information and store it 490 separately 491 """ 492 x_is_series = isinstance(x, Series) 493 series_index = None 494 name = None 495 496 if x_is_series: 497 series_index = x.index 498 name = x.name 499 500 # Check that the passed array is a Pandas or Numpy object 501 # We don't want to strip away a Pandas data-type here (e.g. 
datetimetz) 502 ndim = getattr(x, 'ndim', None) 503 if ndim is None: 504 x = np.asarray(x) 505 if x.ndim != 1: 506 raise ValueError("Input array must be 1 dimensional") 507 508 return x_is_series, series_index, name, x 509 510 511 def _postprocess_for_cut(fac, bins, retbins, x_is_series, 512 series_index, name, dtype): 513 """ 514 handles post processing for the cut method where 515 we combine the index information if the originally passed 516 datatype was a series 517 """ 518 if x_is_series: 519 fac = Series(fac, index=series_index, name=name) 520 521 if not retbins: 522 return fac 523 524 bins = _convert_bin_to_datelike_type(bins, dtype) 525 526 return fac, bins 527 528 529 def _round_frac(x, precision): 530 """ 531 Round the fractional part of the given number 532 """ 533 if not np.isfinite(x) or x == 0: 534 return x 535 else: 536 frac, whole = np.modf(x) 537 if whole == 0: 538 digits = -int(np.floor(np.log10(abs(frac)))) - 1 + precision 539 else: 540 digits = precision 541 return np.around(x, digits) 542 543 544 def _infer_precision(base_precision, bins): 545 """Infer an appropriate precision for _round_frac 546 """ 547 for precision in range(base_precision, 20): 548 levels = [_round_frac(b, precision) for b in bins] 549 if algos.unique(levels).size == bins.size: 550 return precision 551 return base_precision # default 552 [end of pandas/core/reshape/tile.py] [start of pandas/util/_print_versions.py] 1 import os 2 import platform 3 import sys 4 import struct 5 import subprocess 6 import codecs 7 import locale 8 import importlib 9 10 11 def get_sys_info(): 12 "Returns system information as a dict" 13 14 blob = [] 15 16 # get full commit hash 17 commit = None 18 if os.path.isdir(".git") and os.path.isdir("pandas"): 19 try: 20 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "), 21 stdout=subprocess.PIPE, 22 stderr=subprocess.PIPE) 23 so, serr = pipe.communicate() 24 except: 25 pass 26 else: 27 if pipe.returncode == 0: 28 commit = so 29 try: 30 commit = so.decode('utf-8') 31 except ValueError: 32 pass 33 commit = commit.strip().strip('"') 34 35 blob.append(('commit', commit)) 36 37 try: 38 (sysname, nodename, release, 39 version, machine, processor) = platform.uname() 40 blob.extend([ 41 ("python", '.'.join(map(str, sys.version_info))), 42 ("python-bits", struct.calcsize("P") * 8), 43 ("OS", "{sysname}".format(sysname=sysname)), 44 ("OS-release", "{release}".format(release=release)), 45 # ("Version", "{version}".format(version=version)), 46 ("machine", "{machine}".format(machine=machine)), 47 ("processor", "{processor}".format(processor=processor)), 48 ("byteorder", "{byteorder}".format(byteorder=sys.byteorder)), 49 ("LC_ALL", "{lc}".format(lc=os.environ.get('LC_ALL', "None"))), 50 ("LANG", "{lang}".format(lang=os.environ.get('LANG', "None"))), 51 ("LOCALE", '.'.join(map(str, locale.getlocale()))), 52 ]) 53 except: 54 pass 55 56 return blob 57 58 59 def show_versions(as_json=False): 60 sys_info = get_sys_info() 61 62 deps = [ 63 # (MODULE_NAME, f(mod) -> mod version) 64 ("pandas", lambda mod: mod.__version__), 65 ("pytest", lambda mod: mod.__version__), 66 ("pip", lambda mod: mod.__version__), 67 ("setuptools", lambda mod: mod.__version__), 68 ("Cython", lambda mod: mod.__version__), 69 ("numpy", lambda mod: mod.version.version), 70 ("scipy", lambda mod: mod.version.version), 71 ("pyarrow", lambda mod: mod.__version__), 72 ("xarray", lambda mod: mod.__version__), 73 ("IPython", lambda mod: mod.__version__), 74 ("sphinx", lambda mod: mod.__version__), 75 ("patsy", lambda mod: 
mod.__version__), 76 ("dateutil", lambda mod: mod.__version__), 77 ("pytz", lambda mod: mod.VERSION), 78 ("blosc", lambda mod: mod.__version__), 79 ("bottleneck", lambda mod: mod.__version__), 80 ("tables", lambda mod: mod.__version__), 81 ("numexpr", lambda mod: mod.__version__), 82 ("feather", lambda mod: mod.__version__), 83 ("matplotlib", lambda mod: mod.__version__), 84 ("openpyxl", lambda mod: mod.__version__), 85 ("xlrd", lambda mod: mod.__VERSION__), 86 ("xlwt", lambda mod: mod.__VERSION__), 87 ("xlsxwriter", lambda mod: mod.__version__), 88 ("lxml", lambda mod: mod.etree.__version__), 89 ("bs4", lambda mod: mod.__version__), 90 ("html5lib", lambda mod: mod.__version__), 91 ("sqlalchemy", lambda mod: mod.__version__), 92 ("pymysql", lambda mod: mod.__version__), 93 ("psycopg2", lambda mod: mod.__version__), 94 ("jinja2", lambda mod: mod.__version__), 95 ("s3fs", lambda mod: mod.__version__), 96 ("fastparquet", lambda mod: mod.__version__), 97 ("pandas_gbq", lambda mod: mod.__version__), 98 ("pandas_datareader", lambda mod: mod.__version__), 99 ("gcsfs", lambda mod: mod.__version__), 100 ] 101 102 deps_blob = list() 103 for (modname, ver_f) in deps: 104 try: 105 if modname in sys.modules: 106 mod = sys.modules[modname] 107 else: 108 mod = importlib.import_module(modname) 109 ver = ver_f(mod) 110 deps_blob.append((modname, ver)) 111 except: 112 deps_blob.append((modname, None)) 113 114 if (as_json): 115 try: 116 import json 117 except: 118 import simplejson as json 119 120 j = dict(system=dict(sys_info), dependencies=dict(deps_blob)) 121 122 if as_json is True: 123 print(j) 124 else: 125 with codecs.open(as_json, "wb", encoding='utf8') as f: 126 json.dump(j, f, indent=2) 127 128 else: 129 130 print("\nINSTALLED VERSIONS") 131 print("------------------") 132 133 for k, stat in sys_info: 134 print("{k}: {stat}".format(k=k, stat=stat)) 135 136 print("") 137 for k, stat in deps_blob: 138 print("{k}: {stat}".format(k=k, stat=stat)) 139 140 141 def main(): 142 from optparse import OptionParser 143 parser = OptionParser() 144 parser.add_option("-j", "--json", metavar="FILE", nargs=1, 145 help="Save output as JSON into file, pass in " 146 "'-' to output to stdout") 147 148 (options, args) = parser.parse_args() 149 150 if options.json == "-": 151 options.json = True 152 153 show_versions(as_json=options.json) 154 155 return 0 156 157 158 if __name__ == "__main__": 159 sys.exit(main()) 160 [end of pandas/util/_print_versions.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
2d0c96119391c85bd4f7ffbb847759ee3777162a
Rank 'na_option="bottom"' Usage Clarification

#### Code Sample, a copy-pastable example if possible

```python
df = pd.DataFrame({'val': [2, np.nan, 2, 8, 2, np.nan, 6]})

# Works as documented - missing values are highest rank
In []: df.rank(na_option='top')
Out []:
   val
0  4.0
1  1.5
2  4.0
3  7.0
4  4.0
5  1.5
6  6.0

# Technically works - missing values are lowest rank
In []: df.rank(na_option='bottom')
Out []:
   val
0  2.0
1  6.5
2  2.0
3  5.0
4  2.0
5  6.5
6  4.0

# However, we could say anything besides 'foo'
In []: df.rank(na_option='foo')
Out []:
   val
0  2.0
1  6.5
2  2.0
3  5.0
4  2.0
5  6.5
6  4.0
```

#### Problem description

For the sake of being explicit it would be better to raise for an unknown `na_option`, or alternatively update the documentation to reflect that any value outside of 'keep' and 'top' would trigger this behavior.

<details>

INSTALLED VERSIONS
------------------
commit: d3f7d2a666aa824e2df98083aa5c1fd9bb63252e
python: 3.6.3.final.0
python-bits: 64
OS: Darwin
OS-release: 17.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

pandas: 0.23.0.dev0+169.gd3f7d2a66.dirty
pytest: 3.2.1
pip: 9.0.1
setuptools: 36.5.0.post20170921
Cython: 0.26.1
numpy: 1.13.3
scipy: 1.0.0
pyarrow: 0.8.0
xarray: 0.10.0
IPython: 6.2.1
sphinx: 1.6.3
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.2
blosc: None
bottleneck: 1.2.1
tables: 3.4.2
numexpr: 2.6.4
feather: 0.4.0
matplotlib: 2.1.1
openpyxl: 2.5.0b1
xlrd: 1.1.0
xlwt: 1.3.0
xlsxwriter: 1.0.2
lxml: 4.1.1
bs4: 4.6.0
html5lib: 1.0.1
sqlalchemy: 1.1.13
pymysql: 0.7.11.None
psycopg2: None
jinja2: 2.10
s3fs: 0.1.2
fastparquet: 0.1.3
pandas_gbq: None
pandas_datareader: None

</details>
I think raising a value error is better, so that users will know when an argument is passed by mistake, e.g.

```
df.rank(na_option=True)
```

@peterpanmj agreed on the `ValueError` - PRs are welcome if interested!

I'm trying to create a PR for this, but I keep getting the error `remote: Permission to pandas-dev/pandas.git denied to raguiar2.` Any advice?

I'm assuming from the message that you are trying to push directly to the pandas repo instead of to your own. Assuming you have your own fork established as `origin`, you could just do:

```sh
git push origin your-branch-name
```

If in doubt, be sure to check out the forking section of the contributing guide as well:
https://pandas.pydata.org/pandas-docs/stable/contributing.html#forking
2018-07-24T06:13:29Z
<patch>
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7480,6 +7480,10 @@ def rank(self, axis=0, method='average', numeric_only=None,
             msg = "rank does not make sense when ndim > 2"
             raise NotImplementedError(msg)
 
+        if na_option not in {'keep', 'top', 'bottom'}:
+            msg = "na_option must be one of 'keep', 'top', or 'bottom'"
+            raise ValueError(msg)
+
         def ranker(data):
             ranks = algos.rank(data.values, axis=axis, method=method,
                                ascending=ascending, na_option=na_option,
</patch>
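The fix itself is just a guard clause at the top of `NDFrame.rank`. As a sanity check of the intended behaviour, a small wrapper with the same guard can be run against any pandas version; the error message matches the patch, while the wrapper function and variable names are illustrative only.

```python
import numpy as np
import pandas as pd

def validated_rank(frame, na_option='keep', **kwargs):
    # Same guard the patch adds inside NDFrame.rank: reject unknown options
    # instead of silently falling back to 'bottom'-like behaviour.
    if na_option not in {'keep', 'top', 'bottom'}:
        msg = "na_option must be one of 'keep', 'top', or 'bottom'"
        raise ValueError(msg)
    return frame.rank(na_option=na_option, **kwargs)

df = pd.DataFrame({'val': [2, np.nan, 2, 8, 2, np.nan, 6]})
print(validated_rank(df, na_option='top'))   # fine
try:
    validated_rank(df, na_option='foo')      # now fails loudly
except ValueError as exc:
    print(exc)
```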
[]
[]
pandas-dev__pandas-27083
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: Categorical.copy deep kwarg Would close #26995 if I hadn't just updated that to reflect the fact that several other pandas-internal EAs don't handle the `deep` kwarg correctly. - [ ] closes #xxxx - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 <td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /> 30 </a> 31 </td> 32 </tr> 33 <tr> 34 <td>License</td> 35 <td> 36 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 37 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 38 </a> 39 </td> 40 </tr> 41 <tr> 42 <td>Build Status</td> 43 <td> 44 <a href="https://travis-ci.org/pandas-dev/pandas"> 45 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 46 </a> 47 </td> 48 </tr> 49 <tr> 50 <td></td> 51 <td> 52 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master"> 53 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" /> 54 </a> 55 </td> 56 </tr> 57 <tr> 58 <td>Coverage</td> 59  <td> 60 <a href="https://codecov.io/gh/pandas-dev/pandas"> 61 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 62 </a> 63 </td> 64 </tr> 65 <tr> 66 <td>Downloads</td> 67 <td> 68 <a href="https://pandas.pydata.org"> 69 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 70 </a> 71 </td> 72 </tr> 73 <tr> 74 <td>Gitter</td> 75 <td> 76 <a href="https://gitter.im/pydata/pandas"> 77 <img src="https://badges.gitter.im/Join%20Chat.svg" /> 78 </a> 79 </td> 80 </tr> 81 </table> 82 83 84 85 ## What is it? 86 87 **pandas** is a Python package providing fast, flexible, and expressive data 88 structures designed to make working with "relational" or "labeled" data both 89 easy and intuitive. It aims to be the fundamental high-level building block for 90 doing practical, **real world** data analysis in Python. Additionally, it has 91 the broader goal of becoming **the most powerful and flexible open source data 92 analysis / manipulation tool available in any language**. It is already well on 93 its way towards this goal. 
94 95 ## Main Features 96 Here are just a few of the things that pandas does well: 97 98 - Easy handling of [**missing data**][missing-data] (represented as 99 `NaN`) in floating point as well as non-floating point data 100 - Size mutability: columns can be [**inserted and 101 deleted**][insertion-deletion] from DataFrame and higher dimensional 102 objects 103 - Automatic and explicit [**data alignment**][alignment]: objects can 104 be explicitly aligned to a set of labels, or the user can simply 105 ignore the labels and let `Series`, `DataFrame`, etc. automatically 106 align the data for you in computations 107 - Powerful, flexible [**group by**][groupby] functionality to perform 108 split-apply-combine operations on data sets, for both aggregating 109 and transforming data 110 - Make it [**easy to convert**][conversion] ragged, 111 differently-indexed data in other Python and NumPy data structures 112 into DataFrame objects 113 - Intelligent label-based [**slicing**][slicing], [**fancy 114 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 115 large data sets 116 - Intuitive [**merging**][merging] and [**joining**][joining] data 117 sets 118 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 119 data sets 120 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 121 labels per tick) 122 - Robust IO tools for loading data from [**flat files**][flat-files] 123 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 124 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 125 - [**Time series**][timeseries]-specific functionality: date range 126 generation and frequency conversion, moving window statistics, 127 moving window linear regressions, date shifting and lagging, etc. 
128 129 130 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 131 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 132 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 133 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 134 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 135 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 136 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 137 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 138 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 139 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 140 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 141 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 142 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 143 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 144 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 145 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 146 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 147 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 148 149 ## Where to get it 150 The source code is currently hosted on GitHub at: 151 https://github.com/pandas-dev/pandas 152 153 Binary installers for the latest released version are available at the [Python 154 package index](https://pypi.org/project/pandas) and on conda. 155 156 ```sh 157 # conda 158 conda install pandas 159 ``` 160 161 ```sh 162 # or PyPI 163 pip install pandas 164 ``` 165 166 ## Dependencies 167 - [NumPy](https://www.numpy.org): 1.13.3 or higher 168 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 169 - [pytz](https://pythonhosted.org/pytz): 2015.4 or higher 170 171 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 172 for recommended and optional dependencies. 173 174 ## Installation from sources 175 To install pandas from source you need Cython in addition to the normal 176 dependencies above. Cython can be installed from pypi: 177 178 ```sh 179 pip install cython 180 ``` 181 182 In the `pandas` directory (same one where you found this file after 183 cloning the git repo), execute: 184 185 ```sh 186 python setup.py install 187 ``` 188 189 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 190 191 ```sh 192 python setup.py develop 193 ``` 194 195 Alternatively, you can use `pip` if you want all the dependencies pulled 196 in automatically (the `-e` option is for installing it in [development 197 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 198 199 ```sh 200 pip install -e . 201 ``` 202 203 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
204 205 ## License 206 [BSD 3](LICENSE) 207 208 ## Documentation 209 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 210 211 ## Background 212 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 213 has been under active development since then. 214 215 ## Getting Help 216 217 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 218 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 219 220 ## Discussion and Development 221 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 222 223 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 224 225 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 226 227 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub. 228 229 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 230 231 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 232 233 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 234 235 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 236 [end of README.md] [start of scripts/merge-pr.py] 1 #!/usr/bin/env python 2 3 # 4 # Licensed to the Apache Software Foundation (ASF) under one or more 5 # contributor license agreements. See the NOTICE file distributed with 6 # this work for additional information regarding copyright ownership. 7 # The ASF licenses this file to You under the Apache License, Version 2.0 8 # (the "License"); you may not use this file except in compliance with 9 # the License. You may obtain a copy of the License at 10 # 11 # http://www.apache.org/licenses/LICENSE-2.0 12 # 13 # Unless required by applicable law or agreed to in writing, software 14 # distributed under the License is distributed on an "AS IS" BASIS, 15 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 16 # See the License for the specific language governing permissions and 17 # limitations under the License. 
18 # 19 20 # Utility for creating well-formed pull request merges and pushing them to 21 # Apache. 22 # usage: ./apache-pr-merge.py (see config env vars below) 23 # 24 # Lightly modified from version of this script in incubator-parquet-format 25 from subprocess import check_output 26 from requests.auth import HTTPBasicAuth 27 import requests 28 29 import os 30 import sys 31 import textwrap 32 33 PANDAS_HOME = '.' 34 PROJECT_NAME = 'pandas' 35 print("PANDAS_HOME = " + PANDAS_HOME) 36 37 # Remote name with the PR 38 PR_REMOTE_NAME = os.environ.get("PR_REMOTE_NAME", "upstream") 39 40 # Remote name where results pushed 41 PUSH_REMOTE_NAME = os.environ.get("PUSH_REMOTE_NAME", "upstream") 42 43 GITHUB_BASE = "https://github.com/pandas-dev/" + PROJECT_NAME + "/pull" 44 GITHUB_API_BASE = "https://api.github.com/repos/pandas-dev/" + PROJECT_NAME 45 46 # Prefix added to temporary branches 47 BRANCH_PREFIX = "PR_TOOL" 48 49 os.chdir(PANDAS_HOME) 50 51 auth_required = False 52 53 if auth_required: 54 GITHUB_USERNAME = os.environ['GITHUB_USER'] 55 import getpass 56 GITHUB_PASSWORD = getpass.getpass('Enter github.com password for %s:' 57 % GITHUB_USERNAME) 58 59 def get_json_auth(url): 60 auth = HTTPBasicAuth(GITHUB_USERNAME, GITHUB_PASSWORD) 61 req = requests.get(url, auth=auth) 62 return req.json() 63 64 get_json = get_json_auth 65 else: 66 def get_json_no_auth(url): 67 req = requests.get(url) 68 return req.json() 69 70 get_json = get_json_no_auth 71 72 73 def fail(msg): 74 print(msg) 75 clean_up() 76 sys.exit(-1) 77 78 79 def run_cmd(cmd): 80 if isinstance(cmd, str): 81 cmd = cmd.split(' ') 82 83 output = check_output(cmd) 84 85 if isinstance(output, bytes): 86 output = output.decode('utf-8') 87 return output 88 89 90 def continue_maybe(prompt): 91 result = input("\n%s (y/n): " % prompt) 92 if result.lower() != "y": 93 fail("Okay, exiting") 94 95 96 def continue_maybe2(prompt): 97 result = input("\n%s (y/n): " % prompt) 98 if result.lower() != "y": 99 return False 100 else: 101 return True 102 103 104 original_head = run_cmd("git rev-parse HEAD")[:8] 105 106 107 def clean_up(): 108 print("Restoring head pointer to %s" % original_head) 109 run_cmd("git checkout %s" % original_head) 110 111 branches = run_cmd("git branch").replace(" ", "").split("\n") 112 113 for branch in [b for b in branches if b.startswith(BRANCH_PREFIX)]: 114 print("Deleting local branch %s" % branch) 115 run_cmd("git branch -D %s" % branch) 116 117 118 # Merge the requested PR and return the merge hash 119 def merge_pr(pr_num, target_ref): 120 121 pr_branch_name = "%s_MERGE_PR_%s" % (BRANCH_PREFIX, pr_num) 122 target_branch_name = "%s_MERGE_PR_%s_%s" % (BRANCH_PREFIX, pr_num, 123 target_ref.upper()) 124 run_cmd("git fetch %s pull/%s/head:%s" % (PR_REMOTE_NAME, pr_num, 125 pr_branch_name)) 126 run_cmd("git fetch %s %s:%s" % (PUSH_REMOTE_NAME, target_ref, 127 target_branch_name)) 128 run_cmd("git checkout %s" % target_branch_name) 129 130 had_conflicts = False 131 try: 132 run_cmd(['git', 'merge', pr_branch_name, '--squash']) 133 except Exception as e: 134 msg = ("Error merging: %s\nWould you like to manually fix-up " 135 "this merge?" % e) 136 continue_maybe(msg) 137 msg = ("Okay, please fix any conflicts and 'git add' " 138 "conflicting files... 
Finished?") 139 continue_maybe(msg) 140 had_conflicts = True 141 142 commit_authors = run_cmd(['git', 'log', 'HEAD..%s' % pr_branch_name, 143 '--pretty=format:%an <%ae>']).split("\n") 144 distinct_authors = sorted(set(commit_authors), 145 key=lambda x: commit_authors.count(x), 146 reverse=True) 147 primary_author = distinct_authors[0] 148 commits = run_cmd(['git', 'log', 'HEAD..%s' % pr_branch_name, 149 '--pretty=format:%h [%an] %s']).split("\n\n") 150 151 merge_message_flags = [] 152 153 merge_message_flags += ["-m", title] 154 if body is not None: 155 merge_message_flags += ["-m", '\n'.join(textwrap.wrap(body))] 156 157 authors = "\n".join("Author: %s" % a for a in distinct_authors) 158 159 merge_message_flags += ["-m", authors] 160 161 if had_conflicts: 162 committer_name = run_cmd("git config --get user.name").strip() 163 committer_email = run_cmd("git config --get user.email").strip() 164 message = ("This patch had conflicts when merged, " 165 "resolved by\nCommitter: %s <%s>" 166 % (committer_name, committer_email)) 167 merge_message_flags += ["-m", message] 168 169 # The string "Closes #%s" string is required for GitHub to correctly close 170 # the PR 171 merge_message_flags += [ 172 "-m", 173 "Closes #%s from %s and squashes the following commits:" 174 % (pr_num, pr_repo_desc)] 175 for c in commits: 176 merge_message_flags += ["-m", c] 177 178 run_cmd(['git', 'commit', '--author="%s"' % primary_author] + 179 merge_message_flags) 180 181 continue_maybe("Merge complete (local ref %s). Push to %s?" % ( 182 target_branch_name, PUSH_REMOTE_NAME)) 183 184 try: 185 run_cmd('git push %s %s:%s' % (PUSH_REMOTE_NAME, target_branch_name, 186 target_ref)) 187 except Exception as e: 188 clean_up() 189 fail("Exception while pushing: %s" % e) 190 191 merge_hash = run_cmd("git rev-parse %s" % target_branch_name)[:8] 192 clean_up() 193 print("Pull request #%s merged!" % pr_num) 194 print("Merge hash: %s" % merge_hash) 195 return merge_hash 196 197 198 def update_pr(pr_num, user_login, base_ref): 199 200 pr_branch_name = "%s_MERGE_PR_%s" % (BRANCH_PREFIX, pr_num) 201 202 run_cmd("git fetch %s pull/%s/head:%s" % (PR_REMOTE_NAME, pr_num, 203 pr_branch_name)) 204 run_cmd("git checkout %s" % pr_branch_name) 205 206 continue_maybe("Update ready (local ref %s)? Push to %s/%s?" % ( 207 pr_branch_name, user_login, base_ref)) 208 209 push_user_remote = "https://github.com/%s/pandas.git" % user_login 210 211 try: 212 run_cmd('git push %s %s:%s' % (push_user_remote, pr_branch_name, 213 base_ref)) 214 except Exception as e: 215 216 if continue_maybe2("Force push?"): 217 try: 218 run_cmd( 219 'git push -f %s %s:%s' % (push_user_remote, pr_branch_name, 220 base_ref)) 221 except Exception as e: 222 fail("Exception while pushing: %s" % e) 223 clean_up() 224 else: 225 fail("Exception while pushing: %s" % e) 226 clean_up() 227 228 clean_up() 229 print("Pull request #%s updated!" % pr_num) 230 231 232 def cherry_pick(pr_num, merge_hash, default_branch): 233 pick_ref = input("Enter a branch name [%s]: " % default_branch) 234 if pick_ref == "": 235 pick_ref = default_branch 236 237 pick_branch_name = "%s_PICK_PR_%s_%s" % (BRANCH_PREFIX, pr_num, 238 pick_ref.upper()) 239 240 run_cmd("git fetch %s %s:%s" % (PUSH_REMOTE_NAME, pick_ref, 241 pick_branch_name)) 242 run_cmd("git checkout %s" % pick_branch_name) 243 run_cmd("git cherry-pick -sx %s" % merge_hash) 244 245 continue_maybe("Pick complete (local ref %s). Push to %s?" 
% ( 246 pick_branch_name, PUSH_REMOTE_NAME)) 247 248 try: 249 run_cmd('git push %s %s:%s' % (PUSH_REMOTE_NAME, pick_branch_name, 250 pick_ref)) 251 except Exception as e: 252 clean_up() 253 fail("Exception while pushing: %s" % e) 254 255 pick_hash = run_cmd("git rev-parse %s" % pick_branch_name)[:8] 256 clean_up() 257 258 print("Pull request #%s picked into %s!" % (pr_num, pick_ref)) 259 print("Pick hash: %s" % pick_hash) 260 return pick_ref 261 262 263 def fix_version_from_branch(branch, versions): 264 # Note: Assumes this is a sorted (newest->oldest) list of un-released 265 # versions 266 if branch == "master": 267 return versions[0] 268 else: 269 branch_ver = branch.replace("branch-", "") 270 return filter(lambda x: x.name.startswith(branch_ver), versions)[-1] 271 272 273 pr_num = input("Which pull request would you like to merge? (e.g. 34): ") 274 pr = get_json("%s/pulls/%s" % (GITHUB_API_BASE, pr_num)) 275 276 url = pr["url"] 277 title = pr["title"] 278 body = pr["body"] 279 target_ref = pr["base"]["ref"] 280 user_login = pr["user"]["login"] 281 base_ref = pr["head"]["ref"] 282 pr_repo_desc = "%s/%s" % (user_login, base_ref) 283 284 if pr["merged"] is True: 285 print("Pull request {0} has already been merged, please backport manually" 286 .format(pr_num)) 287 sys.exit(0) 288 289 if not bool(pr["mergeable"]): 290 msg = ("Pull request {0} is not mergeable in its current form.\n" 291 "Continue? (experts only!)".format(pr_num)) 292 continue_maybe(msg) 293 294 print("\n=== Pull Request #%s ===" % pr_num) 295 296 # we may have un-printable unicode in our title 297 try: 298 title = title.encode('raw_unicode_escape') 299 except Exception: 300 pass 301 302 print("title\t{title}\nsource\t{source}\ntarget\t{target}\nurl\t{url}".format( 303 title=title, source=pr_repo_desc, target=target_ref, url=url)) 304 305 306 merged_refs = [target_ref] 307 308 print("\nProceed with updating or merging pull request #%s?" % pr_num) 309 update = input("Update PR and push to remote (r), merge locally (l), " 310 "or do nothing (n) ?") 311 update = update.lower() 312 313 if update == 'r': 314 merge_hash = update_pr(pr_num, user_login, base_ref) 315 elif update == 'l': 316 merge_hash = merge_pr(pr_num, target_ref) 317 [end of scripts/merge-pr.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
38ab7523c15d09e23c538755703aeb338ec35a1c
BUG: Categorical.copy deep kwarg Would close #26995 if I hadn't just updated that to reflect the fact that several other pandas-internal EAs don't handle the `deep` kwarg correctly. - [ ] closes #xxxx - [x] tests added / passed - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` - [ ] whatsnew entry
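To make the inconsistency above concrete, here is a minimal sketch using a hypothetical `ToyArray` wrapper (not pandas code; the names are invented for illustration): when a `copy(deep=...)` method accepts `deep` but a non-deep copy still shares the backing buffer, mutating the "copy" silently mutates the original.

```python
import numpy as np


class ToyArray:
    """Hypothetical stand-in for an ndarray-backed extension array."""

    def __init__(self, data):
        # np.asarray does not copy an existing ndarray, so wrapping is cheap
        self._data = np.asarray(data)

    def copy(self, deep=False):
        # The pattern under discussion: `deep` is accepted, but with
        # deep=False the new wrapper points at the *same* buffer.
        data = self._data.copy() if deep else self._data
        return ToyArray(data)


arr = ToyArray([1, 2, 3])

shallow = arr.copy(deep=False)
shallow._data[0] = 99
print(arr._data[0])   # 99 -- the "copy" leaked the mutation into the original

deep = arr.copy(deep=True)
deep._data[0] = -1
print(arr._data[0])   # still 99 -- this copy owns an independent buffer
```

Whether an extension array should expose `deep` at all is what the review discussion below turns on.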
# [Codecov](https://codecov.io/gh/pandas-dev/pandas/pull/27024?src=pr&el=h1) Report > Merging [#27024](https://codecov.io/gh/pandas-dev/pandas/pull/27024?src=pr&el=desc) into [master](https://codecov.io/gh/pandas-dev/pandas/commit/83fe8d78b6b086f3ceabe81cd420a3c7affe9aba?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/pandas-dev/pandas/pull/27024/graphs/tree.svg?width=650&token=eZ4WkYLtcO&height=150&src=pr)](https://codecov.io/gh/pandas-dev/pandas/pull/27024?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #27024 +/- ## ========================================== + Coverage 91.99% 92% +<.01% ========================================== Files 180 180 Lines 50774 50760 -14 ========================================== - Hits 46712 46703 -9 + Misses 4062 4057 -5 ``` | Flag | Coverage Δ | | |---|---|---| | #multiple | `90.64% <100%> (+0.01%)` | :arrow_up: | | #single | `41.86% <56.25%> (-0.05%)` | :arrow_down: | | [Impacted Files](https://codecov.io/gh/pandas-dev/pandas/pull/27024?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [pandas/core/arrays/categorical.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2NvcmUvYXJyYXlzL2NhdGVnb3JpY2FsLnB5) | `95.94% <100%> (+0.02%)` | :arrow_up: | | [pandas/core/internals/blocks.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2NvcmUvaW50ZXJuYWxzL2Jsb2Nrcy5weQ==) | `94.62% <100%> (+0.24%)` | :arrow_up: | | [pandas/core/internals/construction.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2NvcmUvaW50ZXJuYWxzL2NvbnN0cnVjdGlvbi5weQ==) | `95.95% <100%> (ø)` | :arrow_up: | | [pandas/io/gbq.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2lvL2dicS5weQ==) | `88.88% <0%> (-11.12%)` | :arrow_down: | | [pandas/core/frame.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2NvcmUvZnJhbWUucHk=) | `96.89% <0%> (-0.12%)` | :arrow_down: | | [pandas/core/ops.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2NvcmUvb3BzLnB5) | `94.66% <0%> (-0.03%)` | :arrow_down: | | [pandas/core/indexes/datetimelike.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2NvcmUvaW5kZXhlcy9kYXRldGltZWxpa2UucHk=) | `98.14% <0%> (-0.01%)` | :arrow_down: | | [pandas/io/formats/format.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2lvL2Zvcm1hdHMvZm9ybWF0LnB5) | `97.91% <0%> (ø)` | :arrow_up: | | [pandas/core/sorting.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2NvcmUvc29ydGluZy5weQ==) | `98.35% <0%> (ø)` | :arrow_up: | | [pandas/core/arrays/base.py](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree#diff-cGFuZGFzL2NvcmUvYXJyYXlzL2Jhc2UucHk=) | `99.43% <0%> (ø)` | :arrow_up: | | ... and [7 more](https://codecov.io/gh/pandas-dev/pandas/pull/27024/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/pandas-dev/pandas/pull/27024?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/pandas-dev/pandas/pull/27024?src=pr&el=footer). 
Last update [83fe8d7...82fcd54](https://codecov.io/gh/pandas-dev/pandas/pull/27024?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
What was the decision in the issue? I thought we were tending toward deprecating `deep` in `ExtensionArray.copy`, but I may be wrong.
> What was the decision in the issue? I thought we were tending toward deprecating deep in ExtensionArray.copy, but I may be wrong.

I don't think a decision has been reached there (though I agree with your assessment of the momentum). Until/unless that change is made, this is the right move. Largely motivated by wanting to clear road-blocks to #27015.
yeah let's not add the deep kwarg on EA .copy(), it doesn't really make sense; we may actually be able to deprecate it entirely on Series/DataFrame as well (separate issue).
> yeah let's not add the deep kwarg on EA .copy()

The issue is that EA already has the `deep` kwarg, but Categorical doesn't. I'm pretty sure that this will fix some latent bugs for other EAs, will check and add tests if so.
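The approach the discussion converges on, and which the patch further down applies to `IntervalIndex.copy` and `SparseSeries.copy`, is to keep `deep` out of the array's own `copy()` and let the container decide whether the underlying data gets copied. A simplified sketch with a hypothetical `ToyIndex` container (any object whose `copy()` always returns independent data can stand in for the array):

```python
class ToyIndex:
    """Hypothetical container wrapping an array-like that has a no-argument copy()."""

    def __init__(self, array):
        self._data = array

    def copy(self, deep=False):
        array = self._data
        if deep:
            # The wrapped array's copy() takes no arguments and is expected
            # to return new, independent data.
            array = array.copy()
        return type(self)(array)


idx = ToyIndex([1, 2, 3])          # a plain list stands in for the array here
assert idx.copy(deep=True)._data is not idx._data
assert idx.copy(deep=False)._data is idx._data
```

This keeps the `deep` semantics in one place (the Series/Index layer) rather than asking every ExtensionArray author to reimplement them.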
2019-06-27T18:34:25Z
<patch> diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst --- a/doc/source/whatsnew/v0.25.0.rst +++ b/doc/source/whatsnew/v0.25.0.rst @@ -804,6 +804,7 @@ ExtensionArray - Bug in :func:`factorize` when passing an ``ExtensionArray`` with a custom ``na_sentinel`` (:issue:`25696`). - :meth:`Series.count` miscounts NA values in ExtensionArrays (:issue:`26835`) +- Keyword argument ``deep`` has been removed from :method:`ExtensionArray.copy` (:issue:`27083`) Other ^^^^^ diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py --- a/pandas/core/arrays/base.py +++ b/pandas/core/arrays/base.py @@ -820,15 +820,10 @@ def take(self, indices, allow_fill=False, fill_value=None): # pandas.api.extensions.take raise AbstractMethodError(self) - def copy(self, deep: bool = False) -> ABCExtensionArray: + def copy(self) -> ABCExtensionArray: """ Return a copy of the array. - Parameters - ---------- - deep : bool, default False - Also copy the underlying data backing this array. - Returns ------- ExtensionArray diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py --- a/pandas/core/arrays/datetimelike.py +++ b/pandas/core/arrays/datetimelike.py @@ -605,7 +605,7 @@ def _concat_same_type(cls, to_concat): values = np.concatenate([x.asi8 for x in to_concat]) return cls(values, dtype=dtype) - def copy(self, deep=False): + def copy(self): values = self.asi8.copy() return type(self)._simple_new(values, dtype=self.dtype, freq=self.freq) diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py --- a/pandas/core/arrays/integer.py +++ b/pandas/core/arrays/integer.py @@ -1,4 +1,3 @@ -import copy import sys from typing import Type import warnings @@ -375,14 +374,10 @@ def take(self, indexer, allow_fill=False, fill_value=None): return type(self)(result, mask, copy=False) - def copy(self, deep=False): + def copy(self): data, mask = self._data, self._mask - if deep: - data = copy.deepcopy(data) - mask = copy.deepcopy(mask) - else: - data = data.copy() - mask = mask.copy() + data = data.copy() + mask = mask.copy() return type(self)(data, mask, copy=False) def __setitem__(self, key, value): diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py --- a/pandas/core/arrays/interval.py +++ b/pandas/core/arrays/interval.py @@ -680,21 +680,16 @@ def _shallow_copy(self, left=None, right=None, closed=None): return self._simple_new( left, right, closed=closed, verify_integrity=False) - def copy(self, deep=False): + def copy(self): """ Return a copy of the array. - Parameters - ---------- - deep : bool, default False - Also copy the underlying data backing this array. - Returns ------- IntervalArray """ - left = self.left.copy(deep=True) if deep else self.left - right = self.right.copy(deep=True) if deep else self.right + left = self.left.copy(deep=True) + right = self.right.copy(deep=True) closed = self.closed # TODO: Could skip verify_integrity here. 
return type(self).from_arrays(left, right, closed=closed) diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py --- a/pandas/core/arrays/numpy_.py +++ b/pandas/core/arrays/numpy_.py @@ -285,7 +285,7 @@ def take(self, indices, allow_fill=False, fill_value=None): fill_value=fill_value) return type(self)(result) - def copy(self, deep=False): + def copy(self): return type(self)(self._ndarray.copy()) def _values_for_argsort(self): diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py --- a/pandas/core/arrays/sparse.py +++ b/pandas/core/arrays/sparse.py @@ -1262,12 +1262,8 @@ def searchsorted(self, v, side="left", sorter=None): v, side, sorter ) - def copy(self, deep=False): - if deep: - values = self.sp_values.copy() - else: - values = self.sp_values - + def copy(self): + values = self.sp_values.copy() return self._simple_new(values, self.sp_index, self.dtype) @classmethod diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py --- a/pandas/core/indexes/interval.py +++ b/pandas/core/indexes/interval.py @@ -429,7 +429,9 @@ def __reduce__(self): @Appender(_index_shared_docs['copy']) def copy(self, deep=False, name=None): - array = self._data.copy(deep=deep) + array = self._data + if deep: + array = array.copy() attributes = self._get_attributes_dict() if name is not None: attributes.update(name=name) diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -2212,7 +2212,7 @@ def copy(self, deep=True): """ copy constructor """ values = self.values if deep: - values = values.copy(deep=True) + values = values.copy() return self.make_block_same_class(values) def get_values(self, dtype=None): diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py --- a/pandas/core/internals/construction.py +++ b/pandas/core/internals/construction.py @@ -199,8 +199,10 @@ def init_dict(data, index, columns, dtype=None): arrays = (com.maybe_iterable_to_list(data[k]) for k in keys) # GH#24096 need copy to be deep for datetime64tz case # TODO: See if we can avoid these copies + arrays = [arr if not isinstance(arr, ABCIndexClass) else arr._data + for arr in arrays] arrays = [arr if not is_datetime64tz_dtype(arr) else - arr.copy(deep=True) for arr in arrays] + arr.copy() for arr in arrays] return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype) diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py --- a/pandas/core/sparse/series.py +++ b/pandas/core/sparse/series.py @@ -450,7 +450,9 @@ def copy(self, deep=True): """ # TODO: https://github.com/pandas-dev/pandas/issues/22314 # We skip the block manager till that is resolved. - new_data = self.values.copy(deep=deep) + new_data = self.values + if deep: + new_data = new_data.copy() return self._constructor(new_data, sparse_index=self.sp_index, fill_value=self.fill_value, index=self.index.copy(), </patch>
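As a rough usage note (hedged, since the exact behavior depends on the pandas version in use): with this patch an extension array's `copy()` takes no `deep` argument and is expected to return data independent of the original. A small independence check of the kind the change implies, using the public `Categorical` API:

```python
import pandas as pd

cat = pd.Categorical(["a", "b", "a"])

cat2 = cat.copy()      # no `deep` argument on the array itself
cat2[0] = "b"          # "b" is already a category, so assignment is allowed

assert cat[0] == "a"   # the original is untouched: the copy owns its codes
```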
[]
[]
conda__conda-2355
You will be provided with a partial code base and an issue statement explaining a problem to resolve.

<issue>
BeeGFS hard-links
[BeeGFS](http://www.beegfs.com), a parallel cluster file system, [does not support](https://groups.google.com/forum/#!topic/fhgfs-user/cTJcqGZceVA) `hard-links` between files in different directories. Depending on the [configuration](https://groups.google.com/forum/#!topic/fhgfs-user/pvQSo0QWicw), either an error is issued, or a `symbolic-link` is created.

If a `symbolic-link` is created instead of a `hard-link`, it can cause problems, for example:

```
pkgs/bin/mpicc
pkgs/lib/libopen-pal.so
envs/bin/mpicc
envs/lib/libopen-pal.so
```

Here, when `envs/bin/mpicc` is executed, it is actually `pkgs/bin/mpicc` that is executed, and the library `$PREFIX/../lib/libopen-pal.so` actually loaded is `pkgs/lib/libopen-pal.so`, which is different from `envs/lib/libopen-pal.so` for which conda has fixed the hard-coded prefix path, and as a final consequence `mpicc` fails to find its configuration file.

#805 is another (closed) issue related to `BeeGFS`.

Would we need a new conda option to turn hard-links into copies?
</issue>

<code>
[start of README.rst]
1 .. NOTE: This file serves both as the README on GitHub and the index.html for
2 conda.pydata.org. If you update this file, be sure to cd to the web
3 directory and run ``make html; make live``
4
5 =====
6 Conda
7 =====
8
9 .. image:: https://travis-ci.org/conda/conda.svg?branch=master
10 :alt: Travis-CI Build Status
11 :target: https://travis-ci.org/conda/conda
12
13 .. image:: https://ci.appveyor.com/api/projects/status/9k80kxa9gra9cjr9/branch/master?svg=true
14 :alt: Appveyor Build Status
15 :target: https://ci.appveyor.com/project/ironmancio54716/conda/branch/master
16
17 .. image:: https://codecov.io/github/conda/conda/coverage.svg?branch=master
18 :alt: Codecov Status
19 :target: https://codecov.io/github/conda/conda?branch=master
20
21 .. image:: https://scrutinizer-ci.com/g/conda/conda/badges/quality-score.png?b=master
22 :alt: Scrutinizer Code Quality
23 :target: https://scrutinizer-ci.com/g/conda/conda/?branch=master
24
25 .. image:: https://www.quantifiedcode.com/api/v1/project/81377831ebe54def8b31c55a4b5b4cb0/badge.svg
26 :alt: Quantified Code
27 :target: https://www.quantifiedcode.com/app/project/81377831ebe54def8b31c55a4b5b4cb0
28
29 .. image:: https://badges.gitter.im/conda/conda.svg
30 :alt: Join the chat at https://gitter.im/conda/conda
31 :target: https://gitter.im/conda/conda?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
32
33 Conda is a cross-platform, Python-agnostic binary package manager. It is the
34 package manager used by `Anaconda
35 <http://docs.continuum.io/anaconda/index.html>`_ installations, but it may be
36 used for other systems as well. Conda makes environments first-class
37 citizens, making it easy to create independent environments even for C
38 libraries. Conda is written entirely in Python, and is BSD licensed open
39 source.
40
41 Conda is enhanced by organizations, tools, and repositories created and managed by the amazing members of the conda community. Some of them can be found `here <https://github.com/conda/conda/wiki/Conda-Community>`_.
42
43
44 Installation
45 ------------
46
47 Conda is a part of the `Anaconda distribution <https://store.continuum.io/cshop/anaconda/>`_. You can also download a
48 minimal installation that only includes conda and its dependencies, called
49 `Miniconda <http://conda.pydata.org/miniconda.html>`_.
50 51 52 Getting Started 53 --------------- 54 55 If you install Anaconda, you will already have hundreds of packages 56 installed. You can see what packages are installed by running 57 58 .. code-block:: bash 59 60 $ conda list 61 62 to see all the packages that are available, use 63 64 .. code-block:: bash 65 66 $ conda search 67 68 and to install a package, use 69 70 .. code-block:: bash 71 72 $ conda install <package-name> 73 74 75 The real power of conda comes from its ability to manage environments. In 76 conda, an environment can be thought of as a completely separate installation. 77 Conda installs packages into environments efficiently using `hard links 78 <http://en.wikipedia.org/wiki/Hard_links>`_ by default when it is possible, so 79 environments are space efficient, and take seconds to create. 80 81 The default environment, which ``conda`` itself is installed into is called 82 ``root``. To create another environment, use the ``conda create`` 83 command. For instance, to create an environment with the IPython notebook and 84 NumPy 1.6, which is older than the version that comes with Anaconda by 85 default, you would run 86 87 .. code-block:: bash 88 89 $ conda create -n numpy16 ipython-notebook numpy=1.6 90 91 This creates an environment called ``numpy16`` with the latest version of 92 the IPython notebook, NumPy 1.6, and their dependencies. 93 94 We can now activate this environment, use 95 96 .. code-block:: bash 97 98 # On Linux and Mac OS X 99 $ source activate numpy16 100 101 # On Windows 102 > activate numpy16 103 104 This puts the bin directory of the ``numpy16`` environment in the front of the 105 ``PATH``, and sets it as the default environment for all subsequent conda commands. 106 107 To go back to the root environment, use 108 109 .. code-block:: bash 110 111 # On Linux and Mac OS X 112 $ source deactivate 113 114 # On Windows 115 > deactivate 116 117 118 Building Your Own Packages 119 -------------------------- 120 121 You can easily build your own packages for conda, and upload them 122 to `anaconda.org <https://anaconda.org>`_, a free service for hosting 123 packages for conda, as well as other package managers. 124 To build a package, create a recipe. 125 See http://github.com/conda/conda-recipes for many example recipes, and 126 http://docs.continuum.io/conda/build.html for documentation on how to build 127 recipes. 128 129 To upload to anaconda.org, create an account. Then, install the 130 anaconda-client and login 131 132 .. code-block:: bash 133 134 $ conda install anaconda-client 135 $ anaconda login 136 137 Then, after you build your recipe 138 139 .. code-block:: bash 140 141 $ conda build <recipe-dir> 142 143 you will be prompted to upload to anaconda.org. 144 145 To add your anaconda.org channel, or the channel of others to conda so 146 that ``conda install`` will find and install their packages, run 147 148 .. code-block:: bash 149 150 $ conda config --add channels https://conda.anaconda.org/username 151 152 (replacing ``username`` with the user name of the person whose channel you want 153 to add). 154 155 Getting Help 156 ------------ 157 158 The documentation for conda is at http://conda.pydata.org/docs/. You can 159 subscribe to the `conda mailing list 160 <https://groups.google.com/a/continuum.io/forum/#!forum/conda>`_. The source 161 code and issue tracker for conda are on `GitHub <https://github.com/conda/conda>`_. 162 163 Contributing 164 ------------ 165 166 Contributions to conda are welcome. 
Just fork the GitHub repository and send a 167 pull request. 168 169 To develop on conda, the easiest way is to use ``python setup.py develop`` in your 170 root conda environment. This will install a link to the local conda source 171 code, so that any change you make to conda will be instantly available. To undo 172 this, run ``python setup.py develop -u``. If you are worried about breaking 173 your conda installation, you can install a separate instance of `Miniconda 174 <http://conda.pydata.org/miniconda.html>`_ and work off it. This is also the 175 only way to test conda in both Python 2 and Python 3, as conda can only be 176 installed into a root environment. 177 178 Run the conda tests by ``conda install pytest`` and then running ``py.test`` 179 in the conda directory. The tests are also run by Travis CI when you make a 180 pull request. 181 [end of README.rst] [start of conda/compat.py] 1 """ 2 For compatibility between Python versions. 3 Taken mostly from six.py by Benjamin Peterson. 4 """ 5 6 import sys 7 import types 8 import os 9 10 # True if we are running on Python 3. 11 PY3 = sys.version_info[0] == 3 12 13 if PY3: 14 string_types = str, 15 integer_types = int, 16 class_types = type, 17 text_type = str 18 binary_type = bytes 19 input = input 20 def lchmod(path, mode): 21 try: 22 os.chmod(path, mode, follow_symlinks=False) 23 except (TypeError, NotImplementedError, SystemError): 24 # On systems that don't allow permissions on symbolic links, skip 25 # links entirely. 26 if not os.path.islink(path): 27 os.chmod(path, mode) 28 import configparser 29 from io import StringIO 30 import urllib.parse as urlparse 31 from urllib.parse import quote as urllib_quote 32 from itertools import zip_longest 33 from math import log2, ceil 34 from shlex import quote 35 from tempfile import TemporaryDirectory 36 range = range 37 zip = zip 38 else: 39 import ConfigParser as configparser 40 from cStringIO import StringIO 41 import urlparse 42 from urllib import quote as urllib_quote 43 string_types = basestring, 44 integer_types = (int, long) 45 class_types = (type, types.ClassType) 46 text_type = unicode 47 binary_type = str 48 input = raw_input 49 try: 50 lchmod = os.lchmod 51 except AttributeError: 52 def lchmod(path, mode): 53 # On systems that don't allow permissions on symbolic links, skip 54 # links entirely. 55 if not os.path.islink(path): 56 os.chmod(path, mode) 57 from itertools import izip_longest as zip_longest 58 from math import log 59 def log2(x): 60 return log(x, 2) 61 def ceil(x): 62 from math import ceil 63 return int(ceil(x)) 64 from pipes import quote 65 66 # Modified from http://hg.python.org/cpython/file/3.3/Lib/tempfile.py. Don't 67 # use the 3.4 one. It uses the new weakref.finalize feature. 68 import shutil as _shutil 69 import warnings as _warnings 70 import os as _os 71 from tempfile import mkdtemp 72 range = xrange 73 from itertools import izip as zip 74 75 class TemporaryDirectory(object): 76 """Create and return a temporary directory. This has the same 77 behavior as mkdtemp but can be used as a context manager. For 78 example: 79 80 with TemporaryDirectory() as tmpdir: 81 ... 82 83 Upon exiting the context, the directory and everything contained 84 in it are removed. 
85 """ 86 87 # Handle mkdtemp raising an exception 88 name = None 89 _closed = False 90 91 def __init__(self, suffix="", prefix='tmp', dir=None): 92 self.name = mkdtemp(suffix, prefix, dir) 93 94 def __repr__(self): 95 return "<{} {!r}>".format(self.__class__.__name__, self.name) 96 97 def __enter__(self): 98 return self.name 99 100 def cleanup(self, _warn=False, _warnings=_warnings): 101 if self.name and not self._closed: 102 try: 103 _shutil.rmtree(self.name) 104 except (TypeError, AttributeError) as ex: 105 if "None" not in '%s' % (ex,): 106 raise 107 self._rmtree(self.name) 108 self._closed = True 109 if _warn and _warnings.warn: 110 _warnings.warn("Implicitly cleaning up {!r}".format(self), 111 ResourceWarning) 112 113 def __exit__(self, exc, value, tb): 114 self.cleanup() 115 116 def __del__(self): 117 # Issue a ResourceWarning if implicit cleanup needed 118 self.cleanup(_warn=True) 119 120 def _rmtree(self, path, _OSError=OSError, _sep=_os.path.sep, 121 _listdir=_os.listdir, _remove=_os.remove, _rmdir=_os.rmdir): 122 # Essentially a stripped down version of shutil.rmtree. We can't 123 # use globals because they may be None'ed out at shutdown. 124 if not isinstance(path, str): 125 _sep = _sep.encode() 126 try: 127 for name in _listdir(path): 128 fullname = path + _sep + name 129 try: 130 _remove(fullname) 131 except _OSError: 132 self._rmtree(fullname) 133 _rmdir(path) 134 except _OSError: 135 pass 136 137 if PY3: 138 _iterkeys = "keys" 139 _itervalues = "values" 140 _iteritems = "items" 141 else: 142 _iterkeys = "iterkeys" 143 _itervalues = "itervalues" 144 _iteritems = "iteritems" 145 146 147 def iterkeys(d): 148 """Return an iterator over the keys of a dictionary.""" 149 return iter(getattr(d, _iterkeys)()) 150 151 def itervalues(d): 152 """Return an iterator over the values of a dictionary.""" 153 return iter(getattr(d, _itervalues)()) 154 155 def iteritems(d): 156 """Return an iterator over the (key, value) pairs of a dictionary.""" 157 return iter(getattr(d, _iteritems)()) 158 159 def get_http_value(u, key): 160 if PY3: 161 return u.headers.get(key) 162 else: 163 return u.info().getheader(key) 164 165 def with_metaclass(meta, *bases): 166 """ 167 Create a base class with a metaclass. 168 169 For example, if you have the metaclass 170 171 >>> class Meta(type): 172 ... pass 173 174 Use this as the metaclass by doing 175 176 >>> from sympy.core.compatibility import with_metaclass 177 >>> class MyClass(with_metaclass(Meta, object)): 178 ... pass 179 180 This is equivalent to the Python 2:: 181 182 class MyClass(object): 183 __metaclass__ = Meta 184 185 or Python 3:: 186 187 class MyClass(object, metaclass=Meta): 188 pass 189 190 That is, the first argument is the metaclass, and the remaining arguments 191 are the base classes. Note that if the base class is just ``object``, you 192 may omit it. 193 194 >>> MyClass.__mro__ 195 (<class 'MyClass'>, <... 'object'>) 196 >>> type(MyClass) 197 <class 'Meta'> 198 199 """ 200 class metaclass(meta): 201 __call__ = type.__call__ 202 __init__ = type.__init__ 203 def __new__(cls, name, this_bases, d): 204 if this_bases is None: 205 return type.__new__(cls, name, (), d) 206 return meta(name, bases, d) 207 return metaclass("NewBase", None, {}) 208 [end of conda/compat.py] [start of conda/install.py] 1 # (c) 2012-2014 Continuum Analytics, Inc. / http://continuum.io 2 # All Rights Reserved 3 # 4 # conda is distributed under the terms of the BSD 3-clause license. 5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause. 
6 ''' This module contains: 7 * all low-level code for extracting, linking and unlinking packages 8 * a very simple CLI 9 10 These API functions have argument names referring to: 11 12 dist: canonical package name (e.g. 'numpy-1.6.2-py26_0') 13 14 pkgs_dir: the "packages directory" (e.g. '/opt/anaconda/pkgs' or 15 '/home/joe/envs/.pkgs') 16 17 prefix: the prefix of a particular environment, which may also 18 be the "default" environment (i.e. sys.prefix), 19 but is otherwise something like '/opt/anaconda/envs/foo', 20 or even any prefix, e.g. '/home/joe/myenv' 21 22 Also, this module is directly invoked by the (self extracting (sfx)) tarball 23 installer to create the initial environment, therefore it needs to be 24 standalone, i.e. not import any other parts of `conda` (only depend on 25 the standard library). 26 ''' 27 28 from __future__ import print_function, division, absolute_import 29 30 import errno 31 import functools 32 import json 33 import logging 34 import os 35 import shlex 36 import shutil 37 import stat 38 import subprocess 39 import sys 40 import tarfile 41 import time 42 import traceback 43 import re 44 from os.path import (abspath, basename, dirname, isdir, isfile, islink, 45 join, relpath, normpath) 46 47 try: 48 from conda.lock import Locked 49 except ImportError: 50 # Make sure this still works as a standalone script for the Anaconda 51 # installer. 52 class Locked(object): 53 def __init__(self, *args, **kwargs): 54 pass 55 56 def __enter__(self): 57 pass 58 59 def __exit__(self, exc_type, exc_value, traceback): 60 pass 61 62 try: 63 from conda.utils import win_path_to_unix 64 except ImportError: 65 def win_path_to_unix(path, root_prefix=""): 66 """Convert a path or ;-separated string of paths into a unix representation 67 68 Does not add cygdrive. If you need that, set root_prefix to "/cygdrive" 69 """ 70 path_re = '(?<![:/^a-zA-Z])([a-zA-Z]:[\/\\\\]+(?:[^:*?"<>|]+[\/\\\\]+)*[^:*?"<>|;\/\\\\]+?(?![a-zA-Z]:))' # noqa 71 72 def translation(found_path): 73 found = found_path.group(1).replace("\\", "/").replace(":", "") 74 return root_prefix + "/" + found 75 return re.sub(path_re, translation, path).replace(";/", ":/") 76 77 on_win = bool(sys.platform == "win32") 78 79 if on_win: 80 import ctypes 81 from ctypes import wintypes 82 83 CreateHardLink = ctypes.windll.kernel32.CreateHardLinkW 84 CreateHardLink.restype = wintypes.BOOL 85 CreateHardLink.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR, 86 wintypes.LPVOID] 87 try: 88 CreateSymbolicLink = ctypes.windll.kernel32.CreateSymbolicLinkW 89 CreateSymbolicLink.restype = wintypes.BOOL 90 CreateSymbolicLink.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR, 91 wintypes.DWORD] 92 except AttributeError: 93 CreateSymbolicLink = None 94 95 def win_hard_link(src, dst): 96 "Equivalent to os.link, using the win32 CreateHardLink call." 97 if not CreateHardLink(dst, src, None): 98 raise OSError('win32 hard link failed') 99 100 def win_soft_link(src, dst): 101 "Equivalent to os.symlink, using the win32 CreateSymbolicLink call." 102 if CreateSymbolicLink is None: 103 raise OSError('win32 soft link not supported') 104 if not CreateSymbolicLink(dst, src, isdir(src)): 105 raise OSError('win32 soft link failed') 106 107 def win_conda_bat_redirect(src, dst, shell): 108 """Special function for Windows XP where the `CreateSymbolicLink` 109 function is not available. 110 111 Simply creates a `.bat` file at `dst` which calls `src` together with 112 all command line arguments. 113 114 Works of course only with callable files, e.g. 
`.bat` or `.exe` files. 115 """ 116 from conda.utils import shells 117 try: 118 os.makedirs(os.path.dirname(dst)) 119 except OSError as exc: # Python >2.5 120 if exc.errno == errno.EEXIST and os.path.isdir(os.path.dirname(dst)): 121 pass 122 else: 123 raise 124 125 if 'cmd.exe' in shell.lower(): 126 # bat file redirect 127 with open(dst+'.bat', 'w') as f: 128 f.write('@echo off\n"%s" %%*\n' % src) 129 130 elif 'powershell' in shell.lower(): 131 # TODO: probably need one here for powershell at some point 132 pass 133 134 else: 135 # This one is for bash/cygwin/msys 136 with open(dst, "w") as f: 137 f.write("#!/bin/sh \n") 138 if src.endswith("conda"): 139 f.write('%s "$@"' % shells[shell]['path_to'](src+".exe")) 140 else: 141 f.write('source %s "$@"' % shells[shell]['path_to'](src)) 142 # Make the new file executable 143 # http://stackoverflow.com/a/30463972/1170370 144 mode = os.stat(dst).st_mode 145 mode |= (mode & 292) >> 2 # copy R bits to X 146 os.chmod(dst, mode) 147 148 log = logging.getLogger(__name__) 149 stdoutlog = logging.getLogger('stdoutlog') 150 151 class NullHandler(logging.Handler): 152 """ Copied from Python 2.7 to avoid getting 153 `No handlers could be found for logger "patch"` 154 http://bugs.python.org/issue16539 155 """ 156 157 def handle(self, record): 158 pass 159 160 def emit(self, record): 161 pass 162 163 def createLock(self): 164 self.lock = None 165 166 log.addHandler(NullHandler()) 167 168 LINK_HARD = 1 169 LINK_SOFT = 2 170 LINK_COPY = 3 171 link_name_map = { 172 LINK_HARD: 'hard-link', 173 LINK_SOFT: 'soft-link', 174 LINK_COPY: 'copy', 175 } 176 177 def _link(src, dst, linktype=LINK_HARD): 178 if linktype == LINK_HARD: 179 if on_win: 180 win_hard_link(src, dst) 181 else: 182 os.link(src, dst) 183 elif linktype == LINK_SOFT: 184 if on_win: 185 win_soft_link(src, dst) 186 else: 187 os.symlink(src, dst) 188 elif linktype == LINK_COPY: 189 # copy relative symlinks as symlinks 190 if not on_win and islink(src) and not os.readlink(src).startswith('/'): 191 os.symlink(os.readlink(src), dst) 192 else: 193 shutil.copy2(src, dst) 194 else: 195 raise Exception("Did not expect linktype=%r" % linktype) 196 197 198 def _remove_readonly(func, path, excinfo): 199 os.chmod(path, stat.S_IWRITE) 200 func(path) 201 202 def warn_failed_remove(function, path, exc_info): 203 if exc_info[1].errno == errno.EACCES: 204 log.warn("Cannot remove, permission denied: {0}".format(path)) 205 elif exc_info[1].errno == errno.ENOTEMPTY: 206 log.warn("Cannot remove, not empty: {0}".format(path)) 207 else: 208 log.warn("Cannot remove, unknown reason: {0}".format(path)) 209 210 def rm_rf(path, max_retries=5, trash=True): 211 """ 212 Completely delete path 213 214 max_retries is the number of times to retry on failure. The default is 215 5. This only applies to deleting a directory. 216 217 If removing path fails and trash is True, files will be moved to the trash directory. 218 """ 219 if islink(path) or isfile(path): 220 # Note that we have to check if the destination is a link because 221 # exists('/path/to/dead-link') will return False, although 222 # islink('/path/to/dead-link') is True. 
223 try: 224 os.unlink(path) 225 except (OSError, IOError): 226 log.warn("Cannot remove, permission denied: {0}".format(path)) 227 228 elif isdir(path): 229 for i in range(max_retries): 230 try: 231 shutil.rmtree(path, ignore_errors=False, onerror=warn_failed_remove) 232 return 233 except OSError as e: 234 msg = "Unable to delete %s\n%s\n" % (path, e) 235 if on_win: 236 try: 237 shutil.rmtree(path, onerror=_remove_readonly) 238 return 239 except OSError as e1: 240 msg += "Retry with onerror failed (%s)\n" % e1 241 242 p = subprocess.Popen(['cmd', '/c', 'rd', '/s', '/q', path], 243 stdout=subprocess.PIPE, 244 stderr=subprocess.PIPE) 245 (stdout, stderr) = p.communicate() 246 if p.returncode != 0: 247 msg += '%s\n%s\n' % (stdout, stderr) 248 else: 249 if not isdir(path): 250 return 251 252 if trash: 253 try: 254 move_path_to_trash(path) 255 if not isdir(path): 256 return 257 except OSError as e2: 258 raise 259 msg += "Retry with onerror failed (%s)\n" % e2 260 261 log.debug(msg + "Retrying after %s seconds..." % i) 262 time.sleep(i) 263 # Final time. pass exceptions to caller. 264 shutil.rmtree(path, ignore_errors=False, onerror=warn_failed_remove) 265 266 def rm_empty_dir(path): 267 """ 268 Remove the directory `path` if it is a directory and empty. 269 If the directory does not exist or is not empty, do nothing. 270 """ 271 try: 272 os.rmdir(path) 273 except OSError: # directory might not exist or not be empty 274 pass 275 276 277 def yield_lines(path): 278 for line in open(path): 279 line = line.strip() 280 if not line or line.startswith('#'): 281 continue 282 yield line 283 284 285 prefix_placeholder = ('/opt/anaconda1anaconda2' 286 # this is intentionally split into parts, 287 # such that running this program on itself 288 # will leave it unchanged 289 'anaconda3') 290 def read_has_prefix(path): 291 """ 292 reads `has_prefix` file and return dict mapping filenames to 293 tuples(placeholder, mode) 294 """ 295 res = {} 296 try: 297 for line in yield_lines(path): 298 try: 299 placeholder, mode, f = [x.strip('"\'') for x in 300 shlex.split(line, posix=False)] 301 res[f] = (placeholder, mode) 302 except ValueError: 303 res[line] = (prefix_placeholder, 'text') 304 except IOError: 305 pass 306 return res 307 308 class PaddingError(Exception): 309 pass 310 311 def binary_replace(data, a, b): 312 """ 313 Perform a binary replacement of `data`, where the placeholder `a` is 314 replaced with `b` and the remaining string is padded with null characters. 315 All input arguments are expected to be bytes objects. 
316 """ 317 318 def replace(match): 319 occurances = match.group().count(a) 320 padding = (len(a) - len(b))*occurances 321 if padding < 0: 322 raise PaddingError(a, b, padding) 323 return match.group().replace(a, b) + b'\0' * padding 324 pat = re.compile(re.escape(a) + b'([^\0]*?)\0') 325 res = pat.sub(replace, data) 326 assert len(res) == len(data) 327 return res 328 329 def update_prefix(path, new_prefix, placeholder=prefix_placeholder, 330 mode='text'): 331 if on_win and (placeholder != prefix_placeholder) and ('/' in placeholder): 332 # original prefix uses unix-style path separators 333 # replace with unix-style path separators 334 new_prefix = new_prefix.replace('\\', '/') 335 336 path = os.path.realpath(path) 337 with open(path, 'rb') as fi: 338 data = fi.read() 339 if mode == 'text': 340 new_data = data.replace(placeholder.encode('utf-8'), 341 new_prefix.encode('utf-8')) 342 elif mode == 'binary': 343 new_data = binary_replace(data, placeholder.encode('utf-8'), 344 new_prefix.encode('utf-8')) 345 else: 346 sys.exit("Invalid mode:" % mode) 347 348 if new_data == data: 349 return 350 st = os.lstat(path) 351 # Remove file before rewriting to avoid destroying hard-linked cache 352 os.remove(path) 353 with open(path, 'wb') as fo: 354 fo.write(new_data) 355 os.chmod(path, stat.S_IMODE(st.st_mode)) 356 357 358 def name_dist(dist): 359 return dist.rsplit('-', 2)[0] 360 361 362 def create_meta(prefix, dist, info_dir, extra_info): 363 """ 364 Create the conda metadata, in a given prefix, for a given package. 365 """ 366 # read info/index.json first 367 with open(join(info_dir, 'index.json')) as fi: 368 meta = json.load(fi) 369 # add extra info 370 meta.update(extra_info) 371 # write into <env>/conda-meta/<dist>.json 372 meta_dir = join(prefix, 'conda-meta') 373 if not isdir(meta_dir): 374 os.makedirs(meta_dir) 375 with open(join(meta_dir, dist + '.json'), 'w') as fo: 376 json.dump(meta, fo, indent=2, sort_keys=True) 377 378 379 def mk_menus(prefix, files, remove=False): 380 """ 381 Create cross-platform menu items (e.g. Windows Start Menu) 382 383 Passes all menu config files %PREFIX%/Menu/*.json to ``menuinst.install``. 384 ``remove=True`` will remove the menu items. 385 """ 386 menu_files = [f for f in files 387 if (f.lower().startswith('menu/') and 388 f.lower().endswith('.json'))] 389 if not menu_files: 390 return 391 elif basename(abspath(prefix)).startswith('_'): 392 logging.warn("Environment name starts with underscore '_'. 
" 393 "Skipping menu installation.") 394 return 395 396 try: 397 import menuinst 398 except: 399 logging.warn("Menuinst could not be imported:") 400 logging.warn(traceback.format_exc()) 401 return 402 403 for f in menu_files: 404 try: 405 menuinst.install(join(prefix, f), remove, prefix) 406 except: 407 stdoutlog.error("menuinst Exception:") 408 stdoutlog.error(traceback.format_exc()) 409 410 411 def run_script(prefix, dist, action='post-link', env_prefix=None): 412 """ 413 call the post-link (or pre-unlink) script, and return True on success, 414 False on failure 415 """ 416 path = join(prefix, 'Scripts' if on_win else 'bin', '.%s-%s.%s' % ( 417 name_dist(dist), 418 action, 419 'bat' if on_win else 'sh')) 420 if not isfile(path): 421 return True 422 if on_win: 423 try: 424 args = [os.environ['COMSPEC'], '/c', path] 425 except KeyError: 426 return False 427 else: 428 shell_path = '/bin/sh' if 'bsd' in sys.platform else '/bin/bash' 429 args = [shell_path, path] 430 env = os.environ 431 env['ROOT_PREFIX'] = sys.prefix 432 env['PREFIX'] = str(env_prefix or prefix) 433 env['PKG_NAME'], env['PKG_VERSION'], env['PKG_BUILDNUM'] = str(dist).rsplit('-', 2) 434 if action == 'pre-link': 435 env['SOURCE_DIR'] = str(prefix) 436 try: 437 subprocess.check_call(args, env=env) 438 except subprocess.CalledProcessError: 439 return False 440 return True 441 442 443 def read_url(pkgs_dir, dist): 444 try: 445 data = open(join(pkgs_dir, 'urls.txt')).read() 446 urls = data.split() 447 for url in urls[::-1]: 448 if url.endswith('/%s.tar.bz2' % dist): 449 return url 450 except IOError: 451 pass 452 return None 453 454 455 def read_icondata(source_dir): 456 import base64 457 458 try: 459 data = open(join(source_dir, 'info', 'icon.png'), 'rb').read() 460 return base64.b64encode(data).decode('utf-8') 461 except IOError: 462 pass 463 return None 464 465 def read_no_link(info_dir): 466 res = set() 467 for fn in 'no_link', 'no_softlink': 468 try: 469 res.update(set(yield_lines(join(info_dir, fn)))) 470 except IOError: 471 pass 472 return res 473 474 # Should this be an API function? 475 476 def symlink_conda(prefix, root_dir, shell): 477 # do not symlink root env - this clobbers activate incorrectly. 478 if normpath(prefix) == normpath(sys.prefix): 479 return 480 if on_win: 481 where = 'Scripts' 482 symlink_fn = functools.partial(win_conda_bat_redirect, shell=shell) 483 else: 484 where = 'bin' 485 symlink_fn = os.symlink 486 if not isdir(join(prefix, where)): 487 os.makedirs(join(prefix, where)) 488 symlink_conda_hlp(prefix, root_dir, where, symlink_fn) 489 490 491 def symlink_conda_hlp(prefix, root_dir, where, symlink_fn): 492 scripts = ["conda", "activate", "deactivate"] 493 prefix_where = join(prefix, where) 494 if not isdir(prefix_where): 495 os.makedirs(prefix_where) 496 for f in scripts: 497 root_file = join(root_dir, where, f) 498 prefix_file = join(prefix_where, f) 499 # try to kill stale links if they exist 500 if os.path.lexists(prefix_file): 501 os.remove(prefix_file) 502 # if they're in use, they won't be killed. Skip making new symlink. 
503 if not os.path.lexists(prefix_file): 504 symlink_fn(root_file, prefix_file) 505 506 507 # ========================== begin API functions ========================= 508 509 def try_hard_link(pkgs_dir, prefix, dist): 510 src = join(pkgs_dir, dist, 'info', 'index.json') 511 dst = join(prefix, '.tmp-%s' % dist) 512 assert isfile(src), src 513 assert not isfile(dst), dst 514 try: 515 if not isdir(prefix): 516 os.makedirs(prefix) 517 _link(src, dst, LINK_HARD) 518 return True 519 except OSError: 520 return False 521 finally: 522 rm_rf(dst) 523 rm_empty_dir(prefix) 524 525 # ------- package cache ----- fetched 526 def is_fetched(pkgs_dir, dist): 527 return isfile(join(pkgs_dir, dist + '.tar.bz2')) 528 529 def rm_fetched(pkgs_dir, dist): 530 with Locked(pkgs_dir): 531 path = join(pkgs_dir, dist + '.tar.bz2') 532 rm_rf(path) 533 534 # ------- package cache ----- extracted 535 536 def extracted(pkgs_dir): 537 """ 538 return the (set of canonical names) of all extracted packages 539 """ 540 if not isdir(pkgs_dir): 541 return set() 542 return set(dn for dn in os.listdir(pkgs_dir) 543 if (isfile(join(pkgs_dir, dn, 'info', 'files')) and 544 isfile(join(pkgs_dir, dn, 'info', 'index.json')))) 545 546 def extract(pkgs_dir, dist): 547 """ 548 Extract a package, i.e. make a package available for linkage. We assume 549 that the compressed packages is located in the packages directory. 550 """ 551 with Locked(pkgs_dir): 552 path = join(pkgs_dir, dist) 553 t = tarfile.open(path + '.tar.bz2') 554 t.extractall(path=path) 555 t.close() 556 if sys.platform.startswith('linux') and os.getuid() == 0: 557 # When extracting as root, tarfile will by restore ownership 558 # of extracted files. However, we want root to be the owner 559 # (our implementation of --no-same-owner). 560 for root, dirs, files in os.walk(path): 561 for fn in files: 562 p = join(root, fn) 563 os.lchown(p, 0, 0) 564 565 def is_extracted(pkgs_dir, dist): 566 return (isfile(join(pkgs_dir, dist, 'info', 'files')) and 567 isfile(join(pkgs_dir, dist, 'info', 'index.json'))) 568 569 def rm_extracted(pkgs_dir, dist): 570 with Locked(pkgs_dir): 571 path = join(pkgs_dir, dist) 572 rm_rf(path) 573 574 # ------- linkage of packages 575 576 def linked_data(prefix): 577 """ 578 Return a dictionary of the linked packages in prefix. 579 """ 580 res = {} 581 meta_dir = join(prefix, 'conda-meta') 582 if isdir(meta_dir): 583 for fn in os.listdir(meta_dir): 584 if fn.endswith('.json'): 585 try: 586 with open(join(meta_dir, fn)) as fin: 587 res[fn[:-5]] = json.load(fin) 588 except IOError: 589 pass 590 return res 591 592 def linked(prefix): 593 """ 594 Return the (set of canonical names) of linked packages in prefix. 595 """ 596 meta_dir = join(prefix, 'conda-meta') 597 if not isdir(meta_dir): 598 return set() 599 return set(fn[:-5] for fn in os.listdir(meta_dir) if fn.endswith('.json')) 600 601 # FIXME Functions that begin with `is_` should return True/False 602 def is_linked(prefix, dist): 603 """ 604 Return the install meta-data for a linked package in a prefix, or None 605 if the package is not linked in the prefix. 
606 """ 607 meta_path = join(prefix, 'conda-meta', dist + '.json') 608 try: 609 with open(meta_path) as fi: 610 return json.load(fi) 611 except IOError: 612 return None 613 614 def delete_trash(prefix=None): 615 from conda import config 616 617 for pkg_dir in config.pkgs_dirs: 618 trash_dir = join(pkg_dir, '.trash') 619 try: 620 log.debug("Trying to delete the trash dir %s" % trash_dir) 621 rm_rf(trash_dir, max_retries=1, trash=False) 622 except OSError as e: 623 log.debug("Could not delete the trash dir %s (%s)" % (trash_dir, e)) 624 625 def move_to_trash(prefix, f, tempdir=None): 626 """ 627 Move a file f from prefix to the trash 628 629 tempdir is a deprecated parameter, and will be ignored. 630 631 This function is deprecated in favor of `move_path_to_trash`. 632 """ 633 return move_path_to_trash(join(prefix, f)) 634 635 def move_path_to_trash(path): 636 """ 637 Move a path to the trash 638 """ 639 # Try deleting the trash every time we use it. 640 delete_trash() 641 642 from conda import config 643 644 for pkg_dir in config.pkgs_dirs: 645 import tempfile 646 trash_dir = join(pkg_dir, '.trash') 647 648 try: 649 os.makedirs(trash_dir) 650 except OSError as e1: 651 if e1.errno != errno.EEXIST: 652 continue 653 654 trash_dir = tempfile.mkdtemp(dir=trash_dir) 655 trash_dir = join(trash_dir, relpath(os.path.dirname(path), config.root_dir)) 656 657 try: 658 os.makedirs(trash_dir) 659 except OSError as e2: 660 if e2.errno != errno.EEXIST: 661 continue 662 663 try: 664 shutil.move(path, trash_dir) 665 except OSError as e: 666 log.debug("Could not move %s to %s (%s)" % (path, trash_dir, e)) 667 else: 668 return True 669 670 log.debug("Could not move %s to trash" % path) 671 return False 672 673 def link(pkgs_dir, prefix, dist, linktype=LINK_HARD, index=None): 674 ''' 675 Set up a package in a specified (environment) prefix. We assume that 676 the package has been extracted (using extract() above). 
677 ''' 678 index = index or {} 679 log.debug('pkgs_dir=%r, prefix=%r, dist=%r, linktype=%r' % 680 (pkgs_dir, prefix, dist, linktype)) 681 682 source_dir = join(pkgs_dir, dist) 683 if not run_script(source_dir, dist, 'pre-link', prefix): 684 sys.exit('Error: pre-link failed: %s' % dist) 685 686 info_dir = join(source_dir, 'info') 687 files = list(yield_lines(join(info_dir, 'files'))) 688 has_prefix_files = read_has_prefix(join(info_dir, 'has_prefix')) 689 no_link = read_no_link(info_dir) 690 691 with Locked(prefix), Locked(pkgs_dir): 692 for f in files: 693 src = join(source_dir, f) 694 dst = join(prefix, f) 695 dst_dir = dirname(dst) 696 if not isdir(dst_dir): 697 os.makedirs(dst_dir) 698 if os.path.exists(dst): 699 log.warn("file already exists: %r" % dst) 700 try: 701 os.unlink(dst) 702 except OSError: 703 log.error('failed to unlink: %r' % dst) 704 if on_win: 705 try: 706 move_path_to_trash(dst) 707 except ImportError: 708 # This shouldn't be an issue in the installer anyway 709 pass 710 711 lt = linktype 712 if f in has_prefix_files or f in no_link or islink(src): 713 lt = LINK_COPY 714 try: 715 _link(src, dst, lt) 716 except OSError as e: 717 log.error('failed to link (src=%r, dst=%r, type=%r, error=%r)' % 718 (src, dst, lt, e)) 719 720 if name_dist(dist) == '_cache': 721 return 722 723 for f in sorted(has_prefix_files): 724 placeholder, mode = has_prefix_files[f] 725 try: 726 update_prefix(join(prefix, f), prefix, placeholder, mode) 727 except PaddingError: 728 sys.exit("ERROR: placeholder '%s' too short in: %s\n" % 729 (placeholder, dist)) 730 731 mk_menus(prefix, files, remove=False) 732 733 if not run_script(prefix, dist, 'post-link'): 734 sys.exit("Error: post-link failed for: %s" % dist) 735 736 # Make sure the script stays standalone for the installer 737 try: 738 from conda.config import remove_binstar_tokens 739 except ImportError: 740 # There won't be any binstar tokens in the installer anyway 741 def remove_binstar_tokens(url): 742 return url 743 744 meta_dict = index.get(dist + '.tar.bz2', {}) 745 meta_dict['url'] = read_url(pkgs_dir, dist) 746 if meta_dict['url']: 747 meta_dict['url'] = remove_binstar_tokens(meta_dict['url']) 748 try: 749 alt_files_path = join(prefix, 'conda-meta', dist + '.files') 750 meta_dict['files'] = list(yield_lines(alt_files_path)) 751 os.unlink(alt_files_path) 752 except IOError: 753 meta_dict['files'] = files 754 meta_dict['link'] = {'source': source_dir, 755 'type': link_name_map.get(linktype)} 756 if 'channel' in meta_dict: 757 meta_dict['channel'] = remove_binstar_tokens(meta_dict['channel']) 758 if 'icon' in meta_dict: 759 meta_dict['icondata'] = read_icondata(source_dir) 760 761 create_meta(prefix, dist, info_dir, meta_dict) 762 763 764 def unlink(prefix, dist): 765 ''' 766 Remove a package from the specified environment, it is an error if the 767 package does not exist in the prefix. 
768 ''' 769 with Locked(prefix): 770 run_script(prefix, dist, 'pre-unlink') 771 772 meta_path = join(prefix, 'conda-meta', dist + '.json') 773 with open(meta_path) as fi: 774 meta = json.load(fi) 775 776 mk_menus(prefix, meta['files'], remove=True) 777 dst_dirs1 = set() 778 779 for f in meta['files']: 780 dst = join(prefix, f) 781 dst_dirs1.add(dirname(dst)) 782 try: 783 os.unlink(dst) 784 except OSError: # file might not exist 785 log.debug("could not remove file: '%s'" % dst) 786 if on_win and os.path.exists(join(prefix, f)): 787 try: 788 log.debug("moving to trash") 789 move_path_to_trash(dst) 790 except ImportError: 791 # This shouldn't be an issue in the installer anyway 792 # but it can potentially happen with importing conda.config 793 log.debug("cannot import conda.config; probably not an issue") 794 795 # remove the meta-file last 796 os.unlink(meta_path) 797 798 dst_dirs2 = set() 799 for path in dst_dirs1: 800 while len(path) > len(prefix): 801 dst_dirs2.add(path) 802 path = dirname(path) 803 # in case there is nothing left 804 dst_dirs2.add(join(prefix, 'conda-meta')) 805 dst_dirs2.add(prefix) 806 807 for path in sorted(dst_dirs2, key=len, reverse=True): 808 rm_empty_dir(path) 809 810 811 def messages(prefix): 812 path = join(prefix, '.messages.txt') 813 try: 814 with open(path) as fi: 815 sys.stdout.write(fi.read()) 816 except IOError: 817 pass 818 finally: 819 rm_rf(path) 820 821 822 def duplicates_to_remove(linked_dists, keep_dists): 823 """ 824 Returns the (sorted) list of distributions to be removed, such that 825 only one distribution (for each name) remains. `keep_dists` is an 826 interable of distributions (which are not allowed to be removed). 827 """ 828 from collections import defaultdict 829 830 keep_dists = set(keep_dists) 831 ldists = defaultdict(set) # map names to set of distributions 832 for dist in linked_dists: 833 name = name_dist(dist) 834 ldists[name].add(dist) 835 836 res = set() 837 for dists in ldists.values(): 838 # `dists` is the group of packages with the same name 839 if len(dists) == 1: 840 # if there is only one package, nothing has to be removed 841 continue 842 if dists & keep_dists: 843 # if the group has packages which are have to be kept, we just 844 # take the set of packages which are in group but not in the 845 # ones which have to be kept 846 res.update(dists - keep_dists) 847 else: 848 # otherwise, we take lowest (n-1) (sorted) packages 849 res.update(sorted(dists)[:-1]) 850 return sorted(res) 851 852 853 # =========================== end API functions ========================== 854 855 856 def main(): 857 from optparse import OptionParser 858 859 p = OptionParser(description="conda link tool used by installer") 860 861 p.add_option('--file', 862 action="store", 863 help="path of a file containing distributions to link, " 864 "by default all packages extracted in the cache are " 865 "linked") 866 867 p.add_option('--prefix', 868 action="store", 869 default=sys.prefix, 870 help="prefix (defaults to %default)") 871 872 p.add_option('-v', '--verbose', 873 action="store_true") 874 875 opts, args = p.parse_args() 876 if args: 877 p.error('no arguments expected') 878 879 logging.basicConfig() 880 881 prefix = opts.prefix 882 pkgs_dir = join(prefix, 'pkgs') 883 if opts.verbose: 884 print("prefix: %r" % prefix) 885 886 if opts.file: 887 idists = list(yield_lines(join(prefix, opts.file))) 888 else: 889 idists = sorted(extracted(pkgs_dir)) 890 891 linktype = (LINK_HARD 892 if try_hard_link(pkgs_dir, prefix, idists[0]) else 893 LINK_COPY) 894 if 
opts.verbose: 895 print("linktype: %s" % link_name_map[linktype]) 896 897 for dist in idists: 898 if opts.verbose: 899 print("linking: %s" % dist) 900 link(pkgs_dir, prefix, dist, linktype) 901 902 messages(prefix) 903 904 for dist in duplicates_to_remove(linked(prefix), idists): 905 meta_path = join(prefix, 'conda-meta', dist + '.json') 906 print("WARNING: unlinking: %s" % meta_path) 907 try: 908 os.rename(meta_path, meta_path + '.bak') 909 except OSError: 910 rm_rf(meta_path) 911 912 913 if __name__ == '__main__': 914 main() 915 [end of conda/install.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conda/conda
f8cfd1e9998ed2503480d1a26f932f337838731f
BeeGFS hard-links

[BeeGFS](http://www.beegfs.com), a parallel cluster file system, [does not support](https://groups.google.com/forum/#!topic/fhgfs-user/cTJcqGZceVA) `hard-links` between files in different directories. Depending on the [configuration](https://groups.google.com/forum/#!topic/fhgfs-user/pvQSo0QWicw), either an error is issued, or a `symbolic-link` is created.

If a `symbolic-link` is created instead of a `hard-link`, it can cause problems, for example:

```
pkgs/bin/mpicc
pkgs/lib/libopen-pal.so
envs/bin/mpicc
envs/lib/libopen-pal.so
```

Here, when `envs/bin/mpicc` is executed, it is actually `pkgs/bin/mpicc` that runs, and the library `$PREFIX/../lib/libopen-pal.so` that actually gets loaded is `pkgs/lib/libopen-pal.so`, which is different from `envs/lib/libopen-pal.so`, the copy in which conda has fixed the hard-coded prefix path; as a final consequence, `mpicc` fails to find its configuration file.

#805 is another (closed) issue related to `BeeGFS`.

Would we need a new conda option to turn hard-links into copies?
2016-04-14T06:23:18Z
<patch> diff --git a/conda/install.py b/conda/install.py --- a/conda/install.py +++ b/conda/install.py @@ -515,7 +515,12 @@ def try_hard_link(pkgs_dir, prefix, dist): if not isdir(prefix): os.makedirs(prefix) _link(src, dst, LINK_HARD) - return True + # Some file systems (at least BeeGFS) do not support hard-links + # between files in different directories. Depending on the + # file system configuration, a symbolic link may be created + # instead. If a symbolic link is created instead of a hard link, + # return False. + return not os.path.islink(dst) except OSError: return False finally: </patch>
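As an aside, the check this patch adds to `try_hard_link` can also be reproduced outside of conda to probe a file system up front. The sketch below is illustrative only (the function name, probe file names, and temporary-directory layout are invented for the example and are not part of conda): it creates a hard link across two directories and reports `False` if the operation either fails or silently produces a symbolic link, which is the BeeGFS behaviour described in the issue.

```python
import os
import tempfile


def cross_directory_hard_links_work(base_dir):
    """Return True if base_dir's file system creates real hard links between
    files in different directories, False if it errors out or silently
    substitutes a symbolic link (as BeeGFS can be configured to do)."""
    src_dir = tempfile.mkdtemp(dir=base_dir)
    dst_dir = tempfile.mkdtemp(dir=base_dir)
    src = os.path.join(src_dir, "probe")
    dst = os.path.join(dst_dir, "probe")
    try:
        with open(src, "w") as fh:
            fh.write("probe")
        os.link(src, dst)
        # A genuine hard link is never reported as a symlink.
        return not os.path.islink(dst)
    except OSError:
        # The file system refused the cross-directory hard link outright.
        return False
    finally:
        for path in (src, dst):
            if os.path.lexists(path):
                os.remove(path)
        for directory in (src_dir, dst_dir):
            os.rmdir(directory)


if __name__ == "__main__":
    # Point this at a BeeGFS mount to see whether conda's hard-link
    # strategy would be safe there.
    print(cross_directory_hard_links_work(tempfile.gettempdir()))
```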
[]
[]
ray-project__ray-4469
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [tune] EXAMPLE DOESN'T RUN only show failing information from two examples: mnist_pytorch.py and tune_mnist_keras.py <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: > NAME="Ubuntu" > VERSION="16.04.5 LTS (Xenial Xerus)" > ID=ubuntu > ID_LIKE=debian > PRETTY_NAME="Ubuntu 16.04.5 LTS" > VERSION_ID="16.04" > HOME_URL="http://www.ubuntu.com/" > SUPPORT_URL="http://help.ubuntu.com/" > BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" > VERSION_CODENAME=xenial > UBUNTU_CODENAME=xenial - **Ray installed from (source or binary)**: source - **Ray version**: 0.6.5 - **Python version**: Python 3.6.5 - **Exact command to reproduce**: ``` cd ray/python/ray/tune/examples python mnist_pytorch.py ``` pytorch version: > 1.0.0 or ``` cd ray/python/ray/tune/examples python tune_mnist_keras.py ``` TF version: > 1.12.0 keras version: > 2.2.4 <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem Without any modfifications, build Ray from source, try to directly use tune provided examples, but seems most of the examples failed due to the > Destroying actor for trial xxxx. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. Btw, the machine has GPU and the version: > Cuda compilation tools, release 9.0, V9.0.176 However, after trying add `reuse_actors=True` , the same error msg appear. Since the trials are suddenly stopped without any error or exception, could you please help to take a look? @richardliaw @robertnishihara Thanks! ### Source code / logs `python mnist_pytorch.py` > 2019-03-23 23:54:34,913 WARNING worker.py:1406 -- WARNING: Not updating worker name since `setproctitle` is not installed. Install this with `pip install setproctitle` (or ray[debug]) to enable monitoring of worker processes. > 2019-03-23 23:54:34,914 INFO node.py:423 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2019-03-23_23-54-34_52746/logs. > 2019-03-23 23:54:35,021 INFO services.py:363 -- Waiting for redis server at 127.0.0.1:24948 to respond... > 2019-03-23 23:54:35,130 INFO services.py:363 -- Waiting for redis server at 127.0.0.1:39939 to respond... > 2019-03-23 23:54:35,132 INFO services.py:760 -- Starting Redis shard with 10.0 GB max memory. > 2019-03-23 23:54:35,147 WARNING services.py:1236 -- Warning: Capping object memory store to 20.0GB. To increase this further, specify `object_store_memory` when calling ray.init() or ray start. > 2019-03-23 23:54:35,148 INFO services.py:1384 -- Starting the Plasma object store with 20.0 GB memory using /dev/shm. > 2019-03-23 23:54:35,793 INFO tune.py:60 -- Tip: to resume incomplete experiments, pass resume='prompt' or resume=True to run() > 2019-03-23 23:54:35,796 INFO tune.py:211 -- Starting a new experiment. > 2019-03-23 23:54:37,283 WARNING util.py:62 -- The `start_trial` operation took 1.3957560062408447 seconds to complete, which may be a performance bottleneck. > 2019-03-23 23:54:58,442 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_0_lr=0.081371,momentum=0.40185. 
If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:58,754 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_3_lr=0.010086,momentum=0.41713. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,133 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_1_lr=0.028139,momentum=0.40255. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,160 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_7_lr=0.030289,momentum=0.55615. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,299 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_5_lr=0.08914,momentum=0.18464. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,449 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_6_lr=0.066883,momentum=0.68077. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:00,221 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_4_lr=0.059111,momentum=0.82238. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:00,525 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_2_lr=0.063279,momentum=0.43368. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:21,020 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_9_lr=0.084676,momentum=0.45356. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:21,150 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_8_lr=0.051943,momentum=0.6297. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. `python tune_mnist_keras.py` > (pid=57890) 60000 train samples > (pid=57890) 10000 test samples > (pid=57881) x_train shape: (60000, 28, 28, 1) > (pid=57881) 60000 train samples > (pid=57881) 10000 test samples > (pid=57899) x_train shape: (60000, 28, 28, 1) > (pid=57899) 60000 train samples > (pid=57899) 10000 test samples > (pid=57916) x_train shape: (60000, 28, 28, 1) > (pid=57916) 60000 train samples > (pid=57916) 10000 test samples > (pid=57913) x_train shape: (60000, 28, 28, 1) > (pid=57913) 60000 train samples > (pid=57913) 10000 test samples > (pid=57910) x_train shape: (60000, 28, 28, 1) > (pid=57910) 60000 train samples > (pid=57910) 10000 test samples > 2019-03-24 00:09:22,154 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_3_dropout1=0.41208,hidden=53,lr=0.0045996,momentum=0.29457. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:09:23,633 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_9_dropout1=0.78277,hidden=424,lr=0.085855,momentum=0.11821. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. 
> 2019-03-24 00:09:28,650 WARNING util.py:62 -- The `experiment_checkpoint` operation took 0.14834022521972656 seconds to complete, which may be a performance bottleneck. > 2019-03-24 00:09:36,315 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_1_dropout1=0.77148,hidden=307,lr=0.084435,momentum=0.87804. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:09:37,978 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_4_dropout1=0.71993,hidden=442,lr=0.014533,momentum=0.65771. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:18,199 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_6_dropout1=0.72255,hidden=446,lr=0.086364,momentum=0.86826. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:44,899 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_2_dropout1=0.73158,hidden=107,lr=0.087594,momentum=0.5979. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:48,515 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_0_dropout1=0.2571,hidden=236,lr=0.0083709,momentum=0.47214. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:51,434 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_7_dropout1=0.47593,hidden=218,lr=0.067242,momentum=0.85505. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:54,745 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_8_dropout1=0.47459,hidden=383,lr=0.094025,momentum=0.39063. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:56,552 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_5_dropout1=0.5431,hidden=429,lr=0.031262,momentum=0.61523. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. </issue> <code> [start of README.rst] 1 .. image:: https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png 2 3 .. image:: https://travis-ci.com/ray-project/ray.svg?branch=master 4 :target: https://travis-ci.com/ray-project/ray 5 6 .. image:: https://readthedocs.org/projects/ray/badge/?version=latest 7 :target: http://ray.readthedocs.io/en/latest/?badge=latest 8 9 .. image:: https://img.shields.io/badge/pypi-0.6.4-blue.svg 10 :target: https://pypi.org/project/ray/ 11 12 | 13 14 **Ray is a flexible, high-performance distributed execution framework.** 15 16 17 Ray is easy to install: ``pip install ray`` 18 19 Example Use 20 ----------- 21 22 +------------------------------------------------+----------------------------------------------------+ 23 | **Basic Python** | **Distributed with Ray** | 24 +------------------------------------------------+----------------------------------------------------+ 25 |.. code-block:: python |.. code-block:: python | 26 | | | 27 | # Execute f serially. | # Execute f in parallel. 
| 28 | | | 29 | | @ray.remote | 30 | def f(): | def f(): | 31 | time.sleep(1) | time.sleep(1) | 32 | return 1 | return 1 | 33 | | | 34 | | | 35 | | ray.init() | 36 | results = [f() for i in range(4)] | results = ray.get([f.remote() for i in range(4)]) | 37 +------------------------------------------------+----------------------------------------------------+ 38 39 40 Ray comes with libraries that accelerate deep learning and reinforcement learning development: 41 42 - `Tune`_: Hyperparameter Optimization Framework 43 - `RLlib`_: Scalable Reinforcement Learning 44 - `Distributed Training <http://ray.readthedocs.io/en/latest/distributed_sgd.html>`__ 45 46 .. _`Tune`: http://ray.readthedocs.io/en/latest/tune.html 47 .. _`RLlib`: http://ray.readthedocs.io/en/latest/rllib.html 48 49 Installation 50 ------------ 51 52 Ray can be installed on Linux and Mac with ``pip install ray``. 53 54 To build Ray from source or to install the nightly versions, see the `installation documentation`_. 55 56 .. _`installation documentation`: http://ray.readthedocs.io/en/latest/installation.html 57 58 More Information 59 ---------------- 60 61 - `Documentation`_ 62 - `Tutorial`_ 63 - `Blog`_ 64 - `Ray paper`_ 65 - `Ray HotOS paper`_ 66 67 .. _`Documentation`: http://ray.readthedocs.io/en/latest/index.html 68 .. _`Tutorial`: https://github.com/ray-project/tutorial 69 .. _`Blog`: https://ray-project.github.io/ 70 .. _`Ray paper`: https://arxiv.org/abs/1712.05889 71 .. _`Ray HotOS paper`: https://arxiv.org/abs/1703.03924 72 73 Getting Involved 74 ---------------- 75 76 - `[email protected]`_: For discussions about development or any general 77 questions. 78 - `StackOverflow`_: For questions about how to use Ray. 79 - `GitHub Issues`_: For reporting bugs and feature requests. 80 - `Pull Requests`_: For submitting code contributions. 81 82 .. _`[email protected]`: https://groups.google.com/forum/#!forum/ray-dev 83 .. _`GitHub Issues`: https://github.com/ray-project/ray/issues 84 .. _`StackOverflow`: https://stackoverflow.com/questions/tagged/ray 85 .. _`Pull Requests`: https://github.com/ray-project/ray/pulls 86 [end of README.rst] [start of python/ray/tune/ray_trial_executor.py] 1 # coding: utf-8 2 from __future__ import absolute_import 3 from __future__ import division 4 from __future__ import print_function 5 6 import logging 7 import os 8 import random 9 import time 10 import traceback 11 12 import ray 13 from ray.tune.error import TuneError, AbortTrialExecution 14 from ray.tune.logger import NoopLogger 15 from ray.tune.trial import Trial, Resources, Checkpoint 16 from ray.tune.trial_executor import TrialExecutor 17 from ray.tune.util import warn_if_slow 18 19 logger = logging.getLogger(__name__) 20 21 RESOURCE_REFRESH_PERIOD = 0.5 # Refresh resources every 500 ms 22 BOTTLENECK_WARN_PERIOD_S = 60 23 NONTRIVIAL_WAIT_TIME_THRESHOLD_S = 1e-3 24 25 26 class _LocalWrapper(object): 27 def __init__(self, result): 28 self._result = result 29 30 def unwrap(self): 31 """Returns the wrapped result.""" 32 return self._result 33 34 35 class RayTrialExecutor(TrialExecutor): 36 """An implemention of TrialExecutor based on Ray.""" 37 38 def __init__(self, 39 queue_trials=False, 40 reuse_actors=False, 41 refresh_period=RESOURCE_REFRESH_PERIOD): 42 super(RayTrialExecutor, self).__init__(queue_trials) 43 self._running = {} 44 # Since trial resume after paused should not run 45 # trial.train.remote(), thus no more new remote object id generated. 46 # We use self._paused to store paused trials here. 
47 self._paused = {} 48 self._reuse_actors = reuse_actors 49 self._cached_actor = None 50 51 self._avail_resources = Resources(cpu=0, gpu=0) 52 self._committed_resources = Resources(cpu=0, gpu=0) 53 self._resources_initialized = False 54 self._refresh_period = refresh_period 55 self._last_resource_refresh = float("-inf") 56 self._last_nontrivial_wait = time.time() 57 if ray.is_initialized(): 58 self._update_avail_resources() 59 60 def _setup_runner(self, trial, reuse_allowed): 61 if (self._reuse_actors and reuse_allowed 62 and self._cached_actor is not None): 63 logger.debug("Reusing cached runner {} for {}".format( 64 self._cached_actor, trial.trial_id)) 65 existing_runner = self._cached_actor 66 self._cached_actor = None 67 else: 68 if self._cached_actor: 69 logger.debug( 70 "Cannot reuse cached runner {} for new trial".format( 71 self._cached_actor)) 72 self._cached_actor.stop.remote() 73 self._cached_actor.__ray_terminate__.remote() 74 self._cached_actor = None 75 existing_runner = None 76 cls = ray.remote( 77 num_cpus=trial.resources.cpu, 78 num_gpus=trial.resources.gpu, 79 resources=trial.resources.custom_resources)( 80 trial._get_trainable_cls()) 81 82 trial.init_logger() 83 # We checkpoint metadata here to try mitigating logdir duplication 84 self.try_checkpoint_metadata(trial) 85 remote_logdir = trial.logdir 86 87 if existing_runner: 88 trial.runner = existing_runner 89 if not self.reset_trial(trial, trial.config, trial.experiment_tag): 90 raise AbortTrialExecution( 91 "Trial runner reuse requires reset_trial() to be " 92 "implemented and return True.") 93 return existing_runner 94 95 def logger_creator(config): 96 # Set the working dir in the remote process, for user file writes 97 if not os.path.exists(remote_logdir): 98 os.makedirs(remote_logdir) 99 os.chdir(remote_logdir) 100 return NoopLogger(config, remote_logdir) 101 102 # Logging for trials is handled centrally by TrialRunner, so 103 # configure the remote runner to use a noop-logger. 104 return cls.remote(config=trial.config, logger_creator=logger_creator) 105 106 def _train(self, trial): 107 """Start one iteration of training and save remote id.""" 108 109 assert trial.status == Trial.RUNNING, trial.status 110 remote = trial.runner.train.remote() 111 112 # Local Mode 113 if isinstance(remote, dict): 114 remote = _LocalWrapper(remote) 115 116 self._running[remote] = trial 117 118 def _start_trial(self, trial, checkpoint=None): 119 """Starts trial and restores last result if trial was paused. 120 121 Raises: 122 ValueError if restoring from checkpoint fails. 123 """ 124 prior_status = trial.status 125 self.set_status(trial, Trial.RUNNING) 126 trial.runner = self._setup_runner( 127 trial, 128 reuse_allowed=checkpoint is not None 129 or trial._checkpoint.value is not None) 130 if not self.restore(trial, checkpoint): 131 if trial.status == Trial.ERROR: 132 raise RuntimeError( 133 "Restore from checkpoint failed for Trial {}.".format( 134 str(trial))) 135 136 previous_run = self._find_item(self._paused, trial) 137 if (prior_status == Trial.PAUSED and previous_run): 138 # If Trial was in flight when paused, self._paused stores result. 139 self._paused.pop(previous_run[0]) 140 self._running[previous_run[0]] = trial 141 else: 142 self._train(trial) 143 144 def _stop_trial(self, trial, error=False, error_msg=None, 145 stop_logger=True): 146 """Stops this trial. 147 148 Stops this trial, releasing all allocating resources. 
If stopping the 149 trial fails, the run will be marked as terminated in error, but no 150 exception will be thrown. 151 152 Args: 153 error (bool): Whether to mark this trial as terminated in error. 154 error_msg (str): Optional error message. 155 stop_logger (bool): Whether to shut down the trial logger. 156 """ 157 158 if stop_logger: 159 trial.close_logger() 160 161 if error: 162 self.set_status(trial, Trial.ERROR) 163 else: 164 self.set_status(trial, Trial.TERMINATED) 165 166 try: 167 trial.write_error_log(error_msg) 168 if hasattr(trial, 'runner') and trial.runner: 169 if (not error and self._reuse_actors 170 and self._cached_actor is None): 171 logger.debug("Reusing actor for {}".format(trial.runner)) 172 self._cached_actor = trial.runner 173 else: 174 logger.info( 175 "Destroying actor for trial {}. If your trainable is " 176 "slow to initialize, consider setting " 177 "reuse_actors=True to reduce actor creation " 178 "overheads.".format(trial)) 179 trial.runner.stop.remote() 180 trial.runner.__ray_terminate__.remote() 181 except Exception: 182 logger.exception("Error stopping runner for Trial %s", str(trial)) 183 self.set_status(trial, Trial.ERROR) 184 finally: 185 trial.runner = None 186 187 def start_trial(self, trial, checkpoint=None): 188 """Starts the trial. 189 190 Will not return resources if trial repeatedly fails on start. 191 192 Args: 193 trial (Trial): Trial to be started. 194 checkpoint (Checkpoint): A Python object or path storing the state 195 of trial. 196 """ 197 198 self._commit_resources(trial.resources) 199 try: 200 self._start_trial(trial, checkpoint) 201 except Exception as e: 202 logger.exception("Error starting runner for Trial %s", str(trial)) 203 error_msg = traceback.format_exc() 204 time.sleep(2) 205 self._stop_trial(trial, error=True, error_msg=error_msg) 206 if isinstance(e, AbortTrialExecution): 207 return # don't retry fatal Tune errors 208 try: 209 # This forces the trial to not start from checkpoint. 210 trial.clear_checkpoint() 211 logger.info( 212 "Trying to start runner for Trial %s without checkpoint.", 213 str(trial)) 214 self._start_trial(trial) 215 except Exception: 216 logger.exception( 217 "Error starting runner for Trial %s, aborting!", 218 str(trial)) 219 error_msg = traceback.format_exc() 220 self._stop_trial(trial, error=True, error_msg=error_msg) 221 # note that we don't return the resources, since they may 222 # have been lost 223 224 def _find_item(self, dictionary, item): 225 out = [rid for rid, t in dictionary.items() if t is item] 226 return out 227 228 def stop_trial(self, trial, error=False, error_msg=None, stop_logger=True): 229 """Only returns resources if resources allocated.""" 230 prior_status = trial.status 231 self._stop_trial( 232 trial, error=error, error_msg=error_msg, stop_logger=stop_logger) 233 if prior_status == Trial.RUNNING: 234 logger.debug("Returning resources for Trial %s.", str(trial)) 235 self._return_resources(trial.resources) 236 out = self._find_item(self._running, trial) 237 for result_id in out: 238 self._running.pop(result_id) 239 240 def continue_training(self, trial): 241 """Continues the training of this trial.""" 242 243 self._train(trial) 244 245 def pause_trial(self, trial): 246 """Pauses the trial. 247 248 If trial is in-flight, preserves return value in separate queue 249 before pausing, which is restored when Trial is resumed. 
250 """ 251 252 trial_future = self._find_item(self._running, trial) 253 if trial_future: 254 self._paused[trial_future[0]] = trial 255 super(RayTrialExecutor, self).pause_trial(trial) 256 257 def reset_trial(self, trial, new_config, new_experiment_tag): 258 """Tries to invoke `Trainable.reset_config()` to reset trial. 259 260 Args: 261 trial (Trial): Trial to be reset. 262 new_config (dict): New configuration for Trial 263 trainable. 264 new_experiment_tag (str): New experiment name 265 for trial. 266 267 Returns: 268 True if `reset_config` is successful else False. 269 """ 270 trial.experiment_tag = new_experiment_tag 271 trial.config = new_config 272 trainable = trial.runner 273 with warn_if_slow("reset_config"): 274 reset_val = ray.get(trainable.reset_config.remote(new_config)) 275 return reset_val 276 277 def get_running_trials(self): 278 """Returns the running trials.""" 279 280 return list(self._running.values()) 281 282 def get_next_available_trial(self): 283 shuffled_results = list(self._running.keys()) 284 random.shuffle(shuffled_results) 285 # Note: We shuffle the results because `ray.wait` by default returns 286 # the first available result, and we want to guarantee that slower 287 # trials (i.e. trials that run remotely) also get fairly reported. 288 # See https://github.com/ray-project/ray/issues/4211 for details. 289 start = time.time() 290 [result_id], _ = ray.wait(shuffled_results) 291 wait_time = time.time() - start 292 if wait_time > NONTRIVIAL_WAIT_TIME_THRESHOLD_S: 293 self._last_nontrivial_wait = time.time() 294 if time.time() - self._last_nontrivial_wait > BOTTLENECK_WARN_PERIOD_S: 295 logger.warn( 296 "Over the last {} seconds, the Tune event loop has been " 297 "backlogged processing new results. Consider increasing your " 298 "period of result reporting to improve performance.".format( 299 BOTTLENECK_WARN_PERIOD_S)) 300 301 self._last_nontrivial_wait = time.time() 302 return self._running[result_id] 303 304 def fetch_result(self, trial): 305 """Fetches one result of the running trials. 
306 307 Returns: 308 Result of the most recent trial training run.""" 309 trial_future = self._find_item(self._running, trial) 310 if not trial_future: 311 raise ValueError("Trial was not running.") 312 self._running.pop(trial_future[0]) 313 with warn_if_slow("fetch_result"): 314 result = ray.get(trial_future[0]) 315 316 # For local mode 317 if isinstance(result, _LocalWrapper): 318 result = result.unwrap() 319 return result 320 321 def _commit_resources(self, resources): 322 committed = self._committed_resources 323 all_keys = set(resources.custom_resources).union( 324 set(committed.custom_resources)) 325 326 custom_resources = { 327 k: committed.get(k) + resources.get_res_total(k) 328 for k in all_keys 329 } 330 331 self._committed_resources = Resources( 332 committed.cpu + resources.cpu_total(), 333 committed.gpu + resources.gpu_total(), 334 custom_resources=custom_resources) 335 336 def _return_resources(self, resources): 337 committed = self._committed_resources 338 339 all_keys = set(resources.custom_resources).union( 340 set(committed.custom_resources)) 341 342 custom_resources = { 343 k: committed.get(k) - resources.get_res_total(k) 344 for k in all_keys 345 } 346 self._committed_resources = Resources( 347 committed.cpu - resources.cpu_total(), 348 committed.gpu - resources.gpu_total(), 349 custom_resources=custom_resources) 350 351 assert self._committed_resources.is_nonnegative(), ( 352 "Resource invalid: {}".format(resources)) 353 354 def _update_avail_resources(self, num_retries=5): 355 for i in range(num_retries): 356 try: 357 resources = ray.global_state.cluster_resources() 358 except Exception: 359 # TODO(rliaw): Remove this when local mode is fixed. 360 # https://github.com/ray-project/ray/issues/4147 361 logger.debug("Using resources for local machine.") 362 resources = ray.services.check_and_update_resources( 363 None, None, None) 364 if not resources: 365 logger.warning("Cluster resources not detected. Retrying...") 366 time.sleep(0.5) 367 368 if not resources or "CPU" not in resources: 369 raise TuneError("Cluster resources cannot be detected. " 370 "You can resume this experiment by passing in " 371 "`resume=True` to `run`.") 372 373 resources = resources.copy() 374 num_cpus = resources.pop("CPU") 375 num_gpus = resources.pop("GPU") 376 custom_resources = resources 377 378 self._avail_resources = Resources( 379 int(num_cpus), int(num_gpus), custom_resources=custom_resources) 380 self._last_resource_refresh = time.time() 381 self._resources_initialized = True 382 383 def has_resources(self, resources): 384 """Returns whether this runner has at least the specified resources. 385 386 This refreshes the Ray cluster resources if the time since last update 387 has exceeded self._refresh_period. This also assumes that the 388 cluster is not resizing very frequently. 
389 """ 390 if time.time() - self._last_resource_refresh > self._refresh_period: 391 self._update_avail_resources() 392 393 currently_available = Resources.subtract(self._avail_resources, 394 self._committed_resources) 395 396 have_space = ( 397 resources.cpu_total() <= currently_available.cpu 398 and resources.gpu_total() <= currently_available.gpu and all( 399 resources.get_res_total(res) <= currently_available.get(res) 400 for res in resources.custom_resources)) 401 402 if have_space: 403 return True 404 405 can_overcommit = self._queue_trials 406 407 if (resources.cpu_total() > 0 and currently_available.cpu <= 0) or \ 408 (resources.gpu_total() > 0 and currently_available.gpu <= 0) or \ 409 any((resources.get_res_total(res_name) > 0 410 and currently_available.get(res_name) <= 0) 411 for res_name in resources.custom_resources): 412 can_overcommit = False # requested resource is already saturated 413 414 if can_overcommit: 415 logger.warning( 416 "Allowing trial to start even though the " 417 "cluster does not have enough free resources. Trial actors " 418 "may appear to hang until enough resources are added to the " 419 "cluster (e.g., via autoscaling). You can disable this " 420 "behavior by specifying `queue_trials=False` in " 421 "ray.tune.run().") 422 return True 423 424 return False 425 426 def debug_string(self): 427 """Returns a human readable message for printing to the console.""" 428 429 if self._resources_initialized: 430 status = "Resources requested: {}/{} CPUs, {}/{} GPUs".format( 431 self._committed_resources.cpu, self._avail_resources.cpu, 432 self._committed_resources.gpu, self._avail_resources.gpu) 433 customs = ", ".join([ 434 "{}/{} {}".format( 435 self._committed_resources.get_res_total(name), 436 self._avail_resources.get_res_total(name), name) 437 for name in self._avail_resources.custom_resources 438 ]) 439 if customs: 440 status += " ({})".format(customs) 441 return status 442 else: 443 return "Resources requested: ?" 444 445 def resource_string(self): 446 """Returns a string describing the total resources available.""" 447 448 if self._resources_initialized: 449 res_str = "{} CPUs, {} GPUs".format(self._avail_resources.cpu, 450 self._avail_resources.gpu) 451 if self._avail_resources.custom_resources: 452 custom = ", ".join( 453 "{} {}".format( 454 self._avail_resources.get_res_total(name), name) 455 for name in self._avail_resources.custom_resources) 456 res_str += " ({})".format(custom) 457 return res_str 458 else: 459 return "? CPUs, ? GPUs" 460 461 def on_step_begin(self): 462 """Before step() called, update the available resources.""" 463 self._update_avail_resources() 464 465 def save(self, trial, storage=Checkpoint.DISK): 466 """Saves the trial's state to a checkpoint.""" 467 trial._checkpoint.storage = storage 468 trial._checkpoint.last_result = trial.last_result 469 if storage == Checkpoint.MEMORY: 470 trial._checkpoint.value = trial.runner.save_to_object.remote() 471 else: 472 with warn_if_slow("save_to_disk"): 473 trial._checkpoint.value = ray.get(trial.runner.save.remote()) 474 return trial._checkpoint.value 475 476 def restore(self, trial, checkpoint=None): 477 """Restores training state from a given model checkpoint. 478 479 This will also sync the trial results to a new location 480 if restoring on a different node. 
481 """ 482 if checkpoint is None or checkpoint.value is None: 483 checkpoint = trial._checkpoint 484 if checkpoint is None or checkpoint.value is None: 485 return True 486 if trial.runner is None: 487 logger.error("Unable to restore - no runner.") 488 self.set_status(trial, Trial.ERROR) 489 return False 490 try: 491 value = checkpoint.value 492 if checkpoint.storage == Checkpoint.MEMORY: 493 assert type(value) != Checkpoint, type(value) 494 trial.runner.restore_from_object.remote(value) 495 else: 496 worker_ip = ray.get(trial.runner.current_ip.remote()) 497 trial.sync_logger_to_new_location(worker_ip) 498 with warn_if_slow("restore_from_disk"): 499 ray.get(trial.runner.restore.remote(value)) 500 trial.last_result = checkpoint.last_result 501 return True 502 except Exception: 503 logger.exception("Error restoring runner for Trial %s.", trial) 504 self.set_status(trial, Trial.ERROR) 505 return False 506 507 def export_trial_if_needed(self, trial): 508 """Exports model of this trial based on trial.export_formats. 509 510 Return: 511 A dict that maps ExportFormats to successfully exported models. 512 """ 513 if trial.export_formats and len(trial.export_formats) > 0: 514 return ray.get( 515 trial.runner.export_model.remote(trial.export_formats)) 516 return {} 517 [end of python/ray/tune/ray_trial_executor.py] [start of python/ray/tune/tune.py] 1 from __future__ import absolute_import 2 from __future__ import division 3 from __future__ import print_function 4 5 import click 6 import logging 7 import os 8 import time 9 10 from ray.tune.error import TuneError 11 from ray.tune.experiment import convert_to_experiment_list, Experiment 12 from ray.tune.suggest import BasicVariantGenerator 13 from ray.tune.trial import Trial, DEBUG_PRINT_INTERVAL 14 from ray.tune.log_sync import wait_for_log_sync 15 from ray.tune.trial_runner import TrialRunner 16 from ray.tune.schedulers import (HyperBandScheduler, AsyncHyperBandScheduler, 17 FIFOScheduler, MedianStoppingRule) 18 from ray.tune.web_server import TuneServer 19 20 logger = logging.getLogger(__name__) 21 22 _SCHEDULERS = { 23 "FIFO": FIFOScheduler, 24 "MedianStopping": MedianStoppingRule, 25 "HyperBand": HyperBandScheduler, 26 "AsyncHyperBand": AsyncHyperBandScheduler, 27 } 28 29 30 def _make_scheduler(args): 31 if args.scheduler in _SCHEDULERS: 32 return _SCHEDULERS[args.scheduler](**args.scheduler_config) 33 else: 34 raise TuneError("Unknown scheduler: {}, should be one of {}".format( 35 args.scheduler, _SCHEDULERS.keys())) 36 37 38 def _find_checkpoint_dir(exp): 39 # TODO(rliaw): Make sure the checkpoint_dir is resolved earlier. 40 # Right now it is resolved somewhere far down the trial generation process 41 return os.path.join(exp.spec["local_dir"], exp.name) 42 43 44 def _prompt_restore(checkpoint_dir, resume): 45 restore = False 46 if TrialRunner.checkpoint_exists(checkpoint_dir): 47 if resume == "prompt": 48 msg = ("Found incomplete experiment at {}. 
" 49 "Would you like to resume it?".format(checkpoint_dir)) 50 restore = click.confirm(msg, default=False) 51 if restore: 52 logger.info("Tip: to always resume, " 53 "pass resume=True to run()") 54 else: 55 logger.info("Tip: to always start a new experiment, " 56 "pass resume=False to run()") 57 elif resume: 58 restore = True 59 else: 60 logger.info("Tip: to resume incomplete experiments, " 61 "pass resume='prompt' or resume=True to run()") 62 else: 63 logger.info( 64 "Did not find checkpoint file in {}.".format(checkpoint_dir)) 65 return restore 66 67 68 def run(run_or_experiment, 69 name=None, 70 stop=None, 71 config=None, 72 resources_per_trial=None, 73 num_samples=1, 74 local_dir=None, 75 upload_dir=None, 76 trial_name_creator=None, 77 loggers=None, 78 sync_function=None, 79 checkpoint_freq=0, 80 checkpoint_at_end=False, 81 export_formats=None, 82 max_failures=3, 83 restore=None, 84 search_alg=None, 85 scheduler=None, 86 with_server=False, 87 server_port=TuneServer.DEFAULT_PORT, 88 verbose=2, 89 resume=False, 90 queue_trials=False, 91 reuse_actors=False, 92 trial_executor=None, 93 raise_on_failed_trial=True): 94 """Executes training. 95 96 Args: 97 run_or_experiment (function|class|str|Experiment): If 98 function|class|str, this is the algorithm or model to train. 99 This may refer to the name of a built-on algorithm 100 (e.g. RLLib's DQN or PPO), a user-defined trainable 101 function or class, or the string identifier of a 102 trainable function or class registered in the tune registry. 103 If Experiment, then Tune will execute training based on 104 Experiment.spec. 105 name (str): Name of experiment. 106 stop (dict): The stopping criteria. The keys may be any field in 107 the return result of 'train()', whichever is reached first. 108 Defaults to empty dict. 109 config (dict): Algorithm-specific configuration for Tune variant 110 generation (e.g. env, hyperparams). Defaults to empty dict. 111 Custom search algorithms may ignore this. 112 resources_per_trial (dict): Machine resources to allocate per trial, 113 e.g. ``{"cpu": 64, "gpu": 8}``. Note that GPUs will not be 114 assigned unless you specify them here. Defaults to 1 CPU and 0 115 GPUs in ``Trainable.default_resource_request()``. 116 num_samples (int): Number of times to sample from the 117 hyperparameter space. Defaults to 1. If `grid_search` is 118 provided as an argument, the grid will be repeated 119 `num_samples` of times. 120 local_dir (str): Local dir to save training results to. 121 Defaults to ``~/ray_results``. 122 upload_dir (str): Optional URI to sync training results 123 to (e.g. ``s3://bucket``). 124 trial_name_creator (func): Optional function for generating 125 the trial string representation. 126 loggers (list): List of logger creators to be used with 127 each Trial. If None, defaults to ray.tune.logger.DEFAULT_LOGGERS. 128 See `ray/tune/logger.py`. 129 sync_function (func|str): Function for syncing the local_dir to 130 upload_dir. If string, then it must be a string template for 131 syncer to run. If not provided, the sync command defaults 132 to standard S3 or gsutil sync comamnds. 133 checkpoint_freq (int): How many training iterations between 134 checkpoints. A value of 0 (default) disables checkpointing. 135 checkpoint_at_end (bool): Whether to checkpoint at the end of the 136 experiment regardless of the checkpoint_freq. Default is False. 137 export_formats (list): List of formats that exported at the end of 138 the experiment. Default is None. 
139 max_failures (int): Try to recover a trial from its last 140 checkpoint at least this many times. Only applies if 141 checkpointing is enabled. Setting to -1 will lead to infinite 142 recovery retries. Defaults to 3. 143 restore (str): Path to checkpoint. Only makes sense to set if 144 running 1 trial. Defaults to None. 145 search_alg (SearchAlgorithm): Search Algorithm. Defaults to 146 BasicVariantGenerator. 147 scheduler (TrialScheduler): Scheduler for executing 148 the experiment. Choose among FIFO (default), MedianStopping, 149 AsyncHyperBand, and HyperBand. 150 with_server (bool): Starts a background Tune server. Needed for 151 using the Client API. 152 server_port (int): Port number for launching TuneServer. 153 verbose (int): 0, 1, or 2. Verbosity mode. 0 = silent, 154 1 = only status updates, 2 = status and trial results. 155 resume (bool|"prompt"): If checkpoint exists, the experiment will 156 resume from there. If resume is "prompt", Tune will prompt if 157 checkpoint detected. 158 queue_trials (bool): Whether to queue trials when the cluster does 159 not currently have enough resources to launch one. This should 160 be set to True when running on an autoscaling cluster to enable 161 automatic scale-up. 162 reuse_actors (bool): Whether to reuse actors between different trials 163 when possible. This can drastically speed up experiments that start 164 and stop actors often (e.g., PBT in time-multiplexing mode). This 165 requires trials to have the same resource requirements. 166 trial_executor (TrialExecutor): Manage the execution of trials. 167 raise_on_failed_trial (bool): Raise TuneError if there exists failed 168 trial (of ERROR state) when the experiments complete. 169 170 Returns: 171 List of Trial objects. 172 173 Raises: 174 TuneError if any trials failed and `raise_on_failed_trial` is True. 175 176 Examples: 177 >>> tune.run(mytrainable, scheduler=PopulationBasedTraining()) 178 179 >>> tune.run(mytrainable, num_samples=5, reuse_actors=True) 180 181 >>> tune.run( 182 "PG", 183 num_samples=5, 184 config={ 185 "env": "CartPole-v0", 186 "lr": tune.sample_from(lambda _: np.random.rand()) 187 } 188 ) 189 """ 190 experiment = run_or_experiment 191 if not isinstance(run_or_experiment, Experiment): 192 experiment = Experiment( 193 name, run_or_experiment, stop, config, resources_per_trial, 194 num_samples, local_dir, upload_dir, trial_name_creator, loggers, 195 sync_function, checkpoint_freq, checkpoint_at_end, export_formats, 196 max_failures, restore) 197 else: 198 logger.debug("Ignoring some parameters passed into tune.run.") 199 200 checkpoint_dir = _find_checkpoint_dir(experiment) 201 should_restore = _prompt_restore(checkpoint_dir, resume) 202 203 runner = None 204 if should_restore: 205 try: 206 runner = TrialRunner.restore(checkpoint_dir, search_alg, scheduler, 207 trial_executor) 208 except Exception: 209 logger.exception("Runner restore failed. 
Restarting experiment.") 210 else: 211 logger.info("Starting a new experiment.") 212 213 if not runner: 214 scheduler = scheduler or FIFOScheduler() 215 search_alg = search_alg or BasicVariantGenerator() 216 217 search_alg.add_configurations([experiment]) 218 219 runner = TrialRunner( 220 search_alg, 221 scheduler=scheduler, 222 metadata_checkpoint_dir=checkpoint_dir, 223 launch_web_server=with_server, 224 server_port=server_port, 225 verbose=bool(verbose > 1), 226 queue_trials=queue_trials, 227 reuse_actors=reuse_actors, 228 trial_executor=trial_executor) 229 230 if verbose: 231 print(runner.debug_string(max_debug=99999)) 232 233 last_debug = 0 234 while not runner.is_finished(): 235 runner.step() 236 if time.time() - last_debug > DEBUG_PRINT_INTERVAL: 237 if verbose: 238 print(runner.debug_string()) 239 last_debug = time.time() 240 241 if verbose: 242 print(runner.debug_string(max_debug=99999)) 243 244 wait_for_log_sync() 245 246 errored_trials = [] 247 for trial in runner.get_trials(): 248 if trial.status != Trial.TERMINATED: 249 errored_trials += [trial] 250 251 if errored_trials: 252 if raise_on_failed_trial: 253 raise TuneError("Trials did not complete", errored_trials) 254 else: 255 logger.error("Trials did not complete: %s", errored_trials) 256 257 return runner.get_trials() 258 259 260 def run_experiments(experiments, 261 search_alg=None, 262 scheduler=None, 263 with_server=False, 264 server_port=TuneServer.DEFAULT_PORT, 265 verbose=2, 266 resume=False, 267 queue_trials=False, 268 reuse_actors=False, 269 trial_executor=None, 270 raise_on_failed_trial=True): 271 """Runs and blocks until all trials finish. 272 273 Examples: 274 >>> experiment_spec = Experiment("experiment", my_func) 275 >>> run_experiments(experiments=experiment_spec) 276 277 >>> experiment_spec = {"experiment": {"run": my_func}} 278 >>> run_experiments(experiments=experiment_spec) 279 280 >>> run_experiments( 281 >>> experiments=experiment_spec, 282 >>> scheduler=MedianStoppingRule(...)) 283 284 >>> run_experiments( 285 >>> experiments=experiment_spec, 286 >>> search_alg=SearchAlgorithm(), 287 >>> scheduler=MedianStoppingRule(...)) 288 289 Returns: 290 List of Trial objects, holding data for each executed trial. 291 292 """ 293 # This is important to do this here 294 # because it schematize the experiments 295 # and it conducts the implicit registration. 296 experiments = convert_to_experiment_list(experiments) 297 298 trials = [] 299 for exp in experiments: 300 trials += run( 301 exp, 302 search_alg=search_alg, 303 scheduler=scheduler, 304 with_server=with_server, 305 server_port=server_port, 306 verbose=verbose, 307 resume=resume, 308 queue_trials=queue_trials, 309 reuse_actors=reuse_actors, 310 trial_executor=trial_executor, 311 raise_on_failed_trial=raise_on_failed_trial) 312 return trials 313 [end of python/ray/tune/tune.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
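One detail visible in `_setup_runner` above: a cached actor is only reused when `reuse_actors=True`, the trial is starting from a checkpoint (`reuse_allowed`), and the trainable's `reset_config()` returns True; if `reset_config()` does not return True, the executor raises `AbortTrialExecution`. A rough sketch of a class-based trainable that satisfies that contract is below; the class name, config key, and toy metric are invented for illustration, the checkpointing methods are omitted, and the 0.6.x Trainable API (`_setup`/`_train`) is assumed.

```python
from ray.tune import Trainable


class ToyTrainable(Trainable):
    def _setup(self, config):
        self.lr = config["lr"]
        self.iteration_count = 0

    def _train(self):
        # Toy metric that simply grows with the iteration count.
        self.iteration_count += 1
        return {"mean_accuracy": min(1.0, self.lr * self.iteration_count)}

    def reset_config(self, new_config):
        # Reconfigure the live actor in place and signal success so the
        # executor can hand it to the next trial instead of destroying it.
        self.lr = new_config["lr"]
        self.iteration_count = 0
        self.config = new_config
        return True
```

It could then be launched with something like `tune.run(ToyTrainable, config={"lr": tune.grid_search([0.01, 0.1])}, stop={"mean_accuracy": 0.95}, reuse_actors=True)`.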
ray-project/ray
01699ce4ea52062b8bbf2757ad83da65ae26781f
[tune] EXAMPLE DOESN'T RUN only show failing information from two examples: mnist_pytorch.py and tune_mnist_keras.py <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: > NAME="Ubuntu" > VERSION="16.04.5 LTS (Xenial Xerus)" > ID=ubuntu > ID_LIKE=debian > PRETTY_NAME="Ubuntu 16.04.5 LTS" > VERSION_ID="16.04" > HOME_URL="http://www.ubuntu.com/" > SUPPORT_URL="http://help.ubuntu.com/" > BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" > VERSION_CODENAME=xenial > UBUNTU_CODENAME=xenial - **Ray installed from (source or binary)**: source - **Ray version**: 0.6.5 - **Python version**: Python 3.6.5 - **Exact command to reproduce**: ``` cd ray/python/ray/tune/examples python mnist_pytorch.py ``` pytorch version: > 1.0.0 or ``` cd ray/python/ray/tune/examples python tune_mnist_keras.py ``` TF version: > 1.12.0 keras version: > 2.2.4 <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem Without any modfifications, build Ray from source, try to directly use tune provided examples, but seems most of the examples failed due to the > Destroying actor for trial xxxx. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. Btw, the machine has GPU and the version: > Cuda compilation tools, release 9.0, V9.0.176 However, after trying add `reuse_actors=True` , the same error msg appear. Since the trials are suddenly stopped without any error or exception, could you please help to take a look? @richardliaw @robertnishihara Thanks! ### Source code / logs `python mnist_pytorch.py` > 2019-03-23 23:54:34,913 WARNING worker.py:1406 -- WARNING: Not updating worker name since `setproctitle` is not installed. Install this with `pip install setproctitle` (or ray[debug]) to enable monitoring of worker processes. > 2019-03-23 23:54:34,914 INFO node.py:423 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2019-03-23_23-54-34_52746/logs. > 2019-03-23 23:54:35,021 INFO services.py:363 -- Waiting for redis server at 127.0.0.1:24948 to respond... > 2019-03-23 23:54:35,130 INFO services.py:363 -- Waiting for redis server at 127.0.0.1:39939 to respond... > 2019-03-23 23:54:35,132 INFO services.py:760 -- Starting Redis shard with 10.0 GB max memory. > 2019-03-23 23:54:35,147 WARNING services.py:1236 -- Warning: Capping object memory store to 20.0GB. To increase this further, specify `object_store_memory` when calling ray.init() or ray start. > 2019-03-23 23:54:35,148 INFO services.py:1384 -- Starting the Plasma object store with 20.0 GB memory using /dev/shm. > 2019-03-23 23:54:35,793 INFO tune.py:60 -- Tip: to resume incomplete experiments, pass resume='prompt' or resume=True to run() > 2019-03-23 23:54:35,796 INFO tune.py:211 -- Starting a new experiment. > 2019-03-23 23:54:37,283 WARNING util.py:62 -- The `start_trial` operation took 1.3957560062408447 seconds to complete, which may be a performance bottleneck. > 2019-03-23 23:54:58,442 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_0_lr=0.081371,momentum=0.40185. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. 
> 2019-03-23 23:54:58,754 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_3_lr=0.010086,momentum=0.41713. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,133 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_1_lr=0.028139,momentum=0.40255. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,160 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_7_lr=0.030289,momentum=0.55615. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,299 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_5_lr=0.08914,momentum=0.18464. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,449 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_6_lr=0.066883,momentum=0.68077. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:00,221 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_4_lr=0.059111,momentum=0.82238. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:00,525 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_2_lr=0.063279,momentum=0.43368. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:21,020 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_9_lr=0.084676,momentum=0.45356. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:21,150 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_8_lr=0.051943,momentum=0.6297. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. `python tune_mnist_keras.py` > (pid=57890) 60000 train samples > (pid=57890) 10000 test samples > (pid=57881) x_train shape: (60000, 28, 28, 1) > (pid=57881) 60000 train samples > (pid=57881) 10000 test samples > (pid=57899) x_train shape: (60000, 28, 28, 1) > (pid=57899) 60000 train samples > (pid=57899) 10000 test samples > (pid=57916) x_train shape: (60000, 28, 28, 1) > (pid=57916) 60000 train samples > (pid=57916) 10000 test samples > (pid=57913) x_train shape: (60000, 28, 28, 1) > (pid=57913) 60000 train samples > (pid=57913) 10000 test samples > (pid=57910) x_train shape: (60000, 28, 28, 1) > (pid=57910) 60000 train samples > (pid=57910) 10000 test samples > 2019-03-24 00:09:22,154 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_3_dropout1=0.41208,hidden=53,lr=0.0045996,momentum=0.29457. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:09:23,633 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_9_dropout1=0.78277,hidden=424,lr=0.085855,momentum=0.11821. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:09:28,650 WARNING util.py:62 -- The `experiment_checkpoint` operation took 0.14834022521972656 seconds to complete, which may be a performance bottleneck. 
> 2019-03-24 00:09:36,315 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_1_dropout1=0.77148,hidden=307,lr=0.084435,momentum=0.87804. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:09:37,978 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_4_dropout1=0.71993,hidden=442,lr=0.014533,momentum=0.65771. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:18,199 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_6_dropout1=0.72255,hidden=446,lr=0.086364,momentum=0.86826. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:44,899 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_2_dropout1=0.73158,hidden=107,lr=0.087594,momentum=0.5979. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:48,515 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_0_dropout1=0.2571,hidden=236,lr=0.0083709,momentum=0.47214. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:51,434 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_7_dropout1=0.47593,hidden=218,lr=0.067242,momentum=0.85505. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:54,745 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_8_dropout1=0.47459,hidden=383,lr=0.094025,momentum=0.39063. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:56,552 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_5_dropout1=0.5431,hidden=429,lr=0.031262,momentum=0.61523. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads.
2019-03-24T04:37:46Z
<patch> diff --git a/python/ray/tune/examples/mnist_pytorch.py b/python/ray/tune/examples/mnist_pytorch.py --- a/python/ray/tune/examples/mnist_pytorch.py +++ b/python/ray/tune/examples/mnist_pytorch.py @@ -171,7 +171,6 @@ def test(): tune.run( "TRAIN_FN", name="exp", - verbose=0, scheduler=sched, **{ "stop": { diff --git a/python/ray/tune/examples/mnist_pytorch_trainable.py b/python/ray/tune/examples/mnist_pytorch_trainable.py --- a/python/ray/tune/examples/mnist_pytorch_trainable.py +++ b/python/ray/tune/examples/mnist_pytorch_trainable.py @@ -179,7 +179,6 @@ def _restore(self, checkpoint_path): time_attr="training_iteration", reward_attr="neg_mean_loss") tune.run( TrainMNIST, - verbose=0, scheduler=sched, **{ "stop": { diff --git a/python/ray/tune/examples/tune_mnist_keras.py b/python/ray/tune/examples/tune_mnist_keras.py --- a/python/ray/tune/examples/tune_mnist_keras.py +++ b/python/ray/tune/examples/tune_mnist_keras.py @@ -183,7 +183,6 @@ def create_parser(): tune.run( "TRAIN_FN", name="exp", - verbose=0, scheduler=sched, **{ "stop": { </patch>
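The diff above only removes the hard-coded `verbose=0` from the three example scripts, so Tune's default console output, including the reason a trial was stopped or errored, is no longer suppressed; it does not change the training logic itself. Below is a minimal stand-alone sketch of the same call shape, not code from the Ray repository: it assumes a 0.6.x-era Tune API, and the trainable, metric values, stopping criterion, and `lr` setting are illustrative stand-ins.

```python
# Hypothetical miniature of the patched examples (assumed Ray 0.6.x-era API).
# With verbose left at its default, trial status and failure reasons are
# printed instead of being silenced by verbose=0.
import ray
from ray import tune


def train_fn(config, reporter):
    # Placeholder training loop; the real examples train MNIST models.
    acc = 0.0
    for step in range(10):
        acc += config["lr"]
        reporter(timesteps_total=step, mean_accuracy=min(acc, 1.0))


ray.init()
tune.register_trainable("TRAIN_FN", train_fn)
tune.run(
    "TRAIN_FN",
    name="exp",
    stop={"mean_accuracy": 0.95},
    config={"lr": 0.1},
    num_samples=2,
    # reuse_actors=True is the knob the log message suggests; it only reduces
    # actor-creation overhead and does not by itself fix failing trials.
)
```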
[]
[]
pyca__cryptography-3638
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Update release automation for new wheel builder Once #3636 is merged we need to update the release automation to trigger the new wheel builder and download the artifacts. </issue> <code> [start of README.rst] 1 pyca/cryptography 2 ================= 3 4 .. image:: https://img.shields.io/pypi/v/cryptography.svg 5 :target: https://pypi.python.org/pypi/cryptography/ 6 :alt: Latest Version 7 8 .. image:: https://readthedocs.org/projects/cryptography/badge/?version=latest 9 :target: https://cryptography.io 10 :alt: Latest Docs 11 12 .. image:: https://travis-ci.org/pyca/cryptography.svg?branch=master 13 :target: https://travis-ci.org/pyca/cryptography 14 15 .. image:: https://codecov.io/github/pyca/cryptography/coverage.svg?branch=master 16 :target: https://codecov.io/github/pyca/cryptography?branch=master 17 18 19 ``cryptography`` is a package which provides cryptographic recipes and 20 primitives to Python developers. Our goal is for it to be your "cryptographic 21 standard library". It supports Python 2.6-2.7, Python 3.3+, and PyPy 5.3+. 22 23 ``cryptography`` includes both high level recipes and low level interfaces to 24 common cryptographic algorithms such as symmetric ciphers, message digests, and 25 key derivation functions. For example, to encrypt something with 26 ``cryptography``'s high level symmetric encryption recipe: 27 28 .. code-block:: pycon 29 30 >>> from cryptography.fernet import Fernet 31 >>> # Put this somewhere safe! 32 >>> key = Fernet.generate_key() 33 >>> f = Fernet(key) 34 >>> token = f.encrypt(b"A really secret message. Not for prying eyes.") 35 >>> token 36 '...' 37 >>> f.decrypt(token) 38 'A really secret message. Not for prying eyes.' 39 40 You can find more information in the `documentation`_. 41 42 You can install ``cryptography`` with: 43 44 .. code-block:: console 45 46 $ pip install cryptography 47 48 For full details see `the installation documentation`_. 49 50 Discussion 51 ~~~~~~~~~~ 52 53 If you run into bugs, you can file them in our `issue tracker`_. 54 55 We maintain a `cryptography-dev`_ mailing list for development discussion. 56 57 You can also join ``#cryptography-dev`` on Freenode to ask questions or get 58 involved. 59 60 61 .. _`documentation`: https://cryptography.io/ 62 .. _`the installation documentation`: https://cryptography.io/en/latest/installation/ 63 .. _`issue tracker`: https://github.com/pyca/cryptography/issues 64 .. _`cryptography-dev`: https://mail.python.org/mailman/listinfo/cryptography-dev 65 [end of README.rst] [start of docs/conf.py] 1 # -*- coding: utf-8 -*- 2 3 # This file is dual licensed under the terms of the Apache License, Version 4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository 5 # for complete details. 6 7 # 8 # Cryptography documentation build configuration file, created by 9 # sphinx-quickstart on Tue Aug 6 19:19:14 2013. 10 # 11 # This file is execfile()d with the current directory set to its containing dir 12 # 13 # Note that not all possible configuration values are present in this 14 # autogenerated file. 15 # 16 # All configuration values have a default; values that are commented out 17 # serve to show the default. 
18 19 from __future__ import absolute_import, division, print_function 20 21 import os 22 import sys 23 24 try: 25 import sphinx_rtd_theme 26 except ImportError: 27 sphinx_rtd_theme = None 28 29 try: 30 from sphinxcontrib import spelling 31 except ImportError: 32 spelling = None 33 34 35 # If extensions (or modules to document with autodoc) are in another directory, 36 # add these directories to sys.path here. If the directory is relative to the 37 # documentation root, use os.path.abspath to make it absolute, like shown here. 38 sys.path.insert(0, os.path.abspath('.')) 39 40 # -- General configuration ---------------------------------------------------- 41 42 # If your documentation needs a minimal Sphinx version, state it here. 43 # needs_sphinx = '1.0' 44 45 # Add any Sphinx extension module names here, as strings. They can be 46 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 47 extensions = [ 48 'sphinx.ext.autodoc', 49 'sphinx.ext.doctest', 50 'sphinx.ext.intersphinx', 51 'sphinx.ext.viewcode', 52 'cryptography-docs', 53 ] 54 55 if spelling is not None: 56 extensions.append('sphinxcontrib.spelling') 57 58 # Add any paths that contain templates here, relative to this directory. 59 templates_path = ['_templates'] 60 61 nitpicky = True 62 63 # The suffix of source filenames. 64 source_suffix = '.rst' 65 66 # The encoding of source files. 67 # source_encoding = 'utf-8-sig' 68 69 # The master toctree document. 70 master_doc = 'index' 71 72 # General information about the project. 73 project = 'Cryptography' 74 copyright = '2013-2017, Individual Contributors' 75 76 # The version info for the project you're documenting, acts as replacement for 77 # |version| and |release|, also used in various other places throughout the 78 # built documents. 79 # 80 81 base_dir = os.path.join(os.path.dirname(__file__), os.pardir) 82 about = {} 83 with open(os.path.join(base_dir, "src", "cryptography", "__about__.py")) as f: 84 exec(f.read(), about) 85 86 version = release = about["__version__"] 87 88 # The language for content autogenerated by Sphinx. Refer to documentation 89 # for a list of supported languages. 90 # language = None 91 92 # There are two options for replacing |today|: either, you set today to some 93 # non-false value, then it is used: 94 # today = '' 95 # Else, today_fmt is used as the format for a strftime call. 96 # today_fmt = '%B %d, %Y' 97 98 # List of patterns, relative to source directory, that match files and 99 # directories to ignore when looking for source files. 100 exclude_patterns = ['_build'] 101 102 # The reST default role (used for this markup: `text`) to use for all documents 103 # default_role = None 104 105 # If true, '()' will be appended to :func: etc. cross-reference text. 106 # add_function_parentheses = True 107 108 # If true, the current module name will be prepended to all description 109 # unit titles (such as .. function::). 110 # add_module_names = True 111 112 # If true, sectionauthor and moduleauthor directives will be shown in the 113 # output. They are ignored by default. 114 # show_authors = False 115 116 # The name of the Pygments (syntax highlighting) style to use. 117 pygments_style = 'sphinx' 118 119 # -- Options for HTML output -------------------------------------------------- 120 121 # The theme to use for HTML and HTML Help pages. See the documentation for 122 # a list of builtin themes. 
123 124 if sphinx_rtd_theme: 125 html_theme = "sphinx_rtd_theme" 126 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] 127 else: 128 html_theme = "default" 129 130 # Add any paths that contain custom static files (such as style sheets) here, 131 # relative to this directory. They are copied after the builtin static files, 132 # so a file named "default.css" will overwrite the builtin "default.css". 133 html_static_path = ['_static'] 134 135 # Output file base name for HTML help builder. 136 htmlhelp_basename = 'Cryptographydoc' 137 138 139 # -- Options for LaTeX output ------------------------------------------------- 140 141 latex_elements = { 142 } 143 144 # Grouping the document tree into LaTeX files. List of tuples 145 # (source start file, target name, title, author, documentclass [howto/manual]) 146 latex_documents = [ 147 ('index', 'Cryptography.tex', 'Cryptography Documentation', 148 'Individual Contributors', 'manual'), 149 ] 150 151 # -- Options for manual page output ------------------------------------------- 152 153 # One entry per manual page. List of tuples 154 # (source start file, name, description, authors, manual section). 155 man_pages = [ 156 ('index', 'cryptography', 'Cryptography Documentation', 157 ['Individual Contributors'], 1) 158 ] 159 160 # -- Options for Texinfo output ----------------------------------------------- 161 162 # Grouping the document tree into Texinfo files. List of tuples 163 # (source start file, target name, title, author, 164 # dir menu entry, description, category) 165 texinfo_documents = [ 166 ('index', 'Cryptography', 'Cryptography Documentation', 167 'Individual Contributors', 'Cryptography', 168 'One line description of project.', 169 'Miscellaneous'), 170 ] 171 172 # Example configuration for intersphinx: refer to the Python standard library. 173 intersphinx_mapping = {'https://docs.python.org/3': None} 174 175 epub_theme = 'epub' 176 177 # Retry requests in the linkcheck builder so that we're resillient against 178 # transient network errors. 179 linkcheck_retries = 2 180 181 linkcheck_ignore = [ 182 # Certificate is issued by a Japanese CA that isn't publicly trusted 183 "https://www.cryptrec.go.jp", 184 ] 185 [end of docs/conf.py] [start of release.py] 1 # This file is dual licensed under the terms of the Apache License, Version 2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository 3 # for complete details. 4 5 from __future__ import absolute_import, division, print_function 6 7 import getpass 8 import io 9 import os 10 import subprocess 11 import time 12 13 import click 14 15 from clint.textui.progress import Bar as ProgressBar 16 17 import requests 18 19 20 JENKINS_URL = "https://jenkins.cryptography.io/job/cryptography-wheel-builder" 21 22 23 def run(*args, **kwargs): 24 kwargs.setdefault("stderr", subprocess.STDOUT) 25 subprocess.check_output(list(args), **kwargs) 26 27 28 def wait_for_build_completed(session): 29 # Wait 20 seconds before actually checking if the build is complete, to 30 # ensure that it had time to really start. 
31 time.sleep(20) 32 while True: 33 response = session.get( 34 "{0}/lastBuild/api/json/".format(JENKINS_URL), 35 headers={ 36 "Accept": "application/json", 37 } 38 ) 39 response.raise_for_status() 40 if not response.json()["building"]: 41 assert response.json()["result"] == "SUCCESS" 42 break 43 time.sleep(0.1) 44 45 46 def download_artifacts(session): 47 response = session.get( 48 "{0}/lastBuild/api/json/".format(JENKINS_URL), 49 headers={ 50 "Accept": "application/json" 51 } 52 ) 53 response.raise_for_status() 54 assert not response.json()["building"] 55 assert response.json()["result"] == "SUCCESS" 56 57 paths = [] 58 59 last_build_number = response.json()["number"] 60 for run in response.json()["runs"]: 61 if run["number"] != last_build_number: 62 print( 63 "Skipping {0} as it is not from the latest build ({1})".format( 64 run["url"], last_build_number 65 ) 66 ) 67 continue 68 69 response = session.get( 70 run["url"] + "api/json/", 71 headers={ 72 "Accept": "application/json", 73 } 74 ) 75 response.raise_for_status() 76 for artifact in response.json()["artifacts"]: 77 response = session.get( 78 "{0}artifact/{1}".format(run["url"], artifact["relativePath"]), 79 stream=True 80 ) 81 assert response.headers["content-length"] 82 print("Downloading {0}".format(artifact["fileName"])) 83 bar = ProgressBar( 84 expected_size=int(response.headers["content-length"]), 85 filled_char="=" 86 ) 87 content = io.BytesIO() 88 for data in response.iter_content(chunk_size=8192): 89 content.write(data) 90 bar.show(content.tell()) 91 assert bar.expected_size == content.tell() 92 bar.done() 93 out_path = os.path.join( 94 os.path.dirname(__file__), 95 "dist", 96 artifact["fileName"], 97 ) 98 with open(out_path, "wb") as f: 99 f.write(content.getvalue()) 100 paths.append(out_path) 101 return paths 102 103 104 @click.command() 105 @click.argument("version") 106 def release(version): 107 """ 108 ``version`` should be a string like '0.4' or '1.0'. 109 """ 110 run("git", "tag", "-s", version, "-m", "{0} release".format(version)) 111 run("git", "push", "--tags") 112 113 run("python", "setup.py", "sdist") 114 run("python", "setup.py", "sdist", "bdist_wheel", cwd="vectors/") 115 116 run( 117 "twine", "upload", "-s", "dist/cryptography-{0}*".format(version), 118 "vectors/dist/cryptography_vectors-{0}*".format(version), shell=True 119 ) 120 121 session = requests.Session() 122 123 # This tells the CDN to delete the cached response for the URL. We do this 124 # so that the Jenkins builders will see the new sdist immediately when they 125 # go to build the wheels. 126 response = session.request( 127 "PURGE", "https://pypi.python.org/simple/cryptography/" 128 ) 129 response.raise_for_status() 130 131 username = getpass.getpass("Input the GitHub/Jenkins username: ") 132 token = getpass.getpass("Input the Jenkins token: ") 133 response = session.post( 134 "{0}/build".format(JENKINS_URL), 135 auth=requests.auth.HTTPBasicAuth( 136 username, token 137 ), 138 params={ 139 "cause": "Building wheels for {0}".format(version) 140 } 141 ) 142 response.raise_for_status() 143 wait_for_build_completed(session) 144 paths = download_artifacts(session) 145 run("twine", "upload", " ".join(paths)) 146 147 148 if __name__ == "__main__": 149 release() 150 [end of release.py] [start of src/_cffi_src/openssl/crypto.py] 1 # This file is dual licensed under the terms of the Apache License, Version 2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository 3 # for complete details. 
4 5 from __future__ import absolute_import, division, print_function 6 7 INCLUDES = """ 8 #include <openssl/crypto.h> 9 """ 10 11 TYPES = """ 12 static const long Cryptography_HAS_LOCKING_CALLBACKS; 13 static const long Cryptography_HAS_MEM_FUNCTIONS; 14 15 static const int SSLEAY_VERSION; 16 static const int SSLEAY_CFLAGS; 17 static const int SSLEAY_PLATFORM; 18 static const int SSLEAY_DIR; 19 static const int SSLEAY_BUILT_ON; 20 static const int OPENSSL_VERSION; 21 static const int OPENSSL_CFLAGS; 22 static const int OPENSSL_BUILT_ON; 23 static const int OPENSSL_PLATFORM; 24 static const int OPENSSL_DIR; 25 static const int CRYPTO_MEM_CHECK_ON; 26 static const int CRYPTO_MEM_CHECK_OFF; 27 static const int CRYPTO_MEM_CHECK_ENABLE; 28 static const int CRYPTO_MEM_CHECK_DISABLE; 29 static const int CRYPTO_LOCK; 30 static const int CRYPTO_UNLOCK; 31 static const int CRYPTO_READ; 32 static const int CRYPTO_LOCK_SSL; 33 """ 34 35 FUNCTIONS = """ 36 int CRYPTO_mem_ctrl(int); 37 """ 38 39 MACROS = """ 40 /* CRYPTO_cleanup_all_ex_data became a macro in 1.1.0 */ 41 void CRYPTO_cleanup_all_ex_data(void); 42 43 /* as of 1.1.0 OpenSSL does its own locking *angelic chorus*. These functions 44 have become macros that are no ops */ 45 int CRYPTO_num_locks(void); 46 void CRYPTO_set_locking_callback(void(*)(int, int, const char *, int)); 47 void (*CRYPTO_get_locking_callback(void))(int, int, const char *, int); 48 49 /* SSLeay was removed in 1.1.0 */ 50 unsigned long SSLeay(void); 51 const char *SSLeay_version(int); 52 /* these functions were added to replace the SSLeay functions in 1.1.0 */ 53 unsigned long OpenSSL_version_num(void); 54 const char *OpenSSL_version(int); 55 56 /* this is a macro in 1.1.0 */ 57 void *OPENSSL_malloc(size_t); 58 void OPENSSL_free(void *); 59 60 /* This was removed in 1.1.0 */ 61 void CRYPTO_lock(int, int, const char *, int); 62 63 /* Signature changed significantly in 1.1.0, only expose there for sanity */ 64 int Cryptography_CRYPTO_set_mem_functions( 65 void *(*)(size_t, const char *, int), 66 void *(*)(void *, size_t, const char *, int), 67 void (*)(void *, const char *, int)); 68 69 void *Cryptography_malloc_wrapper(size_t, const char *, int); 70 void *Cryptography_realloc_wrapper(void *, size_t, const char *, int); 71 void Cryptography_free_wrapper(void *, const char *, int); 72 """ 73 74 CUSTOMIZATIONS = """ 75 /* In 1.1.0 SSLeay has finally been retired. We bidirectionally define the 76 values so you can use either one. 
This is so we can use the new function 77 names no matter what OpenSSL we're running on, but users on older pyOpenSSL 78 releases won't see issues if they're running OpenSSL 1.1.0 */ 79 #if !defined(SSLEAY_VERSION) 80 # define SSLeay OpenSSL_version_num 81 # define SSLeay_version OpenSSL_version 82 # define SSLEAY_VERSION_NUMBER OPENSSL_VERSION_NUMBER 83 # define SSLEAY_VERSION OPENSSL_VERSION 84 # define SSLEAY_CFLAGS OPENSSL_CFLAGS 85 # define SSLEAY_BUILT_ON OPENSSL_BUILT_ON 86 # define SSLEAY_PLATFORM OPENSSL_PLATFORM 87 # define SSLEAY_DIR OPENSSL_DIR 88 #endif 89 #if !defined(OPENSSL_VERSION) 90 # define OpenSSL_version_num SSLeay 91 # define OpenSSL_version SSLeay_version 92 # define OPENSSL_VERSION SSLEAY_VERSION 93 # define OPENSSL_CFLAGS SSLEAY_CFLAGS 94 # define OPENSSL_BUILT_ON SSLEAY_BUILT_ON 95 # define OPENSSL_PLATFORM SSLEAY_PLATFORM 96 # define OPENSSL_DIR SSLEAY_DIR 97 #endif 98 #if CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 99 static const long Cryptography_HAS_LOCKING_CALLBACKS = 1; 100 #else 101 static const long Cryptography_HAS_LOCKING_CALLBACKS = 0; 102 #if !defined(CRYPTO_LOCK) 103 static const long CRYPTO_LOCK = 0; 104 #endif 105 #if !defined(CRYPTO_UNLOCK) 106 static const long CRYPTO_UNLOCK = 0; 107 #endif 108 #if !defined(CRYPTO_READ) 109 static const long CRYPTO_READ = 0; 110 #endif 111 #if !defined(CRYPTO_LOCK_SSL) 112 static const long CRYPTO_LOCK_SSL = 0; 113 #endif 114 void (*CRYPTO_lock)(int, int, const char *, int) = NULL; 115 #endif 116 117 #if CRYPTOGRAPHY_OPENSSL_LESS_THAN_110 118 /* This function has a significantly different signature pre-1.1.0. since it is 119 * for testing only, we don't bother to expose it on older OpenSSLs. 120 */ 121 static const long Cryptography_HAS_MEM_FUNCTIONS = 0; 122 int (*Cryptography_CRYPTO_set_mem_functions)( 123 void *(*)(size_t, const char *, int), 124 void *(*)(void *, size_t, const char *, int), 125 void (*)(void *, const char *, int)) = NULL; 126 127 #else 128 static const long Cryptography_HAS_MEM_FUNCTIONS = 1; 129 130 int Cryptography_CRYPTO_set_mem_functions( 131 void *(*m)(size_t, const char *, int), 132 void *(*r)(void *, size_t, const char *, int), 133 void (*f)(void *, const char *, int) 134 ) { 135 return CRYPTO_set_mem_functions(m, r, f); 136 } 137 #endif 138 139 void *Cryptography_malloc_wrapper(size_t size, const char *path, int line) { 140 return malloc(size); 141 } 142 143 void *Cryptography_realloc_wrapper(void *ptr, size_t size, const char *path, 144 int line) { 145 return realloc(ptr, size); 146 } 147 148 void Cryptography_free_wrapper(void *ptr, const char *path, int line) { 149 return free(ptr); 150 } 151 """ 152 [end of src/_cffi_src/openssl/crypto.py] [start of src/cryptography/__init__.py] 1 # This file is dual licensed under the terms of the Apache License, Version 2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository 3 # for complete details. 4 5 from __future__ import absolute_import, division, print_function 6 7 import sys 8 import warnings 9 10 from cryptography.__about__ import ( 11 __author__, __copyright__, __email__, __license__, __summary__, __title__, 12 __uri__, __version__ 13 ) 14 15 16 __all__ = [ 17 "__title__", "__summary__", "__uri__", "__version__", "__author__", 18 "__email__", "__license__", "__copyright__", 19 ] 20 21 if sys.version_info[:2] == (2, 6): 22 warnings.warn( 23 "Python 2.6 is no longer supported by the Python core team, please " 24 "upgrade your Python. 
A future version of cryptography will drop " 25 "support for Python 2.6", 26 DeprecationWarning 27 ) 28 if sys.version_info[:2] == (3, 3): 29 warnings.warn( 30 "Python 3.3 support will be dropped in the next release of" 31 "cryptography. Please upgrade your Python.", 32 DeprecationWarning, 33 ) 34 [end of src/cryptography/__init__.py] [start of src/cryptography/hazmat/bindings/openssl/binding.py] 1 # This file is dual licensed under the terms of the Apache License, Version 2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository 3 # for complete details. 4 5 from __future__ import absolute_import, division, print_function 6 7 import collections 8 import threading 9 import types 10 11 from cryptography import utils 12 from cryptography.exceptions import InternalError 13 from cryptography.hazmat.bindings._openssl import ffi, lib 14 from cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES 15 16 _OpenSSLErrorWithText = collections.namedtuple( 17 "_OpenSSLErrorWithText", ["code", "lib", "func", "reason", "reason_text"] 18 ) 19 20 21 class _OpenSSLError(object): 22 def __init__(self, code, lib, func, reason): 23 self._code = code 24 self._lib = lib 25 self._func = func 26 self._reason = reason 27 28 def _lib_reason_match(self, lib, reason): 29 return lib == self.lib and reason == self.reason 30 31 code = utils.read_only_property("_code") 32 lib = utils.read_only_property("_lib") 33 func = utils.read_only_property("_func") 34 reason = utils.read_only_property("_reason") 35 36 37 def _consume_errors(lib): 38 errors = [] 39 while True: 40 code = lib.ERR_get_error() 41 if code == 0: 42 break 43 44 err_lib = lib.ERR_GET_LIB(code) 45 err_func = lib.ERR_GET_FUNC(code) 46 err_reason = lib.ERR_GET_REASON(code) 47 48 errors.append(_OpenSSLError(code, err_lib, err_func, err_reason)) 49 50 return errors 51 52 53 def _openssl_assert(lib, ok): 54 if not ok: 55 errors = _consume_errors(lib) 56 errors_with_text = [] 57 for err in errors: 58 err_text_reason = ffi.string( 59 lib.ERR_error_string(err.code, ffi.NULL) 60 ) 61 errors_with_text.append( 62 _OpenSSLErrorWithText( 63 err.code, err.lib, err.func, err.reason, err_text_reason 64 ) 65 ) 66 67 raise InternalError( 68 "Unknown OpenSSL error. This error is commonly encountered when " 69 "another library is not cleaning up the OpenSSL error stack. If " 70 "you are using cryptography with another library that uses " 71 "OpenSSL try disabling it before reporting a bug. Otherwise " 72 "please file an issue at https://github.com/pyca/cryptography/" 73 "issues with information on how to reproduce " 74 "this. ({0!r})".format(errors_with_text), 75 errors_with_text 76 ) 77 78 79 def build_conditional_library(lib, conditional_names): 80 conditional_lib = types.ModuleType("lib") 81 conditional_lib._original_lib = lib 82 excluded_names = set() 83 for condition, names in conditional_names.items(): 84 if not getattr(lib, condition): 85 excluded_names |= set(names) 86 87 for attr in dir(lib): 88 if attr not in excluded_names: 89 setattr(conditional_lib, attr, getattr(lib, attr)) 90 91 return conditional_lib 92 93 94 class Binding(object): 95 """ 96 OpenSSL API wrapper. 97 """ 98 lib = None 99 ffi = ffi 100 _lib_loaded = False 101 _init_lock = threading.Lock() 102 _lock_init_lock = threading.Lock() 103 104 def __init__(self): 105 self._ensure_ffi_initialized() 106 107 @classmethod 108 def _register_osrandom_engine(cls): 109 # Clear any errors extant in the queue before we start. 
In many 110 # scenarios other things may be interacting with OpenSSL in the same 111 # process space and it has proven untenable to assume that they will 112 # reliably clear the error queue. Once we clear it here we will 113 # error on any subsequent unexpected item in the stack. 114 cls.lib.ERR_clear_error() 115 cls._osrandom_engine_id = cls.lib.Cryptography_osrandom_engine_id 116 cls._osrandom_engine_name = cls.lib.Cryptography_osrandom_engine_name 117 result = cls.lib.Cryptography_add_osrandom_engine() 118 _openssl_assert(cls.lib, result in (1, 2)) 119 120 @classmethod 121 def _ensure_ffi_initialized(cls): 122 with cls._init_lock: 123 if not cls._lib_loaded: 124 cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES) 125 cls._lib_loaded = True 126 # initialize the SSL library 127 cls.lib.SSL_library_init() 128 # adds all ciphers/digests for EVP 129 cls.lib.OpenSSL_add_all_algorithms() 130 # loads error strings for libcrypto and libssl functions 131 cls.lib.SSL_load_error_strings() 132 cls._register_osrandom_engine() 133 134 @classmethod 135 def init_static_locks(cls): 136 with cls._lock_init_lock: 137 cls._ensure_ffi_initialized() 138 # Use Python's implementation if available, importing _ssl triggers 139 # the setup for this. 140 __import__("_ssl") 141 142 if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL: 143 return 144 145 # If nothing else has setup a locking callback already, we set up 146 # our own 147 res = lib._setup_ssl_threads() 148 _openssl_assert(cls.lib, res == 1) 149 150 151 # OpenSSL is not thread safe until the locks are initialized. We call this 152 # method in module scope so that it executes with the import lock. On 153 # Pythons < 3.4 this import lock is a global lock, which can prevent a race 154 # condition registering the OpenSSL locks. On Python 3.4+ the import lock 155 # is per module so this approach will not work. 156 Binding.init_static_locks() 157 [end of src/cryptography/hazmat/bindings/openssl/binding.py] [start of src/cryptography/hazmat/primitives/asymmetric/__init__.py] 1 # This file is dual licensed under the terms of the Apache License, Version 2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository 3 # for complete details. 4 5 from __future__ import absolute_import, division, print_function 6 7 import abc 8 9 import six 10 11 12 @six.add_metaclass(abc.ABCMeta) 13 class AsymmetricSignatureContext(object): 14 @abc.abstractmethod 15 def update(self, data): 16 """ 17 Processes the provided bytes and returns nothing. 18 """ 19 20 @abc.abstractmethod 21 def finalize(self): 22 """ 23 Returns the signature as bytes. 24 """ 25 26 27 @six.add_metaclass(abc.ABCMeta) 28 class AsymmetricVerificationContext(object): 29 @abc.abstractmethod 30 def update(self, data): 31 """ 32 Processes the provided bytes and returns nothing. 33 """ 34 35 @abc.abstractmethod 36 def verify(self): 37 """ 38 Raises an exception if the bytes provided to update do not match the 39 signature or the signature does not match the public key. 40 """ 41 [end of src/cryptography/hazmat/primitives/asymmetric/__init__.py] [start of src/cryptography/x509/base.py] 1 # This file is dual licensed under the terms of the Apache License, Version 2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository 3 # for complete details. 
4 5 from __future__ import absolute_import, division, print_function 6 7 import abc 8 import datetime 9 import os 10 from enum import Enum 11 12 import six 13 14 from cryptography import utils 15 from cryptography.hazmat.primitives.asymmetric import dsa, ec, rsa 16 from cryptography.x509.extensions import Extension, ExtensionType 17 from cryptography.x509.name import Name 18 19 20 _UNIX_EPOCH = datetime.datetime(1970, 1, 1) 21 22 23 def _convert_to_naive_utc_time(time): 24 """Normalizes a datetime to a naive datetime in UTC. 25 26 time -- datetime to normalize. Assumed to be in UTC if not timezone 27 aware. 28 """ 29 if time.tzinfo is not None: 30 offset = time.utcoffset() 31 offset = offset if offset else datetime.timedelta() 32 return time.replace(tzinfo=None) - offset 33 else: 34 return time 35 36 37 class Version(Enum): 38 v1 = 0 39 v3 = 2 40 41 42 def load_pem_x509_certificate(data, backend): 43 return backend.load_pem_x509_certificate(data) 44 45 46 def load_der_x509_certificate(data, backend): 47 return backend.load_der_x509_certificate(data) 48 49 50 def load_pem_x509_csr(data, backend): 51 return backend.load_pem_x509_csr(data) 52 53 54 def load_der_x509_csr(data, backend): 55 return backend.load_der_x509_csr(data) 56 57 58 def load_pem_x509_crl(data, backend): 59 return backend.load_pem_x509_crl(data) 60 61 62 def load_der_x509_crl(data, backend): 63 return backend.load_der_x509_crl(data) 64 65 66 class InvalidVersion(Exception): 67 def __init__(self, msg, parsed_version): 68 super(InvalidVersion, self).__init__(msg) 69 self.parsed_version = parsed_version 70 71 72 @six.add_metaclass(abc.ABCMeta) 73 class Certificate(object): 74 @abc.abstractmethod 75 def fingerprint(self, algorithm): 76 """ 77 Returns bytes using digest passed. 78 """ 79 80 @abc.abstractproperty 81 def serial_number(self): 82 """ 83 Returns certificate serial number 84 """ 85 86 @abc.abstractproperty 87 def version(self): 88 """ 89 Returns the certificate version 90 """ 91 92 @abc.abstractmethod 93 def public_key(self): 94 """ 95 Returns the public key 96 """ 97 98 @abc.abstractproperty 99 def not_valid_before(self): 100 """ 101 Not before time (represented as UTC datetime) 102 """ 103 104 @abc.abstractproperty 105 def not_valid_after(self): 106 """ 107 Not after time (represented as UTC datetime) 108 """ 109 110 @abc.abstractproperty 111 def issuer(self): 112 """ 113 Returns the issuer name object. 114 """ 115 116 @abc.abstractproperty 117 def subject(self): 118 """ 119 Returns the subject name object. 120 """ 121 122 @abc.abstractproperty 123 def signature_hash_algorithm(self): 124 """ 125 Returns a HashAlgorithm corresponding to the type of the digest signed 126 in the certificate. 127 """ 128 129 @abc.abstractproperty 130 def signature_algorithm_oid(self): 131 """ 132 Returns the ObjectIdentifier of the signature algorithm. 133 """ 134 135 @abc.abstractproperty 136 def extensions(self): 137 """ 138 Returns an Extensions object. 139 """ 140 141 @abc.abstractproperty 142 def signature(self): 143 """ 144 Returns the signature bytes. 145 """ 146 147 @abc.abstractproperty 148 def tbs_certificate_bytes(self): 149 """ 150 Returns the tbsCertificate payload bytes as defined in RFC 5280. 151 """ 152 153 @abc.abstractmethod 154 def __eq__(self, other): 155 """ 156 Checks equality. 157 """ 158 159 @abc.abstractmethod 160 def __ne__(self, other): 161 """ 162 Checks not equal. 163 """ 164 165 @abc.abstractmethod 166 def __hash__(self): 167 """ 168 Computes a hash. 
169 """ 170 171 @abc.abstractmethod 172 def public_bytes(self, encoding): 173 """ 174 Serializes the certificate to PEM or DER format. 175 """ 176 177 178 @six.add_metaclass(abc.ABCMeta) 179 class CertificateRevocationList(object): 180 @abc.abstractmethod 181 def public_bytes(self, encoding): 182 """ 183 Serializes the CRL to PEM or DER format. 184 """ 185 186 @abc.abstractmethod 187 def fingerprint(self, algorithm): 188 """ 189 Returns bytes using digest passed. 190 """ 191 192 @abc.abstractproperty 193 def signature_hash_algorithm(self): 194 """ 195 Returns a HashAlgorithm corresponding to the type of the digest signed 196 in the certificate. 197 """ 198 199 @abc.abstractproperty 200 def signature_algorithm_oid(self): 201 """ 202 Returns the ObjectIdentifier of the signature algorithm. 203 """ 204 205 @abc.abstractproperty 206 def issuer(self): 207 """ 208 Returns the X509Name with the issuer of this CRL. 209 """ 210 211 @abc.abstractproperty 212 def next_update(self): 213 """ 214 Returns the date of next update for this CRL. 215 """ 216 217 @abc.abstractproperty 218 def last_update(self): 219 """ 220 Returns the date of last update for this CRL. 221 """ 222 223 @abc.abstractproperty 224 def extensions(self): 225 """ 226 Returns an Extensions object containing a list of CRL extensions. 227 """ 228 229 @abc.abstractproperty 230 def signature(self): 231 """ 232 Returns the signature bytes. 233 """ 234 235 @abc.abstractproperty 236 def tbs_certlist_bytes(self): 237 """ 238 Returns the tbsCertList payload bytes as defined in RFC 5280. 239 """ 240 241 @abc.abstractmethod 242 def __eq__(self, other): 243 """ 244 Checks equality. 245 """ 246 247 @abc.abstractmethod 248 def __ne__(self, other): 249 """ 250 Checks not equal. 251 """ 252 253 254 @six.add_metaclass(abc.ABCMeta) 255 class CertificateSigningRequest(object): 256 @abc.abstractmethod 257 def __eq__(self, other): 258 """ 259 Checks equality. 260 """ 261 262 @abc.abstractmethod 263 def __ne__(self, other): 264 """ 265 Checks not equal. 266 """ 267 268 @abc.abstractmethod 269 def __hash__(self): 270 """ 271 Computes a hash. 272 """ 273 274 @abc.abstractmethod 275 def public_key(self): 276 """ 277 Returns the public key 278 """ 279 280 @abc.abstractproperty 281 def subject(self): 282 """ 283 Returns the subject name object. 284 """ 285 286 @abc.abstractproperty 287 def signature_hash_algorithm(self): 288 """ 289 Returns a HashAlgorithm corresponding to the type of the digest signed 290 in the certificate. 291 """ 292 293 @abc.abstractproperty 294 def signature_algorithm_oid(self): 295 """ 296 Returns the ObjectIdentifier of the signature algorithm. 297 """ 298 299 @abc.abstractproperty 300 def extensions(self): 301 """ 302 Returns the extensions in the signing request. 303 """ 304 305 @abc.abstractmethod 306 def public_bytes(self, encoding): 307 """ 308 Encodes the request to PEM or DER format. 309 """ 310 311 @abc.abstractproperty 312 def signature(self): 313 """ 314 Returns the signature bytes. 315 """ 316 317 @abc.abstractproperty 318 def tbs_certrequest_bytes(self): 319 """ 320 Returns the PKCS#10 CertificationRequestInfo bytes as defined in RFC 321 2986. 322 """ 323 324 @abc.abstractproperty 325 def is_signature_valid(self): 326 """ 327 Verifies signature of signing request. 328 """ 329 330 331 @six.add_metaclass(abc.ABCMeta) 332 class RevokedCertificate(object): 333 @abc.abstractproperty 334 def serial_number(self): 335 """ 336 Returns the serial number of the revoked certificate. 
337 """ 338 339 @abc.abstractproperty 340 def revocation_date(self): 341 """ 342 Returns the date of when this certificate was revoked. 343 """ 344 345 @abc.abstractproperty 346 def extensions(self): 347 """ 348 Returns an Extensions object containing a list of Revoked extensions. 349 """ 350 351 352 class CertificateSigningRequestBuilder(object): 353 def __init__(self, subject_name=None, extensions=[]): 354 """ 355 Creates an empty X.509 certificate request (v1). 356 """ 357 self._subject_name = subject_name 358 self._extensions = extensions 359 360 def subject_name(self, name): 361 """ 362 Sets the certificate requestor's distinguished name. 363 """ 364 if not isinstance(name, Name): 365 raise TypeError('Expecting x509.Name object.') 366 if self._subject_name is not None: 367 raise ValueError('The subject name may only be set once.') 368 return CertificateSigningRequestBuilder(name, self._extensions) 369 370 def add_extension(self, extension, critical): 371 """ 372 Adds an X.509 extension to the certificate request. 373 """ 374 if not isinstance(extension, ExtensionType): 375 raise TypeError("extension must be an ExtensionType") 376 377 extension = Extension(extension.oid, critical, extension) 378 379 # TODO: This is quadratic in the number of extensions 380 for e in self._extensions: 381 if e.oid == extension.oid: 382 raise ValueError('This extension has already been set.') 383 return CertificateSigningRequestBuilder( 384 self._subject_name, self._extensions + [extension] 385 ) 386 387 def sign(self, private_key, algorithm, backend): 388 """ 389 Signs the request using the requestor's private key. 390 """ 391 if self._subject_name is None: 392 raise ValueError("A CertificateSigningRequest must have a subject") 393 return backend.create_x509_csr(self, private_key, algorithm) 394 395 396 class CertificateBuilder(object): 397 def __init__(self, issuer_name=None, subject_name=None, 398 public_key=None, serial_number=None, not_valid_before=None, 399 not_valid_after=None, extensions=[]): 400 self._version = Version.v3 401 self._issuer_name = issuer_name 402 self._subject_name = subject_name 403 self._public_key = public_key 404 self._serial_number = serial_number 405 self._not_valid_before = not_valid_before 406 self._not_valid_after = not_valid_after 407 self._extensions = extensions 408 409 def issuer_name(self, name): 410 """ 411 Sets the CA's distinguished name. 412 """ 413 if not isinstance(name, Name): 414 raise TypeError('Expecting x509.Name object.') 415 if self._issuer_name is not None: 416 raise ValueError('The issuer name may only be set once.') 417 return CertificateBuilder( 418 name, self._subject_name, self._public_key, 419 self._serial_number, self._not_valid_before, 420 self._not_valid_after, self._extensions 421 ) 422 423 def subject_name(self, name): 424 """ 425 Sets the requestor's distinguished name. 426 """ 427 if not isinstance(name, Name): 428 raise TypeError('Expecting x509.Name object.') 429 if self._subject_name is not None: 430 raise ValueError('The subject name may only be set once.') 431 return CertificateBuilder( 432 self._issuer_name, name, self._public_key, 433 self._serial_number, self._not_valid_before, 434 self._not_valid_after, self._extensions 435 ) 436 437 def public_key(self, key): 438 """ 439 Sets the requestor's public key (as found in the signing request). 
440 """ 441 if not isinstance(key, (dsa.DSAPublicKey, rsa.RSAPublicKey, 442 ec.EllipticCurvePublicKey)): 443 raise TypeError('Expecting one of DSAPublicKey, RSAPublicKey,' 444 ' or EllipticCurvePublicKey.') 445 if self._public_key is not None: 446 raise ValueError('The public key may only be set once.') 447 return CertificateBuilder( 448 self._issuer_name, self._subject_name, key, 449 self._serial_number, self._not_valid_before, 450 self._not_valid_after, self._extensions 451 ) 452 453 def serial_number(self, number): 454 """ 455 Sets the certificate serial number. 456 """ 457 if not isinstance(number, six.integer_types): 458 raise TypeError('Serial number must be of integral type.') 459 if self._serial_number is not None: 460 raise ValueError('The serial number may only be set once.') 461 if number <= 0: 462 raise ValueError('The serial number should be positive.') 463 464 # ASN.1 integers are always signed, so most significant bit must be 465 # zero. 466 if utils.bit_length(number) >= 160: # As defined in RFC 5280 467 raise ValueError('The serial number should not be more than 159 ' 468 'bits.') 469 return CertificateBuilder( 470 self._issuer_name, self._subject_name, 471 self._public_key, number, self._not_valid_before, 472 self._not_valid_after, self._extensions 473 ) 474 475 def not_valid_before(self, time): 476 """ 477 Sets the certificate activation time. 478 """ 479 if not isinstance(time, datetime.datetime): 480 raise TypeError('Expecting datetime object.') 481 if self._not_valid_before is not None: 482 raise ValueError('The not valid before may only be set once.') 483 time = _convert_to_naive_utc_time(time) 484 if time <= _UNIX_EPOCH: 485 raise ValueError('The not valid before date must be after the unix' 486 ' epoch (1970 January 1).') 487 if self._not_valid_after is not None and time > self._not_valid_after: 488 raise ValueError( 489 'The not valid before date must be before the not valid after ' 490 'date.' 491 ) 492 return CertificateBuilder( 493 self._issuer_name, self._subject_name, 494 self._public_key, self._serial_number, time, 495 self._not_valid_after, self._extensions 496 ) 497 498 def not_valid_after(self, time): 499 """ 500 Sets the certificate expiration time. 501 """ 502 if not isinstance(time, datetime.datetime): 503 raise TypeError('Expecting datetime object.') 504 if self._not_valid_after is not None: 505 raise ValueError('The not valid after may only be set once.') 506 time = _convert_to_naive_utc_time(time) 507 if time <= _UNIX_EPOCH: 508 raise ValueError('The not valid after date must be after the unix' 509 ' epoch (1970 January 1).') 510 if (self._not_valid_before is not None and 511 time < self._not_valid_before): 512 raise ValueError( 513 'The not valid after date must be after the not valid before ' 514 'date.' 515 ) 516 return CertificateBuilder( 517 self._issuer_name, self._subject_name, 518 self._public_key, self._serial_number, self._not_valid_before, 519 time, self._extensions 520 ) 521 522 def add_extension(self, extension, critical): 523 """ 524 Adds an X.509 extension to the certificate. 
525 """ 526 if not isinstance(extension, ExtensionType): 527 raise TypeError("extension must be an ExtensionType") 528 529 extension = Extension(extension.oid, critical, extension) 530 531 # TODO: This is quadratic in the number of extensions 532 for e in self._extensions: 533 if e.oid == extension.oid: 534 raise ValueError('This extension has already been set.') 535 536 return CertificateBuilder( 537 self._issuer_name, self._subject_name, 538 self._public_key, self._serial_number, self._not_valid_before, 539 self._not_valid_after, self._extensions + [extension] 540 ) 541 542 def sign(self, private_key, algorithm, backend): 543 """ 544 Signs the certificate using the CA's private key. 545 """ 546 if self._subject_name is None: 547 raise ValueError("A certificate must have a subject name") 548 549 if self._issuer_name is None: 550 raise ValueError("A certificate must have an issuer name") 551 552 if self._serial_number is None: 553 raise ValueError("A certificate must have a serial number") 554 555 if self._not_valid_before is None: 556 raise ValueError("A certificate must have a not valid before time") 557 558 if self._not_valid_after is None: 559 raise ValueError("A certificate must have a not valid after time") 560 561 if self._public_key is None: 562 raise ValueError("A certificate must have a public key") 563 564 return backend.create_x509_certificate(self, private_key, algorithm) 565 566 567 class CertificateRevocationListBuilder(object): 568 def __init__(self, issuer_name=None, last_update=None, next_update=None, 569 extensions=[], revoked_certificates=[]): 570 self._issuer_name = issuer_name 571 self._last_update = last_update 572 self._next_update = next_update 573 self._extensions = extensions 574 self._revoked_certificates = revoked_certificates 575 576 def issuer_name(self, issuer_name): 577 if not isinstance(issuer_name, Name): 578 raise TypeError('Expecting x509.Name object.') 579 if self._issuer_name is not None: 580 raise ValueError('The issuer name may only be set once.') 581 return CertificateRevocationListBuilder( 582 issuer_name, self._last_update, self._next_update, 583 self._extensions, self._revoked_certificates 584 ) 585 586 def last_update(self, last_update): 587 if not isinstance(last_update, datetime.datetime): 588 raise TypeError('Expecting datetime object.') 589 if self._last_update is not None: 590 raise ValueError('Last update may only be set once.') 591 last_update = _convert_to_naive_utc_time(last_update) 592 if last_update <= _UNIX_EPOCH: 593 raise ValueError('The last update date must be after the unix' 594 ' epoch (1970 January 1).') 595 if self._next_update is not None and last_update > self._next_update: 596 raise ValueError( 597 'The last update date must be before the next update date.' 598 ) 599 return CertificateRevocationListBuilder( 600 self._issuer_name, last_update, self._next_update, 601 self._extensions, self._revoked_certificates 602 ) 603 604 def next_update(self, next_update): 605 if not isinstance(next_update, datetime.datetime): 606 raise TypeError('Expecting datetime object.') 607 if self._next_update is not None: 608 raise ValueError('Last update may only be set once.') 609 next_update = _convert_to_naive_utc_time(next_update) 610 if next_update <= _UNIX_EPOCH: 611 raise ValueError('The last update date must be after the unix' 612 ' epoch (1970 January 1).') 613 if self._last_update is not None and next_update < self._last_update: 614 raise ValueError( 615 'The next update date must be after the last update date.' 
616 ) 617 return CertificateRevocationListBuilder( 618 self._issuer_name, self._last_update, next_update, 619 self._extensions, self._revoked_certificates 620 ) 621 622 def add_extension(self, extension, critical): 623 """ 624 Adds an X.509 extension to the certificate revocation list. 625 """ 626 if not isinstance(extension, ExtensionType): 627 raise TypeError("extension must be an ExtensionType") 628 629 extension = Extension(extension.oid, critical, extension) 630 631 # TODO: This is quadratic in the number of extensions 632 for e in self._extensions: 633 if e.oid == extension.oid: 634 raise ValueError('This extension has already been set.') 635 return CertificateRevocationListBuilder( 636 self._issuer_name, self._last_update, self._next_update, 637 self._extensions + [extension], self._revoked_certificates 638 ) 639 640 def add_revoked_certificate(self, revoked_certificate): 641 """ 642 Adds a revoked certificate to the CRL. 643 """ 644 if not isinstance(revoked_certificate, RevokedCertificate): 645 raise TypeError("Must be an instance of RevokedCertificate") 646 647 return CertificateRevocationListBuilder( 648 self._issuer_name, self._last_update, 649 self._next_update, self._extensions, 650 self._revoked_certificates + [revoked_certificate] 651 ) 652 653 def sign(self, private_key, algorithm, backend): 654 if self._issuer_name is None: 655 raise ValueError("A CRL must have an issuer name") 656 657 if self._last_update is None: 658 raise ValueError("A CRL must have a last update time") 659 660 if self._next_update is None: 661 raise ValueError("A CRL must have a next update time") 662 663 return backend.create_x509_crl(self, private_key, algorithm) 664 665 666 class RevokedCertificateBuilder(object): 667 def __init__(self, serial_number=None, revocation_date=None, 668 extensions=[]): 669 self._serial_number = serial_number 670 self._revocation_date = revocation_date 671 self._extensions = extensions 672 673 def serial_number(self, number): 674 if not isinstance(number, six.integer_types): 675 raise TypeError('Serial number must be of integral type.') 676 if self._serial_number is not None: 677 raise ValueError('The serial number may only be set once.') 678 if number <= 0: 679 raise ValueError('The serial number should be positive') 680 681 # ASN.1 integers are always signed, so most significant bit must be 682 # zero. 
683 if utils.bit_length(number) >= 160: # As defined in RFC 5280 684 raise ValueError('The serial number should not be more than 159 ' 685 'bits.') 686 return RevokedCertificateBuilder( 687 number, self._revocation_date, self._extensions 688 ) 689 690 def revocation_date(self, time): 691 if not isinstance(time, datetime.datetime): 692 raise TypeError('Expecting datetime object.') 693 if self._revocation_date is not None: 694 raise ValueError('The revocation date may only be set once.') 695 time = _convert_to_naive_utc_time(time) 696 if time <= _UNIX_EPOCH: 697 raise ValueError('The revocation date must be after the unix' 698 ' epoch (1970 January 1).') 699 return RevokedCertificateBuilder( 700 self._serial_number, time, self._extensions 701 ) 702 703 def add_extension(self, extension, critical): 704 if not isinstance(extension, ExtensionType): 705 raise TypeError("extension must be an ExtensionType") 706 707 extension = Extension(extension.oid, critical, extension) 708 709 # TODO: This is quadratic in the number of extensions 710 for e in self._extensions: 711 if e.oid == extension.oid: 712 raise ValueError('This extension has already been set.') 713 return RevokedCertificateBuilder( 714 self._serial_number, self._revocation_date, 715 self._extensions + [extension] 716 ) 717 718 def build(self, backend): 719 if self._serial_number is None: 720 raise ValueError("A revoked certificate must have a serial number") 721 if self._revocation_date is None: 722 raise ValueError( 723 "A revoked certificate must have a revocation date" 724 ) 725 726 return backend.create_x509_revoked_certificate(self) 727 728 729 def random_serial_number(): 730 return utils.int_from_bytes(os.urandom(20), "big") >> 1 731 [end of src/cryptography/x509/base.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
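One helper in the `src/cryptography/x509/base.py` excerpt above is easy to misread: `_convert_to_naive_utc_time` shifts an aware datetime by its UTC offset and then drops the tzinfo, so the builders can compare dates against the naive `_UNIX_EPOCH`. The short worked example below restates that logic using only the standard library; the `+02:00` zone and the chosen timestamps are arbitrary illustrations, not values from the code base.

```python
# Worked example of the naive-UTC normalization used by the x509 builders;
# the +02:00 offset and the timestamps are arbitrary.
import datetime


def convert_to_naive_utc_time(time):
    # Same logic as _convert_to_naive_utc_time: subtract the UTC offset and
    # strip tzinfo; naive datetimes are assumed to already be in UTC.
    if time.tzinfo is not None:
        offset = time.utcoffset()
        offset = offset if offset else datetime.timedelta()
        return time.replace(tzinfo=None) - offset
    return time


tz = datetime.timezone(datetime.timedelta(hours=2))
aware = datetime.datetime(2017, 5, 29, 12, 0, tzinfo=tz)

# 12:00 at UTC+2 is 10:00 UTC; naive input passes through unchanged.
assert convert_to_naive_utc_time(aware) == datetime.datetime(2017, 5, 29, 10, 0)
assert convert_to_naive_utc_time(aware).tzinfo is None
```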
pyca/cryptography
7bc36865fcdb1057a4d2925d28f688c5590d6eaf
Update release automation for new wheel builder. Once #3636 is merged we need to update the release automation to trigger the new wheel builder and download the artifacts.
2017-05-29T20:21:19Z
<patch> diff --git a/release.py b/release.py --- a/release.py +++ b/release.py @@ -17,7 +17,10 @@ import requests -JENKINS_URL = "https://jenkins.cryptography.io/job/cryptography-wheel-builder" +JENKINS_URL = ( + "https://ci.cryptography.io/job/cryptography-support-jobs/" + "job/wheel-builder" +) def run(*args, **kwargs): @@ -128,14 +131,11 @@ def release(version): ) response.raise_for_status() - username = getpass.getpass("Input the GitHub/Jenkins username: ") token = getpass.getpass("Input the Jenkins token: ") - response = session.post( + response = session.get( "{0}/build".format(JENKINS_URL), - auth=requests.auth.HTTPBasicAuth( - username, token - ), params={ + "token": token, "cause": "Building wheels for {0}".format(version) } ) </patch>
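The patch above points `JENKINS_URL` at the new wheel-builder job and replaces the authenticated POST with a plain GET that passes the job's remote-trigger token as a query parameter, so the GitHub username prompt is no longer needed. Below is a minimal sketch of the new trigger call using `requests`; the URL mirrors the patch, while the token is read interactively and the version in the build cause is just an example value. The later steps, waiting for the build and downloading the artifacts, are unchanged by this diff.

```python
# Sketch of the patched trigger: a GET to <job>/build with the remote-trigger
# token and a human-readable cause passed as query parameters.
import getpass

import requests

JENKINS_URL = (
    "https://ci.cryptography.io/job/cryptography-support-jobs/"
    "job/wheel-builder"
)

session = requests.Session()
token = getpass.getpass("Input the Jenkins token: ")
response = session.get(
    "{0}/build".format(JENKINS_URL),
    params={
        "token": token,
        "cause": "Building wheels for 1.9",  # example version string
    },
)
response.raise_for_status()
```

Because the token is tied to the job's "Trigger builds remotely" setting, it authorizes the build on its own, which is presumably why the patch drops the username prompt.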
[]
[]
conan-io__conan-13841
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [bug] In the CMakeDeps Generator IMPORTED_LOCATION is overridden by IMPORTED_LOCATION${config_suffix} ### Environment details * Operating System+version: Windows 10 * Compiler+version: MSVC 19 * Conan version: 2.0.2 * Python version: 3.10 ### Steps to reproduce 1. Download a dependency (e.g. thrift) with conan using the CMakeDeps generator 2. Have a look at the created cmakedeps_macros.cmake In the function `conan_package_library_targets` line 48 and 49 for windows the correct target properties are set for IMPORTED_IMPLIB and IMPORTED_LOCATION. `set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION ${CONAN_SHARED_FOUND_LIBRARY}) set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_IMPLIB ${CONAN_FOUND_LIBRARY})` Unfortunately those values are superseded by the line 61 where a per config value for IMPORTED_LOCATION_${config_suffix} is set, making the previous setting useless. `set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION${config_suffix} ${CONAN_FOUND_LIBRARY})` 3. Create a cmake project using ` add_custom_command(TARGET mytarget POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy_if_different $<TARGET_RUNTIME_DLLS:mytarget> $<TARGET_FILE_DIR:mytarget>)` 4. Instead of copying the dlls files to the binary folder upon building, the .lib files are copied. ### Logs _No response_ </issue> <code> [start of README.md] 1 <picture> 2 <!-- These are also used for https://github.com/conan-io/.github/blob/main/profile/README.md --> 3 <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/conan-io/conan/develop2/.github/conan2-logo-for-dark.svg"> 4 <source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/conan-io/conan/develop2/.github/conan2-logo-for-light.svg"> 5 <img alt="JFrog | Conan 2.0 Logo" src="https://raw.githubusercontent.com/conan-io/conan/develop2/.github/conan2-logo-with-bg.svg"> 6 </picture> 7 8 # Conan 9 10 Decentralized, open-source (MIT), C/C++ package manager. 11 12 - Homepage: https://conan.io/ 13 - Github: https://github.com/conan-io/conan 14 - Docs: https://docs.conan.io 15 - Slack: https://cpplang.slack.com (#conan channel) 16 - Twitter: https://twitter.com/conan_io 17 18 19 Conan is a package manager for C and C++ developers: 20 21 - It is fully decentralized. Users can host their packages on their servers, privately. Integrates with Artifactory and Bintray. 22 - Portable. Works across all platforms, including Linux, OSX, Windows (with native and first-class support, WSL, MinGW), 23 Solaris, FreeBSD, embedded and cross-compiling, docker, WSL 24 - Manage binaries. It can create, upload and download binaries for any configuration and platform, 25 even cross-compiling, saving lots of time in development and continuous integration. The binary compatibility can be configured 26 and customized. Manage all your artifacts in the same way on all platforms. 27 - Integrates with any build system, including any proprietary and custom one. Provides tested support for major build systems 28 (CMake, MSBuild, Makefiles, Meson, etc). 29 - Extensible: Its python based recipes, together with extensions points allows for great power and flexibility. 30 - Large and active community, especially in Github (https://github.com/conan-io/conan) and Slack (https://cpplang-inviter.cppalliance.org/ #conan channel). 
31 This community also creates and maintains packages in ConanCenter and Bincrafters repositories in Bintray. 32 - Stable. Used in production by many companies, since 1.0 there is a commitment not to break package recipes and documented behavior. 33 34 35 This is the **developer/maintainer** documentation. For user documentation, go to https://docs.conan.io 36 37 38 | **develop2** | 39 |-------------------------| 40 | [![Build Status Develop](https://ci.conan.io/buildStatus/icon?job=ConanTestSuite/develop)](https://ci.conan.io/blue/organizations/jenkins/ConanTestSuitev2/activity) | 41 42 43 44 ## Setup 45 46 You can run Conan from source in Windows, MacOS, and Linux: 47 48 - **Install pip following** [pip docs](https://pip.pypa.io/en/stable/installation/). 49 50 - **Clone Conan repository:** 51 52 ```bash 53 $ git clone https://github.com/conan-io/conan.git conan-io 54 ``` 55 56 > **Note**: repository directory name matters, some directories are known to be problematic to run tests (e.g. `conan`). `conan-io` directory name was tested and guaranteed to be working. 57 58 - **Install in editable mode** 59 60 ```bash 61 $ cd conan-io && sudo pip install -e . 62 ``` 63 64 If you are in Windows, using ``sudo`` is not required. 65 66 - **You are ready, try to run Conan:** 67 68 ```bash 69 $ conan --help 70 71 Consumer commands 72 install Installs the requirements specified in a recipe (conanfile.py or conanfile.txt). 73 ... 74 75 Conan commands. Type "conan <command> -h" for help 76 ``` 77 78 ## Contributing to the project 79 80 81 Feedback and contribution are always welcome in this project. 82 Please read our [contributing guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md). 83 Also, if you plan to contribute, please add some testing for your changes. You can read the [Conan 84 tests guidelines section](https://github.com/conan-io/conan/blob/develop/conans/test/README.md) for 85 some advise on how to write tests for Conan. 86 87 ### Running the tests 88 89 90 **Install python requirements** 91 92 ```bash 93 $ python -m pip install -r conans/requirements_server.txt 94 $ python -m pip install -r conans/requirements_dev.txt 95 ``` 96 97 If you are not Windows and you are not using a python virtual environment, you will need to run these 98 commands using `sudo`. 99 100 Before you can run the tests, you need to set a few environment variables first. 101 102 ```bash 103 $ export PYTHONPATH=$PYTHONPATH:$(pwd) 104 ``` 105 106 On Windows it would be (while being in the Conan root directory): 107 108 ```bash 109 $ set PYTHONPATH=. 110 ``` 111 112 Conan test suite defines and configure some required tools (CMake, Ninja, etc) in the 113 ``conftest.py`` and allows to define a custom ``conftest_user.py``. 114 Some specific versions, like cmake>=3.15 are necessary. 115 116 117 You can run the tests like this: 118 119 ```bash 120 $ python -m pytest . 121 ``` 122 123 A few minutes later it should print ``OK``: 124 125 ```bash 126 ............................................................................................ 127 ---------------------------------------------------------------------- 128 Ran 146 tests in 50.993s 129 130 OK 131 ``` 132 133 To run specific tests, you can specify the test name too, something like: 134 135 ```bash 136 $ python -m pytest conans/test/unittests/client/cmd/export_test.py::ExportTest::test_export_warning -s 137 ``` 138 139 The `-s` argument can be useful to see some output that otherwise is captured by pytest. 
140 141 Also, you can run tests against an instance of Artifactory. Those tests should add the attribute 142 `artifactory_ready`. 143 144 ```bash 145 $ python -m pytest . -m artifactory_ready 146 ``` 147 148 Some environment variables have to be defined to run them. For example, for an 149 Artifactory instance that is running on the localhost with default user and password configured, the 150 variables could take the values: 151 152 ```bash 153 $ export CONAN_TEST_WITH_ARTIFACTORY=1 154 $ export ARTIFACTORY_DEFAULT_URL=http://localhost:8081/artifactory 155 $ export ARTIFACTORY_DEFAULT_USER=admin 156 $ export ARTIFACTORY_DEFAULT_PASSWORD=password 157 ``` 158 159 `ARTIFACTORY_DEFAULT_URL` is the base url for the Artifactory repo, not one for a specific 160 repository. Running the tests with a real Artifactory instance will create repos on the fly so please 161 use a separate server for testing purposes. 162 163 ## License 164 165 [MIT LICENSE](LICENSE.md) 166 [end of README.md] [start of conan/tools/cmake/cmakedeps/templates/config.py] 1 import textwrap 2 3 from conan.tools.cmake.cmakedeps.templates import CMakeDepsFileTemplate 4 5 """ 6 7 FooConfig.cmake 8 foo-config.cmake 9 10 """ 11 12 13 class ConfigTemplate(CMakeDepsFileTemplate): 14 15 @property 16 def filename(self): 17 if self.generating_module: 18 return "Find{}.cmake".format(self.file_name) 19 else: 20 if self.file_name == self.file_name.lower(): 21 return "{}-config.cmake".format(self.file_name) 22 else: 23 return "{}Config.cmake".format(self.file_name) 24 25 @property 26 def context(self): 27 targets_include = "" if not self.generating_module else "module-" 28 targets_include += "{}Targets.cmake".format(self.file_name) 29 return {"is_module": self.generating_module, 30 "version": self.conanfile.ref.version, 31 "file_name": self.file_name, 32 "pkg_name": self.pkg_name, 33 "config_suffix": self.config_suffix, 34 "check_components_exist": self.cmakedeps.check_components_exist, 35 "targets_include_file": targets_include} 36 37 @property 38 def template(self): 39 return textwrap.dedent("""\ 40 ########## MACROS ########################################################################### 41 ############################################################################################# 42 43 # Requires CMake > 3.15 44 if(${CMAKE_VERSION} VERSION_LESS "3.15") 45 message(FATAL_ERROR "The 'CMakeDeps' generator only works with CMake >= 3.15") 46 endif() 47 48 if({{ file_name }}_FIND_QUIETLY) 49 set({{ file_name }}_MESSAGE_MODE VERBOSE) 50 else() 51 set({{ file_name }}_MESSAGE_MODE STATUS) 52 endif() 53 54 include(${CMAKE_CURRENT_LIST_DIR}/cmakedeps_macros.cmake) 55 include(${CMAKE_CURRENT_LIST_DIR}/{{ targets_include_file }}) 56 include(CMakeFindDependencyMacro) 57 58 check_build_type_defined() 59 60 foreach(_DEPENDENCY {{ '${' + pkg_name + '_FIND_DEPENDENCY_NAMES' + '}' }} ) 61 # Check that we have not already called a find_package with the transitive dependency 62 if(NOT {{ '${_DEPENDENCY}' }}_FOUND) 63 find_dependency({{ '${_DEPENDENCY}' }} REQUIRED ${${_DEPENDENCY}_FIND_MODE}) 64 endif() 65 endforeach() 66 67 set({{ file_name }}_VERSION_STRING "{{ version }}") 68 set({{ file_name }}_INCLUDE_DIRS {{ '${' + pkg_name + '_INCLUDE_DIRS' + config_suffix + '}' }} ) 69 set({{ file_name }}_INCLUDE_DIR {{ '${' + pkg_name + '_INCLUDE_DIRS' + config_suffix + '}' }} ) 70 set({{ file_name }}_LIBRARIES {{ '${' + pkg_name + '_LIBRARIES' + config_suffix + '}' }} ) 71 set({{ file_name }}_DEFINITIONS {{ '${' + pkg_name + '_DEFINITIONS' + config_suffix + '}' 
}} ) 72 73 # Only the first installed configuration is included to avoid the collision 74 foreach(_BUILD_MODULE {{ '${' + pkg_name + '_BUILD_MODULES_PATHS' + config_suffix + '}' }} ) 75 message({% raw %}${{% endraw %}{{ file_name }}_MESSAGE_MODE} "Conan: Including build module from '${_BUILD_MODULE}'") 76 include({{ '${_BUILD_MODULE}' }}) 77 endforeach() 78 79 {% if check_components_exist %} 80 # Check that the specified components in the find_package(Foo COMPONENTS x y z) are there 81 # This is the variable filled by CMake with the requested components in find_package 82 if({{ file_name }}_FIND_COMPONENTS) 83 foreach(_FIND_COMPONENT {{ '${'+file_name+'_FIND_COMPONENTS}' }}) 84 if (TARGET ${_FIND_COMPONENT}) 85 message({% raw %}${{% endraw %}{{ file_name }}_MESSAGE_MODE} "Conan: Component '${_FIND_COMPONENT}' found in package '{{ pkg_name }}'") 86 else() 87 message(FATAL_ERROR "Conan: Component '${_FIND_COMPONENT}' NOT found in package '{{ pkg_name }}'") 88 endif() 89 endforeach() 90 endif() 91 {% endif %} 92 93 {% if is_module %} 94 include(FindPackageHandleStandardArgs) 95 set({{ file_name }}_FOUND 1) 96 set({{ file_name }}_VERSION "{{ version }}") 97 98 find_package_handle_standard_args({{ file_name }} 99 REQUIRED_VARS {{ file_name }}_VERSION 100 VERSION_VAR {{ file_name }}_VERSION) 101 mark_as_advanced({{ file_name }}_FOUND {{ file_name }}_VERSION) 102 {% endif %} 103 """) 104 [end of conan/tools/cmake/cmakedeps/templates/config.py] [start of conan/tools/cmake/cmakedeps/templates/macros.py] 1 import textwrap 2 3 from conan.tools.cmake.cmakedeps.templates import CMakeDepsFileTemplate 4 5 """ 6 7 cmakedeps_macros.cmake 8 9 """ 10 11 12 class MacrosTemplate(CMakeDepsFileTemplate): 13 """cmakedeps_macros.cmake""" 14 15 def __init__(self): 16 super(MacrosTemplate, self).__init__(cmakedeps=None, require=None, conanfile=None) 17 18 @property 19 def filename(self): 20 return "cmakedeps_macros.cmake" 21 22 @property 23 def context(self): 24 return {} 25 26 @property 27 def template(self): 28 return textwrap.dedent(""" 29 macro(conan_find_apple_frameworks FRAMEWORKS_FOUND FRAMEWORKS FRAMEWORKS_DIRS) 30 if(APPLE) 31 foreach(_FRAMEWORK ${FRAMEWORKS}) 32 # https://cmake.org/pipermail/cmake-developers/2017-August/030199.html 33 find_library(CONAN_FRAMEWORK_${_FRAMEWORK}_FOUND NAMES ${_FRAMEWORK} PATHS ${FRAMEWORKS_DIRS} CMAKE_FIND_ROOT_PATH_BOTH) 34 if(CONAN_FRAMEWORK_${_FRAMEWORK}_FOUND) 35 list(APPEND ${FRAMEWORKS_FOUND} ${CONAN_FRAMEWORK_${_FRAMEWORK}_FOUND}) 36 message(VERBOSE "Framework found! 
${FRAMEWORKS_FOUND}") 37 else() 38 message(FATAL_ERROR "Framework library ${_FRAMEWORK} not found in paths: ${FRAMEWORKS_DIRS}") 39 endif() 40 endforeach() 41 endif() 42 endmacro() 43 44 45 function(conan_package_library_targets libraries package_libdir package_bindir library_type 46 is_host_windows deps_target out_libraries_target config_suffix package_name no_soname_mode) 47 set(_out_libraries_target "") 48 49 foreach(_LIBRARY_NAME ${libraries}) 50 find_library(CONAN_FOUND_LIBRARY NAMES ${_LIBRARY_NAME} PATHS ${package_libdir} 51 NO_DEFAULT_PATH NO_CMAKE_FIND_ROOT_PATH) 52 if(CONAN_FOUND_LIBRARY) 53 message(VERBOSE "Conan: Library ${_LIBRARY_NAME} found ${CONAN_FOUND_LIBRARY}") 54 55 # Create a micro-target for each lib/a found 56 # Allow only some characters for the target name 57 string(REGEX REPLACE "[^A-Za-z0-9.+_-]" "_" _LIBRARY_NAME ${_LIBRARY_NAME}) 58 set(_LIB_NAME CONAN_LIB::${package_name}_${_LIBRARY_NAME}${config_suffix}) 59 60 if(is_host_windows AND library_type STREQUAL "SHARED") 61 set(CMAKE_FIND_LIBRARY_SUFFIXES .dll ${CMAKE_FIND_LIBRARY_SUFFIXES}) 62 find_library(CONAN_SHARED_FOUND_LIBRARY NAMES ${_LIBRARY_NAME} PATHS ${package_bindir} 63 NO_DEFAULT_PATH NO_CMAKE_FIND_ROOT_PATH) 64 if(NOT CONAN_SHARED_FOUND_LIBRARY) 65 message(STATUS "Cannot locate shared library: ${_LIBRARY_NAME}") 66 message(DEBUG "DLL library not found, creating UNKNOWN IMPORTED target") 67 if(NOT TARGET ${_LIB_NAME}) 68 add_library(${_LIB_NAME} UNKNOWN IMPORTED) 69 endif() 70 set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION ${CONAN_FOUND_LIBRARY}) 71 else() 72 if(NOT TARGET ${_LIB_NAME}) 73 add_library(${_LIB_NAME} SHARED IMPORTED) 74 endif() 75 set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION ${CONAN_SHARED_FOUND_LIBRARY}) 76 set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_IMPLIB ${CONAN_FOUND_LIBRARY}) 77 message(DEBUG "Found DLL and STATIC at ${CONAN_SHARED_FOUND_LIBRARY}, ${CONAN_FOUND_LIBRARY}") 78 endif() 79 unset(CONAN_SHARED_FOUND_LIBRARY CACHE) 80 else() 81 if(NOT TARGET ${_LIB_NAME}) 82 # library_type can be STATIC, still UNKNOWN (if no package type available in the recipe) or SHARED (but no windows) 83 add_library(${_LIB_NAME} ${library_type} IMPORTED) 84 endif() 85 message(DEBUG "Created target ${_LIB_NAME} ${library_type} IMPORTED") 86 set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION ${CONAN_FOUND_LIBRARY} IMPORTED_NO_SONAME ${no_soname_mode}) 87 endif() 88 # Link library file 89 set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION${config_suffix} ${CONAN_FOUND_LIBRARY}) 90 list(APPEND _out_libraries_target ${_LIB_NAME}) 91 message(VERBOSE "Conan: Found: ${CONAN_FOUND_LIBRARY}") 92 else() 93 message(FATAL_ERROR "Library '${_LIBRARY_NAME}' not found in package. 
If '${_LIBRARY_NAME}' is a system library, declare it with 'cpp_info.system_libs' property") 94 endif() 95 unset(CONAN_FOUND_LIBRARY CACHE) 96 endforeach() 97 98 # Add the dependencies target for all the imported libraries 99 foreach(_T ${_out_libraries_target}) 100 set_property(TARGET ${_T} PROPERTY INTERFACE_LINK_LIBRARIES ${deps_target} APPEND) 101 endforeach() 102 103 set(${out_libraries_target} ${_out_libraries_target} PARENT_SCOPE) 104 endfunction() 105 106 macro(check_build_type_defined) 107 # Check that the -DCMAKE_BUILD_TYPE argument is always present 108 get_property(isMultiConfig GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG) 109 if(NOT isMultiConfig AND NOT CMAKE_BUILD_TYPE) 110 message(FATAL_ERROR "Please, set the CMAKE_BUILD_TYPE variable when calling to CMake " 111 "adding the '-DCMAKE_BUILD_TYPE=<build_type>' argument.") 112 endif() 113 endmacro() 114 115 """) 116 [end of conan/tools/cmake/cmakedeps/templates/macros.py] [start of conan/tools/cmake/cmakedeps/templates/target_configuration.py] 1 import textwrap 2 3 from conan.tools.cmake.cmakedeps.templates import CMakeDepsFileTemplate 4 from conans.model.dependencies import get_transitive_requires 5 6 """ 7 8 FooTarget-release.cmake 9 10 """ 11 12 13 class TargetConfigurationTemplate(CMakeDepsFileTemplate): 14 15 @property 16 def filename(self): 17 name = "" if not self.generating_module else "module-" 18 name += "{}-Target-{}.cmake".format(self.file_name, self.cmakedeps.configuration.lower()) 19 return name 20 21 @property 22 def context(self): 23 deps_targets_names = self.get_deps_targets_names() \ 24 if not self.require.build else [] 25 26 components_targets_names = self.get_declared_components_targets_names() 27 components_names = [(components_target_name.replace("::", "_"), components_target_name) 28 for components_target_name in components_targets_names] 29 30 is_win = self.conanfile.settings.get_safe("os") == "Windows" 31 auto_link = self.conanfile.cpp_info.get_property("cmake_set_interface_link_directories") 32 return {"pkg_name": self.pkg_name, 33 "root_target_name": self.root_target_name, 34 "config_suffix": self.config_suffix, 35 "config": self.configuration.upper(), 36 "deps_targets_names": ";".join(deps_targets_names), 37 "components_names": components_names, 38 "configuration": self.cmakedeps.configuration, 39 "set_interface_link_directories": auto_link and is_win} 40 41 @property 42 def template(self): 43 return textwrap.dedent("""\ 44 # Avoid multiple calls to find_package to append duplicated properties to the targets 45 include_guard() 46 47 {%- macro tvalue(pkg_name, comp_name, var, config_suffix) -%} 48 {{'${'+pkg_name+'_'+comp_name+'_'+var+config_suffix+'}'}} 49 {%- endmacro -%} 50 51 ########### VARIABLES ####################################################################### 52 ############################################################################################# 53 set({{ pkg_name }}_FRAMEWORKS_FOUND{{ config_suffix }} "") # Will be filled later 54 conan_find_apple_frameworks({{ pkg_name }}_FRAMEWORKS_FOUND{{ config_suffix }} "{{ '${' }}{{ pkg_name }}_FRAMEWORKS{{ config_suffix }}}" "{{ '${' }}{{ pkg_name }}_FRAMEWORK_DIRS{{ config_suffix }}}") 55 56 set({{ pkg_name }}_LIBRARIES_TARGETS "") # Will be filled later 57 58 59 ######## Create an interface target to contain all the dependencies (frameworks, system and conan deps) 60 if(NOT TARGET {{ pkg_name+'_DEPS_TARGET'}}) 61 add_library({{ pkg_name+'_DEPS_TARGET'}} INTERFACE IMPORTED) 62 endif() 63 64 set_property(TARGET {{ pkg_name + 
'_DEPS_TARGET'}} 65 PROPERTY INTERFACE_LINK_LIBRARIES 66 $<$<CONFIG:{{configuration}}>:{{ '${'+pkg_name+'_FRAMEWORKS_FOUND'+config_suffix+'}' }}> 67 $<$<CONFIG:{{configuration}}>:{{ '${'+pkg_name+'_SYSTEM_LIBS'+config_suffix+'}' }}> 68 $<$<CONFIG:{{configuration}}>:{{ deps_targets_names }}> 69 APPEND) 70 71 ####### Find the libraries declared in cpp_info.libs, create an IMPORTED target for each one and link the 72 ####### {{pkg_name}}_DEPS_TARGET to all of them 73 conan_package_library_targets("{{ '${' }}{{ pkg_name }}_LIBS{{ config_suffix }}}" # libraries 74 "{{ '${' }}{{ pkg_name }}_LIB_DIRS{{ config_suffix }}}" # package_libdir 75 "{{ '${' }}{{ pkg_name }}_BIN_DIRS{{ config_suffix }}}" # package_bindir 76 "{{ '${' }}{{ pkg_name }}_LIBRARY_TYPE{{ config_suffix }}}" 77 "{{ '${' }}{{ pkg_name }}_IS_HOST_WINDOWS{{ config_suffix }}}" 78 {{ pkg_name + '_DEPS_TARGET'}} 79 {{ pkg_name }}_LIBRARIES_TARGETS # out_libraries_targets 80 "{{ config_suffix }}" 81 "{{ pkg_name }}" # package_name 82 "{{ '${' }}{{ pkg_name }}_NO_SONAME_MODE{{ config_suffix }}}") # soname 83 84 # FIXME: What is the result of this for multi-config? All configs adding themselves to path? 85 set(CMAKE_MODULE_PATH {{ '${' }}{{ pkg_name }}_BUILD_DIRS{{ config_suffix }}} {{ '${' }}CMAKE_MODULE_PATH}) 86 {% if not components_names %} 87 88 ########## GLOBAL TARGET PROPERTIES {{ configuration }} ######################################## 89 set_property(TARGET {{root_target_name}} 90 PROPERTY INTERFACE_LINK_LIBRARIES 91 $<$<CONFIG:{{configuration}}>:{{ '${'+pkg_name+'_OBJECTS'+config_suffix+'}' }}> 92 $<$<CONFIG:{{configuration}}>:${{'{'}}{{pkg_name}}_LIBRARIES_TARGETS}> 93 APPEND) 94 95 if("{{ '${' }}{{ pkg_name }}_LIBS{{ config_suffix }}}" STREQUAL "") 96 # If the package is not declaring any "cpp_info.libs" the package deps, system libs, 97 # frameworks etc are not linked to the imported targets and we need to do it to the 98 # global target 99 set_property(TARGET {{root_target_name}} 100 PROPERTY INTERFACE_LINK_LIBRARIES 101 {{pkg_name}}_DEPS_TARGET 102 APPEND) 103 endif() 104 105 set_property(TARGET {{root_target_name}} 106 PROPERTY INTERFACE_LINK_OPTIONS 107 $<$<CONFIG:{{configuration}}>:${{'{'}}{{pkg_name}}_LINKER_FLAGS{{config_suffix}}}> APPEND) 108 set_property(TARGET {{root_target_name}} 109 PROPERTY INTERFACE_INCLUDE_DIRECTORIES 110 $<$<CONFIG:{{configuration}}>:${{'{'}}{{pkg_name}}_INCLUDE_DIRS{{config_suffix}}}> APPEND) 111 # Necessary to find LINK shared libraries in Linux 112 set_property(TARGET {{root_target_name}} 113 PROPERTY INTERFACE_LINK_DIRECTORIES 114 $<$<CONFIG:{{configuration}}>:${{'{'}}{{pkg_name}}_LIB_DIRS{{config_suffix}}}> APPEND) 115 set_property(TARGET {{root_target_name}} 116 PROPERTY INTERFACE_COMPILE_DEFINITIONS 117 $<$<CONFIG:{{configuration}}>:${{'{'}}{{pkg_name}}_COMPILE_DEFINITIONS{{config_suffix}}}> APPEND) 118 set_property(TARGET {{root_target_name}} 119 PROPERTY INTERFACE_COMPILE_OPTIONS 120 $<$<CONFIG:{{configuration}}>:${{'{'}}{{pkg_name}}_COMPILE_OPTIONS{{config_suffix}}}> APPEND) 121 122 {%- if set_interface_link_directories %} 123 124 # This is only used for '#pragma comment(lib, "foo")' (automatic link) 125 set_property(TARGET {{root_target_name}} 126 PROPERTY INTERFACE_LINK_DIRECTORIES 127 $<$<CONFIG:{{configuration}}>:${{'{'}}{{pkg_name}}_LIB_DIRS{{config_suffix}}}> APPEND) 128 {%- endif %} 129 130 131 {%- else %} 132 133 ########## COMPONENTS TARGET PROPERTIES {{ configuration }} ######################################## 134 135 {%- for comp_variable_name, comp_target_name in 
components_names %} 136 137 138 ########## COMPONENT {{ comp_target_name }} ############# 139 140 set({{ pkg_name }}_{{ comp_variable_name }}_FRAMEWORKS_FOUND{{ config_suffix }} "") 141 conan_find_apple_frameworks({{ pkg_name }}_{{ comp_variable_name }}_FRAMEWORKS_FOUND{{ config_suffix }} "{{ '${'+pkg_name+'_'+comp_variable_name+'_FRAMEWORKS'+config_suffix+'}' }}" "{{ '${'+pkg_name+'_'+comp_variable_name+'_FRAMEWORK_DIRS'+config_suffix+'}' }}") 142 143 set({{ pkg_name }}_{{ comp_variable_name }}_LIBRARIES_TARGETS "") 144 145 ######## Create an interface target to contain all the dependencies (frameworks, system and conan deps) 146 if(NOT TARGET {{ pkg_name + '_' + comp_variable_name + '_DEPS_TARGET'}}) 147 add_library({{ pkg_name + '_' + comp_variable_name + '_DEPS_TARGET'}} INTERFACE IMPORTED) 148 endif() 149 150 set_property(TARGET {{ pkg_name + '_' + comp_variable_name + '_DEPS_TARGET'}} 151 PROPERTY INTERFACE_LINK_LIBRARIES 152 $<$<CONFIG:{{configuration}}>:{{ '${'+pkg_name+'_'+comp_variable_name+'_FRAMEWORKS_FOUND'+config_suffix+'}' }}> 153 $<$<CONFIG:{{configuration}}>:{{ '${'+pkg_name+'_'+comp_variable_name+'_SYSTEM_LIBS'+config_suffix+'}' }}> 154 $<$<CONFIG:{{configuration}}>:{{ '${'+pkg_name+'_'+comp_variable_name+'_DEPENDENCIES'+config_suffix+'}' }}> 155 APPEND) 156 157 ####### Find the libraries declared in cpp_info.component["xxx"].libs, 158 ####### create an IMPORTED target for each one and link the '{{pkg_name}}_{{comp_variable_name}}_DEPS_TARGET' to all of them 159 conan_package_library_targets("{{ '${'+pkg_name+'_'+comp_variable_name+'_LIBS'+config_suffix+'}' }}" 160 "{{ '${'+pkg_name+'_'+comp_variable_name+'_LIB_DIRS'+config_suffix+'}' }}" 161 "{{ '${'+pkg_name+'_'+comp_variable_name+'_BIN_DIRS'+config_suffix+'}' }}" # package_bindir 162 "{{ '${'+pkg_name+'_'+comp_variable_name+'_LIBRARY_TYPE'+config_suffix+'}' }}" 163 "{{ '${'+pkg_name+'_'+comp_variable_name+'_IS_HOST_WINDOWS'+config_suffix+'}' }}" 164 {{ pkg_name + '_' + comp_variable_name + '_DEPS_TARGET'}} 165 {{ pkg_name }}_{{ comp_variable_name }}_LIBRARIES_TARGETS 166 "{{ config_suffix }}" 167 "{{ pkg_name }}_{{ comp_variable_name }}" 168 "{{ '${'+pkg_name+'_'+comp_variable_name+'_NO_SONAME_MODE'+config_suffix+'}' }}") 169 170 171 ########## TARGET PROPERTIES ##################################### 172 set_property(TARGET {{comp_target_name}} 173 PROPERTY INTERFACE_LINK_LIBRARIES 174 $<$<CONFIG:{{configuration}}>:{{ '${'+pkg_name+'_'+comp_variable_name+'_OBJECTS'+config_suffix+'}' }}> 175 $<$<CONFIG:{{configuration}}>:${{'{'}}{{pkg_name}}_{{comp_variable_name}}_LIBRARIES_TARGETS}> 176 APPEND) 177 178 if("{{ '${' }}{{ pkg_name }}_{{comp_variable_name}}_LIBS{{ config_suffix }}}" STREQUAL "") 179 # If the component is not declaring any "cpp_info.components['foo'].libs" the system, frameworks etc are not 180 # linked to the imported targets and we need to do it to the global target 181 set_property(TARGET {{comp_target_name}} 182 PROPERTY INTERFACE_LINK_LIBRARIES 183 {{pkg_name}}_{{comp_variable_name}}_DEPS_TARGET 184 APPEND) 185 endif() 186 187 set_property(TARGET {{ comp_target_name }} PROPERTY INTERFACE_LINK_OPTIONS 188 $<$<CONFIG:{{ configuration }}>:{{tvalue(pkg_name, comp_variable_name, 'LINKER_FLAGS', config_suffix)}}> APPEND) 189 set_property(TARGET {{ comp_target_name }} PROPERTY INTERFACE_INCLUDE_DIRECTORIES 190 $<$<CONFIG:{{ configuration }}>:{{tvalue(pkg_name, comp_variable_name, 'INCLUDE_DIRS', config_suffix)}}> APPEND) 191 set_property(TARGET {{comp_target_name }} PROPERTY INTERFACE_LINK_DIRECTORIES 192 
$<$<CONFIG:{{ configuration }}>:{{tvalue(pkg_name, comp_variable_name, 'LIB_DIRS', config_suffix)}}> APPEND) 193 set_property(TARGET {{ comp_target_name }} PROPERTY INTERFACE_COMPILE_DEFINITIONS 194 $<$<CONFIG:{{ configuration }}>:{{tvalue(pkg_name, comp_variable_name, 'COMPILE_DEFINITIONS', config_suffix)}}> APPEND) 195 set_property(TARGET {{ comp_target_name }} PROPERTY INTERFACE_COMPILE_OPTIONS 196 $<$<CONFIG:{{ configuration }}>:{{tvalue(pkg_name, comp_variable_name, 'COMPILE_OPTIONS', config_suffix)}}> APPEND) 197 198 {%- if set_interface_link_directories %} 199 # This is only used for '#pragma comment(lib, "foo")' (automatic link) 200 set_property(TARGET {{ comp_target_name }} PROPERTY INTERFACE_LINK_DIRECTORIES 201 $<$<CONFIG:{{ configuration }}>:{{tvalue(pkg_name, comp_variable_name, 'LIB_DIRS', config_suffix)}}> APPEND) 202 203 {%- endif %} 204 {%endfor %} 205 206 207 ########## AGGREGATED GLOBAL TARGET WITH THE COMPONENTS ##################### 208 {%- for comp_variable_name, comp_target_name in components_names %} 209 210 set_property(TARGET {{root_target_name}} PROPERTY INTERFACE_LINK_LIBRARIES {{ comp_target_name }} APPEND) 211 212 {%- endfor %} 213 214 215 {%- endif %} 216 217 218 ########## For the modules (FindXXX) 219 set({{ pkg_name }}_LIBRARIES{{ config_suffix }} {{root_target_name}}) 220 221 """) 222 223 def get_declared_components_targets_names(self): 224 """Returns a list of component_name""" 225 ret = [] 226 sorted_comps = self.conanfile.cpp_info.get_sorted_components() 227 for comp_name, comp in sorted_comps.items(): 228 ret.append(self.get_component_alias(self.conanfile, comp_name)) 229 ret.reverse() 230 return ret 231 232 def get_deps_targets_names(self): 233 """ 234 - [{foo}::{bar}, ] of the required 235 """ 236 ret = [] 237 238 # Get a list of dependencies target names 239 # Declared cppinfo.requires or .components[].requires 240 transitive_reqs = get_transitive_requires(self.cmakedeps._conanfile, self.conanfile) 241 if self.conanfile.cpp_info.required_components: 242 for dep_name, component_name in self.conanfile.cpp_info.required_components: 243 try: 244 # if not dep_name, it is internal, from current self.conanfile 245 req = transitive_reqs[dep_name] if dep_name is not None else self.conanfile 246 except KeyError: 247 # if it raises it means the required component is not in the direct_host 248 # dependencies, maybe it has been filtered out by traits => Skip 249 pass 250 else: 251 component_name = self.get_component_alias(req, component_name) 252 ret.append(component_name) 253 elif transitive_reqs: 254 # Regular external "conanfile.requires" declared, not cpp_info requires 255 ret = [self.get_root_target_name(r) for r in transitive_reqs.values()] 256 return ret 257 [end of conan/tools/cmake/cmakedeps/templates/target_configuration.py] [start of conan/tools/cmake/cmakedeps/templates/target_data.py] 1 import os 2 import textwrap 3 4 from conan.tools.cmake.cmakedeps import FIND_MODE_NONE, FIND_MODE_CONFIG, FIND_MODE_MODULE, \ 5 FIND_MODE_BOTH 6 from conan.tools.cmake.cmakedeps.templates import CMakeDepsFileTemplate 7 from conans.errors import ConanException 8 from conans.model.dependencies import get_transitive_requires 9 10 11 """ 12 13 foo-release-x86_64-data.cmake 14 15 """ 16 17 18 class ConfigDataTemplate(CMakeDepsFileTemplate): 19 20 @property 21 def filename(self): 22 data_fname = "" if not self.generating_module else "module-" 23 data_fname += "{}-{}".format(self.file_name, self.configuration.lower()) 24 if self.arch: 25 data_fname += 
"-{}".format(self.arch) 26 data_fname += "-data.cmake" 27 return data_fname 28 29 @property 30 def context(self): 31 global_cpp = self._get_global_cpp_cmake() 32 if not self.build_modules_activated: 33 global_cpp.build_modules_paths = "" 34 35 components = self._get_required_components_cpp() 36 # using the target names to name components, may change in the future? 37 components_names = " ".join([components_target_name for components_target_name, _ in 38 reversed(components)]) 39 40 components_cpp = [(cmake_target_name.replace("::", "_"), cmake_target_name, cpp) 41 for cmake_target_name, cpp in components] 42 43 # For the build requires, we don't care about the transitive (only runtime for the br) 44 # so as the xxx-conf.cmake files won't be generated, don't include them as find_dependency 45 # This is because in Conan 2.0 model, only the pure tools like CMake will be build_requires 46 # for example a framework test won't be a build require but a "test/not public" require. 47 dependency_filenames = self._get_dependency_filenames() 48 # Get the nodes that have the property cmake_find_mode=None (no files to generate) 49 dependency_find_modes = self._get_dependencies_find_modes() 50 51 # Make the root_folder relative to the generated xxx-data.cmake file 52 root_folder = self._root_folder 53 root_folder = root_folder.replace('\\', '/').replace('$', '\\$').replace('"', '\\"') 54 55 return {"global_cpp": global_cpp, 56 "has_components": self.conanfile.cpp_info.has_components, 57 "pkg_name": self.pkg_name, 58 "file_name": self.file_name, 59 "package_folder": root_folder, 60 "config_suffix": self.config_suffix, 61 "components_names": components_names, 62 "components_cpp": components_cpp, 63 "dependency_filenames": " ".join(dependency_filenames), 64 "dependency_find_modes": dependency_find_modes} 65 66 @property 67 def cmake_package_type(self): 68 return {"shared-library": "SHARED", 69 "static-library": "STATIC"}.get(str(self.conanfile.package_type), "UNKNOWN") 70 71 @property 72 def is_host_windows(self): 73 # to account for all WindowsStore, WindowsCE and Windows OS in settings 74 return "Windows" in self.conanfile.settings.get_safe("os", "") 75 76 @property 77 def template(self): 78 # This will be at: XXX-release-data.cmake 79 ret = textwrap.dedent("""\ 80 ########### AGGREGATED COMPONENTS AND DEPENDENCIES FOR THE MULTI CONFIG ##################### 81 ############################################################################################# 82 83 {% if components_names %} 84 list(APPEND {{ pkg_name }}_COMPONENT_NAMES {{ components_names }}) 85 list(REMOVE_DUPLICATES {{ pkg_name }}_COMPONENT_NAMES) 86 {% else %} 87 set({{ pkg_name }}_COMPONENT_NAMES "") 88 {% endif %} 89 {% if dependency_filenames %} 90 list(APPEND {{ pkg_name }}_FIND_DEPENDENCY_NAMES {{ dependency_filenames }}) 91 list(REMOVE_DUPLICATES {{ pkg_name }}_FIND_DEPENDENCY_NAMES) 92 {% else %} 93 set({{ pkg_name }}_FIND_DEPENDENCY_NAMES "") 94 {% endif %} 95 {% for dep_name, mode in dependency_find_modes.items() %} 96 set({{ dep_name }}_FIND_MODE "{{ mode }}") 97 {% endfor %} 98 99 ########### VARIABLES ####################################################################### 100 ############################################################################################# 101 set({{ pkg_name }}_PACKAGE_FOLDER{{ config_suffix }} "{{ package_folder }}") 102 set({{ pkg_name }}_BUILD_MODULES_PATHS{{ config_suffix }} {{ global_cpp.build_modules_paths }}) 103 104 105 set({{ pkg_name }}_INCLUDE_DIRS{{ config_suffix }} {{ 
global_cpp.include_paths }}) 106 set({{ pkg_name }}_RES_DIRS{{ config_suffix }} {{ global_cpp.res_paths }}) 107 set({{ pkg_name }}_DEFINITIONS{{ config_suffix }} {{ global_cpp.defines }}) 108 set({{ pkg_name }}_SHARED_LINK_FLAGS{{ config_suffix }} {{ global_cpp.sharedlinkflags_list }}) 109 set({{ pkg_name }}_EXE_LINK_FLAGS{{ config_suffix }} {{ global_cpp.exelinkflags_list }}) 110 set({{ pkg_name }}_OBJECTS{{ config_suffix }} {{ global_cpp.objects_list }}) 111 set({{ pkg_name }}_COMPILE_DEFINITIONS{{ config_suffix }} {{ global_cpp.compile_definitions }}) 112 set({{ pkg_name }}_COMPILE_OPTIONS_C{{ config_suffix }} {{ global_cpp.cflags_list }}) 113 set({{ pkg_name }}_COMPILE_OPTIONS_CXX{{ config_suffix }} {{ global_cpp.cxxflags_list}}) 114 set({{ pkg_name }}_LIB_DIRS{{ config_suffix }} {{ global_cpp.lib_paths }}) 115 set({{ pkg_name }}_BIN_DIRS{{ config_suffix }} {{ global_cpp.bin_paths }}) 116 set({{ pkg_name }}_LIBRARY_TYPE{{ config_suffix }} {{ global_cpp.library_type }}) 117 set({{ pkg_name }}_IS_HOST_WINDOWS{{ config_suffix }} {{ global_cpp.is_host_windows }}) 118 set({{ pkg_name }}_LIBS{{ config_suffix }} {{ global_cpp.libs }}) 119 set({{ pkg_name }}_SYSTEM_LIBS{{ config_suffix }} {{ global_cpp.system_libs }}) 120 set({{ pkg_name }}_FRAMEWORK_DIRS{{ config_suffix }} {{ global_cpp.framework_paths }}) 121 set({{ pkg_name }}_FRAMEWORKS{{ config_suffix }} {{ global_cpp.frameworks }}) 122 set({{ pkg_name }}_BUILD_DIRS{{ config_suffix }} {{ global_cpp.build_paths }}) 123 set({{ pkg_name }}_NO_SONAME_MODE{{ config_suffix }} {{ global_cpp.no_soname }}) 124 125 126 # COMPOUND VARIABLES 127 set({{ pkg_name }}_COMPILE_OPTIONS{{ config_suffix }} 128 "$<$<COMPILE_LANGUAGE:CXX>{{ ':${' }}{{ pkg_name }}_COMPILE_OPTIONS_CXX{{ config_suffix }}}>" 129 "$<$<COMPILE_LANGUAGE:C>{{ ':${' }}{{ pkg_name }}_COMPILE_OPTIONS_C{{ config_suffix }}}>") 130 set({{ pkg_name }}_LINKER_FLAGS{{ config_suffix }} 131 "$<$<STREQUAL{{ ':$' }}<TARGET_PROPERTY:TYPE>,SHARED_LIBRARY>{{ ':${' }}{{ pkg_name }}_SHARED_LINK_FLAGS{{ config_suffix }}}>" 132 "$<$<STREQUAL{{ ':$' }}<TARGET_PROPERTY:TYPE>,MODULE_LIBRARY>{{ ':${' }}{{ pkg_name }}_SHARED_LINK_FLAGS{{ config_suffix }}}>" 133 "$<$<STREQUAL{{ ':$' }}<TARGET_PROPERTY:TYPE>,EXECUTABLE>{{ ':${' }}{{ pkg_name }}_EXE_LINK_FLAGS{{ config_suffix }}}>") 134 135 136 set({{ pkg_name }}_COMPONENTS{{ config_suffix }} {{ components_names }}) 137 {%- for comp_variable_name, comp_target_name, cpp in components_cpp %} 138 139 ########### COMPONENT {{ comp_target_name }} VARIABLES ############################################ 140 141 set({{ pkg_name }}_{{ comp_variable_name }}_INCLUDE_DIRS{{ config_suffix }} {{ cpp.include_paths }}) 142 set({{ pkg_name }}_{{ comp_variable_name }}_LIB_DIRS{{ config_suffix }} {{ cpp.lib_paths }}) 143 set({{ pkg_name }}_{{ comp_variable_name }}_BIN_DIRS{{ config_suffix }} {{ cpp.bin_paths }}) 144 set({{ pkg_name }}_{{ comp_variable_name }}_LIBRARY_TYPE{{ config_suffix }} {{ cpp.library_type }}) 145 set({{ pkg_name }}_{{ comp_variable_name }}_IS_HOST_WINDOWS{{ config_suffix }} {{ cpp.is_host_windows }}) 146 set({{ pkg_name }}_{{ comp_variable_name }}_RES_DIRS{{ config_suffix }} {{ cpp.res_paths }}) 147 set({{ pkg_name }}_{{ comp_variable_name }}_DEFINITIONS{{ config_suffix }} {{ cpp.defines }}) 148 set({{ pkg_name }}_{{ comp_variable_name }}_OBJECTS{{ config_suffix }} {{ cpp.objects_list }}) 149 set({{ pkg_name }}_{{ comp_variable_name }}_COMPILE_DEFINITIONS{{ config_suffix }} {{ cpp.compile_definitions }}) 150 set({{ pkg_name }}_{{ comp_variable_name 
}}_COMPILE_OPTIONS_C{{ config_suffix }} "{{ cpp.cflags_list }}") 151 set({{ pkg_name }}_{{ comp_variable_name }}_COMPILE_OPTIONS_CXX{{ config_suffix }} "{{ cpp.cxxflags_list }}") 152 set({{ pkg_name }}_{{ comp_variable_name }}_LIBS{{ config_suffix }} {{ cpp.libs }}) 153 set({{ pkg_name }}_{{ comp_variable_name }}_SYSTEM_LIBS{{ config_suffix }} {{ cpp.system_libs }}) 154 set({{ pkg_name }}_{{ comp_variable_name }}_FRAMEWORK_DIRS{{ config_suffix }} {{ cpp.framework_paths }}) 155 set({{ pkg_name }}_{{ comp_variable_name }}_FRAMEWORKS{{ config_suffix }} {{ cpp.frameworks }}) 156 set({{ pkg_name }}_{{ comp_variable_name }}_DEPENDENCIES{{ config_suffix }} {{ cpp.public_deps }}) 157 set({{ pkg_name }}_{{ comp_variable_name }}_SHARED_LINK_FLAGS{{ config_suffix }} {{ cpp.sharedlinkflags_list }}) 158 set({{ pkg_name }}_{{ comp_variable_name }}_EXE_LINK_FLAGS{{ config_suffix }} {{ cpp.exelinkflags_list }}) 159 set({{ pkg_name }}_{{ comp_variable_name }}_NO_SONAME_MODE{{ config_suffix }} {{ cpp.no_soname }}) 160 161 # COMPOUND VARIABLES 162 set({{ pkg_name }}_{{ comp_variable_name }}_LINKER_FLAGS{{ config_suffix }} 163 $<$<STREQUAL:$<TARGET_PROPERTY:TYPE>,SHARED_LIBRARY>{{ ':${' }}{{ pkg_name }}_{{ comp_variable_name }}_SHARED_LINK_FLAGS{{ config_suffix }}}> 164 $<$<STREQUAL:$<TARGET_PROPERTY:TYPE>,MODULE_LIBRARY>{{ ':${' }}{{ pkg_name }}_{{ comp_variable_name }}_SHARED_LINK_FLAGS{{ config_suffix }}}> 165 $<$<STREQUAL:$<TARGET_PROPERTY:TYPE>,EXECUTABLE>{{ ':${' }}{{ pkg_name }}_{{ comp_variable_name }}_EXE_LINK_FLAGS{{ config_suffix }}}> 166 ) 167 set({{ pkg_name }}_{{ comp_variable_name }}_COMPILE_OPTIONS{{ config_suffix }} 168 "$<$<COMPILE_LANGUAGE:CXX>{{ ':${' }}{{ pkg_name }}_{{ comp_variable_name }}_COMPILE_OPTIONS_CXX{{ config_suffix }}}>" 169 "$<$<COMPILE_LANGUAGE:C>{{ ':${' }}{{ pkg_name }}_{{ comp_variable_name }}_COMPILE_OPTIONS_C{{ config_suffix }}}>") 170 171 {%- endfor %} 172 """) 173 return ret 174 175 def _get_global_cpp_cmake(self): 176 global_cppinfo = self.conanfile.cpp_info.aggregated_components() 177 pfolder_var_name = "{}_PACKAGE_FOLDER{}".format(self.pkg_name, self.config_suffix) 178 return _TargetDataContext(global_cppinfo, pfolder_var_name, self._root_folder, 179 self.require, self.cmake_package_type, self.is_host_windows) 180 181 @property 182 def _root_folder(self): 183 return self.conanfile.recipe_folder if self.conanfile.package_folder is None \ 184 else self.conanfile.package_folder 185 186 def _get_required_components_cpp(self): 187 """Returns a list of (component_name, DepsCppCMake)""" 188 ret = [] 189 sorted_comps = self.conanfile.cpp_info.get_sorted_components() 190 pfolder_var_name = "{}_PACKAGE_FOLDER{}".format(self.pkg_name, self.config_suffix) 191 transitive_requires = get_transitive_requires(self.cmakedeps._conanfile, self.conanfile) 192 for comp_name, comp in sorted_comps.items(): 193 deps_cpp_cmake = _TargetDataContext(comp, pfolder_var_name, self._root_folder, 194 self.require, self.cmake_package_type, 195 self.is_host_windows) 196 197 public_comp_deps = [] 198 for require in comp.requires: 199 if "::" in require: # Points to a component of a different package 200 pkg, cmp_name = require.split("::") 201 try: # Make sure the declared dependency is at least in the recipe requires 202 self.conanfile.dependencies[pkg] 203 except KeyError: 204 raise ConanException(f"{self.conanfile}: component '{comp_name}' required " 205 f"'{require}', but '{pkg}' is not a direct dependency") 206 try: 207 req = transitive_requires[pkg] 208 except KeyError: # The transitive dep 
might have been skipped 209 pass 210 else: 211 public_comp_deps.append(self.get_component_alias(req, cmp_name)) 212 else: # Points to a component of same package 213 public_comp_deps.append(self.get_component_alias(self.conanfile, require)) 214 deps_cpp_cmake.public_deps = " ".join(public_comp_deps) 215 component_target_name = self.get_component_alias(self.conanfile, comp_name) 216 ret.append((component_target_name, deps_cpp_cmake)) 217 ret.reverse() 218 return ret 219 220 def _get_dependency_filenames(self): 221 if self.require.build: 222 return [] 223 224 transitive_reqs = get_transitive_requires(self.cmakedeps._conanfile, self.conanfile) 225 # Previously it was filtering here components, but not clear why the file dependency 226 # should be skipped if components are not being required, why would it declare a 227 # dependency to it? 228 ret = [self.cmakedeps.get_cmake_package_name(r, self.generating_module) 229 for r in transitive_reqs.values()] 230 return ret 231 232 def _get_dependencies_find_modes(self): 233 ret = {} 234 if self.require.build: 235 return ret 236 deps = get_transitive_requires(self.cmakedeps._conanfile, self.conanfile) 237 for dep in deps.values(): 238 dep_file_name = self.cmakedeps.get_cmake_package_name(dep, self.generating_module) 239 find_mode = self.cmakedeps.get_find_mode(dep) 240 default_value = "NO_MODULE" if not self.generating_module else "MODULE" 241 values = { 242 FIND_MODE_NONE: "", 243 FIND_MODE_CONFIG: "NO_MODULE", 244 FIND_MODE_MODULE: "MODULE", 245 # When the dependency is "both" or not defined, we use the one is forced 246 # by self.find_module_mode (creating modules files-> modules, config -> config) 247 FIND_MODE_BOTH: default_value, 248 None: default_value} 249 ret[dep_file_name] = values[find_mode] 250 return ret 251 252 253 class _TargetDataContext(object): 254 255 def __init__(self, cpp_info, pfolder_var_name, package_folder, require, library_type, 256 is_host_windows): 257 258 def join_paths(paths): 259 """ 260 Paths are doubled quoted, and escaped (but spaces) 261 e.g: set(LIBFOO_INCLUDE_DIRS "/path/to/included/dir" "/path/to/included/dir2") 262 """ 263 ret = [] 264 for p in paths: 265 assert os.path.isabs(p), "{} is not absolute".format(p) 266 267 # Trying to use a ${mypkg_PACKAGE_FOLDER}/include path instead of full 268 if p.startswith(package_folder): 269 # Prepend the {{ pkg_name }}_PACKAGE_FOLDER{{ config_suffix }} 270 rel = p[len(package_folder):] 271 rel = rel.replace('\\', '/').replace('$', '\\$').replace('"', '\\"').lstrip("/") 272 norm_path = ("${%s}/%s" % (pfolder_var_name, rel)) 273 else: 274 norm_path = p.replace('\\', '/').replace('$', '\\$').replace('"', '\\"') 275 ret.append('"{}"'.format(norm_path)) 276 277 return "\n\t\t\t".join(ret) 278 279 def join_flags(separator, values): 280 # Flags have to be escaped 281 ret = separator.join(v.replace('\\', '\\\\').replace('$', '\\$').replace('"', '\\"') 282 for v in values) 283 return ret 284 285 def join_defines(values, prefix=""): 286 # Defines have to be escaped, included spaces 287 return "\n\t\t\t".join('"%s%s"' % (prefix, v.replace('\\', '\\\\').replace('$', '\\$'). 
288 replace('"', '\\"')) 289 for v in values) 290 291 self.include_paths = join_paths(cpp_info.includedirs) 292 self.lib_paths = join_paths(cpp_info.libdirs) 293 self.res_paths = join_paths(cpp_info.resdirs) 294 self.bin_paths = join_paths(cpp_info.bindirs) 295 self.build_paths = join_paths(cpp_info.builddirs) 296 self.framework_paths = join_paths(cpp_info.frameworkdirs) 297 self.libs = join_flags(" ", cpp_info.libs) 298 self.system_libs = join_flags(" ", cpp_info.system_libs) 299 self.frameworks = join_flags(" ", cpp_info.frameworks) 300 self.defines = join_defines(cpp_info.defines, "-D") 301 self.compile_definitions = join_defines(cpp_info.defines) 302 self.library_type = library_type 303 self.is_host_windows = "1" if is_host_windows else "0" 304 305 # For modern CMake targets we need to prepare a list to not 306 # loose the elements in the list by replacing " " with ";". Example "-framework Foundation" 307 # Issue: #1251 308 self.cxxflags_list = join_flags(";", cpp_info.cxxflags) 309 self.cflags_list = join_flags(";", cpp_info.cflags) 310 311 # linker flags without magic: trying to mess with - and / => 312 # https://github.com/conan-io/conan/issues/8811 313 # frameworks should be declared with cppinfo.frameworks not "-framework Foundation" 314 self.sharedlinkflags_list = '"{}"'.format(join_flags(";", cpp_info.sharedlinkflags)) \ 315 if cpp_info.sharedlinkflags else '' 316 self.exelinkflags_list = '"{}"'.format(join_flags(";", cpp_info.exelinkflags)) \ 317 if cpp_info.exelinkflags else '' 318 319 self.objects_list = join_paths(cpp_info.objects) 320 321 # traits logic 322 if require and not require.headers: 323 self.include_paths = "" 324 if require and not require.libs: 325 # self.lib_paths = "" IMPORTANT! LINKERS IN LINUX FOR SHARED MIGHT NEED IT EVEN IF 326 # NOT REALLY LINKING LIB 327 self.libs = "" 328 if cpp_info.frameworkdirs: # Only invalidate for in-package frameworks 329 # FIXME: The mix of in-package frameworks + system ones is broken 330 self.frameworks = "" 331 if require and not require.libs and not require.headers: 332 self.defines = "" 333 self.compile_definitions = "" 334 self.cxxflags_list = "" 335 self.cflags_list = "" 336 self.sharedlinkflags_list = "" 337 self.exelinkflags_list = "" 338 self.objects_list = "" 339 if require and not require.run: 340 self.bin_paths = "" 341 342 build_modules = cpp_info.get_property("cmake_build_modules") or [] 343 self.build_modules_paths = join_paths(build_modules) 344 # SONAME flag only makes sense for SHARED libraries 345 self.no_soname = str((cpp_info.get_property("nosoname") if self.library_type == "SHARED" else False) or False).upper() 346 [end of conan/tools/cmake/cmakedeps/templates/target_data.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conan-io/conan
b5e840cac71f43ae67431920785ae223010e7296
[bug] In the CMakeDeps Generator IMPORTED_LOCATION is overridden by IMPORTED_LOCATION${config_suffix} ### Environment details * Operating System+version: Windows 10 * Compiler+version: MSVC 19 * Conan version: 2.0.2 * Python version: 3.10 ### Steps to reproduce 1. Download a dependency (e.g. thrift) with conan using the CMakeDeps generator 2. Have a look at the created cmakedeps_macros.cmake In the function `conan_package_library_targets` line 48 and 49 for windows the correct target properties are set for IMPORTED_IMPLIB and IMPORTED_LOCATION. `set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION ${CONAN_SHARED_FOUND_LIBRARY}) set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_IMPLIB ${CONAN_FOUND_LIBRARY})` Unfortunately those values are superseded by the line 61 where a per config value for IMPORTED_LOCATION_${config_suffix} is set, making the previous setting useless. `set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION${config_suffix} ${CONAN_FOUND_LIBRARY})` 3. Create a cmake project using ` add_custom_command(TARGET mytarget POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy_if_different $<TARGET_RUNTIME_DLLS:mytarget> $<TARGET_FILE_DIR:mytarget>)` 4. Instead of copying the dlls files to the binary folder upon building, the .lib files are copied. ### Logs _No response_
The right thing is to not set IMPORTED_LOCATION at all. Instead IMPORTED_LOCATION_DEBUG or IMPORTED_LOCATION_RELEASE shall be set to the Library for LIBRARY targets and to the dll (or so) for a shared library. For the latter IMPORTED_IMPLIB_DEBUG or IMPORTED_IMPLIB_RELEASE has to be set to the library. IMPORTED_LOCATION and IMPORTED_IMPLIB will then be automatically determined by cmake. Hey, I have a fix for it here. [fix-cmakedeps-import-library-config-suffix.patch](https://github.com/conan-io/conan/files/11420056/fix-cmakedeps-import-library-config-suffix.patch) I also tried to upload this into a branch, to start a pull request, but it seems like I have no permissions for that. Could somebody grant this permission to me, so that I can upload this change? > I also tried to upload this into a branch, to start a pull request, but it seems like I have no permissions for that. Could somebody grant this permission to me, so that I can upload this change? Hi @Untzelmann you cannot upload branches to the main repo, you need to push your branches to your Github fork, and do a PRs from your fork. That is the contribution flow to Conan, we all, including the maintainers, do PRs from our forks, not from the central repo.
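The direction described in these comments (and confirmed by the patch further down) is to attach the per-configuration suffix to both properties, so the DLL lands in `IMPORTED_LOCATION_<CONFIG>` and the import library in `IMPORTED_IMPLIB_<CONFIG>`. As a rough illustration, here is that shared-library branch written in the same style the repository already uses in `macros.py` (CMake kept inside a Python template string); the variable names are taken from the existing template, and the assumption is that `${config_suffix}` renders as `_RELEASE`, `_DEBUG`, etc.

```python
import textwrap

# Sketch only: mirrors the Windows shared-library branch of
# conan_package_library_targets, with the config suffix applied to both
# IMPORTED_LOCATION and IMPORTED_IMPLIB instead of the un-suffixed properties.
shared_library_fragment = textwrap.dedent("""\
    if(NOT TARGET ${_LIB_NAME})
        add_library(${_LIB_NAME} SHARED IMPORTED)
    endif()
    # DLL -> per-config IMPORTED_LOCATION, import .lib -> per-config IMPORTED_IMPLIB,
    # so $<TARGET_RUNTIME_DLLS:...> resolves to the DLLs rather than the .lib files.
    set_target_properties(${_LIB_NAME} PROPERTIES
                          IMPORTED_LOCATION${config_suffix} ${CONAN_SHARED_FOUND_LIBRARY}
                          IMPORTED_IMPLIB${config_suffix} ${CONAN_FOUND_LIBRARY})
    """)

print(shared_library_fragment)
```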
2023-05-08T10:36:18Z
<patch> diff --git a/conan/tools/cmake/cmakedeps/templates/macros.py b/conan/tools/cmake/cmakedeps/templates/macros.py --- a/conan/tools/cmake/cmakedeps/templates/macros.py +++ b/conan/tools/cmake/cmakedeps/templates/macros.py @@ -67,13 +67,13 @@ def template(self): if(NOT TARGET ${_LIB_NAME}) add_library(${_LIB_NAME} UNKNOWN IMPORTED) endif() - set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION ${CONAN_FOUND_LIBRARY}) + set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION${config_suffix} ${CONAN_FOUND_LIBRARY}) else() if(NOT TARGET ${_LIB_NAME}) add_library(${_LIB_NAME} SHARED IMPORTED) endif() - set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION ${CONAN_SHARED_FOUND_LIBRARY}) - set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_IMPLIB ${CONAN_FOUND_LIBRARY}) + set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION${config_suffix} ${CONAN_SHARED_FOUND_LIBRARY}) + set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_IMPLIB${config_suffix} ${CONAN_FOUND_LIBRARY}) message(DEBUG "Found DLL and STATIC at ${CONAN_SHARED_FOUND_LIBRARY}, ${CONAN_FOUND_LIBRARY}") endif() unset(CONAN_SHARED_FOUND_LIBRARY CACHE) @@ -83,10 +83,8 @@ def template(self): add_library(${_LIB_NAME} ${library_type} IMPORTED) endif() message(DEBUG "Created target ${_LIB_NAME} ${library_type} IMPORTED") - set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION ${CONAN_FOUND_LIBRARY} IMPORTED_NO_SONAME ${no_soname_mode}) + set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION${config_suffix} ${CONAN_FOUND_LIBRARY} IMPORTED_NO_SONAME ${no_soname_mode}) endif() - # Link library file - set_target_properties(${_LIB_NAME} PROPERTIES IMPORTED_LOCATION${config_suffix} ${CONAN_FOUND_LIBRARY}) list(APPEND _out_libraries_target ${_LIB_NAME}) message(VERBOSE "Conan: Found: ${CONAN_FOUND_LIBRARY}") else() </patch>
[]
[]
apache__airflow-19193
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> IntegrityError inserting into task_fail table with null execution_date from TI.handle_failure_with_callback ### Apache Airflow version 2.2.0 (latest released) ### Operating System Debian GNU/Linux 11 (bullseye) ### Versions of Apache Airflow Providers ``` apache-airflow-providers-amazon @ file:///root/.cache/pypoetry/artifacts/c9/69/16/ffa2eb7a2e6e850a7048eaf66b6c40c990ef7c58149f20d3d3f333a2e9/apache_airflow_providers_amazon-2.2.0-py3-none-any.whl apache-airflow-providers-celery @ file:///root/.cache/pypoetry/artifacts/6e/1b/2f/f968318a7474e979af4dc53893ecafe8cd11a98a94077a9c3c27304eb7/apache_airflow_providers_celery-2.1.0-py3-none-any.whl apache-airflow-providers-ftp @ file:///root/.cache/pypoetry/artifacts/8b/9a/dd/79a36c62bc7f37f98d0ea33652570e19272e8a7a2297db13a6785698d1/apache_airflow_providers_ftp-2.0.1-py3-none-any.whl apache-airflow-providers-http @ file:///root/.cache/pypoetry/artifacts/52/28/81/03a89147daf7daceb55f1218189d1c4af01c33c45849b568769ca6765f/apache_airflow_providers_http-2.0.1-py3-none-any.whl apache-airflow-providers-imap @ file:///root/.cache/pypoetry/artifacts/1c/5d/c5/269e8a8098e7017a26a2a376eb3020e1a864775b7ff310ed39e1bd503d/apache_airflow_providers_imap-2.0.1-py3-none-any.whl apache-airflow-providers-postgres @ file:///root/.cache/pypoetry/artifacts/fb/69/ac/e8e25a0f6a4b0daf162c81c9cfdbb164a93bef6bd652c1c00eee6e0815/apache_airflow_providers_postgres-2.3.0-py3-none-any.whl apache-airflow-providers-redis @ file:///root/.cache/pypoetry/artifacts/cf/2b/56/75563b6058fe45b70f93886dd92541e8349918eeea9d70c703816f2639/apache_airflow_providers_redis-2.0.1-py3-none-any.whl apache-airflow-providers-sqlite @ file:///root/.cache/pypoetry/artifacts/61/ba/e9/c0b4b7ef2599dbd902b32afc99f2620d8a616b3072122e90f591de4807/apache_airflow_providers_sqlite-2.0.1-py3-none-any.whl ``` ### Deployment Other Docker-based deployment ### Deployment details AWS ECS, Celery Executor, Postgres 13, S3 Logging, Sentry integration ### What happened Noticed our Sentry getting a lot of integrity errors inserting into the task_fail table with a null execution date. This seemed to be caused specifically by zombie task failures (We use AWS ECS Spot instances). Specifically this callback from the dag file processor: https://github.com/apache/airflow/blob/e6c56c4ae475605636f4a1b5ab3884383884a8cf/airflow/models/taskinstance.py#L1746 Adds a task_fail here: https://github.com/apache/airflow/blob/e6c56c4ae475605636f4a1b5ab3884383884a8cf/airflow/models/taskinstance.py#L1705 This blows up when it flushes further down the method. This i believe is because when the task instance is refreshed from the database the `self.dag_run` property is not populated. The proxy from `ti.execution_date` to `ti.dag_run.execution_date` then returns `None` causing our `NOT NULL` violation. 
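To make the failure mode described above concrete, here is a deliberately simplified, self-contained sketch. These are not Airflow's real classes, just an illustration of how a proxy attribute that reaches through an unloaded relationship can silently return `None` and feed a `NOT NULL` column.

```python
from datetime import datetime
from typing import Optional


class FakeDagRun:
    def __init__(self, execution_date: datetime):
        self.execution_date = execution_date


class FakeTaskInstance:
    def __init__(self, dag_run: Optional[FakeDagRun] = None):
        # After a bare refresh from the database, the related run may not be loaded.
        self.dag_run = dag_run

    @property
    def execution_date(self) -> Optional[datetime]:
        # Proxy through the related run; quietly returns None when the run is missing.
        return self.dag_run.execution_date if self.dag_run else None


ti = FakeTaskInstance(dag_run=None)   # a refreshed TI whose DagRun was never populated
print(ti.execution_date)              # None -> inserting this into task_fail violates NOT NULL
```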
### What you expected to happen Insert into task_fail successfully and trigger callback ### How to reproduce Run this dag: ```python import logging import time from datetime import datetime from airflow import DAG from airflow.operators.python import PythonOperator def long_running_task(): for i in range(60): time.sleep(5) logging.info("Slept for 5") def log_failure_dag(*args, **kwargs): logging.error("Our failure callback") dag = DAG( dag_id="test_null_task_fail", schedule_interval='@daily', catchup=True, start_date=datetime(2021, 10, 9), max_active_runs=1, max_active_tasks=1, on_failure_callback=log_failure_dag, ) with dag: PythonOperator( task_id="long_running", python_callable=long_running_task, on_failure_callback=log_failure_dag ) ``` Kill the celery worker whilst its executing the long_running tasks. Wait for the zombie reaper of the scheduler to begin and call the failure handler. ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md) </issue> <code> [start of README.md] 1 <!-- 2 Licensed to the Apache Software Foundation (ASF) under one 3 or more contributor license agreements. See the NOTICE file 4 distributed with this work for additional information 5 regarding copyright ownership. The ASF licenses this file 6 to you under the Apache License, Version 2.0 (the 7 "License"); you may not use this file except in compliance 8 with the License. You may obtain a copy of the License at 9 10 http://www.apache.org/licenses/LICENSE-2.0 11 12 Unless required by applicable law or agreed to in writing, 13 software distributed under the License is distributed on an 14 "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 KIND, either express or implied. See the License for the 16 specific language governing permissions and limitations 17 under the License. 
18 --> 19 20 # Apache Airflow 21 22 [![PyPI version](https://badge.fury.io/py/apache-airflow.svg)](https://badge.fury.io/py/apache-airflow) 23 [![GitHub Build](https://github.com/apache/airflow/workflows/CI%20Build/badge.svg)](https://github.com/apache/airflow/actions) 24 [![Coverage Status](https://img.shields.io/codecov/c/github/apache/airflow/main.svg)](https://codecov.io/github/apache/airflow?branch=main) 25 [![License](https://img.shields.io/:license-Apache%202-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0.txt) 26 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/apache-airflow.svg)](https://pypi.org/project/apache-airflow/) 27 [![Docker Pulls](https://img.shields.io/docker/pulls/apache/airflow.svg)](https://hub.docker.com/r/apache/airflow) 28 [![Docker Stars](https://img.shields.io/docker/stars/apache/airflow.svg)](https://hub.docker.com/r/apache/airflow) 29 [![PyPI - Downloads](https://img.shields.io/pypi/dm/apache-airflow)](https://pypi.org/project/apache-airflow/) 30 [![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/apache-airflow)](https://artifacthub.io/packages/search?repo=apache-airflow) 31 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) 32 [![Twitter Follow](https://img.shields.io/twitter/follow/ApacheAirflow.svg?style=social&label=Follow)](https://twitter.com/ApacheAirflow) 33 [![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://s.apache.org/airflow-slack) 34 35 [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/) (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. 36 37 When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. 38 39 Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. 40 41 <!-- START doctoc generated TOC please keep comment here to allow auto update --> 42 <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> 43 **Table of contents** 44 45 - [Project Focus](#project-focus) 46 - [Principles](#principles) 47 - [Requirements](#requirements) 48 - [Getting started](#getting-started) 49 - [Installing from PyPI](#installing-from-pypi) 50 - [Official source code](#official-source-code) 51 - [Convenience packages](#convenience-packages) 52 - [User Interface](#user-interface) 53 - [Semantic versioning](#semantic-versioning) 54 - [Version Life Cycle](#version-life-cycle) 55 - [Support for Python and Kubernetes versions](#support-for-python-and-kubernetes-versions) 56 - [Contributing](#contributing) 57 - [Who uses Apache Airflow?](#who-uses-apache-airflow) 58 - [Who Maintains Apache Airflow?](#who-maintains-apache-airflow) 59 - [Can I use the Apache Airflow logo in my presentation?](#can-i-use-the-apache-airflow-logo-in-my-presentation) 60 - [Airflow merchandise](#airflow-merchandise) 61 - [Links](#links) 62 - [Sponsors](#sponsors) 63 64 <!-- END doctoc generated TOC please keep comment here to allow auto update --> 65 66 ## Project Focus 67 68 Airflow works best with workflows that are mostly static and slowly changing. 
When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include [Luigi](https://github.com/spotify/luigi), [Oozie](https://oozie.apache.org/) and [Azkaban](https://azkaban.github.io/). 69 70 Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's [Xcom feature](https://airflow.apache.org/docs/apache-airflow/stable/concepts.html#xcoms)). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work. 71 72 Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches. 73 74 ## Principles 75 76 - **Dynamic**: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This lets you write code that instantiates pipelines dynamically. 77 - **Extensible**: Easily define your own operators and executors, and extend the library so that it fits the level of abstraction that suits your environment. 78 - **Elegant**: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful **Jinja** templating engine. 79 - **Scalable**: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. 80 81 ## Requirements 82 83 Apache Airflow is tested with: 84 85 | | Main version (dev) | Stable version (2.2.0) | 86 | -------------------- | ------------------------- | ------------------------ | 87 | Python | 3.6, 3.7, 3.8, 3.9 | 3.6, 3.7, 3.8, 3.9 | 88 | Kubernetes | 1.18, 1.19, 1.20 | 1.18, 1.19, 1.20 | 89 | PostgreSQL | 9.6, 10, 11, 12, 13 | 9.6, 10, 11, 12, 13 | 90 | MySQL | 5.7, 8 | 5.7, 8 | 91 | SQLite | 3.15.0+ | 3.15.0+ | 92 | MSSQL (Experimental) | 2017, 2019 | | 93 94 **Note**: MySQL 5.x versions either cannot run multiple schedulers or have limitations 95 doing so -- please see the [Scheduler docs](https://airflow.apache.org/docs/apache-airflow/stable/scheduler.html). 96 MariaDB is not tested/recommended. 97 98 **Note**: SQLite is used in Airflow tests. Do not use it in production. We recommend 99 using the latest stable version of SQLite for local development. 100 101 ## Getting started 102 103 Visit the official Airflow website documentation (latest **stable** release) for help with 104 [installing Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation.html), 105 [getting started](https://airflow.apache.org/docs/apache-airflow/stable/start/index.html), or walking 106 through a more complete [tutorial](https://airflow.apache.org/docs/apache-airflow/stable/tutorial.html). 107 108 > Note: If you're looking for documentation for the main branch (latest development branch), you can find it on [s.apache.org/airflow-docs](https://s.apache.org/airflow-docs/). 109 110 For more information on Airflow Improvement Proposals (AIPs), visit 111 the [Airflow Wiki](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals). 112 113 Documentation for dependent projects such as provider packages, the Docker image, and the Helm Chart can be found in [the documentation index](https://airflow.apache.org/docs/).
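To make the "configuration as code" principle above concrete, a DAG is just a Python file. The following is a minimal, illustrative sketch (the DAG id, schedule, and task names are made up for this example and are not taken from the official docs):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A deliberately tiny pipeline: two Bash tasks, run once per day.
with DAG(
    dag_id="example_minimal",        # illustrative name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # "{{ ds }}" is rendered by the Jinja templating engine mentioned above.
    extract = BashOperator(task_id="extract", bash_command="echo 'extracting for {{ ds }}'")
    report = BashOperator(task_id="report", bash_command="echo 'reporting for {{ ds }}'")

    # ">>" declares the dependency that makes this a directed acyclic graph.
    extract >> report
```

The scheduler parses this file, creates a run per schedule interval, and executes `extract` before `report` on whichever workers are available.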
114 115 ## Installing from PyPI 116 117 We publish Apache Airflow as the `apache-airflow` package on PyPI. Installing it, however, can sometimes be tricky 118 because Airflow is a bit of both a library and an application. Libraries usually keep their dependencies open, and 119 applications usually pin them, but we should do neither and both simultaneously. We decided to keep 120 our dependencies as open as possible (in `setup.py`) so users can install different versions of libraries 121 if needed. This means that `pip install apache-airflow` will occasionally not work or will 122 produce an unusable Airflow installation. 123 124 To have a repeatable installation, however, we keep a set of "known-to-be-working" constraint 125 files in the orphan `constraints-main` and `constraints-2-0` branches. We keep those "known-to-be-working" 126 constraint files separately per major/minor Python version. 127 You can use them as constraint files when installing Airflow from PyPI. Note that you have to specify the 128 correct Airflow tag/version/branch and Python version in the URL. 129 130 131 1. Installing just Airflow: 132 133 > Note: Only `pip` installation is currently officially supported. 134 135 While it is possible to install Airflow with tools like [Poetry](https://python-poetry.org) or 136 [pip-tools](https://pypi.org/project/pip-tools), they do not share the same workflow as 137 `pip` - especially when it comes to constraint vs. requirements management. 138 Installing via `Poetry` or `pip-tools` is not currently supported. 139 140 If you wish to install Airflow using those tools, you should use the constraint files and convert 141 them to the appropriate format and workflow that your tool requires. 142 143 144 ```bash 145 pip install 'apache-airflow==2.2.0' \ 146 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.0/constraints-3.7.txt" 147 ``` 148 149 2. Installing with extras (e.g., postgres, google) 151 ```bash 152 pip install 'apache-airflow[postgres,google]==2.2.0' \ 153 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.0/constraints-3.7.txt" 154 ``` 155 156 For information on installing provider packages, check 157 [providers](http://airflow.apache.org/docs/apache-airflow-providers/index.html). 158 159 ## Official source code 160 161 Apache Airflow is an [Apache Software Foundation](https://www.apache.org) (ASF) project, 162 and our official source code releases: 163 164 - Follow the [ASF Release Policy](https://www.apache.org/legal/release-policy.html) 165 - Can be downloaded from [the ASF Distribution Directory](https://downloads.apache.org/airflow) 166 - Are cryptographically signed by the release manager 167 - Are officially voted on by the PMC members during the 168 [Release Approval Process](https://www.apache.org/legal/release-policy.html#release-approval) 169 170 Following the ASF rules, the source packages released must be sufficient for a user to build and test the 171 release, provided they have access to the appropriate platform and tools. 172 173 ## Convenience packages 174 175 There are other ways of installing and using Airflow. Those are "convenience" methods - they are 176 not "official releases" as stated by the `ASF Release Policy`, but they can be used by users 177 who do not want to build the software themselves.
178 179 Those are - in the order of most common ways people install Airflow: 180 181 - [PyPI releases](https://pypi.org/project/apache-airflow/) to install Airflow using standard `pip` tool 182 - [Docker Images](https://hub.docker.com/r/apache/airflow) to install airflow via 183 `docker` tool, use them in Kubernetes, Helm Charts, `docker-compose`, `docker swarm`, etc. You can 184 read more about using, customising, and extending the images in the 185 [Latest docs](https://airflow.apache.org/docs/docker-stack/index.html), and 186 learn details on the internals in the [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst) document. 187 - [Tags in GitHub](https://github.com/apache/airflow/tags) to retrieve the git project sources that 188 were used to generate official source packages via git 189 190 All those artifacts are not official releases, but they are prepared using officially released sources. 191 Some of those artifacts are "development" or "pre-release" ones, and they are clearly marked as such 192 following the ASF Policy. 193 194 ## User Interface 195 196 - **DAGs**: Overview of all DAGs in your environment. 197 198 ![DAGs](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/dags.png) 199 200 - **Tree**: Tree representation of a DAG that spans across time. 201 202 ![Tree](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/tree.png) 203 204 - **Graph**: Visualization of a DAG's dependencies and their current status for a specific run. 205 206 ![Graph](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/graph.png) 207 208 - **Task Duration**: Total time spent on different tasks over time. 209 210 ![Task Duration](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/duration.png) 211 212 - **Gantt**: Duration and overlap of a DAG. 213 214 ![Gantt](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/gantt.png) 215 216 - **Code**: Quick way to view source code of a DAG. 217 218 ![Code](https://raw.githubusercontent.com/apache/airflow/main/docs/apache-airflow/img/code.png) 219 220 ## Semantic versioning 221 222 As of Airflow 2.0.0, we support a strict [SemVer](https://semver.org/) approach for all packages released. 223 224 There are few specific rules that we agreed to that define details of versioning of the different 225 packages: 226 227 * **Airflow**: SemVer rules apply to core airflow only (excludes any changes to providers). 228 Changing limits for versions of Airflow dependencies is not a breaking change on its own. 229 * **Airflow Providers**: SemVer rules apply to changes in the particular provider's code only. 230 SemVer MAJOR and MINOR versions for the packages are independent of the Airflow version. 231 For example, `google 4.1.0` and `amazon 3.0.3` providers can happily be installed 232 with `Airflow 2.1.2`. If there are limits of cross-dependencies between providers and Airflow packages, 233 they are present in providers as `install_requires` limitations. We aim to keep backwards 234 compatibility of providers with all previously released Airflow 2 versions but 235 there will sometimes be breaking changes that might make some, or all 236 providers, have minimum Airflow version specified. Change of that minimum supported Airflow version 237 is a breaking change for provider because installing the new provider might automatically 238 upgrade Airflow (which might be an undesired side effect of upgrading provider). 
239 * **Airflow Helm Chart**: SemVer rules apply to changes in the chart only. SemVer MAJOR and MINOR 240 versions for the chart are independent of the Airflow version. We aim to keep backwards 241 compatibility of the Helm Chart with all released Airflow 2 versions, but some new features might 242 only work starting from specific Airflow releases. We might, however, limit the Helm 243 Chart to depend on a minimal Airflow version. 244 * **Airflow API clients**: SemVer MAJOR and MINOR versions follow the MAJOR and MINOR versions of Airflow. 245 The first MAJOR or MINOR X.Y.0 release of Airflow should always be followed by an X.Y.0 release of 246 all clients. The clients can then release their own PATCH releases with bugfixes, 247 independently of Airflow PATCH releases. 248 249 ## Version Life Cycle 250 251 Apache Airflow version life cycle: 252 253 | Version | Current Patch/Minor | State | First Release | Limited Support | EOL/Terminated | 254 |---------|---------------------|-----------|---------------|-----------------|----------------| 255 | 2 | 2.2.0 | Supported | Dec 17, 2020 | TBD | TBD | 256 | 1.10 | 1.10.15 | EOL | Aug 27, 2018 | Dec 17, 2020 | June 17, 2021 | 257 | 1.9 | 1.9.0 | EOL | Jan 03, 2018 | Aug 27, 2018 | Aug 27, 2018 | 258 | 1.8 | 1.8.2 | EOL | Mar 19, 2017 | Jan 03, 2018 | Jan 03, 2018 | 259 | 1.7 | 1.7.1.2 | EOL | Mar 28, 2016 | Mar 19, 2017 | Mar 19, 2017 | 260 261 Limited support versions will receive security and critical bug fixes only. 262 EOL versions will not get any fixes or support. 263 We always recommend that all users run the latest available minor release for whatever major version is in use. 264 We **highly** recommend upgrading to the latest Airflow major release at the earliest convenient time and before the EOL date. 265 266 ## Support for Python and Kubernetes versions 267 268 As of Airflow 2.0, we agreed to certain rules we follow for Python and Kubernetes support. 269 They are based on the official release schedules of Python and Kubernetes, nicely summarized in the 270 [Python Developer's Guide](https://devguide.python.org/#status-of-python-branches) and 271 [Kubernetes version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/). 272 273 1. We drop support for Python and Kubernetes versions when they reach EOL. We drop support for those 274 EOL versions in main right after the EOL date, and it is effectively removed when we release the 275 first new MINOR (or MAJOR, if there is no new MINOR version) of Airflow. 276 For example, for Python 3.6 this means that we drop support in main right after 23.12.2021, and the first 277 MAJOR or MINOR version of Airflow released after that date will no longer support it. 278 279 2. The "oldest" supported version of Python/Kubernetes is the default one. "Default" is only meaningful 280 in terms of "smoke tests" in CI PRs, which are run using this default version and the default reference 281 image available. Currently the `apache/airflow:latest` and `apache/airflow:2.2.0` images 282 are both Python 3.6 images. However, the first MINOR/MAJOR release of Airflow after 23.12.2021 will 283 switch to Python 3.7 images. 284 285 3. We support a new version of Python/Kubernetes in main after it is officially released, as soon as we 286 make it work in our CI pipeline (which might not be immediate, mostly because dependencies need to catch up with 287 new versions of Python); once the CI setup works, we release new images and support in Airflow based on it.
288 289 ### Additional notes on Python version requirements 290 291 * Previous versions [require](https://github.com/apache/airflow/issues/8162) at least Python 3.5.3 292 when using Python 3. 293 294 ## Contributing 295 296 Want to help build Apache Airflow? Check out our [contributing documentation](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst). 297 298 Official Docker (container) images for Apache Airflow are described in [IMAGES.rst](https://github.com/apache/airflow/blob/main/IMAGES.rst). 299 300 ## Who uses Apache Airflow? 301 302 More than 400 organizations are using Apache Airflow 303 [in the wild](https://github.com/apache/airflow/blob/main/INTHEWILD.md). 304 305 ## Who Maintains Apache Airflow? 306 307 Airflow is the work of the [community](https://github.com/apache/airflow/graphs/contributors), 308 but the [core committers/maintainers](https://people.apache.org/committers-by-project.html#airflow) 309 are responsible for reviewing and merging PRs as well as steering conversations around new feature requests. 310 If you would like to become a maintainer, please review the Apache Airflow 311 [committer requirements](https://github.com/apache/airflow/blob/main/COMMITTERS.rst#guidelines-to-become-an-airflow-committer). 312 313 ## Can I use the Apache Airflow logo in my presentation? 314 315 Yes! Be sure to abide by the Apache Foundation [trademark policies](https://www.apache.org/foundation/marks/#books) and the Apache Airflow [Brandbook](https://cwiki.apache.org/confluence/display/AIRFLOW/Brandbook). The most up to date logos are found in [this repo](/docs/apache-airflow/img/logos) and on the Apache Software Foundation [website](https://www.apache.org/logos/about.html). 316 317 ## Airflow merchandise 318 319 If you would love to have Apache Airflow stickers, t-shirt, etc. then check out 320 [Redbubble Shop](https://www.redbubble.com/i/sticker/Apache-Airflow-by-comdev/40497530.EJUG5). 321 322 ## Links 323 324 - [Documentation](https://airflow.apache.org/docs/apache-airflow/stable/) 325 - [Chat](https://s.apache.org/airflow-slack) 326 327 ## Sponsors 328 329 The CI infrastructure for Apache Airflow has been sponsored by: 330 331 <!-- Ordered by most recently "funded" --> 332 333 <a href="https://astronomer.io"><img src="https://assets2.astronomer.io/logos/logoForLIGHTbackground.png" alt="astronomer.io" width="250px"></a> 334 <a href="https://aws.amazon.com/opensource/"><img src="docs/integration-logos/aws/[email protected]" alt="AWS OpenSource" width="130px"></a> 335 [end of README.md] [start of airflow/jobs/triggerer_job.py] 1 # Licensed to the Apache Software Foundation (ASF) under one 2 # or more contributor license agreements. See the NOTICE file 3 # distributed with this work for additional information 4 # regarding copyright ownership. The ASF licenses this file 5 # to you under the Apache License, Version 2.0 (the 6 # "License"); you may not use this file except in compliance 7 # with the License. You may obtain a copy of the License at 8 # 9 # http://www.apache.org/licenses/LICENSE-2.0 10 # 11 # Unless required by applicable law or agreed to in writing, 12 # software distributed under the License is distributed on an 13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 # KIND, either express or implied. See the License for the 15 # specific language governing permissions and limitations 16 # under the License. 
17 18 import asyncio 19 import os 20 import signal 21 import sys 22 import threading 23 import time 24 from collections import deque 25 from typing import Deque, Dict, Set, Tuple, Type 26 27 from sqlalchemy import func 28 29 from airflow.compat.asyncio import create_task 30 from airflow.configuration import conf 31 from airflow.jobs.base_job import BaseJob 32 from airflow.models.trigger import Trigger 33 from airflow.stats import Stats 34 from airflow.triggers.base import BaseTrigger, TriggerEvent 35 from airflow.typing_compat import TypedDict 36 from airflow.utils.log.logging_mixin import LoggingMixin 37 from airflow.utils.module_loading import import_string 38 from airflow.utils.session import provide_session 39 40 41 class TriggererJob(BaseJob): 42 """ 43 TriggererJob continuously runs active triggers in asyncio, watching 44 for them to fire off their events and then dispatching that information 45 to their dependent tasks/DAGs. 46 47 It runs as two threads: 48 - The main thread does DB calls/checkins 49 - A subthread runs all the async code 50 """ 51 52 __mapper_args__ = {'polymorphic_identity': 'TriggererJob'} 53 54 def __init__(self, capacity=None, *args, **kwargs): 55 # Call superclass 56 super().__init__(*args, **kwargs) 57 58 if capacity is None: 59 self.capacity = conf.getint('triggerer', 'default_capacity', fallback=1000) 60 elif isinstance(capacity, int) and capacity > 0: 61 self.capacity = capacity 62 else: 63 raise ValueError(f"Capacity number {capacity} is invalid") 64 65 # Set up runner async thread 66 self.runner = TriggerRunner() 67 68 def register_signals(self) -> None: 69 """Register signals that stop child processes""" 70 signal.signal(signal.SIGINT, self._exit_gracefully) 71 signal.signal(signal.SIGTERM, self._exit_gracefully) 72 73 @classmethod 74 @provide_session 75 def is_needed(cls, session) -> bool: 76 """ 77 Tests if the triggerer job needs to be run (i.e., if there are triggers 78 in the trigger table). 79 This is used for the warning boxes in the UI. 80 """ 81 return session.query(func.count(Trigger.id)).scalar() > 0 82 83 def on_kill(self): 84 """ 85 Called when there is an external kill command (via the heartbeat 86 mechanism, for example) 87 """ 88 self.runner.stop = True 89 90 def _exit_gracefully(self, signum, frame) -> None: # pylint: disable=unused-argument 91 """Helper method to clean up processor_agent to avoid leaving orphan processes.""" 92 # The first time, try to exit nicely 93 if not self.runner.stop: 94 self.log.info("Exiting gracefully upon receiving signal %s", signum) 95 self.runner.stop = True 96 else: 97 self.log.warning("Forcing exit due to second exit signal %s", signum) 98 sys.exit(os.EX_SOFTWARE) 99 100 def _execute(self) -> None: 101 self.log.info("Starting the triggerer") 102 try: 103 # Kick off runner thread 104 self.runner.start() 105 # Start our own DB loop in the main thread 106 self._run_trigger_loop() 107 except Exception: # pylint: disable=broad-except 108 self.log.exception("Exception when executing TriggererJob._run_trigger_loop") 109 raise 110 finally: 111 self.log.info("Waiting for triggers to clean up") 112 # Tell the subthread to stop and then wait for it. 113 # If the user interrupts/terms again, _graceful_exit will allow them 114 # to force-kill here. 115 self.runner.stop = True 116 self.runner.join(30) 117 self.log.info("Exited trigger loop") 118 119 def _run_trigger_loop(self) -> None: 120 """ 121 The main-thread trigger loop. 122 123 This runs synchronously and handles all database reads/writes. 
124 """ 125 while not self.runner.stop: 126 # Clean out unused triggers 127 Trigger.clean_unused() 128 # Load/delete triggers 129 self.load_triggers() 130 # Handle events 131 self.handle_events() 132 # Handle failed triggers 133 self.handle_failed_triggers() 134 # Handle heartbeat 135 self.heartbeat(only_if_necessary=True) 136 # Collect stats 137 self.emit_metrics() 138 # Idle sleep 139 time.sleep(1) 140 141 def load_triggers(self): 142 """ 143 Queries the database to get the triggers we're supposed to be running, 144 adds them to our runner, and then removes ones from it we no longer 145 need. 146 """ 147 Trigger.assign_unassigned(self.id, self.capacity) 148 ids = Trigger.ids_for_triggerer(self.id) 149 self.runner.update_triggers(set(ids)) 150 151 def handle_events(self): 152 """ 153 Handles outbound events from triggers - dispatching them into the Trigger 154 model where they are then pushed into the relevant task instances. 155 """ 156 while self.runner.events: 157 # Get the event and its trigger ID 158 trigger_id, event = self.runner.events.popleft() 159 # Tell the model to wake up its tasks 160 Trigger.submit_event(trigger_id=trigger_id, event=event) 161 # Emit stat event 162 Stats.incr('triggers.succeeded') 163 164 def handle_failed_triggers(self): 165 """ 166 Handles "failed" triggers - ones that errored or exited before they 167 sent an event. Task Instances that depend on them need failing. 168 """ 169 while self.runner.failed_triggers: 170 # Tell the model to fail this trigger's deps 171 trigger_id = self.runner.failed_triggers.popleft() 172 Trigger.submit_failure(trigger_id=trigger_id) 173 # Emit stat event 174 Stats.incr('triggers.failed') 175 176 def emit_metrics(self): 177 Stats.gauge('triggers.running', len(self.runner.triggers)) 178 179 180 class TriggerDetails(TypedDict): 181 """Type class for the trigger details dictionary""" 182 183 task: asyncio.Task 184 name: str 185 events: int 186 187 188 class TriggerRunner(threading.Thread, LoggingMixin): 189 """ 190 Runtime environment for all triggers. 191 192 Mainly runs inside its own thread, where it hands control off to an asyncio 193 event loop, but is also sometimes interacted with from the main thread 194 (where all the DB queries are done). All communication between threads is 195 done via Deques. 196 """ 197 198 # Maps trigger IDs to their running tasks and other info 199 triggers: Dict[int, TriggerDetails] 200 201 # Cache for looking up triggers by classpath 202 trigger_cache: Dict[str, Type[BaseTrigger]] 203 204 # Inbound queue of new triggers 205 to_create: Deque[Tuple[int, BaseTrigger]] 206 207 # Inbound queue of deleted triggers 208 to_delete: Deque[int] 209 210 # Outbound queue of events 211 events: Deque[Tuple[int, TriggerEvent]] 212 213 # Outbound queue of failed triggers 214 failed_triggers: Deque[int] 215 216 # Should-we-stop flag 217 stop: bool = False 218 219 def __init__(self): 220 super().__init__() 221 self.triggers = {} 222 self.trigger_cache = {} 223 self.to_create = deque() 224 self.to_delete = deque() 225 self.events = deque() 226 self.failed_triggers = deque() 227 228 def run(self): 229 """Sync entrypoint - just runs arun in an async loop.""" 230 # Pylint complains about this with a 3.6 base, can remove with 3.7+ 231 asyncio.run(self.arun()) # pylint: disable=no-member 232 233 async def arun(self): 234 """ 235 Main (asynchronous) logic loop. 236 237 The loop in here runs trigger addition/deletion/cleanup. Actual 238 triggers run in their own separate coroutines. 
239 """ 240 watchdog = create_task(self.block_watchdog()) 241 last_status = time.time() 242 while not self.stop: 243 # Run core logic 244 await self.create_triggers() 245 await self.delete_triggers() 246 await self.cleanup_finished_triggers() 247 # Sleep for a bit 248 await asyncio.sleep(1) 249 # Every minute, log status 250 if time.time() - last_status >= 60: 251 self.log.info("%i triggers currently running", len(self.triggers)) 252 last_status = time.time() 253 # Wait for watchdog to complete 254 await watchdog 255 256 async def create_triggers(self): 257 """ 258 Drain the to_create queue and create all triggers that have been 259 requested in the DB that we don't yet have. 260 """ 261 while self.to_create: 262 trigger_id, trigger_instance = self.to_create.popleft() 263 if trigger_id not in self.triggers: 264 self.triggers[trigger_id] = { 265 "task": create_task(self.run_trigger(trigger_id, trigger_instance)), 266 "name": f"{trigger_instance!r} (ID {trigger_id})", 267 "events": 0, 268 } 269 else: 270 self.log.warning("Trigger %s had insertion attempted twice", trigger_id) 271 await asyncio.sleep(0) 272 273 async def delete_triggers(self): 274 """ 275 Drain the to_delete queue and ensure all triggers that are not in the 276 DB are cancelled, so the cleanup job deletes them. 277 """ 278 while self.to_delete: 279 trigger_id = self.to_delete.popleft() 280 if trigger_id in self.triggers: 281 # We only delete if it did not exit already 282 self.triggers[trigger_id]["task"].cancel() 283 await asyncio.sleep(0) 284 285 async def cleanup_finished_triggers(self): 286 """ 287 Go through all trigger tasks (coroutines) and clean up entries for 288 ones that have exited, optionally warning users if the exit was 289 not normal. 290 """ 291 for trigger_id, details in list(self.triggers.items()): # pylint: disable=too-many-nested-blocks 292 if details["task"].done(): 293 # Check to see if it exited for good reasons 294 try: 295 result = details["task"].result() 296 except (asyncio.CancelledError, SystemExit, KeyboardInterrupt): 297 # These are "expected" exceptions and we stop processing here 298 # If we don't, then the system requesting a trigger be removed - 299 # which turns into CancelledError - results in a failure. 300 del self.triggers[trigger_id] 301 continue 302 except BaseException as e: 303 # This is potentially bad, so log it. 304 self.log.error("Trigger %s exited with error %s", details["name"], e) 305 else: 306 # See if they foolishly returned a TriggerEvent 307 if isinstance(result, TriggerEvent): 308 self.log.error( 309 "Trigger %s returned a TriggerEvent rather than yielding it", details["name"] 310 ) 311 # See if this exited without sending an event, in which case 312 # any task instances depending on it need to be failed 313 if details["events"] == 0: 314 self.log.error( 315 "Trigger %s exited without sending an event. Dependent tasks will be failed.", 316 details["name"], 317 ) 318 self.failed_triggers.append(trigger_id) 319 del self.triggers[trigger_id] 320 await asyncio.sleep(0) 321 322 async def block_watchdog(self): 323 """ 324 Watchdog loop that detects blocking (badly-written) triggers. 325 326 Triggers should be well-behaved async coroutines and await whenever 327 they need to wait; this loop tries to run every 100ms to see if 328 there are badly-written triggers taking longer than that and blocking 329 the event loop. 330 331 Unfortunately, we can't tell what trigger is blocking things, but 332 we can at least detect the top-level problem. 
333 """ 334 while not self.stop: 335 last_run = time.monotonic() 336 await asyncio.sleep(0.1) 337 # We allow a generous amount of buffer room for now, since it might 338 # be a busy event loop. 339 time_elapsed = time.monotonic() - last_run 340 if time_elapsed > 0.2: 341 self.log.error( 342 "Triggerer's async thread was blocked for %.2f seconds, " 343 "likely by a badly-written trigger. Set PYTHONASYNCIODEBUG=1 " 344 "to get more information on overrunning coroutines.", 345 time_elapsed, 346 ) 347 Stats.incr('triggers.blocked_main_thread') 348 349 # Async trigger logic 350 351 async def run_trigger(self, trigger_id, trigger): 352 """ 353 Wrapper which runs an actual trigger (they are async generators) 354 and pushes their events into our outbound event deque. 355 """ 356 self.log.info("Trigger %s starting", self.triggers[trigger_id]['name']) 357 try: 358 async for event in trigger.run(): 359 self.log.info("Trigger %s fired: %s", self.triggers[trigger_id]['name'], event) 360 self.triggers[trigger_id]["events"] += 1 361 self.events.append((trigger_id, event)) 362 finally: 363 # CancelledError will get injected when we're stopped - which is 364 # fine, the cleanup process will understand that, but we want to 365 # allow triggers a chance to cleanup, either in that case or if 366 # they exit cleanly. 367 trigger.cleanup() 368 369 # Main-thread sync API 370 371 def update_triggers(self, requested_trigger_ids: Set[int]): 372 """ 373 Called from the main thread to request that we update what 374 triggers we're running. 375 376 Works out the differences - ones to add, and ones to remove - then 377 adds them to the deques so the subthread can actually mutate the running 378 trigger set. 379 """ 380 # Note that `triggers` could be mutated by the other thread during this 381 # line's execution, but we consider that safe, since there's a strict 382 # add -> remove -> never again lifecycle this function is already 383 # handling. 384 current_trigger_ids = set(self.triggers.keys()) 385 # Work out the two difference sets 386 new_trigger_ids = requested_trigger_ids.difference(current_trigger_ids) 387 old_trigger_ids = current_trigger_ids.difference(requested_trigger_ids) 388 # Bulk-fetch new trigger records 389 new_triggers = Trigger.bulk_fetch(new_trigger_ids) 390 # Add in new triggers 391 for new_id in new_trigger_ids: 392 # Check it didn't vanish in the meantime 393 if new_id not in new_triggers: 394 self.log.warning("Trigger ID %s disappeared before we could start it", new_id) 395 continue 396 # Resolve trigger record into an actual class instance 397 try: 398 trigger_class = self.get_trigger_by_classpath(new_triggers[new_id].classpath) 399 except BaseException: 400 # Either the trigger code or the path to it is bad. Fail the trigger. 401 self.failed_triggers.append(new_id) 402 continue 403 self.to_create.append((new_id, trigger_class(**new_triggers[new_id].kwargs))) 404 # Remove old triggers 405 for old_id in old_trigger_ids: 406 self.to_delete.append(old_id) 407 408 def get_trigger_by_classpath(self, classpath: str) -> Type[BaseTrigger]: 409 """ 410 Gets a trigger class by its classpath ("path.to.module.classname") 411 412 Uses a cache dictionary to speed up lookups after the first time. 
413 """ 414 if classpath not in self.trigger_cache: 415 self.trigger_cache[classpath] = import_string(classpath) 416 return self.trigger_cache[classpath] 417 [end of airflow/jobs/triggerer_job.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
apache/airflow
5dd690b57a20ca944deb8d96e89ec6ae6161afeb
IntegrityError inserting into task_fail table with null execution_date from TI.handle_failure_with_callback ### Apache Airflow version 2.2.0 (latest released) ### Operating System Debian GNU/Linux 11 (bullseye) ### Versions of Apache Airflow Providers ``` apache-airflow-providers-amazon @ file:///root/.cache/pypoetry/artifacts/c9/69/16/ffa2eb7a2e6e850a7048eaf66b6c40c990ef7c58149f20d3d3f333a2e9/apache_airflow_providers_amazon-2.2.0-py3-none-any.whl apache-airflow-providers-celery @ file:///root/.cache/pypoetry/artifacts/6e/1b/2f/f968318a7474e979af4dc53893ecafe8cd11a98a94077a9c3c27304eb7/apache_airflow_providers_celery-2.1.0-py3-none-any.whl apache-airflow-providers-ftp @ file:///root/.cache/pypoetry/artifacts/8b/9a/dd/79a36c62bc7f37f98d0ea33652570e19272e8a7a2297db13a6785698d1/apache_airflow_providers_ftp-2.0.1-py3-none-any.whl apache-airflow-providers-http @ file:///root/.cache/pypoetry/artifacts/52/28/81/03a89147daf7daceb55f1218189d1c4af01c33c45849b568769ca6765f/apache_airflow_providers_http-2.0.1-py3-none-any.whl apache-airflow-providers-imap @ file:///root/.cache/pypoetry/artifacts/1c/5d/c5/269e8a8098e7017a26a2a376eb3020e1a864775b7ff310ed39e1bd503d/apache_airflow_providers_imap-2.0.1-py3-none-any.whl apache-airflow-providers-postgres @ file:///root/.cache/pypoetry/artifacts/fb/69/ac/e8e25a0f6a4b0daf162c81c9cfdbb164a93bef6bd652c1c00eee6e0815/apache_airflow_providers_postgres-2.3.0-py3-none-any.whl apache-airflow-providers-redis @ file:///root/.cache/pypoetry/artifacts/cf/2b/56/75563b6058fe45b70f93886dd92541e8349918eeea9d70c703816f2639/apache_airflow_providers_redis-2.0.1-py3-none-any.whl apache-airflow-providers-sqlite @ file:///root/.cache/pypoetry/artifacts/61/ba/e9/c0b4b7ef2599dbd902b32afc99f2620d8a616b3072122e90f591de4807/apache_airflow_providers_sqlite-2.0.1-py3-none-any.whl ``` ### Deployment Other Docker-based deployment ### Deployment details AWS ECS, Celery Executor, Postgres 13, S3 Logging, Sentry integration ### What happened Noticed our Sentry getting a lot of integrity errors inserting into the task_fail table with a null execution date. This seemed to be caused specifically by zombie task failures (We use AWS ECS Spot instances). Specifically this callback from the dag file processor: https://github.com/apache/airflow/blob/e6c56c4ae475605636f4a1b5ab3884383884a8cf/airflow/models/taskinstance.py#L1746 Adds a task_fail here: https://github.com/apache/airflow/blob/e6c56c4ae475605636f4a1b5ab3884383884a8cf/airflow/models/taskinstance.py#L1705 This blows up when it flushes further down the method. This i believe is because when the task instance is refreshed from the database the `self.dag_run` property is not populated. The proxy from `ti.execution_date` to `ti.dag_run.execution_date` then returns `None` causing our `NOT NULL` violation. 
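To make the failure path concrete, here is a sketch of the relevant lines inside `TaskInstance.handle_failure` (based on reading the code, not a verbatim excerpt), together with the obvious way to avoid the `NULL`: fetching the `DagRun` explicitly instead of relying on the not-yet-loaded `self.dag_run` relationship.

```python
# Sketch only - paraphrasing airflow/models/taskinstance.py, not quoting it.
# Inside TaskInstance.handle_failure, `task` is the task object and
# `session` is the active SQLAlchemy session.

# What happens today: `self.execution_date` proxies to `self.dag_run.execution_date`,
# but `dag_run` is not populated after refresh_from_db(), so the proxy returns None
# and the flush violates the NOT NULL constraint on task_fail.execution_date.
session.add(TaskFail(task, self.execution_date, self.start_date, self.end_date))

# A possible fix: look the DagRun up explicitly so execution_date is always set.
dag_run = self.get_dagrun(session=session)
session.add(TaskFail(task, dag_run.execution_date, self.start_date, self.end_date))
```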
### What you expected to happen Insert into task_fail successfully and trigger callback ### How to reproduce Run this dag: ```python import logging import time from datetime import datetime from airflow import DAG from airflow.operators.python import PythonOperator def long_running_task(): for i in range(60): time.sleep(5) logging.info("Slept for 5") def log_failure_dag(*args, **kwargs): logging.error("Our failure callback") dag = DAG( dag_id="test_null_task_fail", schedule_interval='@daily', catchup=True, start_date=datetime(2021, 10, 9), max_active_runs=1, max_active_tasks=1, on_failure_callback=log_failure_dag, ) with dag: PythonOperator( task_id="long_running", python_callable=long_running_task, on_failure_callback=log_failure_dag ) ``` Kill the celery worker whilst its executing the long_running tasks. Wait for the zombie reaper of the scheduler to begin and call the failure handler. ### Anything else _No response_ ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
2021-10-24T16:09:35Z
<patch> diff --git a/airflow/__init__.py b/airflow/__init__.py --- a/airflow/__init__.py +++ b/airflow/__init__.py @@ -36,7 +36,7 @@ __version__ = version.version -__all__ = ['__version__', 'login', 'DAG', 'PY36', 'PY37', 'PY38', 'PY39'] +__all__ = ['__version__', 'login', 'DAG', 'PY36', 'PY37', 'PY38', 'PY39', 'PY310'] # Make `airflow` an namespace package, supporting installing # airflow.providers.* in different locations (i.e. one in site, and one in user @@ -51,6 +51,7 @@ PY37 = sys.version_info >= (3, 7) PY38 = sys.version_info >= (3, 8) PY39 = sys.version_info >= (3, 9) +PY310 = sys.version_info >= (3, 10) def __getattr__(name): diff --git a/airflow/__main__.py b/airflow/__main__.py --- a/airflow/__main__.py +++ b/airflow/__main__.py @@ -19,13 +19,15 @@ # under the License. """Main executable module""" - import os +import warnings import argcomplete +from airflow import PY310 from airflow.cli import cli_parser from airflow.configuration import conf +from airflow.utils.docs import get_docs_url def main(): @@ -33,6 +35,12 @@ def main(): if conf.get("core", "security") == 'kerberos': os.environ['KRB5CCNAME'] = conf.get('kerberos', 'ccache') os.environ['KRB5_KTNAME'] = conf.get('kerberos', 'keytab') + if PY310: + docs_url = get_docs_url('installation/prerequisites.html') + warnings.warn( + "Python v3.10 is not official supported on this version of Airflow. Please be careful. " + f"For details, see: {docs_url}" + ) parser = cli_parser.get_parser() argcomplete.autocomplete(parser) diff --git a/airflow/_vendor/connexion/spec.py b/airflow/_vendor/connexion/spec.py --- a/airflow/_vendor/connexion/spec.py +++ b/airflow/_vendor/connexion/spec.py @@ -166,7 +166,7 @@ class Swagger2Specification(Specification): @classmethod def _set_defaults(cls, spec): spec.setdefault('produces', []) - spec.setdefault('consumes', ['application/json']) # type: List[str] + spec.setdefault('consumes', ['application/json']) spec.setdefault('definitions', {}) spec.setdefault('parameters', {}) spec.setdefault('responses', {}) diff --git a/airflow/cli/cli_parser.py b/airflow/cli/cli_parser.py --- a/airflow/cli/cli_parser.py +++ b/airflow/cli/cli_parser.py @@ -26,7 +26,7 @@ from functools import lru_cache from typing import Callable, Dict, Iterable, List, NamedTuple, Optional, Union -from airflow import settings +from airflow import PY37, settings from airflow.cli.commands.legacy_commands import check_legacy_command from airflow.configuration import conf from airflow.exceptions import AirflowException @@ -73,6 +73,9 @@ def _check_value(self, action, value): "To do it, run: pip install 'apache-airflow[cncf.kubernetes]'" ) raise ArgumentError(action, message) + if action.dest == 'subcommand' and value == 'triggerer': + if not PY37: + raise ArgumentError(action, 'triggerer subcommand only works with Python 3.7+') if action.choices is not None and value not in action.choices: check_legacy_command(action, value) @@ -722,7 +725,7 @@ def _check(value): # jobs check ARG_JOB_TYPE_FILTER = Arg( ('--job-type',), - choices=('BackfillJob', 'LocalTaskJob', 'SchedulerJob'), + choices=('BackfillJob', 'LocalTaskJob', 'SchedulerJob', 'TriggererJob'), action='store', help='The type of job(s) that will be checked.', ) diff --git a/airflow/cli/commands/dag_command.py b/airflow/cli/commands/dag_command.py --- a/airflow/cli/commands/dag_command.py +++ b/airflow/cli/commands/dag_command.py @@ -67,11 +67,11 @@ def dag_backfill(args, dag=None): if args.ignore_first_depends_on_past is False: args.ignore_first_depends_on_past = True - dag = dag or 
get_dag(args.subdir, args.dag_id) - if not args.start_date and not args.end_date: raise AirflowException("Provide a start_date and/or end_date") + dag = dag or get_dag(args.subdir, args.dag_id) + # If only one date is passed, using same as start and end args.end_date = args.end_date or args.start_date args.start_date = args.start_date or args.end_date diff --git a/airflow/jobs/__init__.py b/airflow/jobs/__init__.py --- a/airflow/jobs/__init__.py +++ b/airflow/jobs/__init__.py @@ -19,4 +19,5 @@ import airflow.jobs.backfill_job import airflow.jobs.base_job import airflow.jobs.local_task_job -import airflow.jobs.scheduler_job # noqa +import airflow.jobs.scheduler_job +import airflow.jobs.triggerer_job # noqa diff --git a/airflow/jobs/scheduler_job.py b/airflow/jobs/scheduler_job.py --- a/airflow/jobs/scheduler_job.py +++ b/airflow/jobs/scheduler_job.py @@ -27,7 +27,7 @@ import warnings from collections import defaultdict from datetime import timedelta -from typing import Collection, DefaultDict, Dict, List, Optional, Tuple +from typing import Collection, DefaultDict, Dict, Iterator, List, Optional, Tuple from sqlalchemy import and_, func, not_, or_, tuple_ from sqlalchemy.exc import OperationalError @@ -432,6 +432,9 @@ def _enqueue_task_instances_with_queued_state(self, task_instances: List[TI]) -> """ # actually enqueue them for ti in task_instances: + if ti.dag_run.state in State.finished: + ti.set_state(State.NONE) + continue command = ti.command_as_list( local=True, pickle_id=ti.dag_model.pickle_id, @@ -510,7 +513,15 @@ def _process_executor_events(self, session: Session = None) -> int: # Check state of finished tasks filter_for_tis = TI.filter_for_tis(tis_with_right_state) - tis: List[TI] = session.query(TI).filter(filter_for_tis).options(selectinload('dag_model')).all() + query = session.query(TI).filter(filter_for_tis).options(selectinload('dag_model')) + # row lock this entire set of taskinstances to make sure the scheduler doesn't fail when we have + # multi-schedulers + tis: Iterator[TI] = with_row_locks( + query, + of=TI, + session=session, + **skip_locked(session=session), + ) for ti in tis: try_number = ti_primary_key_to_try_number_map[ti.key.primary] buffer_key = ti.key.with_try_number(try_number) @@ -522,6 +533,31 @@ def _process_executor_events(self, session: Session = None) -> int: self.log.info("Setting external_id for %s to %s", ti, info) continue + msg = ( + "TaskInstance Finished: dag_id=%s, task_id=%s, run_id=%s, " + "run_start_date=%s, run_end_date=%s, " + "run_duration=%s, state=%s, executor_state=%s, try_number=%s, max_tries=%s, job_id=%s, " + "pool=%s, queue=%s, priority_weight=%d, operator=%s" + ) + self.log.info( + msg, + ti.dag_id, + ti.task_id, + ti.run_id, + ti.start_date, + ti.end_date, + ti.duration, + ti.state, + state, + try_number, + ti.max_tries, + ti.job_id, + ti.pool, + ti.queue, + ti.priority_weight, + ti.operator, + ) + if ti.try_number == buffer_key.try_number and ti.state == State.QUEUED: Stats.incr('scheduler.tasks.killed_externally') msg = ( @@ -755,7 +791,12 @@ def _do_scheduling(self, session) -> int: # Send the callbacks after we commit to ensure the context is up to date when it gets run for dag_run, callback_to_run in callback_tuples: - self._send_dag_callbacks_to_processor(dag_run, callback_to_run) + dag = self.dagbag.get_dag(dag_run.dag_id, session=session) + if not dag: + self.log.error("DAG '%s' not found in serialized_dag table", dag_run.dag_id) + continue + + self._send_dag_callbacks_to_processor(dag, callback_to_run) # Without this, the 
session has an invalid view of the DB session.expunge_all() @@ -832,30 +873,19 @@ def _create_dag_runs(self, dag_models: Collection[DagModel], session: Session) - existing_dagruns = ( session.query(DagRun.dag_id, DagRun.execution_date).filter(existing_dagruns_filter).all() ) - max_queued_dagruns = conf.getint('core', 'max_queued_runs_per_dag') - queued_runs_of_dags = defaultdict( + active_runs_of_dags = defaultdict( int, - session.query(DagRun.dag_id, func.count('*')) - .filter( # We use `list` here because SQLA doesn't accept a set - # We use set to avoid duplicate dag_ids - DagRun.dag_id.in_(list({dm.dag_id for dm in dag_models})), - DagRun.state == State.QUEUED, - ) - .group_by(DagRun.dag_id) - .all(), + DagRun.active_runs_of_dags(dag_ids=(dm.dag_id for dm in dag_models), session=session), ) for dag_model in dag_models: - # Lets quickly check if we have exceeded the number of queued dagruns per dags - total_queued = queued_runs_of_dags[dag_model.dag_id] - if total_queued >= max_queued_dagruns: - continue dag = self.dagbag.get_dag(dag_model.dag_id, session=session) if not dag: self.log.error("DAG '%s' not found in serialized_dag table", dag_model.dag_id) continue + dag_hash = self.dagbag.dags_hash.get(dag.dag_id) data_interval = dag.get_next_data_interval(dag_model) @@ -878,12 +908,28 @@ def _create_dag_runs(self, dag_models: Collection[DagModel], session: Session) - dag_hash=dag_hash, creating_job_id=self.id, ) - queued_runs_of_dags[dag_model.dag_id] += 1 - dag_model.calculate_dagrun_date_fields(dag, data_interval) - + active_runs_of_dags[dag.dag_id] += 1 + self._update_dag_next_dagruns(dag, dag_model, active_runs_of_dags[dag.dag_id]) # TODO[HA]: Should we do a session.flush() so we don't have to keep lots of state/object in # memory for larger dags? or expunge_all() + def _update_dag_next_dagruns(self, dag, dag_model: DagModel, total_active_runs) -> None: + """ + Update the next_dagrun, next_dagrun_data_interval_start/end + and next_dagrun_create_after for this dag. 
+ """ + if total_active_runs >= dag_model.max_active_runs: + self.log.info( + "DAG %s is at (or above) max_active_runs (%d of %d), not creating any more runs", + dag_model.dag_id, + total_active_runs, + dag_model.max_active_runs, + ) + dag_model.next_dagrun_create_after = None + else: + data_interval = dag.get_next_data_interval(dag_model) + dag_model.calculate_dagrun_date_fields(dag, data_interval) + def _start_queued_dagruns( self, session: Session, @@ -892,15 +938,8 @@ def _start_queued_dagruns( dag_runs = self._get_next_dagruns_to_examine(State.QUEUED, session) active_runs_of_dags = defaultdict( - lambda: 0, - session.query(DagRun.dag_id, func.count('*')) - .filter( # We use `list` here because SQLA doesn't accept a set - # We use set to avoid duplicate dag_ids - DagRun.dag_id.in_(list({dr.dag_id for dr in dag_runs})), - DagRun.state == State.RUNNING, - ) - .group_by(DagRun.dag_id) - .all(), + int, + DagRun.active_runs_of_dags((dr.dag_id for dr in dag_runs), only_running=True, session=session), ) def _update_state(dag: DAG, dag_run: DagRun): @@ -951,6 +990,7 @@ def _schedule_dag_run( if not dag: self.log.error("Couldn't find dag %s in DagBag/DB!", dag_run.dag_id) return 0 + dag_model = DM.get_dagmodel(dag.dag_id, session) if ( dag_run.start_date @@ -969,6 +1009,9 @@ def _schedule_dag_run( session.merge(task_instance) session.flush() self.log.info("Run %s of %s has timed-out", dag_run.run_id, dag_run.dag_id) + active_runs = dag.get_num_active_runs(only_running=False, session=session) + # Work out if we should allow creating a new DagRun now? + self._update_dag_next_dagruns(dag, dag_model, active_runs) callback_to_execute = DagCallbackRequest( full_filepath=dag.fileloc, @@ -979,7 +1022,7 @@ def _schedule_dag_run( ) # Send SLA & DAG Success/Failure Callbacks to be executed - self._send_dag_callbacks_to_processor(dag_run, callback_to_execute) + self._send_dag_callbacks_to_processor(dag, callback_to_execute) return 0 @@ -990,6 +1033,10 @@ def _schedule_dag_run( self._verify_integrity_if_dag_changed(dag_run=dag_run, session=session) # TODO[HA]: Rename update_state -> schedule_dag_run, ?? something else? schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False) + if dag_run.state in State.finished: + active_runs = dag.get_num_active_runs(only_running=False, session=session) + # Work out if we should allow creating a new DagRun now? + self._update_dag_next_dagruns(dag, dag_model, active_runs) # This will do one query per dag run. 
We "could" build up a complex # query to update all the TIs across all the execution dates and dag @@ -1015,13 +1062,10 @@ def _verify_integrity_if_dag_changed(self, dag_run: DagRun, session=None): # Verify integrity also takes care of session.flush dag_run.verify_integrity(session=session) - def _send_dag_callbacks_to_processor( - self, dag_run: DagRun, callback: Optional[DagCallbackRequest] = None - ): + def _send_dag_callbacks_to_processor(self, dag: DAG, callback: Optional[DagCallbackRequest] = None): if not self.processor_agent: raise ValueError("Processor agent is not started.") - dag = dag_run.get_dag() self._send_sla_callbacks_to_processor(dag) if callback: self.processor_agent.send_callback_to_execute(callback) diff --git a/airflow/migrations/versions/7b2661a43ba3_taskinstance_keyed_to_dagrun.py b/airflow/migrations/versions/7b2661a43ba3_taskinstance_keyed_to_dagrun.py --- a/airflow/migrations/versions/7b2661a43ba3_taskinstance_keyed_to_dagrun.py +++ b/airflow/migrations/versions/7b2661a43ba3_taskinstance_keyed_to_dagrun.py @@ -182,10 +182,6 @@ def upgrade(): op.add_column('task_instance', sa.Column('run_id', type_=string_id_col_type, nullable=True)) op.add_column('task_reschedule', sa.Column('run_id', type_=string_id_col_type, nullable=True)) - # Then update the new column by selecting the right value from DagRun - update_query = _multi_table_update(dialect_name, task_instance, task_instance.c.run_id) - op.execute(update_query) - # # TaskReschedule has a FK to TaskInstance, so we have to update that before # we can drop the TI.execution_date column @@ -204,29 +200,81 @@ def upgrade(): batch_op.drop_index('task_reschedule_dag_task_date_fkey') batch_op.drop_index('idx_task_reschedule_dag_task_date') + # Then update the new column by selecting the right value from DagRun + # But first we will drop and recreate indexes to make it faster + if dialect_name == 'postgresql': + # Recreate task_instance, without execution_date and with dagrun.run_id + op.execute( + """ + CREATE TABLE new_task_instance AS SELECT + ti.task_id, + ti.dag_id, + dag_run.run_id, + ti.start_date, + ti.end_date, + ti.duration, + ti.state, + ti.try_number, + ti.hostname, + ti.unixname, + ti.job_id, + ti.pool, + ti.queue, + ti.priority_weight, + ti.operator, + ti.queued_dttm, + ti.pid, + ti.max_tries, + ti.executor_config, + ti.pool_slots, + ti.queued_by_job_id, + ti.external_executor_id, + ti.trigger_id, + ti.trigger_timeout, + ti.next_method, + ti.next_kwargs + FROM task_instance ti + INNER JOIN dag_run ON dag_run.dag_id = ti.dag_id AND dag_run.execution_date = ti.execution_date; + """ + ) + op.drop_table('task_instance') + op.rename_table('new_task_instance', 'task_instance') + + # Fix up columns after the 'create table as select' + with op.batch_alter_table('task_instance', schema=None) as batch_op: + batch_op.alter_column( + 'pool', existing_type=string_id_col_type, existing_nullable=True, nullable=False + ) + batch_op.alter_column('max_tries', existing_type=sa.Integer(), server_default="-1") + batch_op.alter_column( + 'pool_slots', existing_type=sa.Integer(), existing_nullable=True, nullable=False + ) + else: + update_query = _multi_table_update(dialect_name, task_instance, task_instance.c.run_id) + op.execute(update_query) + with op.batch_alter_table('task_instance', schema=None) as batch_op: + if dialect_name != 'postgresql': + # TODO: Is this right for non-postgres? 
+ if dialect_name == 'mssql': + constraints = get_table_constraints(conn, "task_instance") + pk, _ = constraints['PRIMARY KEY'].popitem() + batch_op.drop_constraint(pk, type_='primary') + elif dialect_name not in ('sqlite'): + batch_op.drop_constraint('task_instance_pkey', type_='primary') + batch_op.drop_index('ti_dag_date') + batch_op.drop_index('ti_state_lkp') + batch_op.drop_column('execution_date') + # Then make it non-nullable batch_op.alter_column( 'run_id', existing_type=string_id_col_type, existing_nullable=True, nullable=False ) - batch_op.alter_column( 'dag_id', existing_type=string_id_col_type, existing_nullable=True, nullable=False ) - batch_op.alter_column('execution_date', existing_type=dt_type, existing_nullable=True, nullable=False) - # TODO: Is this right for non-postgres? - if dialect_name == 'mssql': - constraints = get_table_constraints(conn, "task_instance") - pk, _ = constraints['PRIMARY KEY'].popitem() - batch_op.drop_constraint(pk, type_='primary') - elif dialect_name not in ('sqlite'): - batch_op.drop_constraint('task_instance_pkey', type_='primary') batch_op.create_primary_key('task_instance_pkey', ['dag_id', 'task_id', 'run_id']) - - batch_op.drop_index('ti_dag_date') - batch_op.drop_index('ti_state_lkp') - batch_op.drop_column('execution_date') batch_op.create_foreign_key( 'task_instance_dag_run_fkey', 'dag_run', @@ -237,6 +285,15 @@ def upgrade(): batch_op.create_index('ti_dag_run', ['dag_id', 'run_id']) batch_op.create_index('ti_state_lkp', ['dag_id', 'task_id', 'run_id', 'state']) + if dialect_name == 'postgresql': + batch_op.create_index('ti_dag_state', ['dag_id', 'state']) + batch_op.create_index('ti_job_id', ['job_id']) + batch_op.create_index('ti_pool', ['pool', 'state', 'priority_weight']) + batch_op.create_index('ti_state', ['state']) + batch_op.create_foreign_key( + 'task_instance_trigger_id_fkey', 'trigger', ['trigger_id'], ['id'], ondelete="CASCADE" + ) + batch_op.create_index('ti_trigger_id', ['trigger_id']) with op.batch_alter_table('task_reschedule', schema=None) as batch_op: batch_op.drop_column('execution_date') diff --git a/airflow/models/dag.py b/airflow/models/dag.py --- a/airflow/models/dag.py +++ b/airflow/models/dag.py @@ -1138,7 +1138,7 @@ def get_active_runs(self): return active_dates @provide_session - def get_num_active_runs(self, external_trigger=None, session=None): + def get_num_active_runs(self, external_trigger=None, only_running=True, session=None): """ Returns the number of active "running" dag runs @@ -1148,11 +1148,11 @@ def get_num_active_runs(self, external_trigger=None, session=None): :return: number greater than 0 for active dag runs """ # .count() is inefficient - query = ( - session.query(func.count()) - .filter(DagRun.dag_id == self.dag_id) - .filter(DagRun.state == State.RUNNING) - ) + query = session.query(func.count()).filter(DagRun.dag_id == self.dag_id) + if only_running: + query = query.filter(DagRun.state == State.RUNNING) + else: + query = query.filter(DagRun.state.in_({State.RUNNING, State.QUEUED})) if external_trigger is not None: query = query.filter( @@ -2425,6 +2425,10 @@ def bulk_write_to_db(cls, dags: Collection["DAG"], session=None): ) most_recent_runs = {run.dag_id: run for run in most_recent_runs_iter} + # Get number of active dagruns for all dags we are processing as a single query. 
+ + num_active_runs = DagRun.active_runs_of_dags(dag_ids=existing_dag_ids, session=session) + filelocs = [] for orm_dag in sorted(orm_dags, key=lambda d: d.dag_id): @@ -2453,7 +2457,10 @@ def bulk_write_to_db(cls, dags: Collection["DAG"], session=None): data_interval = None else: data_interval = dag.get_run_data_interval(run) - orm_dag.calculate_dagrun_date_fields(dag, data_interval) + if num_active_runs.get(dag.dag_id, 0) >= orm_dag.max_active_runs: + orm_dag.next_dagrun_create_after = None + else: + orm_dag.calculate_dagrun_date_fields(dag, data_interval) for orm_tag in list(orm_dag.tags): if orm_tag.name not in set(dag.tags): @@ -2631,8 +2638,8 @@ def validate_schedule_and_params(self): return for k, v in self.params.items(): - # As type can be an array, we would check if `null` is a allowed type or not - if v.default is None and ("type" not in v.schema or "null" not in v.schema["type"]): + # As type can be an array, we would check if `null` is an allowed type or not + if not v.has_value and ("type" not in v.schema or "null" not in v.schema["type"]): raise AirflowException( "DAG Schedule must be None, if there are any required params without default values" ) diff --git a/airflow/models/dagrun.py b/airflow/models/dagrun.py --- a/airflow/models/dagrun.py +++ b/airflow/models/dagrun.py @@ -17,7 +17,7 @@ # under the License. import warnings from datetime import datetime -from typing import TYPE_CHECKING, Any, Iterable, List, NamedTuple, Optional, Tuple, Union +from typing import TYPE_CHECKING, Any, Dict, Iterable, List, NamedTuple, Optional, Tuple, Union from sqlalchemy import ( Boolean, @@ -207,6 +207,22 @@ def refresh_from_db(self, session: Session = None): self.id = dr.id self.state = dr.state + @classmethod + @provide_session + def active_runs_of_dags(cls, dag_ids=None, only_running=False, session=None) -> Dict[str, int]: + """Get the number of active dag runs for each dag.""" + query = session.query(cls.dag_id, func.count('*')) + if dag_ids is not None: + # 'set' called to avoid duplicate dag_ids, but converted back to 'list' + # because SQLAlchemy doesn't accept a set here. 
+ query = query.filter(cls.dag_id.in_(list(set(dag_ids)))) + if only_running: + query = query.filter(cls.state == State.RUNNING) + else: + query = query.filter(cls.state.in_([State.RUNNING, State.QUEUED])) + query = query.group_by(cls.dag_id) + return {dag_id: count for dag_id, count in query.all()} + @classmethod def next_dagruns_to_examine( cls, @@ -526,6 +542,31 @@ def update_state( else: self.set_state(State.RUNNING) + if self._state == State.FAILED or self._state == State.SUCCESS: + msg = ( + "DagRun Finished: dag_id=%s, execution_date=%s, run_id=%s, " + "run_start_date=%s, run_end_date=%s, run_duration=%s, " + "state=%s, external_trigger=%s, run_type=%s, " + "data_interval_start=%s, data_interval_end=%s, dag_hash=%s" + ) + self.log.info( + msg, + self.dag_id, + self.execution_date, + self.run_id, + self.start_date, + self.end_date, + (self.end_date - self.start_date).total_seconds() + if self.start_date and self.end_date + else None, + self._state, + self.external_trigger, + self.run_type, + self.data_interval_start, + self.data_interval_end, + self.dag_hash, + ) + self._emit_true_scheduling_delay_stats_for_finished_state(finished_tasks) self._emit_duration_stats_for_finished_state() diff --git a/airflow/models/param.py b/airflow/models/param.py --- a/airflow/models/param.py +++ b/airflow/models/param.py @@ -24,6 +24,16 @@ from airflow.exceptions import AirflowException +class NoValueSentinel: + """Sentinel class used to distinguish between None and no passed value""" + + def __str__(self): + return "NoValueSentinel" + + def __repr__(self): + return "NoValueSentinel" + + class Param: """ Class to hold the default value of a Param and rule set to do the validations. Without the rule set @@ -38,22 +48,25 @@ class Param: :type schema: dict """ - def __init__(self, default: Any = None, description: str = None, **kwargs): - self.default = default + __NO_VALUE_SENTINEL = NoValueSentinel() + + def __init__(self, default: Any = __NO_VALUE_SENTINEL, description: str = None, **kwargs): + self.value = default self.description = description self.schema = kwargs.pop('schema') if 'schema' in kwargs else kwargs - # If default is not None, then validate it once, may raise ValueError - if default: + # If we have a value, validate it once. May raise ValueError. + if self.has_value: try: - jsonschema.validate(self.default, self.schema, format_checker=FormatChecker()) + jsonschema.validate(self.value, self.schema, format_checker=FormatChecker()) except ValidationError as err: raise ValueError(err) - def resolve(self, value: Optional[Any] = None, suppress_exception: bool = False) -> Any: + def resolve(self, value: Optional[Any] = __NO_VALUE_SENTINEL, suppress_exception: bool = False) -> Any: """ Runs the validations and returns the Param's final value. - May raise ValueError on failed validations. + May raise ValueError on failed validations, or TypeError + if no value is passed and no value already exists. :param value: The value to be updated for the Param :type value: Optional[Any] @@ -61,14 +74,18 @@ def resolve(self, value: Optional[Any] = None, suppress_exception: bool = False) If true and validations fails, the return value would be None. 
:type suppress_exception: bool """ + final_val = value if value != self.__NO_VALUE_SENTINEL else self.value + if isinstance(final_val, NoValueSentinel): + if suppress_exception: + return None + raise TypeError("No value passed and Param has no default value") try: - final_val = value or self.default jsonschema.validate(final_val, self.schema, format_checker=FormatChecker()) - self.default = final_val except ValidationError as err: if suppress_exception: return None raise ValueError(err) from None + self.value = final_val return final_val def dump(self) -> dict: @@ -77,6 +94,10 @@ def dump(self) -> dict: out_dict.update(self.__dict__) return out_dict + @property + def has_value(self) -> bool: + return not isinstance(self.value, NoValueSentinel) + class ParamsDict(dict): """ diff --git a/airflow/models/taskinstance.py b/airflow/models/taskinstance.py --- a/airflow/models/taskinstance.py +++ b/airflow/models/taskinstance.py @@ -27,7 +27,7 @@ from datetime import datetime, timedelta from functools import partial from tempfile import NamedTemporaryFile -from typing import IO, TYPE_CHECKING, Any, Dict, Iterable, List, NamedTuple, Optional, Tuple, Union +from typing import IO, TYPE_CHECKING, Any, Callable, Dict, Iterable, List, NamedTuple, Optional, Tuple, Union from urllib.parse import quote import dill @@ -1280,6 +1280,14 @@ def _log_state(self, lead_msg: str = ''): self._date_or_empty('end_date'), ) + # Ensure we unset next_method and next_kwargs to ensure that any + # retries don't re-use them. + def clear_next_method_args(self): + self.log.debug("Clearing next_method and next_kwargs.") + + self.next_method = None + self.next_kwargs = None + @provide_session @Sentry.enrich_errors def _run_raw_task( @@ -1363,9 +1371,15 @@ def _run_raw_task( session.commit() raise except AirflowException as e: + if not test_mode: + self.refresh_from_db(lock_for_update=True, session=session) # for case when task is marked as success/failed externally - # current behavior doesn't hit the success callback - if self.state in {State.SUCCESS, State.FAILED}: + # or dagrun timed out and task is marked as skipped + # current behavior doesn't hit the callbacks + if self.state in State.finished: + self.clear_next_method_args() + session.merge(self) + session.commit() return else: self.handle_failure(e, test_mode, error_file=error_file, session=session) @@ -1379,6 +1393,7 @@ def _run_raw_task( Stats.incr(f'ti.finish.{self.task.dag_id}.{self.task.task_id}.{self.state}') # Recording SKIPPED or SUCCESS + self.clear_next_method_args() self.end_date = timezone.utcnow() self._log_state() self.set_duration() @@ -1664,6 +1679,8 @@ def _handle_reschedule(self, actual_start_date, reschedule_exception, test_mode= # to same log file. self._try_number -= 1 + self.clear_next_method_args() + session.merge(self) session.commit() self.log.info('Rescheduling task, marking task as UP_FOR_RESCHEDULE') @@ -1702,12 +1719,10 @@ def handle_failure( session.add(Log(State.FAILED, self)) # Log failure duration - session.add(TaskFail(task, self.execution_date, self.start_date, self.end_date)) + dag_run = self.get_dagrun(session=session) # self.dag_run not populated by refresh_from_db + session.add(TaskFail(task, dag_run.execution_date, self.start_date, self.end_date)) - # Ensure we unset next_method and next_kwargs to ensure that any - # retries don't re-use them. 
- self.next_method = None - self.next_kwargs = None + self.clear_next_method_args() # Set state correctly and figure out how to log it and decide whether # to email @@ -1774,6 +1789,7 @@ def get_template_context(self, session: Session = None, ignore_param_exceptions: integrate_macros_plugins() dag_run = self.get_dagrun(session) + data_interval = dag.get_run_data_interval(dag_run) params = ParamsDict(suppress_exception=ignore_param_exceptions) @@ -1784,17 +1800,16 @@ def get_template_context(self, session: Session = None, ignore_param_exceptions: if conf.getboolean('core', 'dag_run_conf_overrides_params'): self.overwrite_params_with_dag_run_conf(params=params, dag_run=dag_run) - interval_start = dag.get_run_data_interval(dag_run).start - ds = interval_start.strftime('%Y-%m-%d') + logical_date = timezone.coerce_datetime(self.execution_date) + ds = logical_date.strftime('%Y-%m-%d') + ds_nodash = ds.replace('-', '') + ts = logical_date.isoformat() + ts_nodash = logical_date.strftime('%Y%m%dT%H%M%S') + ts_nodash_with_tz = ts.replace('-', '').replace(':', '') # Now validates Params and convert them into a simple dict task.params = params.validate() - ds_nodash = ds.replace('-', '') - ts = interval_start.isoformat() - ts_nodash = interval_start.strftime('%Y%m%dT%H%M%S') - ts_nodash_with_tz = ts.replace('-', '').replace(':', '') - @cache # Prevent multiple database access. def _get_previous_dagrun_success() -> Optional["DagRun"]: return self.get_previous_dagrun(state=State.SUCCESS, session=session) @@ -1906,14 +1921,23 @@ def get( # Create lazy proxies for deprecated stuff. - def deprecated_proxy(func, *, key, replacement=None) -> lazy_object_proxy.Proxy: + def deprecated_proxy( + func: Callable[[], Any], + *, + key: str, + replacements: Optional[List[str]] = None, + ) -> lazy_object_proxy.Proxy: def deprecated_func(): message = ( f"Accessing {key!r} from the template is deprecated and " f"will be removed in a future version." ) - if replacement: - message += f" Please use {replacement!r} instead." + if replacements: + display_except_last = ", ".join(repr(r) for r in replacements[:-1]) + if display_except_last: + message += f" Please use {display_except_last} or {replacements[-1]!r} instead." + else: + message += f" Please use {replacements[-1]!r} instead." 
warnings.warn(message, DeprecationWarning) return func() @@ -1986,27 +2010,28 @@ def get_prev_ds_nodash() -> Optional[str]: 'conf': conf, 'dag': dag, 'dag_run': dag_run, - 'data_interval_end': timezone.coerce_datetime(dag_run.data_interval_end), - 'data_interval_start': timezone.coerce_datetime(dag_run.data_interval_start), + 'data_interval_end': timezone.coerce_datetime(data_interval.end), + 'data_interval_start': timezone.coerce_datetime(data_interval.start), 'ds': ds, 'ds_nodash': ds_nodash, 'execution_date': deprecated_proxy( - lambda: timezone.coerce_datetime(self.execution_date), + lambda: logical_date, key='execution_date', - replacement='data_interval_start', + replacements=['logical_date', 'data_interval_start'], ), 'inlets': task.inlets, + 'logical_date': logical_date, 'macros': macros, - 'next_ds': deprecated_proxy(get_next_ds, key="next_ds", replacement="data_interval_end | ds"), + 'next_ds': deprecated_proxy(get_next_ds, key="next_ds", replacements=["data_interval_end | ds"]), 'next_ds_nodash': deprecated_proxy( get_next_ds_nodash, key="next_ds_nodash", - replacement="data_interval_end | ds_nodash", + replacements=["data_interval_end | ds_nodash"], ), 'next_execution_date': deprecated_proxy( get_next_execution_date, key='next_execution_date', - replacement='data_interval_end', + replacements=['data_interval_end'], ), 'outlets': task.outlets, 'params': task.params, @@ -2018,7 +2043,7 @@ def get_prev_ds_nodash() -> Optional[str]: 'prev_execution_date_success': deprecated_proxy( lambda: self.get_previous_execution_date(state=State.SUCCESS, session=session), key='prev_execution_date_success', - replacement='prev_data_interval_start_success', + replacements=['prev_data_interval_start_success'], ), 'prev_start_date_success': lazy_object_proxy.Proxy(get_prev_start_date_success), 'run_id': self.run_id, diff --git a/airflow/models/xcom.py b/airflow/models/xcom.py --- a/airflow/models/xcom.py +++ b/airflow/models/xcom.py @@ -64,6 +64,7 @@ class BaseXCom(Base, LoggingMixin): BaseXCom.execution_date == foreign(DagRun.execution_date) )""", uselist=False, + passive_deletes="all", ) run_id = association_proxy("dag_run", "run_id") diff --git a/airflow/sentry.py b/airflow/sentry.py --- a/airflow/sentry.py +++ b/airflow/sentry.py @@ -109,7 +109,7 @@ def __init__(self): ", ".join(unsupported_options), ) - sentry_config_opts['before_send'] = conf.getimport('sentry', 'before_send') + sentry_config_opts['before_send'] = conf.getimport('sentry', 'before_send', fallback=None) if dsn: sentry_sdk.init(dsn=dsn, integrations=integrations, **sentry_config_opts) diff --git a/airflow/serialization/serialized_objects.py b/airflow/serialization/serialized_objects.py --- a/airflow/serialization/serialized_objects.py +++ b/airflow/serialization/serialized_objects.py @@ -409,6 +409,34 @@ def _value_is_hardcoded_default(cls, attrname: str, value: Any, instance: Any) - return True return False + @classmethod + def _serialize_params_dict(cls, params: ParamsDict): + """Serialize Params dict for a DAG/Task""" + serialized_params = {} + for k, v in params.items(): + # TODO: As of now, we would allow serialization of params which are of type Param only + if f'{v.__module__}.{v.__class__.__name__}' == 'airflow.models.param.Param': + kwargs = v.dump() + kwargs['default'] = kwargs.pop('value') + serialized_params[k] = kwargs + else: + raise ValueError('Params to a DAG or a Task can be only of type airflow.models.param.Param') + return serialized_params + + @classmethod + def _deserialize_params_dict(cls, encoded_params: 
Dict) -> ParamsDict: + """Deserialize a DAGs Params dict""" + op_params = {} + for k, v in encoded_params.items(): + if isinstance(v, dict) and "__class" in v: + param_class = import_string(v['__class']) + op_params[k] = param_class(**v) + else: + # Old style params, upgrade it + op_params[k] = Param(v) + + return ParamsDict(op_params) + class DependencyDetector: """Detects dependencies between DAGs.""" @@ -517,7 +545,7 @@ def serialize_operator(cls, op: BaseOperator) -> Dict[str, Any]: serialize_op[template_field] = serialize_template_field(value) if op.params: - serialize_op['params'] = cls._serialize_operator_params(op.params) + serialize_op['params'] = cls._serialize_params_dict(op.params) return serialize_op @@ -584,7 +612,7 @@ def deserialize_operator(cls, encoded_op: Dict[str, Any]) -> BaseOperator: elif k == "deps": v = cls._deserialize_deps(v) elif k == "params": - v = cls._deserialize_operator_params(v) + v = cls._deserialize_params_dict(v) elif k in cls._decorated_fields or k not in op.get_serialized_fields(): v = cls._deserialize(v) # else use v as it is @@ -721,29 +749,6 @@ def _serialize_operator_extra_links(cls, operator_extra_links: Iterable[BaseOper return serialize_operator_extra_links - @classmethod - def _deserialize_operator_params(cls, encoded_op_params: Dict) -> Dict[str, Param]: - """Deserialize Params dict of a operator""" - op_params = {} - for k, v in encoded_op_params.items(): - param_class = import_string(v['__class']) - del v['__class'] - op_params[k] = param_class(**v) - - return ParamsDict(op_params) - - @classmethod - def _serialize_operator_params(cls, op_params: ParamsDict): - """Serialize Params dict of a operator""" - serialized_params = {} - for k, v in op_params.items(): - # TODO: As of now, we would allow serialization of params which are of type Param only - if f'{v.__module__}.{v.__class__.__name__}' == 'airflow.models.param.Param': - serialized_params[k] = v.dump() - else: - raise ValueError('Params to a Task can be only of type airflow.models.param.Param') - return serialized_params - class SerializedDAG(DAG, BaseSerialization): """ @@ -802,7 +807,7 @@ def serialize_dag(cls, dag: DAG) -> dict: # Edge info in the JSON exactly matches our internal structure serialize_dag["edge_info"] = dag.edge_info - serialize_dag["params"] = cls._serialize_dag_params(dag.params) + serialize_dag["params"] = cls._serialize_params_dict(dag.params) # has_on_*_callback are only stored if the value is True, as the default is False if dag.has_on_success_callback: @@ -843,7 +848,7 @@ def deserialize_dag(cls, encoded_dag: Dict[str, Any]) -> 'SerializedDAG': elif k in cls._decorated_fields: v = cls._deserialize(v) elif k == "params": - v = cls._deserialize_dag_params(v) + v = cls._deserialize_params_dict(v) # else use v as it is setattr(dag, k, v) @@ -915,29 +920,6 @@ def from_dict(cls, serialized_obj: dict) -> 'SerializedDAG': raise ValueError(f"Unsure how to deserialize version {ver!r}") return cls.deserialize_dag(serialized_obj['dag']) - @classmethod - def _serialize_dag_params(cls, dag_params: ParamsDict): - """Serialize Params dict for a DAG""" - serialized_params = {} - for k, v in dag_params.items(): - # TODO: As of now, we would allow serialization of params which are of type Param only - if f'{v.__module__}.{v.__class__.__name__}' == 'airflow.models.param.Param': - serialized_params[k] = v.dump() - else: - raise ValueError('Params to a DAG can be only of type airflow.models.param.Param') - return serialized_params - - @classmethod - def 
_deserialize_dag_params(cls, encoded_dag_params: Dict) -> ParamsDict: - """Deserialize a DAGs Params dict""" - op_params = {} - for k, v in encoded_dag_params.items(): - param_class = import_string(v['__class']) - del v['__class'] - op_params[k] = param_class(**v) - - return ParamsDict(op_params) - class SerializedTaskGroup(TaskGroup, BaseSerialization): """A JSON serializable representation of TaskGroup.""" diff --git a/airflow/settings.py b/airflow/settings.py --- a/airflow/settings.py +++ b/airflow/settings.py @@ -344,24 +344,24 @@ def configure_adapters(): """Register Adapters and DB Converters""" from pendulum import DateTime as Pendulum - try: + if SQL_ALCHEMY_CONN.startswith('sqlite'): from sqlite3 import register_adapter register_adapter(Pendulum, lambda val: val.isoformat(' ')) - except ImportError: - pass - try: - import MySQLdb.converters - MySQLdb.converters.conversions[Pendulum] = MySQLdb.converters.DateTime2literal - except ImportError: - pass - try: - import pymysql.converters + if SQL_ALCHEMY_CONN.startswith('mysql'): + try: + import MySQLdb.converters - pymysql.converters.conversions[Pendulum] = pymysql.converters.escape_datetime - except ImportError: - pass + MySQLdb.converters.conversions[Pendulum] = MySQLdb.converters.DateTime2literal + except ImportError: + pass + try: + import pymysql.converters + + pymysql.converters.conversions[Pendulum] = pymysql.converters.escape_datetime + except ImportError: + pass def validate_session(): @@ -563,3 +563,6 @@ def initialize(): # # DASHBOARD_UIALERTS: List["UIAlert"] DASHBOARD_UIALERTS = [] + +# Prefix used to identify tables holding data moved during migration. +AIRFLOW_MOVED_TABLE_PREFIX = "_airflow_moved" diff --git a/airflow/task/task_runner/base_task_runner.py b/airflow/task/task_runner/base_task_runner.py --- a/airflow/task/task_runner/base_task_runner.py +++ b/airflow/task/task_runner/base_task_runner.py @@ -66,7 +66,7 @@ def __init__(self, local_task_job): # want to have to specify them in the sudo call - they would show # up in `ps` that way! And run commands now, as the other user # might not be able to run the cmds to get credentials - cfg_path = tmp_configuration_copy(chmod=0o600) + cfg_path = tmp_configuration_copy(chmod=0o600, include_env=True, include_cmds=True) # Give ownership of file to user; only they can read and write subprocess.call(['sudo', 'chown', self.run_as_user, cfg_path], close_fds=True) @@ -83,7 +83,7 @@ def __init__(self, local_task_job): # we are running as the same user, and can pass through environment # variables then we don't need to include those in the config copy # - the runner can read/execute those values as it needs - cfg_path = tmp_configuration_copy(chmod=0o600) + cfg_path = tmp_configuration_copy(chmod=0o600, include_env=False, include_cmds=False) self._error_file = NamedTemporaryFile(delete=True) if self.run_as_user: diff --git a/airflow/timetables/interval.py b/airflow/timetables/interval.py --- a/airflow/timetables/interval.py +++ b/airflow/timetables/interval.py @@ -74,16 +74,22 @@ def next_dagrun_info( earliest = restriction.earliest if not restriction.catchup: earliest = self._skip_to_latest(earliest) + elif earliest is not None: + earliest = self._align(earliest) if last_automated_data_interval is None: # First run; schedule the run at the first available time matching # the schedule, and retrospectively create a data interval for it. if earliest is None: return None - start = self._align(earliest) - else: - # There's a previous run. 
Create a data interval starting from when - # the end of the previous interval. - start = last_automated_data_interval.end + start = earliest + else: # There's a previous run. + if earliest is not None: + # Catchup is False or DAG has new start date in the future. + # Make sure we get the later one. + start = max(last_automated_data_interval.end, earliest) + else: + # Data interval starts from the end of the previous interval. + start = last_automated_data_interval.end if restriction.latest is not None and start > restriction.latest: return None end = self._get_next(start) @@ -183,8 +189,8 @@ def _get_prev(self, current: DateTime) -> DateTime: def _align(self, current: DateTime) -> DateTime: """Get the next scheduled time. - This is ``current + interval``, unless ``current`` is first interval, - then ``current`` is returned. + This is ``current + interval``, unless ``current`` falls right on the + interval boundary, when ``current`` is returned. """ next_time = self._get_next(current) if self._get_prev(next_time) != current: @@ -199,14 +205,14 @@ def _skip_to_latest(self, earliest: Optional[DateTime]) -> DateTime: This is slightly different from the delta version at terminal values. If the next schedule should start *right now*, we want the data interval - that start right now now, not the one that ends now. + that start now, not the one that ends now. """ current_time = DateTime.utcnow() - next_start = self._get_next(current_time) last_start = self._get_prev(current_time) - if next_start == current_time: + next_start = self._get_next(last_start) + if next_start == current_time: # Current time is on interval boundary. new_start = last_start - elif next_start > current_time: + elif next_start > current_time: # Current time is between boundaries. new_start = self._get_prev(last_start) else: raise AssertionError("next schedule shouldn't be earlier") diff --git a/airflow/utils/configuration.py b/airflow/utils/configuration.py --- a/airflow/utils/configuration.py +++ b/airflow/utils/configuration.py @@ -23,13 +23,23 @@ from airflow.configuration import conf -def tmp_configuration_copy(chmod=0o600): +def tmp_configuration_copy(chmod=0o600, include_env=True, include_cmds=True): """ Returns a path for a temporary file including a full copy of the configuration settings. 
+ + :param include_env: Should the value of configuration from ``AIRFLOW__`` + environment variables be included or not + :type include_env: bool + :param include_cmds: Should the result of calling any *_cmd config be + set (True, default), or should the _cmd options be left as the + command to run (False) + :type include_cmds: bool :return: a path to a temporary file """ - cfg_dict = conf.as_dict(display_sensitive=True, raw=True) + cfg_dict = conf.as_dict( + display_sensitive=True, raw=True, include_cmds=include_cmds, include_env=include_env + ) temp_fd, cfg_path = mkstemp() with os.fdopen(temp_fd, 'w') as temp_file: diff --git a/airflow/utils/db.py b/airflow/utils/db.py --- a/airflow/utils/db.py +++ b/airflow/utils/db.py @@ -20,7 +20,7 @@ import time from typing import Iterable -from sqlalchemy import Table, exc, func +from sqlalchemy import Table, exc, func, inspect, or_, text from airflow import settings from airflow.configuration import conf @@ -51,15 +51,15 @@ from airflow.models.serialized_dag import SerializedDagModel # noqa: F401 # TODO: remove create_session once we decide to break backward compatibility -from airflow.utils.session import ( # noqa: F401 # pylint: disable=unused-import - create_global_lock, - create_session, - provide_session, -) +from airflow.utils.session import create_global_lock, create_session, provide_session # noqa: F401 log = logging.getLogger(__name__) +def _format_airflow_moved_table_name(source_table, version): + return "__".join([settings.AIRFLOW_MOVED_TABLE_PREFIX, version.replace(".", "_"), source_table]) + + @provide_session def merge_conn(conn, session=None): """Add new Connection.""" @@ -697,47 +697,77 @@ def check_conn_type_null(session=None) -> Iterable[str]: ) +def _format_dangling_error(source_table, target_table, invalid_count, reason): + noun = "row" if invalid_count == 1 else "rows" + return ( + f"The {source_table} table has {invalid_count} {noun} {reason}, which " + f"is invalid. We could not move them out of the way because the " + f"{target_table} table already exists in your database. Please either " + f"drop the {target_table} table, or manually delete the invalid rows " + f"from the {source_table} table." + ) + + +def _move_dangling_run_data_to_new_table(session, source_table, target_table): + where_clause = "where dag_id is null or run_id is null or execution_date is null" + session.execute(text(f"create table {target_table} as select * from {source_table} {where_clause}")) + session.execute(text(f"delete from {source_table} {where_clause}")) + + def check_run_id_null(session) -> Iterable[str]: import sqlalchemy.schema metadata = sqlalchemy.schema.MetaData(session.bind) try: - metadata.reflect(only=["dag_run"]) + metadata.reflect(only=[DagRun.__tablename__]) except exc.InvalidRequestError: # Table doesn't exist -- empty db return - dag_run = metadata.tables["dag_run"] - - for colname in ('run_id', 'dag_id', 'execution_date'): - - col = dag_run.columns.get(colname) - if col is None: - continue - - if not col.nullable: - continue - - num = session.query(dag_run).filter(col.is_(None)).count() - if num > 0: - yield ( - f'The {dag_run.name} table has {num} row{"s" if num != 1 else ""} with a NULL value in ' - f'{col.name!r}. You must manually correct this problem (possibly by deleting the problem ' - 'rows).' + # We can't use the model here since it may differ from the db state due to + # this function is run prior to migration. Use the reflected table instead. 
+ dagrun_table = metadata.tables[DagRun.__tablename__] + + invalid_dagrun_filter = or_( + dagrun_table.c.dag_id.is_(None), + dagrun_table.c.run_id.is_(None), + dagrun_table.c.execution_date.is_(None), + ) + invalid_dagrun_count = session.query(dagrun_table.c.id).filter(invalid_dagrun_filter).count() + if invalid_dagrun_count > 0: + dagrun_dangling_table_name = _format_airflow_moved_table_name(dagrun_table.name, "2.2") + if dagrun_dangling_table_name in inspect(session.get_bind()).get_table_names(): + yield _format_dangling_error( + source_table=dagrun_table.name, + target_table=dagrun_dangling_table_name, + invalid_count=invalid_dagrun_count, + reason="with a NULL dag_id, run_id, or execution_date", ) - session.rollback() + return + _move_dangling_run_data_to_new_table(session, dagrun_table.name, dagrun_dangling_table_name) + + +def _move_dangling_task_data_to_new_table(session, source_table, target_table): + where_clause = f""" + where (task_id, dag_id, execution_date) IN ( + select source.task_id, source.dag_id, source.execution_date + from {source_table} as source + left join dag_run as dr + on (source.dag_id = dr.dag_id and source.execution_date = dr.execution_date) + where dr.id is null + ) + """ + session.execute(text(f"create table {target_table} as select * from {source_table} {where_clause}")) + session.execute(text(f"delete from {source_table} {where_clause}")) def check_task_tables_without_matching_dagruns(session) -> Iterable[str]: - from itertools import chain - import sqlalchemy.schema from sqlalchemy import and_, outerjoin metadata = sqlalchemy.schema.MetaData(session.bind) - models_to_dagrun = [TaskInstance, TaskFail] - models_to_ti = [] - for model in models_to_dagrun + models_to_ti + [DagRun]: + models_to_dagrun = [TaskInstance, TaskReschedule] + for model in models_to_dagrun + [DagRun]: try: metadata.reflect(only=[model.__tablename__]) except exc.InvalidRequestError: @@ -745,43 +775,57 @@ def check_task_tables_without_matching_dagruns(session) -> Iterable[str]: # version pass + # Key table doesn't exist -- likely empty DB. if DagRun.__tablename__ not in metadata or TaskInstance.__tablename__ not in metadata: - # Key table doesn't exist -- likely empty DB - session.rollback() return - for (model, target) in chain( - ((m, metadata.tables[DagRun.__tablename__]) for m in models_to_dagrun), - ((m, metadata.tables[TaskInstance.__tablename__]) for m in models_to_ti), - ): - table = metadata.tables.get(model.__tablename__) - if table is None: + # We can't use the model here since it may differ from the db state due to + # this function is run prior to migration. Use the reflected table instead. + dagrun_table = metadata.tables[DagRun.__tablename__] + + existing_table_names = set(inspect(session.get_bind()).get_table_names()) + errored = False + + for model in models_to_dagrun: + # We can't use the model here since it may differ from the db state due to + # this function is run prior to migration. Use the reflected table instead. + source_table = metadata.tables.get(model.__tablename__) + if source_table is None: continue - if 'run_id' in table.columns: - # Migration already applied, don't check again + + # Migration already applied, don't check again. 
+ if "run_id" in source_table.columns: continue - # We can't use the model here (as that would have the associationproxy, we instead need to use the - # _reflected_ table) - join_cond = and_(table.c.dag_id == target.c.dag_id, table.c.execution_date == target.c.execution_date) - if "task_id" in target.columns: - join_cond = and_(join_cond, table.c.task_id == target.c.task_id) - - query = ( - session.query(table.c.dag_id, table.c.task_id, table.c.execution_date) - .select_from(outerjoin(table, target, join_cond)) - .filter(target.c.dag_id.is_(None)) - ) # type: ignore - - num = query.count() - - if num > 0: - yield ( - f'The {table.name} table has {num} row{"s" if num != 1 else ""} without a ' - f'corresponding {target.name} row. You must manually correct this problem ' - '(possibly by deleting the problem rows).' + source_to_dag_run_join_cond = and_( + source_table.c.dag_id == dagrun_table.c.dag_id, + source_table.c.execution_date == dagrun_table.c.execution_date, + ) + invalid_row_count = ( + session.query(source_table.c.dag_id, source_table.c.task_id, source_table.c.execution_date) + .select_from(outerjoin(source_table, dagrun_table, source_to_dag_run_join_cond)) + .filter(dagrun_table.c.dag_id.is_(None)) + .count() + ) + if invalid_row_count <= 0: + continue + + dangling_table_name = _format_airflow_moved_table_name(source_table.name, "2.2") + if dangling_table_name in existing_table_names: + yield _format_dangling_error( + source_table=source_table.name, + target_table=dangling_table_name, + invalid_count=invalid_row_count, + reason=f"without a corresponding {dagrun_table.name} row", ) - session.rollback() + errored = True + continue + _move_dangling_task_data_to_new_table(session, source_table.name, dangling_table_name) + + if errored: + session.rollback() + else: + session.commit() @provide_session diff --git a/airflow/www/views.py b/airflow/www/views.py --- a/airflow/www/views.py +++ b/airflow/www/views.py @@ -82,7 +82,7 @@ from pendulum.parsing.exceptions import ParserError from pygments import highlight, lexers from pygments.formatters import HtmlFormatter -from sqlalchemy import Date, and_, desc, func, union_all +from sqlalchemy import Date, and_, desc, func, inspect, union_all from sqlalchemy.exc import IntegrityError from sqlalchemy.orm import joinedload from wtforms import SelectField, validators @@ -692,10 +692,21 @@ def index(self): fm for fm in settings.DASHBOARD_UIALERTS if fm.should_show(current_app.appbuilder.sm) ] + def _iter_parsed_moved_data_table_names(): + for table_name in inspect(session.get_bind()).get_table_names(): + segments = table_name.split("__", 2) + if len(segments) < 3: + continue + if segments[0] != settings.AIRFLOW_MOVED_TABLE_PREFIX: + continue + # Second segment is a version marker that we don't need to show. + yield segments[2], table_name + return self.render_template( 'airflow/dags.html', dags=dags, dashboard_alerts=dashboard_alerts, + migration_moved_data_alerts=sorted(set(_iter_parsed_moved_data_table_names())), current_page=current_page, search_query=arg_search_query if arg_search_query else '', page_title=page_title, diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -41,7 +41,7 @@ logger = logging.getLogger(__name__) -version = '2.2.0' +version = '2.2.1' my_dir = dirname(__file__) </patch>
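
The `Param` rewrite in the patch above hinges on a dedicated sentinel object so the class can tell "no default was supplied" apart from "the default is `None`". The following standalone sketch shows the same pattern in miniature; it is illustrative only and is not Airflow's actual implementation — the names `_NoValue`, `NO_VALUE`, and `SimpleParam` are made up for this example.

```python
class _NoValue:
    """Sentinel type: a single instance marks 'no value was provided'."""

    def __repr__(self) -> str:
        return "NO_VALUE"


NO_VALUE = _NoValue()


class SimpleParam:
    """Hold an optional default and resolve a final value on demand."""

    def __init__(self, default=NO_VALUE):
        self.value = default

    @property
    def has_value(self) -> bool:
        # None counts as a real value; only the sentinel means "unset".
        return not isinstance(self.value, _NoValue)

    def resolve(self, value=NO_VALUE):
        # Prefer an explicitly passed value, fall back to the stored one,
        # and fail loudly if neither exists.
        final = value if not isinstance(value, _NoValue) else self.value
        if isinstance(final, _NoValue):
            raise TypeError("no value passed and no default is set")
        self.value = final
        return final


p = SimpleParam()      # no default at all
q = SimpleParam(None)  # default explicitly set to None
print(p.has_value, q.has_value)  # False True
print(q.resolve())               # None resolves successfully
```

Checking against a sentinel rather than relying on truthiness (the old `value or self.default`) is what lets `None`, `0`, and `""` survive as legitimate values, which appears to be the behaviour change the patch makes to `Param.resolve`.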
[]
[]
Qiskit__qiskit-6675
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Plotting a circuit with matplotlib interferes with global figure <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues --> ### Information - **Qiskit Terra version**: 0.18.0.dev0+98b4a1f - **Python version**: 3.8 - **Operating system**: Windows 10 ### What is the current behavior? Plotting a circuit diagram with matplotlib resizes the figure window ### Steps to reproduce the problem Here is a minimal example. We create a matplotlib window with a specified size and layout and we require qiskit to draw into the specified axis `ax2`. ``` import matplotlib.pyplot as plt from qiskit import QuantumCircuit Fig=plt.figure(1, figsize=(4,6)) plt.clf() ax1=Fig.add_subplot(1,2,1) ax2=Fig.add_subplot(1,2,2) ax1.plot([1,2,3], [4,7,4]) print(Fig.get_size_inches()) circ = QuantumCircuit(2, name='test') for ii in range(10): circ.h(1) circ.cz(0,1) circ.draw(ax=ax2, output='mpl') print(Fig.get_size_inches()) ``` After plotting the figure window has been resized. ### What is the expected behavior? Plotting on a specified axis should not interface with the other axis on a figure and the figure itself. ### Suggested solutions The problem is at this line: https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/visualization/matplotlib.py#L912 There the global figure properties are updated. </issue> <code> [start of README.md] 1 # Qiskit Terra 2 [![License](https://img.shields.io/github/license/Qiskit/qiskit-terra.svg?style=popout-square)](https://opensource.org/licenses/Apache-2.0)<!--- long-description-skip-begin -->[![Build Status](https://img.shields.io/travis/com/Qiskit/qiskit-terra/master.svg?style=popout-square)](https://travis-ci.com/Qiskit/qiskit-terra)[![Release](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases)[![Downloads](https://img.shields.io/pypi/dm/qiskit-terra.svg?style=popout-square)](https://pypi.org/project/qiskit-terra/)[![Coverage Status](https://coveralls.io/repos/github/Qiskit/qiskit-terra/badge.svg?branch=master)](https://coveralls.io/github/Qiskit/qiskit-terra?branch=master)<!--- long-description-skip-end --> 3 4 **Qiskit** is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms. 5 6 Qiskit is made up of elements that work together to enable quantum computing. This element is **Terra** and is the foundation on which the rest of Qiskit is built. 7 8 ## Installation 9 10 We encourage installing Qiskit via the pip tool (a python package manager), which installs all Qiskit elements, including Terra. 11 12 ```bash 13 pip install qiskit 14 ``` 15 16 PIP will handle all dependencies automatically and you will always install the latest (and well-tested) version. 17 18 To install from source, follow the instructions in the [documentation](https://qiskit.org/documentation/contributing_to_qiskit.html#install-terra-from-source). 19 20 ## Creating Your First Quantum Program in Qiskit Terra 21 22 Now that Qiskit is installed, it's time to begin working with Terra. 23 24 We are ready to try out a quantum circuit example, which is simulated locally using 25 the Qiskit BasicAer element. This is a simple example that makes an entangled state. 
26 27 ``` 28 $ python 29 ``` 30 31 ```python 32 >>> from qiskit import QuantumCircuit, transpile 33 >>> from qiskit.providers.basicaer import QasmSimulatorPy 34 >>> qc = QuantumCircuit(2, 2) 35 >>> qc.h(0) 36 >>> qc.cx(0, 1) 37 >>> qc.measure([0,1], [0,1]) 38 >>> backend_sim = QasmSimulatorPy() 39 >>> transpiled_qc = transpile(qc, backend_sim) 40 >>> result = backend_sim.run(transpiled_qc).result() 41 >>> print(result.get_counts(qc)) 42 ``` 43 44 In this case, the output will be: 45 46 ```python 47 {'00': 513, '11': 511} 48 ``` 49 50 A script is available [here](examples/python/ibmq/hello_quantum.py), where we also show how to 51 run the same program on a real quantum computer via IBMQ. 52 53 ### Executing your code on a real quantum chip 54 55 You can also use Qiskit to execute your code on a 56 **real quantum chip**. 57 In order to do so, you need to configure Qiskit for using the credentials in 58 your IBM Q account: 59 60 #### Configure your IBMQ credentials 61 62 1. Create an _[IBM Q](https://quantum-computing.ibm.com) > Account_ if you haven't already done so. 63 64 2. Get an API token from the IBM Q website under _My Account > API Token_ and the URL for the account. 65 66 3. Take your token and url from step 2, here called `MY_API_TOKEN`, `MY_URL`, and run: 67 68 ```python 69 >>> from qiskit import IBMQ 70 >>> IBMQ.save_account('MY_API_TOKEN', 'MY_URL') 71 ``` 72 73 After calling `IBMQ.save_account()`, your credentials will be stored on disk. 74 Once they are stored, at any point in the future you can load and use them 75 in your program simply via: 76 77 ```python 78 >>> from qiskit import IBMQ 79 >>> IBMQ.load_account() 80 ``` 81 82 Those who do not want to save their credentials to disk should use instead: 83 84 ```python 85 >>> from qiskit import IBMQ 86 >>> IBMQ.enable_account('MY_API_TOKEN') 87 ``` 88 89 and the token will only be active for the session. For examples using Terra with real 90 devices we have provided a set of examples in **examples/python** and we suggest starting with [using_qiskit_terra_level_0.py](examples/python/using_qiskit_terra_level_0.py) and working up in 91 the levels. 92 93 ## Contribution Guidelines 94 95 If you'd like to contribute to Qiskit Terra, please take a look at our 96 [contribution guidelines](CONTRIBUTING.md). This project adheres to Qiskit's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. 97 98 We use [GitHub issues](https://github.com/Qiskit/qiskit-terra/issues) for tracking requests and bugs. Please 99 [join the Qiskit Slack community](https://ibm.co/joinqiskitslack) 100 and use our [Qiskit Slack channel](https://qiskit.slack.com) for discussion and simple questions. 101 For questions that are more suited for a forum we use the Qiskit tag in the [Stack Exchange](https://quantumcomputing.stackexchange.com/questions/tagged/qiskit). 102 103 ## Next Steps 104 105 Now you're set up and ready to check out some of the other examples from our 106 [Qiskit Tutorials](https://github.com/Qiskit/qiskit-tutorials) repository. 107 108 ## Authors and Citation 109 110 Qiskit Terra is the work of [many people](https://github.com/Qiskit/qiskit-terra/graphs/contributors) who contribute 111 to the project at different levels. If you use Qiskit, please cite as per the included [BibTeX file](https://github.com/Qiskit/qiskit/blob/master/Qiskit.bib). 
112 113 ## Changelog and Release Notes 114 115 The changelog for a particular release is dynamically generated and gets 116 written to the release page on Github for each release. For example, you can 117 find the page for the `0.9.0` release here: 118 119 https://github.com/Qiskit/qiskit-terra/releases/tag/0.9.0 120 121 The changelog for the current release can be found in the releases tab: 122 [![Releases](https://img.shields.io/github/release/Qiskit/qiskit-terra.svg?style=popout-square)](https://github.com/Qiskit/qiskit-terra/releases) 123 The changelog provides a quick overview of notable changes for a given 124 release. 125 126 Additionally, as part of each release detailed release notes are written to 127 document in detail what has changed as part of a release. This includes any 128 documentation on potential breaking changes on upgrade and new features. 129 For example, You can find the release notes for the `0.9.0` release in the 130 Qiskit documentation here: 131 132 https://qiskit.org/documentation/release_notes.html#terra-0-9 133 134 ## License 135 136 [Apache License 2.0](LICENSE.txt) 137 [end of README.md] [start of qiskit/visualization/pulse/matplotlib.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2019. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 12 13 # pylint: disable=invalid-name 14 15 """Matplotlib classes for pulse visualization.""" 16 17 import collections 18 from typing import Dict, List, Tuple, Callable, Union, Any 19 20 import numpy as np 21 22 from qiskit.visualization.matplotlib import HAS_MATPLOTLIB 23 from qiskit.exceptions import MissingOptionalLibraryError 24 from qiskit.visualization.pulse.qcstyle import PulseStyle, SchedStyle 25 from qiskit.visualization.pulse.interpolation import step_wise 26 from qiskit.pulse.channels import ( 27 DriveChannel, 28 ControlChannel, 29 MeasureChannel, 30 AcquireChannel, 31 SnapshotChannel, 32 Channel, 33 ) 34 from qiskit.pulse import ( 35 Waveform, 36 Snapshot, 37 Play, 38 Acquire, 39 PulseError, 40 ParametricPulse, 41 SetFrequency, 42 ShiftPhase, 43 Instruction, 44 ShiftFrequency, 45 SetPhase, 46 ) 47 from qiskit.pulse.schedule import ScheduleComponent 48 49 50 class EventsOutputChannels: 51 """Pulse dataset for channel.""" 52 53 def __init__(self, t0: int, tf: int): 54 """Create new channel dataset. 55 56 TODO: remove PV 57 58 Args: 59 t0: starting time of plot 60 tf: ending time of plot 61 """ 62 self.pulses = {} 63 self.t0 = t0 64 self.tf = tf 65 66 self._waveform = None 67 self._framechanges = None 68 self._setphase = None 69 self._frequencychanges = None 70 self._conditionals = None 71 self._snapshots = None 72 self._labels = None 73 self.enable = False 74 75 def add_instruction(self, start_time: int, instruction: Instruction): 76 """Add new pulse instruction to channel. 
77 78 Args: 79 start_time: Starting time of instruction 80 instruction: Instruction object to be added 81 """ 82 if isinstance(instruction, Play): 83 pulse = instruction.pulse 84 else: 85 pulse = instruction 86 if start_time in self.pulses.keys(): 87 self.pulses[start_time].append(pulse) 88 else: 89 self.pulses[start_time] = [pulse] 90 91 @property 92 def waveform(self) -> np.ndarray: 93 """Get waveform.""" 94 if self._waveform is None: 95 self._build_waveform() 96 97 return self._waveform[self.t0 : self.tf] 98 99 @property 100 def framechanges(self) -> Dict[int, ShiftPhase]: 101 """Get frame changes.""" 102 if self._framechanges is None: 103 self._build_waveform() 104 105 return self._trim(self._framechanges) 106 107 @property 108 def setphase(self) -> Dict[int, SetPhase]: 109 """Get the SetPhase phase values.""" 110 if self._setphase is None: 111 self._build_waveform() 112 113 return self._trim(self._setphase) 114 115 @property 116 def frequencychanges(self) -> Dict[int, SetFrequency]: 117 """Get the frequency changes.""" 118 if self._frequencychanges is None: 119 self._build_waveform() 120 121 return self._trim(self._frequencychanges) 122 123 @property 124 def frequencyshift(self) -> Dict[int, ShiftFrequency]: 125 """Set the frequency changes.""" 126 if self._frequencychanges is None: 127 self._build_waveform() 128 129 return self._trim(self._frequencychanges) 130 131 @property 132 def conditionals(self) -> Dict[int, str]: 133 """Get conditionals.""" 134 if self._conditionals is None: 135 self._build_waveform() 136 137 return self._trim(self._conditionals) 138 139 @property 140 def snapshots(self) -> Dict[int, Snapshot]: 141 """Get snapshots.""" 142 if self._snapshots is None: 143 self._build_waveform() 144 145 return self._trim(self._snapshots) 146 147 @property 148 def labels(self) -> Dict[int, Union[Waveform, Acquire]]: 149 """Get labels.""" 150 if self._labels is None: 151 self._build_waveform() 152 153 return self._trim(self._labels) 154 155 def is_empty(self) -> bool: 156 """Return if pulse is empty. 157 158 Returns: 159 bool: if the channel has nothing to plot 160 """ 161 if ( 162 any(self.waveform) 163 or self.framechanges 164 or self.setphase 165 or self.conditionals 166 or self.snapshots 167 ): 168 return False 169 170 return True 171 172 def to_table(self, name: str) -> List[Tuple[int, str, str]]: 173 """Get table contains. 
174 175 Args: 176 name (str): name of channel 177 178 Returns: 179 A list of events in the channel 180 """ 181 time_event = [] 182 183 framechanges = self.framechanges 184 setphase = self.setphase 185 conditionals = self.conditionals 186 snapshots = self.snapshots 187 frequencychanges = self.frequencychanges 188 189 for key, val in framechanges.items(): 190 data_str = "shift phase: %.2f" % val 191 time_event.append((key, name, data_str)) 192 for key, val in setphase.items(): 193 data_str = "set phase: %.2f" % val 194 time_event.append((key, name, data_str)) 195 for key, val in conditionals.items(): 196 data_str = "conditional, %s" % val 197 time_event.append((key, name, data_str)) 198 for key, val in snapshots.items(): 199 data_str = "snapshot: %s" % val 200 time_event.append((key, name, data_str)) 201 for key, val in frequencychanges.items(): 202 data_str = "frequency: %.4e" % val 203 time_event.append((key, name, data_str)) 204 205 return time_event 206 207 def _build_waveform(self): 208 """Create waveform from stored pulses.""" 209 self._framechanges = {} 210 self._setphase = {} 211 self._frequencychanges = {} 212 self._conditionals = {} 213 self._snapshots = {} 214 self._labels = {} 215 fc = 0 216 pv = np.zeros(self.tf + 1, dtype=np.complex128) 217 wf = np.zeros(self.tf + 1, dtype=np.complex128) 218 for time, commands in sorted(self.pulses.items()): 219 if time > self.tf: 220 break 221 tmp_fc = 0 222 tmp_set_phase = 0 223 tmp_sf = None 224 for command in commands: 225 if isinstance(command, ShiftPhase): 226 tmp_fc += command.phase 227 pv[time:] = 0 228 elif isinstance(command, SetPhase): 229 tmp_set_phase = command.phase 230 pv[time:] = 0 231 elif isinstance(command, SetFrequency): 232 tmp_sf = command.frequency 233 elif isinstance(command, ShiftFrequency): 234 tmp_sf = command.frequency 235 elif isinstance(command, Snapshot): 236 self._snapshots[time] = command.name 237 if tmp_fc != 0: 238 self._framechanges[time] = tmp_fc 239 fc += tmp_fc 240 if tmp_set_phase != 0: 241 self._setphase[time] = tmp_set_phase 242 fc = tmp_set_phase 243 if tmp_sf is not None: 244 self._frequencychanges[time] = tmp_sf 245 246 for command in commands: 247 duration = command.duration 248 tf = min(time + duration, self.tf) 249 if isinstance(command, ParametricPulse): 250 command = command.get_waveform() 251 if isinstance(command, Waveform): 252 wf[time:tf] = np.exp(1j * fc) * command.samples[: tf - time] 253 pv[time:] = 0 254 self._labels[time] = (tf, command) 255 256 elif isinstance(command, Acquire): 257 wf[time:tf] = np.ones(tf - time) 258 self._labels[time] = (tf, command) 259 self._waveform = wf + pv 260 261 def _trim(self, events: Dict[int, Any]) -> Dict[int, Any]: 262 """Return events during given `time_range`. 263 264 Args: 265 events: time and operation of events. 266 267 Returns: 268 Events within the specified time range. 269 """ 270 events_in_time_range = {} 271 272 for k, v in events.items(): 273 if self.t0 <= k <= self.tf: 274 events_in_time_range[k] = v 275 276 return events_in_time_range 277 278 279 class WaveformDrawer: 280 """A class to create figure for sample pulse.""" 281 282 def __init__(self, style: PulseStyle): 283 """Create new figure. 284 285 Args: 286 style: Style sheet for pulse visualization. 287 """ 288 self.style = style or PulseStyle() 289 290 def draw( 291 self, 292 pulse: Waveform, 293 dt: float = 1.0, 294 interp_method: Callable = None, 295 scale: float = 1, 296 draw_title: bool = False, 297 ): 298 """Draw figure. 299 300 Args: 301 pulse: Waveform to draw. 
302 dt: time interval. 303 interp_method: interpolation function. 304 scale: Relative visual scaling of waveform amplitudes. 305 draw_title: Add a title to the plot when set to ``True``. 306 307 Returns: 308 matplotlib.figure.Figure: A matplotlib figure object of the pulse envelope. 309 310 Raises: 311 MissingOptionalLibraryError: If matplotlib is not installed 312 """ 313 # If these self.style.dpi or self.style.figsize are None, they will 314 # revert back to their default rcParam keys. 315 if not HAS_MATPLOTLIB: 316 raise MissingOptionalLibraryError( 317 libname="Matplotlib", 318 name="WaveformDrawer", 319 pip_install="pip install matplotlib", 320 ) 321 322 from matplotlib import pyplot as plt 323 324 figure = plt.figure(dpi=self.style.dpi, figsize=self.style.figsize) 325 326 interp_method = interp_method or step_wise 327 328 ax = figure.add_subplot(111) 329 ax.set_facecolor(self.style.bg_color) 330 331 samples = pulse.samples 332 time = np.arange(0, len(samples) + 1, dtype=float) * dt 333 334 time, re, im = interp_method(time, samples, self.style.num_points) 335 336 # plot 337 ax.fill_between( 338 x=time, 339 y1=re, 340 y2=np.zeros_like(time), 341 facecolor=self.style.wave_color[0], 342 alpha=0.3, 343 edgecolor=self.style.wave_color[0], 344 linewidth=1.5, 345 label="real part", 346 ) 347 ax.fill_between( 348 x=time, 349 y1=im, 350 y2=np.zeros_like(time), 351 facecolor=self.style.wave_color[1], 352 alpha=0.3, 353 edgecolor=self.style.wave_color[1], 354 linewidth=1.5, 355 label="imaginary part", 356 ) 357 358 ax.set_xlim(0, pulse.duration * dt) 359 if scale: 360 ax.set_ylim(-1 / scale, 1 / scale) 361 else: 362 v_max = max(max(np.abs(re)), max(np.abs(im))) 363 ax.set_ylim(-1.2 * v_max, 1.2 * v_max) 364 365 bbox = ax.get_position() 366 367 # This check is here for backwards compatibility. Before, the check was around 368 # the suptitle line, however since the font style can take on a type of None 369 # we need to unfortunately check both the type and the value of the object. 370 if isinstance(self.style.title_font_size, int) and self.style.title_font_size > 0: 371 if draw_title: 372 figure.suptitle( 373 pulse.name, fontsize=self.style.title_font_size, y=bbox.y1 + 0.02, va="bottom" 374 ) 375 376 return figure 377 378 379 class ScheduleDrawer: 380 """A class to create figure for schedule and channel.""" 381 382 def __init__(self, style: SchedStyle): 383 """Create new figure. 384 385 Args: 386 style: Style sheet for pulse schedule visualization. 387 Raises: 388 MissingOptionalLibraryError: If matplotlib is not installed 389 """ 390 if not HAS_MATPLOTLIB: 391 raise MissingOptionalLibraryError( 392 libname="Matplotlib", 393 name="ScheduleDrawer", 394 pip_install="pip install matplotlib", 395 ) 396 397 from matplotlib import pyplot as plt 398 399 self.plt_mod = plt 400 from matplotlib import gridspec 401 402 self.gridspec_mod = gridspec 403 self.style = style or SchedStyle() 404 405 def _build_channels( 406 self, 407 schedule: ScheduleComponent, 408 channels: List[Channel], 409 t0: int, 410 tf: int, 411 show_framechange_channels: bool = True, 412 ) -> Tuple[ 413 Dict[Channel, EventsOutputChannels], 414 Dict[Channel, EventsOutputChannels], 415 Dict[Channel, EventsOutputChannels], 416 ]: 417 """Create event table of each pulse channels in the given schedule. 418 419 Args: 420 schedule: Schedule object to plot. 421 channels: Channels to plot. 422 t0: Start time of plot. 423 tf: End time of plot. 424 show_framechange_channels: Plot channels only with FrameChanges (ShiftPhase). 
425 426 Returns: 427 channels: All channels. 428 output_channels: All (D, M, U, A) channels. 429 snapshot_channels: Snapshots. 430 """ 431 # prepare waveform channels 432 drive_channels = collections.OrderedDict() 433 measure_channels = collections.OrderedDict() 434 control_channels = collections.OrderedDict() 435 acquire_channels = collections.OrderedDict() 436 snapshot_channels = collections.OrderedDict() 437 _channels = set() 438 if show_framechange_channels: 439 _channels.update(schedule.channels) 440 # take channels that do not only contain framechanges 441 else: 442 for start_time, instruction in schedule.instructions: 443 if not isinstance(instruction, (ShiftPhase, SetPhase)): 444 _channels.update(instruction.channels) 445 446 _channels.update(channels) 447 for chan in _channels: 448 if isinstance(chan, DriveChannel): 449 try: 450 drive_channels[chan] = EventsOutputChannels(t0, tf) 451 except PulseError: 452 pass 453 elif isinstance(chan, MeasureChannel): 454 try: 455 measure_channels[chan] = EventsOutputChannels(t0, tf) 456 except PulseError: 457 pass 458 elif isinstance(chan, ControlChannel): 459 try: 460 control_channels[chan] = EventsOutputChannels(t0, tf) 461 except PulseError: 462 pass 463 elif isinstance(chan, AcquireChannel): 464 try: 465 acquire_channels[chan] = EventsOutputChannels(t0, tf) 466 except PulseError: 467 pass 468 elif isinstance(chan, SnapshotChannel): 469 try: 470 snapshot_channels[chan] = EventsOutputChannels(t0, tf) 471 except PulseError: 472 pass 473 474 output_channels = { 475 **drive_channels, 476 **measure_channels, 477 **control_channels, 478 **acquire_channels, 479 } 480 channels = {**output_channels, **snapshot_channels} 481 # sort by index then name to group qubits together. 482 output_channels = collections.OrderedDict( 483 sorted(output_channels.items(), key=lambda x: (x[0].index, x[0].name)) 484 ) 485 channels = collections.OrderedDict( 486 sorted(channels.items(), key=lambda x: (x[0].index, x[0].name)) 487 ) 488 489 for start_time, instruction in schedule.instructions: 490 for channel in instruction.channels: 491 if channel in output_channels: 492 output_channels[channel].add_instruction(start_time, instruction) 493 elif channel in snapshot_channels: 494 snapshot_channels[channel].add_instruction(start_time, instruction) 495 return channels, output_channels, snapshot_channels 496 497 @staticmethod 498 def _scale_channels( 499 output_channels: Dict[Channel, EventsOutputChannels], 500 scale: float, 501 channel_scales: Dict[Channel, float] = None, 502 channels: List[Channel] = None, 503 plot_all: bool = False, 504 ) -> Dict[Channel, float]: 505 """Count number of channels that contains any instruction to show 506 and find scale factor of that channel. 507 508 Args: 509 output_channels: Event table of channels to show. 510 scale: Global scale factor. 511 channel_scales: Channel specific scale factors. 512 channels: Specified channels to plot. 513 plot_all: Plot empty channel. 514 515 Returns: 516 scale_dict: Scale factor of each channel. 
517 """ 518 # count numbers of valid waveform 519 scale_dict = {chan: 0 for chan in output_channels.keys()} 520 for channel, events in output_channels.items(): 521 v_max = 0 522 if channels: 523 if channel in channels: 524 waveform = events.waveform 525 v_max = max( 526 v_max, max(np.abs(np.real(waveform))), max(np.abs(np.imag(waveform))) 527 ) 528 events.enable = True 529 else: 530 if not events.is_empty() or plot_all: 531 waveform = events.waveform 532 v_max = max( 533 v_max, max(np.abs(np.real(waveform))), max(np.abs(np.imag(waveform))) 534 ) 535 events.enable = True 536 537 scale_val = channel_scales.get(channel, scale) 538 if not scale_val: 539 # when input schedule is empty or comprises only frame changes, 540 # we need to overwrite maximum amplitude by a value greater than zero, 541 # otherwise auto axis scaling will fail with zero division. 542 v_max = v_max or 1 543 scale_dict[channel] = 1 / v_max 544 else: 545 scale_dict[channel] = scale_val 546 547 return scale_dict 548 549 def _draw_table(self, figure, channels: Dict[Channel, EventsOutputChannels], dt: float): 550 """Draw event table if events exist. 551 552 Args: 553 figure (matplotlib.figure.Figure): Figure object 554 channels: Dictionary of channel and event table 555 dt: Time interval 556 557 Returns: 558 Tuple[matplotlib.axes.Axes]: Axis objects for table and canvas of pulses. 559 """ 560 # create table 561 table_data = [] 562 if self.style.use_table: 563 for channel, events in channels.items(): 564 if events.enable: 565 table_data.extend(events.to_table(channel.name)) 566 table_data = sorted(table_data, key=lambda x: x[0]) 567 568 # plot table 569 if table_data: 570 # table area size 571 ncols = self.style.table_columns 572 nrows = int(np.ceil(len(table_data) / ncols)) 573 max_size = self.style.max_table_ratio * figure.get_size_inches()[1] 574 max_rows = np.floor(max_size / self.style.fig_unit_h_table / ncols) 575 nrows = int(min(nrows, max_rows)) 576 # don't overflow plot with table data 577 table_data = table_data[: int(nrows * ncols)] 578 # fig size 579 h_table = nrows * self.style.fig_unit_h_table 580 h_waves = figure.get_size_inches()[1] - h_table 581 582 # create subplots 583 gs = self.gridspec_mod.GridSpec(2, 1, height_ratios=[h_table, h_waves], hspace=0) 584 tb = self.plt_mod.subplot(gs[0]) 585 ax = self.plt_mod.subplot(gs[1]) 586 587 # configure each cell 588 tb.axis("off") 589 cell_value = [["" for _kk in range(ncols * 3)] for _jj in range(nrows)] 590 cell_color = [self.style.table_color * ncols for _jj in range(nrows)] 591 cell_width = [*([0.2, 0.2, 0.5] * ncols)] 592 for ii, data in enumerate(table_data): 593 # pylint: disable=unbalanced-tuple-unpacking 594 r, c = np.unravel_index(ii, (nrows, ncols), order="f") 595 # pylint: enable=unbalanced-tuple-unpacking 596 time, ch_name, data_str = data 597 # item 598 cell_value[r][3 * c + 0] = "t = %s" % time * dt 599 cell_value[r][3 * c + 1] = "ch %s" % ch_name 600 cell_value[r][3 * c + 2] = data_str 601 table = tb.table( 602 cellText=cell_value, 603 cellLoc="left", 604 rowLoc="center", 605 colWidths=cell_width, 606 bbox=[0, 0, 1, 1], 607 cellColours=cell_color, 608 ) 609 table.auto_set_font_size(False) 610 table.set_fontsize = self.style.table_font_size 611 else: 612 tb = None 613 ax = figure.add_subplot(111) 614 615 return tb, ax 616 617 @staticmethod 618 def _draw_snapshots( 619 ax, snapshot_channels: Dict[Channel, EventsOutputChannels], y0: float 620 ) -> None: 621 """Draw snapshots to given mpl axis. 
622 623 Args: 624 ax (matplotlib.axes.Axes): axis object to draw snapshots. 625 snapshot_channels: Event table of snapshots. 626 y0: vertical position to draw the snapshots. 627 """ 628 for events in snapshot_channels.values(): 629 snapshots = events.snapshots 630 if snapshots: 631 for time in snapshots: 632 ax.annotate( 633 s="\u25D8", 634 xy=(time, y0), 635 xytext=(time, y0 + 0.08), 636 arrowprops={"arrowstyle": "wedge"}, 637 ha="center", 638 ) 639 640 def _draw_framechanges(self, ax, fcs: Dict[int, ShiftPhase], y0: float) -> bool: 641 """Draw frame change of given channel to given mpl axis. 642 643 Args: 644 ax (matplotlib.axes.Axes): axis object to draw frame changes. 645 fcs: Event table of frame changes. 646 y0: vertical position to draw the frame changes. 647 """ 648 for time in fcs.keys(): 649 ax.text( 650 x=time, 651 y=y0, 652 s=r"$\circlearrowleft$", 653 fontsize=self.style.icon_font_size, 654 ha="center", 655 va="center", 656 ) 657 658 def _draw_frequency_changes(self, ax, sf: Dict[int, SetFrequency], y0: float) -> bool: 659 """Draw set frequency of given channel to given mpl axis. 660 661 Args: 662 ax (matplotlib.axes.Axes): axis object to draw frame changes. 663 sf: Event table of set frequency. 664 y0: vertical position to draw the frame changes. 665 """ 666 for time in sf.keys(): 667 ax.text( 668 x=time, 669 y=y0, 670 s=r"$\leftrightsquigarrow$", 671 fontsize=self.style.icon_font_size, 672 ha="center", 673 va="center", 674 rotation=90, 675 ) 676 677 def _get_channel_color(self, channel: Channel) -> str: 678 """Lookup table for waveform color. 679 680 Args: 681 channel: Type of channel. 682 683 Return: 684 Color code or name of color. 685 """ 686 # choose color 687 if isinstance(channel, DriveChannel): 688 color = self.style.d_ch_color 689 elif isinstance(channel, ControlChannel): 690 color = self.style.u_ch_color 691 elif isinstance(channel, MeasureChannel): 692 color = self.style.m_ch_color 693 elif isinstance(channel, AcquireChannel): 694 color = self.style.a_ch_color 695 else: 696 color = "black" 697 return color 698 699 @staticmethod 700 def _prev_label_at_time( 701 prev_labels: List[Dict[int, Union[Waveform, Acquire]]], time: int 702 ) -> bool: 703 """Check overlap of pulses with previous channels. 704 705 Args: 706 prev_labels: List of labels in previous channels. 707 time: Start time of current pulse instruction. 708 709 Returns: 710 `True` if current instruction overlaps with others. 711 """ 712 for labels in prev_labels: 713 for t0, (tf, _) in labels.items(): 714 if time in (t0, tf): 715 return True 716 return False 717 718 def _draw_labels( 719 self, 720 ax, 721 labels: Dict[int, Union[Waveform, Acquire]], 722 prev_labels: List[Dict[int, Union[Waveform, Acquire]]], 723 y0: float, 724 ) -> None: 725 """Draw label of pulse instructions on given mpl axis. 726 727 Args: 728 ax (matplotlib.axes.Axes): axis object to draw labels. 729 labels: Pulse labels of channel. 730 prev_labels: Pulse labels of previous channels. 731 y0: vertical position to draw the labels. 
732 """ 733 for t0, (tf, cmd) in labels.items(): 734 if isinstance(cmd, Acquire): 735 name = cmd.name if cmd.name else "acquire" 736 else: 737 name = cmd.name 738 739 ax.annotate( 740 r"%s" % name, 741 xy=((t0 + tf) // 2, y0), 742 xytext=((t0 + tf) // 2, y0 - 0.07), 743 fontsize=self.style.label_font_size, 744 ha="center", 745 va="center", 746 ) 747 748 linestyle = self.style.label_ch_linestyle 749 alpha = self.style.label_ch_alpha 750 color = self.style.label_ch_color 751 752 if not self._prev_label_at_time(prev_labels, t0): 753 ax.axvline(t0, -1, 1, color=color, linestyle=linestyle, alpha=alpha) 754 if not (self._prev_label_at_time(prev_labels, tf) or tf in labels): 755 ax.axvline(tf, -1, 1, color=color, linestyle=linestyle, alpha=alpha) 756 757 def _draw_channels( 758 self, 759 ax, 760 output_channels: Dict[Channel, EventsOutputChannels], 761 interp_method: Callable, 762 t0: int, 763 tf: int, 764 scale_dict: Dict[Channel, float], 765 label: bool = False, 766 framechange: bool = True, 767 frequencychange: bool = True, 768 ) -> float: 769 """Draw pulse instructions on given mpl axis. 770 771 Args: 772 ax (matplotlib.axes.Axes): axis object to draw pulses. 773 output_channels: Event table of channels. 774 interp_method: Callback function for waveform interpolation. 775 t0: Start time of schedule. 776 tf: End time of schedule. 777 scale_dict: Scale factor for each channel. 778 label: When set `True` draw labels. 779 framechange: When set `True` draw frame change symbols. 780 frequencychange: When set `True` draw frequency change symbols. 781 782 Return: 783 Value of final vertical axis of canvas. 784 """ 785 y0 = 0 786 prev_labels = [] 787 for channel, events in output_channels.items(): 788 if events.enable: 789 # scaling value of this channel 790 scale = 0.5 * scale_dict.get(channel, 0.5) 791 # plot waveform 792 waveform = events.waveform 793 time = np.arange(t0, tf + 1, dtype=float) 794 if waveform.any(): 795 time, re, im = interp_method(time, waveform, self.style.num_points) 796 else: 797 # when input schedule is empty or comprises only frame changes, 798 # we should avoid interpolation due to lack of data points. 799 # instead, it just returns vector of zero. 
800 re, im = np.zeros_like(time), np.zeros_like(time) 801 color = self._get_channel_color(channel) 802 # Minimum amplitude scaled 803 amp_min = scale * abs(min(0, np.nanmin(re), np.nanmin(im))) 804 # scaling and offset 805 re = scale * re + y0 806 im = scale * im + y0 807 offset = np.zeros_like(time) + y0 808 # plot 809 ax.fill_between( 810 x=time, 811 y1=re, 812 y2=offset, 813 facecolor=color[0], 814 alpha=0.3, 815 edgecolor=color[0], 816 linewidth=1.5, 817 label="real part", 818 ) 819 ax.fill_between( 820 x=time, 821 y1=im, 822 y2=offset, 823 facecolor=color[1], 824 alpha=0.3, 825 edgecolor=color[1], 826 linewidth=1.5, 827 label="imaginary part", 828 ) 829 ax.plot((t0, tf), (y0, y0), color="#000000", linewidth=1.0) 830 831 # plot frame changes 832 fcs = events.framechanges 833 if fcs and framechange: 834 self._draw_framechanges(ax, fcs, y0) 835 # plot frequency changes 836 sf = events.frequencychanges 837 if sf and frequencychange: 838 self._draw_frequency_changes(ax, sf, y0 + 0.05) 839 # plot labels 840 labels = events.labels 841 if labels and label: 842 self._draw_labels(ax, labels, prev_labels, y0) 843 prev_labels.append(labels) 844 845 else: 846 continue 847 848 # plot label 849 ax.text( 850 x=t0, 851 y=y0, 852 s=channel.name, 853 fontsize=self.style.axis_font_size, 854 ha="right", 855 va="center", 856 ) 857 # show scaling factor 858 ax.text( 859 x=t0, 860 y=y0 - 0.1, 861 s="x%.1f" % (2 * scale), 862 fontsize=0.7 * self.style.axis_font_size, 863 ha="right", 864 va="top", 865 ) 866 867 # change the y0 offset for removing spacing when a channel has negative values 868 if self.style.remove_spacing: 869 y0 -= 0.5 + amp_min 870 else: 871 y0 -= 1 872 return y0 873 874 def draw( 875 self, 876 schedule: ScheduleComponent, 877 dt: float, 878 interp_method: Callable, 879 plot_range: Tuple[float, float], 880 scale: float = None, 881 channel_scales: Dict[Channel, float] = None, 882 plot_all: bool = True, 883 table: bool = False, 884 label: bool = False, 885 framechange: bool = True, 886 channels: List[Channel] = None, 887 show_framechange_channels: bool = True, 888 draw_title: bool = False, 889 ): 890 """Draw figure. 891 892 Args: 893 schedule: schedule object to plot. 894 dt: Time interval of samples. Pulses are visualized in the unit of 895 cycle time if not provided. 896 interp_method: Interpolation function. See example. 897 Interpolation is disabled in default. 898 See `qiskit.visualization.pulse.interpolation` for more information. 899 plot_range: A tuple of time range to plot. 900 scale: Scaling of waveform amplitude. Pulses are automatically 901 scaled channel by channel if not provided. 902 channel_scales: Dictionary of scale factor for specific channels. 903 Scale of channels not specified here is overwritten by `scale`. 904 plot_all: When set `True` plot empty channels. 905 table: When set `True` draw event table for supported commands. 906 label: When set `True` draw label for individual instructions. 907 framechange: When set `True` draw framechange indicators. 908 channels: A list of channel names to plot. 909 All non-empty channels are shown if not provided. 910 show_framechange_channels: When set `True` plot channels 911 with only framechange instructions. 912 draw_title: Add a title to the plot when set to ``True``. 913 914 Returns: 915 matplotlib.figure.Figure: A matplotlib figure object for the pulse envelope. 
916 917 Raises: 918 VisualizationError: When schedule cannot be drawn 919 """ 920 figure = self.plt_mod.figure(dpi=self.style.dpi, figsize=self.style.figsize) 921 922 if channels is None: 923 channels = [] 924 interp_method = interp_method or step_wise 925 926 if channel_scales is None: 927 channel_scales = {} 928 929 # setup plot range 930 if plot_range: 931 t0 = int(np.floor(plot_range[0])) 932 tf = int(np.floor(plot_range[1])) 933 else: 934 t0 = 0 935 # when input schedule is empty or comprises only frame changes, 936 # we need to overwrite pulse duration by an integer greater than zero, 937 # otherwise waveform returns empty array and matplotlib will be crashed. 938 if channels: 939 tf = schedule.ch_duration(*channels) 940 else: 941 tf = schedule.stop_time 942 tf = tf or 1 943 944 # prepare waveform channels 945 (schedule_channels, output_channels, snapshot_channels) = self._build_channels( 946 schedule, channels, t0, tf, show_framechange_channels 947 ) 948 949 # count numbers of valid waveform 950 scale_dict = self._scale_channels( 951 output_channels, 952 scale=scale, 953 channel_scales=channel_scales, 954 channels=channels, 955 plot_all=plot_all, 956 ) 957 958 if table: 959 tb, ax = self._draw_table(figure, schedule_channels, dt) 960 else: 961 tb = None 962 ax = figure.add_subplot(111) 963 964 ax.set_facecolor(self.style.bg_color) 965 966 y0 = self._draw_channels( 967 ax, 968 output_channels, 969 interp_method, 970 t0, 971 tf, 972 scale_dict, 973 label=label, 974 framechange=framechange, 975 ) 976 977 y_ub = 0.5 + self.style.vertical_span 978 y_lb = y0 + 0.5 - self.style.vertical_span 979 980 self._draw_snapshots(ax, snapshot_channels, y_lb) 981 982 ax.set_xlim(t0, tf) 983 tick_labels = np.linspace(t0, tf, 5) 984 ax.set_xticks(tick_labels) 985 ax.set_xticklabels( 986 [self.style.axis_formatter % label for label in tick_labels * dt], 987 fontsize=self.style.axis_font_size, 988 ) 989 ax.set_ylim(y_lb, y_ub) 990 ax.set_yticklabels([]) 991 992 if tb is not None: 993 bbox = tb.get_position() 994 else: 995 bbox = ax.get_position() 996 997 # This check is here for backwards compatibility. Before, the check was around 998 # the suptitle line, however since the font style can take on a type of None 999 # we need to unfortunately check both the type and the value of the object. 1000 if isinstance(self.style.title_font_size, int) and self.style.title_font_size > 0: 1001 if draw_title: 1002 figure.suptitle( 1003 schedule.name, 1004 fontsize=self.style.title_font_size, 1005 y=bbox.y1 + 0.02, 1006 va="bottom", 1007 ) 1008 return figure 1009 [end of qiskit/visualization/pulse/matplotlib.py] [start of qiskit/visualization/pulse_v2/plotters/matplotlib.py] 1 # This code is part of Qiskit. 2 # 3 # (C) Copyright IBM 2020. 4 # 5 # This code is licensed under the Apache License, Version 2.0. You may 6 # obtain a copy of this license in the LICENSE.txt file in the root directory 7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. 8 # 9 # Any modifications or derivative works of this code must retain this 10 # copyright notice, and modified files need to carry a notice indicating 11 # that they have been altered from the originals. 
12 13 # pylint: disable=invalid-name 14 15 """Matplotlib plotter API.""" 16 17 from typing import Optional 18 19 import matplotlib 20 import matplotlib.pyplot as plt 21 import numpy as np 22 from matplotlib.patches import Rectangle 23 24 from qiskit.visualization.exceptions import VisualizationError 25 from qiskit.visualization.pulse_v2 import core, drawings, types 26 from qiskit.visualization.pulse_v2.plotters.base_plotter import BasePlotter 27 28 29 class Mpl2DPlotter(BasePlotter): 30 """Matplotlib API for pulse drawer. 31 32 This plotter places canvas charts along y axis of 2D canvas with vertical offset. 33 Each chart is map to X-Y axis of the canvas. 34 """ 35 36 def __init__(self, canvas: core.DrawerCanvas, axis: Optional[plt.Axes] = None): 37 """Create new plotter. 38 39 Args: 40 canvas: Configured drawer canvas object. Canvas object should be updated 41 with `.update` method before set to the plotter API. 42 axis: Matplotlib axis object. When `axis` is provided, the plotter updates 43 given axis instead of creating and returning new matplotlib figure. 44 """ 45 super().__init__(canvas=canvas) 46 47 # calculate height of all charts 48 canvas_height = 0 49 for chart in self.canvas.charts: 50 if not chart.is_active and not self.canvas.formatter["control.show_empty_channel"]: 51 continue 52 canvas_height += chart.vmax - chart.vmin 53 # set min canvas_height size 54 canvas_height = max(canvas_height, 0.1) 55 56 if axis is None: 57 fig_h = canvas_height * self.canvas.formatter["general.fig_chart_height"] 58 fig_w = self.canvas.formatter["general.fig_width"] 59 60 self.figure = plt.figure(figsize=(fig_w, fig_h)) 61 self.ax = self.figure.add_subplot(1, 1, 1) 62 else: 63 self.figure = axis.figure 64 self.ax = axis 65 66 self.initialize_canvas() 67 68 def initialize_canvas(self): 69 """Format appearance of matplotlib canvas.""" 70 self.ax.set_facecolor(self.canvas.formatter["color.background"]) 71 72 # axis labels 73 self.ax.set_yticklabels([]) 74 self.ax.yaxis.set_tick_params(left=False) 75 76 def draw(self): 77 """Output drawings stored in canvas object.""" 78 # axis configuration 79 axis_config = self.canvas.layout["time_axis_map"]( 80 time_window=self.canvas.time_range, 81 axis_breaks=self.canvas.time_breaks, 82 dt=self.canvas.device.dt, 83 ) 84 85 current_y = 0 86 margin_y = self.canvas.formatter["margin.between_channel"] 87 for chart in self.canvas.charts: 88 if not chart.is_active and not self.canvas.formatter["control.show_empty_channel"]: 89 continue 90 current_y -= chart.vmax 91 for _, data in chart.collections: 92 # calculate scaling factor 93 if not data.ignore_scaling: 94 # product of channel-wise scaling and chart level scaling 95 scale = max(self.canvas.chan_scales.get(chan, 1.0) for chan in data.channels) 96 scale *= chart.scale 97 else: 98 scale = 1.0 99 100 x = data.xvals 101 y = scale * data.yvals + current_y 102 103 if isinstance(data, drawings.LineData): 104 # line object 105 if data.fill: 106 self.ax.fill_between(x, y1=y, y2=current_y * np.ones_like(y), **data.styles) 107 else: 108 self.ax.plot(x, y, **data.styles) 109 elif isinstance(data, drawings.TextData): 110 # text object 111 text = fr"${data.latex}$" if data.latex else data.text 112 # replace dynamic text 113 text = text.replace(types.DynamicString.SCALE, f"{chart.scale:.1f}") 114 self.ax.text(x=x[0], y=y[0], s=text, **data.styles) 115 elif isinstance(data, drawings.BoxData): 116 xy = x[0], y[0] 117 box = Rectangle( 118 xy, width=x[1] - x[0], height=y[1] - y[0], fill=True, **data.styles 119 ) 120 
self.ax.add_patch(box) 121 else: 122 VisualizationError( 123 "Data {name} is not supported " 124 "by {plotter}".format(name=data, plotter=self.__class__.__name__) 125 ) 126 # axis break 127 for pos in axis_config.axis_break_pos: 128 self.ax.text( 129 x=pos, 130 y=current_y, 131 s="//", 132 ha="center", 133 va="center", 134 zorder=self.canvas.formatter["layer.axis_label"], 135 fontsize=self.canvas.formatter["text_size.axis_break_symbol"], 136 rotation=180, 137 ) 138 139 # shift chart position 140 current_y += chart.vmin - margin_y 141 142 # remove the last margin 143 current_y += margin_y 144 145 y_max = self.canvas.formatter["margin.top"] 146 y_min = current_y - self.canvas.formatter["margin.bottom"] 147 148 # plot axis break line 149 for pos in axis_config.axis_break_pos: 150 self.ax.plot( 151 [pos, pos], 152 [y_min, y_max], 153 zorder=self.canvas.formatter["layer.fill_waveform"] + 1, 154 linewidth=self.canvas.formatter["line_width.axis_break"], 155 color=self.canvas.formatter["color.background"], 156 ) 157 158 # label 159 self.ax.set_xticks(list(axis_config.axis_map.keys())) 160 self.ax.set_xticklabels( 161 list(axis_config.axis_map.values()), 162 fontsize=self.canvas.formatter["text_size.axis_label"], 163 ) 164 self.ax.set_xlabel( 165 axis_config.label, fontsize=self.canvas.formatter["text_size.axis_label"] 166 ) 167 168 # boundary 169 if axis_config.window == (0, 0): 170 self.ax.set_xlim(0, 1) 171 else: 172 self.ax.set_xlim(*axis_config.window) 173 self.ax.set_ylim(y_min, y_max) 174 175 # title 176 if self.canvas.fig_title: 177 self.ax.text( 178 x=axis_config.window[0], 179 y=y_max, 180 s=self.canvas.fig_title, 181 ha="left", 182 va="bottom", 183 zorder=self.canvas.formatter["layer.fig_title"], 184 color=self.canvas.formatter["color.fig_title"], 185 size=self.canvas.formatter["text_size.fig_title"], 186 ) 187 188 def get_image(self, interactive: bool = False) -> matplotlib.pyplot.Figure: 189 """Get image data to return. 190 191 Args: 192 interactive: When set `True` show the circuit in a new window. 193 This depends on the matplotlib backend being used supporting this. 194 195 Returns: 196 Matplotlib figure data. 197 """ 198 if matplotlib.get_backend() in ["module://ipykernel.pylab.backend_inline", "nbAgg"]: 199 plt.close(self.figure) 200 201 if self.figure and interactive: 202 self.figure.show() 203 204 return self.figure 205 [end of qiskit/visualization/pulse_v2/plotters/matplotlib.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
Qiskit/qiskit
45f63bd0b7547721c43eb769bba84e4ae67adf72
Plotting a circuit with matplotlib interferes with global figure

### Information

- **Qiskit Terra version**: 0.18.0.dev0+98b4a1f
- **Python version**: 3.8
- **Operating system**: Windows 10

### What is the current behavior?

Plotting a circuit diagram with matplotlib resizes the figure window.

### Steps to reproduce the problem

Here is a minimal example. We create a matplotlib window with a specified size and layout and we require qiskit to draw into the specified axis `ax2`.

```
import matplotlib.pyplot as plt
from qiskit import QuantumCircuit

Fig=plt.figure(1, figsize=(4,6))
plt.clf()
ax1=Fig.add_subplot(1,2,1)
ax2=Fig.add_subplot(1,2,2)

ax1.plot([1,2,3], [4,7,4])
print(Fig.get_size_inches())

circ = QuantumCircuit(2, name='test')
for ii in range(10):
    circ.h(1)
    circ.cz(0,1)
circ.draw(ax=ax2, output='mpl')

print(Fig.get_size_inches())
```

After plotting, the figure window has been resized.

### What is the expected behavior?

Plotting on a specified axis should not interfere with the other axes on the figure or with the figure itself.

### Suggested solutions

The problem is at this line:
https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/visualization/matplotlib.py#L912
There, the global figure properties are updated.
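In line with the suggested solution (and with the direction the patch below takes, where a `_user_ax` flag is checked and the axis bounding box is read instead of resizing the figure), here is a minimal, self-contained sketch of the idea. The helper name `fit_drawing_scale` and the `base_fig_w`/`base_fig_h` parameters are illustrative only, not Qiskit API: when the caller supplies an `Axes`, the drawer derives its scale factor from that axis' own size rather than calling `set_size_inches` on the shared figure.

```python
# Illustrative sketch only: scale a drawing to fit a caller-supplied Axes
# without resizing the Figure that owns it.
import matplotlib.pyplot as plt


def fit_drawing_scale(ax, base_fig_w, base_fig_h):
    """Return a scale factor for a drawing whose natural size is
    ``base_fig_w`` x ``base_fig_h`` inches, fitted to ``ax``."""
    fig = ax.get_figure()
    # Axes bounding box converted from display units to inches.
    bbox = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
    # Scale the drawing to the axes; the shared figure is never resized,
    # so sibling axes keep their layout.
    return min(bbox.width / base_fig_w, bbox.height / base_fig_h)


fig = plt.figure(figsize=(4, 6))
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
ax1.plot([1, 2, 3], [4, 7, 4])

size_before = fig.get_size_inches().copy()
scale = fit_drawing_scale(ax2, base_fig_w=8.0, base_fig_h=3.0)
print(scale)
assert (fig.get_size_inches() == size_before).all()  # figure untouched
```

Deriving the scale from the axis bounding box rather than from the figure is what keeps `ax1` and the reported figure size in the reproduction above unchanged.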
2021-07-02T14:48:52Z
<patch> diff --git a/qiskit/visualization/matplotlib.py b/qiskit/visualization/matplotlib.py --- a/qiskit/visualization/matplotlib.py +++ b/qiskit/visualization/matplotlib.py @@ -14,7 +14,6 @@ """mpl circuit visualization backend.""" -import collections import itertools import re from warnings import warn @@ -30,79 +29,30 @@ HAS_PYLATEX = False from qiskit.circuit import ControlledGate +from qiskit.circuit import Measure +from qiskit.circuit.library.standard_gates import ( + SwapGate, + RZZGate, + U1Gate, + PhaseGate, + XGate, + ZGate, +) +from qiskit.extensions import Initialize from qiskit.visualization.qcstyle import load_style from qiskit.visualization.utils import get_gate_ctrl_text, get_param_str -from qiskit.exceptions import MissingOptionalLibraryError from qiskit.circuit.tools.pi_check import pi_check +from qiskit.exceptions import MissingOptionalLibraryError # Default gate width and height WID = 0.65 HIG = 0.65 -BASE_SIZE = 3.01 PORDER_GATE = 5 PORDER_LINE = 3 PORDER_REGLINE = 2 PORDER_GRAY = 3 PORDER_TEXT = 6 -PORDER_SUBP = 4 - - -class Anchor: - """Locate the anchors for the gates""" - - def __init__(self, reg_num, yind, fold): - self.__yind = yind - self.__fold = fold - self.__reg_num = reg_num - self.__gate_placed = [] - self.gate_anchor = 0 - - def plot_coord(self, index, gate_width, x_offset): - """Set the coord positions for an index""" - h_pos = index % self.__fold + 1 - # check folding - if self.__fold > 0: - if h_pos + (gate_width - 1) > self.__fold: - index += self.__fold - (h_pos - 1) - x_pos = index % self.__fold + 0.5 * gate_width + 0.04 - y_pos = self.__yind - (index // self.__fold) * (self.__reg_num + 1) - else: - x_pos = index + 0.5 * gate_width + 0.04 - y_pos = self.__yind - - # could have been updated, so need to store - self.gate_anchor = index - return x_pos + x_offset, y_pos - - def is_locatable(self, index, gate_width): - """Determine if a gate has been placed""" - hold = [index + i for i in range(gate_width)] - for p in hold: - if p in self.__gate_placed: - return False - return True - - def set_index(self, index, gate_width): - """Set the index for a gate""" - if self.__fold < 2: - _index = index - else: - h_pos = index % self.__fold + 1 - if h_pos + (gate_width - 1) > self.__fold: - _index = index + self.__fold - (h_pos - 1) + 1 - else: - _index = index - for ii in range(gate_width): - if _index + ii not in self.__gate_placed: - self.__gate_placed.append(_index + ii) - self.__gate_placed.sort() - - def get_index(self): - """Getter for the index""" - if self.__gate_placed: - return self.__gate_placed[-1] + 1 - return 0 class MatplotlibDrawer: @@ -137,19 +87,19 @@ def __init__( ) from matplotlib import patches - self.patches_mod = patches + self._patches_mod = patches from matplotlib import pyplot as plt - self.plt_mod = plt + self._plt_mod = plt if not HAS_PYLATEX: raise MissingOptionalLibraryError( libname="pylatexenc", name="MatplotlibDrawer", pip_install="pip install pylatexenc", ) - self._clbit = [] - self._qubit = [] - self._registers(clbits, qubits) + + # First load register and index info for the cregs and qregs, + # then add any bits which don't have registers associated with them. 
self._bit_locations = { bit: {"register": register, "index": index} for register in cregs + qregs @@ -159,10 +109,13 @@ def __init__( if bit not in self._bit_locations: self._bit_locations[bit] = {"register": None, "index": index} - self._qubit_dict = collections.OrderedDict() - self._clbit_dict = collections.OrderedDict() + self._qubit = qubits + self._clbit = clbits + self._qubit_dict = {} + self._clbit_dict = {} self._nodes = nodes self._scale = 1.0 if scale is None else scale + self._style, def_font_ratio = load_style(style) # If font/subfont ratio changes from default, have to scale width calculations for @@ -175,36 +128,40 @@ def __init__( self._fold = fold if self._fold < 2: self._fold = -1 + if ax is None: - self._return_fig = True + self._user_ax = False self._figure = plt.figure() self._figure.patch.set_facecolor(color=self._style["bg"]) self._ax = self._figure.add_subplot(111) else: - self._return_fig = False + self._user_ax = True self._ax = ax self._figure = ax.get_figure() self._ax.axis("off") self._ax.set_aspect("equal") self._ax.tick_params(labelbottom=False, labeltop=False, labelleft=False, labelright=False) + self._initial_state = initial_state self._cregbundle = cregbundle - self._set_cregbundle() self._global_phase = global_phase - self._n_lines = 0 - self._xmax = 0 - self._ymax = 0 - self._x_offset = 0 - self._reg_long_text = 0 - self._style["fs"] *= self._scale - self._style["sfs"] *= self._scale - self._lwidth15 = 1.5 * self._scale - self._lwidth2 = 2.0 * self._scale - self._gate_width = {} - - # these char arrays are for finding text_width when not - # using get_renderer method for the matplotlib backend + self._fs = self._style["fs"] + self._sfs = self._style["sfs"] + self._lwidth1 = 1.0 + self._lwidth15 = 1.5 + self._lwidth2 = 2.0 + self._x_offset = 0.0 + + # _data per node with 'width', 'gate_text', 'raw_gate_text', + # 'ctrl_text', 'param', q_xy', 'c_xy', and 'c_indxs' + # and colors 'fc', 'ec', 'lc', 'sc', 'gt', and 'tc' + self._data = {} + self._layer_widths = [] + self._q_anchors = {} + self._c_anchors = {} + + # _char_list for finding text_width of names, labels, and params self._char_list = { " ": (0.0958, 0.0583), "!": (0.1208, 0.0729), @@ -302,20 +259,108 @@ def __init__( "}": (0.1896, 0.1188), } - def _registers(self, clbit, qubit): - self._clbit = [] - for r in clbit: - self._clbit.append(r) - self._qubit = [] - for r in qubit: - self._qubit.append(r) - - def _set_cregbundle(self): - """Sets the cregbundle to False if there is any instruction that - needs access to individual clbit.""" + def draw(self, filename=None, verbose=False): + """Main entry point to 'matplotlib' ('mpl') drawer. Called from + ``visualization.circuit_drawer`` and from ``QuantumCircuit.draw`` through circuit_drawer. + """ + # All information for the drawing is first loaded into self._data for the gates and into + # self._qubit_dict and self._clbit_dict for the qubits, clbits, and wires, + # followed by the coordinates for each gate. + + # get layer widths + self._get_layer_widths() + + # load the _qubit_dict and _clbit_dict with register info + n_lines = self._get_reg_names_and_numbers() + + # load the coordinates for each gate and compute number of folds + max_anc = self._get_coords(n_lines) + num_folds = max(0, max_anc - 1) // self._fold if self._fold > 0 else 0 + + # The window size limits are computed, followed by one of the four possible ways + # of scaling the drawing. 
+ + # compute the window size + if max_anc > self._fold > 0: + xmax = self._fold + self._x_offset + 0.1 + ymax = (num_folds + 1) * (n_lines + 1) - 1 + else: + x_incr = 0.4 if not self._nodes else 0.9 + xmax = max_anc + 1 + self._x_offset - x_incr + ymax = n_lines + + xl = -self._style["margin"][0] + xr = xmax + self._style["margin"][1] + yb = -ymax - self._style["margin"][2] + 0.5 + yt = self._style["margin"][3] + 0.5 + self._ax.set_xlim(xl, xr) + self._ax.set_ylim(yb, yt) + + # update figure size and, for backward compatibility, + # need to scale by a default value equal to (self._fs * 3.01 / 72 / 0.65) + base_fig_w = (xr - xl) * 0.8361111 + base_fig_h = (yt - yb) * 0.8361111 + scale = self._scale + + # if user passes in an ax, this size takes priority over any other settings + if self._user_ax: + # from stackoverflow #19306510, get the bbox size for the ax and then reset scale + bbox = self._ax.get_window_extent().transformed(self._figure.dpi_scale_trans.inverted()) + scale = bbox.width / base_fig_w / 0.8361111 + + # if scale not 1.0, use this scale factor + elif self._scale != 1.0: + self._figure.set_size_inches(base_fig_w * self._scale, base_fig_h * self._scale) + + # if "figwidth" style param set, use this to scale + elif self._style["figwidth"] > 0.0: + # in order to get actual inches, need to scale by factor + adj_fig_w = self._style["figwidth"] * 1.282736 + self._figure.set_size_inches(adj_fig_w, adj_fig_w * base_fig_h / base_fig_w) + scale = adj_fig_w / base_fig_w + + # otherwise, display default size + else: + self._figure.set_size_inches(base_fig_w, base_fig_h) + + # drawing will scale with 'set_size_inches', but fonts and linewidths do not + if scale != 1.0: + self._fs *= scale + self._sfs *= scale + self._lwidth1 = 1.0 * scale + self._lwidth15 = 1.5 * scale + self._lwidth2 = 2.0 * scale + + # Once the scaling factor has been determined, the global phase, register names + # and numbers, wires, and gates are drawn + if self._global_phase: + self._plt_mod.text( + xl, yt, "Global Phase: %s" % pi_check(self._global_phase, output="mpl") + ) + self._draw_regs_wires(num_folds, xmax, n_lines, max_anc) + self._draw_ops(verbose) + + if filename: + self._figure.savefig( + filename, + dpi=self._style["dpi"], + bbox_inches="tight", + facecolor=self._figure.get_facecolor(), + ) + if not self._user_ax: + from matplotlib import get_backend + + if get_backend() in ["module://ipykernel.pylab.backend_inline", "nbAgg"]: + self._plt_mod.close(self._figure) + return self._figure + + def _get_layer_widths(self): + """Compute the layer_widths for the layers""" for layer in self._nodes: + widest_box = WID for node in layer: - if node.cargs and node.op.name != "measure": + op = node.op + if self._cregbundle and node.cargs and not isinstance(op, Measure): self._cregbundle = False warn( "Cregbundle set to False since an instruction needs to refer" @@ -323,13 +368,245 @@ def _set_cregbundle(self): RuntimeWarning, 2, ) - break + self._data[node] = {} + self._data[node]["width"] = WID + num_ctrl_qubits = 0 if not hasattr(op, "num_ctrl_qubits") else op.num_ctrl_qubits + if op._directive or isinstance(op, Measure): + self._data[node]["raw_gate_text"] = op.name + continue + + base_type = None if not hasattr(op, "base_gate") else op.base_gate + gate_text, ctrl_text, raw_gate_text = get_gate_ctrl_text( + op, "mpl", style=self._style + ) + self._data[node]["gate_text"] = gate_text + self._data[node]["ctrl_text"] = ctrl_text + self._data[node]["raw_gate_text"] = raw_gate_text + self._data[node]["param"] = "" + + # 
if single qubit, no params, and no labels, layer_width is 1 + if ( + (len(node.qargs) - num_ctrl_qubits) == 1 + and len(gate_text) < 3 + and (not hasattr(op, "params") or len(op.params) == 0) + and ctrl_text is None + ): + continue + + if isinstance(op, SwapGate) or isinstance(base_type, SwapGate): + continue + + # small increments at end of the 3 _get_text_width calls are for small + # spacing adjustments between gates + ctrl_width = self._get_text_width(ctrl_text, fontsize=self._sfs) - 0.05 + + # get param_width, but 0 for gates with array params + if ( + hasattr(op, "params") + and len(op.params) > 0 + and not any(isinstance(param, np.ndarray) for param in op.params) + ): + param = get_param_str(op, "mpl", ndigits=3) + if isinstance(op, Initialize): + param = f"$[{param.replace('$', '')}]$" + self._data[node]["param"] = param + raw_param_width = self._get_text_width(param, fontsize=self._sfs, param=True) + param_width = raw_param_width + 0.08 + else: + param_width = raw_param_width = 0.0 + + # get gate_width for sidetext symmetric gates + if isinstance(op, RZZGate) or isinstance(base_type, (U1Gate, PhaseGate, RZZGate)): + if isinstance(base_type, PhaseGate): + gate_text = "P" + raw_gate_width = ( + self._get_text_width(gate_text + " ()", fontsize=self._sfs) + + raw_param_width + ) + gate_width = (raw_gate_width + 0.08) * 1.5 + + # otherwise, standard gate or multiqubit gate + else: + raw_gate_width = self._get_text_width(gate_text, fontsize=self._fs) + gate_width = raw_gate_width + 0.10 + # add .21 for the qubit numbers on the left of the multibit gates + if len(node.qargs) - num_ctrl_qubits > 1: + gate_width += 0.21 + + box_width = max(gate_width, ctrl_width, param_width, WID) + if box_width > widest_box: + widest_box = box_width + self._data[node]["width"] = max(raw_gate_width, raw_param_width) + + self._layer_widths.append(int(widest_box) + 1) + + def _get_reg_names_and_numbers(self): + """Get all the info for drawing reg names and numbers""" + longest_reg_name_width = 0 + n_lines = 0 + initial_qbit = " |0>" if self._initial_state else "" + initial_cbit = " 0" if self._initial_state else "" + + def _fix_double_script(reg_name): + words = reg_name.split(" ") + words = [word.replace("_", r"\_") if word.count("_") > 1 else word for word in words] + words = [ + word.replace("^", r"\^{\ }") if word.count("^") > 1 else word for word in words + ] + reg_name = " ".join(words).replace(" ", "\\;") + return reg_name + + # quantum register + for ii, reg in enumerate(self._qubit): + register = self._bit_locations[reg]["register"] + index = self._bit_locations[reg]["index"] + + # show register name and number if more than 1 register + if len(self._qubit) > 1: + if self._layout is None: + qubit_name = f"${{{register.name}}}_{{{index}}}$" + else: + if self._layout[index]: + virt_bit = self._layout[index] + try: + virt_reg = next( + reg for reg in self._layout.get_registers() if virt_bit in reg + ) + qubit_name = "${{{name}}}_{{{index}}} \\mapsto {{{physical}}}$".format( + name=virt_reg.name, + index=virt_reg[:].index(virt_bit), + physical=index, + ) + + except StopIteration: + qubit_name = "${{{name}}} \\mapsto {{{physical}}}$".format( + name=virt_bit, physical=index + ) + else: + qubit_name = f"${{{index}}}$" else: - continue - break + qubit_name = f"{register.name}" + + qubit_name = _fix_double_script(qubit_name) + initial_qbit + text_width = self._get_text_width(qubit_name, self._fs) * 1.15 + + if text_width > longest_reg_name_width: + longest_reg_name_width = text_width + pos = -ii + 
self._qubit_dict[ii] = { + "y": pos, + "reg_name": qubit_name, + "index": index, + "group": register, + } + n_lines += 1 + + # classical register + if self._clbit: + n_clbit = self._clbit.copy() + n_clbit.pop(0) + idx = 0 + y_off = -len(self._qubit) + for ii, (reg, nreg) in enumerate(itertools.zip_longest(self._clbit, n_clbit)): + pos = y_off - idx + register = self._bit_locations[reg]["register"] + index = self._bit_locations[reg]["index"] + + # if cregbundle show non-math reg name, if only 1 clbit, show math name + # else math name and number + if self._cregbundle: + clbit_name = f"{register.name}" + else: + clbit_name = f"${register.name}_{index}$" + clbit_name = _fix_double_script(clbit_name) + initial_cbit + text_width = self._get_text_width(register.name, self._fs) * 1.15 + if text_width > longest_reg_name_width: + longest_reg_name_width = text_width + self._clbit_dict[ii] = { + "y": pos, + "reg_name": clbit_name, + "index": index, + "group": register, + } + if self._cregbundle and not ( + not nreg or register != self._bit_locations[nreg]["register"] + ): + continue + + n_lines += 1 + idx += 1 + + self._x_offset = -1.2 + longest_reg_name_width + return n_lines + + def _get_coords(self, n_lines): + """Load all the coordinate info needed to place the gates on the drawing""" + + # create the anchor arrays + for key, qubit in self._qubit_dict.items(): + self._q_anchors[key] = Anchor(reg_num=n_lines, yind=qubit["y"], fold=self._fold) + for key, clbit in self._clbit_dict.items(): + self._c_anchors[key] = Anchor(reg_num=n_lines, yind=clbit["y"], fold=self._fold) + + # get all the necessary coordinates for placing gates on the wires + prev_anc = -1 + for i, layer in enumerate(self._nodes): + layer_width = self._layer_widths[i] + this_anc = prev_anc + 1 + for node in layer: + # get qubit index + q_indxs = [] + for qarg in node.qargs: + for index, reg in self._qubit_dict.items(): + if ( + reg["group"] == self._bit_locations[qarg]["register"] + and reg["index"] == self._bit_locations[qarg]["index"] + ): + q_indxs.append(index) + break + + # get clbit index + c_indxs = [] + for carg in node.cargs: + for index, reg in self._clbit_dict.items(): + if ( + reg["group"] == self._bit_locations[carg]["register"] + and reg["index"] == self._bit_locations[carg]["index"] + ): + c_indxs.append(index) + break + + # only add the gate to the anchors if it is going to be plotted. 
+ if self._plot_barriers or not node.op._directive: + for ii in q_indxs: + self._q_anchors[ii].set_index(this_anc, layer_width) + + # qubit coordinate + self._data[node]["q_xy"] = [ + self._q_anchors[ii].plot_coord(this_anc, layer_width, self._x_offset) + for ii in q_indxs + ] + # clbit coordinate + self._data[node]["c_xy"] = [ + self._c_anchors[ii].plot_coord(this_anc, layer_width, self._x_offset) + for ii in c_indxs + ] + # update index based on the value from plotting + this_anc = self._q_anchors[q_indxs[0]].gate_anchor + self._data[node]["c_indxs"] = c_indxs + + # adjust the column if there have been barriers encountered, but not plotted + barrier_offset = 0 + if not self._plot_barriers: + # only adjust if everything in the layer wasn't plotted + barrier_offset = -1 if all(nd.op._directive for nd in layer) else 0 + prev_anc = this_anc + layer_width + barrier_offset - 1 + + anchors = [self._q_anchors[ii].get_index() for ii in self._qubit_dict] + return max(anchors) if anchors else 0 - # This computes the width of a string in the default font def _get_text_width(self, text, fontsize, param=False): + """Compute the width of a string in the default font""" if not text: return 0.0 @@ -354,7 +631,7 @@ def _get_text_width(self, text, fontsize, param=False): if param: text = text.replace("-", "+") - f = 0 if fontsize == self._style["fs"] else 1 + f = 0 if fontsize == self._fs else 1 sum_text = 0.0 for c in text: try: @@ -366,264 +643,296 @@ def _get_text_width(self, text, fontsize, param=False): sum_text *= self._subfont_factor return sum_text - def _get_colors(self, op, gate_text): - base_name = None if not hasattr(op, "base_gate") else op.base_gate.name - color = None - if gate_text in self._style["dispcol"]: - color = self._style["dispcol"][gate_text] - elif op.name in self._style["dispcol"]: - color = self._style["dispcol"][op.name] - if color is not None: - # Backward compatibility for style dict using 'displaycolor' with - # gate color and no text color, so test for str first - if isinstance(color, str): - fc = color - gt = self._style["gt"] - else: - fc = color[0] - gt = color[1] - # Treat special case of classical gates in iqx style by making all - # controlled gates of x, dcx, and swap the classical gate color - elif self._style["name"] == "iqx" and base_name in ["x", "dcx", "swap"]: - color = self._style["dispcol"][base_name] - if isinstance(color, str): - fc = color - gt = self._style["gt"] - else: - fc = color[0] - gt = color[1] - else: - fc = self._style["gc"] - gt = self._style["gt"] - - if self._style["name"] == "bw": - ec = self._style["ec"] - lc = self._style["lc"] - else: - ec = fc - lc = fc - # Subtext needs to be same color as gate text - sc = gt - return fc, ec, gt, self._style["tc"], sc, lc - - def _multiqubit_gate( - self, node, xy, c_xy=None, fc=None, ec=None, gt=None, sc=None, text="", subtext="" - ): - xpos = min(x[0] for x in xy) - ypos = min(y[1] for y in xy) - ypos_max = max(y[1] for y in xy) - if c_xy: - cxpos = min(x[0] for x in c_xy) - cypos = min(y[1] for y in c_xy) - ypos = min(ypos, cypos) - fs = self._style["fs"] - sfs = self._style["sfs"] - - wid = max(self._gate_width[node] + 0.21, WID) + def _draw_regs_wires(self, num_folds, xmax, n_lines, max_anc): + """Draw the register names and numbers, wires, and vertical lines at the ends""" - qubit_span = abs(ypos) - abs(ypos_max) + 1 - height = HIG + (qubit_span - 1) - box = self.patches_mod.Rectangle( - xy=(xpos - 0.5 * wid, ypos - 0.5 * HIG), - width=wid, - height=height, - fc=fc, - ec=ec, - 
linewidth=self._lwidth15, - zorder=PORDER_GATE, - ) - self._ax.add_patch(box) - - # annotate inputs - for bit, y in enumerate([x[1] for x in xy]): - self._ax.text( - xpos + 0.07 - 0.5 * wid, - y, - str(bit), - ha="left", - va="center", - fontsize=fs, - color=gt, - clip_on=True, - zorder=PORDER_TEXT, - ) - if c_xy: - # annotate classical inputs - for bit, y in enumerate([x[1] for x in c_xy]): + for fold_num in range(num_folds + 1): + # quantum registers + for qubit in self._qubit_dict.values(): + qubit_name = qubit["reg_name"] + y = qubit["y"] - fold_num * (n_lines + 1) self._ax.text( - cxpos + 0.07 - 0.5 * wid, + self._x_offset - 0.2, y, - str(bit), - ha="left", + qubit_name, + ha="right", va="center", - fontsize=fs, - color=gt, + fontsize=1.25 * self._fs, + color=self._style["tc"], clip_on=True, zorder=PORDER_TEXT, ) - if text: - if subtext: + # draw the qubit wire + self._line([self._x_offset, y], [xmax, y], zorder=PORDER_REGLINE) + + # classical registers + this_clbit_dict = {} + for clbit in self._clbit_dict.values(): + clbit_name = clbit["reg_name"] + y = clbit["y"] - fold_num * (n_lines + 1) + if y not in this_clbit_dict.keys(): + this_clbit_dict[y] = {"val": 1, "reg_name": clbit_name} + else: + this_clbit_dict[y]["val"] += 1 + + for y, this_clbit in this_clbit_dict.items(): + # cregbundle + if this_clbit["val"] > 1: + self._ax.plot( + [self._x_offset + 0.2, self._x_offset + 0.3], + [y - 0.1, y + 0.1], + color=self._style["cc"], + zorder=PORDER_LINE, + ) + self._ax.text( + self._x_offset + 0.1, + y + 0.1, + str(this_clbit["val"]), + ha="left", + va="bottom", + fontsize=0.8 * self._fs, + color=self._style["tc"], + clip_on=True, + zorder=PORDER_TEXT, + ) self._ax.text( - xpos + 0.11, - ypos + 0.4 * height, - text, - ha="center", + self._x_offset - 0.2, + y, + this_clbit["reg_name"], + ha="right", va="center", - fontsize=fs, - color=gt, + fontsize=1.25 * self._fs, + color=self._style["tc"], clip_on=True, zorder=PORDER_TEXT, ) - self._ax.text( - xpos + 0.11, - ypos + 0.2 * height, - subtext, - ha="center", - va="center", - fontsize=sfs, - color=sc, - clip_on=True, - zorder=PORDER_TEXT, + # draw the clbit wire + self._line( + [self._x_offset, y], + [xmax, y], + lc=self._style["cc"], + ls=self._style["cline"], + zorder=PORDER_REGLINE, ) - else: + + # lf vertical line at either end + feedline_r = num_folds > 0 and num_folds > fold_num + feedline_l = fold_num > 0 + if feedline_l or feedline_r: + xpos_l = self._x_offset - 0.01 + xpos_r = self._fold + self._x_offset + 0.1 + ypos1 = -fold_num * (n_lines + 1) + ypos2 = -(fold_num + 1) * (n_lines) - fold_num + 1 + if feedline_l: + self._ax.plot( + [xpos_l, xpos_l], + [ypos1, ypos2], + color=self._style["lc"], + linewidth=self._lwidth15, + zorder=PORDER_LINE, + ) + if feedline_r: + self._ax.plot( + [xpos_r, xpos_r], + [ypos1, ypos2], + color=self._style["lc"], + linewidth=self._lwidth15, + zorder=PORDER_LINE, + ) + + # draw anchor index number + if self._style["index"]: + for layer_num in range(max_anc): + if self._fold > 0: + x_coord = layer_num % self._fold + self._x_offset + 0.53 + y_coord = -(layer_num // self._fold) * (n_lines + 1) + 0.65 + else: + x_coord = layer_num + self._x_offset + 0.53 + y_coord = 0.65 self._ax.text( - xpos + 0.11, - ypos + 0.5 * (qubit_span - 1), - text, + x_coord, + y_coord, + str(layer_num + 1), ha="center", va="center", - fontsize=fs, - color=gt, + fontsize=self._sfs, + color=self._style["tc"], clip_on=True, zorder=PORDER_TEXT, - wrap=True, ) - def _gate(self, node, xy, fc=None, ec=None, gt=None, sc=None, text="", 
subtext=""): - xpos, ypos = xy - fs = self._style["fs"] - sfs = self._style["sfs"] + def _draw_ops(self, verbose=False): + """Draw the gates in the circuit""" + prev_anc = -1 + for i, layer in enumerate(self._nodes): + layer_width = self._layer_widths[i] + this_anc = prev_anc + 1 - wid = max(self._gate_width[node], WID) + # draw the gates in this layer + for node in layer: + op = node.op + self._get_colors(node) - box = self.patches_mod.Rectangle( - xy=(xpos - 0.5 * wid, ypos - 0.5 * HIG), - width=wid, - height=HIG, - fc=fc, - ec=ec, - linewidth=self._lwidth15, - zorder=PORDER_GATE, - ) - self._ax.add_patch(box) + if verbose: + print(op) - if text: - if subtext: - self._ax.text( - xpos, - ypos + 0.15 * HIG, - text, - ha="center", - va="center", - fontsize=fs, - color=gt, - clip_on=True, - zorder=PORDER_TEXT, - ) - self._ax.text( - xpos, - ypos - 0.3 * HIG, - subtext, - ha="center", - va="center", - fontsize=sfs, - color=sc, - clip_on=True, - zorder=PORDER_TEXT, - ) - else: - self._ax.text( - xpos, - ypos, - text, - ha="center", - va="center", - fontsize=fs, - color=gt, - clip_on=True, - zorder=PORDER_TEXT, - ) + # add conditional + if op.condition: + cond_xy = [ + self._c_anchors[ii].plot_coord(this_anc, layer_width, self._x_offset) + for ii in self._clbit_dict + ] + self._condition(node, cond_xy) - def _sidetext(self, node, xy, tc=None, text=""): - xpos, ypos = xy + # draw measure + if isinstance(op, Measure): + self._measure(node) - # 0.11 = the initial gap, add 1/2 text width to place on the right - text_width = self._gate_width[node] - xp = xpos + 0.11 + text_width / 2 + # draw barriers, snapshots, etc. + elif op._directive: + if self._plot_barriers: + self._barrier(node) + + # draw single qubit gates + elif len(self._data[node]["q_xy"]) == 1 and not node.cargs: + self._gate(node) + + # draw controlled gates + elif isinstance(op, ControlledGate): + self._control_gate(node) + + # draw multi-qubit gate as final default + else: + self._multiqubit_gate(node) + + # adjust the column if there have been barriers encountered, but not plotted + barrier_offset = 0 + if not self._plot_barriers: + # only adjust if everything in the layer wasn't plotted + barrier_offset = -1 if all(nd.op._directive for nd in layer) else 0 + + prev_anc = this_anc + layer_width + barrier_offset - 1 + + def _get_colors(self, node): + """Get all the colors needed for drawing the circuit""" + op = node.op + base_name = None if not hasattr(op, "base_gate") else op.base_gate.name + color = None + if self._data[node]["raw_gate_text"] in self._style["dispcol"]: + color = self._style["dispcol"][self._data[node]["raw_gate_text"]] + elif op.name in self._style["dispcol"]: + color = self._style["dispcol"][op.name] + if color is not None: + # Backward compatibility for style dict using 'displaycolor' with + # gate color and no text color, so test for str first + if isinstance(color, str): + fc = color + gt = self._style["gt"] + else: + fc = color[0] + gt = color[1] + # Treat special case of classical gates in iqx style by making all + # controlled gates of x, dcx, and swap the classical gate color + elif self._style["name"] == "iqx" and base_name in ["x", "dcx", "swap"]: + color = self._style["dispcol"][base_name] + if isinstance(color, str): + fc = color + gt = self._style["gt"] + else: + fc = color[0] + gt = color[1] + else: + fc = self._style["gc"] + gt = self._style["gt"] + + if self._style["name"] == "bw": + ec = self._style["ec"] + lc = self._style["lc"] + else: + ec = fc + lc = fc + # Subtext needs to be same color as 
gate text + sc = gt + self._data[node]["fc"] = fc + self._data[node]["ec"] = ec + self._data[node]["gt"] = gt + self._data[node]["tc"] = self._style["tc"] + self._data[node]["sc"] = sc + self._data[node]["lc"] = lc + + def _condition(self, node, cond_xy): + """Add a conditional to a gate""" + mask = 0 + qubit_b = min(self._data[node]["q_xy"], key=lambda xy: xy[1]) + for index, cbit in enumerate(self._clbit): + if self._bit_locations[cbit]["register"] == node.op.condition[0]: + mask |= 1 << index + val = node.op.condition[1] + + # cbit list to consider + fmt_c = f"{{:0{len(cond_xy)}b}}" + cmask = list(fmt_c.format(mask))[::-1] + + # value + fmt_v = f"{{:0{cmask.count('1')}b}}" + vlist = list(fmt_v.format(val)) + if not self._reverse_bits: + vlist = vlist[::-1] + + # plot conditionals + v_ind = 0 + xy_plot = [] + for xy, m in zip(cond_xy, cmask): + if m == "1": + if xy not in xy_plot: + if vlist[v_ind] == "1" or self._cregbundle: + fc = self._style["lc"] + else: + fc = self._style["bg"] + box = self._patches_mod.Circle( + xy=xy, + radius=WID * 0.15, + fc=fc, + ec=self._style["lc"], + linewidth=self._lwidth15, + zorder=PORDER_GATE, + ) + self._ax.add_patch(box) + xy_plot.append(xy) + v_ind += 1 + clbit_b = min(xy_plot, key=lambda xy: xy[1]) + xpos, ypos = clbit_b self._ax.text( - xp, - ypos + HIG, - text, + xpos, + ypos - 0.3 * HIG, + hex(val), ha="center", va="top", - fontsize=self._style["sfs"], - color=tc, + fontsize=self._sfs, + color=self._style["tc"], clip_on=True, zorder=PORDER_TEXT, ) + self._line(qubit_b, clbit_b, lc=self._style["cc"], ls=self._style["cline"]) - def _line(self, xy0, xy1, lc=None, ls=None, zorder=PORDER_LINE): - x0, y0 = xy0 - x1, y1 = xy1 - linecolor = self._style["lc"] if lc is None else lc - linestyle = "solid" if ls is None else ls - - if linestyle == "doublet": - theta = np.arctan2(np.abs(x1 - x0), np.abs(y1 - y0)) - dx = 0.05 * WID * np.cos(theta) - dy = 0.05 * WID * np.sin(theta) - self._ax.plot( - [x0 + dx, x1 + dx], - [y0 + dy, y1 + dy], - color=linecolor, - linewidth=self._lwidth2, - linestyle="solid", - zorder=zorder, - ) - self._ax.plot( - [x0 - dx, x1 - dx], - [y0 - dy, y1 - dy], - color=linecolor, - linewidth=self._lwidth2, - linestyle="solid", - zorder=zorder, - ) - else: - self._ax.plot( - [x0, x1], - [y0, y1], - color=linecolor, - linewidth=self._lwidth2, - linestyle=linestyle, - zorder=zorder, - ) - - def _measure(self, node, qxy, cxy, cid, fc=None, ec=None, gt=None, sc=None): - qx, qy = qxy - cx, cy = cxy + def _measure(self, node): + """Draw the measure symbol and the line to the clbit""" + qx, qy = self._data[node]["q_xy"][0] + cx, cy = self._data[node]["c_xy"][0] + cid = self._clbit_dict[self._data[node]["c_indxs"][0]]["index"] # draw gate box - self._gate(node, qxy, fc=fc, ec=ec, gt=gt, sc=sc) + self._gate(node) # add measure symbol - arc = self.patches_mod.Arc( + arc = self._patches_mod.Arc( xy=(qx, qy - 0.15 * HIG), width=WID * 0.7, height=HIG * 0.7, theta1=0, theta2=180, fill=False, - ec=gt, + ec=self._data[node]["gt"], linewidth=self._lwidth2, zorder=PORDER_GATE, ) @@ -631,13 +940,18 @@ def _measure(self, node, qxy, cxy, cid, fc=None, ec=None, gt=None, sc=None): self._ax.plot( [qx, qx + 0.35 * WID], [qy - 0.15 * HIG, qy + 0.20 * HIG], - color=gt, + color=self._data[node]["gt"], linewidth=self._lwidth2, zorder=PORDER_GATE, ) # arrow - self._line(qxy, [cx, cy + 0.35 * WID], lc=self._style["cc"], ls=self._style["cline"]) - arrowhead = self.patches_mod.Polygon( + self._line( + self._data[node]["q_xy"][0], + [cx, cy + 0.35 * WID], + 
lc=self._style["cc"], + ls=self._style["cline"], + ) + arrowhead = self._patches_mod.Polygon( ( (cx - 0.20 * WID, cy + 0.35 * WID), (cx + 0.20 * WID, cy + 0.35 * WID), @@ -655,66 +969,216 @@ def _measure(self, node, qxy, cxy, cid, fc=None, ec=None, gt=None, sc=None): str(cid), ha="left", va="bottom", - fontsize=0.8 * self._style["fs"], + fontsize=0.8 * self._fs, color=self._style["tc"], clip_on=True, zorder=PORDER_TEXT, ) - def _conditional(self, xy, istrue=False): + def _barrier(self, node): + """Draw a barrier""" + for xy in self._data[node]["q_xy"]: + xpos, ypos = xy + self._ax.plot( + [xpos, xpos], + [ypos + 0.5, ypos - 0.5], + linewidth=self._lwidth1, + linestyle="dashed", + color=self._style["lc"], + zorder=PORDER_TEXT, + ) + box = self._patches_mod.Rectangle( + xy=(xpos - (0.3 * WID), ypos - 0.5), + width=0.6 * WID, + height=1, + fc=self._style["bc"], + ec=None, + alpha=0.6, + linewidth=self._lwidth15, + zorder=PORDER_GRAY, + ) + self._ax.add_patch(box) + + def _gate(self, node, xy=None): + """Draw a 1-qubit gate""" + if xy is None: + xy = self._data[node]["q_xy"][0] xpos, ypos = xy + wid = max(self._data[node]["width"], WID) - fc = self._style["lc"] if istrue else self._style["bg"] - box = self.patches_mod.Circle( - xy=(xpos, ypos), - radius=WID * 0.15, - fc=fc, - ec=self._style["lc"], + box = self._patches_mod.Rectangle( + xy=(xpos - 0.5 * wid, ypos - 0.5 * HIG), + width=wid, + height=HIG, + fc=self._data[node]["fc"], + ec=self._data[node]["ec"], linewidth=self._lwidth15, zorder=PORDER_GATE, ) self._ax.add_patch(box) - def _ctrl_qubit(self, xy, fc=None, ec=None, tc=None, text="", text_top=None): - xpos, ypos = xy - box = self.patches_mod.Circle( - xy=(xpos, ypos), - radius=WID * 0.15, - fc=fc, - ec=ec, - linewidth=self._lwidth15, - zorder=PORDER_GATE, - ) - self._ax.add_patch(box) - # display the control label at the top or bottom if there is one - if text_top is True: - self._ax.text( - xpos, - ypos + 0.7 * HIG, - text, - ha="center", - va="top", - fontsize=self._style["sfs"], - color=tc, - clip_on=True, - zorder=PORDER_TEXT, - ) - elif text_top is False: + if "gate_text" in self._data[node]: + gate_ypos = ypos + if "param" in self._data[node] and self._data[node]["param"] != "": + gate_ypos = ypos + 0.15 * HIG + self._ax.text( + xpos, + ypos - 0.3 * HIG, + self._data[node]["param"], + ha="center", + va="center", + fontsize=self._sfs, + color=self._data[node]["sc"], + clip_on=True, + zorder=PORDER_TEXT, + ) self._ax.text( xpos, - ypos - 0.3 * HIG, - text, + gate_ypos, + self._data[node]["gate_text"], ha="center", - va="top", - fontsize=self._style["sfs"], - color=tc, + va="center", + fontsize=self._fs, + color=self._data[node]["gt"], clip_on=True, zorder=PORDER_TEXT, ) + def _multiqubit_gate(self, node, xy=None): + """Draw a gate covering more than one qubit""" + op = node.op + if xy is None: + xy = self._data[node]["q_xy"] + + # Swap gate + if isinstance(op, SwapGate): + self._swap(xy, self._data[node]["lc"]) + return + + # RZZ Gate + elif isinstance(op, RZZGate): + self._symmetric_gate(node, RZZGate) + return + + c_xy = self._data[node]["c_xy"] + xpos = min([x[0] for x in xy]) + ypos = min([y[1] for y in xy]) + ypos_max = max([y[1] for y in xy]) + if c_xy: + cxpos = min([x[0] for x in c_xy]) + cypos = min([y[1] for y in c_xy]) + ypos = min(ypos, cypos) + + wid = max(self._data[node]["width"] + 0.21, WID) + + qubit_span = abs(ypos) - abs(ypos_max) + 1 + height = HIG + (qubit_span - 1) + box = self._patches_mod.Rectangle( + xy=(xpos - 0.5 * wid, ypos - 0.5 * HIG), + width=wid, + 
height=height, + fc=self._data[node]["fc"], + ec=self._data[node]["ec"], + linewidth=self._lwidth15, + zorder=PORDER_GATE, + ) + self._ax.add_patch(box) + + # annotate inputs + for bit, y in enumerate([x[1] for x in xy]): + self._ax.text( + xpos + 0.07 - 0.5 * wid, + y, + str(bit), + ha="left", + va="center", + fontsize=self._fs, + color=self._data[node]["gt"], + clip_on=True, + zorder=PORDER_TEXT, + ) + if c_xy: + # annotate classical inputs + for bit, y in enumerate([x[1] for x in c_xy]): + self._ax.text( + cxpos + 0.07 - 0.5 * wid, + y, + str(bit), + ha="left", + va="center", + fontsize=self._fs, + color=self._data[node]["gt"], + clip_on=True, + zorder=PORDER_TEXT, + ) + if "gate_text" in self._data[node] and self._data[node]["gate_text"] != "": + gate_ypos = ypos + 0.5 * (qubit_span - 1) + if "param" in self._data[node] and self._data[node]["param"] != "": + gate_ypos = ypos + 0.4 * height + self._ax.text( + xpos + 0.11, + ypos + 0.2 * height, + self._data[node]["param"], + ha="center", + va="center", + fontsize=self._sfs, + color=self._data[node]["sc"], + clip_on=True, + zorder=PORDER_TEXT, + ) + self._ax.text( + xpos + 0.11, + gate_ypos, + self._data[node]["gate_text"], + ha="center", + va="center", + fontsize=self._fs, + color=self._data[node]["gt"], + clip_on=True, + zorder=PORDER_TEXT, + ) + + def _control_gate(self, node): + """Draw a controlled gate""" + op = node.op + base_type = None if not hasattr(op, "base_gate") else op.base_gate + xy = self._data[node]["q_xy"] + qubit_b = min(xy, key=lambda xy: xy[1]) + qubit_t = max(xy, key=lambda xy: xy[1]) + num_ctrl_qubits = op.num_ctrl_qubits + num_qargs = len(xy) - num_ctrl_qubits + self._set_ctrl_bits( + op.ctrl_state, + num_ctrl_qubits, + xy, + ec=self._data[node]["ec"], + tc=self._data[node]["tc"], + text=self._data[node]["ctrl_text"], + qargs=node.qargs, + ) + self._line(qubit_b, qubit_t, lc=self._data[node]["lc"]) + + if isinstance(op, RZZGate) or isinstance(base_type, (U1Gate, PhaseGate, ZGate, RZZGate)): + self._symmetric_gate(node, base_type) + + elif num_qargs == 1 and isinstance(base_type, XGate): + tgt_color = self._style["dispcol"]["target"] + tgt = tgt_color if isinstance(tgt_color, str) else tgt_color[0] + self._x_tgt_qubit(xy[num_ctrl_qubits], ec=self._data[node]["ec"], ac=tgt) + + elif num_qargs == 1: + self._gate(node, xy[num_ctrl_qubits:][0]) + + elif isinstance(base_type, SwapGate): + self._swap(xy[num_ctrl_qubits:], self._data[node]["lc"]) + + else: + self._multiqubit_gate(node, xy[num_ctrl_qubits:]) + def _set_ctrl_bits( self, ctrl_state, num_ctrl_qubits, qbit, ec=None, tc=None, text="", qargs=None ): + """Determine which qubits are controls and whether they are open or closed""" # place the control label at the top or bottom of controls if text: qlist = [self._bit_locations[qubit]["index"] for qubit in qargs] @@ -736,10 +1200,40 @@ def _set_ctrl_bits( text_top = False self._ctrl_qubit(qbit[i], fc=fc_open_close, ec=ec, tc=tc, text=text, text_top=text_top) + def _ctrl_qubit(self, xy, fc=None, ec=None, tc=None, text="", text_top=None): + """Draw a control circle and if top or bottom control, draw control label""" + xpos, ypos = xy + box = self._patches_mod.Circle( + xy=(xpos, ypos), + radius=WID * 0.15, + fc=fc, + ec=ec, + linewidth=self._lwidth15, + zorder=PORDER_GATE, + ) + self._ax.add_patch(box) + if text_top is None: + return + + # display the control label at the top or bottom if there is one + ctrl_ypos = ypos + 0.7 * HIG if text_top else ypos - 0.3 * HIG + self._ax.text( + xpos, + ctrl_ypos, + text, + 
ha="center", + va="top", + fontsize=self._sfs, + color=tc, + clip_on=True, + zorder=PORDER_TEXT, + ) + def _x_tgt_qubit(self, xy, ec=None, ac=None): + """Draw the cnot target symbol""" linewidth = self._lwidth2 xpos, ypos = xy - box = self.patches_mod.Circle( + box = self._patches_mod.Circle( xy=(xpos, ypos), radius=HIG * 0.35, fc=ec, @@ -765,7 +1259,42 @@ def _x_tgt_qubit(self, xy, ec=None, ac=None): zorder=PORDER_GATE + 1, ) + def _symmetric_gate(self, node, base_type): + """Draw symmetric gates for cz, cu1, cp, and rzz""" + op = node.op + xy = self._data[node]["q_xy"] + qubit_b = min(xy, key=lambda xy: xy[1]) + qubit_t = max(xy, key=lambda xy: xy[1]) + base_type = None if not hasattr(op, "base_gate") else op.base_gate + ec = self._data[node]["ec"] + tc = self._data[node]["tc"] + lc = self._data[node]["lc"] + + # cz and mcz gates + if not isinstance(op, ZGate) and isinstance(base_type, ZGate): + num_ctrl_qubits = op.num_ctrl_qubits + self._ctrl_qubit(xy[-1], fc=ec, ec=ec, tc=tc) + self._line(qubit_b, qubit_t, lc=lc, zorder=PORDER_LINE + 1) + + # cu1, cp, rzz, and controlled rzz gates (sidetext gates) + elif isinstance(op, RZZGate) or isinstance(base_type, (U1Gate, PhaseGate, RZZGate)): + num_ctrl_qubits = 0 if isinstance(op, RZZGate) else op.num_ctrl_qubits + gate_text = "P" if isinstance(base_type, PhaseGate) else self._data[node]["gate_text"] + + self._ctrl_qubit(xy[num_ctrl_qubits], fc=ec, ec=ec, tc=tc) + if not isinstance(base_type, (U1Gate, PhaseGate)): + self._ctrl_qubit(xy[num_ctrl_qubits + 1], fc=ec, ec=ec, tc=tc) + self._sidetext(node, qubit_b, tc=tc, text=f"{gate_text} ({self._data[node]['param']})") + self._line(qubit_b, qubit_t, lc=lc) + def _swap(self, xy, color=None): + """Draw a Swap gate""" + self._swap_cross(xy[0], color=color) + self._swap_cross(xy[1], color=color) + self._line(xy[0], xy[1], lc=color) + + def _swap_cross(self, xy, color=None): + """Draw the Swap cross symbol""" xpos, ypos = xy self._ax.plot( @@ -783,695 +1312,111 @@ def _swap(self, xy, color=None): zorder=PORDER_LINE + 1, ) - def _barrier(self, config): - xys = config["coord"] - for xy in xys: - xpos, ypos = xy - self._ax.plot( - [xpos, xpos], - [ypos + 0.5, ypos - 0.5], - linewidth=self._scale, - linestyle="dashed", - color=self._style["lc"], - zorder=PORDER_TEXT, - ) - box = self.patches_mod.Rectangle( - xy=(xpos - (0.3 * WID), ypos - 0.5), - width=0.6 * WID, - height=1, - fc=self._style["bc"], - ec=None, - alpha=0.6, - linewidth=self._lwidth15, - zorder=PORDER_GRAY, - ) - self._ax.add_patch(box) + def _sidetext(self, node, xy, tc=None, text=""): + """Draw the sidetext for symmetric gates""" + xpos, ypos = xy - def draw(self, filename=None, verbose=False): - """Draw method called from circuit_drawer""" - self._draw_regs() - self._draw_ops(verbose) - _xl = -self._style["margin"][0] - _xr = self._xmax + self._style["margin"][1] - _yb = -self._ymax - self._style["margin"][2] + 1 - 0.5 - _yt = self._style["margin"][3] + 0.5 - self._ax.set_xlim(_xl, _xr) - self._ax.set_ylim(_yb, _yt) - - # update figure size - fig_w = _xr - _xl - fig_h = _yt - _yb - if self._style["figwidth"] < 0.0: - self._style["figwidth"] = fig_w * BASE_SIZE * self._style["fs"] / 72 / WID - self._figure.set_size_inches( - self._style["figwidth"], self._style["figwidth"] * fig_h / fig_w + # 0.11 = the initial gap, add 1/2 text width to place on the right + xp = xpos + 0.11 + self._data[node]["width"] / 2 + self._ax.text( + xp, + ypos + HIG, + text, + ha="center", + va="top", + fontsize=self._sfs, + color=tc, + clip_on=True, + 
zorder=PORDER_TEXT, ) - if self._global_phase: - self.plt_mod.text( - _xl, _yt, "Global Phase: %s" % pi_check(self._global_phase, output="mpl") - ) - - if filename: - self._figure.savefig( - filename, - dpi=self._style["dpi"], - bbox_inches="tight", - facecolor=self._figure.get_facecolor(), - ) - if self._return_fig: - from matplotlib import get_backend - - if get_backend() in ["module://ipykernel.pylab.backend_inline", "nbAgg"]: - self.plt_mod.close(self._figure) - return self._figure - - def _draw_regs(self): - longest_reg_name_width = 0 - initial_qbit = " |0>" if self._initial_state else "" - initial_cbit = " 0" if self._initial_state else "" - - def _fix_double_script(reg_name): - words = reg_name.split(" ") - words = [word.replace("_", r"\_") if word.count("_") > 1 else word for word in words] - words = [ - word.replace("^", r"\^{\ }") if word.count("^") > 1 else word for word in words - ] - reg_name = " ".join(words).replace(" ", "\\;") - return reg_name - - # quantum register - fs = self._style["fs"] - for ii, reg in enumerate(self._qubit): - register = self._bit_locations[reg]["register"] - index = self._bit_locations[reg]["index"] - - if len(self._qubit) > 1: - if self._layout is None: - qubit_name = f"${{{register.name}}}_{{{index}}}$" - else: - if self._layout[index]: - virt_bit = self._layout[index] - try: - virt_reg = next( - reg for reg in self._layout.get_registers() if virt_bit in reg - ) - qubit_name = "${{{name}}}_{{{index}}} \\mapsto {{{physical}}}$".format( - name=virt_reg.name, - index=virt_reg[:].index(virt_bit), - physical=index, - ) - - except StopIteration: - qubit_name = "${{{name}}} \\mapsto {{{physical}}}$".format( - name=virt_bit, physical=index - ) - else: - qubit_name = f"${{{index}}}$" - else: - qubit_name = f"{register.name}" - qubit_name = _fix_double_script(qubit_name) + initial_qbit - text_width = self._get_text_width(qubit_name, fs) * 1.15 - if text_width > longest_reg_name_width: - longest_reg_name_width = text_width - pos = -ii - self._qubit_dict[ii] = { - "y": pos, - "reg_name": qubit_name, - "index": index, - "group": register, - } - self._n_lines += 1 - - # classical register - if self._clbit: - n_clbit = self._clbit.copy() - n_clbit.pop(0) - idx = 0 - y_off = -len(self._qubit) - for ii, (reg, nreg) in enumerate(itertools.zip_longest(self._clbit, n_clbit)): - pos = y_off - idx - register = self._bit_locations[reg]["register"] - index = self._bit_locations[reg]["index"] - - if self._cregbundle: - clbit_name = f"{register.name}" - clbit_name = _fix_double_script(clbit_name) + initial_cbit - text_width = self._get_text_width(register.name, fs) * 1.15 - if text_width > longest_reg_name_width: - longest_reg_name_width = text_width - self._clbit_dict[ii] = { - "y": pos, - "reg_name": clbit_name, - "index": index, - "group": register, - } - if not (not nreg or register != self._bit_locations[nreg]["register"]): - continue - else: - clbit_name = f"${register.name}_{{{index}}}$" - clbit_name = _fix_double_script(clbit_name) + initial_cbit - text_width = self._get_text_width(register.name, fs) * 1.15 - if text_width > longest_reg_name_width: - longest_reg_name_width = text_width - self._clbit_dict[ii] = { - "y": pos, - "reg_name": clbit_name, - "index": index, - "group": register, - } - self._n_lines += 1 - idx += 1 - - self._reg_long_text = longest_reg_name_width - self._x_offset = -1.2 + self._reg_long_text + def _line(self, xy0, xy1, lc=None, ls=None, zorder=PORDER_LINE): + """Draw a line from xy0 to xy1""" + x0, y0 = xy0 + x1, y1 = xy1 + linecolor = 
self._style["lc"] if lc is None else lc + linestyle = "solid" if ls is None else ls - def _draw_regs_sub(self, n_fold, feedline_l=False, feedline_r=False): - # quantum register - fs = self._style["fs"] - for qubit in self._qubit_dict.values(): - qubit_name = qubit["reg_name"] - y = qubit["y"] - n_fold * (self._n_lines + 1) - self._ax.text( - self._x_offset - 0.2, - y, - qubit_name, - ha="right", - va="center", - fontsize=1.25 * fs, - color=self._style["tc"], - clip_on=True, - zorder=PORDER_TEXT, + if linestyle == "doublet": + theta = np.arctan2(np.abs(x1 - x0), np.abs(y1 - y0)) + dx = 0.05 * WID * np.cos(theta) + dy = 0.05 * WID * np.sin(theta) + self._ax.plot( + [x0 + dx, x1 + dx], + [y0 + dy, y1 + dy], + color=linecolor, + linewidth=self._lwidth2, + linestyle="solid", + zorder=zorder, ) - self._line([self._x_offset, y], [self._xmax, y], zorder=PORDER_REGLINE) - - # classical register - this_clbit_dict = {} - for clbit in self._clbit_dict.values(): - clbit_name = clbit["reg_name"] - y = clbit["y"] - n_fold * (self._n_lines + 1) - if y not in this_clbit_dict.keys(): - this_clbit_dict[y] = {"val": 1, "reg_name": clbit_name} - else: - this_clbit_dict[y]["val"] += 1 - for y, this_clbit in this_clbit_dict.items(): - # cregbundle - if this_clbit["val"] > 1: - self._ax.plot( - [self._x_offset + 0.2, self._x_offset + 0.3], - [y - 0.1, y + 0.1], - color=self._style["cc"], - zorder=PORDER_LINE, - ) - self._ax.text( - self._x_offset + 0.1, - y + 0.1, - str(this_clbit["val"]), - ha="left", - va="bottom", - fontsize=0.8 * fs, - color=self._style["tc"], - clip_on=True, - zorder=PORDER_TEXT, - ) - self._ax.text( - self._x_offset - 0.2, - y, - this_clbit["reg_name"], - ha="right", - va="center", - fontsize=1.25 * fs, - color=self._style["tc"], - clip_on=True, - zorder=PORDER_TEXT, + self._ax.plot( + [x0 - dx, x1 - dx], + [y0 - dy, y1 - dy], + color=linecolor, + linewidth=self._lwidth2, + linestyle="solid", + zorder=zorder, ) - self._line( - [self._x_offset, y], - [self._xmax, y], - lc=self._style["cc"], - ls=self._style["cline"], - zorder=PORDER_REGLINE, + else: + self._ax.plot( + [x0, x1], + [y0, y1], + color=linecolor, + linewidth=self._lwidth2, + linestyle=linestyle, + zorder=zorder, ) - # lf vertical line at either end - if feedline_l or feedline_r: - xpos_l = self._x_offset - 0.01 - xpos_r = self._fold + self._x_offset + 0.1 - ypos1 = -n_fold * (self._n_lines + 1) - ypos2 = -(n_fold + 1) * (self._n_lines) - n_fold + 1 - if feedline_l: - self._ax.plot( - [xpos_l, xpos_l], - [ypos1, ypos2], - color=self._style["lc"], - linewidth=self._lwidth15, - zorder=PORDER_LINE, - ) - if feedline_r: - self._ax.plot( - [xpos_r, xpos_r], - [ypos1, ypos2], - color=self._style["lc"], - linewidth=self._lwidth15, - zorder=PORDER_LINE, - ) - - def _draw_ops(self, verbose=False): - _standard_1q_gates = [ - "x", - "y", - "z", - "id", - "h", - "r", - "s", - "sdg", - "t", - "tdg", - "rx", - "ry", - "rz", - "rxx", - "ryy", - "rzx", - "u1", - "u2", - "u3", - "u", - "swap", - "reset", - "sx", - "sxdg", - "p", - ] - - # generate coordinate manager - q_anchors = {} - for key, qubit in self._qubit_dict.items(): - q_anchors[key] = Anchor(reg_num=self._n_lines, yind=qubit["y"], fold=self._fold) - c_anchors = {} - for key, clbit in self._clbit_dict.items(): - c_anchors[key] = Anchor(reg_num=self._n_lines, yind=clbit["y"], fold=self._fold) - # - # draw the ops - # - prev_anc = -1 - fs = self._style["fs"] - sfs = self._style["sfs"] - for layer in self._nodes: - widest_box = 0.0 - self._gate_width = {} - # - # compute the layer_width 
for this layer - # - for node in layer: - op = node.op - self._gate_width[node] = WID - - if op._directive or op.name == "measure": - continue - - base_name = None if not hasattr(op, "base_gate") else op.base_gate.name - gate_text, ctrl_text, _ = get_gate_ctrl_text(op, "mpl", style=self._style) - - # if a standard_gate, no params, and no labels, layer_width is 1 - if not hasattr(op, "params") and ( - (op.name in _standard_1q_gates or base_name in _standard_1q_gates) - and gate_text in (op.name, base_name) - and ctrl_text is None - ): - continue - - # small increments at end of the 3 _get_text_width calls are for small - # spacing adjustments between gates - ctrl_width = self._get_text_width(ctrl_text, fontsize=sfs) - 0.05 - - # get param_width, but 0 for gates with array params - if ( - hasattr(op, "params") - and not any(isinstance(param, np.ndarray) for param in op.params) - and len(op.params) > 0 - ): - param = get_param_str(op, "mpl", ndigits=3) - if op.name == "initialize": - param = "[%s]" % param - raw_param_width = self._get_text_width(param, fontsize=sfs, param=True) - param_width = raw_param_width + 0.08 - else: - param_width = raw_param_width = 0.0 - - if op.name == "rzz" or base_name in ["u1", "p", "rzz"]: - if base_name == "u1": - tname = "U1" - elif base_name == "p": - tname = "P" - else: - tname = "ZZ" - raw_gate_width = ( - self._get_text_width(tname + " ()", fontsize=sfs) + raw_param_width - ) - gate_width = (raw_gate_width + 0.08) * 1.5 - else: - raw_gate_width = self._get_text_width(gate_text, fontsize=fs) - gate_width = raw_gate_width + 0.10 - # add .21 for the qubit numbers on the left of the multibit gates - if op.name not in _standard_1q_gates and base_name not in _standard_1q_gates: - gate_width += 0.21 - - box_width = max(gate_width, ctrl_width, param_width, WID) - if box_width > widest_box: - widest_box = box_width - self._gate_width[node] = max(raw_gate_width, raw_param_width) - - layer_width = int(widest_box) + 1 - this_anc = prev_anc + 1 - # - # draw the gates in this layer - # - for node in layer: - op = node.op - base_name = None if not hasattr(op, "base_gate") else op.base_gate.name - gate_text, ctrl_text, raw_gate_text = get_gate_ctrl_text( - op, "mpl", style=self._style - ) - fc, ec, gt, tc, sc, lc = self._get_colors(op, raw_gate_text) - - # get qubit index - q_idxs = [] - for qarg in node.qargs: - for index, reg in self._qubit_dict.items(): - if ( - reg["group"] == self._bit_locations[qarg]["register"] - and reg["index"] == self._bit_locations[qarg]["index"] - ): - q_idxs.append(index) - break - - # get clbit index - c_idxs = [] - for carg in node.cargs: - for index, reg in self._clbit_dict.items(): - if ( - reg["group"] == self._bit_locations[carg]["register"] - and reg["index"] == self._bit_locations[carg]["index"] - ): - c_idxs.append(index) - break - - # only add the gate to the anchors if it is going to be plotted. 
- # this prevents additional blank wires at the end of the line if - # the last instruction is a barrier type - if self._plot_barriers or not op._directive: - for ii in q_idxs: - q_anchors[ii].set_index(this_anc, layer_width) - # qubit coordinate - q_xy = [ - q_anchors[ii].plot_coord(this_anc, layer_width, self._x_offset) for ii in q_idxs - ] - # clbit coordinate - c_xy = [ - c_anchors[ii].plot_coord(this_anc, layer_width, self._x_offset) for ii in c_idxs - ] - # bottom and top point of qubit - qubit_b = min(q_xy, key=lambda xy: xy[1]) - qubit_t = max(q_xy, key=lambda xy: xy[1]) - - # update index based on the value from plotting - this_anc = q_anchors[q_idxs[0]].gate_anchor - - if verbose: - print(op) - - # load param - if ( - hasattr(op, "params") - and len(op.params) > 0 - and not any(isinstance(param, np.ndarray) for param in op.params) - ): - param = f"{get_param_str(op, 'mpl', ndigits=3)}" - else: - param = "" - - # conditional gate - if op.condition: - c_xy = [ - c_anchors[ii].plot_coord(this_anc, layer_width, self._x_offset) - for ii in self._clbit_dict - ] - mask = 0 - for index, cbit in enumerate(self._clbit): - if self._bit_locations[cbit]["register"] == op.condition[0]: - mask |= 1 << index - val = op.condition[1] - # cbit list to consider - fmt_c = f"{{:0{len(c_xy)}b}}" - cmask = list(fmt_c.format(mask))[::-1] - # value - fmt_v = f"{{:0{cmask.count('1')}b}}" - vlist = list(fmt_v.format(val)) - if not self._reverse_bits: - vlist = vlist[::-1] - - # plot conditionals - v_ind = 0 - xy_plot = [] - for xy, m in zip(c_xy, cmask): - if m == "1": - if xy not in xy_plot: - if vlist[v_ind] == "1" or self._cregbundle: - self._conditional(xy, istrue=True) - else: - self._conditional(xy, istrue=False) - xy_plot.append(xy) - v_ind += 1 - clbit_b = sorted(xy_plot, key=lambda xy: xy[1])[0] - xpos, ypos = clbit_b - self._ax.text( - xpos, - ypos - 0.3 * HIG, - hex(val), - ha="center", - va="top", - fontsize=sfs, - color=self._style["tc"], - clip_on=True, - zorder=PORDER_TEXT, - ) - self._line(qubit_t, clbit_b, lc=self._style["cc"], ls=self._style["cline"]) - # - # draw special gates - # - if op.name == "measure": - vv = self._clbit_dict[c_idxs[0]]["index"] - self._measure(node, q_xy[0], c_xy[0], vv, fc=fc, ec=ec, gt=gt, sc=sc) - - elif op._directive: - _barriers = {"coord": [], "group": []} - for index, qbit in enumerate(q_idxs): - q_group = self._qubit_dict[qbit]["group"] - if q_group not in _barriers["group"]: - _barriers["group"].append(q_group) - _barriers["coord"].append(q_xy[index]) - if self._plot_barriers: - self._barrier(_barriers) - - elif op.name == "initialize": - vec = f"$[{param.replace('$', '')}]$" - if len(q_xy) == 1: - self._gate( - node, q_xy[0], fc=fc, ec=ec, gt=gt, sc=sc, text=gate_text, subtext=vec - ) - else: - self._multiqubit_gate( - node, q_xy, fc=fc, ec=ec, gt=gt, sc=sc, text=gate_text, subtext=vec - ) - # - # draw single qubit gates - # - elif len(q_xy) == 1 and not node.cargs: - self._gate( - node, - q_xy[0], - fc=fc, - ec=ec, - gt=gt, - sc=sc, - text=gate_text, - subtext=str(param), - ) - # - # draw controlled and special gates - # - # cz and mcz gates - elif op.name != "z" and base_name == "z": - num_ctrl_qubits = op.num_ctrl_qubits - self._set_ctrl_bits( - op.ctrl_state, - num_ctrl_qubits, - q_xy, - ec=ec, - tc=tc, - text=ctrl_text, - qargs=node.qargs, - ) - self._ctrl_qubit(q_xy[-1], fc=ec, ec=ec, tc=tc) - self._line(qubit_b, qubit_t, lc=lc, zorder=PORDER_LINE + 1) - - # cu1, cp, rzz, and controlled rzz gates (sidetext gates) - elif op.name == "rzz" or base_name 
in ["u1", "p", "rzz"]: - num_ctrl_qubits = 0 if op.name == "rzz" else op.num_ctrl_qubits - if op.name != "rzz": - self._set_ctrl_bits( - op.ctrl_state, - num_ctrl_qubits, - q_xy, - ec=ec, - tc=tc, - text=ctrl_text, - qargs=node.qargs, - ) - self._ctrl_qubit(q_xy[num_ctrl_qubits], fc=ec, ec=ec, tc=tc) - if base_name not in ["u1", "p"]: - self._ctrl_qubit(q_xy[num_ctrl_qubits + 1], fc=ec, ec=ec, tc=tc) - if base_name == "u1": - if self._style["disptex"]["u1"].find("\\mathrm") >= 0: - stext = self._style["disptex"]["u1"] - else: - stext = f"$\\mathrm{{{self._style['disptex']['u1']}}}$" - elif base_name == "p": - stext = "P" - else: - stext = "ZZ" - self._sidetext(node, qubit_b, tc=tc, text=f"{stext} ({param})") - self._line(qubit_b, qubit_t, lc=lc) - - # swap gate - elif op.name == "swap": - self._swap(q_xy[0], color=lc) - self._swap(q_xy[1], color=lc) - self._line(qubit_b, qubit_t, lc=lc) - - # cswap gate - elif op.name != "swap" and base_name == "swap": - num_ctrl_qubits = op.num_ctrl_qubits - self._set_ctrl_bits( - op.ctrl_state, - num_ctrl_qubits, - q_xy, - ec=ec, - tc=tc, - text=ctrl_text, - qargs=node.qargs, - ) - self._swap(q_xy[num_ctrl_qubits], color=lc) - self._swap(q_xy[num_ctrl_qubits + 1], color=lc) - self._line(qubit_b, qubit_t, lc=lc) +class Anchor: + """Locate the anchors for the gates""" - # all other controlled gates - elif isinstance(op, ControlledGate): - num_ctrl_qubits = op.num_ctrl_qubits - num_qargs = len(q_xy) - num_ctrl_qubits - self._set_ctrl_bits( - op.ctrl_state, - num_ctrl_qubits, - q_xy, - ec=ec, - tc=tc, - text=ctrl_text, - qargs=node.qargs, - ) - self._line(qubit_b, qubit_t, lc=lc) - if num_qargs == 1 and base_name == "x": - tgt_color = self._style["dispcol"]["target"] - tgt = tgt_color if isinstance(tgt_color, str) else tgt_color[0] - self._x_tgt_qubit(q_xy[num_ctrl_qubits], ec=ec, ac=tgt) - elif num_qargs == 1: - self._gate( - node, - q_xy[num_ctrl_qubits], - fc=fc, - ec=ec, - gt=gt, - sc=sc, - text=gate_text, - subtext=f"{param}", - ) - else: - self._multiqubit_gate( - node, - q_xy[num_ctrl_qubits:], - fc=fc, - ec=ec, - gt=gt, - sc=sc, - text=gate_text, - subtext=f"{param}", - ) + def __init__(self, reg_num, yind, fold): + self._yind = yind + self._fold = fold + self._reg_num = reg_num + self._gate_placed = [] + self.nxt_anchor_idx = 0 + self.gate_anchor = 0 - # draw multi-qubit gate as final default - else: - self._multiqubit_gate( - node, - q_xy, - c_xy, - fc=fc, - ec=ec, - gt=gt, - sc=sc, - text=gate_text, - subtext=f"{param}", - ) + def plot_coord(self, index, gate_width, x_offset): + """Set the coord positions for an index""" + h_pos = index % self._fold + 1 + # check folding + if self._fold > 0: + if h_pos + (gate_width - 1) > self._fold: + index += self._fold - (h_pos - 1) + x_pos = index % self._fold + 0.5 * gate_width + 0.04 + y_pos = self._yind - (index // self._fold) * (self._reg_num + 1) + else: + x_pos = index + 0.5 * gate_width + 0.04 + y_pos = self._yind - # adjust the column if there have been barriers encountered, but not plotted - barrier_offset = 0 - if not self._plot_barriers: - # only adjust if everything in the layer wasn't plotted - barrier_offset = -1 if all(op._directive for node in layer) else 0 + # could have been updated, so need to store + self.gate_anchor = index + return x_pos + x_offset, y_pos - prev_anc = this_anc + layer_width + barrier_offset - 1 - # - # adjust window size and draw horizontal lines - # - anchors = [q_anchors[ii].get_index() for ii in self._qubit_dict] - max_anc = max(anchors) if anchors else 0 - n_fold = 
max(0, max_anc - 1) // self._fold if self._fold > 0 else 0 - - # window size - if max_anc > self._fold > 0: - self._xmax = self._fold + 1 + self._x_offset - 0.9 - self._ymax = (n_fold + 1) * (self._n_lines + 1) - 1 + def set_index(self, index, layer_width): + """Set the index for a gate""" + if self._fold < 2: + _index = index else: - x_incr = 0.4 if not self._nodes else 0.9 - self._xmax = max_anc + 1 + self._x_offset - x_incr - self._ymax = self._n_lines - - # add horizontal lines - for ii in range(n_fold + 1): - feedline_r = n_fold > 0 and n_fold > ii - feedline_l = ii > 0 - self._draw_regs_sub(ii, feedline_l, feedline_r) + h_pos = index % self._fold + 1 + if h_pos + (layer_width - 1) > self._fold: + _index = index + self._fold - (h_pos - 1) + 1 + else: + _index = index + for ii in range(layer_width): + idx = _index + ii + if idx not in self._gate_placed: + self._gate_placed.append(idx) + self.nxt_anchor_idx = idx + 1 - # draw anchor index number - if self._style["index"]: - for ii in range(max_anc): - if self._fold > 0: - x_coord = ii % self._fold + self._reg_long_text - 0.67 - y_coord = -(ii // self._fold) * (self._n_lines + 1) + 0.7 - else: - x_coord = ii + self._reg_long_text - 0.67 - y_coord = 0.7 - self._ax.text( - x_coord, - y_coord, - str(ii + 1), - ha="center", - va="center", - fontsize=sfs, - color=self._style["tc"], - clip_on=True, - zorder=PORDER_TEXT, - ) + def get_index(self): + """Getter for the index""" + if self._gate_placed: + return self._gate_placed[-1] + 1 + return 0 class HasMatplotlibWrapper: diff --git a/qiskit/visualization/text.py b/qiskit/visualization/text.py --- a/qiskit/visualization/text.py +++ b/qiskit/visualization/text.py @@ -783,7 +783,6 @@ def wire_names(self, with_initial_state=False): label.format( name=self.bit_locations[bit]["register"].name, index=self.bit_locations[bit]["index"], - physical="", ) ) else: diff --git a/qiskit/visualization/utils.py b/qiskit/visualization/utils.py --- a/qiskit/visualization/utils.py +++ b/qiskit/visualization/utils.py @@ -271,11 +271,6 @@ def _get_gate_span(qubits, node, reverse_bits): else: return qubits[min_index : len(qubits)] - if node.cargs: - return qubits[min_index:] - if node.op.condition: - return qubits[min_index:] - return qubits[min_index : max_index + 1] </patch>
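The `Anchor.plot_coord` method added in the patch above decides where a gate column lands once a circuit row exceeds the fold width. As a quick illustration of that arithmetic only, here is a standalone sketch in plain Python (no Qiskit imports; the free-standing function arguments `yind`, `reg_num` and `fold` stand in for the `_yind`, `_reg_num` and `_fold` instance attributes, and the sample values are made up for illustration):

```python
# Standalone sketch of the folding arithmetic from Anchor.plot_coord above.
def plot_coord(index, gate_width, x_offset, yind, reg_num, fold):
    h_pos = index % fold + 1
    if fold > 0:
        # a gate that would overflow the current row is pushed to the next fold
        if h_pos + (gate_width - 1) > fold:
            index += fold - (h_pos - 1)
        x_pos = index % fold + 0.5 * gate_width + 0.04
        y_pos = yind - (index // fold) * (reg_num + 1)
    else:
        # no folding: the row just keeps growing to the right
        x_pos = index + 0.5 * gate_width + 0.04
        y_pos = yind
    return x_pos + x_offset, y_pos

# index 24 still fits on the first row; index 25 wraps to a new row of wires,
# dropping the gate down by reg_num + 1.
print(plot_coord(24, 1, 0.0, yind=0, reg_num=3, fold=25))   # -> (24.54, 0.0)
print(plot_coord(25, 1, 0.0, yind=0, reg_num=3, fold=25))   # -> (0.54, -4.0)
```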
[]
[]
gitpython-developers__GitPython-1224
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> GitCommand missing stderr With GitPython 3.1.14 `GitCommandError`, for failed clones would pass on the captured stderr output into its `stderr` attribute. With 3.1.15 this contains the empty string instead. This was caught by this unit test: https://github.com/tomtom-international/hopic/blob/9a35520388f8109ccff6a89407bc2429ed0d0557/hopic/test/test_checkout.py#L84-L103 A reduced test that reduces this to _only_ testing GitPython (instead of the full integration with hopic) looks like the code below. That exhibits the exact same problem: ```python import git import pytest def test_checkout_in_non_empty_dir(tmp_path): orig_repo = tmp_path / "orig" with git.Repo.init(orig_repo, expand_vars=False) as repo: repo.index.commit(message='Initial commit', **_commitargs) non_empty_dir = tmp_path / 'non-empty-clone' non_empty_dir.mkdir(parents=True) garbage_file = non_empty_dir / 'not-empty' garbage_file.write_text('Garbage!') # Verify that the clone fails complaining about the target directory not being empty/non-existent with pytest.raises(git.GitCommandError, match=r'(?is).*\bfatal:\s+destination\s+path\b.*\bexists\b.*\bnot\b.*\bempty\s+directory\b'): git.Repo.clone_from(orig_repo, non_empty_dir) assert garbage_file.exists() ``` With 3.1.14 this passes. With the `pytest.raises` expectation removed this output is displayed: ```console ___________________________________________________________________ test_checkout_in_non_empty_dir ____________________________________________________________________ tmp_path = PosixPath('/tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0') def test_checkout_in_non_empty_dir(tmp_path): orig_repo = tmp_path / "orig" with git.Repo.init(orig_repo, expand_vars=False) as repo: repo.index.commit(message='Initial commit', **_commitargs) non_empty_dir = tmp_path / 'non-empty-clone' non_empty_dir.mkdir(parents=True) garbage_file = non_empty_dir / 'not-empty' garbage_file.write_text('Garbage!') # Verify that the clone fails complaining about the target directory not being empty/non-existent > git.Repo.clone_from(orig_repo, non_empty_dir) hopic/test/test_checkout.py:95: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ v3.8/lib/python3.8/site-packages/git/repo/base.py:1032: in clone_from return cls._clone(git, url, to_path, GitCmdObjectDB, progress, multi_options, **kwargs) v3.8/lib/python3.8/site-packages/git/repo/base.py:973: in _clone finalize_process(proc, stderr=stderr) v3.8/lib/python3.8/site-packages/git/util.py:329: in finalize_process proc.wait(**kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <git.cmd.Git.AutoInterrupt object at 0x7f6a5883f9a0> stderr = b"fatal: destination path '/tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0/non-empty-clone' already exists and is not an empty directory.\n" def wait(self, stderr=b''): # TODO: Bad choice to mimic `proc.wait()` but with different args. """Wait for the process and return its status code. :param stderr: Previously read value of stderr, in case stderr is already closed. :warn: may deadlock if output or error pipes are used and not handled separately. 
:raise GitCommandError: if the return status is not 0""" if stderr is None: stderr = b'' stderr = force_bytes(data=stderr, encoding='utf-8') status = self.proc.wait() def read_all_from_possibly_closed_stream(stream): try: return stderr + force_bytes(stream.read()) except ValueError: return stderr or b'' if status != 0: errstr = read_all_from_possibly_closed_stream(self.proc.stderr) log.debug('AutoInterrupt wait stderr: %r' % (errstr,)) > raise GitCommandError(self.args, status, errstr) E git.exc.GitCommandError: Cmd('git') failed due to: exit code(128) E cmdline: git clone -v /tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0/orig /tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0/non-empty-clone E stderr: 'fatal: destination path '/tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0/non-empty-clone' already exists and is not an empty directory. E ' v3.8/lib/python3.8/site-packages/git/cmd.py:408: GitCommandError ``` With 3.1.15 this output is produced instead (notice the absence of the "stderr: ..." line): ```console ___________________________________________________________________ test_checkout_in_non_empty_dir ____________________________________________________________________ tmp_path = PosixPath('/tmp/pytest-of-vanschig/pytest-336/test_checkout_in_non_empty_dir0') def test_checkout_in_non_empty_dir(tmp_path): orig_repo = tmp_path / "orig" with git.Repo.init(orig_repo, expand_vars=False) as repo: repo.index.commit(message='Initial commit', **_commitargs) non_empty_dir = tmp_path / 'non-empty-clone' non_empty_dir.mkdir(parents=True) garbage_file = non_empty_dir / 'not-empty' garbage_file.write_text('Garbage!') # Verify that the clone fails complaining about the target directory not being empty/non-existent > git.Repo.clone_from(orig_repo, non_empty_dir) hopic/test/test_checkout.py:95: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ v3.8/lib/python3.8/site-packages/git/repo/base.py:1087: in clone_from return cls._clone(git, url, to_path, GitCmdObjectDB, progress, multi_options, **kwargs) v3.8/lib/python3.8/site-packages/git/repo/base.py:1017: in _clone handle_process_output(proc, None, progress_checked.new_message_handler(), v3.8/lib/python3.8/site-packages/git/cmd.py:116: in handle_process_output return finalizer(process) v3.8/lib/python3.8/site-packages/git/util.py:354: in finalize_process proc.wait(**kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <git.cmd.Git.AutoInterrupt object at 0x7fcc8fd95820>, stderr = b'' def wait(self, stderr=b''): # TODO: Bad choice to mimic `proc.wait()` but with different args. """Wait for the process and return its status code. :param stderr: Previously read value of stderr, in case stderr is already closed. :warn: may deadlock if output or error pipes are used and not handled separately. 
:raise GitCommandError: if the return status is not 0""" if stderr is None: stderr = b'' stderr = force_bytes(data=stderr, encoding='utf-8') status = self.proc.wait() def read_all_from_possibly_closed_stream(stream): try: return stderr + force_bytes(stream.read()) except ValueError: return stderr or b'' if status != 0: errstr = read_all_from_possibly_closed_stream(self.proc.stderr) log.debug('AutoInterrupt wait stderr: %r' % (errstr,)) > raise GitCommandError(remove_password_if_present(self.args), status, errstr) E git.exc.GitCommandError: Cmd('git') failed due to: exit code(128) E cmdline: git clone -v --progress /tmp/pytest-of-vanschig/pytest-336/test_checkout_in_non_empty_dir0/orig /tmp/pytest-of-vanschig/pytest-336/test_checkout_in_non_empty_dir0/non-empty-clone v3.8/lib/python3.8/site-packages/git/cmd.py:409: GitCommandError ``` When catching the exception and dumping it's `stderr` attribute it turns out to be an empty string (`''`). This seems related to #1220. But it's presentation is different as that issue still reports having content, just wrongly encoded. </issue> <code> [start of README.md] 1 ## [Gitoxide](https://github.com/Byron/gitoxide): A peek into the future… 2 3 I started working on GitPython in 2009, back in the days when Python was 'my thing' and I had great plans with it. 4 Of course, back in the days, I didn't really know what I was doing and this shows in many places. Somewhat similar to 5 Python this happens to be 'good enough', but at the same time is deeply flawed and broken beyond repair. 6 7 By now, GitPython is widely used and I am sure there is a good reason for that, it's something to be proud of and happy about. 8 The community is maintaining the software and is keeping it relevant for which I am absolutely grateful. For the time to come I am happy to continue maintaining GitPython, remaining hopeful that one day it won't be needed anymore. 9 10 More than 15 years after my first meeting with 'git' I am still in excited about it, and am happy to finally have the tools and 11 probably the skills to scratch that itch of mine: implement `git` in a way that makes tool creation a piece of cake for most. 12 13 If you like the idea and want to learn more, please head over to [gitoxide](https://github.com/Byron/gitoxide), an 14 implementation of 'git' in [Rust](https://www.rust-lang.org). 15 16 ## GitPython 17 18 GitPython is a python library used to interact with git repositories, high-level like git-porcelain, 19 or low-level like git-plumbing. 20 21 It provides abstractions of git objects for easy access of repository data, and additionally 22 allows you to access the git repository more directly using either a pure python implementation, 23 or the faster, but more resource intensive *git command* implementation. 24 25 The object database implementation is optimized for handling large quantities of objects and large datasets, 26 which is achieved by using low-level structures and data streaming. 27 28 29 ### REQUIREMENTS 30 31 GitPython needs the `git` executable to be installed on the system and available 32 in your `PATH` for most operations. 33 If it is not in your `PATH`, you can help GitPython find it by setting 34 the `GIT_PYTHON_GIT_EXECUTABLE=<path/to/git>` environment variable. 35 36 * Git (1.7.x or newer) 37 * Python >= 3.5 38 39 The list of dependencies are listed in `./requirements.txt` and `./test-requirements.txt`. 40 The installer takes care of installing them for you. 
41 42 ### INSTALL 43 44 If you have downloaded the source code: 45 46 python setup.py install 47 48 or if you want to obtain a copy from the Pypi repository: 49 50 pip install GitPython 51 52 Both commands will install the required package dependencies. 53 54 A distribution package can be obtained for manual installation at: 55 56 http://pypi.python.org/pypi/GitPython 57 58 If you like to clone from source, you can do it like so: 59 60 ```bash 61 git clone https://github.com/gitpython-developers/GitPython 62 git submodule update --init --recursive 63 ./init-tests-after-clone.sh 64 ``` 65 66 ### Limitations 67 68 #### Leakage of System Resources 69 70 GitPython is not suited for long-running processes (like daemons) as it tends to 71 leak system resources. It was written in a time where destructors (as implemented 72 in the `__del__` method) still ran deterministically. 73 74 In case you still want to use it in such a context, you will want to search the 75 codebase for `__del__` implementations and call these yourself when you see fit. 76 77 Another way assure proper cleanup of resources is to factor out GitPython into a 78 separate process which can be dropped periodically. 79 80 #### Windows support 81 82 See [Issue #525](https://github.com/gitpython-developers/GitPython/issues/525). 83 84 ### RUNNING TESTS 85 86 *Important*: Right after cloning this repository, please be sure to have executed 87 the `./init-tests-after-clone.sh` script in the repository root. Otherwise 88 you will encounter test failures. 89 90 On *Windows*, make sure you have `git-daemon` in your PATH. For MINGW-git, the `git-daemon.exe` 91 exists in `Git\mingw64\libexec\git-core\`; CYGWIN has no daemon, but should get along fine 92 with MINGW's. 93 94 The easiest way to run tests is by using [tox](https://pypi.python.org/pypi/tox) 95 a wrapper around virtualenv. It will take care of setting up environments with the proper 96 dependencies installed and execute test commands. To install it simply: 97 98 pip install tox 99 100 Then run: 101 102 tox 103 104 105 For more fine-grained control, you can use `unittest`. 106 107 ### Contributions 108 109 Please have a look at the [contributions file][contributing]. 110 111 ### INFRASTRUCTURE 112 113 * [User Documentation](http://gitpython.readthedocs.org) 114 * [Questions and Answers](http://stackexchange.com/filters/167317/gitpython) 115 * Please post on stackoverflow and use the `gitpython` tag 116 * [Issue Tracker](https://github.com/gitpython-developers/GitPython/issues) 117 * Post reproducible bugs and feature requests as a new issue. 118 Please be sure to provide the following information if posting bugs: 119 * GitPython version (e.g. `import git; git.__version__`) 120 * Python version (e.g. `python --version`) 121 * The encountered stack-trace, if applicable 122 * Enough information to allow reproducing the issue 123 124 ### How to make a new release 125 126 * Update/verify the **version** in the `VERSION` file 127 * Update/verify that the `doc/source/changes.rst` changelog file was updated 128 * Commit everything 129 * Run `git tag -s <version>` to tag the version in Git 130 * Run `make release` 131 * Close the milestone mentioned in the _changelog_ and create a new one. _Do not reuse milestones by renaming them_. 132 * set the upcoming version in the `VERSION` file, usually be 133 incrementing the patch level, and possibly by appending `-dev`. Probably you 134 want to `git push` once more. 
135 136 ### How to verify a release 137 138 Please only use releases from `pypi` as you can verify the respective source 139 tarballs. 140 141 This script shows how to verify the tarball was indeed created by the authors of 142 this project: 143 144 ``` 145 curl https://files.pythonhosted.org/packages/09/bc/ae32e07e89cc25b9e5c793d19a1e5454d30a8e37d95040991160f942519e/GitPython-3.1.8-py3-none-any.whl > gitpython.whl 146 curl https://files.pythonhosted.org/packages/09/bc/ae32e07e89cc25b9e5c793d19a1e5454d30a8e37d95040991160f942519e/GitPython-3.1.8-py3-none-any.whl.asc > gitpython-signature.asc 147 gpg --verify gitpython-signature.asc gitpython.whl 148 ``` 149 150 which outputs 151 152 ``` 153 gpg: Signature made Fr 4 Sep 10:04:50 2020 CST 154 gpg: using RSA key 27C50E7F590947D7273A741E85194C08421980C9 155 gpg: Good signature from "Sebastian Thiel (YubiKey USB-C) <[email protected]>" [ultimate] 156 gpg: aka "Sebastian Thiel (In Rust I trust) <[email protected]>" [ultimate] 157 ``` 158 159 You can verify that the keyid indeed matches the release-signature key provided in this 160 repository by looking at the keys details: 161 162 ``` 163 gpg --list-packets ./release-verification-key.asc 164 ``` 165 166 You can verify that the commit adding it was also signed by it using: 167 168 ``` 169 git show --show-signature ./release-verification-key.asc 170 ``` 171 172 If you would like to trust it permanently, you can import and sign it: 173 174 ``` 175 gpg --import ./release-verification-key.asc 176 gpg --edit-key 4C08421980C9 177 178 > sign 179 > save 180 ``` 181 182 ### Projects using GitPython 183 184 * [PyDriller](https://github.com/ishepard/pydriller) 185 * [Kivy Designer](https://github.com/kivy/kivy-designer) 186 * [Prowl](https://github.com/nettitude/Prowl) 187 * [Python Taint](https://github.com/python-security/pyt) 188 * [Buster](https://github.com/axitkhurana/buster) 189 * [git-ftp](https://github.com/ezyang/git-ftp) 190 * [Git-Pandas](https://github.com/wdm0006/git-pandas) 191 * [PyGitUp](https://github.com/msiemens/PyGitUp) 192 * [PyJFuzz](https://github.com/mseclab/PyJFuzz) 193 * [Loki](https://github.com/Neo23x0/Loki) 194 * [Omniwallet](https://github.com/OmniLayer/omniwallet) 195 * [GitViper](https://github.com/BeayemX/GitViper) 196 * [Git Gud](https://github.com/bthayer2365/git-gud) 197 198 ### LICENSE 199 200 New BSD License. See the LICENSE file. 201 202 ### DEVELOPMENT STATUS 203 204 ![Python package](https://github.com/gitpython-developers/GitPython/workflows/Python%20package/badge.svg) 205 [![Documentation Status](https://readthedocs.org/projects/gitpython/badge/?version=stable)](https://readthedocs.org/projects/gitpython/?badge=stable) 206 [![Packaging status](https://repology.org/badge/tiny-repos/python:gitpython.svg)](https://repology.org/metapackage/python:gitpython/versions) 207 208 This project is in **maintenance mode**, which means that 209 210 * …there will be no feature development, unless these are contributed 211 * …there will be no bug fixes, unless they are relevant to the safety of users, or contributed 212 * …issues will be responded to with waiting times of up to a month 213 214 The project is open to contributions of all kinds, as well as new maintainers. 
215 216 [contributing]: https://github.com/gitpython-developers/GitPython/blob/master/CONTRIBUTING.md 217 [end of README.md] [start of git/exc.py] 1 # exc.py 2 # Copyright (C) 2008, 2009 Michael Trier ([email protected]) and contributors 3 # 4 # This module is part of GitPython and is released under 5 # the BSD License: http://www.opensource.org/licenses/bsd-license.php 6 """ Module containing all exceptions thrown throughout the git package, """ 7 8 from gitdb.exc import * # NOQA @UnusedWildImport skipcq: PYL-W0401, PYL-W0614 9 from git.compat import safe_decode 10 11 # typing ---------------------------------------------------- 12 13 from typing import IO, List, Optional, Tuple, Union, TYPE_CHECKING 14 from git.types import PathLike 15 16 if TYPE_CHECKING: 17 from git.repo.base import Repo 18 19 # ------------------------------------------------------------------ 20 21 22 class GitError(Exception): 23 """ Base class for all package exceptions """ 24 25 26 class InvalidGitRepositoryError(GitError): 27 """ Thrown if the given repository appears to have an invalid format. """ 28 29 30 class WorkTreeRepositoryUnsupported(InvalidGitRepositoryError): 31 """ Thrown to indicate we can't handle work tree repositories """ 32 33 34 class NoSuchPathError(GitError, OSError): 35 """ Thrown if a path could not be access by the system. """ 36 37 38 class CommandError(GitError): 39 """Base class for exceptions thrown at every stage of `Popen()` execution. 40 41 :param command: 42 A non-empty list of argv comprising the command-line. 43 """ 44 45 #: A unicode print-format with 2 `%s for `<cmdline>` and the rest, 46 #: e.g. 47 #: "'%s' failed%s" 48 _msg = "Cmd('%s') failed%s" 49 50 def __init__(self, command: Union[List[str], Tuple[str, ...], str], 51 status: Union[str, None, Exception] = None, 52 stderr: Optional[IO[str]] = None, stdout: Optional[IO[str]] = None) -> None: 53 if not isinstance(command, (tuple, list)): 54 command = command.split() 55 self.command = command 56 self.status = status 57 if status: 58 if isinstance(status, Exception): 59 status = "%s('%s')" % (type(status).__name__, safe_decode(str(status))) 60 else: 61 try: 62 status = 'exit code(%s)' % int(status) 63 except (ValueError, TypeError): 64 s = safe_decode(str(status)) 65 status = "'%s'" % s if isinstance(status, str) else s 66 67 self._cmd = safe_decode(command[0]) 68 self._cmdline = ' '.join(safe_decode(i) for i in command) 69 self._cause = status and " due to: %s" % status or "!" 70 stdout_decode = safe_decode(stdout) 71 stderr_decode = safe_decode(stderr) 72 self.stdout = stdout_decode and "\n stdout: '%s'" % stdout_decode or '' 73 self.stderr = stderr_decode and "\n stderr: '%s'" % stderr_decode or '' 74 75 def __str__(self) -> str: 76 return (self._msg + "\n cmdline: %s%s%s") % ( 77 self._cmd, self._cause, self._cmdline, self.stdout, self.stderr) 78 79 80 class GitCommandNotFound(CommandError): 81 """Thrown if we cannot find the `git` executable in the PATH or at the path given by 82 the GIT_PYTHON_GIT_EXECUTABLE environment variable""" 83 84 def __init__(self, command: Union[List[str], Tuple[str], str], cause: Union[str, Exception]) -> None: 85 super(GitCommandNotFound, self).__init__(command, cause) 86 self._msg = "Cmd('%s') not found%s" 87 88 89 class GitCommandError(CommandError): 90 """ Thrown if execution of the git command fails with non-zero status code. 
""" 91 92 def __init__(self, command: Union[List[str], Tuple[str, ...], str], 93 status: Union[str, None, Exception] = None, 94 stderr: Optional[IO[str]] = None, 95 stdout: Optional[IO[str]] = None, 96 ) -> None: 97 super(GitCommandError, self).__init__(command, status, stderr, stdout) 98 99 100 class CheckoutError(GitError): 101 """Thrown if a file could not be checked out from the index as it contained 102 changes. 103 104 The .failed_files attribute contains a list of relative paths that failed 105 to be checked out as they contained changes that did not exist in the index. 106 107 The .failed_reasons attribute contains a string informing about the actual 108 cause of the issue. 109 110 The .valid_files attribute contains a list of relative paths to files that 111 were checked out successfully and hence match the version stored in the 112 index""" 113 114 def __init__(self, message: str, failed_files: List[PathLike], valid_files: List[PathLike], 115 failed_reasons: List[str]) -> None: 116 117 Exception.__init__(self, message) 118 self.failed_files = failed_files 119 self.failed_reasons = failed_reasons 120 self.valid_files = valid_files 121 122 def __str__(self) -> str: 123 return Exception.__str__(self) + ":%s" % self.failed_files 124 125 126 class CacheError(GitError): 127 128 """Base for all errors related to the git index, which is called cache internally""" 129 130 131 class UnmergedEntriesError(CacheError): 132 """Thrown if an operation cannot proceed as there are still unmerged 133 entries in the cache""" 134 135 136 class HookExecutionError(CommandError): 137 """Thrown if a hook exits with a non-zero exit code. It provides access to the exit code and the string returned 138 via standard output""" 139 140 def __init__(self, command: Union[List[str], Tuple[str, ...], str], status: Optional[str], 141 stderr: Optional[IO[str]] = None, stdout: Optional[IO[str]] = None) -> None: 142 super(HookExecutionError, self).__init__(command, status, stderr, stdout) 143 self._msg = "Hook('%s') failed%s" 144 145 146 class RepositoryDirtyError(GitError): 147 """Thrown whenever an operation on a repository fails as it has uncommitted changes that would be overwritten""" 148 149 def __init__(self, repo: 'Repo', message: str) -> None: 150 self.repo = repo 151 self.message = message 152 153 def __str__(self) -> str: 154 return "Operation cannot be performed on %r: %s" % (self.repo, self.message) 155 [end of git/exc.py] [start of git/index/fun.py] 1 # Contains standalone functions to accompany the index implementation and make it 2 # more versatile 3 # NOTE: Autodoc hates it if this is a docstring 4 from io import BytesIO 5 import os 6 from stat import ( 7 S_IFDIR, 8 S_IFLNK, 9 S_ISLNK, 10 S_ISDIR, 11 S_IFMT, 12 S_IFREG, 13 ) 14 import subprocess 15 16 from git.cmd import PROC_CREATIONFLAGS, handle_process_output 17 from git.compat import ( 18 defenc, 19 force_text, 20 force_bytes, 21 is_posix, 22 safe_decode, 23 ) 24 from git.exc import ( 25 UnmergedEntriesError, 26 HookExecutionError 27 ) 28 from git.objects.fun import ( 29 tree_to_stream, 30 traverse_tree_recursive, 31 traverse_trees_recursive 32 ) 33 from git.util import IndexFileSHA1Writer, finalize_process 34 from gitdb.base import IStream 35 from gitdb.typ import str_tree_type 36 37 import os.path as osp 38 39 from .typ import ( 40 BaseIndexEntry, 41 IndexEntry, 42 CE_NAMEMASK, 43 CE_STAGESHIFT 44 ) 45 from .util import ( 46 pack, 47 unpack 48 ) 49 50 51 S_IFGITLINK = S_IFLNK | S_IFDIR # a submodule 52 CE_NAMEMASK_INV = ~CE_NAMEMASK 53 54 
__all__ = ('write_cache', 'read_cache', 'write_tree_from_cache', 'entry_key', 55 'stat_mode_to_index_mode', 'S_IFGITLINK', 'run_commit_hook', 'hook_path') 56 57 58 def hook_path(name, git_dir): 59 """:return: path to the given named hook in the given git repository directory""" 60 return osp.join(git_dir, 'hooks', name) 61 62 63 def run_commit_hook(name, index, *args): 64 """Run the commit hook of the given name. Silently ignores hooks that do not exist. 65 :param name: name of hook, like 'pre-commit' 66 :param index: IndexFile instance 67 :param args: arguments passed to hook file 68 :raises HookExecutionError: """ 69 hp = hook_path(name, index.repo.git_dir) 70 if not os.access(hp, os.X_OK): 71 return 72 73 env = os.environ.copy() 74 env['GIT_INDEX_FILE'] = safe_decode(index.path) 75 env['GIT_EDITOR'] = ':' 76 try: 77 cmd = subprocess.Popen([hp] + list(args), 78 env=env, 79 stdout=subprocess.PIPE, 80 stderr=subprocess.PIPE, 81 cwd=index.repo.working_dir, 82 close_fds=is_posix, 83 creationflags=PROC_CREATIONFLAGS,) 84 except Exception as ex: 85 raise HookExecutionError(hp, ex) from ex 86 else: 87 stdout = [] 88 stderr = [] 89 handle_process_output(cmd, stdout.append, stderr.append, finalize_process) 90 stdout = ''.join(stdout) 91 stderr = ''.join(stderr) 92 if cmd.returncode != 0: 93 stdout = force_text(stdout, defenc) 94 stderr = force_text(stderr, defenc) 95 raise HookExecutionError(hp, cmd.returncode, stderr, stdout) 96 # end handle return code 97 98 99 def stat_mode_to_index_mode(mode): 100 """Convert the given mode from a stat call to the corresponding index mode 101 and return it""" 102 if S_ISLNK(mode): # symlinks 103 return S_IFLNK 104 if S_ISDIR(mode) or S_IFMT(mode) == S_IFGITLINK: # submodules 105 return S_IFGITLINK 106 return S_IFREG | 0o644 | (mode & 0o111) # blobs with or without executable bit 107 108 109 def write_cache(entries, stream, extension_data=None, ShaStreamCls=IndexFileSHA1Writer): 110 """Write the cache represented by entries to a stream 111 112 :param entries: **sorted** list of entries 113 :param stream: stream to wrap into the AdapterStreamCls - it is used for 114 final output. 115 116 :param ShaStreamCls: Type to use when writing to the stream. 
It produces a sha 117 while writing to it, before the data is passed on to the wrapped stream 118 119 :param extension_data: any kind of data to write as a trailer, it must begin 120 a 4 byte identifier, followed by its size ( 4 bytes )""" 121 # wrap the stream into a compatible writer 122 stream = ShaStreamCls(stream) 123 124 tell = stream.tell 125 write = stream.write 126 127 # header 128 version = 2 129 write(b"DIRC") 130 write(pack(">LL", version, len(entries))) 131 132 # body 133 for entry in entries: 134 beginoffset = tell() 135 write(entry[4]) # ctime 136 write(entry[5]) # mtime 137 path = entry[3] 138 path = force_bytes(path, encoding=defenc) 139 plen = len(path) & CE_NAMEMASK # path length 140 assert plen == len(path), "Path %s too long to fit into index" % entry[3] 141 flags = plen | (entry[2] & CE_NAMEMASK_INV) # clear possible previous values 142 write(pack(">LLLLLL20sH", entry[6], entry[7], entry[0], 143 entry[8], entry[9], entry[10], entry[1], flags)) 144 write(path) 145 real_size = ((tell() - beginoffset + 8) & ~7) 146 write(b"\0" * ((beginoffset + real_size) - tell())) 147 # END for each entry 148 149 # write previously cached extensions data 150 if extension_data is not None: 151 stream.write(extension_data) 152 153 # write the sha over the content 154 stream.write_sha() 155 156 157 def read_header(stream): 158 """Return tuple(version_long, num_entries) from the given stream""" 159 type_id = stream.read(4) 160 if type_id != b"DIRC": 161 raise AssertionError("Invalid index file header: %r" % type_id) 162 version, num_entries = unpack(">LL", stream.read(4 * 2)) 163 164 # TODO: handle version 3: extended data, see read-cache.c 165 assert version in (1, 2) 166 return version, num_entries 167 168 169 def entry_key(*entry): 170 """:return: Key suitable to be used for the index.entries dictionary 171 :param entry: One instance of type BaseIndexEntry or the path and the stage""" 172 if len(entry) == 1: 173 return (entry[0].path, entry[0].stage) 174 return tuple(entry) 175 # END handle entry 176 177 178 def read_cache(stream): 179 """Read a cache file from the given stream 180 :return: tuple(version, entries_dict, extension_data, content_sha) 181 * version is the integer version number 182 * entries dict is a dictionary which maps IndexEntry instances to a path at a stage 183 * extension_data is '' or 4 bytes of type + 4 bytes of size + size bytes 184 * content_sha is a 20 byte sha on all cache file contents""" 185 version, num_entries = read_header(stream) 186 count = 0 187 entries = {} 188 189 read = stream.read 190 tell = stream.tell 191 while count < num_entries: 192 beginoffset = tell() 193 ctime = unpack(">8s", read(8))[0] 194 mtime = unpack(">8s", read(8))[0] 195 (dev, ino, mode, uid, gid, size, sha, flags) = \ 196 unpack(">LLLLLL20sH", read(20 + 4 * 6 + 2)) 197 path_size = flags & CE_NAMEMASK 198 path = read(path_size).decode(defenc) 199 200 real_size = ((tell() - beginoffset + 8) & ~7) 201 read((beginoffset + real_size) - tell()) 202 entry = IndexEntry((mode, sha, flags, path, ctime, mtime, dev, ino, uid, gid, size)) 203 # entry_key would be the method to use, but we safe the effort 204 entries[(path, entry.stage)] = entry 205 count += 1 206 # END for each entry 207 208 # the footer contains extension data and a sha on the content so far 209 # Keep the extension footer,and verify we have a sha in the end 210 # Extension data format is: 211 # 4 bytes ID 212 # 4 bytes length of chunk 213 # repeated 0 - N times 214 extension_data = stream.read(~0) 215 assert 
len(extension_data) > 19, "Index Footer was not at least a sha on content as it was only %i bytes in size"\ 216 % len(extension_data) 217 218 content_sha = extension_data[-20:] 219 220 # truncate the sha in the end as we will dynamically create it anyway 221 extension_data = extension_data[:-20] 222 223 return (version, entries, extension_data, content_sha) 224 225 226 def write_tree_from_cache(entries, odb, sl, si=0): 227 """Create a tree from the given sorted list of entries and put the respective 228 trees into the given object database 229 230 :param entries: **sorted** list of IndexEntries 231 :param odb: object database to store the trees in 232 :param si: start index at which we should start creating subtrees 233 :param sl: slice indicating the range we should process on the entries list 234 :return: tuple(binsha, list(tree_entry, ...)) a tuple of a sha and a list of 235 tree entries being a tuple of hexsha, mode, name""" 236 tree_items = [] 237 tree_items_append = tree_items.append 238 ci = sl.start 239 end = sl.stop 240 while ci < end: 241 entry = entries[ci] 242 if entry.stage != 0: 243 raise UnmergedEntriesError(entry) 244 # END abort on unmerged 245 ci += 1 246 rbound = entry.path.find('/', si) 247 if rbound == -1: 248 # its not a tree 249 tree_items_append((entry.binsha, entry.mode, entry.path[si:])) 250 else: 251 # find common base range 252 base = entry.path[si:rbound] 253 xi = ci 254 while xi < end: 255 oentry = entries[xi] 256 orbound = oentry.path.find('/', si) 257 if orbound == -1 or oentry.path[si:orbound] != base: 258 break 259 # END abort on base mismatch 260 xi += 1 261 # END find common base 262 263 # enter recursion 264 # ci - 1 as we want to count our current item as well 265 sha, _tree_entry_list = write_tree_from_cache(entries, odb, slice(ci - 1, xi), rbound + 1) 266 tree_items_append((sha, S_IFDIR, base)) 267 268 # skip ahead 269 ci = xi 270 # END handle bounds 271 # END for each entry 272 273 # finally create the tree 274 sio = BytesIO() 275 tree_to_stream(tree_items, sio.write) 276 sio.seek(0) 277 278 istream = odb.store(IStream(str_tree_type, len(sio.getvalue()), sio)) 279 return (istream.binsha, tree_items) 280 281 282 def _tree_entry_to_baseindexentry(tree_entry, stage): 283 return BaseIndexEntry((tree_entry[1], tree_entry[0], stage << CE_STAGESHIFT, tree_entry[2])) 284 285 286 def aggressive_tree_merge(odb, tree_shas): 287 """ 288 :return: list of BaseIndexEntries representing the aggressive merge of the given 289 trees. All valid entries are on stage 0, whereas the conflicting ones are left 290 on stage 1, 2 or 3, whereas stage 1 corresponds to the common ancestor tree, 291 2 to our tree and 3 to 'their' tree. 
292 :param tree_shas: 1, 2 or 3 trees as identified by their binary 20 byte shas 293 If 1 or two, the entries will effectively correspond to the last given tree 294 If 3 are given, a 3 way merge is performed""" 295 out = [] 296 out_append = out.append 297 298 # one and two way is the same for us, as we don't have to handle an existing 299 # index, instrea 300 if len(tree_shas) in (1, 2): 301 for entry in traverse_tree_recursive(odb, tree_shas[-1], ''): 302 out_append(_tree_entry_to_baseindexentry(entry, 0)) 303 # END for each entry 304 return out 305 # END handle single tree 306 307 if len(tree_shas) > 3: 308 raise ValueError("Cannot handle %i trees at once" % len(tree_shas)) 309 310 # three trees 311 for base, ours, theirs in traverse_trees_recursive(odb, tree_shas, ''): 312 if base is not None: 313 # base version exists 314 if ours is not None: 315 # ours exists 316 if theirs is not None: 317 # it exists in all branches, if it was changed in both 318 # its a conflict, otherwise we take the changed version 319 # This should be the most common branch, so it comes first 320 if(base[0] != ours[0] and base[0] != theirs[0] and ours[0] != theirs[0]) or \ 321 (base[1] != ours[1] and base[1] != theirs[1] and ours[1] != theirs[1]): 322 # changed by both 323 out_append(_tree_entry_to_baseindexentry(base, 1)) 324 out_append(_tree_entry_to_baseindexentry(ours, 2)) 325 out_append(_tree_entry_to_baseindexentry(theirs, 3)) 326 elif base[0] != ours[0] or base[1] != ours[1]: 327 # only we changed it 328 out_append(_tree_entry_to_baseindexentry(ours, 0)) 329 else: 330 # either nobody changed it, or they did. In either 331 # case, use theirs 332 out_append(_tree_entry_to_baseindexentry(theirs, 0)) 333 # END handle modification 334 else: 335 336 if ours[0] != base[0] or ours[1] != base[1]: 337 # they deleted it, we changed it, conflict 338 out_append(_tree_entry_to_baseindexentry(base, 1)) 339 out_append(_tree_entry_to_baseindexentry(ours, 2)) 340 # else: 341 # we didn't change it, ignore 342 # pass 343 # END handle our change 344 # END handle theirs 345 else: 346 if theirs is None: 347 # deleted in both, its fine - its out 348 pass 349 else: 350 if theirs[0] != base[0] or theirs[1] != base[1]: 351 # deleted in ours, changed theirs, conflict 352 out_append(_tree_entry_to_baseindexentry(base, 1)) 353 out_append(_tree_entry_to_baseindexentry(theirs, 3)) 354 # END theirs changed 355 # else: 356 # theirs didn't change 357 # pass 358 # END handle theirs 359 # END handle ours 360 else: 361 # all three can't be None 362 if ours is None: 363 # added in their branch 364 out_append(_tree_entry_to_baseindexentry(theirs, 0)) 365 elif theirs is None: 366 # added in our branch 367 out_append(_tree_entry_to_baseindexentry(ours, 0)) 368 else: 369 # both have it, except for the base, see whether it changed 370 if ours[0] != theirs[0] or ours[1] != theirs[1]: 371 out_append(_tree_entry_to_baseindexentry(ours, 2)) 372 out_append(_tree_entry_to_baseindexentry(theirs, 3)) 373 else: 374 # it was added the same in both 375 out_append(_tree_entry_to_baseindexentry(ours, 0)) 376 # END handle two items 377 # END handle heads 378 # END handle base exists 379 # END for each entries tuple 380 381 return out 382 [end of git/index/fun.py] [start of git/refs/head.py] 1 from git.config import SectionConstraint 2 from git.util import join_path 3 from git.exc import GitCommandError 4 5 from .symbolic import SymbolicReference 6 from .reference import Reference 7 8 __all__ = ["HEAD", "Head"] 9 10 11 def strip_quotes(string): 12 if 
string.startswith('"') and string.endswith('"'): 13 return string[1:-1] 14 return string 15 16 17 class HEAD(SymbolicReference): 18 19 """Special case of a Symbolic Reference as it represents the repository's 20 HEAD reference.""" 21 _HEAD_NAME = 'HEAD' 22 _ORIG_HEAD_NAME = 'ORIG_HEAD' 23 __slots__ = () 24 25 def __init__(self, repo, path=_HEAD_NAME): 26 if path != self._HEAD_NAME: 27 raise ValueError("HEAD instance must point to %r, got %r" % (self._HEAD_NAME, path)) 28 super(HEAD, self).__init__(repo, path) 29 30 def orig_head(self): 31 """ 32 :return: SymbolicReference pointing at the ORIG_HEAD, which is maintained 33 to contain the previous value of HEAD""" 34 return SymbolicReference(self.repo, self._ORIG_HEAD_NAME) 35 36 def reset(self, commit='HEAD', index=True, working_tree=False, 37 paths=None, **kwargs): 38 """Reset our HEAD to the given commit optionally synchronizing 39 the index and working tree. The reference we refer to will be set to 40 commit as well. 41 42 :param commit: 43 Commit object, Reference Object or string identifying a revision we 44 should reset HEAD to. 45 46 :param index: 47 If True, the index will be set to match the given commit. Otherwise 48 it will not be touched. 49 50 :param working_tree: 51 If True, the working tree will be forcefully adjusted to match the given 52 commit, possibly overwriting uncommitted changes without warning. 53 If working_tree is True, index must be true as well 54 55 :param paths: 56 Single path or list of paths relative to the git root directory 57 that are to be reset. This allows to partially reset individual files. 58 59 :param kwargs: 60 Additional arguments passed to git-reset. 61 62 :return: self""" 63 mode = "--soft" 64 if index: 65 mode = "--mixed" 66 67 # it appears, some git-versions declare mixed and paths deprecated 68 # see http://github.com/Byron/GitPython/issues#issue/2 69 if paths: 70 mode = None 71 # END special case 72 # END handle index 73 74 if working_tree: 75 mode = "--hard" 76 if not index: 77 raise ValueError("Cannot reset the working tree if the index is not reset as well") 78 79 # END working tree handling 80 81 try: 82 self.repo.git.reset(mode, commit, '--', paths, **kwargs) 83 except GitCommandError as e: 84 # git nowadays may use 1 as status to indicate there are still unstaged 85 # modifications after the reset 86 if e.status != 1: 87 raise 88 # END handle exception 89 90 return self 91 92 93 class Head(Reference): 94 95 """A Head is a named reference to a Commit. Every Head instance contains a name 96 and a Commit object. 97 98 Examples:: 99 100 >>> repo = Repo("/path/to/repo") 101 >>> head = repo.heads[0] 102 103 >>> head.name 104 'master' 105 106 >>> head.commit 107 <git.Commit "1c09f116cbc2cb4100fb6935bb162daa4723f455"> 108 109 >>> head.commit.hexsha 110 '1c09f116cbc2cb4100fb6935bb162daa4723f455'""" 111 _common_path_default = "refs/heads" 112 k_config_remote = "remote" 113 k_config_remote_ref = "merge" # branch to merge from remote 114 115 @classmethod 116 def delete(cls, repo, *heads, **kwargs): 117 """Delete the given heads 118 119 :param force: 120 If True, the heads will be deleted even if they are not yet merged into 121 the main development stream. 122 Default False""" 123 force = kwargs.get("force", False) 124 flag = "-d" 125 if force: 126 flag = "-D" 127 repo.git.branch(flag, *heads) 128 129 def set_tracking_branch(self, remote_reference): 130 """ 131 Configure this branch to track the given remote reference. This will alter 132 this branch's configuration accordingly. 
133 134 :param remote_reference: The remote reference to track or None to untrack 135 any references 136 :return: self""" 137 from .remote import RemoteReference 138 if remote_reference is not None and not isinstance(remote_reference, RemoteReference): 139 raise ValueError("Incorrect parameter type: %r" % remote_reference) 140 # END handle type 141 142 with self.config_writer() as writer: 143 if remote_reference is None: 144 writer.remove_option(self.k_config_remote) 145 writer.remove_option(self.k_config_remote_ref) 146 if len(writer.options()) == 0: 147 writer.remove_section() 148 else: 149 writer.set_value(self.k_config_remote, remote_reference.remote_name) 150 writer.set_value(self.k_config_remote_ref, Head.to_full_path(remote_reference.remote_head)) 151 152 return self 153 154 def tracking_branch(self): 155 """ 156 :return: The remote_reference we are tracking, or None if we are 157 not a tracking branch""" 158 from .remote import RemoteReference 159 reader = self.config_reader() 160 if reader.has_option(self.k_config_remote) and reader.has_option(self.k_config_remote_ref): 161 ref = Head(self.repo, Head.to_full_path(strip_quotes(reader.get_value(self.k_config_remote_ref)))) 162 remote_refpath = RemoteReference.to_full_path(join_path(reader.get_value(self.k_config_remote), ref.name)) 163 return RemoteReference(self.repo, remote_refpath) 164 # END handle have tracking branch 165 166 # we are not a tracking branch 167 return None 168 169 def rename(self, new_path, force=False): 170 """Rename self to a new path 171 172 :param new_path: 173 Either a simple name or a path, i.e. new_name or features/new_name. 174 The prefix refs/heads is implied 175 176 :param force: 177 If True, the rename will succeed even if a head with the target name 178 already exists. 179 180 :return: self 181 :note: respects the ref log as git commands are used""" 182 flag = "-m" 183 if force: 184 flag = "-M" 185 186 self.repo.git.branch(flag, self, new_path) 187 self.path = "%s/%s" % (self._common_path_default, new_path) 188 return self 189 190 def checkout(self, force=False, **kwargs): 191 """Checkout this head by setting the HEAD to this reference, by updating the index 192 to reflect the tree we point to and by updating the working tree to reflect 193 the latest index. 194 195 The command will fail if changed working tree files would be overwritten. 196 197 :param force: 198 If True, changes to the index and the working tree will be discarded. 199 If False, GitCommandError will be raised in that situation. 200 201 :param kwargs: 202 Additional keyword arguments to be passed to git checkout, i.e. 203 b='new_branch' to create a new branch at the given spot. 204 205 :return: 206 The active branch after the checkout operation, usually self unless 207 a new branch has been created. 208 If there is no active branch, as the HEAD is now detached, the HEAD 209 reference will be returned instead. 
210 211 :note: 212 By default it is only allowed to checkout heads - everything else 213 will leave the HEAD detached which is allowed and possible, but remains 214 a special state that some tools might not be able to handle.""" 215 kwargs['f'] = force 216 if kwargs['f'] is False: 217 kwargs.pop('f') 218 219 self.repo.git.checkout(self, **kwargs) 220 if self.repo.head.is_detached: 221 return self.repo.head 222 return self.repo.active_branch 223 224 #{ Configuration 225 def _config_parser(self, read_only): 226 if read_only: 227 parser = self.repo.config_reader() 228 else: 229 parser = self.repo.config_writer() 230 # END handle parser instance 231 232 return SectionConstraint(parser, 'branch "%s"' % self.name) 233 234 def config_reader(self): 235 """ 236 :return: A configuration parser instance constrained to only read 237 this instance's values""" 238 return self._config_parser(read_only=True) 239 240 def config_writer(self): 241 """ 242 :return: A configuration writer instance with read-and write access 243 to options of this head""" 244 return self._config_parser(read_only=False) 245 246 #} END configuration 247 [end of git/refs/head.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
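The three-way branch of the tree-merge code earlier in this listing is essentially a decision table over the `(base, ours, theirs)` versions of a single path, where stage 0 means "cleanly merged" and stages 1/2/3 mark the base/ours/theirs sides of a conflict. The sketch below restates that table outside GitPython on plain `(sha, mode)` tuples, purely as an illustration; `merge_one` is a made-up name and not part of GitPython's API.

```python
def merge_one(base, ours, theirs):
    """Return (entry, stage) pairs for one path; entries are (sha, mode) or None."""
    out = []
    if base is not None:
        if ours is not None:
            if theirs is not None:
                # changed differently in both branches (per field) -> full conflict
                both_changed = any(
                    base[i] != ours[i] and base[i] != theirs[i] and ours[i] != theirs[i]
                    for i in (0, 1)
                )
                if both_changed:
                    out += [(base, 1), (ours, 2), (theirs, 3)]
                elif ours != base:
                    out.append((ours, 0))        # only we changed it
                else:
                    out.append((theirs, 0))      # unchanged, or only they changed it
            elif ours != base:
                out += [(base, 1), (ours, 2)]    # they deleted it, we changed it
            # they deleted it and we left it alone -> drop it
        elif theirs is not None and theirs != base:
            out += [(base, 1), (theirs, 3)]      # we deleted it, they changed it
        # deleted on both sides, or unchanged by them -> drop it
    else:
        if ours is None:
            out.append((theirs, 0))              # added only in their branch
        elif theirs is None:
            out.append((ours, 0))                # added only in our branch
        elif ours != theirs:
            out += [(ours, 2), (theirs, 3)]      # added differently on both sides
        else:
            out.append((ours, 0))                # added identically
    return out


# one changed side keeps the change; two diverging sides become a staged conflict
print(merge_one(("a", 0o100644), ("b", 0o100644), ("a", 0o100644)))
print(merge_one(("a", 0o100644), ("b", 0o100644), ("c", 0o100644)))
```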
gitpython-developers/GitPython
3211ae9dbfc6aadd2dd1d7d0f9f3af37ead19383
GitCommand missing stderr With GitPython 3.1.14 `GitCommandError`, for failed clones would pass on the captured stderr output into its `stderr` attribute. With 3.1.15 this contains the empty string instead. This was caught by this unit test: https://github.com/tomtom-international/hopic/blob/9a35520388f8109ccff6a89407bc2429ed0d0557/hopic/test/test_checkout.py#L84-L103 A reduced test that reduces this to _only_ testing GitPython (instead of the full integration with hopic) looks like the code below. That exhibits the exact same problem: ```python import git import pytest def test_checkout_in_non_empty_dir(tmp_path): orig_repo = tmp_path / "orig" with git.Repo.init(orig_repo, expand_vars=False) as repo: repo.index.commit(message='Initial commit', **_commitargs) non_empty_dir = tmp_path / 'non-empty-clone' non_empty_dir.mkdir(parents=True) garbage_file = non_empty_dir / 'not-empty' garbage_file.write_text('Garbage!') # Verify that the clone fails complaining about the target directory not being empty/non-existent with pytest.raises(git.GitCommandError, match=r'(?is).*\bfatal:\s+destination\s+path\b.*\bexists\b.*\bnot\b.*\bempty\s+directory\b'): git.Repo.clone_from(orig_repo, non_empty_dir) assert garbage_file.exists() ``` With 3.1.14 this passes. With the `pytest.raises` expectation removed this output is displayed: ```console ___________________________________________________________________ test_checkout_in_non_empty_dir ____________________________________________________________________ tmp_path = PosixPath('/tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0') def test_checkout_in_non_empty_dir(tmp_path): orig_repo = tmp_path / "orig" with git.Repo.init(orig_repo, expand_vars=False) as repo: repo.index.commit(message='Initial commit', **_commitargs) non_empty_dir = tmp_path / 'non-empty-clone' non_empty_dir.mkdir(parents=True) garbage_file = non_empty_dir / 'not-empty' garbage_file.write_text('Garbage!') # Verify that the clone fails complaining about the target directory not being empty/non-existent > git.Repo.clone_from(orig_repo, non_empty_dir) hopic/test/test_checkout.py:95: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ v3.8/lib/python3.8/site-packages/git/repo/base.py:1032: in clone_from return cls._clone(git, url, to_path, GitCmdObjectDB, progress, multi_options, **kwargs) v3.8/lib/python3.8/site-packages/git/repo/base.py:973: in _clone finalize_process(proc, stderr=stderr) v3.8/lib/python3.8/site-packages/git/util.py:329: in finalize_process proc.wait(**kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <git.cmd.Git.AutoInterrupt object at 0x7f6a5883f9a0> stderr = b"fatal: destination path '/tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0/non-empty-clone' already exists and is not an empty directory.\n" def wait(self, stderr=b''): # TODO: Bad choice to mimic `proc.wait()` but with different args. """Wait for the process and return its status code. :param stderr: Previously read value of stderr, in case stderr is already closed. :warn: may deadlock if output or error pipes are used and not handled separately. 
:raise GitCommandError: if the return status is not 0""" if stderr is None: stderr = b'' stderr = force_bytes(data=stderr, encoding='utf-8') status = self.proc.wait() def read_all_from_possibly_closed_stream(stream): try: return stderr + force_bytes(stream.read()) except ValueError: return stderr or b'' if status != 0: errstr = read_all_from_possibly_closed_stream(self.proc.stderr) log.debug('AutoInterrupt wait stderr: %r' % (errstr,)) > raise GitCommandError(self.args, status, errstr) E git.exc.GitCommandError: Cmd('git') failed due to: exit code(128) E cmdline: git clone -v /tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0/orig /tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0/non-empty-clone E stderr: 'fatal: destination path '/tmp/pytest-of-vanschig/pytest-334/test_checkout_in_non_empty_dir0/non-empty-clone' already exists and is not an empty directory. E ' v3.8/lib/python3.8/site-packages/git/cmd.py:408: GitCommandError ``` With 3.1.15 this output is produced instead (notice the absence of the "stderr: ..." line): ```console ___________________________________________________________________ test_checkout_in_non_empty_dir ____________________________________________________________________ tmp_path = PosixPath('/tmp/pytest-of-vanschig/pytest-336/test_checkout_in_non_empty_dir0') def test_checkout_in_non_empty_dir(tmp_path): orig_repo = tmp_path / "orig" with git.Repo.init(orig_repo, expand_vars=False) as repo: repo.index.commit(message='Initial commit', **_commitargs) non_empty_dir = tmp_path / 'non-empty-clone' non_empty_dir.mkdir(parents=True) garbage_file = non_empty_dir / 'not-empty' garbage_file.write_text('Garbage!') # Verify that the clone fails complaining about the target directory not being empty/non-existent > git.Repo.clone_from(orig_repo, non_empty_dir) hopic/test/test_checkout.py:95: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ v3.8/lib/python3.8/site-packages/git/repo/base.py:1087: in clone_from return cls._clone(git, url, to_path, GitCmdObjectDB, progress, multi_options, **kwargs) v3.8/lib/python3.8/site-packages/git/repo/base.py:1017: in _clone handle_process_output(proc, None, progress_checked.new_message_handler(), v3.8/lib/python3.8/site-packages/git/cmd.py:116: in handle_process_output return finalizer(process) v3.8/lib/python3.8/site-packages/git/util.py:354: in finalize_process proc.wait(**kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <git.cmd.Git.AutoInterrupt object at 0x7fcc8fd95820>, stderr = b'' def wait(self, stderr=b''): # TODO: Bad choice to mimic `proc.wait()` but with different args. """Wait for the process and return its status code. :param stderr: Previously read value of stderr, in case stderr is already closed. :warn: may deadlock if output or error pipes are used and not handled separately. 
:raise GitCommandError: if the return status is not 0""" if stderr is None: stderr = b'' stderr = force_bytes(data=stderr, encoding='utf-8') status = self.proc.wait() def read_all_from_possibly_closed_stream(stream): try: return stderr + force_bytes(stream.read()) except ValueError: return stderr or b'' if status != 0: errstr = read_all_from_possibly_closed_stream(self.proc.stderr) log.debug('AutoInterrupt wait stderr: %r' % (errstr,)) > raise GitCommandError(remove_password_if_present(self.args), status, errstr) E git.exc.GitCommandError: Cmd('git') failed due to: exit code(128) E cmdline: git clone -v --progress /tmp/pytest-of-vanschig/pytest-336/test_checkout_in_non_empty_dir0/orig /tmp/pytest-of-vanschig/pytest-336/test_checkout_in_non_empty_dir0/non-empty-clone v3.8/lib/python3.8/site-packages/git/cmd.py:409: GitCommandError ``` When catching the exception and dumping it's `stderr` attribute it turns out to be an empty string (`''`). This seems related to #1220. But it's presentation is different as that issue still reports having content, just wrongly encoded.
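Stripped of the GitPython specifics, the empty `stderr` in the second traceback comes down to a property of OS pipes: once one consumer has drained a child's stderr stream, a later `read()` on the same stream returns nothing, so an error object built at that point has nothing left to attach. A minimal, GitPython-free demonstration of that effect (background for the report above, not the library's own code):

```python
import subprocess
import sys

# A child process that only writes to stderr and exits with a failure code.
child = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('fatal: boom\\n'); sys.exit(128)"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)

first_read = child.stderr.read()    # some reader drains the pipe first...
child.wait()
second_read = child.stderr.read()   # ...so a later read for the error message gets nothing

print(repr(first_read))    # b'fatal: boom\n'
print(repr(second_read))   # b''
```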
@muggenhor I don't believe the issue I raised is related to this. My issue is specifically that bytes are typecast to a str, not that bytes are missing. I have pinpointed the changes that caused that change in behavior in that issue, and they should not have the effect of bytes going missing. Right, this seems to be introduced by a different change too. This is instead introduced by 85ebfb2f0dedb18673a2d756274bbcecd1f034c4 from #1192. Thanks a lot for researching this. I would be super happy about a PR for a fix that doesn't undo the entirety of 85ebfb2, which presumably you found via bisection or similar. When looking at the changes, it wasn't immediately obvious why `clone()` would be affected in that way: https://github.com/gitpython-developers/GitPython/commit/85ebfb2f0dedb18673a2d756274bbcecd1f034c4#diff-3cc1aaf2f1e2bc1341d3f71ceec44b2762b981280b4d162e26327bd558721fe1R1050
2021-04-22T07:30:10Z
<patch> diff --git a/git/repo/base.py b/git/repo/base.py --- a/git/repo/base.py +++ b/git/repo/base.py @@ -988,8 +988,6 @@ def init(cls, path: PathLike = None, mkdir: bool = True, odbt: Type[GitCmdObject def _clone(cls, git: 'Git', url: PathLike, path: PathLike, odb_default_type: Type[GitCmdObjectDB], progress: Optional[Callable], multi_options: Optional[List[str]] = None, **kwargs: Any ) -> 'Repo': - progress_checked = to_progress_instance(progress) - odbt = kwargs.pop('odbt', odb_default_type) # when pathlib.Path or other classbased path is passed @@ -1012,9 +1010,9 @@ def _clone(cls, git: 'Git', url: PathLike, path: PathLike, odb_default_type: Typ if multi_options: multi = ' '.join(multi_options).split(' ') proc = git.clone(multi, Git.polish_url(url), clone_path, with_extended_output=True, as_process=True, - v=True, universal_newlines=True, **add_progress(kwargs, git, progress_checked)) - if progress_checked: - handle_process_output(proc, None, progress_checked.new_message_handler(), + v=True, universal_newlines=True, **add_progress(kwargs, git, progress)) + if progress: + handle_process_output(proc, None, to_progress_instance(progress).new_message_handler(), finalize_process, decode_streams=False) else: (stdout, stderr) = proc.communicate() </patch>
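The fix above is small, and its effect is easy to miss at first glance: before this change, `progress` was converted up front with `to_progress_instance()`, which (assuming it behaves as in `git.util`, wrapping `None` in a `RemoteProgress` object) always produced a truthy `progress_checked`. That sent every clone, including plain `Repo.clone_from()` calls with no progress handler, down the `handle_process_output` branch that consumes stderr, instead of the `communicate()` branch that keeps stderr for the eventual `GitCommandError`. Moving the conversion inside `if progress:` restores the old branch selection. A stripped-down sketch of that truthiness pitfall, using hypothetical stand-in names rather than GitPython's real classes:

```python
class AlwaysTruthyProgress:
    """Stand-in for a progress object with no __bool__/__len__ override."""


def to_progress_instance(progress):
    # Mirrors the assumed behaviour: None is wrapped, not passed through.
    return progress if progress is not None else AlwaysTruthyProgress()


def clone_before_fix(progress=None):
    progress_checked = to_progress_instance(progress)
    if progress_checked:        # always true, even when the caller passed nothing
        return "streamed output path (stderr consumed, lost to the exception)"
    return "communicate() path (stderr kept for the error message)"


def clone_after_fix(progress=None):
    if progress:                # only true when a handler was really supplied
        return "streamed output path"
    return "communicate() path (stderr kept for the error message)"


print(clone_before_fix())   # streamed output path ...
print(clone_after_fix())    # communicate() path ...
```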
[]
[]
pandas-dev__pandas-24547
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> (row) Index Name with to_html(header=False) is not displayed #### Code Sample, a copy-pastable example if possible ```python import pandas as pd import numpy as np df = pd.DataFrame(np.zeros((2, 2), dtype=int)) df.index.name = 'index.name' df.to_html(header=False) ``` <table border="1" class="dataframe"> <tbody> <tr> <th>0</th> <td>0</td> <td>0</td> </tr> <tr> <th>1</th> <td>0</td> <td>0</td> </tr> </tbody> </table> ```python import pandas as pd import numpy as np df = pd.DataFrame(np.zeros((2, 2), dtype=int)) df.index = pd.MultiIndex.from_product([['a'], ['b', 'c']], names=[ 'index.name.0', 'index.name.1']) df.to_html(header=False) ``` <table border="1" class="dataframe"> <tbody> <tr> <th rowspan="2" valign="top">a</th> <th>b</th> <td>0</td> <td>0</td> </tr> <tr> <th>c</th> <td>0</td> <td>0</td> </tr> </tbody> </table> #### Problem description `to_html(header=False)` is not displaying the (row) Index names. The `header` parameter should be analogous to the `index` parameter and hide the columns Index only and leave the (row) Index names displayed. to hide the display of the row Index names, the `index_names=False` parameter should be used. the problem is due to an early return in the composition of the HTML header. the `to_html` parameter `header` should not be confused with the HTML header. https://github.com/pandas-dev/pandas/blob/deb7b4d5003b939f47e525bcdaceeea48622a73a/pandas/io/formats/html.py#L196-L201 #### Expected Output <table border="1" class="dataframe"> <thead> <tr> <th>index.name</th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0</td> <td>0</td> </tr> <tr> <th>1</th> <td>0</td> <td>0</td> </tr> </tbody> </table> <table border="1" class="dataframe"> <thead> <tr> <th>index.name.0</th> <th>index.name.1</th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th rowspan="2" valign="top">a</th> <th>b</th> <td>0</td> <td>0</td> </tr> <tr> <th>c</th> <td>0</td> <td>0</td> </tr> </tbody> </table> #### Output of ``pd.show_versions()`` <details> INSTALLED VERSIONS ------------------ commit: None python: 3.6.5.final.0 python-bits: 64 OS: Windows OS-release: 10 machine: AMD64 processor: Intel64 Family 6 Model 58 Stepping 9, GenuineIntel byteorder: little LC_ALL: None LANG: None LOCALE: None.None pandas: 0.23.0 pytest: 3.5.1 pip: 10.0.1 setuptools: 39.1.0 Cython: 0.28.2 numpy: 1.14.3 scipy: 1.1.0 pyarrow: None xarray: None IPython: 6.4.0 sphinx: 1.7.4 patsy: 0.5.0 dateutil: 2.7.3 pytz: 2018.4 blosc: None bottleneck: 1.2.1 tables: 3.4.3 numexpr: 2.6.5 feather: None matplotlib: 2.2.2 openpyxl: 2.5.3 xlrd: 1.1.0 xlwt: 1.3.0 xlsxwriter: 1.0.4 lxml: 4.2.1 bs4: 4.6.0 html5lib: 1.0.1 sqlalchemy: 1.2.7 pymysql: None psycopg2: None jinja2: 2.10 s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None </details> cc @WillAyd </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://github.com/pandas-dev/pandas/blob/master/doc/logo/pandas_logo.png"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 9 <table> 10 <tr> 11 <td>Latest Release</td> 12 <td> 13 <a href="https://pypi.org/project/pandas/"> 14 <img src="https://img.shields.io/pypi/v/pandas.svg" alt="latest release" /> 15 </a> 16 </td> 17 </tr> 18 <td></td> 19 <td> 20 <a href="https://anaconda.org/anaconda/pandas/"> 21 <img src="https://anaconda.org/conda-forge/pandas/badges/version.svg" alt="latest release" /> 22 </a> 23 </td> 24 </tr> 25 <tr> 26 
<td>Package Status</td> 27 <td> 28 <a href="https://pypi.org/project/pandas/"> 29 <img src="https://img.shields.io/pypi/status/pandas.svg" alt="status" /></td> 30 </a> 31 </tr> 32 <tr> 33 <td>License</td> 34 <td> 35 <a href="https://github.com/pandas-dev/pandas/blob/master/LICENSE"> 36 <img src="https://img.shields.io/pypi/l/pandas.svg" alt="license" /> 37 </a> 38 </td> 39 </tr> 40 <tr> 41 <td>Build Status</td> 42 <td> 43 <a href="https://travis-ci.org/pandas-dev/pandas"> 44 <img src="https://travis-ci.org/pandas-dev/pandas.svg?branch=master" alt="travis build status" /> 45 </a> 46 </td> 47 </tr> 48 <tr> 49 <td></td> 50 <td> 51 <a href="https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master"> 52 <img src="https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master" alt="Azure Pipelines build status" /> 53 </a> 54 </td> 55 </tr> 56 <tr> 57 <td>Coverage</td> 58  <td> 59 <a href="https://codecov.io/gh/pandas-dev/pandas"> 60 <img src="https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master" alt="coverage" /> 61 </a> 62 </td> 63 </tr> 64 <tr> 65 <td>Downloads</td> 66 <td> 67 <a href="https://pandas.pydata.org"> 68 <img src="https://anaconda.org/conda-forge/pandas/badges/downloads.svg" alt="conda-forge downloads" /> 69 </a> 70 </td> 71 </tr> 72 <tr> 73 <td>Gitter</td> 74 <td> 75 <a href="https://gitter.im/pydata/pandas"> 76 <img src="https://badges.gitter.im/Join%20Chat.svg" 77 </a> 78 </td> 79 </tr> 80 </table> 81 82 83 84 ## What is it? 85 86 **pandas** is a Python package providing fast, flexible, and expressive data 87 structures designed to make working with "relational" or "labeled" data both 88 easy and intuitive. It aims to be the fundamental high-level building block for 89 doing practical, **real world** data analysis in Python. Additionally, it has 90 the broader goal of becoming **the most powerful and flexible open source data 91 analysis / manipulation tool available in any language**. It is already well on 92 its way towards this goal. 93 94 ## Main Features 95 Here are just a few of the things that pandas does well: 96 97 - Easy handling of [**missing data**][missing-data] (represented as 98 `NaN`) in floating point as well as non-floating point data 99 - Size mutability: columns can be [**inserted and 100 deleted**][insertion-deletion] from DataFrame and higher dimensional 101 objects 102 - Automatic and explicit [**data alignment**][alignment]: objects can 103 be explicitly aligned to a set of labels, or the user can simply 104 ignore the labels and let `Series`, `DataFrame`, etc. 
automatically 105 align the data for you in computations 106 - Powerful, flexible [**group by**][groupby] functionality to perform 107 split-apply-combine operations on data sets, for both aggregating 108 and transforming data 109 - Make it [**easy to convert**][conversion] ragged, 110 differently-indexed data in other Python and NumPy data structures 111 into DataFrame objects 112 - Intelligent label-based [**slicing**][slicing], [**fancy 113 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 114 large data sets 115 - Intuitive [**merging**][merging] and [**joining**][joining] data 116 sets 117 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 118 data sets 119 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 120 labels per tick) 121 - Robust IO tools for loading data from [**flat files**][flat-files] 122 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 123 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 124 - [**Time series**][timeseries]-specific functionality: date range 125 generation and frequency conversion, moving window statistics, 126 moving window linear regressions, date shifting and lagging, etc. 127 128 129 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 130 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 131 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 132 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 133 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 134 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 135 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 136 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 137 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 138 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 139 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 140 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 141 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 142 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 143 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 144 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 145 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 146 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 147 148 ## Where to get it 149 The source code is currently hosted on GitHub at: 150 https://github.com/pandas-dev/pandas 151 152 Binary installers for the latest released version are available at the [Python 153 package index](https://pypi.org/project/pandas) and on conda. 
154 155 ```sh 156 # conda 157 conda install pandas 158 ``` 159 160 ```sh 161 # or PyPI 162 pip install pandas 163 ``` 164 165 ## Dependencies 166 - [NumPy](https://www.numpy.org): 1.12.0 or higher 167 - [python-dateutil](https://labix.org/python-dateutil): 2.5.0 or higher 168 - [pytz](https://pythonhosted.org/pytz): 2011k or higher 169 170 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) 171 for recommended and optional dependencies. 172 173 ## Installation from sources 174 To install pandas from source you need Cython in addition to the normal 175 dependencies above. Cython can be installed from pypi: 176 177 ```sh 178 pip install cython 179 ``` 180 181 In the `pandas` directory (same one where you found this file after 182 cloning the git repo), execute: 183 184 ```sh 185 python setup.py install 186 ``` 187 188 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 189 190 ```sh 191 python setup.py develop 192 ``` 193 194 Alternatively, you can use `pip` if you want all the dependencies pulled 195 in automatically (the `-e` option is for installing it in [development 196 mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs)): 197 198 ```sh 199 pip install -e . 200 ``` 201 202 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 203 204 ## License 205 [BSD 3](LICENSE) 206 207 ## Documentation 208 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 209 210 ## Background 211 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 212 has been under active development since then. 213 214 ## Getting Help 215 216 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 217 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 218 219 ## Discussion and Development 220 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 221 222 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 223 224 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 225 226 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas-docs.github.io/pandas-docs-travis/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub. 227 228 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 
229 230 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 231 232 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 233 234 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 235 [end of README.md] [start of asv_bench/benchmarks/io/stata.py] 1 import numpy as np 2 from pandas import DataFrame, date_range, read_stata 3 import pandas.util.testing as tm 4 5 from ..pandas_vb_common import BaseIO 6 7 8 class Stata(BaseIO): 9 10 params = ['tc', 'td', 'tm', 'tw', 'th', 'tq', 'ty'] 11 param_names = ['convert_dates'] 12 13 def setup(self, convert_dates): 14 self.fname = '__test__.dta' 15 N = 100000 16 C = 5 17 self.df = DataFrame(np.random.randn(N, C), 18 columns=['float{}'.format(i) for i in range(C)], 19 index=date_range('20000101', periods=N, freq='H')) 20 self.df['object'] = tm.makeStringIndex(N) 21 self.df['int8_'] = np.random.randint(np.iinfo(np.int8).min, 22 np.iinfo(np.int8).max - 27, N) 23 self.df['int16_'] = np.random.randint(np.iinfo(np.int16).min, 24 np.iinfo(np.int16).max - 27, N) 25 self.df['int32_'] = np.random.randint(np.iinfo(np.int32).min, 26 np.iinfo(np.int32).max - 27, N) 27 self.df['float32_'] = np.array(np.random.randn(N), 28 dtype=np.float32) 29 self.convert_dates = {'index': convert_dates} 30 self.df.to_stata(self.fname, self.convert_dates) 31 32 def time_read_stata(self, convert_dates): 33 read_stata(self.fname) 34 35 def time_write_stata(self, convert_dates): 36 self.df.to_stata(self.fname, self.convert_dates) 37 38 39 from ..pandas_vb_common import setup # noqa: F401 40 [end of asv_bench/benchmarks/io/stata.py] [start of asv_bench/benchmarks/timedelta.py] 1 import datetime 2 3 import numpy as np 4 5 from pandas import ( 6 DataFrame, Series, Timedelta, Timestamp, timedelta_range, to_timedelta) 7 8 9 class TimedeltaConstructor(object): 10 11 def time_from_int(self): 12 Timedelta(123456789) 13 14 def time_from_unit(self): 15 Timedelta(1, unit='d') 16 17 def time_from_components(self): 18 Timedelta(days=1, hours=2, minutes=3, seconds=4, milliseconds=5, 19 microseconds=6, nanoseconds=7) 20 21 def time_from_datetime_timedelta(self): 22 Timedelta(datetime.timedelta(days=1, seconds=1)) 23 24 def time_from_np_timedelta(self): 25 Timedelta(np.timedelta64(1, 'ms')) 26 27 def time_from_string(self): 28 Timedelta('1 days') 29 30 def time_from_iso_format(self): 31 Timedelta('P4DT12H30M5S') 32 33 def time_from_missing(self): 34 Timedelta('nat') 35 36 37 class ToTimedelta(object): 38 39 def setup(self): 40 self.ints = np.random.randint(0, 60, size=10000) 41 self.str_days = [] 42 self.str_seconds = [] 43 for i in self.ints: 44 self.str_days.append('{0} days'.format(i)) 45 self.str_seconds.append('00:00:{0:02d}'.format(i)) 46 47 def time_convert_int(self): 48 to_timedelta(self.ints, unit='s') 49 50 def time_convert_string_days(self): 51 to_timedelta(self.str_days) 52 53 def time_convert_string_seconds(self): 54 to_timedelta(self.str_seconds) 55 56 57 class ToTimedeltaErrors(object): 58 59 params = ['coerce', 'ignore'] 60 param_names = ['errors'] 61 62 def setup(self, errors): 63 ints = 
np.random.randint(0, 60, size=10000) 64 self.arr = ['{0} days'.format(i) for i in ints] 65 self.arr[-1] = 'apple' 66 67 def time_convert(self, errors): 68 to_timedelta(self.arr, errors=errors) 69 70 71 class TimedeltaOps(object): 72 73 def setup(self): 74 self.td = to_timedelta(np.arange(1000000)) 75 self.ts = Timestamp('2000') 76 77 def time_add_td_ts(self): 78 self.td + self.ts 79 80 81 class TimedeltaProperties(object): 82 83 def setup_cache(self): 84 td = Timedelta(days=365, minutes=35, seconds=25, milliseconds=35) 85 return td 86 87 def time_timedelta_days(self, td): 88 td.days 89 90 def time_timedelta_seconds(self, td): 91 td.seconds 92 93 def time_timedelta_microseconds(self, td): 94 td.microseconds 95 96 def time_timedelta_nanoseconds(self, td): 97 td.nanoseconds 98 99 100 class DatetimeAccessor(object): 101 102 def setup_cache(self): 103 N = 100000 104 series = Series(timedelta_range('1 days', periods=N, freq='h')) 105 return series 106 107 def time_dt_accessor(self, series): 108 series.dt 109 110 def time_timedelta_days(self, series): 111 series.dt.days 112 113 def time_timedelta_seconds(self, series): 114 series.dt.seconds 115 116 def time_timedelta_microseconds(self, series): 117 series.dt.microseconds 118 119 def time_timedelta_nanoseconds(self, series): 120 series.dt.nanoseconds 121 122 123 class TimedeltaIndexing(object): 124 125 def setup(self): 126 self.index = timedelta_range(start='1985', periods=1000, freq='D') 127 self.index2 = timedelta_range(start='1986', periods=1000, freq='D') 128 self.series = Series(range(1000), index=self.index) 129 self.timedelta = self.index[500] 130 131 def time_get_loc(self): 132 self.index.get_loc(self.timedelta) 133 134 def time_shape(self): 135 self.index.shape 136 137 def time_shallow_copy(self): 138 self.index._shallow_copy() 139 140 def time_series_loc(self): 141 self.series.loc[self.timedelta] 142 143 def time_align(self): 144 DataFrame({'a': self.series, 'b': self.series[:500]}) 145 146 def time_intersection(self): 147 self.index.intersection(self.index2) 148 149 def time_union(self): 150 self.index.union(self.index2) 151 152 def time_unique(self): 153 self.index.unique() 154 [end of asv_bench/benchmarks/timedelta.py] [start of pandas/io/formats/html.py] 1 # -*- coding: utf-8 -*- 2 """ 3 Module for formatting output data in HTML. 
4 """ 5 6 from __future__ import print_function 7 8 from textwrap import dedent 9 10 from pandas.compat import OrderedDict, lzip, map, range, u, unichr, zip 11 12 from pandas.core.dtypes.generic import ABCMultiIndex 13 14 from pandas import compat 15 import pandas.core.common as com 16 from pandas.core.config import get_option 17 18 from pandas.io.common import _is_url 19 from pandas.io.formats.format import ( 20 TableFormatter, buffer_put_lines, get_level_lengths) 21 from pandas.io.formats.printing import pprint_thing 22 23 24 class HTMLFormatter(TableFormatter): 25 26 indent_delta = 2 27 28 def __init__(self, formatter, classes=None, notebook=False, border=None, 29 table_id=None, render_links=False): 30 self.fmt = formatter 31 self.classes = classes 32 33 self.frame = self.fmt.frame 34 self.columns = self.fmt.tr_frame.columns 35 self.elements = [] 36 self.bold_rows = self.fmt.kwds.get('bold_rows', False) 37 self.escape = self.fmt.kwds.get('escape', True) 38 self.show_dimensions = self.fmt.show_dimensions 39 self.notebook = notebook 40 if border is None: 41 border = get_option('display.html.border') 42 self.border = border 43 self.table_id = table_id 44 self.render_links = render_links 45 46 @property 47 def show_col_idx_names(self): 48 # see gh-22579 49 # Column misalignment also occurs for 50 # a standard index when the columns index is named. 51 # Determine if ANY column names need to be displayed 52 # since if the row index is not displayed a column of 53 # blank cells need to be included before the DataFrame values. 54 # TODO: refactor to add show_col_idx_names property to 55 # DataFrameFormatter 56 return all((self.fmt.has_column_names, 57 self.fmt.show_index_names, 58 self.fmt.header)) 59 60 @property 61 def row_levels(self): 62 if self.fmt.index: 63 # showing (row) index 64 return self.frame.index.nlevels 65 elif self.show_col_idx_names: 66 # see gh-22579 67 # Column misalignment also occurs for 68 # a standard index when the columns index is named. 69 # If the row index is not displayed a column of 70 # blank cells need to be included before the DataFrame values. 
71 return 1 72 # not showing (row) index 73 return 0 74 75 @property 76 def is_truncated(self): 77 return self.fmt.is_truncated 78 79 @property 80 def ncols(self): 81 return len(self.fmt.tr_frame.columns) 82 83 def write(self, s, indent=0): 84 rs = pprint_thing(s) 85 self.elements.append(' ' * indent + rs) 86 87 def write_th(self, s, indent=0, tags=None): 88 if self.fmt.col_space is not None and self.fmt.col_space > 0: 89 tags = (tags or "") 90 tags += ('style="min-width: {colspace};"' 91 .format(colspace=self.fmt.col_space)) 92 93 return self._write_cell(s, kind='th', indent=indent, tags=tags) 94 95 def write_td(self, s, indent=0, tags=None): 96 return self._write_cell(s, kind='td', indent=indent, tags=tags) 97 98 def _write_cell(self, s, kind='td', indent=0, tags=None): 99 if tags is not None: 100 start_tag = '<{kind} {tags}>'.format(kind=kind, tags=tags) 101 else: 102 start_tag = '<{kind}>'.format(kind=kind) 103 104 if self.escape: 105 # escape & first to prevent double escaping of & 106 esc = OrderedDict([('&', r'&amp;'), ('<', r'&lt;'), 107 ('>', r'&gt;')]) 108 else: 109 esc = {} 110 111 rs = pprint_thing(s, escape_chars=esc).strip() 112 113 if self.render_links and _is_url(rs): 114 rs_unescaped = pprint_thing(s, escape_chars={}).strip() 115 start_tag += '<a href="{url}" target="_blank">'.format( 116 url=rs_unescaped) 117 end_a = '</a>' 118 else: 119 end_a = '' 120 121 self.write(u'{start}{rs}{end_a}</{kind}>'.format( 122 start=start_tag, rs=rs, end_a=end_a, kind=kind), indent) 123 124 def write_tr(self, line, indent=0, indent_delta=0, header=False, 125 align=None, tags=None, nindex_levels=0): 126 if tags is None: 127 tags = {} 128 129 if align is None: 130 self.write('<tr>', indent) 131 else: 132 self.write('<tr style="text-align: {align};">' 133 .format(align=align), indent) 134 indent += indent_delta 135 136 for i, s in enumerate(line): 137 val_tag = tags.get(i, None) 138 if header or (self.bold_rows and i < nindex_levels): 139 self.write_th(s, indent, tags=val_tag) 140 else: 141 self.write_td(s, indent, tags=val_tag) 142 143 indent -= indent_delta 144 self.write('</tr>', indent) 145 146 def write_style(self): 147 # We use the "scoped" attribute here so that the desired 148 # style properties for the data frame are not then applied 149 # throughout the entire notebook. 150 template_first = """\ 151 <style scoped>""" 152 template_last = """\ 153 </style>""" 154 template_select = """\ 155 .dataframe %s { 156 %s: %s; 157 }""" 158 element_props = [('tbody tr th:only-of-type', 159 'vertical-align', 160 'middle'), 161 ('tbody tr th', 162 'vertical-align', 163 'top')] 164 if isinstance(self.columns, ABCMultiIndex): 165 element_props.append(('thead tr th', 166 'text-align', 167 'left')) 168 if all((self.fmt.has_index_names, 169 self.fmt.index, 170 self.fmt.show_index_names)): 171 element_props.append(('thead tr:last-of-type th', 172 'text-align', 173 'right')) 174 else: 175 element_props.append(('thead th', 176 'text-align', 177 'right')) 178 template_mid = '\n\n'.join(map(lambda t: template_select % t, 179 element_props)) 180 template = dedent('\n'.join((template_first, 181 template_mid, 182 template_last))) 183 if self.notebook: 184 self.write(template) 185 186 def write_result(self, buf): 187 indent = 0 188 id_section = "" 189 frame = self.frame 190 191 _classes = ['dataframe'] # Default class. 
192 use_mathjax = get_option("display.html.use_mathjax") 193 if not use_mathjax: 194 _classes.append('tex2jax_ignore') 195 if self.classes is not None: 196 if isinstance(self.classes, str): 197 self.classes = self.classes.split() 198 if not isinstance(self.classes, (list, tuple)): 199 raise AssertionError('classes must be list or tuple, not {typ}' 200 .format(typ=type(self.classes))) 201 _classes.extend(self.classes) 202 203 if self.notebook: 204 self.write('<div>') 205 206 self.write_style() 207 208 if self.table_id is not None: 209 id_section = ' id="{table_id}"'.format(table_id=self.table_id) 210 self.write('<table border="{border}" class="{cls}"{id_section}>' 211 .format(border=self.border, cls=' '.join(_classes), 212 id_section=id_section), indent) 213 214 indent += self.indent_delta 215 indent = self._write_header(indent) 216 indent = self._write_body(indent) 217 218 self.write('</table>', indent) 219 if self.should_show_dimensions: 220 by = chr(215) if compat.PY3 else unichr(215) # × 221 self.write(u('<p>{rows} rows {by} {cols} columns</p>') 222 .format(rows=len(frame), 223 by=by, 224 cols=len(frame.columns))) 225 226 if self.notebook: 227 self.write('</div>') 228 229 buffer_put_lines(buf, self.elements) 230 231 def _write_header(self, indent): 232 truncate_h = self.fmt.truncate_h 233 234 if not self.fmt.header: 235 # write nothing 236 return indent 237 238 self.write('<thead>', indent) 239 240 indent += self.indent_delta 241 242 if isinstance(self.columns, ABCMultiIndex): 243 template = 'colspan="{span:d}" halign="left"' 244 245 if self.fmt.sparsify: 246 # GH3547 247 sentinel = com.sentinel_factory() 248 else: 249 sentinel = None 250 levels = self.columns.format(sparsify=sentinel, adjoin=False, 251 names=False) 252 level_lengths = get_level_lengths(levels, sentinel) 253 inner_lvl = len(level_lengths) - 1 254 for lnum, (records, values) in enumerate(zip(level_lengths, 255 levels)): 256 if truncate_h: 257 # modify the header lines 258 ins_col = self.fmt.tr_col_num 259 if self.fmt.sparsify: 260 recs_new = {} 261 # Increment tags after ... col. 262 for tag, span in list(records.items()): 263 if tag >= ins_col: 264 recs_new[tag + 1] = span 265 elif tag + span > ins_col: 266 recs_new[tag] = span + 1 267 if lnum == inner_lvl: 268 values = (values[:ins_col] + (u('...'),) + 269 values[ins_col:]) 270 else: 271 # sparse col headers do not receive a ... 272 values = (values[:ins_col] + 273 (values[ins_col - 1], ) + 274 values[ins_col:]) 275 else: 276 recs_new[tag] = span 277 # if ins_col lies between tags, all col headers 278 # get ... 279 if tag + span == ins_col: 280 recs_new[ins_col] = 1 281 values = (values[:ins_col] + (u('...'),) + 282 values[ins_col:]) 283 records = recs_new 284 inner_lvl = len(level_lengths) - 1 285 if lnum == inner_lvl: 286 records[ins_col] = 1 287 else: 288 recs_new = {} 289 for tag, span in list(records.items()): 290 if tag >= ins_col: 291 recs_new[tag + 1] = span 292 else: 293 recs_new[tag] = span 294 recs_new[ins_col] = 1 295 records = recs_new 296 values = (values[:ins_col] + [u('...')] + 297 values[ins_col:]) 298 299 # see gh-22579 300 # Column Offset Bug with to_html(index=False) with 301 # MultiIndex Columns and Index. 302 # Initially fill row with blank cells before column names. 303 # TODO: Refactor to remove code duplication with code 304 # block below for standard columns index. 
305 row = [''] * (self.row_levels - 1) 306 if self.fmt.index or self.show_col_idx_names: 307 # see gh-22747 308 # If to_html(index_names=False) do not show columns 309 # index names. 310 # TODO: Refactor to use _get_column_name_list from 311 # DataFrameFormatter class and create a 312 # _get_formatted_column_labels function for code 313 # parity with DataFrameFormatter class. 314 if self.fmt.show_index_names: 315 name = self.columns.names[lnum] 316 row.append(pprint_thing(name or '')) 317 else: 318 row.append('') 319 320 tags = {} 321 j = len(row) 322 for i, v in enumerate(values): 323 if i in records: 324 if records[i] > 1: 325 tags[j] = template.format(span=records[i]) 326 else: 327 continue 328 j += 1 329 row.append(v) 330 self.write_tr(row, indent, self.indent_delta, tags=tags, 331 header=True) 332 else: 333 # see gh-22579 334 # Column misalignment also occurs for 335 # a standard index when the columns index is named. 336 # Initially fill row with blank cells before column names. 337 # TODO: Refactor to remove code duplication with code block 338 # above for columns MultiIndex. 339 row = [''] * (self.row_levels - 1) 340 if self.fmt.index or self.show_col_idx_names: 341 # see gh-22747 342 # If to_html(index_names=False) do not show columns 343 # index names. 344 # TODO: Refactor to use _get_column_name_list from 345 # DataFrameFormatter class. 346 if self.fmt.show_index_names: 347 row.append(self.columns.name or '') 348 else: 349 row.append('') 350 row.extend(self.columns) 351 align = self.fmt.justify 352 353 if truncate_h: 354 ins_col = self.row_levels + self.fmt.tr_col_num 355 row.insert(ins_col, '...') 356 357 self.write_tr(row, indent, self.indent_delta, header=True, 358 align=align) 359 360 if all((self.fmt.has_index_names, 361 self.fmt.index, 362 self.fmt.show_index_names)): 363 row = ([x if x is not None else '' for x in self.frame.index.names] 364 + [''] * (self.ncols + (1 if truncate_h else 0))) 365 self.write_tr(row, indent, self.indent_delta, header=True) 366 367 indent -= self.indent_delta 368 self.write('</thead>', indent) 369 370 return indent 371 372 def _write_body(self, indent): 373 self.write('<tbody>', indent) 374 indent += self.indent_delta 375 376 fmt_values = {i: self.fmt._format_col(i) for i in range(self.ncols)} 377 378 # write values 379 if self.fmt.index and isinstance(self.frame.index, ABCMultiIndex): 380 self._write_hierarchical_rows(fmt_values, indent) 381 else: 382 self._write_regular_rows(fmt_values, indent) 383 384 indent -= self.indent_delta 385 self.write('</tbody>', indent) 386 indent -= self.indent_delta 387 388 return indent 389 390 def _write_regular_rows(self, fmt_values, indent): 391 truncate_h = self.fmt.truncate_h 392 truncate_v = self.fmt.truncate_v 393 394 nrows = len(self.fmt.tr_frame) 395 396 if self.fmt.index: 397 fmt = self.fmt._get_formatter('__index__') 398 if fmt is not None: 399 index_values = self.fmt.tr_frame.index.map(fmt) 400 else: 401 index_values = self.fmt.tr_frame.index.format() 402 403 row = [] 404 for i in range(nrows): 405 406 if truncate_v and i == (self.fmt.tr_row_num): 407 str_sep_row = ['...'] * len(row) 408 self.write_tr(str_sep_row, indent, self.indent_delta, 409 tags=None, nindex_levels=self.row_levels) 410 411 row = [] 412 if self.fmt.index: 413 row.append(index_values[i]) 414 # see gh-22579 415 # Column misalignment also occurs for 416 # a standard index when the columns index is named. 417 # Add blank cell before data cells. 
418 elif self.show_col_idx_names: 419 row.append('') 420 row.extend(fmt_values[j][i] for j in range(self.ncols)) 421 422 if truncate_h: 423 dot_col_ix = self.fmt.tr_col_num + self.row_levels 424 row.insert(dot_col_ix, '...') 425 self.write_tr(row, indent, self.indent_delta, tags=None, 426 nindex_levels=self.row_levels) 427 428 def _write_hierarchical_rows(self, fmt_values, indent): 429 template = 'rowspan="{span}" valign="top"' 430 431 truncate_h = self.fmt.truncate_h 432 truncate_v = self.fmt.truncate_v 433 frame = self.fmt.tr_frame 434 nrows = len(frame) 435 # TODO: after gh-22887 fixed, refactor to use class property 436 # in place of row_levels 437 row_levels = self.frame.index.nlevels 438 439 idx_values = frame.index.format(sparsify=False, adjoin=False, 440 names=False) 441 idx_values = lzip(*idx_values) 442 443 if self.fmt.sparsify: 444 # GH3547 445 sentinel = com.sentinel_factory() 446 levels = frame.index.format(sparsify=sentinel, adjoin=False, 447 names=False) 448 449 level_lengths = get_level_lengths(levels, sentinel) 450 inner_lvl = len(level_lengths) - 1 451 if truncate_v: 452 # Insert ... row and adjust idx_values and 453 # level_lengths to take this into account. 454 ins_row = self.fmt.tr_row_num 455 inserted = False 456 for lnum, records in enumerate(level_lengths): 457 rec_new = {} 458 for tag, span in list(records.items()): 459 if tag >= ins_row: 460 rec_new[tag + 1] = span 461 elif tag + span > ins_row: 462 rec_new[tag] = span + 1 463 464 # GH 14882 - Make sure insertion done once 465 if not inserted: 466 dot_row = list(idx_values[ins_row - 1]) 467 dot_row[-1] = u('...') 468 idx_values.insert(ins_row, tuple(dot_row)) 469 inserted = True 470 else: 471 dot_row = list(idx_values[ins_row]) 472 dot_row[inner_lvl - lnum] = u('...') 473 idx_values[ins_row] = tuple(dot_row) 474 else: 475 rec_new[tag] = span 476 # If ins_row lies between tags, all cols idx cols 477 # receive ... 478 if tag + span == ins_row: 479 rec_new[ins_row] = 1 480 if lnum == 0: 481 idx_values.insert(ins_row, tuple( 482 [u('...')] * len(level_lengths))) 483 484 # GH 14882 - Place ... 
in correct level 485 elif inserted: 486 dot_row = list(idx_values[ins_row]) 487 dot_row[inner_lvl - lnum] = u('...') 488 idx_values[ins_row] = tuple(dot_row) 489 level_lengths[lnum] = rec_new 490 491 level_lengths[inner_lvl][ins_row] = 1 492 for ix_col in range(len(fmt_values)): 493 fmt_values[ix_col].insert(ins_row, '...') 494 nrows += 1 495 496 for i in range(nrows): 497 row = [] 498 tags = {} 499 500 sparse_offset = 0 501 j = 0 502 for records, v in zip(level_lengths, idx_values[i]): 503 if i in records: 504 if records[i] > 1: 505 tags[j] = template.format(span=records[i]) 506 else: 507 sparse_offset += 1 508 continue 509 510 j += 1 511 row.append(v) 512 513 row.extend(fmt_values[j][i] for j in range(self.ncols)) 514 if truncate_h: 515 row.insert(row_levels - sparse_offset + 516 self.fmt.tr_col_num, '...') 517 self.write_tr(row, indent, self.indent_delta, tags=tags, 518 nindex_levels=len(levels) - sparse_offset) 519 else: 520 for i in range(len(frame)): 521 idx_values = list(zip(*frame.index.format( 522 sparsify=False, adjoin=False, names=False))) 523 row = [] 524 row.extend(idx_values[i]) 525 row.extend(fmt_values[j][i] for j in range(self.ncols)) 526 if truncate_h: 527 row.insert(row_levels + self.fmt.tr_col_num, '...') 528 self.write_tr(row, indent, self.indent_delta, tags=None, 529 nindex_levels=frame.index.nlevels) 530 [end of pandas/io/formats/html.py] [start of pandas/util/_print_versions.py] 1 import codecs 2 import importlib 3 import locale 4 import os 5 import platform 6 import struct 7 import subprocess 8 import sys 9 10 11 def get_sys_info(): 12 "Returns system information as a dict" 13 14 blob = [] 15 16 # get full commit hash 17 commit = None 18 if os.path.isdir(".git") and os.path.isdir("pandas"): 19 try: 20 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "), 21 stdout=subprocess.PIPE, 22 stderr=subprocess.PIPE) 23 so, serr = pipe.communicate() 24 except (OSError, ValueError): 25 pass 26 else: 27 if pipe.returncode == 0: 28 commit = so 29 try: 30 commit = so.decode('utf-8') 31 except ValueError: 32 pass 33 commit = commit.strip().strip('"') 34 35 blob.append(('commit', commit)) 36 37 try: 38 (sysname, nodename, release, 39 version, machine, processor) = platform.uname() 40 blob.extend([ 41 ("python", '.'.join(map(str, sys.version_info))), 42 ("python-bits", struct.calcsize("P") * 8), 43 ("OS", "{sysname}".format(sysname=sysname)), 44 ("OS-release", "{release}".format(release=release)), 45 # ("Version", "{version}".format(version=version)), 46 ("machine", "{machine}".format(machine=machine)), 47 ("processor", "{processor}".format(processor=processor)), 48 ("byteorder", "{byteorder}".format(byteorder=sys.byteorder)), 49 ("LC_ALL", "{lc}".format(lc=os.environ.get('LC_ALL', "None"))), 50 ("LANG", "{lang}".format(lang=os.environ.get('LANG', "None"))), 51 ("LOCALE", '.'.join(map(str, locale.getlocale()))), 52 ]) 53 except (KeyError, ValueError): 54 pass 55 56 return blob 57 58 59 def show_versions(as_json=False): 60 sys_info = get_sys_info() 61 62 deps = [ 63 # (MODULE_NAME, f(mod) -> mod version) 64 ("pandas", lambda mod: mod.__version__), 65 ("pytest", lambda mod: mod.__version__), 66 ("pip", lambda mod: mod.__version__), 67 ("setuptools", lambda mod: mod.__version__), 68 ("Cython", lambda mod: mod.__version__), 69 ("numpy", lambda mod: mod.version.version), 70 ("scipy", lambda mod: mod.version.version), 71 ("pyarrow", lambda mod: mod.__version__), 72 ("xarray", lambda mod: mod.__version__), 73 ("IPython", lambda mod: mod.__version__), 74 ("sphinx", lambda mod: 
mod.__version__), 75 ("patsy", lambda mod: mod.__version__), 76 ("dateutil", lambda mod: mod.__version__), 77 ("pytz", lambda mod: mod.VERSION), 78 ("blosc", lambda mod: mod.__version__), 79 ("bottleneck", lambda mod: mod.__version__), 80 ("tables", lambda mod: mod.__version__), 81 ("numexpr", lambda mod: mod.__version__), 82 ("feather", lambda mod: mod.__version__), 83 ("matplotlib", lambda mod: mod.__version__), 84 ("openpyxl", lambda mod: mod.__version__), 85 ("xlrd", lambda mod: mod.__VERSION__), 86 ("xlwt", lambda mod: mod.__VERSION__), 87 ("xlsxwriter", lambda mod: mod.__version__), 88 ("lxml.etree", lambda mod: mod.__version__), 89 ("bs4", lambda mod: mod.__version__), 90 ("html5lib", lambda mod: mod.__version__), 91 ("sqlalchemy", lambda mod: mod.__version__), 92 ("pymysql", lambda mod: mod.__version__), 93 ("psycopg2", lambda mod: mod.__version__), 94 ("jinja2", lambda mod: mod.__version__), 95 ("s3fs", lambda mod: mod.__version__), 96 ("fastparquet", lambda mod: mod.__version__), 97 ("pandas_gbq", lambda mod: mod.__version__), 98 ("pandas_datareader", lambda mod: mod.__version__), 99 ("gcsfs", lambda mod: mod.__version__), 100 ] 101 102 deps_blob = list() 103 for (modname, ver_f) in deps: 104 try: 105 if modname in sys.modules: 106 mod = sys.modules[modname] 107 else: 108 mod = importlib.import_module(modname) 109 ver = ver_f(mod) 110 deps_blob.append((modname, ver)) 111 except ImportError: 112 deps_blob.append((modname, None)) 113 114 if (as_json): 115 try: 116 import json 117 except ImportError: 118 import simplejson as json 119 120 j = dict(system=dict(sys_info), dependencies=dict(deps_blob)) 121 122 if as_json is True: 123 print(j) 124 else: 125 with codecs.open(as_json, "wb", encoding='utf8') as f: 126 json.dump(j, f, indent=2) 127 128 else: 129 130 print("\nINSTALLED VERSIONS") 131 print("------------------") 132 133 for k, stat in sys_info: 134 print("{k}: {stat}".format(k=k, stat=stat)) 135 136 print("") 137 for k, stat in deps_blob: 138 print("{k}: {stat}".format(k=k, stat=stat)) 139 140 141 def main(): 142 from optparse import OptionParser 143 parser = OptionParser() 144 parser.add_option("-j", "--json", metavar="FILE", nargs=1, 145 help="Save output as JSON into file, pass in " 146 "'-' to output to stdout") 147 148 (options, args) = parser.parse_args() 149 150 if options.json == "-": 151 options.json = True 152 153 show_versions(as_json=options.json) 154 155 return 0 156 157 158 if __name__ == "__main__": 159 sys.exit(main()) 160 [end of pandas/util/_print_versions.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
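A self-contained way to see the behaviour the issue above describes is to render the frame and inspect the string directly. The snippet only prints and checks the output, so it can be run on any pandas version to compare the reported behaviour with the expected one:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.zeros((2, 2), dtype=int))
df.index.name = "index.name"

html = df.to_html(header=False)
print(html)

# Expected per the report: a <thead> row that still carries the row index name.
# Observed on the affected versions: no <thead> element at all.
print("has thead with index name:", "<thead>" in html and "index.name" in html)
```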
pandas-dev/pandas
e76c90ed99efe6b2b827b10c01d759f85ece8045
(row) Index Name with to_html(header=False) is not displayed #### Code Sample, a copy-pastable example if possible ```python import pandas as pd import numpy as np df = pd.DataFrame(np.zeros((2, 2), dtype=int)) df.index.name = 'index.name' df.to_html(header=False) ``` <table border="1" class="dataframe"> <tbody> <tr> <th>0</th> <td>0</td> <td>0</td> </tr> <tr> <th>1</th> <td>0</td> <td>0</td> </tr> </tbody> </table> ```python import pandas as pd import numpy as np df = pd.DataFrame(np.zeros((2, 2), dtype=int)) df.index = pd.MultiIndex.from_product([['a'], ['b', 'c']], names=[ 'index.name.0', 'index.name.1']) df.to_html(header=False) ``` <table border="1" class="dataframe"> <tbody> <tr> <th rowspan="2" valign="top">a</th> <th>b</th> <td>0</td> <td>0</td> </tr> <tr> <th>c</th> <td>0</td> <td>0</td> </tr> </tbody> </table> #### Problem description `to_html(header=False)` is not displaying the (row) Index names. The `header` parameter should be analogous to the `index` parameter and hide the columns Index only and leave the (row) Index names displayed. to hide the display of the row Index names, the `index_names=False` parameter should be used. the problem is due to an early return in the composition of the HTML header. the `to_html` parameter `header` should not be confused with the HTML header. https://github.com/pandas-dev/pandas/blob/deb7b4d5003b939f47e525bcdaceeea48622a73a/pandas/io/formats/html.py#L196-L201 #### Expected Output <table border="1" class="dataframe"> <thead> <tr> <th>index.name</th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th>0</th> <td>0</td> <td>0</td> </tr> <tr> <th>1</th> <td>0</td> <td>0</td> </tr> </tbody> </table> <table border="1" class="dataframe"> <thead> <tr> <th>index.name.0</th> <th>index.name.1</th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <th rowspan="2" valign="top">a</th> <th>b</th> <td>0</td> <td>0</td> </tr> <tr> <th>c</th> <td>0</td> <td>0</td> </tr> </tbody> </table> #### Output of ``pd.show_versions()`` <details> INSTALLED VERSIONS ------------------ commit: None python: 3.6.5.final.0 python-bits: 64 OS: Windows OS-release: 10 machine: AMD64 processor: Intel64 Family 6 Model 58 Stepping 9, GenuineIntel byteorder: little LC_ALL: None LANG: None LOCALE: None.None pandas: 0.23.0 pytest: 3.5.1 pip: 10.0.1 setuptools: 39.1.0 Cython: 0.28.2 numpy: 1.14.3 scipy: 1.1.0 pyarrow: None xarray: None IPython: 6.4.0 sphinx: 1.7.4 patsy: 0.5.0 dateutil: 2.7.3 pytz: 2018.4 blosc: None bottleneck: 1.2.1 tables: 3.4.3 numexpr: 2.6.5 feather: None matplotlib: 2.2.2 openpyxl: 2.5.3 xlrd: 1.1.0 xlwt: 1.3.0 xlsxwriter: 1.0.4 lxml: 4.2.1 bs4: 4.6.0 html5lib: 1.0.1 sqlalchemy: 1.2.7 pymysql: None psycopg2: None jinja2: 2.10 s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None </details> cc @WillAyd
This isn't just restricted to HTML. It's a bug across the board, even with `to_csv`.
2019-01-02T11:58:05Z
<patch> diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst --- a/doc/source/whatsnew/v0.24.0.rst +++ b/doc/source/whatsnew/v0.24.0.rst @@ -1600,6 +1600,7 @@ Notice how we now instead output ``np.nan`` itself instead of a stringified form - Bug in :func:`to_html()` with ``index=False`` misses truncation indicators (...) on truncated DataFrame (:issue:`15019`, :issue:`22783`) - Bug in :func:`to_html()` with ``index=False`` when both columns and row index are ``MultiIndex`` (:issue:`22579`) - Bug in :func:`to_html()` with ``index_names=False`` displaying index name (:issue:`22747`) +- Bug in :func:`to_html()` with ``header=False`` not displaying row index names (:issue:`23788`) - Bug in :func:`DataFrame.to_string()` that broke column alignment when ``index=False`` and width of first column's values is greater than the width of first column's header (:issue:`16839`, :issue:`13032`) - Bug in :func:`DataFrame.to_string()` that caused representations of :class:`DataFrame` to not take up the whole window (:issue:`22984`) - Bug in :func:`DataFrame.to_csv` where a single level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (:issue:`19589`). diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py --- a/pandas/io/formats/html.py +++ b/pandas/io/formats/html.py @@ -43,6 +43,12 @@ def __init__(self, formatter, classes=None, notebook=False, border=None, self.table_id = table_id self.render_links = render_links + @property + def show_row_idx_names(self): + return all((self.fmt.has_index_names, + self.fmt.index, + self.fmt.show_index_names)) + @property def show_col_idx_names(self): # see gh-22579 @@ -165,9 +171,7 @@ def write_style(self): element_props.append(('thead tr th', 'text-align', 'left')) - if all((self.fmt.has_index_names, - self.fmt.index, - self.fmt.show_index_names)): + if self.show_row_idx_names: element_props.append(('thead tr:last-of-type th', 'text-align', 'right')) @@ -228,17 +232,8 @@ def write_result(self, buf): buffer_put_lines(buf, self.elements) - def _write_header(self, indent): + def _write_col_header(self, indent): truncate_h = self.fmt.truncate_h - - if not self.fmt.header: - # write nothing - return indent - - self.write('<thead>', indent) - - indent += self.indent_delta - if isinstance(self.columns, ABCMultiIndex): template = 'colspan="{span:d}" halign="left"' @@ -357,12 +352,25 @@ def _write_header(self, indent): self.write_tr(row, indent, self.indent_delta, header=True, align=align) - if all((self.fmt.has_index_names, - self.fmt.index, - self.fmt.show_index_names)): - row = ([x if x is not None else '' for x in self.frame.index.names] - + [''] * (self.ncols + (1 if truncate_h else 0))) - self.write_tr(row, indent, self.indent_delta, header=True) + def _write_row_header(self, indent): + truncate_h = self.fmt.truncate_h + row = ([x if x is not None else '' for x in self.frame.index.names] + + [''] * (self.ncols + (1 if truncate_h else 0))) + self.write_tr(row, indent, self.indent_delta, header=True) + + def _write_header(self, indent): + if not (self.fmt.header or self.show_row_idx_names): + # write nothing + return indent + + self.write('<thead>', indent) + indent += self.indent_delta + + if self.fmt.header: + self._write_col_header(indent) + + if self.show_row_idx_names: + self._write_row_header(indent) indent -= self.indent_delta self.write('</thead>', indent) </patch>
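As a quick, illustrative check of what the diff above changes (a sketch added for clarity, not part of the original record, and assuming the patch is applied): with `header=False` the column header row is dropped but the row index name should still be emitted inside a `<thead>`, while `index_names=False` remains the switch that hides it. The DataFrame mirrors the report's first code sample; the assertion strings are an assumption based on the expected output shown there.

```python
import numpy as np
import pandas as pd

# Same frame as the report: two integer columns, a named row index.
df = pd.DataFrame(np.zeros((2, 2), dtype=int))
df.index.name = "index.name"

html = df.to_html(header=False)

# header=False hides the column labels, but the row index name still gets
# its own <thead> row once the fix is in place.
assert "<thead>" in html
assert "index.name" in html

# index_names=False is what actually suppresses the row index name.
assert "index.name" not in df.to_html(header=False, index_names=False)
```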
[]
[]
pandas-dev__pandas-33513
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: maximum of pd.Series([np.nan],dtype=ordered_category) raise - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandas. - [ ] (optional) I have confirmed this bug exists on the master branch of pandas. --- #### Code Sample, a copy-pastable example ```python In [1]: pd.__version__ Out[1]: '1.0.3' In [2]: pd.Series([np.nan],dtype=pd.CategoricalDtype([0,1],ordered=True)).max() --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-2-5a47d189b696> in <module> ----> 1 pd.Series([np.nan],dtype=pd.CategoricalDtype([0,1],ordered=True)).max() ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/generic.py in stat_func(self, axis, skipna, level, numeric_only, **kwargs) 11213 return self._agg_by_level(name, axis=axis, level=level, skipna=skipna) 11214 return self._reduce( > 11215 f, name, axis=axis, skipna=skipna, numeric_only=numeric_only 11216 ) 11217 ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/series.py in _reduce(self, op, name, axis, skipna, numeric_only, filter_type, **kwds) 3870 3871 if isinstance(delegate, Categorical): -> 3872 return delegate._reduce(name, skipna=skipna, **kwds) 3873 elif isinstance(delegate, ExtensionArray): 3874 # dispatch to ExtensionArray interface ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/arrays/categorical.py in _reduce(self, name, axis, **kwargs) 2123 if func is None: 2124 raise TypeError(f"Categorical cannot perform the operation {name}") -> 2125 return func(**kwargs) 2126 2127 @deprecate_kwarg(old_arg_name="numeric_only", new_arg_name="skipna") ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs) 212 else: 213 kwargs[new_arg_name] = new_arg_value --> 214 return func(*args, **kwargs) 215 216 return cast(F, wrapper) ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/arrays/categorical.py in max(self, skipna) 2188 if not good.all(): 2189 if skipna: -> 2190 pointer = self._codes[good].max() 2191 else: 2192 return np.nan ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/numpy/core/_methods.py in _amax(a, axis, out, keepdims, initial, where) 28 def _amax(a, axis=None, out=None, keepdims=False, 29 initial=_NoValue, where=True): ---> 30 return umr_maximum(a, axis, None, out, keepdims, initial, where) 31 32 def _amin(a, axis=None, out=None, keepdims=False, ValueError: zero-size array to reduction operation maximum which has no identity ``` In the older version, the same code didn't raise an error. ```python In [10]: pd.__version__ Out[10]: '0.25.3' In [11]: pd.Series([np.nan],dtype=pd.CategoricalDtype([0,1],ordered=True)).max() ...: Out[11]: nan ``` #### Problem description Because of this behavior, I failed to df.groupby().max() for ordered categories. 
#### Expected Output Expected output should be np.nan #### Output of ``pd.show_versions()`` <details> INSTALLED VERSIONS ------------------ commit : None python : 3.7.4.final.0 python-bits : 64 OS : Darwin OS-release : 18.7.0 machine : x86_64 processor : i386 byteorder : little LC_ALL : None LANG : None LOCALE : en_US.UTF-8 pandas : 1.0.3 numpy : 1.17.3 pytz : 2019.3 dateutil : 2.8.0 pip : 20.0.2 setuptools : 40.8.0 Cython : None pytest : 5.2.1 hypothesis : None sphinx : None blosc : None feather : None xlsxwriter : 1.2.2 lxml.etree : 4.4.2 html5lib : None pymysql : None psycopg2 : None jinja2 : 2.10.3 IPython : 7.8.0 pandas_datareader: None bs4 : None bottleneck : None fastparquet : None gcsfs : None lxml.etree : 4.4.2 matplotlib : 3.1.1 numexpr : None odfpy : None openpyxl : 3.0.0 pandas_gbq : None pyarrow : 0.15.0 pytables : None pytest : 5.2.1 pyxlsb : None s3fs : 0.2.2 scipy : 1.3.1 sqlalchemy : None tables : None tabulate : 0.8.5 xarray : None xlrd : 1.2.0 xlwt : None xlsxwriter : 1.2.2 numba : None </details> </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 [![PyPI Latest Release](https://img.shields.io/pypi/v/pandas.svg)](https://pypi.org/project/pandas/) 9 [![Conda Latest Release](https://anaconda.org/conda-forge/pandas/badges/version.svg)](https://anaconda.org/anaconda/pandas/) 10 [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3509134.svg)](https://doi.org/10.5281/zenodo.3509134) 11 [![Package Status](https://img.shields.io/pypi/status/pandas.svg)](https://pypi.org/project/pandas/) 12 [![License](https://img.shields.io/pypi/l/pandas.svg)](https://github.com/pandas-dev/pandas/blob/master/LICENSE) 13 [![Travis Build Status](https://travis-ci.org/pandas-dev/pandas.svg?branch=master)](https://travis-ci.org/pandas-dev/pandas) 14 [![Azure Build Status](https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master)](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master) 15 [![Coverage](https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master)](https://codecov.io/gh/pandas-dev/pandas) 16 [![Downloads](https://anaconda.org/conda-forge/pandas/badges/downloads.svg)](https://pandas.pydata.org) 17 [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas) 18 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org) 19 20 ## What is it? 21 22 **pandas** is a Python package providing fast, flexible, and expressive data 23 structures designed to make working with "relational" or "labeled" data both 24 easy and intuitive. It aims to be the fundamental high-level building block for 25 doing practical, **real world** data analysis in Python. Additionally, it has 26 the broader goal of becoming **the most powerful and flexible open source data 27 analysis / manipulation tool available in any language**. It is already well on 28 its way towards this goal. 
29 30 ## Main Features 31 Here are just a few of the things that pandas does well: 32 33 - Easy handling of [**missing data**][missing-data] (represented as 34 `NaN`) in floating point as well as non-floating point data 35 - Size mutability: columns can be [**inserted and 36 deleted**][insertion-deletion] from DataFrame and higher dimensional 37 objects 38 - Automatic and explicit [**data alignment**][alignment]: objects can 39 be explicitly aligned to a set of labels, or the user can simply 40 ignore the labels and let `Series`, `DataFrame`, etc. automatically 41 align the data for you in computations 42 - Powerful, flexible [**group by**][groupby] functionality to perform 43 split-apply-combine operations on data sets, for both aggregating 44 and transforming data 45 - Make it [**easy to convert**][conversion] ragged, 46 differently-indexed data in other Python and NumPy data structures 47 into DataFrame objects 48 - Intelligent label-based [**slicing**][slicing], [**fancy 49 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 50 large data sets 51 - Intuitive [**merging**][merging] and [**joining**][joining] data 52 sets 53 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 54 data sets 55 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 56 labels per tick) 57 - Robust IO tools for loading data from [**flat files**][flat-files] 58 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 59 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 60 - [**Time series**][timeseries]-specific functionality: date range 61 generation and frequency conversion, moving window statistics, 62 date shifting and lagging. 63 64 65 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 66 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 67 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 68 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 69 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 70 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 71 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 72 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 73 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 74 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 75 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 76 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 77 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 78 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 79 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 80 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 81 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 82 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 83 84 ## Where to get it 85 The source code is currently hosted on 
GitHub at: 86 https://github.com/pandas-dev/pandas 87 88 Binary installers for the latest released version are available at the [Python 89 package index](https://pypi.org/project/pandas) and on conda. 90 91 ```sh 92 # conda 93 conda install pandas 94 ``` 95 96 ```sh 97 # or PyPI 98 pip install pandas 99 ``` 100 101 ## Dependencies 102 - [NumPy](https://www.numpy.org) 103 - [python-dateutil](https://labix.org/python-dateutil) 104 - [pytz](https://pythonhosted.org/pytz) 105 106 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies. 107 108 ## Installation from sources 109 To install pandas from source you need Cython in addition to the normal 110 dependencies above. Cython can be installed from pypi: 111 112 ```sh 113 pip install cython 114 ``` 115 116 In the `pandas` directory (same one where you found this file after 117 cloning the git repo), execute: 118 119 ```sh 120 python setup.py install 121 ``` 122 123 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 124 125 126 ```sh 127 python -m pip install -e . --no-build-isolation --no-use-pep517 128 ``` 129 130 If you have `make`, you can also use `make develop` to run the same command. 131 132 or alternatively 133 134 ```sh 135 python setup.py develop 136 ``` 137 138 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 139 140 ## License 141 [BSD 3](LICENSE) 142 143 ## Documentation 144 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 145 146 ## Background 147 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 148 has been under active development since then. 149 150 ## Getting Help 151 152 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 153 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 154 155 ## Discussion and Development 156 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 157 158 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 159 160 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 161 162 A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub. 163 164 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. 
There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 165 166 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 167 168 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 169 170 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 171 172 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md) 173 [end of README.md] [start of pandas/core/arrays/numpy_.py] 1 import numbers 2 from typing import Optional, Tuple, Type, Union 3 4 import numpy as np 5 from numpy.lib.mixins import NDArrayOperatorsMixin 6 7 from pandas._libs import lib 8 from pandas.compat.numpy import function as nv 9 from pandas.util._decorators import doc 10 from pandas.util._validators import validate_fillna_kwargs 11 12 from pandas.core.dtypes.dtypes import ExtensionDtype 13 from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries 14 from pandas.core.dtypes.inference import is_array_like 15 from pandas.core.dtypes.missing import isna 16 17 from pandas import compat 18 from pandas.core import nanops 19 from pandas.core.algorithms import searchsorted, take, unique 20 from pandas.core.arrays.base import ExtensionArray, ExtensionOpsMixin 21 from pandas.core.construction import extract_array 22 from pandas.core.indexers import check_array_indexer 23 from pandas.core.missing import backfill_1d, pad_1d 24 25 26 class PandasDtype(ExtensionDtype): 27 """ 28 A Pandas ExtensionDtype for NumPy dtypes. 29 30 .. versionadded:: 0.24.0 31 32 This is mostly for internal compatibility, and is not especially 33 useful on its own. 34 35 Parameters 36 ---------- 37 dtype : object 38 Object to be converted to a NumPy data type object. 39 40 See Also 41 -------- 42 numpy.dtype 43 """ 44 45 _metadata = ("_dtype",) 46 47 def __init__(self, dtype: object): 48 self._dtype = np.dtype(dtype) 49 50 def __repr__(self) -> str: 51 return f"PandasDtype({repr(self.name)})" 52 53 @property 54 def numpy_dtype(self) -> np.dtype: 55 """ 56 The NumPy dtype this PandasDtype wraps. 57 """ 58 return self._dtype 59 60 @property 61 def name(self) -> str: 62 """ 63 A bit-width name for this data-type. 64 """ 65 return self._dtype.name 66 67 @property 68 def type(self) -> Type[np.generic]: 69 """ 70 The type object used to instantiate a scalar of this NumPy data-type. 71 """ 72 return self._dtype.type 73 74 @property 75 def _is_numeric(self) -> bool: 76 # exclude object, str, unicode, void. 
77 return self.kind in set("biufc") 78 79 @property 80 def _is_boolean(self) -> bool: 81 return self.kind == "b" 82 83 @classmethod 84 def construct_from_string(cls, string: str) -> "PandasDtype": 85 try: 86 dtype = np.dtype(string) 87 except TypeError as err: 88 if not isinstance(string, str): 89 msg = f"'construct_from_string' expects a string, got {type(string)}" 90 else: 91 msg = f"Cannot construct a 'PandasDtype' from '{string}'" 92 raise TypeError(msg) from err 93 return cls(dtype) 94 95 @classmethod 96 def construct_array_type(cls) -> Type["PandasArray"]: 97 """ 98 Return the array type associated with this dtype. 99 100 Returns 101 ------- 102 type 103 """ 104 return PandasArray 105 106 @property 107 def kind(self) -> str: 108 """ 109 A character code (one of 'biufcmMOSUV') identifying the general kind of data. 110 """ 111 return self._dtype.kind 112 113 @property 114 def itemsize(self) -> int: 115 """ 116 The element size of this data-type object. 117 """ 118 return self._dtype.itemsize 119 120 121 class PandasArray(ExtensionArray, ExtensionOpsMixin, NDArrayOperatorsMixin): 122 """ 123 A pandas ExtensionArray for NumPy data. 124 125 .. versionadded:: 0.24.0 126 127 This is mostly for internal compatibility, and is not especially 128 useful on its own. 129 130 Parameters 131 ---------- 132 values : ndarray 133 The NumPy ndarray to wrap. Must be 1-dimensional. 134 copy : bool, default False 135 Whether to copy `values`. 136 137 Attributes 138 ---------- 139 None 140 141 Methods 142 ------- 143 None 144 """ 145 146 # If you're wondering why pd.Series(cls) doesn't put the array in an 147 # ExtensionBlock, search for `ABCPandasArray`. We check for 148 # that _typ to ensure that that users don't unnecessarily use EAs inside 149 # pandas internals, which turns off things like block consolidation. 
150 _typ = "npy_extension" 151 __array_priority__ = 1000 152 _ndarray: np.ndarray 153 154 # ------------------------------------------------------------------------ 155 # Constructors 156 157 def __init__(self, values: Union[np.ndarray, "PandasArray"], copy: bool = False): 158 if isinstance(values, type(self)): 159 values = values._ndarray 160 if not isinstance(values, np.ndarray): 161 raise ValueError( 162 f"'values' must be a NumPy array, not {type(values).__name__}" 163 ) 164 165 if values.ndim != 1: 166 raise ValueError("PandasArray must be 1-dimensional.") 167 168 if copy: 169 values = values.copy() 170 171 self._ndarray = values 172 self._dtype = PandasDtype(values.dtype) 173 174 @classmethod 175 def _from_sequence(cls, scalars, dtype=None, copy: bool = False) -> "PandasArray": 176 if isinstance(dtype, PandasDtype): 177 dtype = dtype._dtype 178 179 result = np.asarray(scalars, dtype=dtype) 180 if copy and result is scalars: 181 result = result.copy() 182 return cls(result) 183 184 @classmethod 185 def _from_factorized(cls, values, original) -> "PandasArray": 186 return cls(values) 187 188 @classmethod 189 def _concat_same_type(cls, to_concat) -> "PandasArray": 190 return cls(np.concatenate(to_concat)) 191 192 # ------------------------------------------------------------------------ 193 # Data 194 195 @property 196 def dtype(self) -> PandasDtype: 197 return self._dtype 198 199 # ------------------------------------------------------------------------ 200 # NumPy Array Interface 201 202 def __array__(self, dtype=None) -> np.ndarray: 203 return np.asarray(self._ndarray, dtype=dtype) 204 205 _HANDLED_TYPES = (np.ndarray, numbers.Number) 206 207 def __array_ufunc__(self, ufunc, method: str, *inputs, **kwargs): 208 # Lightly modified version of 209 # https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/\ 210 # numpy.lib.mixins.NDArrayOperatorsMixin.html 211 # The primary modification is not boxing scalar return values 212 # in PandasArray, since pandas' ExtensionArrays are 1-d. 213 out = kwargs.get("out", ()) 214 for x in inputs + out: 215 # Only support operations with instances of _HANDLED_TYPES. 216 # Use PandasArray instead of type(self) for isinstance to 217 # allow subclasses that don't override __array_ufunc__ to 218 # handle PandasArray objects. 219 if not isinstance(x, self._HANDLED_TYPES + (PandasArray,)): 220 return NotImplemented 221 222 # Defer to the implementation of the ufunc on unwrapped values. 
223 inputs = tuple(x._ndarray if isinstance(x, PandasArray) else x for x in inputs) 224 if out: 225 kwargs["out"] = tuple( 226 x._ndarray if isinstance(x, PandasArray) else x for x in out 227 ) 228 result = getattr(ufunc, method)(*inputs, **kwargs) 229 230 if type(result) is tuple and len(result): 231 # multiple return values 232 if not lib.is_scalar(result[0]): 233 # re-box array-like results 234 return tuple(type(self)(x) for x in result) 235 else: 236 # but not scalar reductions 237 return result 238 elif method == "at": 239 # no return value 240 return None 241 else: 242 # one return value 243 if not lib.is_scalar(result): 244 # re-box array-like results, but not scalar reductions 245 result = type(self)(result) 246 return result 247 248 # ------------------------------------------------------------------------ 249 # Pandas ExtensionArray Interface 250 251 def __getitem__(self, item): 252 if isinstance(item, type(self)): 253 item = item._ndarray 254 255 item = check_array_indexer(self, item) 256 257 result = self._ndarray[item] 258 if not lib.is_scalar(item): 259 result = type(self)(result) 260 return result 261 262 def __setitem__(self, key, value) -> None: 263 value = extract_array(value, extract_numpy=True) 264 265 key = check_array_indexer(self, key) 266 scalar_value = lib.is_scalar(value) 267 268 if not scalar_value: 269 value = np.asarray(value, dtype=self._ndarray.dtype) 270 271 self._ndarray[key] = value 272 273 def __len__(self) -> int: 274 return len(self._ndarray) 275 276 @property 277 def nbytes(self) -> int: 278 return self._ndarray.nbytes 279 280 def isna(self) -> np.ndarray: 281 return isna(self._ndarray) 282 283 def fillna( 284 self, value=None, method: Optional[str] = None, limit: Optional[int] = None, 285 ) -> "PandasArray": 286 # TODO(_values_for_fillna): remove this 287 value, method = validate_fillna_kwargs(value, method) 288 289 mask = self.isna() 290 291 if is_array_like(value): 292 if len(value) != len(self): 293 raise ValueError( 294 f"Length of 'value' does not match. 
Got ({len(value)}) " 295 f" expected {len(self)}" 296 ) 297 value = value[mask] 298 299 if mask.any(): 300 if method is not None: 301 func = pad_1d if method == "pad" else backfill_1d 302 new_values = func(self._ndarray, limit=limit, mask=mask) 303 new_values = self._from_sequence(new_values, dtype=self.dtype) 304 else: 305 # fill with value 306 new_values = self.copy() 307 new_values[mask] = value 308 else: 309 new_values = self.copy() 310 return new_values 311 312 def take(self, indices, allow_fill=False, fill_value=None) -> "PandasArray": 313 if fill_value is None: 314 # Primarily for subclasses 315 fill_value = self.dtype.na_value 316 result = take( 317 self._ndarray, indices, allow_fill=allow_fill, fill_value=fill_value 318 ) 319 return type(self)(result) 320 321 def copy(self) -> "PandasArray": 322 return type(self)(self._ndarray.copy()) 323 324 def _values_for_argsort(self) -> np.ndarray: 325 return self._ndarray 326 327 def _values_for_factorize(self) -> Tuple[np.ndarray, int]: 328 return self._ndarray, -1 329 330 def unique(self) -> "PandasArray": 331 return type(self)(unique(self._ndarray)) 332 333 # ------------------------------------------------------------------------ 334 # Reductions 335 336 def _reduce(self, name, skipna=True, **kwargs): 337 meth = getattr(self, name, None) 338 if meth: 339 return meth(skipna=skipna, **kwargs) 340 else: 341 msg = f"'{type(self).__name__}' does not implement reduction '{name}'" 342 raise TypeError(msg) 343 344 def any(self, axis=None, out=None, keepdims=False, skipna=True): 345 nv.validate_any((), dict(out=out, keepdims=keepdims)) 346 return nanops.nanany(self._ndarray, axis=axis, skipna=skipna) 347 348 def all(self, axis=None, out=None, keepdims=False, skipna=True): 349 nv.validate_all((), dict(out=out, keepdims=keepdims)) 350 return nanops.nanall(self._ndarray, axis=axis, skipna=skipna) 351 352 def min(self, axis=None, out=None, keepdims=False, skipna=True): 353 nv.validate_min((), dict(out=out, keepdims=keepdims)) 354 return nanops.nanmin(self._ndarray, axis=axis, skipna=skipna) 355 356 def max(self, axis=None, out=None, keepdims=False, skipna=True): 357 nv.validate_max((), dict(out=out, keepdims=keepdims)) 358 return nanops.nanmax(self._ndarray, axis=axis, skipna=skipna) 359 360 def sum( 361 self, 362 axis=None, 363 dtype=None, 364 out=None, 365 keepdims=False, 366 initial=None, 367 skipna=True, 368 min_count=0, 369 ): 370 nv.validate_sum( 371 (), dict(dtype=dtype, out=out, keepdims=keepdims, initial=initial) 372 ) 373 return nanops.nansum( 374 self._ndarray, axis=axis, skipna=skipna, min_count=min_count 375 ) 376 377 def prod( 378 self, 379 axis=None, 380 dtype=None, 381 out=None, 382 keepdims=False, 383 initial=None, 384 skipna=True, 385 min_count=0, 386 ): 387 nv.validate_prod( 388 (), dict(dtype=dtype, out=out, keepdims=keepdims, initial=initial) 389 ) 390 return nanops.nanprod( 391 self._ndarray, axis=axis, skipna=skipna, min_count=min_count 392 ) 393 394 def mean(self, axis=None, dtype=None, out=None, keepdims=False, skipna=True): 395 nv.validate_mean((), dict(dtype=dtype, out=out, keepdims=keepdims)) 396 return nanops.nanmean(self._ndarray, axis=axis, skipna=skipna) 397 398 def median( 399 self, axis=None, out=None, overwrite_input=False, keepdims=False, skipna=True 400 ): 401 nv.validate_median( 402 (), dict(out=out, overwrite_input=overwrite_input, keepdims=keepdims) 403 ) 404 return nanops.nanmedian(self._ndarray, axis=axis, skipna=skipna) 405 406 def std(self, axis=None, dtype=None, out=None, ddof=1, keepdims=False, 
skipna=True): 407 nv.validate_stat_ddof_func( 408 (), dict(dtype=dtype, out=out, keepdims=keepdims), fname="std" 409 ) 410 return nanops.nanstd(self._ndarray, axis=axis, skipna=skipna, ddof=ddof) 411 412 def var(self, axis=None, dtype=None, out=None, ddof=1, keepdims=False, skipna=True): 413 nv.validate_stat_ddof_func( 414 (), dict(dtype=dtype, out=out, keepdims=keepdims), fname="var" 415 ) 416 return nanops.nanvar(self._ndarray, axis=axis, skipna=skipna, ddof=ddof) 417 418 def sem(self, axis=None, dtype=None, out=None, ddof=1, keepdims=False, skipna=True): 419 nv.validate_stat_ddof_func( 420 (), dict(dtype=dtype, out=out, keepdims=keepdims), fname="sem" 421 ) 422 return nanops.nansem(self._ndarray, axis=axis, skipna=skipna, ddof=ddof) 423 424 def kurt(self, axis=None, dtype=None, out=None, keepdims=False, skipna=True): 425 nv.validate_stat_ddof_func( 426 (), dict(dtype=dtype, out=out, keepdims=keepdims), fname="kurt" 427 ) 428 return nanops.nankurt(self._ndarray, axis=axis, skipna=skipna) 429 430 def skew(self, axis=None, dtype=None, out=None, keepdims=False, skipna=True): 431 nv.validate_stat_ddof_func( 432 (), dict(dtype=dtype, out=out, keepdims=keepdims), fname="skew" 433 ) 434 return nanops.nanskew(self._ndarray, axis=axis, skipna=skipna) 435 436 # ------------------------------------------------------------------------ 437 # Additional Methods 438 439 def to_numpy( 440 self, dtype=None, copy: bool = False, na_value=lib.no_default 441 ) -> np.ndarray: 442 result = np.asarray(self._ndarray, dtype=dtype) 443 444 if (copy or na_value is not lib.no_default) and result is self._ndarray: 445 result = result.copy() 446 447 if na_value is not lib.no_default: 448 result[self.isna()] = na_value 449 450 return result 451 452 @doc(ExtensionArray.searchsorted) 453 def searchsorted(self, value, side="left", sorter=None): 454 return searchsorted(self.to_numpy(), value, side=side, sorter=sorter) 455 456 # ------------------------------------------------------------------------ 457 # Ops 458 459 def __invert__(self): 460 return type(self)(~self._ndarray) 461 462 @classmethod 463 def _create_arithmetic_method(cls, op): 464 def arithmetic_method(self, other): 465 if isinstance(other, (ABCIndexClass, ABCSeries)): 466 return NotImplemented 467 468 elif isinstance(other, cls): 469 other = other._ndarray 470 471 with np.errstate(all="ignore"): 472 result = op(self._ndarray, other) 473 474 if op is divmod: 475 a, b = result 476 return cls(a), cls(b) 477 478 return cls(result) 479 480 return compat.set_function_name(arithmetic_method, f"__{op.__name__}__", cls) 481 482 _create_comparison_method = _create_arithmetic_method 483 484 485 PandasArray._add_arithmetic_ops() 486 PandasArray._add_comparison_ops() 487 [end of pandas/core/arrays/numpy_.py] [start of pandas/util/_print_versions.py] 1 import codecs 2 import json 3 import locale 4 import os 5 import platform 6 import struct 7 import sys 8 from typing import Dict, Optional, Union 9 10 from pandas._typing import JSONSerializable 11 from pandas.compat._optional import VERSIONS, _get_version, import_optional_dependency 12 13 14 def _get_commit_hash() -> Optional[str]: 15 """ 16 Use vendored versioneer code to get git hash, which handles 17 git worktree correctly. 18 """ 19 from pandas._version import get_versions 20 21 versions = get_versions() 22 return versions["full-revisionid"] 23 24 25 def _get_sys_info() -> Dict[str, JSONSerializable]: 26 """ 27 Returns system information as a JSON serializable dictionary. 
28 """ 29 uname_result = platform.uname() 30 language_code, encoding = locale.getlocale() 31 return { 32 "commit": _get_commit_hash(), 33 "python": ".".join(str(i) for i in sys.version_info), 34 "python-bits": struct.calcsize("P") * 8, 35 "OS": uname_result.system, 36 "OS-release": uname_result.release, 37 "Version": uname_result.version, 38 "machine": uname_result.machine, 39 "processor": uname_result.processor, 40 "byteorder": sys.byteorder, 41 "LC_ALL": os.environ.get("LC_ALL"), 42 "LANG": os.environ.get("LANG"), 43 "LOCALE": {"language-code": language_code, "encoding": encoding}, 44 } 45 46 47 def _get_dependency_info() -> Dict[str, JSONSerializable]: 48 """ 49 Returns dependency information as a JSON serializable dictionary. 50 """ 51 deps = [ 52 "pandas", 53 # required 54 "numpy", 55 "pytz", 56 "dateutil", 57 # install / build, 58 "pip", 59 "setuptools", 60 "Cython", 61 # test 62 "pytest", 63 "hypothesis", 64 # docs 65 "sphinx", 66 # Other, need a min version 67 "blosc", 68 "feather", 69 "xlsxwriter", 70 "lxml.etree", 71 "html5lib", 72 "pymysql", 73 "psycopg2", 74 "jinja2", 75 # Other, not imported. 76 "IPython", 77 "pandas_datareader", 78 ] 79 deps.extend(list(VERSIONS)) 80 81 result: Dict[str, JSONSerializable] = {} 82 for modname in deps: 83 mod = import_optional_dependency( 84 modname, raise_on_missing=False, on_version="ignore" 85 ) 86 result[modname] = _get_version(mod) if mod else None 87 return result 88 89 90 def show_versions(as_json: Union[str, bool] = False) -> None: 91 """ 92 Provide useful information, important for bug reports. 93 94 It comprises info about hosting operation system, pandas version, 95 and versions of other installed relative packages. 96 97 Parameters 98 ---------- 99 as_json : str or bool, default False 100 * If False, outputs info in a human readable form to the console. 101 * If str, it will be considered as a path to a file. 102 Info will be written to that file in JSON format. 103 * If True, outputs info in JSON format to the console. 104 """ 105 sys_info = _get_sys_info() 106 deps = _get_dependency_info() 107 108 if as_json: 109 j = dict(system=sys_info, dependencies=deps) 110 111 if as_json is True: 112 print(j) 113 else: 114 assert isinstance(as_json, str) # needed for mypy 115 with codecs.open(as_json, "wb", encoding="utf8") as f: 116 json.dump(j, f, indent=2) 117 118 else: 119 assert isinstance(sys_info["LOCALE"], dict) # needed for mypy 120 language_code = sys_info["LOCALE"]["language-code"] 121 encoding = sys_info["LOCALE"]["encoding"] 122 sys_info["LOCALE"] = f"{language_code}.{encoding}" 123 124 maxlen = max(len(x) for x in deps) 125 print("\nINSTALLED VERSIONS") 126 print("------------------") 127 for k, v in sys_info.items(): 128 print(f"{k:<{maxlen}}: {v}") 129 print("") 130 for k, v in deps.items(): 131 print(f"{k:<{maxlen}}: {v}") 132 133 134 def main() -> int: 135 from optparse import OptionParser 136 137 parser = OptionParser() 138 parser.add_option( 139 "-j", 140 "--json", 141 metavar="FILE", 142 nargs=1, 143 help="Save output as JSON into file, pass in '-' to output to stdout", 144 ) 145 146 (options, args) = parser.parse_args() 147 148 if options.json == "-": 149 options.json = True 150 151 show_versions(as_json=options.json) 152 153 return 0 154 155 156 if __name__ == "__main__": 157 sys.exit(main()) 158 [end of pandas/util/_print_versions.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. 
Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
13dc13f12c0fca943979cde065b7484bb0e40d66
BUG: maximum of pd.Series([np.nan],dtype=ordered_category) raise - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandas. - [ ] (optional) I have confirmed this bug exists on the master branch of pandas. --- #### Code Sample, a copy-pastable example ```python In [1]: pd.__version__ Out[1]: '1.0.3' In [2]: pd.Series([np.nan],dtype=pd.CategoricalDtype([0,1],ordered=True)).max() --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-2-5a47d189b696> in <module> ----> 1 pd.Series([np.nan],dtype=pd.CategoricalDtype([0,1],ordered=True)).max() ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/generic.py in stat_func(self, axis, skipna, level, numeric_only, **kwargs) 11213 return self._agg_by_level(name, axis=axis, level=level, skipna=skipna) 11214 return self._reduce( > 11215 f, name, axis=axis, skipna=skipna, numeric_only=numeric_only 11216 ) 11217 ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/series.py in _reduce(self, op, name, axis, skipna, numeric_only, filter_type, **kwds) 3870 3871 if isinstance(delegate, Categorical): -> 3872 return delegate._reduce(name, skipna=skipna, **kwds) 3873 elif isinstance(delegate, ExtensionArray): 3874 # dispatch to ExtensionArray interface ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/arrays/categorical.py in _reduce(self, name, axis, **kwargs) 2123 if func is None: 2124 raise TypeError(f"Categorical cannot perform the operation {name}") -> 2125 return func(**kwargs) 2126 2127 @deprecate_kwarg(old_arg_name="numeric_only", new_arg_name="skipna") ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs) 212 else: 213 kwargs[new_arg_name] = new_arg_value --> 214 return func(*args, **kwargs) 215 216 return cast(F, wrapper) ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/pandas/core/arrays/categorical.py in max(self, skipna) 2188 if not good.all(): 2189 if skipna: -> 2190 pointer = self._codes[good].max() 2191 else: 2192 return np.nan ~/.pyenv/versions/3.7.4/lib/python3.7/site-packages/numpy/core/_methods.py in _amax(a, axis, out, keepdims, initial, where) 28 def _amax(a, axis=None, out=None, keepdims=False, 29 initial=_NoValue, where=True): ---> 30 return umr_maximum(a, axis, None, out, keepdims, initial, where) 31 32 def _amin(a, axis=None, out=None, keepdims=False, ValueError: zero-size array to reduction operation maximum which has no identity ``` In the older version, the same code didn't raise an error. ```python In [10]: pd.__version__ Out[10]: '0.25.3' In [11]: pd.Series([np.nan],dtype=pd.CategoricalDtype([0,1],ordered=True)).max() ...: Out[11]: nan ``` #### Problem description Because of this behavior, I failed to df.groupby().max() for ordered categories. 
#### Expected Output Expected output should be np.nan #### Output of ``pd.show_versions()`` <details> INSTALLED VERSIONS ------------------ commit : None python : 3.7.4.final.0 python-bits : 64 OS : Darwin OS-release : 18.7.0 machine : x86_64 processor : i386 byteorder : little LC_ALL : None LANG : None LOCALE : en_US.UTF-8 pandas : 1.0.3 numpy : 1.17.3 pytz : 2019.3 dateutil : 2.8.0 pip : 20.0.2 setuptools : 40.8.0 Cython : None pytest : 5.2.1 hypothesis : None sphinx : None blosc : None feather : None xlsxwriter : 1.2.2 lxml.etree : 4.4.2 html5lib : None pymysql : None psycopg2 : None jinja2 : 2.10.3 IPython : 7.8.0 pandas_datareader: None bs4 : None bottleneck : None fastparquet : None gcsfs : None lxml.etree : 4.4.2 matplotlib : 3.1.1 numexpr : None odfpy : None openpyxl : 3.0.0 pandas_gbq : None pyarrow : 0.15.0 pytables : None pytest : 5.2.1 pyxlsb : None s3fs : 0.2.2 scipy : 1.3.1 sqlalchemy : None tables : None tabulate : 0.8.5 xarray : None xlrd : 1.2.0 xlwt : None xlsxwriter : 1.2.2 numba : None </details>
Thanks, looks like the bug is on master as well
2020-04-13T03:40:56Z
<patch> diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst --- a/doc/source/whatsnew/v1.1.0.rst +++ b/doc/source/whatsnew/v1.1.0.rst @@ -397,6 +397,7 @@ Categorical - Bug where :class:`Categorical` comparison operator ``__ne__`` would incorrectly evaluate to ``False`` when either element was missing (:issue:`32276`) - :meth:`Categorical.fillna` now accepts :class:`Categorical` ``other`` argument (:issue:`32420`) - Bug where :meth:`Categorical.replace` would replace with ``NaN`` whenever the new value and replacement value were equal (:issue:`33288`) +- Bug where an ordered :class:`Categorical` containing only ``NaN`` values would raise rather than returning ``NaN`` when taking the minimum or maximum (:issue:`33450`) Datetimelike ^^^^^^^^^^^^ diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py --- a/pandas/core/arrays/categorical.py +++ b/pandas/core/arrays/categorical.py @@ -2143,7 +2143,7 @@ def min(self, skipna=True): good = self._codes != -1 if not good.all(): - if skipna: + if skipna and good.any(): pointer = self._codes[good].min() else: return np.nan @@ -2178,7 +2178,7 @@ def max(self, skipna=True): good = self._codes != -1 if not good.all(): - if skipna: + if skipna and good.any(): pointer = self._codes[good].max() else: return np.nan </patch>
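To make the failure mode concrete (an illustrative sketch, not part of the original record): a `Categorical` stores missing values as code `-1`, so an all-NaN series leaves nothing once the `good` mask is applied, and NumPy refuses to reduce a zero-size array. The `good.any()` guard in the diff skips that reduction and lets the all-NaN case fall through to returning NaN, matching the 0.25 behaviour quoted in the report. The snippet below assumes the patch is applied.

```python
import numpy as np
import pandas as pd

codes = np.array([-1], dtype=np.int8)  # how a single NaN is encoded internally
good = codes != -1                     # mask of non-missing codes; all False here

# Root cause: reducing an empty selection has no identity, so NumPy raises.
try:
    codes[good].max()
except ValueError as exc:
    print(exc)  # "zero-size array to reduction operation maximum which has no identity"

# With the guard ("skipna and good.any()") the empty reduction is skipped
# and the all-NaN ordered categorical reduces to NaN again.
s = pd.Series([np.nan], dtype=pd.CategoricalDtype([0, 1], ordered=True))
print(s.max())  # nan
```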
[]
[]
mesonbuild__meson-9295
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Python extension_module does not install in site_packages In the *python* module documentation is claimed that `extension_module` install per default in `site_packages`: > `subdir`: By default, meson will install the extension module in the relevant top-level location for the python installation, eg `/usr/lib/site-packages`. When subdir is passed to this method, it will be appended to that location. This keyword argument is mutually exclusive with `install_dir` However, this does not work. When set `install: true` the modules are installed to `$prefix/lib64` on my system. One workaround is to specify the target like: ``` py3_mod = import('python') py3_inst = py3_mod.find_installation('python3') foo = py3_inst.extension_module('foo', <source_target>, install: true, subdir: get_option('prefix') + py3_inst.get_path('purelib'), dependencies: [py3_inst.dependency()] ) ``` It would be nice, if the subdir is set autotically to the `site-packages` folder. This bug is btw similar to #2859. However, this talks about the now deprecated *python3* module. </issue> <code> [start of README.md] 1 <p align="center"> 2 <img src="https://mesonbuild.com/assets/images/meson_logo.png"> 3 </p> 4 Meson® is a project to create the best possible next-generation 5 build system. 6 7 #### Status 8 9 [![PyPI](https://img.shields.io/pypi/v/meson.svg)](https://pypi.python.org/pypi/meson) 10 [![Build Status](https://dev.azure.com/jussi0947/jussi/_apis/build/status/mesonbuild.meson)](https://dev.azure.com/jussi0947/jussi/_build/latest?definitionId=1) 11 [![Codecov](https://codecov.io/gh/mesonbuild/meson/coverage.svg?branch=master)](https://codecov.io/gh/mesonbuild/meson/branch/master) 12 [![Code Quality: Python](https://img.shields.io/lgtm/grade/python/g/mesonbuild/meson.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/mesonbuild/meson/context:python) 13 [![Total Alerts](https://img.shields.io/lgtm/alerts/g/mesonbuild/meson.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/mesonbuild/meson/alerts) 14 15 #### Dependencies 16 17 - [Python](https://python.org) (version 3.6 or newer) 18 - [Ninja](https://ninja-build.org) (version 1.8.2 or newer) 19 20 #### Installing from source 21 22 Meson is available on [PyPi](https://pypi.python.org/pypi/meson), so 23 it can be installed with `pip3 install meson`. The exact command to 24 type to install with `pip` can vary between systems, be sure to use 25 the Python 3 version of `pip`. 26 27 If you wish you can install it locally with the standard Python command: 28 29 ```console 30 python3 -m pip install meson 31 ``` 32 33 For builds using Ninja, Ninja can be downloaded directly from Ninja 34 [GitHub release page](https://github.com/ninja-build/ninja/releases) 35 or via [PyPi](https://pypi.python.org/pypi/ninja) 36 37 ```console 38 python3 -m pip install ninja 39 ``` 40 41 More on Installing Meson build can be found at the 42 [getting meson page](https://mesonbuild.com/Getting-meson.html). 43 44 #### Running 45 46 Meson requires that you have a source directory and a build directory 47 and that these two are different. In your source root must exist a 48 file called `meson.build`. To generate the build system run this 49 command: 50 51 `meson setup <source directory> <build directory>` 52 53 Depending on how you obtained Meson the command might also be called 54 `meson.py` instead of plain `meson`. 
In the rest of this document we 55 are going to use the latter form. 56 57 You can omit either of the two directories, and Meson will substitute 58 the current directory and autodetect what you mean. This allows you to 59 do things like this: 60 61 ```console 62 cd <source root> 63 meson setup builddir 64 ``` 65 66 To compile, cd into your build directory and type `ninja`. To run unit 67 tests, type `ninja test`. 68 69 More on running Meson build system commands can be found at the 70 [running meson page](https://mesonbuild.com/Running-Meson.html) 71 or by typing `meson --help`. 72 73 #### Contributing 74 75 We love code contributions. See the [contribution 76 page](https://mesonbuild.com/Contributing.html) on the website for 77 details. 78 79 80 #### IRC 81 82 The channel to use is `#mesonbuild` either via Matrix ([web 83 interface][matrix_web]) or [OFTC IRC][oftc_irc]. 84 85 [matrix_web]: https://app.element.io/#/room/#mesonbuild:matrix.org 86 [oftc_irc]: https://www.oftc.net/ 87 88 #### Further info 89 90 More information about the Meson build system can be found at the 91 [project's home page](https://mesonbuild.com). 92 93 Meson is a registered trademark of ***Jussi Pakkanen***. 94 [end of README.md] [start of mesonbuild/dependencies/__init__.py] 1 # Copyright 2017 The Meson development team 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 7 # http://www.apache.org/licenses/LICENSE-2.0 8 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 
14 15 from .boost import BoostDependency 16 from .cuda import CudaDependency 17 from .hdf5 import hdf5_factory 18 from .base import Dependency, InternalDependency, ExternalDependency, NotFoundDependency 19 from .base import ( 20 ExternalLibrary, DependencyException, DependencyMethods, 21 BuiltinDependency, SystemDependency) 22 from .cmake import CMakeDependency 23 from .configtool import ConfigToolDependency 24 from .dub import DubDependency 25 from .framework import ExtraFrameworkDependency 26 from .pkgconfig import PkgConfigDependency 27 from .factory import DependencyFactory 28 from .detect import find_external_dependency, get_dep_identifier, packages, _packages_accept_language 29 from .dev import ( 30 ValgrindDependency, JDKSystemDependency, gmock_factory, gtest_factory, 31 llvm_factory, zlib_factory) 32 from .coarrays import coarray_factory 33 from .mpi import mpi_factory 34 from .scalapack import scalapack_factory 35 from .misc import ( 36 BlocksDependency, OpenMPDependency, cups_factory, curses_factory, gpgme_factory, 37 libgcrypt_factory, libwmf_factory, netcdf_factory, pcap_factory, python3_factory, 38 shaderc_factory, threads_factory, ThreadDependency, intl_factory, 39 ) 40 from .platform import AppleFrameworks 41 from .qt import qt4_factory, qt5_factory, qt6_factory 42 from .ui import GnuStepDependency, WxDependency, gl_factory, sdl2_factory, vulkan_factory 43 44 __all__ = [ 45 'Dependency', 46 'InternalDependency', 47 'ExternalDependency', 48 'SystemDependency', 49 'BuiltinDependency', 50 'NotFoundDependency', 51 'ExternalLibrary', 52 'DependencyException', 53 'DependencyMethods', 54 55 'CMakeDependency', 56 'ConfigToolDependency', 57 'DubDependency', 58 'ExtraFrameworkDependency', 59 'PkgConfigDependency', 60 61 'DependencyFactory', 62 63 'ThreadDependency', 64 65 'find_external_dependency', 66 'get_dep_identifier', 67 ] 68 69 """Dependency representations and discovery logic. 70 71 Meson attempts to largely abstract away dependency discovery information, and 72 to encapsulate that logic itself so that the DSL doesn't have too much direct 73 information. There are some cases where this is impossible/undesirable, such 74 as the `get_variable()` method. 75 76 Meson has four primary dependency types: 77 1. pkg-config 78 2. apple frameworks 79 3. CMake 80 4. system 81 82 Plus a few more niche ones. 83 84 When a user calls `dependency('foo')` Meson creates a list of candidates, and 85 tries those candidates in order to find one that matches the criteria 86 provided by the user (such as version requirements, or optional components 87 that are required.) 88 89 Except to work around bugs or handle odd corner cases, pkg-config and CMake 90 generally just work™, though there are exceptions. Most of this package is 91 concerned with dependencies that don't (always) provide CMake and/or 92 pkg-config files. 93 94 For these cases one needs to write a `system` dependency. These dependencies 95 descend directly from `ExternalDependency`, in their constructor they 96 manually set up the necessary link and compile args (and additional 97 dependencies as necessary). 98 99 For example, imagine a dependency called Foo, it uses an environment variable 100 called `$FOO_ROOT` to point to its install root, which looks like this: 101 ```txt 102 $FOOROOT 103 → include/ 104 → lib/ 105 ``` 106 To use Foo, you need its include directory, and you need to link to 107 `lib/libfoo.ext`. 
108 109 You could write code that looks like: 110 111 ```python 112 class FooSystemDependency(ExternalDependency): 113 114 def __init__(self, name: str, environment: 'Environment', kwargs: T.Dict[str, T.Any]): 115 super().__init__(name, environment, kwargs) 116 root = os.environ.get('FOO_ROOT') 117 if root is None: 118 mlog.debug('$FOO_ROOT is unset.') 119 self.is_found = False 120 return 121 122 lib = self.clib_compiler.find_library('foo', environment, [os.path.join(root, 'lib')]) 123 if lib is None: 124 mlog.debug('Could not find lib.') 125 self.is_found = False 126 return 127 128 self.compile_args.append(f'-I{os.path.join(root, "include")}') 129 self.link_args.append(lib) 130 self.is_found = True 131 ``` 132 133 This code will look for `FOO_ROOT` in the environment, handle `FOO_ROOT` being 134 undefined gracefully, then set its `compile_args` and `link_args` gracefully. 135 It will also gracefully handle not finding the required lib (hopefully that 136 doesn't happen, but it could if, for example, the lib is only static and 137 shared linking is requested). 138 139 There are a couple of things about this that still aren't ideal. For one, we 140 don't want to be reading random environment variables at this point. Those 141 should actually be added to `envconfig.Properties` and read in 142 `environment.Environment._set_default_properties_from_env` (see how 143 `BOOST_ROOT` is handled). We can also handle the `static` keyword. So 144 now that becomes: 145 146 ```python 147 class FooSystemDependency(ExternalDependency): 148 149 def __init__(self, name: str, environment: 'Environment', kwargs: T.Dict[str, T.Any]): 150 super().__init__(name, environment, kwargs) 151 root = environment.properties[self.for_machine].foo_root 152 if root is None: 153 mlog.debug('foo_root is unset.') 154 self.is_found = False 155 return 156 157 static = Mesonlib.LibType.STATIC if kwargs.get('static', False) else Mesonlib.LibType.SHARED 158 lib = self.clib_compiler.find_library( 159 'foo', environment, [os.path.join(root, 'lib')], libtype=static) 160 if lib is None: 161 mlog.debug('Could not find lib.') 162 self.is_found = False 163 return 164 165 self.compile_args.append(f'-I{os.path.join(root, "include")}') 166 self.link_args.append(lib) 167 self.is_found = True 168 ``` 169 170 This is nicer in a couple of ways. First we can properly cross compile as we 171 are allowed to set `FOO_ROOT` for both the build and host machines, it also 172 means that users can override this in their machine files, and if that 173 environment variables changes during a Meson reconfigure Meson won't re-read 174 it, this is important for reproducibility. Finally, Meson will figure out 175 whether it should be finding `libfoo.so` or `libfoo.a` (or the platform 176 specific names). Things are looking pretty good now, so it can be added to 177 the `packages` dict below: 178 179 ```python 180 packages.update({ 181 'foo': FooSystemDependency, 182 }) 183 ``` 184 185 Now, what if foo also provides pkg-config, but it's only shipped on Unices, 186 or only included in very recent versions of the dependency? We can use the 187 `DependencyFactory` class: 188 189 ```python 190 foo_factory = DependencyFactory( 191 'foo', 192 [DependencyMethods.PKGCONFIG, DependencyMethods.SYSTEM], 193 system_class=FooSystemDependency, 194 ) 195 ``` 196 197 This is a helper function that will generate a default pkg-config based 198 dependency, and use the `FooSystemDependency` as well. 
It can also handle 199 custom finders for pkg-config and cmake based dependencies that need some 200 extra help. You would then add the `foo_factory` to packages instead of 201 `FooSystemDependency`: 202 203 ```python 204 packages.update({ 205 'foo': foo_factory, 206 }) 207 ``` 208 209 If you have a dependency that is very complicated, (such as having multiple 210 implementations) you may need to write your own factory function. There are a 211 number of examples in this package. 212 213 _Note_ before we moved to factory functions it was common to use an 214 `ExternalDependency` class that would instantiate different types of 215 dependencies and hold the one it found. There are a number of drawbacks to 216 this approach, and no new dependencies should do this. 217 """ 218 219 # This is a dict where the keys should be strings, and the values must be one 220 # of: 221 # - An ExternalDependency subclass 222 # - A DependencyFactory object 223 # - A callable with a signature of (Environment, MachineChoice, Dict[str, Any]) -> List[Callable[[], ExternalDependency]] 224 packages.update({ 225 # From dev: 226 'gtest': gtest_factory, 227 'gmock': gmock_factory, 228 'llvm': llvm_factory, 229 'valgrind': ValgrindDependency, 230 'zlib': zlib_factory, 231 'jdk': JDKSystemDependency, 232 233 'boost': BoostDependency, 234 'cuda': CudaDependency, 235 236 # per-file 237 'coarray': coarray_factory, 238 'hdf5': hdf5_factory, 239 'mpi': mpi_factory, 240 'scalapack': scalapack_factory, 241 242 # From misc: 243 'blocks': BlocksDependency, 244 'curses': curses_factory, 245 'netcdf': netcdf_factory, 246 'openmp': OpenMPDependency, 247 'python3': python3_factory, 248 'threads': threads_factory, 249 'pcap': pcap_factory, 250 'cups': cups_factory, 251 'libwmf': libwmf_factory, 252 'libgcrypt': libgcrypt_factory, 253 'gpgme': gpgme_factory, 254 'shaderc': shaderc_factory, 255 'intl': intl_factory, 256 257 # From platform: 258 'appleframeworks': AppleFrameworks, 259 260 # From ui: 261 'gl': gl_factory, 262 'gnustep': GnuStepDependency, 263 'qt4': qt4_factory, 264 'qt5': qt5_factory, 265 'qt6': qt6_factory, 266 'sdl2': sdl2_factory, 267 'wxwidgets': WxDependency, 268 'vulkan': vulkan_factory, 269 }) 270 _packages_accept_language.update({ 271 'hdf5', 272 'mpi', 273 'netcdf', 274 'openmp', 275 }) 276 [end of mesonbuild/dependencies/__init__.py] [start of mesonbuild/modules/python.py] 1 # Copyright 2018 The Meson development team 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 7 # http://www.apache.org/licenses/LICENSE-2.0 8 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import os 16 import json 17 import shutil 18 import typing as T 19 20 from pathlib import Path 21 from .. import mesonlib 22 from ..mesonlib import MachineChoice, MesonException 23 from . import ExtensionModule 24 from ..interpreterbase import ( 25 noPosargs, noKwargs, permittedKwargs, 26 InvalidArguments, 27 FeatureNew, FeatureNewKwargs, disablerIfNotFound 28 ) 29 from ..interpreter import ExternalProgramHolder, extract_required_kwarg, permitted_dependency_kwargs 30 from ..build import known_shmod_kwargs 31 from .. 
import mlog 32 from ..environment import detect_cpu_family 33 from ..dependencies import DependencyMethods, PkgConfigDependency, NotFoundDependency, SystemDependency 34 from ..programs import ExternalProgram, NonExistingExternalProgram 35 36 mod_kwargs = {'subdir'} 37 mod_kwargs.update(known_shmod_kwargs) 38 mod_kwargs -= {'name_prefix', 'name_suffix'} 39 40 class PythonDependency(SystemDependency): 41 42 def __init__(self, python_holder, environment, kwargs): 43 super().__init__('python', environment, kwargs) 44 self.name = 'python' 45 self.static = kwargs.get('static', False) 46 self.embed = kwargs.get('embed', False) 47 self.version = python_holder.version 48 self.platform = python_holder.platform 49 self.pkgdep = None 50 self.variables = python_holder.variables 51 self.paths = python_holder.paths 52 self.link_libpython = python_holder.link_libpython 53 self.info: T.Optional[T.Dict[str, str]] = None 54 if mesonlib.version_compare(self.version, '>= 3.0'): 55 self.major_version = 3 56 else: 57 self.major_version = 2 58 59 # We first try to find the necessary python variables using pkgconfig 60 if DependencyMethods.PKGCONFIG in self.methods and not python_holder.is_pypy: 61 pkg_version = self.variables.get('LDVERSION') or self.version 62 pkg_libdir = self.variables.get('LIBPC') 63 pkg_embed = '-embed' if self.embed and mesonlib.version_compare(self.version, '>=3.8') else '' 64 pkg_name = f'python-{pkg_version}{pkg_embed}' 65 66 # If python-X.Y.pc exists in LIBPC, we will try to use it 67 if pkg_libdir is not None and Path(os.path.join(pkg_libdir, f'{pkg_name}.pc')).is_file(): 68 old_pkg_libdir = os.environ.get('PKG_CONFIG_LIBDIR') 69 old_pkg_path = os.environ.get('PKG_CONFIG_PATH') 70 71 os.environ.pop('PKG_CONFIG_PATH', None) 72 73 if pkg_libdir: 74 os.environ['PKG_CONFIG_LIBDIR'] = pkg_libdir 75 76 try: 77 self.pkgdep = PkgConfigDependency(pkg_name, environment, kwargs) 78 mlog.debug(f'Found "{pkg_name}" via pkgconfig lookup in LIBPC ({pkg_libdir})') 79 py_lookup_method = 'pkgconfig' 80 except MesonException as e: 81 mlog.debug(f'"{pkg_name}" could not be found in LIBPC ({pkg_libdir})') 82 mlog.debug(e) 83 84 if old_pkg_path is not None: 85 os.environ['PKG_CONFIG_PATH'] = old_pkg_path 86 87 if old_pkg_libdir is not None: 88 os.environ['PKG_CONFIG_LIBDIR'] = old_pkg_libdir 89 else: 90 os.environ.pop('PKG_CONFIG_LIBDIR', None) 91 else: 92 mlog.debug(f'"{pkg_name}" could not be found in LIBPC ({pkg_libdir}), this is likely due to a relocated python installation') 93 94 # If lookup via LIBPC failed, try to use fallback PKG_CONFIG_LIBDIR/PKG_CONFIG_PATH mechanisms 95 if self.pkgdep is None or not self.pkgdep.found(): 96 try: 97 self.pkgdep = PkgConfigDependency(pkg_name, environment, kwargs) 98 mlog.debug(f'Found "{pkg_name}" via fallback pkgconfig lookup in PKG_CONFIG_LIBDIR/PKG_CONFIG_PATH') 99 py_lookup_method = 'pkgconfig-fallback' 100 except MesonException as e: 101 mlog.debug(f'"{pkg_name}" could not be found via fallback pkgconfig lookup in PKG_CONFIG_LIBDIR/PKG_CONFIG_PATH') 102 mlog.debug(e) 103 104 if self.pkgdep and self.pkgdep.found(): 105 self.compile_args = self.pkgdep.get_compile_args() 106 self.link_args = self.pkgdep.get_link_args() 107 self.is_found = True 108 self.pcdep = self.pkgdep 109 else: 110 self.pkgdep = None 111 112 # Finally, try to find python via SYSCONFIG as a final measure 113 if DependencyMethods.SYSCONFIG in self.methods: 114 if mesonlib.is_windows(): 115 self._find_libpy_windows(environment) 116 else: 117 self._find_libpy(python_holder, environment) 118 if 
self.is_found: 119 mlog.debug(f'Found "python-{self.version}" via SYSCONFIG module') 120 py_lookup_method = 'sysconfig' 121 122 if self.is_found: 123 mlog.log('Dependency', mlog.bold(self.name), 'found:', mlog.green(f'YES ({py_lookup_method})')) 124 else: 125 mlog.log('Dependency', mlog.bold(self.name), 'found:', mlog.red('NO')) 126 127 def _find_libpy(self, python_holder, environment): 128 if python_holder.is_pypy: 129 if self.major_version == 3: 130 libname = 'pypy3-c' 131 else: 132 libname = 'pypy-c' 133 libdir = os.path.join(self.variables.get('base'), 'bin') 134 libdirs = [libdir] 135 else: 136 libname = f'python{self.version}' 137 if 'DEBUG_EXT' in self.variables: 138 libname += self.variables['DEBUG_EXT'] 139 if 'ABIFLAGS' in self.variables: 140 libname += self.variables['ABIFLAGS'] 141 libdirs = [] 142 143 largs = self.clib_compiler.find_library(libname, environment, libdirs) 144 if largs is not None: 145 self.link_args = largs 146 147 self.is_found = largs is not None or self.link_libpython 148 149 inc_paths = mesonlib.OrderedSet([ 150 self.variables.get('INCLUDEPY'), 151 self.paths.get('include'), 152 self.paths.get('platinclude')]) 153 154 self.compile_args += ['-I' + path for path in inc_paths if path] 155 156 def get_windows_python_arch(self): 157 if self.platform == 'mingw': 158 pycc = self.variables.get('CC') 159 if pycc.startswith('x86_64'): 160 return '64' 161 elif pycc.startswith(('i686', 'i386')): 162 return '32' 163 else: 164 mlog.log('MinGW Python built with unknown CC {!r}, please file' 165 'a bug'.format(pycc)) 166 return None 167 elif self.platform == 'win32': 168 return '32' 169 elif self.platform in ('win64', 'win-amd64'): 170 return '64' 171 mlog.log(f'Unknown Windows Python platform {self.platform!r}') 172 return None 173 174 def get_windows_link_args(self): 175 if self.platform.startswith('win'): 176 vernum = self.variables.get('py_version_nodot') 177 if self.static: 178 libpath = Path('libs') / f'libpython{vernum}.a' 179 else: 180 comp = self.get_compiler() 181 if comp.id == "gcc": 182 libpath = f'python{vernum}.dll' 183 else: 184 libpath = Path('libs') / f'python{vernum}.lib' 185 lib = Path(self.variables.get('base')) / libpath 186 elif self.platform == 'mingw': 187 if self.static: 188 libname = self.variables.get('LIBRARY') 189 else: 190 libname = self.variables.get('LDLIBRARY') 191 lib = Path(self.variables.get('LIBDIR')) / libname 192 if not lib.exists(): 193 mlog.log('Could not find Python3 library {!r}'.format(str(lib))) 194 return None 195 return [str(lib)] 196 197 def _find_libpy_windows(self, env): 198 ''' 199 Find python3 libraries on Windows and also verify that the arch matches 200 what we are building for. 
201 ''' 202 pyarch = self.get_windows_python_arch() 203 if pyarch is None: 204 self.is_found = False 205 return 206 arch = detect_cpu_family(env.coredata.compilers.host) 207 if arch == 'x86': 208 arch = '32' 209 elif arch == 'x86_64': 210 arch = '64' 211 else: 212 # We can't cross-compile Python 3 dependencies on Windows yet 213 mlog.log(f'Unknown architecture {arch!r} for', 214 mlog.bold(self.name)) 215 self.is_found = False 216 return 217 # Pyarch ends in '32' or '64' 218 if arch != pyarch: 219 mlog.log('Need', mlog.bold(self.name), 'for {}-bit, but ' 220 'found {}-bit'.format(arch, pyarch)) 221 self.is_found = False 222 return 223 # This can fail if the library is not found 224 largs = self.get_windows_link_args() 225 if largs is None: 226 self.is_found = False 227 return 228 self.link_args = largs 229 # Compile args 230 inc_paths = mesonlib.OrderedSet([ 231 self.variables.get('INCLUDEPY'), 232 self.paths.get('include'), 233 self.paths.get('platinclude')]) 234 235 self.compile_args += ['-I' + path for path in inc_paths if path] 236 237 # https://sourceforge.net/p/mingw-w64/mailman/message/30504611/ 238 if pyarch == '64' and self.major_version == 2: 239 self.compile_args += ['-DMS_WIN64'] 240 241 self.is_found = True 242 243 @staticmethod 244 def get_methods(): 245 if mesonlib.is_windows(): 246 return [DependencyMethods.PKGCONFIG, DependencyMethods.SYSCONFIG] 247 elif mesonlib.is_osx(): 248 return [DependencyMethods.PKGCONFIG, DependencyMethods.EXTRAFRAMEWORK] 249 else: 250 return [DependencyMethods.PKGCONFIG, DependencyMethods.SYSCONFIG] 251 252 def get_pkgconfig_variable(self, variable_name, kwargs): 253 if self.pkgdep: 254 return self.pkgdep.get_pkgconfig_variable(variable_name, kwargs) 255 else: 256 return super().get_pkgconfig_variable(variable_name, kwargs) 257 258 259 INTROSPECT_COMMAND = '''import sysconfig 260 import json 261 import sys 262 263 install_paths = sysconfig.get_paths(scheme='posix_prefix', vars={'base': '', 'platbase': '', 'installed_base': ''}) 264 265 def links_against_libpython(): 266 from distutils.core import Distribution, Extension 267 cmd = Distribution().get_command_obj('build_ext') 268 cmd.ensure_finalized() 269 return bool(cmd.get_libraries(Extension('dummy', []))) 270 271 print (json.dumps ({ 272 'variables': sysconfig.get_config_vars(), 273 'paths': sysconfig.get_paths(), 274 'install_paths': install_paths, 275 'version': sysconfig.get_python_version(), 276 'platform': sysconfig.get_platform(), 277 'is_pypy': '__pypy__' in sys.builtin_module_names, 278 'link_libpython': links_against_libpython(), 279 })) 280 ''' 281 282 class PythonExternalProgram(ExternalProgram): 283 def __init__(self, name: str, command: T.Optional[T.List[str]] = None, ext_prog: T.Optional[ExternalProgram] = None): 284 if ext_prog is None: 285 super().__init__(name, command=command, silent=True) 286 else: 287 self.name = ext_prog.name 288 self.command = ext_prog.command 289 self.path = ext_prog.path 290 self.info: T.Dict[str, str] = {} 291 292 class PythonInstallation(ExternalProgramHolder): 293 def __init__(self, python, interpreter): 294 ExternalProgramHolder.__init__(self, python, interpreter) 295 info = python.info 296 prefix = self.interpreter.environment.coredata.get_option(mesonlib.OptionKey('prefix')) 297 self.variables = info['variables'] 298 self.paths = info['paths'] 299 install_paths = info['install_paths'] 300 self.platlib_install_path = os.path.join(prefix, install_paths['platlib'][1:]) 301 self.purelib_install_path = os.path.join(prefix, install_paths['purelib'][1:]) 
302 self.version = info['version'] 303 self.platform = info['platform'] 304 self.is_pypy = info['is_pypy'] 305 self.link_libpython = info['link_libpython'] 306 self.methods.update({ 307 'extension_module': self.extension_module_method, 308 'dependency': self.dependency_method, 309 'install_sources': self.install_sources_method, 310 'get_install_dir': self.get_install_dir_method, 311 'language_version': self.language_version_method, 312 'found': self.found_method, 313 'has_path': self.has_path_method, 314 'get_path': self.get_path_method, 315 'has_variable': self.has_variable_method, 316 'get_variable': self.get_variable_method, 317 'path': self.path_method, 318 }) 319 320 @permittedKwargs(mod_kwargs) 321 def extension_module_method(self, args, kwargs): 322 if 'subdir' in kwargs and 'install_dir' in kwargs: 323 raise InvalidArguments('"subdir" and "install_dir" are mutually exclusive') 324 325 if 'subdir' in kwargs: 326 subdir = kwargs.pop('subdir', '') 327 if not isinstance(subdir, str): 328 raise InvalidArguments('"subdir" argument must be a string.') 329 330 kwargs['install_dir'] = os.path.join(self.platlib_install_path, subdir) 331 332 # On macOS and some Linux distros (Debian) distutils doesn't link 333 # extensions against libpython. We call into distutils and mirror its 334 # behavior. See https://github.com/mesonbuild/meson/issues/4117 335 if not self.link_libpython: 336 new_deps = [] 337 for dep in mesonlib.extract_as_list(kwargs, 'dependencies'): 338 if isinstance(dep, PythonDependency): 339 dep = dep.get_partial_dependency(compile_args=True) 340 new_deps.append(dep) 341 kwargs['dependencies'] = new_deps 342 343 suffix = self.variables.get('EXT_SUFFIX') or self.variables.get('SO') or self.variables.get('.so') 344 345 # msys2's python3 has "-cpython-36m.dll", we have to be clever 346 split = suffix.rsplit('.', 1) 347 suffix = split.pop(-1) 348 args[0] += ''.join(s for s in split) 349 350 kwargs['name_prefix'] = '' 351 kwargs['name_suffix'] = suffix 352 353 return self.interpreter.func_shared_module(None, args, kwargs) 354 355 @permittedKwargs(permitted_dependency_kwargs | {'embed'}) 356 @FeatureNewKwargs('python_installation.dependency', '0.53.0', ['embed']) 357 def dependency_method(self, args, kwargs): 358 if args: 359 mlog.warning('python_installation.dependency() does not take any ' 360 'positional arguments. It always returns a Python ' 361 'dependency. 
This will become an error in the future.', 362 location=self.interpreter.current_node) 363 disabled, required, feature = extract_required_kwarg(kwargs, self.subproject) 364 if disabled: 365 mlog.log('Dependency', mlog.bold('python'), 'skipped: feature', mlog.bold(feature), 'disabled') 366 dep = NotFoundDependency(self.interpreter.environment) 367 else: 368 dep = PythonDependency(self, self.interpreter.environment, kwargs) 369 if required and not dep.found(): 370 raise mesonlib.MesonException('Python dependency not found') 371 return dep 372 373 @permittedKwargs(['pure', 'subdir']) 374 def install_sources_method(self, args, kwargs): 375 pure = kwargs.pop('pure', True) 376 if not isinstance(pure, bool): 377 raise InvalidArguments('"pure" argument must be a boolean.') 378 379 subdir = kwargs.pop('subdir', '') 380 if not isinstance(subdir, str): 381 raise InvalidArguments('"subdir" argument must be a string.') 382 383 if pure: 384 kwargs['install_dir'] = os.path.join(self.purelib_install_path, subdir) 385 else: 386 kwargs['install_dir'] = os.path.join(self.platlib_install_path, subdir) 387 388 return self.interpreter.func_install_data(None, args, kwargs) 389 390 @noPosargs 391 @permittedKwargs(['pure', 'subdir']) 392 def get_install_dir_method(self, args, kwargs): 393 pure = kwargs.pop('pure', True) 394 if not isinstance(pure, bool): 395 raise InvalidArguments('"pure" argument must be a boolean.') 396 397 subdir = kwargs.pop('subdir', '') 398 if not isinstance(subdir, str): 399 raise InvalidArguments('"subdir" argument must be a string.') 400 401 if pure: 402 res = os.path.join(self.purelib_install_path, subdir) 403 else: 404 res = os.path.join(self.platlib_install_path, subdir) 405 406 return res 407 408 @noPosargs 409 @noKwargs 410 def language_version_method(self, args, kwargs): 411 return self.version 412 413 @noKwargs 414 def has_path_method(self, args, kwargs): 415 if len(args) != 1: 416 raise InvalidArguments('has_path takes exactly one positional argument.') 417 path_name = args[0] 418 if not isinstance(path_name, str): 419 raise InvalidArguments('has_path argument must be a string.') 420 421 return path_name in self.paths 422 423 @noKwargs 424 def get_path_method(self, args, kwargs): 425 if len(args) not in (1, 2): 426 raise InvalidArguments('get_path must have one or two arguments.') 427 path_name = args[0] 428 if not isinstance(path_name, str): 429 raise InvalidArguments('get_path argument must be a string.') 430 431 try: 432 path = self.paths[path_name] 433 except KeyError: 434 if len(args) == 2: 435 path = args[1] 436 else: 437 raise InvalidArguments(f'{path_name} is not a valid path name') 438 439 return path 440 441 @noKwargs 442 def has_variable_method(self, args, kwargs): 443 if len(args) != 1: 444 raise InvalidArguments('has_variable takes exactly one positional argument.') 445 var_name = args[0] 446 if not isinstance(var_name, str): 447 raise InvalidArguments('has_variable argument must be a string.') 448 449 return var_name in self.variables 450 451 @noKwargs 452 def get_variable_method(self, args, kwargs): 453 if len(args) not in (1, 2): 454 raise InvalidArguments('get_variable must have one or two arguments.') 455 var_name = args[0] 456 if not isinstance(var_name, str): 457 raise InvalidArguments('get_variable argument must be a string.') 458 459 try: 460 var = self.variables[var_name] 461 except KeyError: 462 if len(args) == 2: 463 var = args[1] 464 else: 465 raise InvalidArguments(f'{var_name} is not a valid variable name') 466 467 return var 468 469 @noPosargs 470 
@noKwargs 471 @FeatureNew('Python module path method', '0.50.0') 472 def path_method(self, args, kwargs): 473 return super().path_method(args, kwargs) 474 475 476 class PythonModule(ExtensionModule): 477 478 @FeatureNew('Python Module', '0.46.0') 479 def __init__(self, *args, **kwargs): 480 super().__init__(*args, **kwargs) 481 self.methods.update({ 482 'find_installation': self.find_installation, 483 }) 484 485 # https://www.python.org/dev/peps/pep-0397/ 486 def _get_win_pythonpath(self, name_or_path): 487 if name_or_path not in ['python2', 'python3']: 488 return None 489 if not shutil.which('py'): 490 # program not installed, return without an exception 491 return None 492 ver = {'python2': '-2', 'python3': '-3'}[name_or_path] 493 cmd = ['py', ver, '-c', "import sysconfig; print(sysconfig.get_config_var('BINDIR'))"] 494 _, stdout, _ = mesonlib.Popen_safe(cmd) 495 directory = stdout.strip() 496 if os.path.exists(directory): 497 return os.path.join(directory, 'python') 498 else: 499 return None 500 501 def _check_version(self, name_or_path, version): 502 if name_or_path == 'python2': 503 return mesonlib.version_compare(version, '< 3.0') 504 elif name_or_path == 'python3': 505 return mesonlib.version_compare(version, '>= 3.0') 506 return True 507 508 @FeatureNewKwargs('python.find_installation', '0.49.0', ['disabler']) 509 @FeatureNewKwargs('python.find_installation', '0.51.0', ['modules']) 510 @disablerIfNotFound 511 @permittedKwargs({'required', 'modules'}) 512 def find_installation(self, state, args, kwargs): 513 feature_check = FeatureNew('Passing "feature" option to find_installation', '0.48.0') 514 disabled, required, feature = extract_required_kwarg(kwargs, state.subproject, feature_check) 515 want_modules = mesonlib.extract_as_list(kwargs, 'modules') # type: T.List[str] 516 found_modules = [] # type: T.List[str] 517 missing_modules = [] # type: T.List[str] 518 519 if len(args) > 1: 520 raise InvalidArguments('find_installation takes zero or one positional argument.') 521 522 name_or_path = state.environment.lookup_binary_entry(MachineChoice.HOST, 'python') 523 if name_or_path is None and args: 524 name_or_path = args[0] 525 if not isinstance(name_or_path, str): 526 raise InvalidArguments('find_installation argument must be a string.') 527 528 if disabled: 529 mlog.log('Program', name_or_path or 'python', 'found:', mlog.red('NO'), '(disabled by:', mlog.bold(feature), ')') 530 return NonExistingExternalProgram() 531 532 if not name_or_path: 533 python = PythonExternalProgram('python3', mesonlib.python_command) 534 else: 535 tmp_python = ExternalProgram.from_entry('python3', name_or_path) 536 python = PythonExternalProgram('python3', ext_prog=tmp_python) 537 538 if not python.found() and mesonlib.is_windows(): 539 pythonpath = self._get_win_pythonpath(name_or_path) 540 if pythonpath is not None: 541 name_or_path = pythonpath 542 python = PythonExternalProgram(name_or_path) 543 544 # Last ditch effort, python2 or python3 can be named python 545 # on various platforms, let's not give up just yet, if an executable 546 # named python is available and has a compatible version, let's use 547 # it 548 if not python.found() and name_or_path in ['python2', 'python3']: 549 python = PythonExternalProgram('python') 550 551 if python.found() and want_modules: 552 for mod in want_modules: 553 p, out, err = mesonlib.Popen_safe( 554 python.command + 555 ['-c', f'import {mod}']) 556 if p.returncode != 0: 557 missing_modules.append(mod) 558 else: 559 found_modules.append(mod) 560 561 msg = ['Program', 
python.name] 562 if want_modules: 563 msg.append('({})'.format(', '.join(want_modules))) 564 msg.append('found:') 565 if python.found() and not missing_modules: 566 msg.extend([mlog.green('YES'), '({})'.format(' '.join(python.command))]) 567 else: 568 msg.append(mlog.red('NO')) 569 if found_modules: 570 msg.append('modules:') 571 msg.append(', '.join(found_modules)) 572 573 mlog.log(*msg) 574 575 if not python.found(): 576 if required: 577 raise mesonlib.MesonException('{} not found'.format(name_or_path or 'python')) 578 return NonExistingExternalProgram() 579 elif missing_modules: 580 if required: 581 raise mesonlib.MesonException('{} is missing modules: {}'.format(name_or_path or 'python', ', '.join(missing_modules))) 582 return NonExistingExternalProgram() 583 else: 584 # Sanity check, we expect to have something that at least quacks in tune 585 try: 586 cmd = python.get_command() + ['-c', INTROSPECT_COMMAND] 587 p, stdout, stderr = mesonlib.Popen_safe(cmd) 588 info = json.loads(stdout) 589 except json.JSONDecodeError: 590 info = None 591 mlog.debug('Could not introspect Python (%s): exit code %d' % (str(p.args), p.returncode)) 592 mlog.debug('Program stdout:\n') 593 mlog.debug(stdout) 594 mlog.debug('Program stderr:\n') 595 mlog.debug(stderr) 596 597 if isinstance(info, dict) and 'version' in info and self._check_version(name_or_path, info['version']): 598 python.info = info 599 return python 600 else: 601 if required: 602 raise mesonlib.MesonException(f'{python} is not a valid python or it is missing setuptools') 603 return NonExistingExternalProgram() 604 605 raise mesonlib.MesonBugException('Unreachable code was reached (PythonModule.find_installation).') 606 607 608 def initialize(*args, **kwargs): 609 mod = PythonModule(*args, **kwargs) 610 mod.interpreter.append_holder_map(PythonExternalProgram, PythonInstallation) 611 return mod 612 [end of mesonbuild/modules/python.py] [start of mesonbuild/modules/python3.py] 1 # Copyright 2016-2017 The Meson development team 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 7 # http://www.apache.org/licenses/LICENSE-2.0 8 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import sysconfig 16 from .. import mesonlib 17 18 from . 
import ExtensionModule 19 from ..interpreterbase import noKwargs, permittedKwargs, FeatureDeprecated 20 from ..build import known_shmod_kwargs 21 from ..programs import ExternalProgram 22 23 24 class Python3Module(ExtensionModule): 25 @FeatureDeprecated('python3 module', '0.48.0') 26 def __init__(self, *args, **kwargs): 27 super().__init__(*args, **kwargs) 28 self.methods.update({ 29 'extension_module': self.extension_module, 30 'find_python': self.find_python, 31 'language_version': self.language_version, 32 'sysconfig_path': self.sysconfig_path, 33 }) 34 35 @permittedKwargs(known_shmod_kwargs) 36 def extension_module(self, state, args, kwargs): 37 if 'name_prefix' in kwargs: 38 raise mesonlib.MesonException('Name_prefix is set automatically, specifying it is forbidden.') 39 if 'name_suffix' in kwargs: 40 raise mesonlib.MesonException('Name_suffix is set automatically, specifying it is forbidden.') 41 host_system = state.host_machine.system 42 if host_system == 'darwin': 43 # Default suffix is 'dylib' but Python does not use it for extensions. 44 suffix = 'so' 45 elif host_system == 'windows': 46 # On Windows the extension is pyd for some unexplainable reason. 47 suffix = 'pyd' 48 else: 49 suffix = [] 50 kwargs['name_prefix'] = '' 51 kwargs['name_suffix'] = suffix 52 return self.interpreter.func_shared_module(None, args, kwargs) 53 54 @noKwargs 55 def find_python(self, state, args, kwargs): 56 command = state.environment.lookup_binary_entry(mesonlib.MachineChoice.HOST, 'python3') 57 if command is not None: 58 py3 = ExternalProgram.from_entry('python3', command) 59 else: 60 py3 = ExternalProgram('python3', mesonlib.python_command, silent=True) 61 return py3 62 63 @noKwargs 64 def language_version(self, state, args, kwargs): 65 return sysconfig.get_python_version() 66 67 @noKwargs 68 def sysconfig_path(self, state, args, kwargs): 69 if len(args) != 1: 70 raise mesonlib.MesonException('sysconfig_path() requires passing the name of path to get.') 71 path_name = args[0] 72 valid_names = sysconfig.get_path_names() 73 if path_name not in valid_names: 74 raise mesonlib.MesonException(f'{path_name} is not a valid path name {valid_names}.') 75 76 # Get a relative path without a prefix, e.g. lib/python3.6/site-packages 77 return sysconfig.get_path(path_name, vars={'base': '', 'platbase': '', 'installed_base': ''})[1:] 78 79 80 def initialize(*args, **kwargs): 81 return Python3Module(*args, **kwargs) 82 [end of mesonbuild/modules/python3.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
mesonbuild/meson
55f8ab1590232bc89e9079f5d33e79b0a9382033
Python extension_module does not install in site_packages
In the *python* module documentation it is claimed that `extension_module` installs into `site_packages` by default:

> `subdir`: By default, meson will install the extension module in the relevant top-level location for the python installation, eg `/usr/lib/site-packages`. When subdir is passed to this method, it will be appended to that location. This keyword argument is mutually exclusive with `install_dir`

However, this does not work. When setting `install: true` the modules are installed to `$prefix/lib64` on my system. One workaround is to specify the target like:

```
py3_mod = import('python')
py3_inst = py3_mod.find_installation('python3')
foo = py3_inst.extension_module('foo',
  <source_target>,
  install: true,
  subdir: get_option('prefix') + py3_inst.get_path('purelib'),
  dependencies: [py3_inst.dependency()]
)
```

It would be nice if the subdir were set automatically to the `site-packages` folder. This bug is, by the way, similar to #2859; however, that issue talks about the now deprecated *python3* module.
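For context, the `python` module already introspects these locations (see `INTROSPECT_COMMAND` and `PythonInstallation.__init__` in `mesonbuild/modules/python.py` above). The following is only a minimal standalone sketch of how a site-packages install directory can be derived from `sysconfig` plus an install prefix; the prefix value here is an assumption for illustration, not taken from the report:

```python
import os
import sysconfig

# Relative install locations with the base prefixes blanked out, mirroring
# what the module's introspection script requests from sysconfig.
rel_paths = sysconfig.get_paths(scheme='posix_prefix',
                                vars={'base': '', 'platbase': '', 'installed_base': ''})

prefix = '/usr/local'  # assumed install prefix (e.g. Meson's get_option('prefix'))

# Drop the leading separator and join onto the prefix, as PythonInstallation does.
platlib_install_path = os.path.join(prefix, rel_paths['platlib'][1:])
purelib_install_path = os.path.join(prefix, rel_paths['purelib'][1:])

print(platlib_install_path)  # e.g. /usr/local/lib/python3.9/site-packages
print(purelib_install_path)
```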
Same here, except that since I build a C module, I need to use `install_dir: py3_inst.get_path('platlib')`.
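(For readers unfamiliar with the distinction: `purelib` is for pure-Python packages while `platlib` is for compiled extension modules, and on some distributions the two resolve to different directories such as `lib` vs `lib64`. A quick way to inspect them, shown purely as an illustrative snippet:)

```python
import sysconfig

paths = sysconfig.get_paths()
print(paths['purelib'])  # pure-Python packages
print(paths['platlib'])  # platform-specific (compiled) extension modules
```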
2021-09-24T15:42:18Z
<patch> diff --git a/mesonbuild/modules/python.py b/mesonbuild/modules/python.py --- a/mesonbuild/modules/python.py +++ b/mesonbuild/modules/python.py @@ -279,15 +279,65 @@ def links_against_libpython(): })) ''' +if T.TYPE_CHECKING: + class PythonIntrospectionDict(TypedDict): + + install_paths: T.Dict[str, str] + is_pypy: bool + link_libpython: bool + paths: T.Dict[str, str] + platform: str + suffix : str + variables: T.Dict[str, str] + version: str + class PythonExternalProgram(ExternalProgram): def __init__(self, name: str, command: T.Optional[T.List[str]] = None, ext_prog: T.Optional[ExternalProgram] = None): if ext_prog is None: super().__init__(name, command=command, silent=True) else: - self.name = ext_prog.name + self.name = name self.command = ext_prog.command self.path = ext_prog.path - self.info: T.Dict[str, str] = {} + self.info: 'PythonIntrospectionDict' = { + 'install_paths': {}, + 'is_pypy': False, + 'link_libpython': False, + 'paths': {}, + 'platform': 'sentinal', + 'variables': {}, + 'version': '0.0', + } + + def _check_version(self, version: str) -> bool: + if self.name == 'python2': + return mesonlib.version_compare(version, '< 3.0') + elif self.name == 'python3': + return mesonlib.version_compare(version, '>= 3.0') + return True + + def sanity(self) -> bool: + # Sanity check, we expect to have something that at least quacks in tune + cmd = self.get_command() + ['-c', INTROSPECT_COMMAND] + p, stdout, stderr = mesonlib.Popen_safe(cmd) + try: + info = json.loads(stdout) + except json.JSONDecodeError: + info = None + mlog.debug('Could not introspect Python (%s): exit code %d' % (str(p.args), p.returncode)) + mlog.debug('Program stdout:\n') + mlog.debug(stdout) + mlog.debug('Program stderr:\n') + mlog.debug(stderr) + + if info is not None and self._check_version(info['version']): + variables = info['variables'] + info['suffix'] = variables.get('EXT_SUFFIX') or variables.get('SO') or variables.get('.so') + self.info = T.cast('PythonIntrospectionDict', info) + return True + else: + return False + class PythonInstallation(ExternalProgramHolder): def __init__(self, python, interpreter): @@ -295,6 +345,7 @@ def __init__(self, python, interpreter): info = python.info prefix = self.interpreter.environment.coredata.get_option(mesonlib.OptionKey('prefix')) self.variables = info['variables'] + self.suffix = info['suffix'] self.paths = info['paths'] install_paths = info['install_paths'] self.platlib_install_path = os.path.join(prefix, install_paths['platlib'][1:]) @@ -319,10 +370,10 @@ def __init__(self, python, interpreter): @permittedKwargs(mod_kwargs) def extension_module_method(self, args, kwargs): - if 'subdir' in kwargs and 'install_dir' in kwargs: - raise InvalidArguments('"subdir" and "install_dir" are mutually exclusive') - - if 'subdir' in kwargs: + if 'install_dir' in kwargs: + if 'subdir' in kwargs: + raise InvalidArguments('"subdir" and "install_dir" are mutually exclusive') + else: subdir = kwargs.pop('subdir', '') if not isinstance(subdir, str): raise InvalidArguments('"subdir" argument must be a string.') @@ -340,12 +391,10 @@ def extension_module_method(self, args, kwargs): new_deps.append(dep) kwargs['dependencies'] = new_deps - suffix = self.variables.get('EXT_SUFFIX') or self.variables.get('SO') or self.variables.get('.so') - # msys2's python3 has "-cpython-36m.dll", we have to be clever - split = suffix.rsplit('.', 1) - suffix = split.pop(-1) - args[0] += ''.join(s for s in split) + # FIXME: explain what the specific cleverness is here + split, suffix = 
self.suffix.rsplit('.', 1) + args[0] += split kwargs['name_prefix'] = '' kwargs['name_suffix'] = suffix @@ -498,12 +547,6 @@ def _get_win_pythonpath(self, name_or_path): else: return None - def _check_version(self, name_or_path, version): - if name_or_path == 'python2': - return mesonlib.version_compare(version, '< 3.0') - elif name_or_path == 'python3': - return mesonlib.version_compare(version, '>= 3.0') - return True @FeatureNewKwargs('python.find_installation', '0.49.0', ['disabler']) @FeatureNewKwargs('python.find_installation', '0.51.0', ['modules']) @@ -515,13 +558,15 @@ def find_installation(self, state, args, kwargs): want_modules = mesonlib.extract_as_list(kwargs, 'modules') # type: T.List[str] found_modules = [] # type: T.List[str] missing_modules = [] # type: T.List[str] + fallback = args[0] if args else '' + display_name = fallback or 'python' if len(args) > 1: raise InvalidArguments('find_installation takes zero or one positional argument.') name_or_path = state.environment.lookup_binary_entry(MachineChoice.HOST, 'python') if name_or_path is None and args: - name_or_path = args[0] + name_or_path = fallback if not isinstance(name_or_path, str): raise InvalidArguments('find_installation argument must be a string.') @@ -532,8 +577,8 @@ def find_installation(self, state, args, kwargs): if not name_or_path: python = PythonExternalProgram('python3', mesonlib.python_command) else: - tmp_python = ExternalProgram.from_entry('python3', name_or_path) - python = PythonExternalProgram('python3', ext_prog=tmp_python) + tmp_python = ExternalProgram.from_entry(display_name, name_or_path) + python = PythonExternalProgram(display_name, ext_prog=tmp_python) if not python.found() and mesonlib.is_windows(): pythonpath = self._get_win_pythonpath(name_or_path) @@ -581,21 +626,9 @@ def find_installation(self, state, args, kwargs): raise mesonlib.MesonException('{} is missing modules: {}'.format(name_or_path or 'python', ', '.join(missing_modules))) return NonExistingExternalProgram() else: - # Sanity check, we expect to have something that at least quacks in tune - try: - cmd = python.get_command() + ['-c', INTROSPECT_COMMAND] - p, stdout, stderr = mesonlib.Popen_safe(cmd) - info = json.loads(stdout) - except json.JSONDecodeError: - info = None - mlog.debug('Could not introspect Python (%s): exit code %d' % (str(p.args), p.returncode)) - mlog.debug('Program stdout:\n') - mlog.debug(stdout) - mlog.debug('Program stderr:\n') - mlog.debug(stderr) - - if isinstance(info, dict) and 'version' in info and self._check_version(name_or_path, info['version']): - python.info = info + sane = python.sanity() + + if sane: return python else: if required: </patch>
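To summarize the behavioural change made in `extension_module_method` above: an explicit `install_dir` still wins, but when it is absent the install directory now defaults to the interpreter's `platlib` location (optionally extended by `subdir`). The sketch below is a simplified, hypothetical restatement of that decision logic, not the actual Meson code:

```python
import os


def resolve_install_dir(kwargs, platlib_install_path):
    """Sketch of the install_dir/subdir precedence after the patch."""
    if 'install_dir' in kwargs:
        if 'subdir' in kwargs:
            raise ValueError('"subdir" and "install_dir" are mutually exclusive')
        return kwargs['install_dir']
    # No explicit install_dir: default to the interpreter's site-packages
    # (platlib), optionally extended by subdir.
    return os.path.join(platlib_install_path, kwargs.get('subdir', ''))


print(resolve_install_dir({}, '/usr/lib/python3.9/site-packages'))
print(resolve_install_dir({'subdir': 'foo'}, '/usr/lib/python3.9/site-packages'))
```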
[]
[]
conda__conda-3051
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> partially revert no default shortcuts Closes https://github.com/ContinuumIO/navigator/issues/570 This changes shortcuts to again be installed by default, but adds a condarc setting "shortcuts" that can be set to False to disable shortcut installation. CC @joelhullcio @csoja @multicastmatt </issue> <code> [start of README.rst] 1 .. NOTE: This file serves both as the README on GitHub and the index.html for 2 conda.pydata.org. If you update this file, be sure to cd to the web 3 directory and run ``make html; make live`` 4 5 .. image:: https://s3.amazonaws.com/conda-dev/conda_logo.svg 6 :alt: Conda Logo 7 8 ---------------------------------------- 9 10 .. image:: https://travis-ci.org/conda/conda.svg?branch=master 11 :alt: Travis-CI Build Status 12 :target: https://travis-ci.org/conda/conda 13 14 .. image:: https://ci.appveyor.com/api/projects/status/v6fl568drifhia2d/branch/master?svg=true 15 :alt: Appveyor Build Status 16 :target: https://ci.appveyor.com/project/ContinuumAnalyticsFOSS/conda/branch/master 17 18 .. image:: https://codecov.io/github/conda/conda/coverage.svg?branch=master 19 :alt: Codecov Status 20 :target: https://codecov.io/github/conda/conda?branch=master 21 22 .. image:: https://scrutinizer-ci.com/g/conda/conda/badges/quality-score.png?b=master 23 :alt: Scrutinizer Code Quality 24 :target: https://scrutinizer-ci.com/g/conda/conda/?branch=master 25 26 .. image:: https://www.quantifiedcode.com/api/v1/project/81377831ebe54def8b31c55a4b5b4cb0/badge.svg 27 :alt: Quantified Code 28 :target: https://www.quantifiedcode.com/app/project/81377831ebe54def8b31c55a4b5b4cb0 29 30 .. image:: https://badges.gitter.im/conda/conda.svg 31 :alt: Join the chat at https://gitter.im/conda/conda 32 :target: https://gitter.im/conda/conda?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge 33 34 | 35 36 .. image:: https://s3.amazonaws.com/conda-dev/conda-announce-signup-button.svg 37 :alt: Join the Conda Announcment List 38 :target: http://conda.pydata.org/docs 39 40 | 41 42 Conda is a cross-platform, Python-agnostic binary package manager. It is the 43 package manager used by `Anaconda 44 <http://docs.continuum.io/anaconda/index.html>`_ installations, but it may be 45 used for other systems as well. Conda makes environments first-class 46 citizens, making it easy to create independent environments even for C 47 libraries. Conda is written entirely in Python, and is BSD licensed open 48 source. 49 50 Conda is enhanced by organizations, tools, and repositories created and managed by the amazing members of the conda community. Some of them can be found `here <https://github.com/conda/conda/wiki/Conda-Community>`_. 51 52 53 Installation 54 ------------ 55 56 Conda is a part of the `Anaconda distribution <https://store.continuum.io/cshop/anaconda/>`_. You can also download a 57 minimal installation that only includes conda and its dependencies, called 58 `Miniconda <http://conda.pydata.org/miniconda.html>`_. 59 60 61 Getting Started 62 --------------- 63 64 If you install Anaconda, you will already have hundreds of packages 65 installed. You can see what packages are installed by running 66 67 .. code-block:: bash 68 69 $ conda list 70 71 to see all the packages that are available, use 72 73 .. code-block:: bash 74 75 $ conda search 76 77 and to install a package, use 78 79 .. 
code-block:: bash 80 81 $ conda install <package-name> 82 83 84 The real power of conda comes from its ability to manage environments. In 85 conda, an environment can be thought of as a completely separate installation. 86 Conda installs packages into environments efficiently using `hard links 87 <http://en.wikipedia.org/wiki/Hard_links>`_ by default when it is possible, so 88 environments are space efficient, and take seconds to create. 89 90 The default environment, which ``conda`` itself is installed into is called 91 ``root``. To create another environment, use the ``conda create`` 92 command. For instance, to create an environment with the IPython notebook and 93 NumPy 1.6, which is older than the version that comes with Anaconda by 94 default, you would run 95 96 .. code-block:: bash 97 98 $ conda create -n numpy16 ipython-notebook numpy=1.6 99 100 This creates an environment called ``numpy16`` with the latest version of 101 the IPython notebook, NumPy 1.6, and their dependencies. 102 103 We can now activate this environment, use 104 105 .. code-block:: bash 106 107 # On Linux and Mac OS X 108 $ source activate numpy16 109 110 # On Windows 111 > activate numpy16 112 113 This puts the bin directory of the ``numpy16`` environment in the front of the 114 ``PATH``, and sets it as the default environment for all subsequent conda commands. 115 116 To go back to the root environment, use 117 118 .. code-block:: bash 119 120 # On Linux and Mac OS X 121 $ source deactivate 122 123 # On Windows 124 > deactivate 125 126 127 Building Your Own Packages 128 -------------------------- 129 130 You can easily build your own packages for conda, and upload them 131 to `anaconda.org <https://anaconda.org>`_, a free service for hosting 132 packages for conda, as well as other package managers. 133 To build a package, create a recipe. 134 See http://github.com/conda/conda-recipes for many example recipes, and 135 http://docs.continuum.io/conda/build.html for documentation on how to build 136 recipes. 137 138 To upload to anaconda.org, create an account. Then, install the 139 anaconda-client and login 140 141 .. code-block:: bash 142 143 $ conda install anaconda-client 144 $ anaconda login 145 146 Then, after you build your recipe 147 148 .. code-block:: bash 149 150 $ conda build <recipe-dir> 151 152 you will be prompted to upload to anaconda.org. 153 154 To add your anaconda.org channel, or the channel of others to conda so 155 that ``conda install`` will find and install their packages, run 156 157 .. code-block:: bash 158 159 $ conda config --add channels https://conda.anaconda.org/username 160 161 (replacing ``username`` with the user name of the person whose channel you want 162 to add). 163 164 Getting Help 165 ------------ 166 167 The documentation for conda is at http://conda.pydata.org/docs/. You can 168 subscribe to the `conda mailing list 169 <https://groups.google.com/a/continuum.io/forum/#!forum/conda>`_. The source 170 code and issue tracker for conda are on `GitHub <https://github.com/conda/conda>`_. 171 172 Contributing 173 ------------ 174 175 Contributions to conda are welcome. Just fork the GitHub repository and send a 176 pull request. 177 178 To develop on conda, the easiest way is to use a development build. 
This can be 179 accomplished as follows: 180 181 * clone the conda git repository to a computer with conda already installed 182 * navigate to the root directory of the git clone 183 * run ``$CONDA/bin/python setup.py develop`` where ``$CONDA`` is the path to your 184 miniconda installation 185 186 Note building a development file requires git to be installed. 187 188 To undo this, run ``$CONDA/bin/python setup.py develop -u``. Note that if you 189 used a python other than ``$CONDA/bin/python`` to install, you may have to manually 190 delete the conda executable. For example, on OS X, if you use a homebrew python 191 located at ``/usr/local/bin/python``, then you'll need to ``rm /usr/local/bin/conda`` 192 so that ``which -a conda`` lists first your miniconda installation. 193 194 If you are worried about breaking your conda installation, you can install a 195 separate instance of `Miniconda <http://conda.pydata.org/miniconda.html>`_ and 196 work off it. This is also the only way to test conda in both Python 2 and 197 Python 3, as conda can only be installed into a root environment. 198 199 Run the conda tests by ``conda install pytest pytest-cov`` and then running ``py.test`` 200 in the conda directory. The tests are also run by Travis CI when you make a 201 pull request. 202 [end of README.rst] [start of conda/cli/install.py] 1 # (c) Continuum Analytics, Inc. / http://continuum.io 2 # All Rights Reserved 3 # 4 # conda is distributed under the terms of the BSD 3-clause license. 5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause. 6 7 from __future__ import print_function, division, absolute_import 8 9 import errno 10 import logging 11 import os 12 import shutil 13 import tarfile 14 import tempfile 15 from difflib import get_close_matches 16 from os.path import isdir, join, basename, exists, abspath 17 18 from conda.api import get_index 19 from ..cli import common 20 from ..cli.find_commands import find_executable 21 from ..config import create_default_packages, force_32bit, root_env_name 22 from ..exceptions import (CondaFileNotFoundError, CondaValueError, DirectoryNotFoundError, 23 CondaEnvironmentError, PackageNotFoundError, TooManyArgumentsError, 24 CondaAssertionError, CondaOSError, CondaImportError, 25 CondaError, DryRunExit, LockError, CondaRuntimeError, 26 CondaSystemExit, NoPackagesFoundError, UnsatisfiableError, CondaIOError) 27 from ..install import linked as install_linked 28 from ..install import name_dist, is_linked 29 from ..misc import explicit, clone_env, append_env, touch_nonadmin 30 from ..plan import (is_root_prefix, get_pinned_specs, install_actions, add_defaults_to_specs, 31 display_actions, revert_actions, nothing_to_do, execute_actions) 32 from ..resolve import Resolve 33 from ..utils import find_parent_shell 34 35 log = logging.getLogger(__name__) 36 37 38 def install_tar(prefix, tar_path, verbose=False): 39 if not exists(tar_path): 40 raise CondaFileNotFoundError(tar_path) 41 tmp_dir = tempfile.mkdtemp() 42 t = tarfile.open(tar_path, 'r') 43 t.extractall(path=tmp_dir) 44 t.close() 45 46 paths = [] 47 for root, dirs, files in os.walk(tmp_dir): 48 for fn in files: 49 if fn.endswith('.tar.bz2'): 50 paths.append(join(root, fn)) 51 52 explicit(paths, prefix, verbose=verbose) 53 shutil.rmtree(tmp_dir) 54 55 56 def check_prefix(prefix, json=False): 57 name = basename(prefix) 58 error = None 59 if name.startswith('.'): 60 error = "environment name cannot start with '.': %s" % name 61 if name == root_env_name: 62 error = "'%s' is a reserved environment 
name" % name 63 if exists(prefix): 64 if isdir(prefix) and not os.listdir(prefix): 65 return None 66 error = "prefix already exists: %s" % prefix 67 68 if error: 69 raise CondaValueError(error, json) 70 71 72 def clone(src_arg, dst_prefix, json=False, quiet=False, index_args=None): 73 if os.sep in src_arg: 74 src_prefix = abspath(src_arg) 75 if not isdir(src_prefix): 76 raise DirectoryNotFoundError('no such directory: %s' % src_arg, json) 77 else: 78 src_prefix = common.find_prefix_name(src_arg) 79 if src_prefix is None: 80 raise CondaEnvironmentError('could not find environment: %s' % 81 src_arg, json) 82 83 if not json: 84 print("Source: %s" % src_prefix) 85 print("Destination: %s" % dst_prefix) 86 87 with common.json_progress_bars(json=json and not quiet): 88 actions, untracked_files = clone_env(src_prefix, dst_prefix, 89 verbose=not json, 90 quiet=quiet, 91 index_args=index_args) 92 93 if json: 94 common.stdout_json_success( 95 actions=actions, 96 untracked_files=list(untracked_files), 97 src_prefix=src_prefix, 98 dst_prefix=dst_prefix 99 ) 100 101 102 def print_activate(arg): 103 shell = find_parent_shell(path=False) 104 print("#") 105 print("# To activate this environment, use:") 106 if shell in ["powershell.exe", "cmd.exe"]: 107 print("# > activate %s" % arg) 108 print("#") 109 print("# To deactivate this environment, use:") 110 print("# > deactivate") 111 else: 112 print("# $ source activate %s" % arg) 113 print("#") 114 print("# To deactivate this environment, use:") 115 print("# $ source deactivate") 116 print("#") 117 118 119 def get_revision(arg, json=False): 120 try: 121 return int(arg) 122 except ValueError: 123 CondaValueError("expected revision number, not: '%s'" % arg, json) 124 125 126 def install(args, parser, command='install'): 127 """ 128 conda install, conda update, and conda create 129 """ 130 newenv = bool(command == 'create') 131 isupdate = bool(command == 'update') 132 isinstall = bool(command == 'install') 133 if newenv: 134 common.ensure_name_or_prefix(args, command) 135 prefix = common.get_prefix(args, search=not newenv) 136 if newenv: 137 check_prefix(prefix, json=args.json) 138 if force_32bit and is_root_prefix(prefix): 139 raise CondaValueError("cannot use CONDA_FORCE_32BIT=1 in root env") 140 if isupdate and not (args.file or args.all or args.packages): 141 raise CondaValueError("""no package names supplied 142 # If you want to update to a newer version of Anaconda, type: 143 # 144 # $ conda update --prefix %s anaconda 145 """ % prefix, args.json) 146 147 linked = install_linked(prefix) 148 lnames = {name_dist(d) for d in linked} 149 if isupdate and not args.all: 150 for name in args.packages: 151 common.arg2spec(name, json=args.json, update=True) 152 if name not in lnames: 153 raise PackageNotFoundError("Package '%s' is not installed in %s" % 154 (name, prefix), args.json) 155 156 if newenv and not args.no_default_packages: 157 default_packages = create_default_packages[:] 158 # Override defaults if they are specified at the command line 159 for default_pkg in create_default_packages: 160 if any(pkg.split('=')[0] == default_pkg for pkg in args.packages): 161 default_packages.remove(default_pkg) 162 args.packages.extend(default_packages) 163 else: 164 default_packages = [] 165 166 common.ensure_use_local(args) 167 common.ensure_override_channels_requires_channel(args) 168 index_args = { 169 'use_cache': args.use_index_cache, 170 'channel_urls': args.channel or (), 171 'unknown': args.unknown, 172 'prepend': not args.override_channels, 173 'use_local': 
args.use_local 174 } 175 176 specs = [] 177 if args.file: 178 for fpath in args.file: 179 specs.extend(common.specs_from_url(fpath, json=args.json)) 180 if '@EXPLICIT' in specs: 181 explicit(specs, prefix, verbose=not args.quiet, index_args=index_args) 182 return 183 elif getattr(args, 'all', False): 184 if not linked: 185 raise PackageNotFoundError("There are no packages installed in the " 186 "prefix %s" % prefix) 187 specs.extend(nm for nm in lnames) 188 specs.extend(common.specs_from_args(args.packages, json=args.json)) 189 190 if isinstall and args.revision: 191 get_revision(args.revision, json=args.json) 192 elif not (newenv and args.clone): 193 common.check_specs(prefix, specs, json=args.json, 194 create=(command == 'create')) 195 196 num_cp = sum(s.endswith('.tar.bz2') for s in args.packages) 197 if num_cp: 198 if num_cp == len(args.packages): 199 explicit(args.packages, prefix, verbose=not args.quiet) 200 return 201 else: 202 raise CondaValueError("cannot mix specifications with conda package" 203 " filenames", args.json) 204 205 # handle tar file containing conda packages 206 if len(args.packages) == 1: 207 tar_path = args.packages[0] 208 if tar_path.endswith('.tar'): 209 install_tar(prefix, tar_path, verbose=not args.quiet) 210 return 211 212 if newenv and args.clone: 213 if set(args.packages) - set(default_packages): 214 raise TooManyArgumentsError('did not expect any arguments for' 215 '--clone', args.json) 216 217 clone(args.clone, prefix, json=args.json, quiet=args.quiet, index_args=index_args) 218 append_env(prefix) 219 touch_nonadmin(prefix) 220 if not args.json: 221 print_activate(args.name if args.name else prefix) 222 return 223 224 index = get_index(channel_urls=index_args['channel_urls'], prepend=index_args['prepend'], 225 platform=None, use_local=index_args['use_local'], 226 use_cache=index_args['use_cache'], unknown=index_args['unknown'], 227 prefix=prefix) 228 r = Resolve(index) 229 ospecs = list(specs) 230 add_defaults_to_specs(r, linked, specs, update=isupdate) 231 232 # Don't update packages that are already up-to-date 233 if isupdate and not (args.all or args.force): 234 orig_packages = args.packages[:] 235 installed_metadata = [is_linked(prefix, dist) for dist in linked] 236 for name in orig_packages: 237 vers_inst = [m['version'] for m in installed_metadata if m['name'] == name] 238 build_inst = [m['build_number'] for m in installed_metadata if m['name'] == name] 239 240 try: 241 assert len(vers_inst) == 1, name 242 assert len(build_inst) == 1, name 243 except AssertionError as e: 244 raise CondaAssertionError('', e, args.json) 245 246 pkgs = sorted(r.get_pkgs(name)) 247 if not pkgs: 248 # Shouldn't happen? 
249 continue 250 latest = pkgs[-1] 251 252 if (latest.version == vers_inst[0] and 253 latest.build_number == build_inst[0]): 254 args.packages.remove(name) 255 if not args.packages: 256 from .main_list import print_packages 257 258 if not args.json: 259 regex = '^(%s)$' % '|'.join(orig_packages) 260 print('# All requested packages already installed.') 261 print_packages(prefix, regex) 262 else: 263 common.stdout_json_success( 264 message='All requested packages already installed.') 265 return 266 267 if args.force: 268 args.no_deps = True 269 270 if args.no_deps: 271 only_names = set(s.split()[0] for s in ospecs) 272 else: 273 only_names = None 274 275 if not isdir(prefix) and not newenv: 276 if args.mkdir: 277 try: 278 os.makedirs(prefix) 279 except OSError: 280 raise CondaOSError("Error: could not create directory: %s" % 281 prefix, args.json) 282 else: 283 raise CondaEnvironmentError("""\ 284 environment does not exist: %s 285 # 286 # Use 'conda create' to create an environment before installing packages 287 # into it. 288 #""" % prefix, args.json) 289 290 shortcuts = args.shortcuts if hasattr(args, "shortcuts") else None 291 292 try: 293 if isinstall and args.revision: 294 actions = revert_actions(prefix, get_revision(args.revision)) 295 else: 296 with common.json_progress_bars(json=args.json and not args.quiet): 297 actions = install_actions(prefix, index, specs, 298 force=args.force, 299 only_names=only_names, 300 pinned=args.pinned, 301 always_copy=args.copy, 302 minimal_hint=args.alt_hint, 303 update_deps=args.update_deps, 304 shortcuts=shortcuts) 305 except NoPackagesFoundError as e: 306 error_message = [e.args[0]] 307 308 if isupdate and args.all: 309 # Packages not found here just means they were installed but 310 # cannot be found any more. Just skip them. 311 if not args.json: 312 print("Warning: %s, skipping" % error_message) 313 else: 314 # Not sure what to do here 315 pass 316 args._skip = getattr(args, '_skip', ['anaconda']) 317 for pkg in e.pkgs: 318 p = pkg.split()[0] 319 if p in args._skip: 320 # Avoid infinite recursion. 
This can happen if a spec 321 # comes from elsewhere, like --file 322 raise 323 args._skip.append(p) 324 325 return install(args, parser, command=command) 326 else: 327 packages = {index[fn]['name'] for fn in index} 328 329 nfound = 0 330 for pkg in sorted(e.pkgs): 331 pkg = pkg.split()[0] 332 if pkg in packages: 333 continue 334 close = get_close_matches(pkg, packages, cutoff=0.7) 335 if not close: 336 continue 337 if nfound == 0: 338 error_message.append("\n\nClose matches found; did you mean one of these?\n") 339 error_message.append("\n %s: %s" % (pkg, ', '.join(close))) 340 nfound += 1 341 error_message.append('\n\nYou can search for packages on anaconda.org with') 342 error_message.append('\n\n anaconda search -t conda %s' % pkg) 343 if len(e.pkgs) > 1: 344 # Note this currently only happens with dependencies not found 345 error_message.append('\n\n(and similarly for the other packages)') 346 347 if not find_executable('anaconda', include_others=False): 348 error_message.append('\n\nYou may need to install the anaconda-client') 349 error_message.append(' command line client with') 350 error_message.append('\n\n conda install anaconda-client') 351 352 pinned_specs = get_pinned_specs(prefix) 353 if pinned_specs: 354 path = join(prefix, 'conda-meta', 'pinned') 355 error_message.append("\n\nNote that you have pinned specs in %s:" % path) 356 error_message.append("\n\n %r" % pinned_specs) 357 358 error_message = ''.join(error_message) 359 360 raise PackageNotFoundError(error_message, args.json) 361 362 except (UnsatisfiableError, SystemExit) as e: 363 # Unsatisfiable package specifications/no such revision/import error 364 if e.args and 'could not import' in e.args[0]: 365 raise CondaImportError('', e, args.json) 366 raise CondaError('UnsatisfiableSpecifications', e, args.json) 367 368 if nothing_to_do(actions): 369 from .main_list import print_packages 370 371 if not args.json: 372 regex = '^(%s)$' % '|'.join(s.split()[0] for s in ospecs) 373 print('\n# All requested packages already installed.') 374 print_packages(prefix, regex) 375 else: 376 common.stdout_json_success( 377 message='All requested packages already installed.') 378 return 379 380 if not args.json: 381 print() 382 print("Package plan for installation in environment %s:" % prefix) 383 display_actions(actions, index, show_channel_urls=args.show_channel_urls) 384 385 if command in {'install', 'update'}: 386 common.check_write(command, prefix) 387 388 if not args.json: 389 common.confirm_yn(args) 390 elif args.dry_run: 391 common.stdout_json_success(actions=actions, dry_run=True) 392 raise DryRunExit 393 394 with common.json_progress_bars(json=args.json and not args.quiet): 395 try: 396 execute_actions(actions, index, verbose=not args.quiet) 397 if not (command == 'update' and args.all): 398 try: 399 with open(join(prefix, 'conda-meta', 'history'), 'a') as f: 400 f.write('# %s specs: %s\n' % (command, specs)) 401 except IOError as e: 402 if e.errno == errno.EACCES: 403 log.debug("Can't write the history file") 404 else: 405 raise CondaIOError("Can't write the history file") 406 407 except RuntimeError as e: 408 if len(e.args) > 0 and "LOCKERROR" in e.args[0]: 409 raise LockError('Already locked', e, args.json) 410 else: 411 raise CondaRuntimeError('RuntimeError', e, args.json) 412 except SystemExit as e: 413 raise CondaSystemExit('Exiting', e, args.json) 414 415 if newenv: 416 append_env(prefix) 417 touch_nonadmin(prefix) 418 if not args.json: 419 print_activate(args.name if args.name else prefix) 420 421 if args.json: 422 
common.stdout_json_success(actions=actions) 423 [end of conda/cli/install.py] [start of conda/plan.py] 1 """ 2 Handle the planning of installs and their execution. 3 4 NOTE: 5 conda.install uses canonical package names in its interface functions, 6 whereas conda.resolve uses package filenames, as those are used as index 7 keys. We try to keep fixes to this "impedance mismatch" local to this 8 module. 9 """ 10 11 from __future__ import print_function, division, absolute_import 12 13 import os 14 import sys 15 from collections import defaultdict 16 from logging import getLogger 17 from os.path import abspath, basename, dirname, join, exists 18 19 from . import instructions as inst 20 from .config import (always_copy as config_always_copy, channel_priority, 21 show_channel_urls as config_show_channel_urls, is_offline, 22 root_dir, allow_softlinks, default_python, auto_update_conda, 23 track_features, foreign, url_channel, canonical_channel_name) 24 from .exceptions import (TooFewArgumentsError, InstallError, RemoveError, CondaIndexError, 25 CondaRuntimeError) 26 from .history import History 27 from .install import (dist2quad, LINK_HARD, link_name_map, name_dist, is_fetched, 28 is_extracted, is_linked, find_new_location, dist2filename, LINK_COPY, 29 LINK_SOFT, try_hard_link, rm_rf) 30 from .resolve import MatchSpec, Resolve, Package 31 from .utils import md5_file, human_bytes 32 33 # For backwards compatibility 34 35 log = getLogger(__name__) 36 37 def print_dists(dists_extras): 38 fmt = " %-27s|%17s" 39 print(fmt % ('package', 'build')) 40 print(fmt % ('-' * 27, '-' * 17)) 41 for dist, extra in dists_extras: 42 dist = dist2quad(dist) 43 line = fmt % (dist[0]+'-'+dist[1], dist[2]) 44 if extra: 45 line += extra 46 print(line) 47 48 49 def display_actions(actions, index, show_channel_urls=None): 50 if show_channel_urls is None: 51 show_channel_urls = config_show_channel_urls 52 53 def channel_str(rec): 54 if rec.get('schannel'): 55 return rec['schannel'] 56 if rec.get('url'): 57 return url_channel(rec['url'])[1] 58 if rec.get('channel'): 59 return canonical_channel_name(rec['channel']) 60 return '<unknown>' 61 62 def channel_filt(s): 63 if show_channel_urls is False: 64 return '' 65 if show_channel_urls is None and s == 'defaults': 66 return '' 67 return s 68 69 if actions.get(inst.FETCH): 70 print("\nThe following packages will be downloaded:\n") 71 72 disp_lst = [] 73 for dist in actions[inst.FETCH]: 74 info = index[dist + '.tar.bz2'] 75 extra = '%15s' % human_bytes(info['size']) 76 schannel = channel_filt(channel_str(info)) 77 if schannel: 78 extra += ' ' + schannel 79 disp_lst.append((dist, extra)) 80 print_dists(disp_lst) 81 82 if index and len(actions[inst.FETCH]) > 1: 83 num_bytes = sum(index[dist + '.tar.bz2']['size'] 84 for dist in actions[inst.FETCH]) 85 print(' ' * 4 + '-' * 60) 86 print(" " * 43 + "Total: %14s" % human_bytes(num_bytes)) 87 88 # package -> [oldver-oldbuild, newver-newbuild] 89 packages = defaultdict(lambda: list(('', ''))) 90 features = defaultdict(lambda: list(('', ''))) 91 channels = defaultdict(lambda: list(('', ''))) 92 records = defaultdict(lambda: list((None, None))) 93 linktypes = {} 94 95 for arg in actions.get(inst.LINK, []): 96 dist, lt, shortcuts = inst.split_linkarg(arg) 97 fkey = dist + '.tar.bz2' 98 rec = index[fkey] 99 pkg = rec['name'] 100 channels[pkg][1] = channel_str(rec) 101 packages[pkg][1] = rec['version'] + '-' + rec['build'] 102 records[pkg][1] = Package(fkey, rec) 103 linktypes[pkg] = lt 104 features[pkg][1] = rec.get('features', '') 105 for 
arg in actions.get(inst.UNLINK, []): 106 dist, lt, shortcuts = inst.split_linkarg(arg) 107 fkey = dist + '.tar.bz2' 108 rec = index.get(fkey) 109 if rec is None: 110 pkg, ver, build, schannel = dist2quad(dist) 111 rec = dict(name=pkg, version=ver, build=build, channel=None, 112 schannel='<unknown>', 113 build_number=int(build) if build.isdigit() else 0) 114 pkg = rec['name'] 115 channels[pkg][0] = channel_str(rec) 116 packages[pkg][0] = rec['version'] + '-' + rec['build'] 117 records[pkg][0] = Package(fkey, rec) 118 features[pkg][0] = rec.get('features', '') 119 120 # Put a minimum length here---. .--For the : 121 # v v 122 123 new = {p for p in packages if not packages[p][0]} 124 removed = {p for p in packages if not packages[p][1]} 125 # New packages are actually listed in the left-hand column, 126 # so let's move them over there 127 for pkg in new: 128 for var in (packages, features, channels, records): 129 var[pkg] = var[pkg][::-1] 130 131 if packages: 132 maxpkg = max(len(p) for p in packages) + 1 133 maxoldver = max(len(p[0]) for p in packages.values()) 134 maxnewver = max(len(p[1]) for p in packages.values()) 135 maxoldfeatures = max(len(p[0]) for p in features.values()) 136 maxnewfeatures = max(len(p[1]) for p in features.values()) 137 maxoldchannels = max(len(channel_filt(p[0])) for p in channels.values()) 138 maxnewchannels = max(len(channel_filt(p[1])) for p in channels.values()) 139 updated = set() 140 downgraded = set() 141 channeled = set() 142 oldfmt = {} 143 newfmt = {} 144 for pkg in packages: 145 # That's right. I'm using old-style string formatting to generate a 146 # string with new-style string formatting. 147 oldfmt[pkg] = '{pkg:<%s} {vers[0]:<%s}' % (maxpkg, maxoldver) 148 if maxoldchannels: 149 oldfmt[pkg] += ' {channels[0]:<%s}' % maxoldchannels 150 if features[pkg][0]: 151 oldfmt[pkg] += ' [{features[0]:<%s}]' % maxoldfeatures 152 153 lt = linktypes.get(pkg, LINK_HARD) 154 lt = '' if lt == LINK_HARD else (' (%s)' % link_name_map[lt]) 155 if pkg in removed or pkg in new: 156 oldfmt[pkg] += lt 157 continue 158 159 newfmt[pkg] = '{vers[1]:<%s}' % maxnewver 160 if maxnewchannels: 161 newfmt[pkg] += ' {channels[1]:<%s}' % maxnewchannels 162 if features[pkg][1]: 163 newfmt[pkg] += ' [{features[1]:<%s}]' % maxnewfeatures 164 newfmt[pkg] += lt 165 166 P0 = records[pkg][0] 167 P1 = records[pkg][1] 168 pri0 = P0.priority 169 pri1 = P1.priority 170 if pri0 is None or pri1 is None: 171 pri0 = pri1 = 1 172 try: 173 if str(P1.version) == 'custom': 174 newver = str(P0.version) != 'custom' 175 oldver = not newver 176 else: 177 # <= here means that unchanged packages will be put in updated 178 newver = P0.norm_version < P1.norm_version 179 oldver = P0.norm_version > P1.norm_version 180 except TypeError: 181 newver = P0.version < P1.version 182 oldver = P0.version > P1.version 183 oldbld = P0.build_number > P1.build_number 184 newbld = P0.build_number < P1.build_number 185 if channel_priority and pri1 < pri0 and (oldver or not newver and not newbld): 186 channeled.add(pkg) 187 elif newver: 188 updated.add(pkg) 189 elif pri1 < pri0 and (oldver or not newver and oldbld): 190 channeled.add(pkg) 191 elif oldver: 192 downgraded.add(pkg) 193 elif not oldbld: 194 updated.add(pkg) 195 else: 196 downgraded.add(pkg) 197 198 arrow = ' --> ' 199 lead = ' ' * 4 200 201 def format(s, pkg): 202 chans = [channel_filt(c) for c in channels[pkg]] 203 return lead + s.format(pkg=pkg + ':', vers=packages[pkg], 204 channels=chans, features=features[pkg]) 205 206 if new: 207 print("\nThe following NEW 
packages will be INSTALLED:\n") 208 for pkg in sorted(new): 209 # New packages have been moved to the "old" column for display 210 print(format(oldfmt[pkg], pkg)) 211 212 if removed: 213 print("\nThe following packages will be REMOVED:\n") 214 for pkg in sorted(removed): 215 print(format(oldfmt[pkg], pkg)) 216 217 if updated: 218 print("\nThe following packages will be UPDATED:\n") 219 for pkg in sorted(updated): 220 print(format(oldfmt[pkg] + arrow + newfmt[pkg], pkg)) 221 222 if channeled: 223 print("\nThe following packages will be SUPERCEDED by a higher-priority channel:\n") 224 for pkg in sorted(channeled): 225 print(format(oldfmt[pkg] + arrow + newfmt[pkg], pkg)) 226 227 if downgraded: 228 print("\nThe following packages will be DOWNGRADED due to dependency conflicts:\n") 229 for pkg in sorted(downgraded): 230 print(format(oldfmt[pkg] + arrow + newfmt[pkg], pkg)) 231 232 print() 233 234 235 def nothing_to_do(actions): 236 for op in inst.action_codes: 237 if actions.get(op): 238 return False 239 return True 240 241 242 def add_unlink(actions, dist): 243 if inst.UNLINK not in actions: 244 actions[inst.UNLINK] = [] 245 actions[inst.UNLINK].append(dist) 246 247 248 def plan_from_actions(actions): 249 if 'op_order' in actions and actions['op_order']: 250 op_order = actions['op_order'] 251 else: 252 op_order = inst.action_codes 253 254 assert inst.PREFIX in actions and actions[inst.PREFIX] 255 res = [('PREFIX', '%s' % actions[inst.PREFIX])] 256 257 if sys.platform == 'win32': 258 # Always link/unlink menuinst first on windows in case a subsequent 259 # package tries to import it to create/remove a shortcut 260 261 for op in (inst.UNLINK, inst.FETCH, inst.EXTRACT, inst.LINK): 262 if op in actions: 263 pkgs = [] 264 for pkg in actions[op]: 265 if 'menuinst' in pkg: 266 res.append((op, pkg)) 267 else: 268 pkgs.append(pkg) 269 actions[op] = pkgs 270 271 log.debug("Adding plans for operations: {0}".format(op_order)) 272 for op in op_order: 273 if op not in actions: 274 log.debug("action {0} not in actions".format(op)) 275 continue 276 if not actions[op]: 277 log.debug("action {0} has None value".format(op)) 278 continue 279 if '_' not in op: 280 res.append((inst.PRINT, '%sing packages ...' % op.capitalize())) 281 elif op.startswith('RM_'): 282 res.append((inst.PRINT, 'Pruning %s packages from the cache ...' 
% op[3:].lower())) 283 if op in inst.progress_cmds: 284 res.append((inst.PROGRESS, '%d' % len(actions[op]))) 285 for arg in actions[op]: 286 log.debug("appending value {0} for action {1}".format(arg, op)) 287 res.append((op, arg)) 288 289 return res 290 291 292 # force_linked_actions has now been folded into this function, and is enabled by 293 # supplying an index and setting force=True 294 def ensure_linked_actions(dists, prefix, index=None, force=False, 295 always_copy=False, shortcuts=False): 296 actions = defaultdict(list) 297 actions[inst.PREFIX] = prefix 298 actions['op_order'] = (inst.RM_FETCHED, inst.FETCH, inst.RM_EXTRACTED, 299 inst.EXTRACT, inst.UNLINK, inst.LINK, inst.SYMLINK_CONDA) 300 for dist in dists: 301 fetched_in = is_fetched(dist) 302 extracted_in = is_extracted(dist) 303 304 if fetched_in and index is not None: 305 # Test the MD5, and possibly re-fetch 306 fn = dist + '.tar.bz2' 307 try: 308 if md5_file(fetched_in) != index[fn]['md5']: 309 # RM_FETCHED now removes the extracted data too 310 actions[inst.RM_FETCHED].append(dist) 311 # Re-fetch, re-extract, re-link 312 fetched_in = extracted_in = None 313 force = True 314 except KeyError: 315 sys.stderr.write('Warning: cannot lookup MD5 of: %s' % fn) 316 317 if not force and is_linked(prefix, dist): 318 continue 319 320 if extracted_in and force: 321 # Always re-extract in the force case 322 actions[inst.RM_EXTRACTED].append(dist) 323 extracted_in = None 324 325 # Otherwise we need to extract, and possibly fetch 326 if not extracted_in and not fetched_in: 327 # If there is a cache conflict, clean it up 328 fetched_in, conflict = find_new_location(dist) 329 fetched_in = join(fetched_in, dist2filename(dist)) 330 if conflict is not None: 331 actions[inst.RM_FETCHED].append(conflict) 332 actions[inst.FETCH].append(dist) 333 334 if not extracted_in: 335 actions[inst.EXTRACT].append(dist) 336 337 fetched_dist = extracted_in or fetched_in[:-8] 338 fetched_dir = dirname(fetched_dist) 339 340 try: 341 # Determine what kind of linking is necessary 342 if not extracted_in: 343 # If not already extracted, create some dummy 344 # data to test with 345 rm_rf(fetched_dist) 346 ppath = join(fetched_dist, 'info') 347 os.makedirs(ppath) 348 index_json = join(ppath, 'index.json') 349 with open(index_json, 'w'): 350 pass 351 if config_always_copy or always_copy: 352 lt = LINK_COPY 353 elif try_hard_link(fetched_dir, prefix, dist): 354 lt = LINK_HARD 355 elif allow_softlinks and sys.platform != 'win32': 356 lt = LINK_SOFT 357 else: 358 lt = LINK_COPY 359 actions[inst.LINK].append('%s %d %s' % (dist, lt, shortcuts)) 360 361 except (OSError, IOError): 362 actions[inst.LINK].append('%s %d %s' % (dist, LINK_COPY, shortcuts)) 363 finally: 364 if not extracted_in: 365 # Remove the dummy data 366 try: 367 rm_rf(fetched_dist) 368 except (OSError, IOError): 369 pass 370 371 return actions 372 373 # ------------------------------------------------------------------- 374 375 376 def is_root_prefix(prefix): 377 return abspath(prefix) == abspath(root_dir) 378 379 380 def add_defaults_to_specs(r, linked, specs, update=False): 381 # TODO: This should use the pinning mechanism. But don't change the API: 382 # cas uses it. 
383 if r.explicit(specs): 384 return 385 log.debug('H0 specs=%r' % specs) 386 linked = [d if d.endswith('.tar.bz2') else d + '.tar.bz2' for d in linked] 387 names_linked = {r.index[fn]['name']: fn for fn in linked if fn in r.index} 388 mspecs = list(map(MatchSpec, specs)) 389 390 for name, def_ver in [('python', default_python), 391 # Default version required, but only used for Python 392 ('lua', None)]: 393 if any(s.name == name and not s.is_simple() for s in mspecs): 394 # if any of the specifications mention the Python/Numpy version, 395 # we don't need to add the default spec 396 log.debug('H1 %s' % name) 397 continue 398 399 depends_on = {s for s in mspecs if r.depends_on(s, name)} 400 any_depends_on = bool(depends_on) 401 log.debug('H2 %s %s' % (name, any_depends_on)) 402 403 if not any_depends_on: 404 # if nothing depends on Python/Numpy AND the Python/Numpy is not 405 # specified, we don't need to add the default spec 406 log.debug('H2A %s' % name) 407 continue 408 409 if any(s.is_exact() for s in depends_on): 410 # If something depends on Python/Numpy, but the spec is very 411 # explicit, we also don't need to add the default spec 412 log.debug('H2B %s' % name) 413 continue 414 415 if name in names_linked: 416 # if Python/Numpy is already linked, we add that instead of the 417 # default 418 log.debug('H3 %s' % name) 419 fkey = names_linked[name] 420 info = r.index[fkey] 421 ver = '.'.join(info['version'].split('.', 2)[:2]) 422 spec = '%s %s* (target=%s)' % (info['name'], ver, fkey) 423 specs.append(spec) 424 continue 425 426 if name == 'python' and def_ver.startswith('3.'): 427 # Don't include Python 3 in the specs if this is the Python 3 428 # version of conda. 429 continue 430 431 if def_ver is not None: 432 specs.append('%s %s*' % (name, def_ver)) 433 log.debug('HF specs=%r' % specs) 434 435 436 def get_pinned_specs(prefix): 437 pinfile = join(prefix, 'conda-meta', 'pinned') 438 if not exists(pinfile): 439 return [] 440 with open(pinfile) as f: 441 return [i for i in f.read().strip().splitlines() if i and not i.strip().startswith('#')] 442 443 444 def install_actions(prefix, index, specs, force=False, only_names=None, always_copy=False, 445 pinned=True, minimal_hint=False, update_deps=True, prune=False, 446 shortcuts=False): 447 r = Resolve(index) 448 linked = r.installed 449 450 if pinned: 451 pinned_specs = get_pinned_specs(prefix) 452 log.debug("Pinned specs=%s" % pinned_specs) 453 specs += pinned_specs 454 455 # Only add a conda spec if conda and conda-env are not in the specs. 456 # Also skip this step if we're offline. 457 root_only = ('conda', 'conda-env') 458 mss = [MatchSpec(s) for s in specs if s.startswith(root_only)] 459 mss = [ms for ms in mss if ms.name in root_only] 460 if is_root_prefix(prefix): 461 if auto_update_conda and not is_offline() and not mss: 462 from . 
import __version__ as conda_version 463 specs.append('conda >=' + conda_version) 464 specs.append('conda-env') 465 elif basename(prefix).startswith('_'): 466 # anything (including conda) can be installed into environments 467 # starting with '_', mainly to allow conda-build to build conda 468 pass 469 elif mss: 470 raise InstallError("Error: 'conda' can only be installed into the root environment") 471 472 must_have = {} 473 if track_features: 474 specs.extend(x + '@' for x in track_features) 475 476 pkgs = r.install(specs, linked, update_deps=update_deps) 477 478 for fn in pkgs: 479 dist = fn[:-8] 480 name = name_dist(dist) 481 if not name or only_names and name not in only_names: 482 continue 483 must_have[name] = dist 484 485 if is_root_prefix(prefix): 486 for name in foreign: 487 if name in must_have: 488 del must_have[name] 489 elif basename(prefix).startswith('_'): 490 # anything (including conda) can be installed into environments 491 # starting with '_', mainly to allow conda-build to build conda 492 pass 493 494 elif any(s in must_have for s in root_only): 495 # the solver scheduled an install of conda, but it wasn't in the 496 # specs, so it must have been a dependency. 497 specs = [s for s in specs if r.depends_on(s, root_only)] 498 if specs: 499 raise InstallError("""\ 500 Error: the following specs depend on 'conda' and can only be installed 501 into the root environment: %s""" % (' '.join(specs),)) 502 linked = [r.package_name(s) for s in linked] 503 linked = [s for s in linked if r.depends_on(s, root_only)] 504 if linked: 505 raise InstallError("""\ 506 Error: one or more of the packages already installed depend on 'conda' 507 and should only be installed in the root environment: %s 508 These packages need to be removed before conda can proceed.""" % (' '.join(linked),)) 509 raise InstallError("Error: 'conda' can only be installed into the " 510 "root environment") 511 512 smh = r.dependency_sort(must_have) 513 514 actions = ensure_linked_actions( 515 smh, prefix, 516 index=index if force else None, 517 force=force, always_copy=always_copy, 518 shortcuts=shortcuts) 519 520 if actions[inst.LINK]: 521 actions[inst.SYMLINK_CONDA] = [root_dir] 522 523 for fkey in sorted(linked): 524 dist = fkey[:-8] 525 name = name_dist(dist) 526 replace_existing = name in must_have and dist != must_have[name] 527 prune_it = prune and dist not in smh 528 if replace_existing or prune_it: 529 add_unlink(actions, dist) 530 531 return actions 532 533 534 def remove_actions(prefix, specs, index, force=False, pinned=True): 535 r = Resolve(index) 536 linked = r.installed 537 538 if force: 539 mss = list(map(MatchSpec, specs)) 540 nlinked = {r.package_name(fn): fn[:-8] 541 for fn in linked 542 if not any(r.match(ms, fn) for ms in mss)} 543 else: 544 add_defaults_to_specs(r, linked, specs, update=True) 545 nlinked = {r.package_name(fn): fn[:-8] for fn in r.remove(specs, linked)} 546 547 if pinned: 548 pinned_specs = get_pinned_specs(prefix) 549 log.debug("Pinned specs=%s" % pinned_specs) 550 551 linked = {r.package_name(fn): fn[:-8] for fn in linked} 552 553 actions = ensure_linked_actions(r.dependency_sort(nlinked), prefix) 554 for old_fn in reversed(r.dependency_sort(linked)): 555 dist = old_fn + '.tar.bz2' 556 name = r.package_name(dist) 557 if old_fn == nlinked.get(name, ''): 558 continue 559 if pinned and any(r.match(ms, dist) for ms in pinned_specs): 560 msg = "Cannot remove %s becaue it is pinned. Use --no-pin to override." 
561 raise CondaRuntimeError(msg % dist) 562 if name == 'conda' and name not in nlinked: 563 if any(s.split(' ', 1)[0] == 'conda' for s in specs): 564 raise RemoveError("'conda' cannot be removed from the root environment") 565 else: 566 raise RemoveError("Error: this 'remove' command cannot be executed because it\n" 567 "would require removing 'conda' dependencies") 568 add_unlink(actions, old_fn) 569 570 return actions 571 572 573 def remove_features_actions(prefix, index, features): 574 r = Resolve(index) 575 linked = r.installed 576 577 actions = defaultdict(list) 578 actions[inst.PREFIX] = prefix 579 _linked = [d + '.tar.bz2' for d in linked] 580 to_link = [] 581 for dist in sorted(linked): 582 fn = dist + '.tar.bz2' 583 if fn not in index: 584 continue 585 if r.track_features(fn).intersection(features): 586 add_unlink(actions, dist) 587 if r.features(fn).intersection(features): 588 add_unlink(actions, dist) 589 subst = r.find_substitute(_linked, features, fn) 590 if subst: 591 to_link.append(subst[:-8]) 592 593 if to_link: 594 actions.update(ensure_linked_actions(to_link, prefix)) 595 return actions 596 597 598 def revert_actions(prefix, revision=-1): 599 h = History(prefix) 600 h.update() 601 try: 602 state = h.get_state(revision) 603 except IndexError: 604 raise CondaIndexError("no such revision: %d" % revision) 605 606 curr = h.get_state() 607 if state == curr: 608 return {} 609 610 actions = ensure_linked_actions(state, prefix) 611 for dist in curr - state: 612 add_unlink(actions, dist) 613 614 return actions 615 616 # ---------------------------- EXECUTION -------------------------- 617 618 619 def execute_actions(actions, index=None, verbose=False): 620 plan = plan_from_actions(actions) 621 with History(actions[inst.PREFIX]): 622 inst.execute_instructions(plan, index, verbose) 623 624 625 def update_old_plan(old_plan): 626 """ 627 Update an old plan object to work with 628 `conda.instructions.execute_instructions` 629 """ 630 plan = [] 631 for line in old_plan: 632 if line.startswith('#'): 633 continue 634 if ' ' not in line: 635 raise TooFewArgumentsError("The instruction '%s' takes at least" 636 " one argument" % line) 637 638 instruction, arg = line.split(' ', 1) 639 plan.append((instruction, arg)) 640 return plan 641 642 643 def execute_plan(old_plan, index=None, verbose=False): 644 """ 645 Deprecated: This should `conda.instructions.execute_instructions` instead 646 """ 647 plan = update_old_plan(old_plan) 648 inst.execute_instructions(plan, index, verbose) 649 650 651 if __name__ == '__main__': 652 # for testing new revert_actions() only 653 from pprint import pprint 654 pprint(dict(revert_actions(sys.prefix, int(sys.argv[1])))) 655 [end of conda/plan.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
conda/conda
0b4b690e6a3e1b5562307b4bda29f2f7cdbb4632
partially revert no default shortcuts

Closes https://github.com/ContinuumIO/navigator/issues/570

This changes shortcuts to again be installed by default, but adds a condarc setting "shortcuts" that can be set to False to disable shortcut installation.

CC @joelhullcio @csoja @multicastmatt
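The behaviour described above amounts to a defaulted boolean config key. Below is a minimal, hypothetical sketch (not conda's actual code) of how such a `.condarc` key is typically consumed, assuming `rc` is the parsed condarc dict and `cli_shortcuts` is the value of an optional CLI flag; it mirrors what the patch further down does with `rc.get('shortcuts', True)` in `conda/config.py` and `args.shortcuts and config.shortcuts` in `conda/cli/install.py`.

```python
# Hypothetical sketch: resolve the "shortcuts" setting from condarc + CLI.
def resolve_shortcuts(rc, cli_shortcuts=None):
    """rc: dict parsed from .condarc; cli_shortcuts: CLI flag value or None."""
    enabled = bool(rc.get('shortcuts', True))   # condarc key, on by default
    if cli_shortcuts is not None:               # CLI flag can only further restrict it
        enabled = enabled and cli_shortcuts
    return enabled

# resolve_shortcuts({'shortcuts': False})           -> False  (condarc opt-out)
# resolve_shortcuts({}, cli_shortcuts=False)        -> False  (--no-shortcuts style flag)
# resolve_shortcuts({})                             -> True   (default behaviour restored)
```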
2016-07-12T20:27:14Z
<patch> diff --git a/conda/cli/install.py b/conda/cli/install.py --- a/conda/cli/install.py +++ b/conda/cli/install.py @@ -31,6 +31,7 @@ display_actions, revert_actions, nothing_to_do, execute_actions) from ..resolve import Resolve from ..utils import find_parent_shell +from .. import config log = logging.getLogger(__name__) @@ -287,7 +288,8 @@ def install(args, parser, command='install'): # into it. #""" % prefix, args.json) - shortcuts = args.shortcuts if hasattr(args, "shortcuts") else None + if hasattr(args, 'shortcuts'): + config.shortcuts = args.shortcuts and config.shortcuts try: if isinstall and args.revision: @@ -300,8 +302,7 @@ def install(args, parser, command='install'): pinned=args.pinned, always_copy=args.copy, minimal_hint=args.alt_hint, - update_deps=args.update_deps, - shortcuts=shortcuts) + update_deps=args.update_deps) except NoPackagesFoundError as e: error_message = [e.args[0]] diff --git a/conda/cli/main_create.py b/conda/cli/main_create.py --- a/conda/cli/main_create.py +++ b/conda/cli/main_create.py @@ -33,9 +33,10 @@ def configure_parser(sub_parsers): ) if on_win: p.add_argument( - "--shortcuts", - action="store_true", - help="Install start menu shortcuts" + "--no-shortcuts", + action="store_false", + help="Prevent installation of start menu shortcuts", + dest='shortcuts', ) add_parser_install(p) diff --git a/conda/cli/main_install.py b/conda/cli/main_install.py --- a/conda/cli/main_install.py +++ b/conda/cli/main_install.py @@ -59,7 +59,16 @@ def configure_parser(sub_parsers): p.add_argument( "--shortcuts", action="store_true", - help="Install start menu shortcuts" + help="Install start menu shortcuts", + dest="shortcuts", + default=True + ) + p.add_argument( + "--no-shortcuts", + action="store_false", + help="Don't install start menu shortcuts", + dest="shortcuts", + default=True ) add_parser_install(p) add_parser_json(p) diff --git a/conda/config.py b/conda/config.py --- a/conda/config.py +++ b/conda/config.py @@ -90,6 +90,7 @@ 'allow_other_channels', 'update_dependencies', 'channel_priority', + 'shortcuts', ] rc_string_keys = [ @@ -459,6 +460,7 @@ def load_condarc(path=None): create_default_packages = list(rc.get('create_default_packages', [])) update_dependencies = bool(rc.get('update_dependencies', True)) channel_priority = bool(rc.get('channel_priority', True)) + shortcuts = bool(rc.get('shortcuts', True)) # ssl_verify can be a boolean value or a filename string ssl_verify = rc.get('ssl_verify', True) diff --git a/conda/install.py b/conda/install.py --- a/conda/install.py +++ b/conda/install.py @@ -50,6 +50,7 @@ from conda.lock import Locked as Locked from conda.utils import win_path_to_unix, url_path from conda.config import remove_binstar_tokens, pkgs_dirs, url_channel + import conda.config as config except ImportError: # Make sure this still works as a standalone script for the Anaconda # installer. 
@@ -140,8 +141,9 @@ def win_conda_bat_redirect(src, dst, shell): raise # bat file redirect - with open(dst+'.bat', 'w') as f: - f.write('@echo off\ncall "%s" %%*\n' % src) + if not os.path.isfile(dst + '.bat'): + with open(dst + '.bat', 'w') as f: + f.write('@echo off\ncall "%s" %%*\n' % src) # TODO: probably need one here for powershell at some point @@ -149,17 +151,20 @@ def win_conda_bat_redirect(src, dst, shell): # set default shell to bash.exe when not provided, as that's most common if not shell: shell = "bash.exe" - with open(dst, "w") as f: - f.write("#!/usr/bin/env bash \n") - if src.endswith("conda"): - f.write('%s "$@"' % shells[shell]['path_to'](src+".exe")) - else: - f.write('source %s "$@"' % shells[shell]['path_to'](src)) - # Make the new file executable - # http://stackoverflow.com/a/30463972/1170370 - mode = os.stat(dst).st_mode - mode |= (mode & 292) >> 2 # copy R bits to X - os.chmod(dst, mode) + + # technically these are "links" - but islink doesn't work on win + if not os.path.isfile(dst): + with open(dst, "w") as f: + f.write("#!/usr/bin/env bash \n") + if src.endswith("conda"): + f.write('%s "$@"' % shells[shell]['path_to'](src+".exe")) + else: + f.write('source %s "$@"' % shells[shell]['path_to'](src)) + # Make the new file executable + # http://stackoverflow.com/a/30463972/1170370 + mode = os.stat(dst).st_mode + mode |= (mode & 292) >> 2 # copy R bits to X + os.chmod(dst, mode) log = logging.getLogger(__name__) stdoutlog = logging.getLogger('stdoutlog') @@ -1019,7 +1024,7 @@ def move_path_to_trash(path, preclean=True): return False -def link(prefix, dist, linktype=LINK_HARD, index=None, shortcuts=False): +def link(prefix, dist, linktype=LINK_HARD, index=None): """ Set up a package in a specified (environment) prefix. We assume that the package has been extracted (using extract() above). 
@@ -1080,7 +1085,7 @@ def link(prefix, dist, linktype=LINK_HARD, index=None, shortcuts=False): if isfile(nonadmin): open(join(prefix, ".nonadmin"), 'w').close() - if shortcuts: + if config.shortcuts: mk_menus(prefix, files, remove=False) if not run_script(prefix, dist, 'post-link'): diff --git a/conda/instructions.py b/conda/instructions.py --- a/conda/instructions.py +++ b/conda/instructions.py @@ -68,15 +68,14 @@ def RM_FETCHED_CMD(state, arg): def split_linkarg(arg): - "Return tuple(dist, linktype, shortcuts)" + "Return tuple(dist, linktype)" parts = arg.split() - return (parts[0], int(LINK_HARD if len(parts) < 2 else parts[1]), - False if len(parts) < 3 else parts[2] == 'True') + return (parts[0], int(LINK_HARD if len(parts) < 2 else parts[1])) def LINK_CMD(state, arg): - dist, lt, shortcuts = split_linkarg(arg) - link(state['prefix'], dist, lt, index=state['index'], shortcuts=shortcuts) + dist, lt = split_linkarg(arg) + link(state['prefix'], dist, lt, index=state['index']) def UNLINK_CMD(state, arg): diff --git a/conda/plan.py b/conda/plan.py --- a/conda/plan.py +++ b/conda/plan.py @@ -93,7 +93,7 @@ def channel_filt(s): linktypes = {} for arg in actions.get(inst.LINK, []): - dist, lt, shortcuts = inst.split_linkarg(arg) + dist, lt = inst.split_linkarg(arg) fkey = dist + '.tar.bz2' rec = index[fkey] pkg = rec['name'] @@ -103,7 +103,7 @@ def channel_filt(s): linktypes[pkg] = lt features[pkg][1] = rec.get('features', '') for arg in actions.get(inst.UNLINK, []): - dist, lt, shortcuts = inst.split_linkarg(arg) + dist, lt = inst.split_linkarg(arg) fkey = dist + '.tar.bz2' rec = index.get(fkey) if rec is None: @@ -292,7 +292,7 @@ def plan_from_actions(actions): # force_linked_actions has now been folded into this function, and is enabled by # supplying an index and setting force=True def ensure_linked_actions(dists, prefix, index=None, force=False, - always_copy=False, shortcuts=False): + always_copy=False): actions = defaultdict(list) actions[inst.PREFIX] = prefix actions['op_order'] = (inst.RM_FETCHED, inst.FETCH, inst.RM_EXTRACTED, @@ -356,10 +356,10 @@ def ensure_linked_actions(dists, prefix, index=None, force=False, lt = LINK_SOFT else: lt = LINK_COPY - actions[inst.LINK].append('%s %d %s' % (dist, lt, shortcuts)) + actions[inst.LINK].append('%s %d' % (dist, lt)) except (OSError, IOError): - actions[inst.LINK].append('%s %d %s' % (dist, LINK_COPY, shortcuts)) + actions[inst.LINK].append('%s %d' % (dist, LINK_COPY)) finally: if not extracted_in: # Remove the dummy data @@ -442,8 +442,7 @@ def get_pinned_specs(prefix): def install_actions(prefix, index, specs, force=False, only_names=None, always_copy=False, - pinned=True, minimal_hint=False, update_deps=True, prune=False, - shortcuts=False): + pinned=True, minimal_hint=False, update_deps=True, prune=False): r = Resolve(index) linked = r.installed @@ -514,8 +513,7 @@ def install_actions(prefix, index, specs, force=False, only_names=None, always_c actions = ensure_linked_actions( smh, prefix, index=index if force else None, - force=force, always_copy=always_copy, - shortcuts=shortcuts) + force=force, always_copy=always_copy) if actions[inst.LINK]: actions[inst.SYMLINK_CONDA] = [root_dir] </patch>
[]
[]
Lightning-AI__lightning-2565
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Can't use None (anymore) in checkpoint_callback ## 🐛 Bug using None in checkpoint_callback now errors out ``` -- Process 0 terminated with the following error: Traceback (most recent call last): File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap fn(i, *args) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 562, in ddp_train q.put(self.checkpoint_callback.best_model_path) AttributeError: 'NoneType' object has no attribute 'best_model_path' ``` ### To Reproduce `trainer = Trainer(checkpoint_callback=None)` Ran into this issue from upgrading to masters, was using masters from a few commits ago before Edit: `False` casuses the same error as well </issue> <code> [start of README.md] 1 <div align="center"> 2 3 ![Logo](docs/source/_images/logos/lightning_logo.svg) 4 5 # PyTorch Lightning 6 7 **The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.** 8 9 10 [![PyPI Status](https://badge.fury.io/py/pytorch-lightning.svg)](https://badge.fury.io/py/pytorch-lightning) 11 [![PyPI Status](https://pepy.tech/badge/pytorch-lightning)](https://pepy.tech/project/pytorch-lightning) 12 [![codecov](https://codecov.io/gh/PyTorchLightning/pytorch-lightning/branch/master/graph/badge.svg)](https://codecov.io/gh/PyTorchLightning/pytorch-lightning) 13 14 [![ReadTheDocs](https://readthedocs.org/projects/pytorch-lightning/badge/?version=stable)](https://pytorch-lightning.readthedocs.io/en/stable/) 15 [![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A) 16 [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE) 17 [![Next Release](https://img.shields.io/badge/Next%20Release-May%2029-<COLOR>.svg)](https://shields.io/) 18 19 <!-- 20 [![CodeFactor](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning/badge)](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning) 21 --> 22 </div> 23 24 --- 25 ## Trending contributors 26 27 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/0)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/0) 28 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/1)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/1) 29 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/2)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/2) 30 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/3)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/3) 31 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/4)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/4) 32 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/5)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/5) 33 
[![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/6)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/6) 34 [![](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/images/7)](https://sourcerer.io/fame/williamFalcon/pytorchlightning/pytorch-lightning/links/7) 35 36 --- 37 38 ## Continuous Integration 39 <center> 40 41 | System / PyTorch ver. | 1.3 (min. req.) | 1.4 | 1.5 (latest) | 42 | :---: | :---: | :---: | :---: | 43 | Conda py3.7 [linux] | ![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg) | ![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg) | ![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg) | 44 | Linux py3.7 [GPU] | - | - | [![Build Status](http://35.192.60.23/api/badges/PyTorchLightning/pytorch-lightning/status.svg)](http://35.192.60.23/PyTorchLightning/pytorch-lightning) | 45 | Linux py3.7 [TPU] | - | - | ![TPU tests](https://github.com/PyTorchLightning/pytorch-lightning/workflows/TPU%20tests/badge.svg) | 46 | Linux py3.6 / py3.7 / py3.8 | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | 47 | OSX py3.6 / py3.7 | - | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | 48 | Windows py3.6 / py3.7 / py3.8 | [![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) |[![CI testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20testing/badge.svg?event=push)](https://github.com/PyTorchLightning/pytorch-lightning/actions?query=workflow%3A%22CI+testing%22) | - | 49 50 </center> 51 52 Simple installation from PyPI 53 ```bash 54 pip install pytorch-lightning 55 ``` 56 57 From Conda 58 ```bash 59 conda install pytorch-lightning -c conda-forge 60 ``` 61 62 ## Docs 63 - [master](https://pytorch-lightning.readthedocs.io/en/latest) 64 - [stable](https://pytorch-lightning.readthedocs.io/en/stable) 65 - [0.8.4](https://pytorch-lightning.readthedocs.io/en/0.8.4/) 66 - [0.8.3](https://pytorch-lightning.readthedocs.io/en/0.8.3/) 67 - [0.8.1](https://pytorch-lightning.readthedocs.io/en/0.8.1/) 68 - [0.7.6](https://pytorch-lightning.readthedocs.io/en/0.7.6/) 69 - [0.7.5](https://pytorch-lightning.readthedocs.io/en/0.7.5/) 70 - [0.7.3](https://pytorch-lightning.readthedocs.io/en/0.7.3/) 71 - [0.7.1](https://pytorch-lightning.readthedocs.io/en/0.7.1/) 72 - [0.6.0](https://pytorch-lightning.readthedocs.io/en/0.6.0/) 73 - [0.5.3](https://pytorch-lightning.readthedocs.io/en/0.5.3.2/) 74 75 ## Refactoring your 
PyTorch code + benefits + full walk-through 76 [![Watch the video](docs/source/_images/general/tutorial_cover.jpg)](https://www.youtube.com/watch?v=QHww1JH7IDU) 77 78 ## Demo 79 Here's a minimal example without a validation or test loop. 80 81 ```python 82 # this is just a plain nn.Module with some structure 83 84 class LitClassifier(pl.LightningModule): 85 86 def __init__(self): 87 super().__init__() 88 self.l1 = torch.nn.Linear(28 * 28, 10) 89 90 def forward(self, x): 91 return torch.relu(self.l1(x.view(x.size(0), -1))) 92 93 def training_step(self, batch, batch_nb): 94 x, y = batch 95 loss = F.cross_entropy(self(x), y) 96 tensorboard_logs = {'train_loss': loss} 97 return {'loss': loss, 'log': tensorboard_logs} 98 99 def configure_optimizers(self): 100 return torch.optim.Adam(self.parameters(), lr=0.02) 101 102 # train! 103 train_loader = DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32) 104 105 model = LitClassifier() 106 trainer = pl.Trainer(gpus=8, precision=16) 107 trainer.fit(model, train_loader) 108 ``` 109 110 Other examples: 111 [MNIST hello world](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=gEulmrbxwaYL) 112 [GAN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=P0bSmCw57aV5) 113 [BERT](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=7uQVI-xv9Ddj) 114 [DQN](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=NWvMLBDySQI5) 115 [MNIST on TPUs](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3) 116 117 ## What is it? 118 [READ THIS QUICK START PAGE](https://pytorch-lightning.readthedocs.io/en/stable/new-project.html) 119 120 Lightning is a way to organize your PyTorch code to decouple the science code from the engineering. 121 It's more of a PyTorch style-guide than a framework. 122 123 In Lightning, you organize your code into 3 distinct categories: 124 125 1. Research code (goes in the LightningModule). 126 2. Engineering code (you delete, and is handled by the Trainer). 127 3. Non-essential research code (logging, etc... this goes in Callbacks). 128 129 Here's an example of how to refactor your research code into a [LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/lightning-module.html). 130 131 ![PT to PL](docs/source/_images/lightning_module/pt_to_pl.png) 132 133 The rest of the code is automated by the [Trainer](https://pytorch-lightning.readthedocs.io/en/latest/trainer.html)! 134 ![PT to PL](docs/source/_images/lightning_module/pt_trainer.png) 135 136 ## Testing Rigour 137 All the automated code by the Trainer is [tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). 138 139 For every PR we test all combinations of: 140 - PyTorch 1.3, 1.4, 1.5 141 - Python 3.6, 3.7, 3.8 142 - Linux, OSX, Windows 143 - Multiple GPUs 144 145 **How does performance compare with vanilla PyTorch?** 146 We have tests to ensure we get the EXACT same results in under 600 ms difference per epoch. In reality, lightning adds about a 300 ms overhead per epoch. 147 [Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks). 148 149 Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts. 150 151 ## How flexible is it? 152 As you see, you're just organizing your PyTorch code - there's no abstraction. 
153 154 And for the stuff that the Trainer abstracts out, you can [override any part](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#extensibility) you want to do things like implement your own distributed training, 16-bit precision, or even a custom backward pass. 155 156 For example, here you could do your own backward pass without worrying about GPUs, TPUs or 16-bit since we already handle it. 157 158 ```python 159 class LitModel(LightningModule): 160 def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx, 161 second_order_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False): 162 optimizer.step() 163 164 def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx): 165 optimizer.zero_grad() 166 ``` 167 168 For anything else you might need, we have an extensive [callback system](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#callbacks) you can use to add arbitrary functionality not implemented by our team in the Trainer. 169 170 ## Who is Lightning for? 171 - Professional researchers 172 - Ph.D. students 173 - Corporate production teams 174 175 If you're just getting into deep learning, we recommend you learn PyTorch first! Once you've implemented a few models, come back and use all the advanced features of Lightning :) 176 177 ## What does lightning control for me? 178 179 Everything in Blue! 180 This is how lightning separates the science (red) from engineering (blue). 181 182 ![Overview](docs/source/_images/general/pl_overview.gif) 183 184 ## How much effort is it to convert? 185 If your code is not a huge mess you should be able to organize it into a LightningModule in less than 1 hour. 186 If your code IS a mess, then you needed to clean up anyhow ;) 187 188 [Check out this step-by-step guide](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09). 189 [Or watch this video](https://www.youtube.com/watch?v=QHww1JH7IDU). 190 191 192 ## Starting a new project? 193 [Use our seed-project aimed at reproducibility!](https://github.com/PytorchLightning/pytorch-lightning-conference-seed) 194 195 ## Why do I want to use lightning? 196 Although your research/production project might start simple, once you add things like GPU AND TPU training, 16-bit precision, etc, you end up spending more time engineering than researching. Lightning automates AND rigorously tests those parts for you. 197 198 ## Support 199 - [8 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, Ph.D. students from top AI labs. 200 - 100+ community contributors. 201 202 Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support. 
203 204 --- 205 206 ## README Table of Contents 207 - [How do I use it](https://github.com/PytorchLightning/pytorch-lightning#how-do-i-do-use-it) 208 - [What lightning automates](https://github.com/PytorchLightning/pytorch-lightning#what-does-lightning-control-for-me) 209 - [Tensorboard integration](https://github.com/PytorchLightning/pytorch-lightning#tensorboard) 210 - [Lightning features](https://github.com/PytorchLightning/pytorch-lightning#lightning-automates-all-of-the-following-each-is-also-configurable) 211 - [Examples](https://github.com/PytorchLightning/pytorch-lightning#examples) 212 - [Tutorials](https://github.com/PytorchLightning/pytorch-lightning#tutorials) 213 - [Asking for help](https://github.com/PytorchLightning/pytorch-lightning#asking-for-help) 214 - [Contributing](https://github.com/PytorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md) 215 - [Bleeding edge install](https://github.com/PytorchLightning/pytorch-lightning#bleeding-edge) 216 - [Lightning Design Principles](https://github.com/PytorchLightning/pytorch-lightning#lightning-design-principles) 217 - [Lightning team](https://github.com/PytorchLightning/pytorch-lightning#lightning-team) 218 - [FAQ](https://github.com/PytorchLightning/pytorch-lightning#faq) 219 220 --- 221 222 ## Realistic example 223 Here's how you would organize a realistic PyTorch project into Lightning. 224 225 ![PT to PL](docs/source/_images/mnist_imgs/pt_to_pl.jpg) 226 227 The LightningModule defines a *system* such as seq-2-seq, GAN, etc... 228 It can ALSO define a simple classifier. 229 230 In summary, you: 231 232 1. Define a [LightningModule](https://pytorch-lightning.rtfd.io/en/latest/lightning-module.html) 233 ```python 234 class LitSystem(pl.LightningModule): 235 236 def __init__(self): 237 super().__init__() 238 # not the best model... 239 self.l1 = torch.nn.Linear(28 * 28, 10) 240 241 def forward(self, x): 242 return torch.relu(self.l1(x.view(x.size(0), -1))) 243 244 def training_step(self, batch, batch_idx): 245 ... 246 ``` 247 248 2. Fit it with a [Trainer](https://pytorch-lightning.rtfd.io/en/latest/pytorch_lightning.trainer.html) 249 ```python 250 from pytorch_lightning import Trainer 251 252 model = LitSystem() 253 254 # most basic trainer, uses good defaults 255 trainer = Trainer() 256 trainer.fit(model) 257 ``` 258 259 [Check out the COLAB demo here](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=HOk9c4_35FKg) 260 261 ## What types of research works? 262 Anything! Remember, that this is just organized PyTorch code. 263 The Training step defines the core complexity found in the training loop. 264 265 #### Could be as complex as a seq2seq 266 267 ```python 268 # define what happens for training here 269 def training_step(self, batch, batch_idx): 270 x, y = batch 271 272 # define your own forward and loss calculation 273 hidden_states = self.encoder(x) 274 275 # even as complex as a seq-2-seq + attn model 276 # (this is just a toy, non-working example to illustrate) 277 start_token = '<SOS>' 278 last_hidden = torch.zeros(...) 
279 loss = 0 280 for step in range(max_seq_len): 281 attn_context = self.attention_nn(hidden_states, start_token) 282 pred = self.decoder(start_token, attn_context, last_hidden) 283 last_hidden = pred 284 pred = self.predict_nn(pred) 285 loss += self.loss(last_hidden, y[step]) 286 287 #toy example as well 288 loss = loss / max_seq_len 289 return {'loss': loss} 290 ``` 291 292 #### Or as basic as CNN image classification 293 294 ```python 295 # define what happens for validation here 296 def validation_step(self, batch, batch_idx): 297 x, y = batch 298 299 # or as basic as a CNN classification 300 out = self(x) 301 loss = my_loss(out, y) 302 return {'loss': loss} 303 ``` 304 305 And without changing a single line of code, you could run on CPUs 306 ```python 307 trainer = Trainer(max_epochs=1) 308 ``` 309 310 311 Or GPUs 312 ```python 313 # 8 GPUs 314 trainer = Trainer(max_epochs=1, gpus=8) 315 316 # 256 GPUs 317 trainer = Trainer(max_epochs=1, gpus=8, num_nodes=32) 318 ``` 319 320 Or TPUs 321 ```python 322 # Distributes TPU core training 323 trainer = Trainer(tpu_cores=8) 324 325 # Single TPU core training 326 trainer = Trainer(tpu_cores=[1]) 327 ``` 328 329 When you're done training, run the test accuracy 330 ```python 331 trainer.test() 332 ``` 333 334 ## Visualization 335 Lightning has out-of-the-box integration with the popular logging/visualizing frameworks 336 337 - [Tensorboard](https://pytorch.org/docs/stable/tensorboard.html) 338 - [MLFlow](https://mlflow.org/) 339 - [Neptune.ai](https://neptune.ai/) 340 - [Comet.ml](https://www.comet.ml/site/) 341 - [Wandb](https://www.wandb.com/) 342 - ... 343 344 ![tensorboard-support](docs/source/_images/general/tf_loss.png) 345 346 347 ## Lightning automates 40+ parts of DL/ML research 348 - GPU training 349 - Distributed GPU (cluster) training 350 - TPU training 351 - EarlyStopping 352 - Logging/Visualizing 353 - Checkpointing 354 - Experiment management 355 - [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases) 356 357 358 ## Running speed 359 Migrating to lightning does not mean compromising on speed! You can expect an overhead of about 300 ms per epoch compared with pure PyTorch. 360 361 362 ## Examples 363 Check out this awesome list of research papers and implementations done with Lightning. 
364 365 - [Contextual Emotion Detection (DoubleDistilBert)](https://github.com/PyTorchLightning/emotion_transformer) 366 - [Generative Adversarial Network](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=TyYOdg8g77P0) 367 - [Hyperparameter optimization with Optuna](https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py) 368 - [Image Inpainting using Partial Convolutions](https://github.com/ryanwongsa/Image-Inpainting) 369 - [MNIST on TPU](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3#scrollTo=BHBz1_AnamN_) 370 - [NER (transformers, TPU, huggingface)](https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D) 371 - [NeuralTexture (CVPR)](https://github.com/PyTorchLightning/neuraltexture) 372 - [Recurrent Attentive Neural Process](https://github.com/PyTorchLightning/attentive-neural-processes) 373 - [Siamese Nets for One-shot Image Recognition](https://github.com/PyTorchLightning/Siamese-Neural-Networks) 374 - [Speech Transformers](https://github.com/PyTorchLightning/speech-transformer-pytorch_lightning) 375 - [Transformers transfer learning (Huggingface)](https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31#scrollTo=yr7eaxkF-djf) 376 - [Transformers text classification](https://github.com/ricardorei/lightning-text-classification) 377 - [VAE Library of over 18+ VAE flavors](https://github.com/AntixK/PyTorch-VAE) 378 - [Finetune BERT, RoBERTa etc on QA Datasets like SQuAD](https://github.com/tshrjn/Finetune-QA/) 379 - [Pytorch-Lightning + Microsoft NNI with Docker](https://github.com/davinnovation/pytorch-boilerplate) 380 381 ## Tutorials 382 Check out our [introduction guide](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) to get started. 383 Or jump straight into [our tutorials](https://pytorch-lightning.readthedocs.io/en/latest/#tutorials). 384 385 --- 386 387 ## Asking for help 388 Welcome to the Lightning community! 389 390 If you have any questions, feel free to: 391 1. [read the docs](https://pytorch-lightning.rtfd.io/en/latest/). 392 2. [Search through the issues](https://github.com/PytorchLightning/pytorch-lightning/issues?utf8=%E2%9C%93&q=my++question). 393 3. [Ask on stackoverflow](https://stackoverflow.com/questions/ask?guided=false) with the tag pytorch-lightning. 394 4. [Join our slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A). 395 396 --- 397 398 ## FAQ 399 **How do I use Lightning for rapid research?** 400 [Here's a walk-through](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html) 401 402 **Why was Lightning created?** 403 Lightning has 3 goals in mind: 404 405 1. Maximal flexibility while abstracting out the common boilerplate across research projects. 406 2. Reproducibility. If all projects use the LightningModule template, it will be much much easier to understand what's going on and where to look! It will also mean every implementation follows a standard format. 407 3. Democratizing PyTorch power-user features. Distributed training? 16-bit? know you need them but don't want to take the time to implement? All good... these come built into Lightning. 408 409 **How does Lightning compare with Ignite and fast.ai?** 410 [Here's a thorough comparison](https://medium.com/@_willfalcon/pytorch-lightning-vs-pytorch-ignite-vs-fast-ai-61dc7480ad8a). 411 412 **Is this another library I have to learn?** 413 Nope! 
We use pure Pytorch everywhere and don't add unnecessary abstractions! 414 415 **Are there plans to support Python 2?** 416 Nope. 417 418 **Are there plans to support virtualenv?** 419 Nope. Please use anaconda or miniconda. 420 ```bash 421 conda activate my_env 422 pip install pytorch-lightning 423 ``` 424 425 ## Custom installation 426 427 ### Bleeding edge 428 429 If you can't wait for the next release, install the most up to date code with: 430 * using GIT (locally clone whole repo with full history) 431 ```bash 432 pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade 433 ``` 434 * using instant zip (last state of the repo without git history) 435 ```bash 436 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/master.zip --upgrade 437 ``` 438 439 ### Any release installation 440 441 You can also install any past release `0.X.Y` from this repository: 442 ```bash 443 pip install https://github.com/PytorchLightning/pytorch-lightning/archive/0.X.Y.zip --upgrade 444 ``` 445 446 ## Lightning team 447 448 #### Leads 449 - William Falcon [(williamFalcon)](https://github.com/williamFalcon) (Lightning founder) 450 - Jirka Borovec [(Borda)](https://github.com/Borda) (ghost :) 451 - Ethan Harris [(ethanwharris)](https://github.com/ethanwharris) (Torchbearer founder) 452 - Matthew Painter [(MattPainter01)](https://github.com/MattPainter01) (Torchbearer founder) 453 - Justus Schock [(justusschock)](https://github.com/justusschock) (Former Core Member PyTorch Ignite) 454 455 #### Core Maintainers 456 457 - Nick Eggert [(neggert)](https://github.com/neggert) 458 - Jeff Ling [(jeffling)](https://github.com/jeffling) 459 - Jeremy Jordan [(jeremyjordan)](https://github.com/jeremyjordan) 460 - Tullie Murrell [(tullie)](https://github.com/tullie) 461 - Adrian Wälchli [(awaelchli)](https://github.com/awaelchli) 462 - Nicki Skafte [(skaftenicki)](https://github.com/SkafteNicki) 463 464 --- 465 466 #### Funding 467 Building open-source software with only a few part-time people is hard! We've secured funding to make sure we can 468 hire a full-time staff, attend conferences, and move faster through implementing features you request. 469 470 Our goal is to build an incredible research platform and a big supportive community. Many open-source projects 471 have gone on to fund operations through things like support and special help for big corporations! 472 473 If you are one of these corporations, please feel free to reach out to [email protected]! 474 475 ## BibTeX 476 If you want to cite the framework feel free to use this (but only if you loved it 😊): 477 478 ```bibtex 479 @article{falcon2019pytorch, 480 title={PyTorch Lightning}, 481 author={Falcon, WA}, 482 journal={GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning Cited by}, 483 volume={3}, 484 year={2019} 485 } 486 ``` 487 [end of README.md] [start of pytorch_lightning/trainer/training_io.py] 1 """ 2 Lightning can automate saving and loading checkpoints 3 ===================================================== 4 5 Checkpointing is enabled by default to the current working directory. 6 To change the checkpoint path pass in:: 7 8 Trainer(default_root_dir='/your/path/to/save/checkpoints') 9 10 11 To modify the behavior of checkpointing pass in your own callback. 12 13 .. 
code-block:: python 14 15 from pytorch_lightning.callbacks import ModelCheckpoint 16 17 # DEFAULTS used by the Trainer 18 checkpoint_callback = ModelCheckpoint( 19 filepath=os.getcwd(), 20 save_top_k=1, 21 verbose=True, 22 monitor='val_loss', 23 mode='min', 24 prefix='' 25 ) 26 27 trainer = Trainer(checkpoint_callback=checkpoint_callback) 28 29 30 Restoring training session 31 -------------------------- 32 33 You might want to not only load a model but also continue training it. Use this method to 34 restore the trainer state as well. This will continue from the epoch and global step you last left off. 35 However, the dataloaders will start from the first batch again (if you shuffled it shouldn't matter). 36 37 Lightning will restore the session if you pass a logger with the same version and there's a saved checkpoint. 38 39 .. code-block:: python 40 41 from pytorch_lightning import Trainer 42 43 trainer = Trainer( 44 resume_from_checkpoint=PATH 45 ) 46 47 # this fit call loads model weights and trainer state 48 # the trainer continues seamlessly from where you left off 49 # without having to do anything else. 50 trainer.fit(model) 51 52 53 The trainer restores: 54 55 - global_step 56 - current_epoch 57 - All optimizers 58 - All lr_schedulers 59 - Model weights 60 61 You can even change the logic of your model as long as the weights and "architecture" of 62 the system isn't different. If you add a layer, for instance, it might not work. 63 64 At a rough level, here's what happens inside Trainer :py:mod:`pytorch_lightning.base_module.saving.py`: 65 66 .. code-block:: python 67 68 self.global_step = checkpoint['global_step'] 69 self.current_epoch = checkpoint['epoch'] 70 71 # restore the optimizers 72 optimizer_states = checkpoint['optimizer_states'] 73 for optimizer, opt_state in zip(self.optimizers, optimizer_states): 74 optimizer.load_state_dict(opt_state) 75 76 # restore the lr schedulers 77 lr_schedulers = checkpoint['lr_schedulers'] 78 for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers): 79 scheduler['scheduler'].load_state_dict(lrs_state) 80 81 # uses the model you passed into trainer 82 model.load_state_dict(checkpoint['state_dict']) 83 84 """ 85 86 import os 87 import re 88 import signal 89 from abc import ABC 90 from subprocess import call 91 92 import torch 93 import torch.distributed as torch_distrib 94 95 import pytorch_lightning 96 from pytorch_lightning import _logger as log 97 from pytorch_lightning.core.lightning import LightningModule 98 from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping 99 from pytorch_lightning.loggers import LightningLoggerBase 100 from pytorch_lightning.overrides.data_parallel import ( 101 LightningDistributedDataParallel, 102 LightningDataParallel, 103 ) 104 from pytorch_lightning.utilities import rank_zero_warn, NATIVE_AMP_AVALAIBLE 105 from pytorch_lightning.utilities.cloud_io import load as pl_load 106 107 try: 108 import torch_xla 109 import torch_xla.core.xla_model as xm 110 import torch_xla.distributed.xla_multiprocessing as xmp 111 except ImportError: 112 XLA_AVAILABLE = False 113 else: 114 XLA_AVAILABLE = True 115 116 try: 117 import horovod.torch as hvd 118 except (ModuleNotFoundError, ImportError): 119 HOROVOD_AVAILABLE = False 120 else: 121 HOROVOD_AVAILABLE = True 122 123 try: 124 from omegaconf import Container 125 except ImportError: 126 Container = None 127 128 129 class TrainerIOMixin(ABC): 130 131 # this is just a summary on variables used in this abstract class, 132 # the proper values/initialisation 
should be done in child class 133 model: LightningModule 134 on_gpu: bool 135 root_gpu: ... 136 resume_from_checkpoint: ... 137 use_ddp: bool 138 use_ddp2: bool 139 use_horovod: bool 140 checkpoint_callback: ... 141 global_rank: int 142 weights_save_path: str 143 logger: LightningLoggerBase 144 early_stop_callback: ... 145 lr_schedulers: ... 146 optimizers: ... 147 on_tpu: bool 148 num_training_batches: int 149 accumulate_grad_batches: int 150 use_amp: bool 151 scaler: ... 152 153 def get_model(self): 154 is_dp_module = isinstance(self.model, (LightningDistributedDataParallel, 155 LightningDataParallel)) 156 model = self.model.module if is_dp_module else self.model 157 return model 158 159 # -------------------- 160 # CHECK-POINTING 161 # -------------------- 162 def restore_weights(self, model: LightningModule): 163 """ 164 We attempt to restore weights in this order: 165 1. HPC weights. 166 2. if no HPC weights restore checkpoint_path weights 167 3. otherwise don't restore weights 168 """ 169 # clear cache before restore 170 if self.on_gpu: 171 torch.cuda.empty_cache() 172 173 # if script called from hpc resubmit, load weights 174 did_restore_hpc_weights = self.restore_hpc_weights_if_needed(model) 175 176 # clear cache after restore 177 if self.on_gpu: 178 torch.cuda.empty_cache() 179 180 if not did_restore_hpc_weights: 181 if self.resume_from_checkpoint is not None: 182 self.restore(self.resume_from_checkpoint, on_gpu=self.on_gpu) 183 184 # wait for all models to restore weights 185 if self.use_ddp or self.use_ddp2: 186 # wait for all processes to catch up 187 torch_distrib.barrier() 188 189 # wait for all models to restore weights 190 if self.on_tpu and XLA_AVAILABLE: 191 # wait for all processes to catch up 192 torch_xla.core.xla_model.rendezvous("pl.TrainerIOMixin.restore_weights") 193 194 elif self.use_horovod: 195 # wait for all processes to catch up 196 hvd.join() 197 198 # clear cache after restore 199 if self.on_gpu: 200 torch.cuda.empty_cache() 201 202 # -------------------- 203 # HPC SIGNAL HANDLING 204 # -------------------- 205 def register_slurm_signal_handlers(self): 206 # see if we're using slurm (not interactive) 207 on_slurm = False 208 try: 209 job_name = os.environ['SLURM_JOB_NAME'] 210 if job_name != 'bash': 211 on_slurm = True 212 except Exception: 213 pass 214 215 if on_slurm: 216 log.info('Set SLURM handle signals.') 217 signal.signal(signal.SIGUSR1, self.sig_handler) 218 signal.signal(signal.SIGTERM, self.term_handler) 219 220 def sig_handler(self, signum, frame): # pragma: no-cover 221 if self.is_global_zero: 222 # save weights 223 log.info('handling SIGUSR1') 224 self.hpc_save(self.weights_save_path, self.logger) 225 226 # find job id 227 job_id = os.environ['SLURM_JOB_ID'] 228 cmd = 'scontrol requeue {}'.format(job_id) 229 230 # requeue job 231 log.info(f'requeing job {job_id}...') 232 result = call(cmd, shell=True) 233 234 # print result text 235 if result == 0: 236 log.info(f'requeued exp {job_id}') 237 else: 238 log.warning('requeue failed...') 239 240 # close experiment to avoid issues 241 self.logger.close() 242 243 def term_handler(self, signum, frame): 244 # save 245 log.info("bypassing sigterm") 246 247 # -------------------- 248 # MODEL SAVE CHECKPOINT 249 # -------------------- 250 def _atomic_save(self, checkpoint, filepath: str): 251 """Saves a checkpoint atomically, avoiding the creation of incomplete checkpoints. 
252 253 This will create a temporary checkpoint with a suffix of ``.part``, then copy it to the final location once 254 saving is finished. 255 256 Args: 257 checkpoint: The object to save. 258 Built to be used with the ``dump_checkpoint`` method, but can deal with anything which ``torch.save`` 259 accepts. 260 filepath: The path to which the checkpoint will be saved. 261 This points to the file that the checkpoint will be stored in. 262 """ 263 tmp_path = str(filepath) + ".part" 264 torch.save(checkpoint, tmp_path) 265 os.replace(tmp_path, filepath) 266 267 def save_checkpoint(self, filepath, weights_only: bool = False): 268 checkpoint = self.dump_checkpoint(weights_only) 269 270 if self.is_global_zero: 271 # do the actual save 272 try: 273 self._atomic_save(checkpoint, filepath) 274 except AttributeError as err: 275 if LightningModule.CHECKPOINT_HYPER_PARAMS_KEY in checkpoint: 276 del checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_KEY] 277 rank_zero_warn('Warning, `module_arguments` dropped from checkpoint.' 278 f' An attribute is not picklable {err}') 279 self._atomic_save(checkpoint, filepath) 280 281 def restore(self, checkpoint_path: str, on_gpu: bool): 282 """ 283 Restore training state from checkpoint. 284 Also restores all training state like: 285 - epoch 286 - callbacks 287 - schedulers 288 - optimizer 289 """ 290 291 # if on_gpu: 292 # checkpoint = torch.load(checkpoint_path) 293 # else: 294 # load on CPU first 295 checkpoint = pl_load(checkpoint_path, map_location=lambda storage, loc: storage) 296 297 # load model state 298 model = self.get_model() 299 300 # load the state_dict on the model automatically 301 model.load_state_dict(checkpoint['state_dict']) 302 303 # give model a chance to load something 304 model.on_load_checkpoint(checkpoint) 305 306 if on_gpu: 307 model.cuda(self.root_gpu) 308 309 # restore amp scaling 310 if self.use_amp and NATIVE_AMP_AVALAIBLE and 'native_amp_scaling_state' in checkpoint: 311 self.scaler.load_state_dict(checkpoint['native_amp_scaling_state']) 312 313 # load training state (affects trainer only) 314 self.restore_training_state(checkpoint) 315 316 def dump_checkpoint(self, weights_only: bool = False) -> dict: 317 """Creating model checkpoint. 
318 319 Args: 320 weights_only: saving model weights only 321 322 Return: 323 structured dictionary 324 """ 325 checkpoint = { 326 'epoch': self.current_epoch + 1, 327 'global_step': self.global_step + 1, 328 'pytorch-lightning_version': pytorch_lightning.__version__, 329 } 330 331 if not weights_only: 332 333 # TODO support more generic way for callbacks to persist a state_dict in a checkpoint 334 checkpoint_callbacks = [c for c in self.callbacks if isinstance(c, ModelCheckpoint)] 335 early_stopping_callbacks = [c for c in self.callbacks if isinstance(c, EarlyStopping)] 336 337 if checkpoint_callbacks: 338 # we add the official checkpoint callback to the end of the list 339 # extra user provided callbacks will not be persisted yet 340 checkpoint['checkpoint_callback_best_model_score'] = self.checkpoint_callback.best_model_score 341 checkpoint['checkpoint_callback_best_model_path'] = self.checkpoint_callback.best_model_path 342 343 if early_stopping_callbacks and checkpoint_callbacks: 344 # we add the official early stopping callback to the end of the list 345 # extra user provided callbacks will not be persisted yet 346 checkpoint['early_stop_callback_state_dict'] = early_stopping_callbacks[-1].state_dict() 347 348 # save optimizers 349 optimizer_states = [] 350 for i, optimizer in enumerate(self.optimizers): 351 optimizer_states.append(optimizer.state_dict()) 352 checkpoint['optimizer_states'] = optimizer_states 353 354 # save lr schedulers 355 lr_schedulers = [] 356 for scheduler in self.lr_schedulers: 357 lr_schedulers.append(scheduler['scheduler'].state_dict()) 358 checkpoint['lr_schedulers'] = lr_schedulers 359 360 # save native amp scaling 361 if self.use_amp and NATIVE_AMP_AVALAIBLE: 362 checkpoint['native_amp_scaling_state'] = self.scaler.state_dict() 363 364 # add the module_arguments and state_dict from the model 365 model = self.get_model() 366 367 checkpoint['state_dict'] = model.state_dict() 368 369 if model.hparams: 370 if hasattr(model, '_hparams_name'): 371 checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_NAME] = model._hparams_name 372 # add arguments to the checkpoint 373 checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_KEY] = model.hparams 374 if Container is not None: 375 if isinstance(model.hparams, Container): 376 checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_TYPE] = type(model.hparams) 377 378 # give the model a chance to add a few things 379 model.on_save_checkpoint(checkpoint) 380 381 return checkpoint 382 383 # -------------------- 384 # HPC IO 385 # -------------------- 386 def restore_hpc_weights_if_needed(self, model: LightningModule): 387 """If there is a set of hpc weights, use as signal to restore model.""" 388 did_restore = False 389 390 # look for hpc weights 391 folderpath = self.weights_save_path 392 if os.path.exists(folderpath): 393 files = os.listdir(folderpath) 394 hpc_weight_paths = [x for x in files if 'hpc_ckpt' in x] 395 396 # if hpc weights exist restore model 397 if len(hpc_weight_paths) > 0: 398 self.hpc_load(folderpath, self.on_gpu) 399 did_restore = True 400 return did_restore 401 402 def restore_training_state(self, checkpoint): 403 """ 404 Restore trainer state. 405 Model will get its change to update 406 :param checkpoint: 407 :return: 408 """ 409 if 'optimizer_states' not in checkpoint or 'lr_schedulers' not in checkpoint: 410 raise KeyError( 411 'Trying to restore training state but checkpoint contains only the model.' 412 ' This is probably due to `ModelCheckpoint.save_weights_only` being set to `True`.' 
413 ) 414 415 # TODO support more generic way for callbacks to load callback state_dicts 416 checkpoint_callbacks = [c for c in self.callbacks if isinstance(c, ModelCheckpoint)] 417 early_stopping_callbacks = [c for c in self.callbacks if isinstance(c, EarlyStopping)] 418 419 if checkpoint_callbacks: 420 if 'checkpoint_callback_best_model_score' in checkpoint: 421 checkpoint_callbacks[-1].best_model_score = checkpoint['checkpoint_callback_best_model_score'] 422 else: 423 # Old naming until version 0.7.6 424 rank_zero_warn( 425 'Loading a checkpoint created with an old version of Lightning; ' 426 'this will not be supported in the future.' 427 ) 428 checkpoint_callbacks[-1].best_model_score = checkpoint['checkpoint_callback_best'] 429 checkpoint_callbacks[-1].best_model_path = checkpoint['checkpoint_callback_best_model_path'] 430 431 if early_stopping_callbacks: 432 state = checkpoint['early_stop_callback_state_dict'] 433 early_stopping_callbacks[-1].load_state_dict(state) 434 435 self.global_step = checkpoint['global_step'] 436 self.current_epoch = checkpoint['epoch'] 437 438 # Division deals with global step stepping once per accumulated batch 439 # Inequality deals with different global step for odd vs even num_training_batches 440 n_accum = 1 if self.accumulate_grad_batches is None else self.accumulate_grad_batches 441 expected_steps = self.num_training_batches / n_accum 442 if self.num_training_batches != 0 and self.global_step % expected_steps > 1: 443 rank_zero_warn( 444 "You're resuming from a checkpoint that ended mid-epoch. " 445 "This can cause unreliable results if further training is done, " 446 "consider using an end of epoch checkpoint. " 447 ) 448 449 # restore the optimizers 450 optimizer_states = checkpoint['optimizer_states'] 451 for optimizer, opt_state in zip(self.optimizers, optimizer_states): 452 optimizer.load_state_dict(opt_state) 453 454 # move optimizer to GPU 1 weight at a time 455 # avoids OOM 456 if self.root_gpu is not None: 457 for state in optimizer.state.values(): 458 for k, v in state.items(): 459 if isinstance(v, torch.Tensor): 460 state[k] = v.cuda(self.root_gpu) 461 462 # restore the lr schedulers 463 lr_schedulers = checkpoint['lr_schedulers'] 464 for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers): 465 scheduler['scheduler'].load_state_dict(lrs_state) 466 467 # ---------------------------------- 468 # PRIVATE OPS 469 # ---------------------------------- 470 def hpc_save(self, folderpath: str, logger): 471 # make sure the checkpoint folder exists 472 os.makedirs(folderpath, exist_ok=True) 473 474 # save logger to make sure we get all the metrics 475 logger.save() 476 477 ckpt_number = self.max_ckpt_in_folder(folderpath) + 1 478 479 if not os.path.exists(folderpath): 480 os.makedirs(folderpath, exist_ok=True) 481 filepath = os.path.join(folderpath, f'hpc_ckpt_{ckpt_number}.ckpt') 482 483 # give model a chance to do something on hpc_save 484 model = self.get_model() 485 checkpoint = self.dump_checkpoint() 486 487 model.on_hpc_save(checkpoint) 488 489 # do the actual save 490 # TODO: fix for anything with multiprocess DP, DDP, DDP2 491 try: 492 self._atomic_save(checkpoint, filepath) 493 except AttributeError as err: 494 if LightningModule.CHECKPOINT_HYPER_PARAMS_KEY in checkpoint: 495 del checkpoint[LightningModule.CHECKPOINT_HYPER_PARAMS_KEY] 496 rank_zero_warn('warning, `module_arguments` dropped from checkpoint.' 
497 f' An attribute is not picklable {err}') 498 self._atomic_save(checkpoint, filepath) 499 500 return filepath 501 502 def hpc_load(self, folderpath, on_gpu): 503 filepath = '{}/hpc_ckpt_{}.ckpt'.format(folderpath, self.max_ckpt_in_folder(folderpath)) 504 505 # load on CPU first 506 checkpoint = torch.load(filepath, map_location=lambda storage, loc: storage) 507 508 # load model state 509 model = self.get_model() 510 511 # load the state_dict on the model automatically 512 model.load_state_dict(checkpoint['state_dict']) 513 514 # restore amp scaling 515 if self.use_amp and NATIVE_AMP_AVALAIBLE and 'native_amp_scaling_state' in checkpoint: 516 self.scaler.load_state_dict(checkpoint['native_amp_scaling_state']) 517 518 if self.root_gpu is not None: 519 model.cuda(self.root_gpu) 520 521 # load training state (affects trainer only) 522 self.restore_training_state(checkpoint) 523 524 # call model hook 525 model.on_hpc_load(checkpoint) 526 527 log.info(f'restored hpc model from: {filepath}') 528 529 def max_ckpt_in_folder(self, path, name_key='ckpt_'): 530 files = os.listdir(path) 531 files = [x for x in files if name_key in x] 532 if len(files) == 0: 533 return 0 534 535 ckpt_vs = [] 536 for name in files: 537 name = name.split(name_key)[-1] 538 name = re.sub('[^0-9]', '', name) 539 ckpt_vs.append(int(name)) 540 541 return max(ckpt_vs) 542 [end of pytorch_lightning/trainer/training_io.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
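The restore logic quoted above boils down to a plain dictionary of tensors and counters. As a minimal, framework-free sketch of the same save/restore round trip (placeholder model, optimizer, and file path; an illustration of the checkpoint layout that `dump_checkpoint()` / `restore_training_state()` work with, not Lightning's actual code):

```python
import torch
import torch.nn as nn

# Placeholder module and optimizer standing in for a LightningModule's internals.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

# Save: the same top-level keys that dump_checkpoint() writes.
checkpoint = {
    "epoch": 5,
    "global_step": 500,
    "state_dict": model.state_dict(),
    "optimizer_states": [optimizer.state_dict()],
    "lr_schedulers": [scheduler.state_dict()],
}
torch.save(checkpoint, "example.ckpt")

# Restore: mirrors restore() / restore_training_state() at a high level.
loaded = torch.load("example.ckpt", map_location=lambda storage, loc: storage)
model.load_state_dict(loaded["state_dict"])
for opt, opt_state in zip([optimizer], loaded["optimizer_states"]):
    opt.load_state_dict(opt_state)
for sched, sched_state in zip([scheduler], loaded["lr_schedulers"]):
    sched.load_state_dict(sched_state)
current_epoch, global_step = loaded["epoch"], loaded["global_step"]
```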
Lightning-AI/lightning
e1bc208f66891e22f0139619a1be5c06235a0f34
Can't use None (anymore) in checkpoint_callback ## 🐛 Bug using None in checkpoint_callback now errors out ``` -- Process 0 terminated with the following error: Traceback (most recent call last): File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap fn(i, *args) File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 562, in ddp_train q.put(self.checkpoint_callback.best_model_path) AttributeError: 'NoneType' object has no attribute 'best_model_path' ``` ### To Reproduce `trainer = Trainer(checkpoint_callback=None)` Ran into this issue after upgrading to master; I was using master from a few commits ago before. Edit: `False` causes the same error as well
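The traceback above comes from the spawned DDP worker unconditionally reading `self.checkpoint_callback.best_model_path`. A rough sketch of the defensive pattern that avoids the `AttributeError` when the callback is `None` or `False` (hypothetical helper and names, not the actual Lightning fix):

```python
# Hypothetical guard around the q.put(...) call in ddp_train(); `trainer` and `q`
# stand in for the objects used there, and the helper name is illustrative only.
def put_best_model_path(trainer, q):
    best_model_path = None
    if getattr(trainer, "checkpoint_callback", None):  # skips both None and False
        best_model_path = trainer.checkpoint_callback.best_model_path
    q.put(best_model_path)
```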
2020-07-09T10:46:34Z
<patch> diff --git a/pytorch_lightning/trainer/distrib_data_parallel.py b/pytorch_lightning/trainer/distrib_data_parallel.py --- a/pytorch_lightning/trainer/distrib_data_parallel.py +++ b/pytorch_lightning/trainer/distrib_data_parallel.py @@ -189,6 +189,7 @@ class TrainerDDPMixin(ABC): num_nodes: int node_rank: int tpu_cores: int + testing: bool @property @abstractmethod @@ -555,15 +556,35 @@ def ddp_train(self, process_idx, q, model, is_master=False, proc_offset=0): # continue training routine results = self.run_pretrain_routine(model) + # persist info in ddp_spawn + self.__transfer_ddp_spawn_state_on_fit_end(model, q, results) + # clean up memory torch.cuda.empty_cache() + if self.global_rank == 0 and self.distributed_backend not in ['ddp_spawn', 'ddp_cpu']: + return results + + def __transfer_ddp_spawn_state_on_fit_end(self, model, q, results): + if not self.distributed_backend in ['ddp_spawn', 'ddp_cpu']: + return + + # track the best model path + best_model_path = None + if self.checkpoint_callback is not None: + best_model_path = self.checkpoint_callback.best_model_path + if self.global_rank == 0 and q is not None: - q.put(self.checkpoint_callback.best_model_path) + rank_zero_warn('cleaning up ddp environment...') + q.put(best_model_path) q.put(results) - if self.global_rank == 0 and self.distributed_backend != 'ddp_spawn': - return results + # save the last weights + last_path = None + if not self.testing: + last_path = os.path.join(self.default_root_dir, '__temp_weight_ddp_end.ckpt') + torch.save(model.state_dict(), last_path) + q.put(last_path) def save_spawn_weights(self, model): """ @@ -574,6 +595,7 @@ def save_spawn_weights(self, model): if self.is_global_zero: path = os.path.join(self.default_root_dir, '__temp_weight_ddp_end.ckpt') self.save_checkpoint(path) + return path def load_spawn_weights(self, original_model): """ diff --git a/pytorch_lightning/trainer/trainer.py b/pytorch_lightning/trainer/trainer.py --- a/pytorch_lightning/trainer/trainer.py +++ b/pytorch_lightning/trainer/trainer.py @@ -35,7 +35,7 @@ from pytorch_lightning.utilities import rank_zero_warn, parsing, rank_zero_info, rank_zero_only import warnings -# warnings to ignore +# warnings to ignore in trainer warnings.filterwarnings('ignore', message='torch.distributed.reduce_op is deprecated, ' 'please use torch.distributed.ReduceOp instead') @@ -1063,9 +1063,14 @@ def __run_ddp_spawn(self, model, nprocs): # restore main state with best weights best_path = q.get() results = q.get() - if best_path is not None and len(best_path) > 0: - self.checkpoint_callback.best_model_path = best_path - model.load_from_checkpoint(best_path) + last_path = q.get() + + # transfer back the best path to the trainer + self.checkpoint_callback.best_model_path = best_path + + # load last weights + if last_path is not None and not self.testing: + torch.load(last_path, map_location=lambda storage, loc: storage) self.model = model return results </patch>
[]
[]
pandas-dev__pandas-31679
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> REGR: AssertionError when subtracting Timestamp-valued DataFrames with non-indentical column index ```python import pandas as pd df = pd.DataFrame( { "foo": [pd.Timestamp("2019"), pd.Timestamp("2020")], "bar": [pd.Timestamp("2018"), pd.Timestamp("2021")], } ) df2 = df[["foo"]] print(df - df2) ``` #### Problem description The above snippet raises the following exception: ``` Traceback (most recent call last): File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 149, in na_arithmetic_op result = expressions.evaluate(op, str_rep, left, right) File ".v env/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 208, in evaluate return _evaluate(op, op_str, a, b) File ".venv/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 70, in _evaluate_standard return op(a, b) File ".venv/lib/python3.6/site-packages/pandas/core/ops/common.py", line 64, in new_method return method(self, other) File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 500, in wrapper result = arithmetic_op(lvalues, rvalues, op, str_rep) File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 192, in arithmetic_op res_values = dispatch_to_extension_op(op, lvalues, rvalues) File ".venv/lib/python3.6/site-packages/pandas/core/ops/dispatch.py", line 125, in dispatch_to_extension_op res_values = op(left, right) File ".venv/lib/python3.6/site-packages/pandas/core/arrays/datetimelike.py", line 1390, in __rsub__ f"cannot subtract {type(self).__name__} from {type(other).__name__}" TypeError: cannot subtract DatetimeArray from ndarray During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pandas_bug.py", line 36, in <module> print(df2 - df) File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 703, in f new_data = left._combine_frame(right, pass_op, fill_value) File ".venv/lib/python3.6/site-packages/pandas/core/frame.py", line 5297, in _combine_frame new_data = ops.dispatch_to_series(self, other, _arith_op) File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 416, in dispatch_to_series new_data = expressions.evaluate(column_op, str_rep, left, right) File ".venv/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 208, in evaluate return _evaluate(op, op_str, a, b) File ".venv/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 70, in _evaluate_standard return op(a, b) File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 385, in column_op return {i: func(a.iloc[:, i], b.iloc[:, i]) for i in range(len(a.columns))} File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 385, in <dictcomp> return {i: func(a.iloc[:, i], b.iloc[:, i]) for i in range(len(a.columns))} File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 121, in na_op return na_arithmetic_op(x, y, op, str_rep) File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 151, in na_arithmetic_op result = masked_arith_op(left, right, op) File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 75, in masked_arith_op assert isinstance(x, np.ndarray), type(x) ``` This is a 1.0.0 regression; in 0.25.3, the operation succeeds and the unmatched `bar` column is filled with `NaN` in the output. 
The same error occurs with: * Any combination of incompatible columns (strict subset, strict superset, overlapping, disjoint) * Calling the `subtract` method instead of using the subtraction operator * Timezone-aware `Timestamp`s as well as timezone-naive It does *not* seem to occur with: * Mismatches on the row index; transposing the dataframes in the above example prevents the errors occuring. * `pd.Series` objects with mismatched indexes (e.g. calling the above on the first row of each dataframe works fine) * Other dtypes; `bool`, `float`, and `int` seem to work fine. Similarly, if the dataframes are explicitly cast to dtype `object`, the operation succeeds. #### Expected Output ``` bar foo 0 NaN 0 days 1 NaN 0 days ``` #### Output of ``pd.show_versions()`` <details> ``` INSTALLED VERSIONS ------------------ commit : None python : 3.6.8.final.0 python-bits : 64 OS : Linux OS-release : 4.15.0-74-generic machine : x86_64 processor : x86_64 byteorder : little LC_ALL : None LANG : en_GB.UTF-8 LOCALE : en_GB.UTF-8 pandas : 1.0.0 numpy : 1.18.1 pytz : 2019.3 dateutil : 2.8.1 pip : 19.3.1 setuptools : 41.6.0 Cython : None pytest : 5.3.5 hypothesis : None sphinx : None blosc : None feather : None xlsxwriter : None lxml.etree : None html5lib : None pymysql : None psycopg2 : None jinja2 : None IPython : None pandas_datareader: None bs4 : None bottleneck : None fastparquet : None gcsfs : None lxml.etree : None matplotlib : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None pyarrow : None pytables : None pytest : 5.3.5 pyxlsb : None s3fs : None scipy : 1.4.1 sqlalchemy : None tables : None tabulate : None xarray : None xlrd : None xlwt : None xlsxwriter : None numba : None ``` </details> </issue> <code> [start of README.md] 1 <div align="center"> 2 <img src="https://dev.pandas.io/static/img/pandas.svg"><br> 3 </div> 4 5 ----------------- 6 7 # pandas: powerful Python data analysis toolkit 8 [![PyPI Latest Release](https://img.shields.io/pypi/v/pandas.svg)](https://pypi.org/project/pandas/) 9 [![Conda Latest Release](https://anaconda.org/conda-forge/pandas/badges/version.svg)](https://anaconda.org/anaconda/pandas/) 10 [![Package Status](https://img.shields.io/pypi/status/pandas.svg)](https://pypi.org/project/pandas/) 11 [![License](https://img.shields.io/pypi/l/pandas.svg)](https://github.com/pandas-dev/pandas/blob/master/LICENSE) 12 [![Travis Build Status](https://travis-ci.org/pandas-dev/pandas.svg?branch=master)](https://travis-ci.org/pandas-dev/pandas) 13 [![Azure Build Status](https://dev.azure.com/pandas-dev/pandas/_apis/build/status/pandas-dev.pandas?branch=master)](https://dev.azure.com/pandas-dev/pandas/_build/latest?definitionId=1&branch=master) 14 [![Coverage](https://codecov.io/github/pandas-dev/pandas/coverage.svg?branch=master)](https://codecov.io/gh/pandas-dev/pandas) 15 [![Downloads](https://anaconda.org/conda-forge/pandas/badges/downloads.svg)](https://pandas.pydata.org) 16 [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/pydata/pandas) 17 [![Powered by NumFOCUS](https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A)](https://numfocus.org) 18 19 ## What is it? 20 21 **pandas** is a Python package providing fast, flexible, and expressive data 22 structures designed to make working with "relational" or "labeled" data both 23 easy and intuitive. It aims to be the fundamental high-level building block for 24 doing practical, **real world** data analysis in Python. 
Additionally, it has 25 the broader goal of becoming **the most powerful and flexible open source data 26 analysis / manipulation tool available in any language**. It is already well on 27 its way towards this goal. 28 29 ## Main Features 30 Here are just a few of the things that pandas does well: 31 32 - Easy handling of [**missing data**][missing-data] (represented as 33 `NaN`) in floating point as well as non-floating point data 34 - Size mutability: columns can be [**inserted and 35 deleted**][insertion-deletion] from DataFrame and higher dimensional 36 objects 37 - Automatic and explicit [**data alignment**][alignment]: objects can 38 be explicitly aligned to a set of labels, or the user can simply 39 ignore the labels and let `Series`, `DataFrame`, etc. automatically 40 align the data for you in computations 41 - Powerful, flexible [**group by**][groupby] functionality to perform 42 split-apply-combine operations on data sets, for both aggregating 43 and transforming data 44 - Make it [**easy to convert**][conversion] ragged, 45 differently-indexed data in other Python and NumPy data structures 46 into DataFrame objects 47 - Intelligent label-based [**slicing**][slicing], [**fancy 48 indexing**][fancy-indexing], and [**subsetting**][subsetting] of 49 large data sets 50 - Intuitive [**merging**][merging] and [**joining**][joining] data 51 sets 52 - Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of 53 data sets 54 - [**Hierarchical**][mi] labeling of axes (possible to have multiple 55 labels per tick) 56 - Robust IO tools for loading data from [**flat files**][flat-files] 57 (CSV and delimited), [**Excel files**][excel], [**databases**][db], 58 and saving/loading data from the ultrafast [**HDF5 format**][hdfstore] 59 - [**Time series**][timeseries]-specific functionality: date range 60 generation and frequency conversion, moving window statistics, 61 date shifting and lagging. 
62 63 64 [missing-data]: https://pandas.pydata.org/pandas-docs/stable/missing_data.html#working-with-missing-data 65 [insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#column-selection-addition-deletion 66 [alignment]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html?highlight=alignment#intro-to-data-structures 67 [groupby]: https://pandas.pydata.org/pandas-docs/stable/groupby.html#group-by-split-apply-combine 68 [conversion]: https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe 69 [slicing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges 70 [fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#advanced-indexing-with-ix 71 [subsetting]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing 72 [merging]: https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging 73 [joining]: https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index 74 [reshape]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-and-pivot-tables 75 [pivot-table]: https://pandas.pydata.org/pandas-docs/stable/reshaping.html#pivot-tables-and-cross-tabulations 76 [mi]: https://pandas.pydata.org/pandas-docs/stable/indexing.html#hierarchical-indexing-multiindex 77 [flat-files]: https://pandas.pydata.org/pandas-docs/stable/io.html#csv-text-files 78 [excel]: https://pandas.pydata.org/pandas-docs/stable/io.html#excel-files 79 [db]: https://pandas.pydata.org/pandas-docs/stable/io.html#sql-queries 80 [hdfstore]: https://pandas.pydata.org/pandas-docs/stable/io.html#hdf5-pytables 81 [timeseries]: https://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-series-date-functionality 82 83 ## Where to get it 84 The source code is currently hosted on GitHub at: 85 https://github.com/pandas-dev/pandas 86 87 Binary installers for the latest released version are available at the [Python 88 package index](https://pypi.org/project/pandas) and on conda. 89 90 ```sh 91 # conda 92 conda install pandas 93 ``` 94 95 ```sh 96 # or PyPI 97 pip install pandas 98 ``` 99 100 ## Dependencies 101 - [NumPy](https://www.numpy.org) 102 - [python-dateutil](https://labix.org/python-dateutil) 103 - [pytz](https://pythonhosted.org/pytz) 104 105 See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies. 106 107 ## Installation from sources 108 To install pandas from source you need Cython in addition to the normal 109 dependencies above. Cython can be installed from pypi: 110 111 ```sh 112 pip install cython 113 ``` 114 115 In the `pandas` directory (same one where you found this file after 116 cloning the git repo), execute: 117 118 ```sh 119 python setup.py install 120 ``` 121 122 or for installing in [development mode](https://pip.pypa.io/en/latest/reference/pip_install.html#editable-installs): 123 124 125 ```sh 126 python -m pip install -e . --no-build-isolation --no-use-pep517 127 ``` 128 129 If you have `make`, you can also use `make develop` to run the same command. 130 131 or alternatively 132 133 ```sh 134 python setup.py develop 135 ``` 136 137 See the full instructions for [installing from source](https://pandas.pydata.org/pandas-docs/stable/install.html#installing-from-source). 
138 139 ## License 140 [BSD 3](LICENSE) 141 142 ## Documentation 143 The official documentation is hosted on PyData.org: https://pandas.pydata.org/pandas-docs/stable 144 145 ## Background 146 Work on ``pandas`` started at AQR (a quantitative hedge fund) in 2008 and 147 has been under active development since then. 148 149 ## Getting Help 150 151 For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas). 152 Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata). 153 154 ## Discussion and Development 155 Most development discussion is taking place on github in this repo. Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Gitter channel](https://gitter.im/pydata/pandas) is available for quick development related questions. 156 157 ## Contributing to pandas [![Open Source Helpers](https://www.codetriage.com/pandas-dev/pandas/badges/users.svg)](https://www.codetriage.com/pandas-dev/pandas) 158 159 All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. 160 161 A detailed overview on how to contribute can be found in the **[contributing guide](https://dev.pandas.io/docs/contributing.html)**. There is also an [overview](.github/CONTRIBUTING.md) on GitHub. 162 163 If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out. 164 165 You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas). 166 167 Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it! 168 169 Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Gitter](https://gitter.im/pydata/pandas). 170 171 As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/pandas/blob/master/.github/CODE_OF_CONDUCT.md) 172 [end of README.md] [start of pandas/core/ops/__init__.py] 1 """ 2 Arithmetic operations for PandasObjects 3 4 This is not a public API. 
5 """ 6 import datetime 7 import operator 8 from typing import Optional, Set, Tuple, Union 9 10 import numpy as np 11 12 from pandas._libs import Timedelta, Timestamp, lib 13 from pandas._libs.ops_dispatch import maybe_dispatch_ufunc_to_dunder_op # noqa:F401 14 from pandas._typing import Level 15 from pandas.util._decorators import Appender 16 17 from pandas.core.dtypes.common import is_list_like, is_timedelta64_dtype 18 from pandas.core.dtypes.generic import ( 19 ABCDataFrame, 20 ABCExtensionArray, 21 ABCIndexClass, 22 ABCSeries, 23 ) 24 from pandas.core.dtypes.missing import isna 25 26 from pandas.core.construction import extract_array 27 from pandas.core.ops.array_ops import ( 28 arithmetic_op, 29 comparison_op, 30 define_na_arithmetic_op, 31 get_array_op, 32 logical_op, 33 ) 34 from pandas.core.ops.array_ops import comp_method_OBJECT_ARRAY # noqa:F401 35 from pandas.core.ops.common import unpack_zerodim_and_defer 36 from pandas.core.ops.dispatch import should_series_dispatch 37 from pandas.core.ops.docstrings import ( 38 _arith_doc_FRAME, 39 _flex_comp_doc_FRAME, 40 _make_flex_doc, 41 _op_descriptions, 42 ) 43 from pandas.core.ops.invalid import invalid_comparison # noqa:F401 44 from pandas.core.ops.mask_ops import kleene_and, kleene_or, kleene_xor # noqa: F401 45 from pandas.core.ops.methods import ( # noqa:F401 46 add_flex_arithmetic_methods, 47 add_special_arithmetic_methods, 48 ) 49 from pandas.core.ops.roperator import ( # noqa:F401 50 radd, 51 rand_, 52 rdiv, 53 rdivmod, 54 rfloordiv, 55 rmod, 56 rmul, 57 ror_, 58 rpow, 59 rsub, 60 rtruediv, 61 rxor, 62 ) 63 64 # ----------------------------------------------------------------------------- 65 # constants 66 ARITHMETIC_BINOPS: Set[str] = { 67 "add", 68 "sub", 69 "mul", 70 "pow", 71 "mod", 72 "floordiv", 73 "truediv", 74 "divmod", 75 "radd", 76 "rsub", 77 "rmul", 78 "rpow", 79 "rmod", 80 "rfloordiv", 81 "rtruediv", 82 "rdivmod", 83 } 84 85 86 COMPARISON_BINOPS: Set[str] = { 87 "eq", 88 "ne", 89 "lt", 90 "gt", 91 "le", 92 "ge", 93 } 94 95 # ----------------------------------------------------------------------------- 96 # Ops Wrapping Utilities 97 98 99 def get_op_result_name(left, right): 100 """ 101 Find the appropriate name to pin to an operation result. This result 102 should always be either an Index or a Series. 103 104 Parameters 105 ---------- 106 left : {Series, Index} 107 right : object 108 109 Returns 110 ------- 111 name : object 112 Usually a string 113 """ 114 # `left` is always a Series when called from within ops 115 if isinstance(right, (ABCSeries, ABCIndexClass)): 116 name = _maybe_match_name(left, right) 117 else: 118 name = left.name 119 return name 120 121 122 def _maybe_match_name(a, b): 123 """ 124 Try to find a name to attach to the result of an operation between 125 a and b. If only one of these has a `name` attribute, return that 126 name. Otherwise return a consensus name if they match of None if 127 they have different names. 128 129 Parameters 130 ---------- 131 a : object 132 b : object 133 134 Returns 135 ------- 136 name : str or None 137 138 See Also 139 -------- 140 pandas.core.common.consensus_name_attr 141 """ 142 a_has = hasattr(a, "name") 143 b_has = hasattr(b, "name") 144 if a_has and b_has: 145 if a.name == b.name: 146 return a.name 147 else: 148 # TODO: what if they both have np.nan for their names? 
149 return None 150 elif a_has: 151 return a.name 152 elif b_has: 153 return b.name 154 return None 155 156 157 def maybe_upcast_for_op(obj, shape: Tuple[int, ...]): 158 """ 159 Cast non-pandas objects to pandas types to unify behavior of arithmetic 160 and comparison operations. 161 162 Parameters 163 ---------- 164 obj: object 165 shape : tuple[int] 166 167 Returns 168 ------- 169 out : object 170 171 Notes 172 ----- 173 Be careful to call this *after* determining the `name` attribute to be 174 attached to the result of the arithmetic operation. 175 """ 176 from pandas.core.arrays import DatetimeArray, TimedeltaArray 177 178 if type(obj) is datetime.timedelta: 179 # GH#22390 cast up to Timedelta to rely on Timedelta 180 # implementation; otherwise operation against numeric-dtype 181 # raises TypeError 182 return Timedelta(obj) 183 elif isinstance(obj, np.datetime64): 184 # GH#28080 numpy casts integer-dtype to datetime64 when doing 185 # array[int] + datetime64, which we do not allow 186 if isna(obj): 187 # Avoid possible ambiguities with pd.NaT 188 obj = obj.astype("datetime64[ns]") 189 right = np.broadcast_to(obj, shape) 190 return DatetimeArray(right) 191 192 return Timestamp(obj) 193 194 elif isinstance(obj, np.timedelta64): 195 if isna(obj): 196 # wrapping timedelta64("NaT") in Timedelta returns NaT, 197 # which would incorrectly be treated as a datetime-NaT, so 198 # we broadcast and wrap in a TimedeltaArray 199 obj = obj.astype("timedelta64[ns]") 200 right = np.broadcast_to(obj, shape) 201 return TimedeltaArray(right) 202 203 # In particular non-nanosecond timedelta64 needs to be cast to 204 # nanoseconds, or else we get undesired behavior like 205 # np.timedelta64(3, 'D') / 2 == np.timedelta64(1, 'D') 206 return Timedelta(obj) 207 208 elif isinstance(obj, np.ndarray) and is_timedelta64_dtype(obj.dtype): 209 # GH#22390 Unfortunately we need to special-case right-hand 210 # timedelta64 dtypes because numpy casts integer dtypes to 211 # timedelta64 when operating with timedelta64 212 return TimedeltaArray._from_sequence(obj) 213 return obj 214 215 216 # ----------------------------------------------------------------------------- 217 218 219 def _get_frame_op_default_axis(name): 220 """ 221 Only DataFrame cares about default_axis, specifically: 222 special methods have default_axis=None and flex methods 223 have default_axis='columns'. 224 225 Parameters 226 ---------- 227 name : str 228 229 Returns 230 ------- 231 default_axis: str or None 232 """ 233 if name.replace("__r", "__") in ["__and__", "__or__", "__xor__"]: 234 # bool methods 235 return "columns" 236 elif name.startswith("__"): 237 # __add__, __mul__, ... 238 return None 239 else: 240 # add, mul, ... 241 return "columns" 242 243 244 def _get_opstr(op): 245 """ 246 Find the operation string, if any, to pass to numexpr for this 247 operation. 
248 249 Parameters 250 ---------- 251 op : binary operator 252 253 Returns 254 ------- 255 op_str : string or None 256 """ 257 return { 258 operator.add: "+", 259 radd: "+", 260 operator.mul: "*", 261 rmul: "*", 262 operator.sub: "-", 263 rsub: "-", 264 operator.truediv: "/", 265 rtruediv: "/", 266 operator.floordiv: "//", 267 rfloordiv: "//", 268 operator.mod: "%", 269 rmod: "%", 270 operator.pow: "**", 271 rpow: "**", 272 operator.eq: "==", 273 operator.ne: "!=", 274 operator.le: "<=", 275 operator.lt: "<", 276 operator.ge: ">=", 277 operator.gt: ">", 278 operator.and_: "&", 279 rand_: "&", 280 operator.or_: "|", 281 ror_: "|", 282 operator.xor: "^", 283 rxor: "^", 284 divmod: None, 285 rdivmod: None, 286 }[op] 287 288 289 def _get_op_name(op, special): 290 """ 291 Find the name to attach to this method according to conventions 292 for special and non-special methods. 293 294 Parameters 295 ---------- 296 op : binary operator 297 special : bool 298 299 Returns 300 ------- 301 op_name : str 302 """ 303 opname = op.__name__.strip("_") 304 if special: 305 opname = f"__{opname}__" 306 return opname 307 308 309 # ----------------------------------------------------------------------------- 310 # Masking NA values and fallbacks for operations numpy does not support 311 312 313 def fill_binop(left, right, fill_value): 314 """ 315 If a non-None fill_value is given, replace null entries in left and right 316 with this value, but only in positions where _one_ of left/right is null, 317 not both. 318 319 Parameters 320 ---------- 321 left : array-like 322 right : array-like 323 fill_value : object 324 325 Returns 326 ------- 327 left : array-like 328 right : array-like 329 330 Notes 331 ----- 332 Makes copies if fill_value is not None and NAs are present. 333 """ 334 if fill_value is not None: 335 left_mask = isna(left) 336 right_mask = isna(right) 337 338 # one but not both 339 mask = left_mask ^ right_mask 340 341 if left_mask.any(): 342 # Avoid making a copy if we can 343 left = left.copy() 344 left[left_mask & mask] = fill_value 345 346 if right_mask.any(): 347 # Avoid making a copy if we can 348 right = right.copy() 349 right[right_mask & mask] = fill_value 350 351 return left, right 352 353 354 # ----------------------------------------------------------------------------- 355 # Dispatch logic 356 357 358 def dispatch_to_series(left, right, func, str_rep=None, axis=None): 359 """ 360 Evaluate the frame operation func(left, right) by evaluating 361 column-by-column, dispatching to the Series implementation. 362 363 Parameters 364 ---------- 365 left : DataFrame 366 right : scalar or DataFrame 367 func : arithmetic or comparison operator 368 str_rep : str or None, default None 369 axis : {None, 0, 1, "index", "columns"} 370 371 Returns 372 ------- 373 DataFrame 374 """ 375 # Note: we use iloc to access columns for compat with cases 376 # with non-unique columns. 377 import pandas.core.computation.expressions as expressions 378 379 right = lib.item_from_zerodim(right) 380 if lib.is_scalar(right) or np.ndim(right) == 0: 381 382 # Get the appropriate array-op to apply to each block's values. 
383 array_op = get_array_op(func, str_rep=str_rep) 384 bm = left._data.apply(array_op, right=right) 385 return type(left)(bm) 386 387 elif isinstance(right, ABCDataFrame): 388 assert right._indexed_same(left) 389 390 def column_op(a, b): 391 return {i: func(a.iloc[:, i], b.iloc[:, i]) for i in range(len(a.columns))} 392 393 elif isinstance(right, ABCSeries) and axis == "columns": 394 # We only get here if called via _combine_series_frame, 395 # in which case we specifically want to operate row-by-row 396 assert right.index.equals(left.columns) 397 398 if right.dtype == "timedelta64[ns]": 399 # ensure we treat NaT values as the correct dtype 400 # Note: we do not do this unconditionally as it may be lossy or 401 # expensive for EA dtypes. 402 right = np.asarray(right) 403 404 def column_op(a, b): 405 return {i: func(a.iloc[:, i], b[i]) for i in range(len(a.columns))} 406 407 else: 408 409 def column_op(a, b): 410 return {i: func(a.iloc[:, i], b.iloc[i]) for i in range(len(a.columns))} 411 412 elif isinstance(right, ABCSeries): 413 assert right.index.equals(left.index) # Handle other cases later 414 415 def column_op(a, b): 416 return {i: func(a.iloc[:, i], b) for i in range(len(a.columns))} 417 418 else: 419 # Remaining cases have less-obvious dispatch rules 420 raise NotImplementedError(right) 421 422 new_data = expressions.evaluate(column_op, str_rep, left, right) 423 return new_data 424 425 426 # ----------------------------------------------------------------------------- 427 # Series 428 429 430 def _align_method_SERIES(left, right, align_asobject=False): 431 """ align lhs and rhs Series """ 432 # ToDo: Different from _align_method_FRAME, list, tuple and ndarray 433 # are not coerced here 434 # because Series has inconsistencies described in #13637 435 436 if isinstance(right, ABCSeries): 437 # avoid repeated alignment 438 if not left.index.equals(right.index): 439 440 if align_asobject: 441 # to keep original value's dtype for bool ops 442 left = left.astype(object) 443 right = right.astype(object) 444 445 left, right = left.align(right, copy=False) 446 447 return left, right 448 449 450 def _construct_result( 451 left: ABCSeries, 452 result: Union[np.ndarray, ABCExtensionArray], 453 index: ABCIndexClass, 454 name, 455 ): 456 """ 457 Construct an appropriately-labelled Series from the result of an op. 458 459 Parameters 460 ---------- 461 left : Series 462 result : ndarray or ExtensionArray 463 index : Index 464 name : object 465 466 Returns 467 ------- 468 Series 469 In the case of __divmod__ or __rdivmod__, a 2-tuple of Series. 470 """ 471 if isinstance(result, tuple): 472 # produced by divmod or rdivmod 473 return ( 474 _construct_result(left, result[0], index=index, name=name), 475 _construct_result(left, result[1], index=index, name=name), 476 ) 477 478 # We do not pass dtype to ensure that the Series constructor 479 # does inference in the case where `result` has object-dtype. 480 out = left._constructor(result, index=index) 481 out = out.__finalize__(left) 482 483 # Set the result's name after __finalize__ is called because __finalize__ 484 # would set it back to self.name 485 out.name = name 486 return out 487 488 489 def _arith_method_SERIES(cls, op, special): 490 """ 491 Wrapper function for Series arithmetic operations, to avoid 492 code duplication. 
493 """ 494 str_rep = _get_opstr(op) 495 op_name = _get_op_name(op, special) 496 497 @unpack_zerodim_and_defer(op_name) 498 def wrapper(left, right): 499 500 left, right = _align_method_SERIES(left, right) 501 res_name = get_op_result_name(left, right) 502 503 lvalues = extract_array(left, extract_numpy=True) 504 rvalues = extract_array(right, extract_numpy=True) 505 result = arithmetic_op(lvalues, rvalues, op, str_rep) 506 507 return _construct_result(left, result, index=left.index, name=res_name) 508 509 wrapper.__name__ = op_name 510 return wrapper 511 512 513 def _comp_method_SERIES(cls, op, special): 514 """ 515 Wrapper function for Series arithmetic operations, to avoid 516 code duplication. 517 """ 518 op_name = _get_op_name(op, special) 519 520 @unpack_zerodim_and_defer(op_name) 521 def wrapper(self, other): 522 523 res_name = get_op_result_name(self, other) 524 525 if isinstance(other, ABCSeries) and not self._indexed_same(other): 526 raise ValueError("Can only compare identically-labeled Series objects") 527 528 lvalues = extract_array(self, extract_numpy=True) 529 rvalues = extract_array(other, extract_numpy=True) 530 531 res_values = comparison_op(lvalues, rvalues, op) 532 533 return _construct_result(self, res_values, index=self.index, name=res_name) 534 535 wrapper.__name__ = op_name 536 return wrapper 537 538 539 def _bool_method_SERIES(cls, op, special): 540 """ 541 Wrapper function for Series arithmetic operations, to avoid 542 code duplication. 543 """ 544 op_name = _get_op_name(op, special) 545 546 @unpack_zerodim_and_defer(op_name) 547 def wrapper(self, other): 548 self, other = _align_method_SERIES(self, other, align_asobject=True) 549 res_name = get_op_result_name(self, other) 550 551 lvalues = extract_array(self, extract_numpy=True) 552 rvalues = extract_array(other, extract_numpy=True) 553 554 res_values = logical_op(lvalues, rvalues, op) 555 return _construct_result(self, res_values, index=self.index, name=res_name) 556 557 wrapper.__name__ = op_name 558 return wrapper 559 560 561 def _flex_method_SERIES(cls, op, special): 562 name = _get_op_name(op, special) 563 doc = _make_flex_doc(name, "series") 564 565 @Appender(doc) 566 def flex_wrapper(self, other, level=None, fill_value=None, axis=0): 567 # validate axis 568 if axis is not None: 569 self._get_axis_number(axis) 570 571 if isinstance(other, ABCSeries): 572 return self._binop(other, op, level=level, fill_value=fill_value) 573 elif isinstance(other, (np.ndarray, list, tuple)): 574 if len(other) != len(self): 575 raise ValueError("Lengths must be equal") 576 other = self._constructor(other, self.index) 577 return self._binop(other, op, level=level, fill_value=fill_value) 578 else: 579 if fill_value is not None: 580 self = self.fillna(fill_value) 581 582 return op(self, other) 583 584 flex_wrapper.__name__ = name 585 return flex_wrapper 586 587 588 # ----------------------------------------------------------------------------- 589 # DataFrame 590 591 592 def _combine_series_frame(left, right, func, axis: int): 593 """ 594 Apply binary operator `func` to self, other using alignment and fill 595 conventions determined by the axis argument. 596 597 Parameters 598 ---------- 599 left : DataFrame 600 right : Series 601 func : binary operator 602 axis : {0, 1} 603 604 Returns 605 ------- 606 result : DataFrame 607 """ 608 # We assume that self.align(other, ...) 
has already been called 609 if axis == 0: 610 new_data = left._combine_match_index(right, func) 611 else: 612 new_data = dispatch_to_series(left, right, func, axis="columns") 613 614 return left._construct_result(new_data) 615 616 617 def _align_method_FRAME( 618 left, right, axis, flex: Optional[bool] = False, level: Level = None 619 ): 620 """ 621 Convert rhs to meet lhs dims if input is list, tuple or np.ndarray. 622 623 Parameters 624 ---------- 625 left : DataFrame 626 right : Any 627 axis: int, str, or None 628 flex: bool or None, default False 629 Whether this is a flex op, in which case we reindex. 630 None indicates not to check for alignment. 631 level : int or level name, default None 632 633 Returns 634 ------- 635 left : DataFrame 636 right : Any 637 """ 638 639 def to_series(right): 640 msg = "Unable to coerce to Series, length must be {req_len}: given {given_len}" 641 if axis is not None and left._get_axis_name(axis) == "index": 642 if len(left.index) != len(right): 643 raise ValueError( 644 msg.format(req_len=len(left.index), given_len=len(right)) 645 ) 646 right = left._constructor_sliced(right, index=left.index) 647 else: 648 if len(left.columns) != len(right): 649 raise ValueError( 650 msg.format(req_len=len(left.columns), given_len=len(right)) 651 ) 652 right = left._constructor_sliced(right, index=left.columns) 653 return right 654 655 if isinstance(right, np.ndarray): 656 657 if right.ndim == 1: 658 right = to_series(right) 659 660 elif right.ndim == 2: 661 if right.shape == left.shape: 662 right = left._constructor(right, index=left.index, columns=left.columns) 663 664 elif right.shape[0] == left.shape[0] and right.shape[1] == 1: 665 # Broadcast across columns 666 right = np.broadcast_to(right, left.shape) 667 right = left._constructor(right, index=left.index, columns=left.columns) 668 669 elif right.shape[1] == left.shape[1] and right.shape[0] == 1: 670 # Broadcast along rows 671 right = to_series(right[0, :]) 672 673 else: 674 raise ValueError( 675 "Unable to coerce to DataFrame, shape " 676 f"must be {left.shape}: given {right.shape}" 677 ) 678 679 elif right.ndim > 2: 680 raise ValueError( 681 f"Unable to coerce to Series/DataFrame, dim must be <= 2: {right.shape}" 682 ) 683 684 elif is_list_like(right) and not isinstance(right, (ABCSeries, ABCDataFrame)): 685 # GH17901 686 right = to_series(right) 687 688 if flex is not None and isinstance(right, ABCDataFrame): 689 if not left._indexed_same(right): 690 if flex: 691 left, right = left.align(right, join="outer", level=level, copy=False) 692 else: 693 raise ValueError( 694 "Can only compare identically-labeled DataFrame objects" 695 ) 696 elif isinstance(right, ABCSeries): 697 # axis=1 is default for DataFrame-with-Series op 698 axis = left._get_axis_number(axis) if axis is not None else 1 699 left, right = left.align( 700 right, join="outer", axis=axis, level=level, copy=False 701 ) 702 703 return left, right 704 705 706 def _arith_method_FRAME(cls, op, special): 707 str_rep = _get_opstr(op) 708 op_name = _get_op_name(op, special) 709 default_axis = _get_frame_op_default_axis(op_name) 710 711 na_op = define_na_arithmetic_op(op, str_rep) 712 is_logical = str_rep in ["&", "|", "^"] 713 714 if op_name in _op_descriptions: 715 # i.e. 
include "add" but not "__add__" 716 doc = _make_flex_doc(op_name, "dataframe") 717 else: 718 doc = _arith_doc_FRAME % op_name 719 720 @Appender(doc) 721 def f(self, other, axis=default_axis, level=None, fill_value=None): 722 723 self, other = _align_method_FRAME(self, other, axis, flex=True, level=level) 724 725 if isinstance(other, ABCDataFrame): 726 # Another DataFrame 727 pass_op = op if should_series_dispatch(self, other, op) else na_op 728 pass_op = pass_op if not is_logical else op 729 730 new_data = self._combine_frame(other, pass_op, fill_value) 731 return self._construct_result(new_data) 732 733 elif isinstance(other, ABCSeries): 734 # For these values of `axis`, we end up dispatching to Series op, 735 # so do not want the masked op. 736 pass_op = op if axis in [0, "columns", None] else na_op 737 pass_op = pass_op if not is_logical else op 738 739 if fill_value is not None: 740 raise NotImplementedError(f"fill_value {fill_value} not supported.") 741 742 axis = self._get_axis_number(axis) if axis is not None else 1 743 return _combine_series_frame(self, other, pass_op, axis=axis) 744 else: 745 # in this case we always have `np.ndim(other) == 0` 746 if fill_value is not None: 747 self = self.fillna(fill_value) 748 749 new_data = dispatch_to_series(self, other, op, str_rep) 750 return self._construct_result(new_data) 751 752 f.__name__ = op_name 753 754 return f 755 756 757 def _flex_comp_method_FRAME(cls, op, special): 758 str_rep = _get_opstr(op) 759 op_name = _get_op_name(op, special) 760 default_axis = _get_frame_op_default_axis(op_name) 761 762 doc = _flex_comp_doc_FRAME.format( 763 op_name=op_name, desc=_op_descriptions[op_name]["desc"] 764 ) 765 766 @Appender(doc) 767 def f(self, other, axis=default_axis, level=None): 768 769 self, other = _align_method_FRAME(self, other, axis, flex=True, level=level) 770 771 if isinstance(other, ABCDataFrame): 772 # Another DataFrame 773 new_data = dispatch_to_series(self, other, op, str_rep) 774 return self._construct_result(new_data) 775 776 elif isinstance(other, ABCSeries): 777 axis = self._get_axis_number(axis) if axis is not None else 1 778 return _combine_series_frame(self, other, op, axis=axis) 779 else: 780 # in this case we always have `np.ndim(other) == 0` 781 new_data = dispatch_to_series(self, other, op) 782 return self._construct_result(new_data) 783 784 f.__name__ = op_name 785 786 return f 787 788 789 def _comp_method_FRAME(cls, op, special): 790 str_rep = _get_opstr(op) 791 op_name = _get_op_name(op, special) 792 793 @Appender(f"Wrapper for comparison method {op_name}") 794 def f(self, other): 795 796 self, other = _align_method_FRAME( 797 self, other, axis=None, level=None, flex=False 798 ) 799 800 if isinstance(other, ABCDataFrame): 801 # Another DataFrame 802 new_data = dispatch_to_series(self, other, op, str_rep) 803 804 elif isinstance(other, ABCSeries): 805 new_data = dispatch_to_series(self, other, op, axis="columns") 806 807 else: 808 809 # straight boolean comparisons we want to allow all columns 810 # (regardless of dtype to pass thru) See #4537 for discussion. 811 new_data = dispatch_to_series(self, other, op) 812 813 return self._construct_result(new_data) 814 815 f.__name__ = op_name 816 817 return f 818 [end of pandas/core/ops/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. 
<patch> --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + points.append((x, y)) return points </patch>
pandas-dev/pandas
7b0887c2ea7255139be1cc16a179e0b4574384d2
REGR: AssertionError when subtracting Timestamp-valued DataFrames with non-indentical column index ```python import pandas as pd df = pd.DataFrame( { "foo": [pd.Timestamp("2019"), pd.Timestamp("2020")], "bar": [pd.Timestamp("2018"), pd.Timestamp("2021")], } ) df2 = df[["foo"]] print(df - df2) ``` #### Problem description The above snippet raises the following exception: ``` Traceback (most recent call last): File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 149, in na_arithmetic_op result = expressions.evaluate(op, str_rep, left, right) File ".v env/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 208, in evaluate return _evaluate(op, op_str, a, b) File ".venv/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 70, in _evaluate_standard return op(a, b) File ".venv/lib/python3.6/site-packages/pandas/core/ops/common.py", line 64, in new_method return method(self, other) File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 500, in wrapper result = arithmetic_op(lvalues, rvalues, op, str_rep) File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 192, in arithmetic_op res_values = dispatch_to_extension_op(op, lvalues, rvalues) File ".venv/lib/python3.6/site-packages/pandas/core/ops/dispatch.py", line 125, in dispatch_to_extension_op res_values = op(left, right) File ".venv/lib/python3.6/site-packages/pandas/core/arrays/datetimelike.py", line 1390, in __rsub__ f"cannot subtract {type(self).__name__} from {type(other).__name__}" TypeError: cannot subtract DatetimeArray from ndarray During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pandas_bug.py", line 36, in <module> print(df2 - df) File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 703, in f new_data = left._combine_frame(right, pass_op, fill_value) File ".venv/lib/python3.6/site-packages/pandas/core/frame.py", line 5297, in _combine_frame new_data = ops.dispatch_to_series(self, other, _arith_op) File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 416, in dispatch_to_series new_data = expressions.evaluate(column_op, str_rep, left, right) File ".venv/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 208, in evaluate return _evaluate(op, op_str, a, b) File ".venv/lib/python3.6/site-packages/pandas/core/computation/expressions.py", line 70, in _evaluate_standard return op(a, b) File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 385, in column_op return {i: func(a.iloc[:, i], b.iloc[:, i]) for i in range(len(a.columns))} File ".venv/lib/python3.6/site-packages/pandas/core/ops/__init__.py", line 385, in <dictcomp> return {i: func(a.iloc[:, i], b.iloc[:, i]) for i in range(len(a.columns))} File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 121, in na_op return na_arithmetic_op(x, y, op, str_rep) File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 151, in na_arithmetic_op result = masked_arith_op(left, right, op) File ".venv/lib/python3.6/site-packages/pandas/core/ops/array_ops.py", line 75, in masked_arith_op assert isinstance(x, np.ndarray), type(x) ``` This is a 1.0.0 regression; in 0.25.3, the operation succeeds and the unmatched `bar` column is filled with `NaN` in the output. 
The same error occurs with:

* Any combination of incompatible columns (strict subset, strict superset, overlapping, disjoint)
* Calling the `subtract` method instead of using the subtraction operator
* Timezone-aware `Timestamp`s as well as timezone-naive

It does *not* seem to occur with:

* Mismatches on the row index; transposing the dataframes in the above example prevents the errors occurring.
* `pd.Series` objects with mismatched indexes (e.g. calling the above on the first row of each dataframe works fine)
* Other dtypes; `bool`, `float`, and `int` seem to work fine. Similarly, if the dataframes are explicitly cast to dtype `object`, the operation succeeds.

#### Expected Output

```
   bar    foo
0  NaN 0 days
1  NaN 0 days
```

#### Output of ``pd.show_versions()``

<details>

```
INSTALLED VERSIONS
------------------
commit           : None
python           : 3.6.8.final.0
python-bits      : 64
OS               : Linux
OS-release       : 4.15.0-74-generic
machine          : x86_64
processor        : x86_64
byteorder        : little
LC_ALL           : None
LANG             : en_GB.UTF-8
LOCALE           : en_GB.UTF-8

pandas           : 1.0.0
numpy            : 1.18.1
pytz             : 2019.3
dateutil         : 2.8.1
pip              : 19.3.1
setuptools       : 41.6.0
Cython           : None
pytest           : 5.3.5
hypothesis       : None
sphinx           : None
blosc            : None
feather          : None
xlsxwriter       : None
lxml.etree       : None
html5lib         : None
pymysql          : None
psycopg2         : None
jinja2           : None
IPython          : None
pandas_datareader: None
bs4              : None
bottleneck       : None
fastparquet      : None
gcsfs            : None
lxml.etree       : None
matplotlib       : None
numexpr          : None
odfpy            : None
openpyxl         : None
pandas_gbq       : None
pyarrow          : None
pytables         : None
pytest           : 5.3.5
pyxlsb           : None
s3fs             : None
scipy            : 1.4.1
sqlalchemy       : None
tables           : None
tabulate         : None
xarray           : None
xlrd             : None
xlwt              : None
xlsxwriter       : None
numba            : None
```

</details>
Thanks for the report. The NaNs are introduced in https://github.com/pandas-dev/pandas/blob/a2721fd602e43128314d4efd056dae56a89197bf/pandas/core/ops/__init__.py#L725, which calls DataFrame.align. I wonder, should this be changed?

```python
In [6]: df.align(df2)[1]
Out[6]:
   bar        foo
0  NaN 2019-01-01
1  NaN 2020-01-01
```

to have `bar` be datetime64[ns] dtype, to match the left? cc @jbrockmendel.

I'll look at this today.

So this is pretty ugly, but one option that tentatively works is to patch ops._arith_method_FRAME so that we only operate on shared columns, then reindex the result. Might actually improve perf for cases where we have very few shared columns.

That seems reasonable. The alternative is to ensure that the correct `fill_value` is used in align, which seems difficult since we'd potentially have different fill values for different columns / dtypes.

Is that likely to cause issues with methods like `DataFrame.add`? I forget whether the fill_value from add is done before or after the op.

> The alternative is to ensure that the correct fill_value is used in align, which seems difficult since we'd potentially have different fill values for different columns / dtypes.

Yeah, it would also depend on op, which would become a nightmare. I'll put up a proof of concept in a bit.
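The "operate only on shared columns, then reindex" idea discussed above can be illustrated outside of pandas internals. The following is a minimal sketch of that approach, not the actual pandas implementation; the helper name `subtract_with_reindex` is hypothetical and only mirrors the idea described by the maintainers.

```python
import pandas as pd


def subtract_with_reindex(left: pd.DataFrame, right: pd.DataFrame) -> pd.DataFrame:
    # Operate only on the columns both frames share, so the element-wise
    # subtraction never sees the all-NaN columns that DataFrame.align
    # would otherwise introduce ...
    shared = left.columns.intersection(right.columns)
    result = left[shared] - right[shared]
    # ... then reindex to the union of both column indexes so unmatched
    # columns come back as NaN, matching the 0.25.3 behaviour.
    return result.reindex(columns=left.columns.union(right.columns))


df = pd.DataFrame(
    {
        "foo": [pd.Timestamp("2019"), pd.Timestamp("2020")],
        "bar": [pd.Timestamp("2018"), pd.Timestamp("2021")],
    }
)
df2 = df[["foo"]]
print(subtract_with_reindex(df, df2))
# roughly:
#   bar    foo
# 0 NaN 0 days
# 1 NaN 0 days
```

Because the subtraction only ever runs on well-typed, shared datetime columns, the mismatched block that triggered the `masked_arith_op` assertion in the reported traceback is never created.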
2020-02-05T00:55:52Z
<patch>
diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -19,6 +19,7 @@ Fixed regressions
 - Fixed regression in :meth:`Series.align` when ``other`` is a DataFrame and ``method`` is not None (:issue:`31785`)
 - Fixed regression in :meth:`pandas.core.groupby.RollingGroupby.apply` where the ``raw`` parameter was ignored (:issue:`31754`)
 - Fixed regression in :meth:`rolling(..).corr() <pandas.core.window.Rolling.corr>` when using a time offset (:issue:`31789`)
+- Fixed regression in :class:`DataFrame` arithmetic operations with mis-matched columns (:issue:`31623`)
 -
 
 .. ---------------------------------------------------------------------------
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -5,7 +5,7 @@
 """
 import datetime
 import operator
-from typing import Optional, Set, Tuple, Union
+from typing import TYPE_CHECKING, Optional, Set, Tuple, Union
 
 import numpy as np
 
@@ -61,6 +61,9 @@
     rxor,
 )
 
+if TYPE_CHECKING:
+    from pandas import DataFrame  # noqa:F401
+
 # -----------------------------------------------------------------------------
 # constants
 ARITHMETIC_BINOPS: Set[str] = {
@@ -703,6 +706,58 @@ def to_series(right):
     return left, right
 
 
+def _should_reindex_frame_op(
+    left: "DataFrame", right, axis, default_axis: int, fill_value, level
+) -> bool:
+    """
+    Check if this is an operation between DataFrames that will need to reindex.
+    """
+    assert isinstance(left, ABCDataFrame)
+
+    if not isinstance(right, ABCDataFrame):
+        return False
+
+    if fill_value is None and level is None and axis is default_axis:
+        # TODO: any other cases we should handle here?
+        cols = left.columns.intersection(right.columns)
+        if not (cols.equals(left.columns) and cols.equals(right.columns)):
+            return True
+
+    return False
+
+
+def _frame_arith_method_with_reindex(
+    left: "DataFrame", right: "DataFrame", op
+) -> "DataFrame":
+    """
+    For DataFrame-with-DataFrame operations that require reindexing,
+    operate only on shared columns, then reindex.
+
+    Parameters
+    ----------
+    left : DataFrame
+    right : DataFrame
+    op : binary operator
+
+    Returns
+    -------
+    DataFrame
+    """
+    # GH#31623, only operate on shared columns
+    cols = left.columns.intersection(right.columns)
+
+    new_left = left[cols]
+    new_right = right[cols]
+    result = op(new_left, new_right)
+
+    # Do the join on the columns instead of using _align_method_FRAME
+    # to avoid constructing two potentially large/sparse DataFrames
+    join_columns, _, _ = left.columns.join(
+        right.columns, how="outer", level=None, return_indexers=True
+    )
+    return result.reindex(join_columns, axis=1)
+
+
 def _arith_method_FRAME(cls, op, special):
     str_rep = _get_opstr(op)
     op_name = _get_op_name(op, special)
@@ -720,6 +775,9 @@ def _arith_method_FRAME(cls, op, special):
 
     @Appender(doc)
     def f(self, other, axis=default_axis, level=None, fill_value=None):
+        if _should_reindex_frame_op(self, other, axis, default_axis, fill_value, level):
+            return _frame_arith_method_with_reindex(self, other, op)
+
         self, other = _align_method_FRAME(self, other, axis, flex=True, level=level)
 
         if isinstance(other, ABCDataFrame):
</patch>
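The behaviour the patch above aims for can be sanity-checked with the reporter's example. This is a sketch assuming a pandas build that already contains the fix (the whatsnew entry targets 1.0.2); on 1.0.0 the same expression raises the AssertionError from the report.

```python
import pandas as pd

df = pd.DataFrame(
    {
        "foo": [pd.Timestamp("2019"), pd.Timestamp("2020")],
        "bar": [pd.Timestamp("2018"), pd.Timestamp("2021")],
    }
)
df2 = df[["foo"]]

result = df - df2  # no longer raises once the fix is in place
assert result["bar"].isna().all()                # unmatched column comes back as NaN
assert (result["foo"] == pd.Timedelta(0)).all()  # shared column: Timestamp - Timestamp = 0 days
print(result)
```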
[]
[]